Uncertainty Quantification for Multimodal Imagery
This three-year Laboratory Directed Research and Development (LDRD) project developed a prototype data collection system and analysis techniques to enable the measurement and analysis of user-driven dynamic workflows. Over three years, our team developed software, algorithms, and analysis techniques to explore the feasibility of capturing and automatically associating eye tracking data with geospatial content in a user-directed, dynamic visual search task. Although this was a small LDRD, we demonstrated the feasibility of automatically capturing, associating, and expressing gaze events in terms of geospatial image coordinates, even as the human "analyst" is given complete freedom to manipulate the stimulus image during a visual search task. This report describes the problem under examination, our approach, the techniques and software we developed, key achievements, ideas that did not work as we had hoped, and unsolved problems we hope to tackle in future projects.
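The core idea of expressing gaze events in image coordinates despite free pan and zoom can be sketched as inverting the display transform. The `ViewState` fields and function names below are illustrative assumptions, not the project's actual API:

```python
# Hypothetical sketch: map a gaze sample from screen coordinates back to
# geospatial image coordinates while the analyst pans and zooms.
# ViewState and gaze_to_image are invented names for illustration only.
from dataclasses import dataclass

@dataclass
class ViewState:
    zoom: float      # screen pixels per image pixel
    pan_x: float     # image x-coordinate displayed at the screen origin
    pan_y: float     # image y-coordinate displayed at the screen origin

def gaze_to_image(gx: float, gy: float, view: ViewState) -> tuple:
    """Invert the display transform: image = pan + screen / zoom."""
    return (view.pan_x + gx / view.zoom, view.pan_y + gy / view.zoom)

# A fixation at screen (400, 300) while zoomed 2x into a region whose
# upper-left image coordinate is (1000, 500)
print(gaze_to_image(400.0, 300.0, ViewState(zoom=2.0, pan_x=1000.0, pan_y=500.0)))
# -> (1200.0, 650.0)
```

Recording the view state alongside each gaze sample makes the association automatic, regardless of how the analyst manipulates the stimulus.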
Data-driven modeling, including machine learning methods, continues to play an increasing role in society. Data-driven methods impact decision making for applications ranging from everyday determinations about which news people see and control of self-driving cars to high-consequence national security situations related to cyber security and analysis of nuclear weapons reliability. Although modern machine learning methods have made great strides in model induction and show excellent performance in a broad variety of complex domains, uncertainty remains an inherent aspect of any data-driven model. In this report, we provide an update to the preliminary results on uncertainty quantification for machine learning presented in SAND2017-6776. Specifically, we improve upon the general problem definition and expand upon the experiments conducted for the earlier report. Most importantly, we summarize key lessons learned about how and when uncertainty quantification can inform decision making and provide valuable insights into the quality of learned models and potential improvements to them.

Acknowledgements: The authors thank Kristina Czuchlewski, John Feddema, Todd Jones, Chris Young, Rudy Garcia, Rich Field, Ann Speed, Randy Brost, Stephen Dauphin, and countless others for providing helpful discussion and comments throughout the life of this project. This work was funded by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program.
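One common, simple way to attach uncertainty to a data-driven model's predictions is a bootstrap ensemble: refit the model on resampled data and report the spread of predictions. The data, model, and numbers below are an illustrative sketch, not the report's actual experiments:

```python
# Illustrative sketch (not the report's experiments): a bootstrap ensemble
# attaches an uncertainty estimate to a learned model's prediction.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + rng.normal(0, 0.1, size=x.shape)  # noisy linear data

# Fit a line to many bootstrap resamples; the spread of the resulting
# predictions quantifies how much to trust the model at a query point.
preds = []
for _ in range(200):
    idx = rng.integers(0, len(x), len(x))
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(slope * 0.5 + intercept)  # predict at x = 0.5

mean, std = float(np.mean(preds)), float(np.std(preds))
print(f"prediction at x=0.5: {mean:.2f} +/- {std:.2f}")
```

A decision maker can then weigh the prediction against its spread rather than treating the model's output as a point fact.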
In this report, we present preliminary research into nonparametric clustering methods for multi-source imagery data and quantifying the performance of these models. In many domain areas, data sets do not necessarily follow well-defined and well-known probability distributions, such as the normal, gamma, and exponential. This is especially true when combining data from multiple sources describing a common set of objects (which we call multimodal analysis), where the data in each source can follow different distributions and need to be analyzed in conjunction with one another. This necessitates nonparametric density estimation methods, which allow the data to better dictate the distribution of the data. One prominent example of multimodal analysis is multimodal image analysis, in which we analyze multiple images, taken using different radar systems, of the same scene of interest. We develop uncertainty analysis methods, which are inherent in the use of probabilistic models but often not taken advantage of, to assess the performance of probabilistic clustering methods used for analyzing multimodal images. This added information helps assess model performance and how much trust decision-makers should have in the obtained analysis results. The developed methods illustrate some ways in which uncertainty can inform decisions that arise when designing and using machine learning models.

Acknowledgements: This work was funded by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program.
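The uncertainty that probabilistic clustering provides "for free" is the soft membership of each point. As a hedged illustration (the components here are fixed by hand for clarity, whereas the report's nonparametric methods estimate the density from data), the entropy of the membership vector flags points whose cluster assignment is ambiguous:

```python
# Hedged illustration: probabilistic clustering yields soft memberships whose
# entropy quantifies how uncertain each assignment is. Two fixed Gaussian
# components stand in for a fitted mixture.
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def responsibilities(x, components):
    """Posterior probability of each (weight, mu, sigma) component given x."""
    likes = [w * normal_pdf(x, m, s) for w, m, s in components]
    total = sum(likes)
    return [lk / total for lk in likes]

comps = [(0.5, 0.0, 1.0), (0.5, 4.0, 1.0)]  # two equally weighted clusters
for x in (0.0, 2.0, 4.0):
    r = responsibilities(x, comps)
    entropy = -sum(p * math.log(p) for p in r)
    print(f"x={x}: memberships={[round(p, 3) for p in r]}, entropy={entropy:.3f}")
```

A point midway between clusters gets memberships near 0.5/0.5 and maximal entropy, telling a decision maker that its label should carry little weight.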
Proceedings of SPIE - The International Society for Optical Engineering
We discuss uncertainty quantification in multisensor data integration and analysis, including estimation methods and the role of uncertainty in decision making and trust in automated analytics. The challenges associated with automatically aggregating information across multiple images, identifying subtle contextual cues, and detecting small changes in noisy activity patterns are well-established in the intelligence, surveillance, and reconnaissance (ISR) community. In practice, such questions cannot be adequately addressed with discrete counting, hard classifications, or yes/no answers. For a variety of reasons ranging from data quality to modeling assumptions to inadequate definitions of what constitutes "interesting" activity, variability is inherent in the output of automated analytics, yet it is rarely reported. Consideration of these uncertainties can provide nuance to automated analyses and engender trust in their results. In this work, we assert the importance of uncertainty quantification for automated data analytics and outline a research agenda. We begin by defining uncertainty in the context of machine learning and statistical data analysis, identify its sources, and motivate the importance and impact of its quantification. We then illustrate these issues and discuss methods for data-driven uncertainty quantification in the context of a multi-source image analysis example. We conclude by identifying several specific research issues and by discussing the potential long-term implications of uncertainty quantification for data analytics, including sensor tasking and analyst trust in automated analytics.
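The point that such questions cannot be answered with discrete counting or yes/no answers can be made concrete with a small Bayesian sketch. The Poisson model, vague Gamma prior, and counts below are illustrative assumptions, not the paper's analysis:

```python
# Sketch under assumed priors: instead of a hard yes/no on "did activity
# increase?", report the posterior probability that the underlying rate rose,
# given noisy counts from two equal-length observation windows.
import numpy as np

rng = np.random.default_rng(1)
before, after = 12, 21   # e.g., detections in the two windows (made-up numbers)
n = 100_000

# With a vague Gamma prior, the posterior over each Poisson rate is
# approximately Gamma(count + 0.5, scale=1); sample both and compare.
rate_before = rng.gamma(before + 0.5, 1.0, n)
rate_after = rng.gamma(after + 0.5, 1.0, n)
p_increase = float((rate_after > rate_before).mean())
print(f"posterior probability activity increased: {p_increase:.2f}")
```

Reporting a probability near 0.9 rather than "yes" conveys both the finding and its residual uncertainty, which is exactly the nuance the hard answer discards.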
With the rise of electronic and high-dimensional data, new and innovative feature detection and statistical methods are required to perform accurate and meaningful statistical analysis of these datasets, which pose unique statistical challenges. In the area of feature detection, much of the recent feature detection research in the computer vision community has focused on deep learning methods, which require large amounts of labeled training data. However, in many application areas, training data is very limited and often difficult to obtain. We develop methods for fast, unsupervised, precise feature detection for video data based on optical flows, edge detection, and clustering methods. We also use pretrained neural networks and interpretable linear models to extract features using very limited training data. In the area of statistics, while high-dimensional data analysis has been a main focus of recent statistical methodological research, much focus has been on populations of high-dimensional vectors, rather than populations of high-dimensional tensors, which are three-dimensional arrays that can be used to model dependent images, such as images taken of the same person or frames ripped from a video. Our feature detection method is a non-model-based method that fuses information from dense optical flow, raw image pixels, and frame differences to generate detections. Our hypothesis testing methods are based on the assumption that dependent images are concatenated into a tensor that follows a tensor normal distribution, and from this assumption, we derive likelihood-ratio, score, and regression-based tests for one- and multiple-sample testing problems. Our methods are illustrated on simulated and real datasets. We conclude this report with comments on the relationship between feature detection and hypothesis testing methods.

Acknowledgements: This work was funded by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program.
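Of the three information sources the detector fuses, frame differencing is the simplest to sketch. The toy below shows that piece in isolation, on synthetic frames (the full method also uses dense optical flow and raw pixels, which are omitted here):

```python
# Unsupervised detection sketch in the spirit of the report's method, using
# only frame differences on synthetic data; thresholds and sizes are made up.
import numpy as np

def detect_motion(prev, curr, thresh=30):
    """Return a boolean change mask and the centroid of changed pixels, if any."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = diff > thresh
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    return mask, (rows.mean(), cols.mean())

# Synthetic frames: a bright 4x4 "object" moves two pixels to the right
prev = np.zeros((32, 32), dtype=np.uint8)
curr = np.zeros((32, 32), dtype=np.uint8)
prev[10:14, 10:14] = 200
curr[10:14, 12:16] = 200
mask, centroid = detect_motion(prev, curr)
print(mask.sum(), centroid)   # changed-pixel count and their centroid
```

No labeled training data is needed: the detection falls out of the difference between consecutive frames, which is what makes this family of methods attractive when labels are scarce.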
This report summarizes preliminary research into uncertainty quantification for pattern analytics within the context of the Pattern Analytics to Support High-Performance Exploitation and Reasoning (PANTHER) project. The primary focus of PANTHER was to make large quantities of remote sensing data searchable by analysts. The work described in this report adds nuance to both the initial data preparation steps and the search process. Search queries are transformed from "does the specified pattern exist in the data?" to "how certain is the system that the returned results match the query?" We show example results for both data processing and search, and discuss a number of possible improvements for each.
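The transformation from existence queries to certainty-scored queries can be sketched as follows. The candidate names, feature vectors, and softmax-style scoring below are assumptions for illustration, not PANTHER's actual system:

```python
# Illustrative sketch: turn "does the pattern exist?" into "how certain is
# each returned match?" by converting query-to-candidate distances in some
# feature space into normalized certainty scores.
import math

def search_with_certainty(query, candidates, scale=1.0):
    """Rank candidates by similarity and attach a normalized certainty score."""
    dists = {name: math.dist(query, vec) for name, vec in candidates.items()}
    weights = {name: math.exp(-d / scale) for name, d in dists.items()}
    total = sum(weights.values())
    scores = {name: w / total for name, w in weights.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

candidates = {          # made-up sites with 2-D feature vectors
    "site_a": (1.0, 0.1),
    "site_b": (0.9, 0.2),
    "site_c": (5.0, 4.0),
}
results = search_with_certainty((1.0, 0.0), candidates)
for name, score in results:
    print(f"{name}: certainty {score:.2f}")
```

Instead of a bare hit list, the analyst sees that two candidates are nearly indistinguishable matches while the third is effectively ruled out, which is information a hard yes/no search would hide.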