Publications

Results 1–25 of 86

Estimating the Value of Automation for Concentrating Solar Power Industry Operations (Final Report)

McNamara, Laura A.; Brost, Randolph B.; Small, Daniel E.

This report summarizes findings from a small, mixed-method research study examining industry perspectives on the potential for new forms of automation to invigorate the concentrating solar power (CSP) industry. In Fall 2021, the Solar Energy Technologies Office (SETO) of the United States Department of Energy (DOE) funded Sandia National Laboratories to elicit industry stakeholder perspectives on the potential role of automated systems in CSP operations. We interviewed eleven CSP professionals from five countries, using a combination of structured and open-comment response modes. Respondents indicated a preference for automated systems that support heliostat manufacturing and installation, calibration, and responsiveness to shifting weather conditions. This pilot study demonstrates the importance of engaging industry stakeholders in discussions of technology research and development to promote adoptable, useful innovation.

More Details

Challenges in Eye Tracking for Dynamic User-Driven Workflows

McNamara, Laura A.; Divis, Kristin; Morrow, James D.; Chen, Maximillian G.; Perkins, David P.

This three-year Laboratory Directed Research and Development (LDRD) project developed a prototype data collection system and analysis techniques to enable the measurement and analysis of user-driven dynamic workflows. Over three years, our team developed software, algorithms, and analysis techniques to explore the feasibility of capturing and automatically associating eye tracking data with geospatial content in a user-directed, dynamic visual search task. Although this was a small LDRD, we demonstrated the feasibility of automatically capturing, associating, and expressing gaze events in terms of geospatial image coordinates, even as the human "analyst" is given complete freedom to manipulate the stimulus image during a visual search task. This report describes the problem under examination, our approach, the techniques and software we developed, key achievements, ideas that did not work as we had hoped, and unsolved problems we hope to tackle in future projects.
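
As a rough illustration of the coordinate bookkeeping this kind of system has to perform, the sketch below (plain Python, not the project's actual software; all names and the simple pan/zoom viewport model are assumptions) re-expresses a gaze sample recorded in screen pixels first as stimulus-image pixel coordinates and then as geospatial map coordinates via a GDAL-style affine geotransform.

# Minimal sketch, assuming the viewer applies a uniform zoom and a pan offset
# to the stimulus image; none of these names come from the report.

from dataclasses import dataclass

@dataclass
class ViewportState:
    """Pan/zoom state of the image viewer at the moment of the gaze sample."""
    zoom: float        # screen pixels per image pixel
    offset_x: float    # image x-coordinate shown at the left edge of the screen
    offset_y: float    # image y-coordinate shown at the top edge of the screen

def gaze_to_image(gx: float, gy: float, view: ViewportState) -> tuple[float, float]:
    """Invert the viewer's pan/zoom transform: screen pixels -> image pixels."""
    return view.offset_x + gx / view.zoom, view.offset_y + gy / view.zoom

def image_to_geo(ix: float, iy: float, gt: tuple[float, ...]) -> tuple[float, float]:
    """Apply a GDAL-style 6-term affine geotransform: image pixels -> map coordinates."""
    x0, dx, rx, y0, ry, dy = gt
    return x0 + ix * dx + iy * rx, y0 + ix * ry + iy * dy

# Example: a gaze point at screen (640, 360) while the analyst is zoomed in 2x
# and panned so that image pixel (1500, 900) sits at the screen's top-left corner.
view = ViewportState(zoom=2.0, offset_x=1500.0, offset_y=900.0)
ix, iy = gaze_to_image(640.0, 360.0, view)      # -> (1820.0, 1080.0) in image pixels
lon, lat = image_to_geo(ix, iy, (-106.6, 1e-4, 0.0, 35.1, 0.0, -1e-4))
print(ix, iy, lon, lat)

Because the viewport state changes continuously as the analyst pans and zooms, each gaze sample would be paired with the viewport state logged at the same timestamp before the transform is applied.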

More Details

Feature Selection and Inferential Procedures for Video Data

Chen, Maximillian G.; Bapst, Aleksander B.; Busche, Kirk B.; Do, Minh D.; Matzen, Laura E.; McNamara, Laura A.; Yeh, Raymond Y.

With the rise of electronic and high-dimensional data, new and innovative feature detection and statistical methods are required to perform accurate and meaningful analysis of datasets that pose unique statistical challenges. In the area of feature detection, much recent research in the computer vision community has focused on deep learning methods, which require large amounts of labeled training data. However, in many application areas, training data is very limited and often difficult to obtain. We develop methods for fast, unsupervised, precise feature detection for video data based on optical flow, edge detection, and clustering methods. We also use pretrained neural networks and interpretable linear models to extract features using very limited training data. In the area of statistics, while high-dimensional data analysis has been a main focus of recent statistical methodological research, much of that focus has been on populations of high-dimensional vectors rather than populations of high-dimensional tensors, which are three-dimensional arrays that can be used to model dependent images, such as images taken of the same person or frames extracted from video. Our feature detection method is a non-model-based method that fuses information from dense optical flow, raw image pixels, and frame differences to generate detections. Our hypothesis testing methods are based on the assumption that dependent images are concatenated into a tensor that follows a tensor normal distribution; from this assumption, we derive likelihood-ratio, score, and regression-based tests for one- and multiple-sample testing problems. We illustrate our methods on simulated and real datasets. We conclude this report with comments on the relationship between feature detection and hypothesis testing methods. Acknowledgements: This work was funded by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) program.
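
As a rough illustration only, the sketch below shows one plausible reading of the unsupervised detection step described above: dense Farneback optical flow and simple frame differences are fused into a motion map, and the resulting moving pixels are clustered into candidate detections. The specific thresholds, the equal weighting, and the choice of DBSCAN are assumptions, not the report's implementation.

# Hedged sketch of flow-plus-difference detection fusion; parameters are illustrative.

import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_moving_objects(prev_bgr, curr_bgr, motion_thresh=0.5, eps=5, min_samples=20):
    prev = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Dense optical flow (Farneback) between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    flow_mag = np.linalg.norm(flow, axis=2)

    # Simple frame difference on the raw pixels.
    frame_diff = cv2.absdiff(curr, prev).astype(np.float32) / 255.0

    # Fuse the two motion cues (equal weights, purely for illustration).
    motion = 0.5 * (flow_mag / (flow_mag.max() + 1e-6)) + 0.5 * frame_diff

    # Cluster the moving pixels into candidate detections.
    ys, xs = np.nonzero(motion > motion_thresh)
    if len(xs) == 0:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(
        np.column_stack([xs, ys]))

    boxes = []
    for lbl in set(labels) - {-1}:               # -1 marks DBSCAN noise points
        cx, cy = xs[labels == lbl], ys[labels == lbl]
        boxes.append((cx.min(), cy.min(), cx.max(), cy.max()))  # x0, y0, x1, y1
    return boxes

The bounding boxes produced this way could then be treated as the detected features feeding the downstream hypothesis tests.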

More Details

Does this interface make my sensor look bad? Basic principles for designing usable, useful interfaces for sensor technology operators

Proceedings of SPIE - The International Society for Optical Engineering

McNamara, Laura A.; Berg, Leif; Butler, Karin B.; Klein, Laura M.

Even as remote sensing technology has advanced in leaps and bounds over the past decade, the remote sensing community lacks interfaces and interaction models that facilitate effective human operation of our sensor platforms. Interfaces that make great sense to electrical engineers and flight test crews can be anxiety-inducing for operational users who lack professional experience in the design and testing of sophisticated remote sensing platforms. In this paper, we reflect on an 18-month collaboration in which our Sandia National Laboratories research team partnered with an industry software team to identify and fix critical issues in a widely used sensor interface. Drawing on basic principles from cognitive and perceptual psychology and interaction design, we provide simple, easily learned guidance for minimizing common barriers to system learnability, memorability, and user engagement.

More Details

The need for separate operational and engineering user interfaces for command and control of airborne synthetic aperture radar (SAR) sensors

Proceedings of SPIE - The International Society for Optical Engineering

Klein, Laura M.; McNamara, Laura A.

In this paper, we address the components needed to create usable engineering and operational user interfaces (UIs) for airborne Synthetic Aperture Radar (SAR) systems. As airborne SAR technology gains wider acceptance in the remote sensing and Intelligence, Surveillance, and Reconnaissance (ISR) communities, the need for effective and appropriate UIs to command and control these sensors has also increased. However, despite the growing demand for SAR in operational environments, the technology still faces an adoption roadblock, in large part due to the lack of effective UIs. It is common to find operational interfaces that have barely grown beyond the disparate tools engineers and technologists developed to demonstrate an initial concept or system. While sensor usability and utility are common requirements for both engineers and operators, their objectives for interacting with the sensor differ. As such, the amount and type of information presented ought to be tailored to the specific application.

More Details

Eye tracking for dynamic, user-driven workflows

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

McNamara, Laura A.; Divis, Kristin M.; Morrow, J.D.; Perkins, David

Researchers at Sandia National Laboratories in Albuquerque, New Mexico, are engaged in the empirical study of human-information interaction in high-consequence national security environments. This focus emerged from our longstanding interactions with military and civilian intelligence analysts working across a broad array of domains, from signals intelligence to cybersecurity to geospatial imagery analysis. In this paper, we discuss how several years of work with Synthetic Aperture Radar (SAR) imagery analysts revealed the limitations of existing eye tracking systems for capturing gaze events in the dynamic, user-driven problem-solving strategies characteristic of geospatial analytic workflows, and why inductive study of those workflows requires new eye tracking capabilities. We then discuss an ongoing project in which we are leveraging some of the unique properties of SAR image products to develop a prototype eye tracking data collection and analysis system that will support inductive studies of visual workflows in SAR image analysis environments.

More Details