The research team developed models of Attentional Control (AC) that are distinct from existing modeling approaches in the literature. The goal was to enable the team to (1) make predictions about AC and human performance in real-world scenarios and (2) make predictions about individual characteristics based on human data. First, the team developed a proof-of-concept approach for representing an experimental design and human-subjects data in a Bayesian model, then demonstrated an ability to draw inferences about conditions of interest relevant to real-world scenarios. This effort was successful: we were able to make reasonable inferences (i.e., inferences supported by behavioral data) about conditions of interest and to develop a risk model for AC, where risk is defined as a mismatch between AC and attentional demand. The team additionally defined a path forward for a human-constrained machine learning (HCML) approach to predicting an individual's state from performance data. The effort represents a successful first step in both modeling efforts, serves as a basis for future work, and has identified numerous opportunities for follow-on activities.
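As a hedged illustration of the Bayesian proof-of-concept described above, the sketch below uses a simple Beta-Binomial model to infer task-performance probability under two attentional-demand conditions. The conditions, trial counts, and uniform prior are invented for illustration and are not the study's actual design or data.

```python
# Minimal Beta-Binomial sketch: posterior over success probability per condition.
# All numbers are hypothetical; a uniform Beta(1, 1) prior is assumed.

def posterior_params(successes, trials, a=1.0, b=1.0):
    """Conjugate update: Beta(a, b) prior + Binomial data -> Beta posterior."""
    return a + successes, b + (trials - successes)

def posterior_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical performance counts: low vs. high attentional demand
low_demand = posterior_params(18, 20)    # 18/20 trials correct
high_demand = posterior_params(11, 20)   # 11/20 trials correct

p_low = posterior_mean(*low_demand)
p_high = posterior_mean(*high_demand)
print(round(p_low, 3), round(p_high, 3))  # 0.864 0.545
```

The gap between the two posterior means is one simple way to quantify a mismatch between AC and attentional demand, i.e., the notion of risk described above.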
To date, disinformation research has focused largely on the production of false information, ignoring the suppression of select information. We term this alternative form of disinformation "information suppression": the withholding of facts with the intent to mislead. To detect information suppression, we focus on understanding the actors who withhold information. In this research, we use knowledge of human behavior to find signatures of different gatekeeping behaviors in text. Specifically, we build a model to classify the different types of edits on Wikipedia using the added text alone, comparing a human-informed feature-engineering approach to a featureless algorithm. The ability to computationally distinguish gatekeeping behaviors is a first step toward identifying when information suppression is occurring.
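As a purely illustrative sketch of the human-informed feature-engineering side of such a comparison, the snippet below extracts a few hand-crafted features from the text added in an edit. The feature set and the sample wikitext are assumptions for illustration, not the study's actual features.

```python
import re

# Hypothetical engineered features over the text added in a Wikipedia edit;
# the feature set is illustrative, not the study's actual feature set.
def edit_features(added_text: str) -> dict:
    tokens = added_text.split()
    return {
        "n_tokens": len(tokens),
        # crude check for citation templates/tags in wikitext
        "has_citation": int(bool(re.search(r"<ref|\{\{cite", added_text, re.I))),
        # internal wiki-links introduced by the edit
        "n_links": added_text.count("[["),
        # share of hedging words, a possible gatekeeping cue
        "hedge_ratio": sum(t.lower() in {"may", "might", "possibly", "reportedly"}
                           for t in tokens) / max(len(tokens), 1),
    }

feats = edit_features("The mayor [[John Doe]] may have approved the plan.{{cite web}}")
print(feats)
```

Features like these can feed a standard classifier, whose performance is then compared against a featureless model trained directly on the raw added text.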
Data science comprises a variety of scientific methods and processes for extracting insight from data drawn from various sources. The integration of interdisciplinary fields such as mathematics, statistics, information science, and computer science affords techniques to analyze large volumes of data, arrive at unique insights, and make data-driven decisions (Sinelnikov et al., 2015) in real time. These techniques lend themselves to applications across many domains, including hazard assessments, analysis of near-miss data, and identification of leading and lagging indicators from past accidents. Benefits include efficiency gains from improved data acquisition. Near-miss data represent an important source for identifying the conditions that lead to accidents and for developing strategies to prevent them. Analysis of near-miss data sets can involve various techniques. This paper explores the use of data science to mine accident reports, with a special emphasis on near misses, to uncover occurrences that were not initially identified in the documentation. Data-science techniques such as text analysis facilitate searching large volumes of data to uncover patterns that support more informed decisions. For near-miss data, these techniques can be used to test both the ability to uncover new hazards or hazardous preconditions and the accuracy of those findings. Alongside the benefits of processing large data sets and uncovering new hazards, we also consider the implications for safety culture.
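A minimal sketch of the text-analysis idea, assuming a hand-built lexicon of near-miss cue phrases; the report excerpts and cue words below are invented for illustration, not real incident records:

```python
import re

# Hypothetical excerpts from incident reports (illustrative, not real records)
reports = [
    "Operator noticed a frayed hoist cable during pre-shift inspection; the load was not lifted.",
    "Forklift nearly struck a pedestrian at the blind corner near dock 3.",
    "Employee sustained a laceration to the left hand while replacing a saw blade.",
    "Worker slipped on hydraulic fluid but caught the handrail; no injury occurred.",
]

# Simple human-informed lexicon of near-miss cue phrases
NEAR_MISS_CUES = re.compile(r"\b(nearly|almost|caught|no injury|not lifted)\b", re.I)

def flag_near_misses(texts):
    """Return indices of reports containing near-miss cue phrases."""
    return [i for i, t in enumerate(texts) if NEAR_MISS_CUES.search(t)]

hits = flag_near_misses(reports)
print(hits)  # [0, 1, 3] -- the third report describes an injury, not a near miss
```

In practice such a lexicon would be only a starting point; flagged reports would be reviewed by analysts, and misclassifications used to refine the cue list or train a statistical classifier.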
There are differences in how cyber-attack, sabotage, or discrete component failure mechanisms manifest within power plants and what these events would look like within the control room from an operator's perspective. This research focuses on understanding how a cyber event would affect the operation of the plant, how an operator would perceive the event, and whether the operator's actions based on those perceptions will allow him/her to maintain plant safety. This research is funded as part of Sandia's Laboratory Directed Research and Development (LDRD) program to develop scenarios with cyber-induced failure of plant systems coupled with a generic pressurized water reactor plant training simulator. The cyber scenarios were developed separately and injected into the simulator operational state to simulate an attack. These scenarios will determine if Nuclear Power Plant (NPP) operators can 1) recognize that the control room indicators were presenting incorrect or erroneous information and 2) take appropriate actions to keep the plant safe. This will also provide the opportunity to assess operator cognitive workload during such events and identify where improvements might be made. This paper will review the results of a pilot study run with NPP operators to investigate performance under various cyber scenarios. The discussion will provide an overview of the approach, scenario selection, metrics captured, and resulting insights into operator actions and plant response across multiple scenarios of the NPP system.
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
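The risk of tail extrapolation from an unvalidated parametric fit can be sketched numerically. In this hypothetical example, data are drawn from a heavy-tailed Pareto distribution, a normal distribution is fit to the sample, and the fitted model's extreme-quantile estimate is compared against the true tail quantile. The distributions, sample size, and percentile are invented for illustration and are unrelated to the project's actual data.

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
# Hypothetical "performance" data drawn from a heavy-tailed Pareto(alpha=3)
data = [random.paretovariate(3.0) for _ in range(200)]

# Parametric assumption: fit a normal distribution to the observed sample
fitted = NormalDist(mean(data), stdev(data))

# Extrapolate to the 99.9th percentile under the (wrong) normal assumption
q_normal = fitted.inv_cdf(0.999)

# True 99.9th percentile of Pareto(alpha=3) with scale 1: (1 - p)^(-1/alpha)
q_true = (1 - 0.999) ** (-1 / 3.0)

# The normal fit matches the bulk of the data yet badly underestimates the tail
print(round(q_normal, 2), round(q_true, 2))
```

The fitted normal reproduces the sample mean and spread, yet its 99.9th-percentile estimate falls well short of the true Pareto quantile (which is 10.0 for these parameters), illustrating why tail extrapolation from an unvalidated model carries substantial risk.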
Cyber defense is an asymmetric battle today, and we need a better understanding of what options are available for providing defenders with possible advantages. Our project combines machine learning, optimization, and game theory to obscure our defensive posture from the information adversaries are able to observe. The main conceptual contribution of this research is to separate the problem of prediction, for which machine learning is used, from the problem of computing optimal operational decisions based on such predictions, coupled with a model of adversarial response. This research includes modeling of the attacker and defender, formulation of useful optimization models for studying adversarial interactions, and user studies to measure the impact of the modeling approaches in realistic settings.
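To make the prediction/decision separation concrete, the toy sketch below computes a defender's optimal mixed strategy for a 2x2 zero-sum game whose loss entries could come from an upstream machine-learned predictor. The payoff numbers and the closed-form equalizing solution are illustrative assumptions, not the project's actual models.

```python
# Toy 2x2 zero-sum game: payoff[i][j] is the defender's expected loss when the
# defender plays posture i and the attacker responds with action j.
# The entries are hypothetical (e.g., produced by an upstream ML predictor).

def defender_mixed_strategy(payoff):
    """Closed-form equalizing strategy for a 2x2 zero-sum loss matrix.

    Returns (p, v): the probability of playing posture 0, and the game value
    (expected loss). Assumes an interior (fully mixed) solution exists.
    """
    (a, b), (c, d) = payoff
    p = (d - c) / ((a - b) - (c - d))  # equalize loss across attacker actions
    v = p * a + (1 - p) * c            # expected loss vs. attacker action 0
    return p, v

p, v = defender_mixed_strategy([[4.0, 1.0], [2.0, 3.0]])
print(p, v)  # 0.25 2.5
```

Because the strategy equalizes the defender's expected loss across both attacker responses, the attacker gains nothing from observing which posture is more likely; randomizing the defensive posture is what obscures it from the adversary.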