Publications

32 Results

Qualitative human reliability analysis-informed insights on cask drops

10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010

Brewer, Jeffrey D.; Hendrickson, Stacey M.; Boring, Ronald L.; Cooper, Susan E.

Human Reliability Analysis (HRA) methods have been developed primarily to provide information for use in probabilistic risk assessments analyzing nuclear power plant (NPP) operations. Despite this historical focus on the control room, there has been growing interest in applying HRA methods to other NPP activities, such as dry cask storage operations (DCSOs) in which spent fuel is transferred into dry cask storage systems. This paper describes a successful application of aspects of the "A Technique for Human Event Analysis" (ATHEANA) HRA approach [1, 2] in performing qualitative HRA activities that generated insights on the potential for dropping a spent fuel cask during DCSOs. The paper describes the process followed during the analysis and the human failure event (HFE) scenario groupings, discusses inferred human performance vulnerabilities, and presents a detailed examination of one HFE scenario along with illustrative approaches for avoiding or mitigating human performance vulnerabilities that may contribute to dropping a spent fuel cask.


Multidimensional confusability matrices enhance systematic analysis of unsafe actions and human failure events considered in PSAs of nuclear power plants

Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management, PSAM 2006

Brewer, Jeffrey D.

In conducting a probabilistic safety assessment (PSA) of a nuclear power plant, it is important to identify unsafe actions (UAs) and human failure events (HFEs) that can lead to or exacerbate conditions during a range of incidents initiated by internal or external events. Identification and analysis of UAs and HFEs during a human reliability analysis can be a daunting process that often depends entirely on subject matter experts attempting to divine a list of plant conditions and performance shaping factors (PSFs) that may influence incident outcomes. Key to including the most important UAs and resulting HFEs is to speculate upon deviations of specific circumstances from a base-case definition of a scenario that may present confusion regarding system diagnosis and appropriate actions (i.e., due to procedures, training, informal rules, etc.). Intuiting the location and impact of such system weaknesses is challenging, and careful organization of the analyst's approach to this process is critical for defending any argument for completeness of the analysis. Two-dimensional distinguishability-confusability matrices were introduced as a tool to test symbol distinguishability for information displays. This paper expands on that tool by presenting multidimensional confusability matrices (MCMs) as a pragmatic aid for organizing the process of combining expert judgment regarding system weaknesses, human performance, and highly targeted experimentation in a manner that strengthens the quantitative justification for why particular UAs and HFEs were incorporated into a PSA. The approach also strengthens the justification for the specific likelihood determinations (i.e., human error probabilities) that are ultimately inserted into a probabilistic risk assessment (PRA) or other numerical description of system safety. The paper first introduces the MCM approach and then applies it to a set of hypothetical loss of coolant accidents (LOCAs) for which a detailed human reliability analysis is desired. The basic structure of the MCM approach involves showing how actual plant states map to the information available to the operators, and then how that information maps to operator diagnoses and responses. Finally, actual plant states are mapped to operator performance; each mapping is shown to vary along temporally grounded levels of dominant PSFs (e.g., stress, time available, procedures, training, etc.). The MCM facilitates comprehensive analysis of the critical signals and information guiding operator diagnoses and actions. Particular manipulations of plant states, available information, PSFs, and resulting operator performance may be gathered experimentally using targeted simulator studies, tabletop exercises with operators, or thought experiments among analysts. It is suggested that targeted simulator studies will provide the best quantitative mappings across the surfaces generated using the MCMs and the best aid to uncovering unanticipated pieces of 'critical' information used by operators. Details of quantifying overall operator performance using the MCM technique are provided. It is important to note that the MCM tool should be considered neutral regarding the issue of so-called 'reductionist' HRA methods (e.g., THERP-type) versus 'holistic' HRA methods (e.g., ATHEANA, MERMOS). If the analysts support 'reductionist' approaches, the MCM will represent more of a traditional interval-type, quantitative response surface in their analysis (i.e., more quantitative resolution and generalizability). If the analysis team places more emphasis on 'holistic' approaches, the MCM will represent more of a nominal cataloging or ordinal ranking of factors influencing their specific analysis. In both types of analyses, the MCM tool helps in organizing, documenting, and facilitating quantification of expert judgments and, when resources allow, targeted experimental data to support human reliability analyses. © 2006 by ASME.
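
To make the structure of the MCM approach concrete, the following sketch (Python with NumPy; the plant states, cues, and probabilities are purely hypothetical, and the paper itself prescribes no particular implementation) encodes the two mappings described in the abstract, state to available information and information to diagnosis, at two levels of a single dominant PSF, and chains them to estimate the probability of a correct diagnosis for each actual plant state.

import numpy as np

# Hypothetical example: 3 actual plant states, 3 information cues, 3 operator
# diagnoses, and 2 levels of a single dominant PSF (low vs. high stress).
states    = ["small LOCA", "medium LOCA", "SGTR"]
cues      = ["pressure drop", "sump level rise", "radiation alarm"]
diagnoses = ["small LOCA", "medium LOCA", "SGTR"]

# Mapping 1: P(cue dominates the display | actual state); rows = states, cols = cues.
p_cue_given_state = np.array([
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],
    [0.2, 0.1, 0.7],
])

# Mapping 2: P(diagnosis | cue), one matrix per PSF level; rows = cues,
# cols = diagnoses. Under high stress the rows flatten, i.e., cues become
# more confusable.
p_diag_given_cue = {
    "low stress":  np.array([[0.8, 0.15, 0.05],
                             [0.2, 0.7,  0.1 ],
                             [0.1, 0.1,  0.8 ]]),
    "high stress": np.array([[0.6, 0.25, 0.15],
                             [0.3, 0.5,  0.2 ],
                             [0.2, 0.2,  0.6 ]]),
}

for psf, m in p_diag_given_cue.items():
    # Chain the mappings, assuming diagnosis depends on the state only through
    # the cue: P(diagnosis | state) = sum over cues of the two conditionals.
    p_diag_given_state = p_cue_given_state @ m
    correct = np.diag(p_diag_given_state)  # diagnosis index matches state index
    for s, p in zip(states, correct):
        print(f"{psf:11s}  {s:12s}  P(correct diagnosis) = {p:.2f}")

Adding further PSF dimensions (time available, procedures, training) extends the same arrays into the multidimensional response surfaces the paper describes, and the entries can be populated from simulator studies, tabletop exercises, or structured expert judgment as the abstract suggests.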


Risk perception and strategic decision making: A new framework for understanding and mitigating biases with examples tailored to the nuclear power industry

Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management, PSAM 2006

Brewer, Jeffrey D.

As the economic and environmental arguments for increasing the use of nuclear energy for electricity generation and hydrogen production strengthen, it becomes important to better understand the human biases, critical thinking skills, and individual-specific characteristics that influence decisions made during probabilistic safety assessments (PSAs), decisions about nuclear energy among the general public (e.g., trust of risk assessments, acceptance of new plants, etc.), and nuclear energy decisions made by high-level decision makers (e.g., energy policy makers and government regulators). To promote increased understanding, and hopefully to improve decision making capacities, this paper provides four key elements, built on decades of research and associated experimental data regarding risk perception and decision making. The first element is a unique taxonomy of twenty-six recognized biases. Examples of the biases were generated by reviewing the relevant literature in nuclear safety, cognitive psychology, economics, science education, and neural science (to name a few) and customizing superficial elements of those examples to the nuclear energy domain. The second element is a listing of ten critical thinking skills (with precise definitions) applicable to risk perception and decision making. Third, three brief hypothetical decision making examples are presented and decomposed relative to the bias framework and critical thinking skill set. The fourth element is a briefly outlined strategy that may enable one to make better decisions in domains that demand careful reflection and strong adherence to the best available data (i.e., avoiding 'unhelpful biases' that conflict with proper interpretation of the available data). The elements summarized in this paper (and additional elements) are available in detail in an unclassified, unlimited-release Sandia National Laboratories report (SAND2005-5730). The proposed taxonomy of biases contains the headings of normative knowledge, availability, and individual-specific biases. Normative knowledge involves a person's skills in combinatorics, probability theory, and statistics. Research has shown that training and experience in these quantitative fields can improve one's ability to accurately determine event likelihoods; those trained in statistics tend to seek appropriate data sources when assessing the frequency and severity of an event. The availability category includes biases that result from the structure of human cognitive machinery. Two examples are the anchoring bias and the retrievability bias. The anchoring bias causes a decision maker to bias subsequent values or items toward the first value or item presented to them. The retrievability bias drives people to believe that values or items that are easier to retrieve from memory are more likely to occur. Individual-specific biases include a particular person's values, personality, interests, group identity, and substantive knowledge (i.e., specific domain knowledge related to the decision to be made). Critical thinking skills are also offered as foundational for competent risk perception and decision making because they can mute the impact of undesirable biases, regulate the application of one's knowledge to a decision, and guide information gathering activities. The list of critical thinking skills presented here was originally articulated by the late Arnold B. Arons, a distinguished physicist and esteemed researcher of learning processes. Finally, in addition to borrowing insights from the literature domains mentioned above, the formal decision making approach supported in this paper incorporates methods used in multi-attribute utility theory. © 2006 by ASME.
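
To make the value of that normative knowledge concrete, the short computation below (Python; the alarm reliability and base-rate numbers are invented for illustration and do not appear in the paper) shows the classic base-rate effect that formal training in probability theory guards against: an individually reliable alarm for a rare event is still usually a false alarm.

# Hypothetical illustration of base-rate neglect, the kind of error that
# training in probability theory guards against. Suppose a rare-event alarm:
p_event       = 0.001   # prior probability the event is actually occurring
p_alarm_hit   = 0.99    # P(alarm | event)     -- sensitivity
p_alarm_false = 0.02    # P(alarm | no event)  -- false-alarm rate

# Bayes' theorem: P(event | alarm)
p_alarm = p_alarm_hit * p_event + p_alarm_false * (1 - p_event)
p_event_given_alarm = p_alarm_hit * p_event / p_alarm

print(f"P(event | alarm) = {p_event_given_alarm:.3f}")  # ~= 0.047

Here fewer than 5% of alarms correspond to a real event even though the alarm itself is 99% sensitive; intuition anchored on the 99% figure overestimates the posterior by roughly a factor of twenty.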


Risk perception & strategic decision making: general insights, a framework, and specific application to electricity generation using nuclear energy

Brewer, Jeffrey D.

The objective of this report is to promote increased understanding of decision making processes and, hopefully, to enable improved decision making regarding high-consequence, highly sophisticated technological systems. The report brings together insights regarding risk perception and decision making from domains ranging from nuclear power technology safety, cognitive psychology, and economics to science education, public policy, and neural science (to name a few), and forms them into a unique, coherent, and concise framework and list of strategies to aid decision making. It is suggested that all decision makers, whether ordinary citizens, academics, or political leaders, ought to cultivate their abilities to separate the wheat from the chaff in these types of decision making instances. The wheat includes proper data sources and helpful human decision making heuristics; these should be sought. The chaff includes 'unhelpful biases' that hinder proper interpretation of available data and lead people unwittingly toward inappropriate decision making 'strategies'; obviously, these should be avoided. It is further proposed that successfully separating the wheat from the chaff is very difficult, yet tenable. This report aims to expose, and to facilitate navigation away from, the decision-making traps that often ensnare the unwary. Furthermore, it is emphasized that one's personal decision making biases can be examined, and tools can be provided that offer better means to generate, evaluate, and select among decision options. Many examples in this report are tailored to the energy domain (especially nuclear power for electricity generation), but the decision making framework and approach presented here are applicable to any high-consequence, highly sophisticated technological system.


Final report: mathematical method for quantifying the effectiveness of management strategies

Robinett, R.D.; Brewer, Jeffrey D.

Large complex organizations (e.g., DOE laboratories) must achieve sustained productivity in critical operations (e.g., weapons and reactor development) while maintaining safety for involved personnel, the public, and physical assets, as well as security for property and information. This requires informed management decisions that depend on tradeoffs among factors such as the mode and extent of personnel protection, potential accident consequences, the extent of information and physical asset protection, and communication with and motivation of involved personnel. All of these interact (and potentially interfere) with each other and must be weighed against financial resources and implementation time. Existing risk analysis tools can successfully treat physical response, component failure, and routine human actions; however, many 'soft' factors involving human motivation and interaction among weakly related factors have proved analytically problematic. There has been a need for an effective software tool capable of quantifying these tradeoffs and helping make rational choices. Such a tool, developed during this project, facilitates improvements in safety, security, and productivity, and enables measurement of improvements as a function of resources expended. Operational safety, security, and motivation are significantly influenced by 'latent effects', which are pre-occurring influences. One example is that an atmosphere of excessive fear can suppress open and frank disclosures, which can in turn hide problems, impede correction, and prevent lessons learned. Another is that a cultural mind-set of commitment, self-responsibility, and passion for an activity is a significant contributor to the activity's success. This project pursued an innovative approach for quantitatively analyzing latent effects in order to link the above types of factors, aggregate available information into quantitative metrics that can contribute to strategic management decisions, and measure the results. The approach also evaluates the inherent uncertainties and allows dynamics to be tracked for early response and for assessing developing trends. The model development is based on how factors combine and influence other factors in real time and over extended time periods. Potential strategies for improvement can be simulated and measured, and input information can be determined by quantifying qualitative information in a structured derivation process. This has proved to be a promising new approach for research and development applied to personnel performance and risk management.
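
The project's actual mathematics is not reproduced in this abstract, but the core idea, factors that combine and influence one another in real time and over extended periods, can be sketched with a toy discrete-time influence model (Python with NumPy; the factor names, weights, linear update rule, and time step are all illustrative assumptions, not the project's formulation).

import numpy as np

# Hypothetical latent-effect model: each factor takes a value in [0, 1] and is
# nudged each period by weighted influences from the other factors.
factors = ["fear", "open disclosure", "problem correction", "productivity"]
x = np.array([0.8, 0.5, 0.5, 0.5])  # initial state: elevated fear

# influence[i, j] = per-period effect of factor j on factor i.
influence = np.array([
    [ 0.0,  0.0, -0.2,  0.0],   # correcting problems gradually reduces fear
    [-0.4,  0.0,  0.0,  0.0],   # fear suppresses open disclosure
    [ 0.0,  0.5,  0.0,  0.0],   # disclosure enables problem correction
    [ 0.0,  0.0,  0.4, -0.2],   # correction aids productivity; decay term
])

for t in range(10):
    # Small explicit time step; clip keeps every factor in [0, 1].
    x = np.clip(x + 0.1 * influence @ x, 0.0, 1.0)
    print(t, dict(zip(factors, np.round(x, 2))))

In such a sketch, a candidate management strategy would be simulated by changing the initial state or the influence weights and comparing the resulting trajectories, which is the kind of what-if measurement the abstract describes.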


Multi-attribute criteria applied to electric generation energy system analysis LDRD

Tatro, Marjorie L.; Drennen, Thomas E.; Tsao, Jeffrey Y.; Kuswa, Glenn W.; Valdez, Maximo M.; Brewer, Jeffrey D.; Zuffranieri, Jason Z.

This report grew out of a Laboratory-Directed Research and Development (LDRD) project to improve Sandia National Laboratories' multidisciplinary capabilities in energy systems analysis. The aim is to understand how various electricity generating options can best serve needs in the United States. The initial product is documented in a series of white papers spanning a broad range of topics, including the successes and failures of past modeling studies, sustainability, oil dependence, energy security, and nuclear power; summaries of these projects are included here. These projects provided a background and discussion framework for the Energy Systems Analysis LDRD team to carry out an inter-comparison of many of the commonly available electric power sources in present use and of the efforts needed to realize progress on those options. A computer aid has been developed to compare the options based on cost and other attributes such as technological, social, and policy constraints. The team has developed a multi-criteria framework that allows energy options to be compared using a set of metrics applicable across all technologies. This report discusses several evaluation techniques and introduces the set of criteria developed for this LDRD.
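
A minimal version of the kind of multi-criteria comparison described above might look like the following (Python; the options, criteria, weights, and 0-10 scores are placeholders invented for illustration, not values from the LDRD).

# Hypothetical multi-criteria comparison of electricity generation options.
# Each option is scored 0-10 on each criterion; the weights sum to 1.
criteria = {"levelized cost": 0.35, "emissions": 0.25,
            "reliability": 0.25, "public acceptance": 0.15}

options = {
    "nuclear":     {"levelized cost": 6, "emissions": 9,
                    "reliability": 9, "public acceptance": 4},
    "natural gas": {"levelized cost": 8, "emissions": 4,
                    "reliability": 8, "public acceptance": 7},
    "wind":        {"levelized cost": 7, "emissions": 10,
                    "reliability": 4, "public acceptance": 8},
}

for name, scores in options.items():
    # Weighted sum across the common metric set.
    total = sum(w * scores[c] for c, w in criteria.items())
    print(f"{name:12s} weighted score = {total:.2f}")

In practice such an analysis would also test how sensitive the resulting ranking is to the choice of weights, since the weights encode policy judgments rather than measured data.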
