Publications


Qualitative human reliability analysis-informed insights on cask drops

10th International Conference on Probabilistic Safety Assessment and Management 2010, PSAM 2010

Brewer, Jeffrey D.; Hendrickson, Stacey M.; Boring, Ronald L.; Cooper, Susan E.

Human Reliability Analysis (HRA) methods have been developed primarily to provide information for use in probabilistic risk assessments analyzing nuclear power plant (NPP) operations. Despite this historical focus on the control room, there has been growing interest in applying HRA methods to other NPP activities, such as dry cask storage operations (DCSOs), in which spent fuel is transferred into dry cask storage systems. This paper describes a successful application of aspects of the "A Technique for Human Event Analysis" (ATHEANA) HRA approach [1, 2] in performing qualitative HRA activities that generated insights on the potential for dropping a spent fuel cask during DCSOs. The paper describes the process followed during the analysis and the human failure event (HFE) scenario groupings, discusses inferred human performance vulnerabilities, examines one HFE scenario in detail, and presents illustrative approaches for avoiding or mitigating human performance vulnerabilities that may contribute to dropping a spent fuel cask.


Multidimensional confusability matrices enhance systematic analysis of unsafe actions and human failure events considered in PSAs of nuclear power plants

Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management, PSAM 2006

Brewer, Jeffrey D.

In conducting a probabilistic safety assessment (PSA) of a nuclear power plant, it is important to identify unsafe actions (UAs) and human failure events (HFEs) that can lead to or exacerbate conditions during a range of incidents initiated by internal or external events. Identifying and analyzing UAs and HFEs during a human reliability analysis can be a daunting process that often depends entirely on subject matter experts attempting to divine a list of plant conditions and performance shaping factors (PSFs) that may influence incident outcomes. Key to including the most important UAs and resulting HFEs is speculating on deviations of specific circumstances from a base-case scenario definition that may create confusion regarding system diagnosis and appropriate actions (e.g., due to procedures, training, informal rules, etc.). Intuiting the location and impact of such system weaknesses is challenging, and careful organization of the analyst's approach is critical to defending any argument for the completeness of the analysis.

Two-dimensional distinguishability-confusability matrices were originally introduced as a tool for testing symbol distinguishability in information displays. This paper expands on that tool, presenting multidimensional confusability matrices (MCMs) as a pragmatic aid for organizing the combination of expert judgment regarding system weaknesses, human performance, and highly targeted experimentation in a manner that strengthens the quantitative justification for why particular UAs and HFEs were incorporated into a PSA. The approach also strengthens the justification for the specific likelihood determinations (i.e., human error probabilities) that are ultimately inserted into a probabilistic risk assessment (PRA) or other numerical description of system safety.

The paper first introduces the MCM approach and then applies it to a set of hypothetical loss-of-coolant accidents (LOCAs) for which a detailed human reliability analysis is desired. The basic structure of the MCM approach involves mapping actual plant states to the information available to the operators, and then mapping that information to operator diagnoses and responses. Finally, actual plant states are mapped to operator performance; each mapping is shown to vary along temporally grounded levels of dominant PSFs (e.g., stress, time available, procedures, training, etc.). The MCM facilitates comprehensive analysis of the critical signals and information guiding operator diagnoses and actions. Particular manipulations of plant states, available information, and PSFs, and the resulting operator performance, may be gathered experimentally through targeted simulator studies, tabletop exercises with operators, or thought experiments among analysts. It is suggested that targeted simulator studies will provide the best quantitative mappings across the surfaces generated with the MCMs and the best aid to uncovering unanticipated pieces of 'critical' information used by operators. Details of quantifying overall operator performance using the MCM technique are provided.

It is important to note that the MCM tool should be considered neutral regarding the issue of so-called 'reductionist' HRA methods (e.g., THERP-type) versus 'holistic' HRA methods (e.g., ATHEANA, MERMOS). If the analysts support 'reductionist' approaches, the MCM will represent more of a traditional interval-type, quantitative response surface in their analysis (i.e., more quantitative resolution and generalizability). If the analysis team places more emphasis on 'holistic' approaches, the MCM will represent more of a nominal cataloging or ordinal ranking of the factors influencing their specific analysis. In both types of analyses, the MCM tool helps in organizing, documenting, and facilitating quantification of expert judgments and, when resources allow, targeted experimental data to support human reliability analyses. © 2006 by ASME.
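The chained mapping structure described in this abstract lends itself to a simple numerical illustration. Below is a minimal sketch, not drawn from the paper: the plant states, signals, PSF levels, and all probabilities are hypothetical, chosen only to show how a state-to-information matrix, composed with a PSF-conditioned information-to-diagnosis matrix, yields an estimate of correct-diagnosis likelihood per plant state.

```python
# Hypothetical multidimensional confusability matrix (MCM) sketch.
# States, signals, and probabilities are invented for illustration only.
import numpy as np

states = ["small LOCA", "medium LOCA", "SG tube rupture"]
signals = ["pressure drop", "sump level rise", "SG radiation alarm"]
diagnoses = ["small LOCA", "medium LOCA", "SGTR"]
psf_levels = ["low stress", "high stress"]

# P(signal | state): how each actual plant state presents to the crew.
p_signal_given_state = np.array([
    [0.7, 0.2, 0.1],   # small LOCA
    [0.3, 0.6, 0.1],   # medium LOCA
    [0.1, 0.1, 0.8],   # SG tube rupture
])

# P(diagnosis | signal, PSF), axis order (psf, signal, diagnosis).
# Under high stress the rows flatten: signals are easier to confuse.
p_diag_given_signal = np.array([
    [[0.80, 0.15, 0.05],
     [0.20, 0.70, 0.10],
     [0.05, 0.05, 0.90]],   # low stress
    [[0.60, 0.25, 0.15],
     [0.30, 0.50, 0.20],
     [0.15, 0.15, 0.70]],   # high stress
])

# Chain the mappings: P(diagnosis | state, PSF) = sum over signals.
p_diag_given_state = np.einsum("sj,pjd->psd",
                               p_signal_given_state, p_diag_given_signal)

for p, psf in enumerate(psf_levels):
    for s, state in enumerate(states):
        correct = p_diag_given_state[p, s, s]
        print(f"{psf:11s} | {state:16s} | P(correct diagnosis) = {correct:.2f}")
```

In this toy setup, flattening the diagnosis rows at the high-stress PSF level directly lowers the correct-diagnosis probability for every plant state, mirroring the abstract's point that each mapping varies along the levels of dominant PSFs.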


Risk perception and strategic decision making: A new framework for understanding and mitigating biases with examples tailored to the nuclear power industry

Proceedings of the 8th International Conference on Probabilistic Safety Assessment and Management, PSAM 2006

Brewer, Jeffrey D.

As the economic and environmental arguments for increasing the use of nuclear energy for electricity generation and hydrogen production strengthen, it becomes important to better understand the human biases, critical thinking skills, and individual-specific characteristics that influence decisions made during probabilistic safety assessments (PSAs), decisions regarding nuclear energy among the general public (e.g., trust of risk assessments, acceptance of new plants, etc.), and nuclear energy decisions made by high-level decision makers (e.g., energy policymakers and government regulators). To promote increased understanding and, ideally, to improve decision-making capacity, this paper provides four key elements, each building on decades of research and associated experimental data on risk perception and decision making. The first element is a unique taxonomy of twenty-six recognized biases; examples of the biases were generated by reviewing the relevant literature in nuclear safety, cognitive psychology, economics, science education, and neural science (among other fields) and customizing superficial elements of those examples to the nuclear energy domain. The second element is a list of ten critical thinking skills (with precise definitions) applicable to risk perception and decision making. Third, three brief hypothetical decision-making examples are presented and decomposed relative to the bias framework and critical thinking skill set. The fourth element is a briefly outlined strategy that may enable one to make better decisions in domains that demand careful reflection and strong adherence to the best available data (i.e., avoiding 'unhelpful biases' that conflict with proper interpretation of the available data). The elements concisely summarized in this paper (and additional elements) are available in detail in an unclassified, unlimited-release Sandia National Laboratories report (SAND2005-5730).

The proposed taxonomy of biases comprises three headings: normative knowledge, availability, and individual-specific biases. Normative knowledge involves a person's skills in combinatorics, probability theory, and statistics; research has shown that training and experience in these quantitative fields can improve one's ability to accurately determine event likelihoods, and those trained in statistics tend to seek appropriate data sources when assessing the frequency and severity of an event. The availability category includes biases that result from the structure of human cognitive machinery; two examples are the anchoring bias, which causes a decision maker to skew subsequent values or items toward the first value or item presented, and the retrievability bias, which drives people to believe that values or items that are easier to retrieve from memory are more likely to occur. Individual-specific biases include a particular person's values, personality, interests, group identity, and substantive knowledge (i.e., specific domain knowledge related to the decision to be made).

Critical thinking skills are also offered as foundational for competent risk perception and decision making, as they can mute the impact of undesirable biases, regulate the application of one's knowledge to a decision, and guide information-gathering activities. The list of critical thinking skills presented here was originally articulated by the late Arnold B. Arons, a distinguished physicist and esteemed researcher of learning processes. Finally, in addition to borrowing insights from the literature domains mentioned above, the formal decision-making approach supported in this paper incorporates methods used in multi-attribute utility theory. © 2006 by ASME.
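Since the abstract notes that the decision-making approach incorporates methods from multi-attribute utility theory, a brief worked sketch may help. The following is a generic additive multi-attribute utility calculation, not taken from the paper or the SAND report; the attributes, weights, and scores are hypothetical.

```python
# Additive multi-attribute utility: U(option) = sum_i w_i * u_i(option),
# with weights summing to 1 and each single-attribute utility in [0, 1].
# Attributes, weights, and scores below are hypothetical illustrations.
weights = {"safety": 0.5, "cost": 0.3, "public_acceptance": 0.2}

options = {
    "option A": {"safety": 0.9, "cost": 0.4, "public_acceptance": 0.6},
    "option B": {"safety": 0.7, "cost": 0.8, "public_acceptance": 0.5},
}

def utility(scores: dict[str, float]) -> float:
    """Weighted sum of normalized single-attribute scores."""
    return sum(weights[attr] * scores[attr] for attr in weights)

for name, scores in options.items():
    print(f"{name}: U = {utility(scores):.2f}")
```

Making the weights explicit is what lets such an approach counter the biases cataloged above: the decision maker must justify each weight against the best available data rather than against whatever value was presented, or retrieved from memory, first.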
