Human Reliability Analysis in the Electric Grid Domain
Abstract not provided.
The cybersecurity research community has focused primarily on the analysis and automation of intrusion detection systems by examining network traffic behaviors. Expanding on this expertise, advanced cyber defense analysis is turning to host-based data for research and development of the next generation of network defense tools. Deep packet inspection of network traffic is increasingly difficult as most boundary network traffic moves to HTTPS. Additionally, network data alone does not provide a full picture of end-to-end activity. These factors necessitate examining other data sources, such as host data. We outline our investigation into the processing, formatting, and storing of the data, along with preliminary results from our exploratory data analysis. In writing this report, our goal is to guide future research by providing a foundational understanding of an area of cybersecurity that is rich with complex, categorical, and sparse data and a strong human influence component, and by suggesting potential directions for future research.
Abstract not provided.
Chemical Engineering Transactions
Malicious cyber-attacks are becoming increasingly prominent due to the advance of technology and attack methods over the last decade. These attacks have the potential to bring down critical infrastructures, such as nuclear power plants (NPPs), which are so vital to the country that their incapacitation would have debilitating effects on national security, public health, or safety. Despite the devastating effects a cyber-attack could have on NPPs, it is unclear how control room operations would be affected in such a situation. In this project, the authors are collaborating with NPP operators to discern the impact of cyber-attacks on control room operations and lay out a framework to better understand the control room operators’ tasks and decision points. A cyber emulation of a digital control system was developed and coupled with a generic pressurized water reactor (GPWR) training simulator at Idaho National Laboratory. Licensed operators were asked to complete a series of scenarios on the simulator, some of which were purposely obfuscated; that is, indicators were purposely displaying inaccurate information. Of interest is how this obfuscation impacts the ability to keep the plant safe and how it affects operators’ perceptions of workload and performance. Results, conclusions, and lessons learned from this pilot experiment will be discussed. This research sheds light on how cyber events impact plant operations.
Abstract not provided.
There are differences in how cyber-attack, sabotage, or discrete component failure mechanisms manifest within power plants and what these events would look like within the control room from an operator's perspective. This research focuses on understanding how a cyber event would affect the operation of the plant, how an operator would perceive the event, and if the operator's actions based on those perceptions will allow him/her to maintain plant safety. This research is funded as part of Sandia's Laboratory Directed Research and Development (LDRD) program to develop scenarios with cyber-induced failure of plant systems coupled with a generic pressurized water reactor plant training simulator. The cyber scenarios were developed separately and injected into the simulator operational state to simulate an attack. These scenarios will determine if Nuclear Power Plant (NPP) operators can 1) recognize that the control room indicators were presenting incorrect or erroneous information and 2) take appropriate actions to keep the plant safe. This will also provide the opportunity to assess operator cognitive workload during such events and identify where improvements might be made. This paper will review results of a pilot study run with NPP operators to investigate performance under various cyber scenarios. The discussion will provide an overview of the approach, scenario selection, metrics captured, and resulting insights into operator actions and plant response to multiple scenarios of the NPP system.
Abstract not provided.
Advances in Intelligent Systems and Computing
Malicious cyber-attacks are becoming increasingly prominent due to the advance of technology and methods over the last decade. These attacks have the potential to bring down critical infrastructures, such as nuclear power plants (NPPs), which are so vital to the country that their incapacitation would have debilitating effects on national security, public health, or safety. Despite the devastating effects a cyber-attack could have on NPPs, there is a lack of understanding as to the effects on the plant from a discrete failure or surreptitious sabotage of components and a lack of knowledge of how the control room operators would react to such a situation. In this project, the authors are collaborating with NPP operators to discern the impact of cyber-attacks on control room operations and lay out a framework to better understand the control room operators’ tasks and decision points.
Abstract not provided.
10th International Topical Meeting on Nuclear Plant Instrumentation, Control, and Human-Machine Interface Technologies, NPIC and HMIT 2017
There are gaps in understanding how a cyber-attack would manifest itself within power plants and what these events would look like within the control room from an operator’s perspective. This is especially true for nuclear power plants, where safety has much broader consequences than in non-nuclear plants. The operating and emergency procedures that operators currently use are likely inadequate for targeted cyber-attacks. This research focuses on understanding how a cyber event would affect the operation of the plant, how an operator would perceive the event, and if the operator’s actions would keep the plant in a safe condition. This research is part of Sandia’s Laboratory Directed Research and Development program where a nuclear power plant cyber model of the control system digital architecture is coupled with a generic pressurized water reactor plant training simulator. Cyber event scenarios will be performed on the coupled system with plant operators. The scenarios simulate plant conditions that may exist during a cyber-attack, component failure, or insider sabotage, and provide an understanding of the displayed information and the actual plant conditions. These scenarios will determine if plant operators can 1) recognize that they are under cyber-attack and 2) take appropriate actions to keep the plant safe. This will also provide the opportunity to assess the operator cognitive workload during such events and identify where improvements might be made. Experiments with nuclear power plant operators will be carried out over FY 2018 and results of the research are expected by the end of FY 2018.
Advances in Intelligent Systems and Computing
Electric distribution utilities are on the brink of a paradigm shift to smart grids, which will incorporate new technologies and fundamentally change control room operations. Expertise in the control room, which has never been well defined, must be characterized in order to understand how this shift will impact control room operations and operator performance. In this study, the authors collaborated with a utility company in Vermont to define and understand expertise in distribution control room operations. The authors interviewed distribution control room operators, HR personnel, and managers and concluded that a control room expert is someone who has 7–9 years’ experience in the control room and possesses certain traits, such as the ability to remain calm under pressure, effectively multi-task and quickly synthesize large amounts of data. This work has implications for control room operator training and how expertise is defined in the control room domain.
Memory and Cognition
There is a great deal of debate concerning the benefits of working memory (WM) training and whether that training can transfer to other tasks. Although a consistent finding is that WM training programs elicit a short-term near-transfer effect (i.e., improvement in WM skills), results are inconsistent when considering persistence of such improvement and far transfer effects. In this study, we compared three groups of participants: a group that received WM training, a group that received training on how to use a mental imagery memory strategy, and a control group that received no training. Although the WM training group improved on the trained task, their posttraining performance on nontrained WM tasks did not differ from that of the other two groups. In addition, although the imagery training group’s performance on a recognition memory task increased after training, the WM training group’s performance on the task decreased after training. Participants’ descriptions of the strategies they used to remember the studied items indicated that WM training may lead people to adopt memory strategies that are less effective for other types of memory tasks. These results indicate that WM training may have unintended consequences for other types of memory performance.
Electricity Journal
Abstract not provided.
The transformation of the distribution grid from a centralized to decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While these changes are largely beneficial, the interface between grid operators and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that for the foreseeable future will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.
Abstract not provided.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
‘Big data’ is a phrase that has gained much traction recently. It has been defined as ‘a broad term for data sets so large or complex that traditional data processing applications are inadequate and there are challenges with analysis, searching and visualization’ [1]. Many domains struggle with providing experts accurate visualizations of massive data sets so that the experts can understand and make decisions about the data, e.g., [2, 3, 4, 5]. Abductive reasoning is the process of forming a conclusion that best explains observed facts, and this type of reasoning plays an important role in process and product engineering. Throughout a production lifecycle, engineers will test subsystems for critical functions and use the test results to diagnose and improve production processes. This paper describes a value-driven evaluation study [7] for expert analyst interactions with big data in a complex visual abductive reasoning task. Participants were asked to perform different tasks using a new tool while eye-tracking data of their interactions with the tool was collected. The participants were also asked to give feedback and assessments regarding the usability of the tool. The results showed that the interactive nature of the new tool allowed the participants to gain new insights into their data sets, and all participants indicated that they would begin using the tool in its current state.
Abstract not provided.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Researchers at Sandia National Laboratories are integrating qualitative and quantitative methods from anthropology, human factors and cognitive psychology in the study of military and civilian intelligence analyst workflows in the United States’ national security community. Researchers who study human work processes often use qualitative theory and methods, including grounded theory, cognitive work analysis, and ethnography, to generate rich descriptive models of human behavior in context. In contrast, experimental psychologists typically do not receive training in qualitative induction, nor are they likely to practice ethnographic methods in their work, since experimental psychology tends to emphasize generalizability and quantitative hypothesis testing over qualitative description. However, qualitative frameworks and methods from anthropology, sociology, and human factors can play an important role in enhancing the ecological validity of experimental research designs.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Vision is one of the dominant human senses and most human-computer interfaces rely heavily on the capabilities of the human visual system. An enormous amount of effort is devoted to finding ways to visualize information so that humans can understand and make sense of it. By studying how professionals engage in these visual search tasks, we can develop insights into their cognitive processes and the influence of experience on those processes. This can advance our understanding of visual cognition in addition to providing information that can be applied to designing improved data visualizations or training new analysts. In this study, we investigated the role of expertise on performance in a Synthetic Aperture Radar (SAR) target detection task. SAR imagery differs substantially from optical imagery, making it a useful domain for investigating expert-novice differences. The participants in this study included professional SAR imagery analysts, radar engineers with experience working with SAR imagery, and novices who had little or no prior exposure to SAR imagery. Participants from all three groups completed a domain-specific visual search task in which they searched for targets within pairs of SAR images. They also completed a battery of domain-general visual search and cognitive tasks that measured factors such as mental rotation ability, spatial working memory, and useful field of view. The results revealed marked differences between the professional imagery analysts and the other groups, both for the domain-specific task and for some domain-general tasks. These results indicate that experience with visual search in non-optical imagery can influence performance in other domains.
Procedia Manufacturing
Electric distribution utilities, the companies that feed electricity to end users, are overseeing a technological transformation of their networks, installing sensors and other automated equipment, that are fundamentally changing the way the grid operates. These grid modernization efforts will allow utilities to incorporate some of the newer technology available to the home user – such as solar panels and electric cars – which will result in a bi-directional flow of energy and information. How will this new flow of information affect control room operations? How will the increased automation associated with smart grid technologies influence control room operators’ decisions? And how will changes in control room operations and operator decision making impact grid resilience? These questions have not been thoroughly studied, despite the enormous changes that are taking place. In this study, which involved collaborating with utility companies in the state of Vermont, the authors proposed to advance the science of control-room decision making by understanding the impact of distribution grid modernization on operator performance. Distribution control room operators were interviewed to understand daily tasks and decisions and to gain an understanding of how these impending changes will impact control room operations. Situation awareness was found to be a major contributor to successful control room operations. However, the impact of growing levels of automation due to smart grid technology on operators’ situation awareness is not well understood. Future work includes performing a naturalistic field study in which operator situation awareness will be measured in real-time during normal operations and correlated with the technological changes that are underway. The results of this future study will inform tools and strategies that will help system operators adapt to a changing grid, respond to critical incidents and maintain critical performance skills.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
The potential for bias to affect the results of knowledge elicitation studies is well recognized. Researchers and knowledge engineers attempt to control for bias through careful selection of elicitation and analysis methods. Recently, the development of a wide range of physiological sensors, coupled with fast, portable and inexpensive computing platforms, has added an additional dimension of objective measurement that can reduce bias effects. In the case of an abductive reasoning task, bias can be introduced through design of the stimuli, cues from researchers, or omissions by the experts. We describe a knowledge elicitation methodology robust to various sources of bias, incorporating objective and cross-referenced measurements. The methodology was applied in a study of engineers who use multivariate time series data to diagnose the performance of devices throughout the production lifecycle. For visual reasoning tasks, eye tracking is particularly effective at controlling for biases of omission by providing a record of the subject’s attention allocation.
Procedia Manufacturing
The impact of automation on human performance has been studied by human factors researchers for over 35 years. One unresolved facet of this research is measurement of the level of automation across and within engineered systems. Repeatable methods of observing, measuring and documenting the level of automation are critical to the creation and validation of generalized theories of automation's impact on the reliability and resilience of human-in-the-loop systems. Numerous qualitative scales for measuring automation have been proposed. However, these methods require subjective assessments based on the researcher's knowledge and experience, or expert knowledge elicitation involving highly experienced individuals from each work domain. More recently, quantitative scales have been proposed, but they have yet to be widely adopted, likely due to the difficulty associated with obtaining a sufficient number of empirical measurements from each system component. Our research suggests the need for a quantitative method that enables rapid measurement of a system's level of automation, is applicable across domains, and can be used by human factors practitioners in field studies or by system engineers as part of their technical planning processes. In this paper we present our research methodology and early research results from studies of electricity grid distribution control rooms. Using a system analysis approach based on quantitative measures of level of automation, we provide an illustrative analysis of select grid modernization efforts. This measure of the level of automation can be displayed as either a static, historical view of the system's automation dynamics (the dynamic interplay between human and automation required to maintain system performance) or it can be incorporated into real-time visualization systems already present in control rooms.
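As an illustration of what a rapid quantitative measure of this kind might look like, the sketch below computes a level of automation as the fraction of logged control actions executed by automation rather than a human operator. This is a hypothetical simplification for illustration only, not the measure developed in the study; the log format and the `level_of_automation` function are assumptions.

```python
# Illustrative (not the study's actual measure): level of automation as the
# fraction of logged control actions executed by automation rather than a
# human operator, over some observation window.

def level_of_automation(actions):
    """actions: list of (timestamp, actor) tuples, actor 'auto' or 'human'.

    Returns the fraction of actions taken by automation (0.0 to 1.0).
    """
    if not actions:
        return 0.0
    automated = sum(1 for _, actor in actions if actor == "auto")
    return automated / len(actions)

# Hypothetical control-room action log: three automated actions, one manual.
log = [(0, "auto"), (5, "human"), (7, "auto"), (9, "auto")]
print(level_of_automation(log))  # 0.75
```

A time-windowed version of the same ratio could feed the real-time visualization mentioned above.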
Abstract not provided.
Reliability Engineering and System Safety
In recent years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. This data provides a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
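The kind of Bayesian refinement described here can be illustrated with a conjugate Beta-Binomial update, in which sparse simulator observations shift a nominal HEP toward the observed error rate. This is a minimal sketch under assumed prior parameters and counts, not the SPAR-H-specific methodology from the article.

```python
# Minimal sketch of a Bayesian (Beta-Binomial) update of a human error
# probability (HEP). The Beta prior is chosen so its mean matches a nominal
# HEP from an existing HRA method; all numbers below are hypothetical.

def update_hep(prior_mean, prior_strength, errors, trials):
    """Return the posterior mean HEP after observing simulator data.

    prior_mean     -- nominal HEP from the HRA method (e.g., 0.01)
    prior_strength -- pseudo-count weight given to the prior
    errors, trials -- observed error count and number of simulator trials
    """
    alpha = prior_mean * prior_strength        # prior pseudo-errors
    beta = (1 - prior_mean) * prior_strength   # prior pseudo-successes
    return (alpha + errors) / (alpha + beta + trials)

# Nominal HEP of 0.01 with a moderately informative prior; 2 errors
# observed in 50 sparse simulator trials pull the estimate upward:
posterior = update_hep(prior_mean=0.01, prior_strength=100, errors=2, trials=50)
print(posterior)  # 0.02
```

The prior strength controls how quickly sparse simulator evidence moves the estimate away from the method's nominal value.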
Abstract not provided.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
The purpose of the current study was to analyze the work of imagery analysts associated with Sagebrush, a Synthetic Aperture Radar (SAR) imaging system, using an adapted version of cognitive work analysis (CWA). This was achieved by conducting a work domain analysis (WDA) for the system under consideration. Another purpose of this study was to describe how we adapted the WDA framework to include a sequential component and a means to explicitly represent relationships between components. Lastly, we present a simplified work domain representation that we have found effective in communicating the importance of analysts' adaptive strategies to inform the research strategies of computational science researchers who want to develop useful algorithms, but who have little or no familiarity with sensor data analysis work.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Imagery analysts are given the difficult task of determining, post-hoc, if particular events of importance had occurred, employing Synthetic Aperture Radar (SAR) images, written reports and PowerPoint presentations to make their decision. We were asked to evaluate the current system analysis process and make recommendations for a future temporal geospatial analysis prototype that is envisioned to allow analysts to quickly search for temporal and spatial relationships between image-derived features. As such, we conducted a hierarchical task analysis (HTA; [3], [6]) to understand the analysts' tasks and subtasks. We also implemented a timeline analysis and workload assessment [4] to better understand which tasks were the most time-consuming and perceived as the most effortful. Our results gave the team clear recommendations and requirements for a prototype.
Reliability Engineering and System Safety
Abstract not provided.
Workplace safety has historically been neglected by organizations in order to enhance profitability. Over the past 30 years, safety concerns and attention to safety have increased due to a series of disastrous events occurring across many different industries (e.g., Chernobyl, Upper Big Branch Mine, Davis-Besse). Many organizations have focused on promoting a healthy safety culture as a way to understand past incidents and to prevent future disasters. There is an extensive academic literature devoted to safety culture, and the Department of Energy has also published a significant number of documents related to safety culture. The purpose of the current endeavor was to conduct a review of the safety culture literature in order to understand definitions, methodologies, models, and successful interventions for improving safety culture. After reviewing the literature, we observed four emerging themes. First, it was apparent that although safety culture is a valuable construct, it has some inherent weaknesses. For example, there is no common definition of safety culture and no standard way of assessing the construct. Second, it is apparent that researchers know how to measure particular components of safety culture, with specific focus on individual and organizational factors. Such existing methodologies can be leveraged for future assessments. Third, based on the published literature, the relationship between safety culture and performance is tenuous at best. There are few empirical studies that examine the relationship between safety culture and safety performance metrics. Further, most of these studies do not include a description of the implementation of interventions to improve safety culture, or do not measure the effect of these interventions on safety culture or performance. Fourth, safety culture is best viewed as a dynamic, multi-faceted overall system composed of individual, engineered, and organizational models. By addressing all three components of safety culture, organizations have a better chance of understanding, evaluating, and making positive changes toward safety within their own organizations.
Within cyber security, the human element represents one of the greatest untapped opportunities for increasing the effectiveness of network defenses. However, there has been little research to understand the human dimension in cyber operations. To better understand the needs and priorities for research and development to address these issues, a workshop was conducted August 28-29, 2012 in Washington DC. A synthesis was developed that captured the key issues and associated research questions. Research and development needs were identified that fell into three parallel paths: (1) human factors analysis and scientific studies to establish foundational knowledge concerning factors underlying the performance of cyber defenders; (2) development of models that capture key processes that mediate interactions between defenders, users, adversaries and the public; and (3) development of a multi-purpose test environment for conducting controlled experiments that enables systems and human performance measurement. These research and development investments would transform cyber operations from an art to a science, enabling systems solutions to be engineered to address a range of situations. Organizations would be able to move beyond the current state where key decisions (e.g. personnel assignment) are made on a largely ad hoc basis to a state in which there exist institutionalized processes for assuring the right people are doing the right jobs in the right way. These developments lay the groundwork for emergence of a professional class of cyber defenders with defined roles and career progressions, with higher levels of personnel commitment and retention. Finally, the operational impact would be evident in improved performance, accompanied by a shift to a more proactive response in which defenders have the capacity to exert greater control over the cyber battlespace.
Abstract not provided.
This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.
Communications in Computer and Information Science
Information visualization tools are being promoted to aid decision support. These tools assist in the analysis and comprehension of ambiguous and conflicting data sets. Formal evaluations are necessary to demonstrate the effectiveness of visualization tools, yet conducting these studies is difficult. Objective metrics that allow designers to compare the amount of work required for users to operate a particular interface are lacking. This in turn makes it difficult to compare workload across different interfaces, which is problematic for complicated information visualization and visual analytics packages. We believe that measures of working memory load can provide a more objective and consistent way of assessing visualizations and user interfaces across a range of applications. We present initial findings from a study using measures of working memory load to compare the usability of two graph representations.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
In this paper we analyzed speech communications to determine whether we can differentiate between expert and novice teams based on communication patterns. Two pairs of experts and novices performed numerous test sessions on the E-2 Enhanced Deployable Readiness Trainer (EDRT), a medium-fidelity simulator of the Naval Flight Officer (NFO) stations positioned at the back end of the E-2 Hawkeye. Results indicate that experts and novices can be differentiated based on communication patterns. First, experts and novices differ significantly with regard to the frequency of utterances, with both expert teams making many fewer radio calls than both novice teams. Next, the semantic content of utterances was considered. Using both manual and automated speech-to-text conversion, the resulting text documents were compared. For 7 of 8 subjects, the two most similar subjects (using cosine similarity of term vectors) were in the same category of expertise (novice/expert). This means that the semantic content of utterances by experts was more similar to that of other experts than to that of novices, and vice versa. Finally, using machine learning techniques we constructed a classifier that, given the text of a subject's speech as input, could identify whether the individual was an expert or novice with a very low error rate. By examining the parameters of the machine learning algorithm we were also able to identify terms strongly associated with novices and experts.
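The cosine-similarity comparison of term vectors can be sketched as follows. The transcript snippets and helper functions below are hypothetical, and a real analysis would likely apply term weighting such as TF-IDF rather than raw counts.

```python
# Hypothetical sketch: reduce each subject's transcribed speech to a
# term-frequency vector, then compare subjects by cosine similarity.
from collections import Counter
import math

def term_vector(text):
    """Bag-of-words term-frequency vector for a transcript."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine of the angle between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented transcript fragments, for illustration only:
expert = term_vector("bogey bearing two seven zero range fifteen")
novice = term_vector("um I think there might be a contact out somewhere")

print(cosine_similarity(expert, expert))  # close to 1.0 for identical transcripts
print(cosine_similarity(expert, novice))  # 0.0 here: no shared terms
```

Ranking each subject's nearest neighbors by this similarity is what groups experts with experts and novices with novices.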
Abstract not provided.
Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, the question of how to provide individualized and scenario-specific assessment and feedback to students remains largely an open question. In this work, we follow-up on previous evaluations of the Automated Expert Modeling and Automated Student Evaluation (AEMASE) system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three domain-specific performance metrics.
Training simulators have become increasingly popular tools for instructing humans on performance in complex environments. However, how to provide individualized, scenario-specific assessment and feedback to students remains largely an open question. To maximize training efficiency, new technologies are required that assist instructors in providing individually relevant instruction. Sandia National Laboratories has shown the feasibility of automated performance assessment tools, such as the Sandia-developed Automated Expert Modeling and Student Evaluation (AEMASE) software, through proof-of-concept demonstrations, a pilot study, and an experiment. In the pilot study, the AEMASE system, which automatically assesses student performance based on observed examples of good and bad performance in a given domain, achieved a high degree of agreement with a human grader (89%) in assessing tactical air engagement scenarios. In more recent work, we found that AEMASE achieved a high degree of agreement with human graders (83-99%) for three Navy E-2 domain-relevant performance metrics. The current study provides a rigorous empirical evaluation of the enhanced training effectiveness achievable with this technology. In particular, we assessed whether giving students feedback based on automated metrics would enhance training effectiveness and improve student performance. We trained two groups of employees (differentiated by type of feedback) on a Navy E-2 simulator and assessed their performance on three domain-specific performance metrics. We found that students given feedback via the AEMASE-based debrief tool performed significantly better than students given only instructor feedback on two out of three metrics. Future work will focus on extending these developments for automated assessment of teamwork.
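The agreement figures quoted above (89%, 83-99%) are percent-agreement statistics between an automated grader and a human grader. A minimal sketch of that computation follows; the grade labels and the ten-item example are hypothetical, since the abstract does not describe the grading scale.

```python
def percent_agreement(grades_a, grades_b):
    """Fraction of items on which two graders assign the same grade."""
    if len(grades_a) != len(grades_b):
        raise ValueError("grade lists must be the same length")
    matches = sum(1 for x, y in zip(grades_a, grades_b) if x == y)
    return matches / len(grades_a)

# Hypothetical pass/fail grades on ten scenario events.
human     = ["pass", "pass", "fail", "pass", "fail",
             "pass", "pass", "fail", "pass", "pass"]
automated = ["pass", "pass", "fail", "pass", "pass",
             "pass", "pass", "fail", "pass", "fail"]

print(percent_agreement(human, automated))  # 0.8
```

Percent agreement is the simplest such measure; chance-corrected statistics (e.g., Cohen's kappa) are often reported alongside it when grade categories are imbalanced.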
An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing difficult, real-world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The present experiment extends current findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Employees and contractors at a national security laboratory participated, either in a group setting or individually, in an electronic brainstorm to pose solutions to a 'wickedly' difficult problem. The data demonstrate that (for this design) individuals perform at least as well as groups in producing quantity of electronic ideas, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p<0.05) outperformed the group working together. When idea quality is used as the benchmark of success, these data indicate that work-relevant challenges are better solved by aggregating electronic individual responses rather than electronically convening a group. This research suggests that industrial reliance upon electronic problem-solving groups should be tempered, and large nominal groups might be the more appropriate vehicle for solving wicked corporate issues.
Abstract not provided.
The present paper explores group dynamics and electronic communication, two components of wicked problem solving that are inherent to the national security environment (as well as many other business environments). First, because there can be no "right" answer or solution without first having agreement about the definition of the problem and the social meaning of a "right solution", these problems often fundamentally relate to the social aspects of groups, an area in which much empirical research and application is still needed. Second, as computer networks have been increasingly used to conduct business with decreased costs, increased information accessibility, and rapid document, database, and message exchange, electronic communication enables a new form of problem-solving group that has yet to be well understood, especially as it relates to solving wicked problems.
An experiment is proposed which will compare the effectiveness of individual versus group brainstorming in addressing difficult, real world challenges. Previous research into electronic brainstorming has largely been limited to laboratory experiments using small groups of students answering questions irrelevant to an industrial setting. The proposed experiment attempts to extend current findings to real-world employees and organization-relevant challenges. Our employees will brainstorm ideas over the course of several days, echoing the real-world scenario in an industrial setting. The methodology and hypotheses to be tested are presented along with two questions for the experimental brainstorming sessions. One question has been used in prior work and will allow calibration of the new results with existing work. The second question qualifies as a complicated, perhaps even wickedly hard, question, with relevance to modern management practices.
An experiment was conducted comparing the effectiveness of individual versus group electronic brainstorming in addressing difficult, real-world challenges. While industrial reliance on electronic communications has become ubiquitous, empirical and theoretical understanding of the bounds of its effectiveness has been limited. Previous research using short-term laboratory experiments has engaged small groups of students in answering questions irrelevant to an industrial setting. The current experiment extends these findings beyond the laboratory to larger groups of real-world employees addressing organization-relevant challenges over the course of four days. Findings are twofold. First, the data demonstrate that (for this design) individuals perform at least as well as groups in producing quantity of electronic ideas, regardless of brainstorming duration. However, when judged with respect to quality along three dimensions (originality, feasibility, and effectiveness), the individuals significantly (p<0.05) outperformed the group working together. The theoretical and applied (e.g., cost effectiveness) implications of this finding are discussed. Second, the current experiment yielded several viable solutions to the wickedly difficult problem that was posed.