Publications

135 Results

What can simulation test beds teach us about social science? Results of the ground truth program

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Krofcheck, Daniel J.; Warrender, Christina E.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Emery, Ben; Murdock, Jaimie; Bernard, Michael L.; Romero, Vicente J.

The ground truth program used simulations as test beds for social science research methods. The simulations had known ground truth and were capable of producing large amounts of data. This allowed research teams to run experiments and ask questions of these simulations much as social scientists study real-world systems, and enabled robust evaluation of their causal inference, prediction, and prescription capabilities. We tested three hypotheses about research effectiveness using data from the ground truth program, specifically looking at the influence of complexity, causal understanding, and data collection on performance. We found some evidence that system complexity and causal understanding influenced research performance, but no evidence that data availability contributed. The ground truth program may be the first robust coupling of simulation test beds with an experimental framework capable of teasing out factors that determine the success of social science research.

Feedback density and causal complexity of simulation model structure

Journal of Simulation

Naugle, Asmeret B.; Verzi, Stephen J.; Lakkaraju, Kiran L.; Swiler, Laura P.; Warrender, Christina E.; Bernard, Michael L.; Romero, Vicente J.

Measures of simulation model complexity generally focus on outputs; we propose measuring the complexity of a model’s causal structure to gain insight into its fundamental character. This article introduces tools for measuring causal complexity. First, we introduce a method for developing a model’s causal structure diagram, which characterises the causal interactions present in the code. Causal structure diagrams facilitate comparison of simulation models, including those from different paradigms. Next, we develop metrics for evaluating a model’s causal complexity using its causal structure diagram. We discuss cyclomatic complexity as a measure of the intricacy of causal structure and introduce two new metrics that incorporate the concept of feedback, a fundamental component of causal structure. The first new metric introduced here is feedback density, a measure of the cycle-based interconnectedness of causal structure. The second metric combines cyclomatic complexity and feedback density into a comprehensive causal complexity measure. Finally, we demonstrate these complexity metrics on simulation models from multiple paradigms and discuss potential uses and interpretations. These tools enable direct comparison of models across paradigms and provide a mechanism for measuring and discussing complexity based on a model’s fundamental assumptions and design.
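
As a rough illustration of the kind of computation involved (the article develops its own definitions in full; the feedback-density formula below is an assumption for demonstration), a causal structure diagram can be treated as a directed graph and scored as follows:

```python
# Illustrative only: the article defines its own metrics; the feedback
# density used here (fraction of edges on at least one cycle) is an
# assumption for demonstration.
import networkx as nx

def cyclomatic_complexity(g: nx.DiGraph) -> int:
    # McCabe form E - N + 2P, with P = number of weakly connected components
    p = nx.number_weakly_connected_components(g)
    return g.number_of_edges() - g.number_of_nodes() + 2 * p

def feedback_density(g: nx.DiGraph) -> float:
    # Fraction of causal links that participate in at least one feedback loop
    on_cycle = set()
    for cycle in nx.simple_cycles(g):
        on_cycle.update(zip(cycle, cycle[1:] + cycle[:1]))
    return len(on_cycle) / g.number_of_edges() if g.number_of_edges() else 0.0

# Toy causal structure with one feedback loop
g = nx.DiGraph([("births", "population"), ("population", "births"),
                ("population", "deaths")])
print(cyclomatic_complexity(g), feedback_density(g))
```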

Conflicting Information and Compliance With COVID-19 Behavioral Recommendations

Naugle, Asmeret B.; Rothganger, Fredrick R.; Verzi, Stephen J.; Doyle, Casey L.

The prevalence of COVID-19 is shaped by behavioral responses to recommendations and warnings. Available information on the disease determines the population’s perception of danger and thus its behavior; this information changes dynamically, and different sources may report conflicting information. We study the feedback between disease, information, and stay-at-home behavior using a hybrid agent-based/system dynamics model that incorporates evolving trust in sources of information. We use this model to investigate how divergent reporting and conflicting information can alter the trajectory of a public health crisis. The model shows that divergent reporting not only alters disease prevalence over time, but also increases polarization of the population’s behaviors and trust in different sources of information.
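
As a toy illustration of the evolving-trust mechanism described above (the model's actual equations are not reproduced here; the update rule and parameters below are assumptions), an agent might reweight its trust toward sources whose reports matched its local observations:

```python
# Toy trust update; the paper's actual equations are not reproduced here.
import numpy as np

def update_trust(trust, reported, observed, lr=1.0):
    """Shift an agent's trust weights toward sources whose reported
    prevalence matched what the agent observed locally."""
    error = np.abs(reported - observed)      # per-source report error
    trust = trust * np.exp(-lr * error)      # penalize inaccurate sources
    return trust / trust.sum()               # renormalize to a distribution

trust = np.array([0.5, 0.5])                 # equal trust in two sources
trust = update_trust(trust, reported=np.array([0.04, 0.25]), observed=0.05)
print(trust)                                 # trust shifts to the accurate source
```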

MalGen: Malware Generation with Specific Behaviors to Improve Machine Learning-based Detectors

Smith, Michael R.; Carbajal, Armida J.; Domschot, Eva D.; Johnson, Nicholas J.; Goyal, Akul A.; Lamb, Christopher L.; Lubars, Joseph L.; Kegelmeyer, William P.; Krishnakumar, Raga K.; Quynn, Sophie Q.; Ramyaa, Ramyaa R.; Verzi, Stephen J.; Zhou, Xin Z.

In recent years, infections and damage caused by malware have increased at exponential rates. At the same time, machine learning (ML) techniques have shown tremendous promise in many domains, often outperforming human efforts by learning from large amounts of data. Results in the open literature suggest that ML is able to provide similar results for malware detection, achieving greater than 99% classification accuracy [49]. However, the same detection rates have not been achieved in deployed settings. Malware is distinct from many other domains in which ML has shown success in that (1) it purposefully tries to hide, leading to noisy labels, and (2) its behavior is often similar to benign software, differing only in intent, among other complicating factors. This report details the reasons why detecting novel malware with ML methods is difficult and offers solutions to improve the detection of novel malware.

Graph-Based Similarity Metrics for Comparing Simulation Model Causal Structures

Naugle, Asmeret B.; Swiler, Laura P.; Lakkaraju, Kiran L.; Verzi, Stephen J.; Warrender, Christina E.; Romero, Vicente J.

The causal structure of a simulation is a major determinant of both its character and behavior, yet most methods we use to compare simulations focus only on simulation outputs. We introduce a method that combines graphical representation with information theoretic metrics to quantitatively compare the causal structures of models. The method applies to agent-based simulations as well as system dynamics models and facilitates comparison within and between types. Comparing models based on their causal structures can illuminate differences in assumptions made by the models, allowing modelers to (1) better situate their models in the context of existing work, including highlighting novelty, (2) explicitly compare conceptual theory and assumptions to simulated theory and assumptions, and (3) investigate potential causal drivers of divergent behavior between models. We demonstrate the method by comparing two epidemiology models at different levels of aggregation.
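
The abstract does not spell out the specific information-theoretic metric, so the sketch below uses a common stand-in: the Jensen-Shannon distance between the out-degree distributions of two causal graphs.

```python
# Stand-in metric, not the paper's: Jensen-Shannon distance between
# out-degree distributions of two causal graphs.
import networkx as nx
import numpy as np
from scipy.spatial.distance import jensenshannon

def degree_hist(g: nx.DiGraph, max_deg: int) -> np.ndarray:
    counts = np.zeros(max_deg + 1)
    for _, d in g.out_degree():
        counts[d] += 1
    return counts / counts.sum()

def structural_distance(g1: nx.DiGraph, g2: nx.DiGraph) -> float:
    m = max(max(d for _, d in g1.out_degree()),
            max(d for _, d in g2.out_degree()))
    return float(jensenshannon(degree_hist(g1, m), degree_hist(g2, m)))

feedback = nx.DiGraph([(1, 2), (2, 3), (3, 1)])   # cyclic structure
fan_out = nx.DiGraph([(1, 2), (1, 3), (1, 4)])    # hub-and-spoke structure
print(structural_distance(feedback, fan_out))
```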

The Ground Truth Program: Simulations as Test Beds for Social Science Research Methods

Computational and Mathematical Organization Theory

Naugle, Asmeret B.; Russell, Adam R.; Lakkaraju, Kiran L.; Swiler, Laura P.; Verzi, Stephen J.; Romero, Vicente J.

Social systems are uniquely complex and difficult to study, but understanding them is vital to solving the world’s problems. The Ground Truth program developed a new way of testing the research methods that attempt to understand and leverage the Human Domain and its associated complexities. The program developed simulations of social systems as virtual world test beds. Not only were these simulations able to produce data on future states of the system under various circumstances and scenarios, but their causal ground truth was also explicitly known. Research teams studied these virtual worlds, facilitating deep validation of causal inference, prediction, and prescription methods. The Ground Truth program model provides a way to test and validate research methods to an extent previously impossible, and to study the intricacies and interactions of different components of research.

Data Science and Machine Learning for Genome Security

Verzi, Stephen J.; Krishnakumar, Raga K.; Levin, Drew L.; Krofcheck, Daniel J.; Williams, Kelly P.

This report describes research conducted to use data science and machine learning methods to distinguish targeted genome editing from natural mutation and sequencer machine noise. Genome editing capabilities have been around for more than 20 years, and the efficiencies of these techniques have improved dramatically in the last 5+ years, notably with the rise of CRISPR-Cas technology. Whether or not a specific genome has been the target of an edit is a concern for U.S. national security. The research detailed in this report provides first steps to address this concern. A large amount of data was necessary for our research, so we invested considerable time collecting and processing it. We use an ensemble of decision tree and deep neural network machine learning methods, as well as anomaly detection, to detect genome edits given either whole exome or genome DNA reads. The edit detection results we obtained with our algorithms, tested against samples held out during training, are significantly better than random guessing, achieving high F1, recall, and precision scores overall.
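
A minimal sketch of the ensemble idea, assuming feature extraction from DNA reads has already happened upstream; the synthetic features, sizes, and estimators below are illustrative, not the report's pipeline.

```python
# Illustrative ensemble on synthetic features, not the report's pipeline;
# upstream feature extraction from DNA reads is assumed.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("trees", RandomForestClassifier(n_estimators=200)),
                ("net", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500))],
    voting="soft")  # average class probabilities from both learners
ensemble.fit(X_tr, y_tr)
pred = ensemble.predict(X_te)
print(f1_score(y_te, pred), recall_score(y_te, pred))
```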

A Theoretical Approach for Reliability Within Information Supply Chains with Cycles and Negations

IEEE Transactions on Reliability

Livesay, Michael L.; Pless, Daniel J.; Verzi, Stephen J.; Stamber, Kevin L.; Lilje, Anne

Complex networks of information processing systems, or information supply chains, present challenges for performance analysis. We establish a mathematical setting in which a process within an information supply chain can be analyzed in terms of the functionality of the system's components. Principles of this methodology are rigorously defended and induce a model for determining the reliability of the various products in these networks. Our model permits cycles in the network, as long as the cycles do not contain negation. It is shown that our approach to reliability resolves the nonuniqueness caused by cycles in a probabilistic Boolean network. An iterative algorithm is given to find the reliability values of the model, using a process that can be fully automated. This automated method of discerning reliability is beneficial for systems managers. As a systems manager considers systems modification, such as the replacement of owned and maintained hardware systems with cloud computing resources, the need for comparative analysis of system reliability is paramount. The model is extended to handle conditional knowledge about the network, allowing one to make predictions of weaknesses in the system. Finally, to illustrate the model's flexibility over different forms, it is demonstrated on a system of components and subcomponents.

Predictive Data-driven Platform for Subsurface Energy Production

Yoon, Hongkyu Y.; Verzi, Stephen J.; Cauthen, Katherine R.; Musuvathy, Srideep M.; Melander, Darryl J.; Norland, Kyle N.; Morales, Adriana M.; Lee, Jonghyun H.; Sun, Alexander Y.

Subsurface energy activities such as unconventional resource recovery, enhanced geothermal energy systems, and geologic carbon storage require fast and reliable methods to account for complex, multiphysical processes in heterogeneous fractured and porous media. Although reservoir simulation is considered the industry standard for simulating these subsurface systems with injection and/or extraction operations, it requires feeding spatio-temporal “Big Data” into the simulation model, which is typically a major challenge during the model development and computational phases. In this work, we developed and applied various deep neural network-based approaches to (1) process multiscale image segmentation, (2) generate ensemble members of drainage networks, flow channels, and porous media using a deep convolutional generative adversarial network, (3) construct multiple hybrid neural networks, such as convolutional LSTM and convolutional neural network-LSTM, to develop fast and accurate reduced-order models for shale gas extraction, and (4) apply physics-informed neural networks and deep Q-learning to flow and energy production. We hypothesized that physics-based machine learning/deep learning can overcome the shortcomings of traditional machine learning methods, whose data-driven models have faltered beyond the data and physical conditions used for training and validation. We improved and developed novel approaches to demonstrate that physics-based ML allows us to incorporate physical constraints (e.g., scientific domain knowledge) into the ML framework. Outcomes of this project will be readily applicable to many energy and national security problems that are particularly defined by multiscale features and network systems.
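
A compact sketch of the general physics-informed training idea the report describes: a loss that mixes data misfit with a penalty on violations of governing physics. The network, residual function, and weighting below are illustrative assumptions, not the project's models.

```python
# Illustrative physics-informed loss; the residual below is a toy
# "conservation" constraint, not a real governing equation.
import torch

def physics_informed_loss(model, x, y_obs, physics_residual, weight=1.0):
    y_pred = model(x)
    data_loss = torch.mean((y_pred - y_obs) ** 2)             # fit the data
    phys_loss = torch.mean(physics_residual(x, y_pred) ** 2)  # obey physics
    return data_loss + weight * phys_loss

model = torch.nn.Linear(3, 1)
x, y = torch.randn(8, 3), torch.randn(8, 1)
residual = lambda x, y_pred: y_pred.sum(dim=1) - x.sum(dim=1)  # toy constraint
physics_informed_loss(model, x, y, residual).backward()
```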

Advanced Detection of Wellbore Failure for Safe and Secure Utilization of Subsurface Infrastructure

Matteo, Edward N.; Conley, Donald M.; Verzi, Stephen J.; Roberts, Barry L.; Doyle, Casey L.; Sobolik, Steven R.; Gilletly, Samuel G.; Bauer, Stephen J.; Pyrak-Nolte, L.P.; Reda Taha, M.M.; Stormont, J.C.; Crandall, D.C.; Moriarty, Dylan; John, Esther W.; Wilson, Jennifer E.; Bettin, Giorgia B.; Hogancamp, Joshua H.; Fernandez, S.G.; Anwar, I.A.; Abdellatef, M.A.; Murcia, D.H.; Bland, J.B.

The main goal of this project was to create a state-of-the-art predictive capability that screens and identifies wellbores that are at the highest risk of catastrophic failure. This capability is critical to a host of subsurface applications, including gas storage, hydrocarbon extraction and storage, geothermal energy development, and waste disposal, which depend on seal integrity to meet U.S. energy demands in a safe and secure manner. In addition to the screening tool, this project also developed several other supporting capabilities to help understand fundamental processes involved in wellbore failure. This included novel experimental methods to characterize permeability and porosity evolution during compressive failure of cement, as well as methods and capabilities for understanding two-phase flow in damaged wellbore systems, and novel fracture-resistant cements made from recycled fibers.

Emergent Recursive Multiscale Interaction in Complex Systems

Naugle, Asmeret B.; Doyle, Casey L.; Sweitzer, Matthew; Rothganger, Fredrick R.; Verzi, Stephen J.; Lakkaraju, Kiran L.; Kittinger, Robert; Bernard, Michael L.; Chen, Yuguo C.; Loyal, Joshua L.; Mueen, Abdullah M.

This project studied the potential for multiscale group dynamics in complex social systems, including emergent recursive interaction. Current social theory on group formation and interaction focuses on a single scale (individuals forming groups) and is largely qualitative in its explanation of mechanisms. We combined theory, modeling, and data analysis to find evidence that these multiscale phenomena exist, and to investigate their potential consequences and develop predictive capabilities. In this report, we discuss the results of data analysis showing that some group dynamics theory holds at multiple scales. We introduce a new theory on communicative vibration that uses social network dynamics to predict group life cycle events. We discuss a model of behavioral responses to the COVID-19 pandemic that incorporates influence and social pressures. Finally, we discuss a set of modeling techniques that can be used to simulate multiscale group phenomena.

Integrating Machine Learning into a Methodology for Early Detection of Wellbore Failure [Slides]

Matteo, Edward N.; Roberts, Barry L.; Sobolik, Steven R.; Gilletly, Samuel G.; Doyle, Casey L.; John, Esther W.; Verzi, Stephen J.

Approximately 93% of the US total energy supply depends on wellbores in some form. The industry will drill more wells in the next ten years than in the last 100 years (King, 2014). The global well population is around 1.8 million, of which approximately 35% shows some sign of leakage (i.e., sustained casing pressure). Around 5% of offshore oil and gas wells “fail” early, more with age and most with maturity, and 8.9% of “shale gas” wells in the Marcellus play have experienced failure (120 of 1,346 wells drilled in 2012) (Ingraffea et al., 2014). Current methods for identifying wells at highest priority for increased monitoring and/or at highest risk of failure consist of “hand” analysis of multi-arm caliper (MAC) well logging data and geomechanical models. Machine learning (ML) methods are of interest for increasing analysis efficiency and/or enhancing detection of failure precursors (e.g., deformations). MAC datasets were used to train ML algorithms, and preliminary tests for “predicting” casing collar locations performed above 90% in classifying and identifying casing collar locations.
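
As a hypothetical illustration of what "predicting casing collar locations" can look like on caliper data: collars appear as short, regular excursions in a MAC radius trace, so a simple peak detector makes a plausible baseline before any ML labeling. The data and thresholds below are invented.

```python
# Synthetic example: collars appear as short, regular excursions in a
# multi-arm caliper radius trace; a peak detector flags candidate depths.
import numpy as np
from scipy.signal import find_peaks

depth = np.linspace(0, 500, 5000)              # measured depth, m
radius = 0.1 + 0.001 * np.random.randn(5000)   # noisy baseline radius, m
radius[::120] += 0.01                          # synthetic collar signatures

peaks, _ = find_peaks(radius, height=0.105, distance=50)
print(depth[peaks][:5])  # candidate collar depths to feed a classifier
```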

Machine learning application for permeability estimation of three-dimensional rock images

CEUR Workshop Proceedings

Yoon, Hongkyu Y.; Melander, Darryl J.; Verzi, Stephen J.

Estimation of permeability in porous media is fundamental to understanding coupled multi-physics processes critical to various geoscience and environmental applications. Recent emerging machine learning methods with physics-based constraints and/or physical properties can provide a new means to improve computational efficiency while improving machine learning-based prediction by accounting for physical information during training. Here we first used three-dimensional (3D) real rock images to estimate permeability of fractured and porous media using 3D convolutional neural networks (CNNs) coupled with physics-informed pore topology characteristics (e.g., porosity, surface area, connectivity) during the training stage. Training data of permeability were generated using lattice Boltzmann simulations of segmented real rock 3D images. Our preliminary results show that neural network architecture and usage of physical properties strongly impact the accuracy of permeability predictions. In the future we can adjust our methodology to other rock types by choosing the appropriate architecture and proper physical properties, and optimizing the hyperparameters.
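
A minimal PyTorch sketch of the described coupling, assuming the physical properties enter as scalars concatenated with the CNN features before a regression head; layer sizes and the choice of three properties are illustrative, not the paper's architecture.

```python
# Illustrative architecture only; layer sizes and the three physical
# properties (porosity, surface area, connectivity) are assumptions.
import torch
import torch.nn as nn

class PermeabilityNet(nn.Module):
    def __init__(self, n_phys: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(               # encode the 3D rock image
            nn.Conv3d(1, 8, 3, stride=2), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Sequential(              # regress permeability
            nn.Linear(16 + n_phys, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, image, phys):
        feats = self.cnn(image)                          # (batch, 16)
        return self.head(torch.cat([feats, phys], dim=1))

model = PermeabilityNet()
k = model(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 3))
```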

Mind the Gap: On Bridging the Semantic Gap between Machine Learning and Malware Analysis

AISec 2020 - Proceedings of the 13th ACM Workshop on Artificial Intelligence and Security

Smith, Michael R.; Johnson, Nicholas T.; Ingram, Joey; Carbajal, Armida J.; Haus, Bridget I.; Domschot, Eva; Ramyaa, Ramyaa; Lamb, Christopher L.; Verzi, Stephen J.; Kegelmeyer, William P.

Machine learning (ML) techniques are being used to detect increasing amounts of malware and variants. Despite successful applications of ML, we hypothesize that the full potential of ML is not realized in malware analysis (MA) due to a semantic gap between the ML and MA communities, as demonstrated in the data that is used. Due in part to the available data, ML has primarily focused on detection, whereas MA is also interested in identifying behaviors. We review existing open-source malware datasets used in ML and find a lack of behavioral information that could facilitate stronger impact by ML in MA. As a first step in bridging this gap, we label existing data with behavioral information using open-source MA reports: 1) altering the analysis from identifying malware to identifying behaviors, 2) aligning ML better with MA, and 3) allowing ML models to generalize to novel malware in a zero/few-shot learning manner. We classify the behavior of a malware family not seen during training using transfer learning from a state-of-the-art model for malware family classification and achieve 57%-84% accuracy on behavioral identification but fail to outperform the baseline set by a majority class predictor. This highlights opportunities for improvement on this task related to the data representation, the need for malware-specific ML techniques, and a larger training set of malware samples labeled with behaviors.

Permeability prediction of porous media using convolutional neural networks with physical properties

CEUR Workshop Proceedings

Yoon, Hongkyu Y.; Melander, Darryl J.; Verzi, Stephen J.

Permeability prediction of porous media systems is important in many engineering and science domains, including earth materials, bio- and solid materials, and energy applications. In this work we evaluated how machine learning can be used to predict the permeability of porous media with the aid of physical properties. An emerging challenge for machine learning/deep learning in engineering and scientific research is the ability to incorporate physics into the machine learning process. We trained convolutional neural networks (CNNs) on a set of bead-packing image data, with additional physical properties of the porous media, such as porosity and surface area, used as training inputs, either fed directly to the fully connected network or passed through a multilayer perceptron network. Our results clearly show that the choice of neural network architecture and the implementation of physics-informed constraints are important to properly improve the model's prediction of permeability. A comprehensive analysis of hyperparameters with different CNN architectures, and of the data implementation scheme for the physical properties, needs to be performed to optimize our learning system for various porous media systems.

Physical Security Assessment Using Temporal Machine Learning

Proceedings - International Carnahan Conference on Security Technology

Galiardi, Meghan A.; Verzi, Stephen J.; Birch, Gabriel C.; Stubbs, Jaclynn J.; Woo, Bryana L.; Kouhestani, Camron G.

Nuisance and false alarms are prevalent in modern physical security systems and often overwhelm the alarm station operators. Deep learning has shown progress in detection and classification tasks; however, it has rarely been implemented as a solution to reduce the nuisance and false alarm rates in physical security systems. Previous work has shown that transfer learning using a convolutional neural network can benefit physical security systems by achieving high accuracy on physical security targets [10]. We leverage this work by coupling the convolutional neural network, which operates on a frame-by-frame basis, with temporal algorithms that evaluate a sequence of such frames (e.g., video analytics). We discuss several alternatives for performing this temporal analysis, in particular the Long Short-Term Memory and Liquid State Machine, and demonstrate their respective value on exemplar physical security videos. We also outline an architecture for an ensemble learner that leverages the strength of each individual algorithm in its aggregation. The incorporation of these algorithms into physical security systems creates a new paradigm in which we aim to decrease the volume of nuisance and false alarms so that alarm station operators can focus on the most relevant threats.
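
A short sketch of the frame-then-sequence pattern described above: per-frame CNN features feed an LSTM that scores the whole clip. Dimensions and the two-class alarm/nuisance head are assumptions for illustration, not the paper's configuration.

```python
# Illustrative sequence head; feature size, clip length, and the
# two-class output are assumptions.
import torch
import torch.nn as nn

class TemporalAlarmFilter(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # {nuisance, real threat}

    def forward(self, frame_features):     # (batch, time, feat_dim)
        out, _ = self.lstm(frame_features)
        return self.head(out[:, -1])       # score the clip from the last step

clip_feats = torch.randn(4, 30, 128)       # e.g., 30 CNN-encoded frames
logits = TemporalAlarmFilter()(clip_feats)
```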

Integrated Cyber/Physical Grid Resiliency Modeling

Dawson, Lon A.; Verzi, Stephen J.; Levin, Drew L.; Melander, Darryl J.; Sorensen, Asael H.; Cauthen, Katherine R.; Wilches-Bernal, Felipe; Berg, Timothy M.; Lavrova, Olga A.; Guttromson, Ross G.

This project explored coupling modeling and analysis methods from multiple domains to address complex hybrid (cyber and physical) attacks on mission critical infrastructure. Robust methods to integrate these complex systems are necessary to enable large trade-space exploration, including dynamic and evolving cyber threats and mitigations. Reinforcement learning employing deep neural networks, as in the AlphaGo Zero solution, was used to identify "best" (or approximately optimal) resilience strategies for operation of a cyber/physical grid model. A prototype platform was developed, and the machine learning (ML) algorithm was made to play itself in a game of 'Hurt the Grid'. This proof of concept shows that machine learning optimization can help us understand and control a complex, multi-dimensional grid space. A simple yet high-fidelity model proves that the data have spatial correlation, which is necessary for any optimization or control. Our prototype analysis showed that reinforcement learning successfully improved adversary and defender knowledge to manipulate the grid. When expanded to more representative models, this type of machine learning will inform grid operations and defense, supporting mitigation development to defend the grid from complex cyber attacks. This same research can be expanded to similar complex domains.

Computing with spikes: The advantage of fine-grained timing

Neural Computation

Verzi, Stephen J.; Rothganger, Fredrick R.; Parekh, Ojas D.; Quach, Tu-Thach Q.; Miner, Nadine E.; Vineyard, Craig M.; James, Conrad D.; Aimone, James B.

Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum, minimum, or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
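
A toy version of the time-as-value idea, in plain Python rather than the paper's neuromorphic setting: encode each number as a spike time and read values off in arrival order, so sorting falls out of temporal ordering rather than explicit pairwise comparisons.

```python
# Plain-Python toy, not a neuromorphic implementation: each value becomes
# a spike scheduled at t = value, and the event queue delivers spikes in
# time order, which is ascending numeric order.
import heapq

def spike_sort(values):
    events = [(v, i) for i, v in enumerate(values)]  # (spike time, neuron id)
    heapq.heapify(events)
    return [heapq.heappop(events)[0] for _ in range(len(events))]

print(spike_sort([7, 2, 9, 4]))  # [2, 4, 7, 9]
```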

Neural-Inspired Anomaly Detection

Springer Proceedings in Complexity

Verzi, Stephen J.; Vineyard, Craig M.; Aimone, James B.

Anomaly detection is an important problem in various fields of complex systems research including image processing, data analysis, physical security and cybersecurity. In image processing, it is used for removing noise while preserving image quality, and in data analysis, physical security and cybersecurity, it is used to find interesting data points, objects or events in a vast sea of information. Anomaly detection will continue to be an important problem in domains intersecting with “Big Data”. In this paper we provide a novel algorithm for anomaly detection that uses phase-coded spiking neurons as basic computational elements.

A spike-Timing neuromorphic architecture

2017 IEEE International Conference on Rebooting Computing, ICRC 2017 - Proceedings

Hill, Aaron J.; Donaldson, Jonathon W.; Rothganger, Fredrick R.; Vineyard, Craig M.; Follett, David R.; Follett, Pamela L.; Smith, Michael R.; Verzi, Stephen J.; Severa, William M.; Wang, Felix W.; Aimone, James B.; Naegle, John H.; James, Conrad D.

Unlike general purpose computer architectures that are comprised of complex processor cores and sequential computation, the brain is innately parallel and contains highly complex connections between computational units (neurons). Key to the architecture of the brain is a functionality enabled by the combined effect of spiking communication and sparse connectivity with unique variable efficacies and temporal latencies. Utilizing these neuroscience principles, we have developed the Spiking Temporal Processing Unit (STPU) architecture which is well-suited for areas such as pattern recognition and natural language processing. In this paper, we formally describe the STPU, implement the STPU on a field programmable gate array, and show measured performance data.

A Case Study on Neural Inspired Dynamic Memory Management Strategies for High Performance Computing

Vineyard, Craig M.; Verzi, Stephen J.

As high performance computing architectures pursue more computational power, there is a need for increased memory capacity and bandwidth as well. A multi-level memory (MLM) architecture addresses this need by combining multiple memory types with different characteristics as varying levels of the same architecture. How to efficiently utilize this memory infrastructure is an open challenge, and in this research we sought to investigate whether neural inspired approaches can meaningfully help with memory management. In particular we explored neurogenesis inspired resource allocation, and were able to show that a neural inspired mixed controller policy can beneficially impact how MLM architectures utilize memory.

Optimization-based computation with spiking neurons

Proceedings of the International Joint Conference on Neural Networks

Verzi, Stephen J.; Vineyard, Craig M.; Vugrin, Eric D.; Galiardi, Meghan; James, Conrad D.; Aimone, James B.

Considerable effort is currently being spent designing neuromorphic hardware for addressing challenging problems in a variety of pattern-matching applications. These neuromorphic systems offer low power architectures with intrinsically parallel and simple spiking neuron processing elements. Unfortunately, these new hardware architectures have been largely developed without a clear justification for using spiking neurons to compute quantities for problems of interest. Specifically, the use of spiking for encoding information in time has not been explored theoretically with complexity analysis to examine the operating conditions under which neuromorphic computing provides a computational advantage (time, space, power, etc.). In this paper, we present and formally analyze the use of temporal coding in a neural-inspired algorithm for optimization-based computation in neural spiking architectures.

Recommended Research Directions for Improving the Validation of Complex Systems Models

Vugrin, Eric D.; Trucano, Timothy G.; Swiler, Laura P.; Finley, Patrick D.; Flanagan, Tatiana P.; Naugle, Asmeret B.; Tsao, Jeffrey Y.; Verzi, Stephen J.

Quantifying neural information content: A case study of the impact of hippocampal adult neurogenesis

Proceedings of the International Joint Conference on Neural Networks

Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.

Through various means of structural and synaptic plasticity enabling online learning, neural networks are constantly reconfiguring their computational functionality. Neural information content is embodied within the configurations, representations, and computations of neural networks. To explore this content, we have developed metrics and computational paradigms to quantify it. We have observed that conventional compression methods may help overcome some of the limiting factors of standard information theoretic techniques employed in neuroscience, and allow us to approximate the information in neural data. To do so, we have used compressibility as a measure of complexity in order to estimate entropy and thereby quantitatively assess the information content of neural ensembles. Using Lempel-Ziv compression, we are able to assess the rate of generation of new patterns across a neural ensemble's firing activity over time to approximate the information content encoded by a neural circuit. As a specific case study, we have been investigating the effect of neural mixed coding schemes due to hippocampal adult neurogenesis.
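
A rough sketch of compressibility as an entropy proxy: compare the compressed size of a spike raster to its raw size. Here zlib stands in for the paper's Lempel-Ziv analysis, and the raster is random data rather than recorded activity.

```python
# zlib as a stand-in for Lempel-Ziv analysis; the spike raster is random.
import zlib
import numpy as np

rng = np.random.default_rng(0)
raster = (rng.random((100, 1000)) < 0.05).astype(np.uint8)  # neurons x time

raw = raster.tobytes()
ratio = len(zlib.compress(raw)) / len(raw)
print(f"compression ratio {ratio:.3f} (lower = more redundant, less entropy)")
```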

Improving Grid Resilience through Informed Decision-making (IGRID)

Burnham, Laurie B.; Stamber, Kevin L.; Jeffers, Robert F.; Adams, Susan S.; Verzi, Stephen J.; Sahakian, Meghan A.; Haass, Michael J.; Cauthen, Katherine R.

The transformation of the distribution grid from a centralized to a decentralized architecture, with bi-directional power and data flows, is made possible by a surge in network intelligence and grid automation. While these changes are largely beneficial, the interface between grid operator and automated technologies is not well understood, nor are the benefits and risks of automation. Quantifying and understanding the latter is an important facet of grid resilience that needs to be fully investigated. The work described in this document represents the first empirical study aimed at identifying and mitigating the vulnerabilities posed by automation for a grid that, for the foreseeable future, will remain a human-in-the-loop critical infrastructure. Our scenario-based methodology enabled us to conduct a series of experimental studies to identify causal relationships between grid-operator performance and automated technologies, and to collect measurements of human performance as a function of automation. Our findings, though preliminary, suggest there are predictive patterns in the interplay between human operators and automation, patterns that can inform the rollout of distribution automation and the hiring and training of operators, and contribute in multiple and significant ways to the field of grid resilience.

Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, S.D.; Naugle, Asmeret B.; Verzi, Stephen J.; Swiler, Laura P.; Johnson, Curtis M.; Smith, Mark A.; Flanagan, Tatiana P.; Vugrin, Eric D.; Gabert, Kasimir G.; Lave, Matthew S.; Chen, Wei C.; DeLaurentis, Daniel D.; Hubler, Alfred H.; Oberkampf, Bill O.

This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community and the verification, validation and uncertainty quantification (VVUQ) community. The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledge gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?

Simulating smoking behaviors based on cognition-determined, opinion-based system dynamics

Proceedings - Winter Simulation Conference

Naugle, Asmeret B.; Miner, Nadine E.; Aamir, Munaf S.; Jeffers, Robert F.; Verzi, Stephen J.; Bernard, Michael L.

We created a cognition-focused system dynamics model to simulate the dynamics of smoking tendencies based on media influences and communication of opinions. We based this model on the premise that the dynamics of attitudes about smoking can be more deeply understood by combining opinion dynamics with more in-depth psychological models that explicitly explore the root causes of behaviors of interest. Results of the model show the relative effectiveness of two different policies as compared to a baseline: A decrease in advertising spending, and an increase in educational spending. The initial results presented here indicate the utility of this type of simulation for analyzing various policies meant to influence the dynamics of opinions in a population.

Repeated play of the SVM game as a means of adaptive classification

Proceedings of the International Joint Conference on Neural Networks

Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; Aimone, James B.; Heileman, Gregory L.

The field of machine learning strives to develop algorithms that, through learning, lead to generalization; that is, the ability of a machine to perform a task that it was not explicitly trained for. An added challenge arises when the problem domain is dynamic or non-stationary with the data distributions or categorizations changing over time. This phenomenon is known as concept drift. Game-theoretic algorithms are often iterative by nature, consisting of repeated game play rather than a single interaction. Effectively, rather than requiring extensive retraining to update a learning model, a game-theoretic approach can adjust strategies as a novel approach to concept drift. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in an adaptive manner with repeated play to address concept drift, and show results of applying this algorithm to synthetic as well as real data.
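
A hedged analogue rather than the authors' SVM game: the sketch below keeps a linear SVM current under drift by repeatedly updating it on each new batch with partial_fit, adjusting rather than retraining from scratch. The drifting stream is synthetic.

```python
# Not the authors' SVM game: an incremental linear SVM tracking a
# drifting stream via repeated partial_fit updates.
import numpy as np
from sklearn.linear_model import SGDClassifier

clf = SGDClassifier(loss="hinge")            # linear SVM objective
classes = np.array([0, 1])
for t in range(10):                          # stream of drifting batches
    X = np.random.randn(200, 5) + 0.1 * t    # class means drift over time
    y = (X[:, 0] > 0.1 * t).astype(int)
    clf.partial_fit(X, y, classes=classes)   # adjust rather than retrain
```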

Evaluating Moving Target Defense with PLADD

Jones, Stephen T.; Outkin, Alexander V.; Gearhart, Jared L.; Hobbs, Jacob A.; Siirola, John D.; Phillips, Cynthia A.; Verzi, Stephen J.; Tauritz, Daniel T.; Mulder, Samuel A.; Naugle, Asmeret B.

This project evaluates the effectiveness of moving target defense (MTD) techniques using a new game we have designed, called PLADD, inspired by the game FlipIt [28]. PLADD extends FlipIt by incorporating what we believe are key MTD concepts. We have analyzed PLADD and proven the existence of a defender strategy that pushes a rational attacker out of the game, demonstrated how limited the strategies available to an attacker are in PLADD, and derived analytic expressions for the expected utility of the game’s players in multiple game variants. We have created an algorithm for finding a defender’s optimal PLADD strategy. We show that in the special case of achieving deterrence in PLADD, MTD is not always cost effective and that its optimal deployment may shift abruptly from not using MTD at all to using it as aggressively as possible. We believe our effort provides basic, fundamental insights into the use of MTD, but conclude that a truly practical analysis requires model selection and calibration based on real scenarios and empirical data. We propose several avenues for further inquiry, including (1) agents with adaptive capabilities more reflective of real world adversaries, (2) the presence of multiple, heterogeneous adversaries, (3) computational game theory-based approaches such as coevolution to allow scaling to the real world beyond the limitations of analytical analysis and classical game theory, (4) mapping the game to real-world scenarios, (5) taking player risk into account when designing a strategy (in addition to expected payoff), (6) improving our understanding of the dynamic nature of MTD-inspired games by using a martingale representation, defensive forecasting, and techniques from signal processing, and (7) using adversarial games to develop inherently resilient cyber systems.

Modeling the potential effects of new tobacco products and policies: A dynamic population model for multiple product use and harm

PLoS ONE

Vugrin, Eric D.; Rostron, Brian L.; Verzi, Stephen J.; Brodsky, Nancy S.; Brown, Theresa J.; Choiniere, Conrad J.; Coleman, Blair N.; Paredes, Antonio; Apelberg, Benjamin J.

Background: Recent declines in US cigarette smoking prevalence have coincided with increases in use of other tobacco products. Multiple product tobacco models can help assess the population health impacts associated with use of a wide range of tobacco products. Methods and Findings: We present a multi-state, dynamical systems population structure model that can be used to assess the effects of tobacco product use behaviors on population health. The model incorporates transition behaviors, such as initiation, cessation, switching, and dual use, related to the use of multiple products. The model tracks product use prevalence and mortality attributable to tobacco use for the overall population and by sex and age group. The model can also be used to estimate differences in these outcomes between scenarios by varying input parameter values. We demonstrate model capabilities by projecting future cigarette smoking prevalence and smoking-attributable mortality and then simulating the effects of introduction of a hypothetical new lower-risk tobacco product under a variety of assumptions about product use. Sensitivity analyses were conducted to examine the range of population impacts that could occur due to differences in input values for product use and risk. We demonstrate that potential benefits from cigarette smokers switching to the lower-risk product can be offset over time through increased initiation of this product. Model results show that population health benefits are particularly sensitive to product risks and initiation, switching, and dual use behaviors. Conclusion: Our model incorporates the variety of tobacco use behaviors and risks that occur with multiple products. As such, it can evaluate the population health impacts associated with the introduction of new tobacco products or policies that may result in product switching or dual use. Further model development will include refinement of data inputs for non-cigarette tobacco products and inclusion of health outcomes such as morbidity and disability.
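
A stylized sketch of a multi-state population model of this kind: never/current/former compartments with constant initiation, cessation, and relapse rates. The paper's model is far richer (multiple products, switching, dual use, attributable mortality); every rate below is a made-up assumption.

```python
# Stylized three-compartment sketch; all rates are made-up assumptions.
import numpy as np
from scipy.integrate import odeint

def flows(state, t, init_rate=0.02, quit_rate=0.04, relapse=0.01):
    never, current, former = state
    d_never = -init_rate * never
    d_current = init_rate * never + relapse * former - quit_rate * current
    d_former = quit_rate * current - relapse * former
    return [d_never, d_current, d_former]

t = np.linspace(0, 50, 200)                # years
traj = odeint(flows, [0.6, 0.3, 0.1], t)   # never/current/former fractions
print(traj[-1])                            # long-run use prevalence
```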

Modeling Evacuation of a Hospital without Electric Power

Prehospital and Disaster Medicine

Vugrin, Eric D.; Verzi, Stephen J.; Finley, Patrick D.; Turnquist, Mark A.; Griffin, Anne R.; Ricci, Karen A.; Wyte-Lake, Tamar

Hospital evacuations that occur during, or as a result of, infrastructure outages are complicated and demanding. Loss of infrastructure services can initiate a chain of events with corresponding management challenges. This report describes a modeling case study of the 2001 evacuation of the Memorial Hermann Hospital in Houston, Texas (USA). The study uses a model designed to track such cascading events following loss of infrastructure services and to identify the staff, resources, and operational adaptations required to sustain patient care and/or conduct an evacuation. The model is based on the assumption that a hospital's primary mission is to provide necessary medical care to all of its patients, even when critical infrastructure services to the hospital and surrounding areas are disrupted. Model logic evaluates the hospital's ability to provide an adequate level of care for all of its patients throughout a period of disruption. If hospital resources are insufficient to provide such care, the model recommends an evacuation. Model features also provide information to support evacuation and resource allocation decisions for optimizing care over the entire population of patients. This report documents the application of the model to a scenario designed to resemble the 2001 evacuation of the Memorial Hermann Hospital, demonstrating the model's ability to recreate the timeline of an actual evacuation. The model is also applied to scenarios demonstrating how its output can inform evacuation planning activities and timing.

Resource Requirements Planning for Hospitals Treating Serious Infectious Disease Cases

Vugrin, Eric D.; Verzi, Stephen J.; Finley, Patrick D.; Turnquist, Mark A.; Wyte-Lake, Tamar W.; Griffin, Ann R.; Ricci, Karen J.; Plotinsky, Rachel P.

This report presents a mathematical model of the way in which a hospital uses a variety of resources, utilities, and consumables to provide care to a set of in-patients, and how that hospital might adapt to provide treatment to a few patients with a serious infectious disease, like the Ebola virus. The intended purpose of the model is to support requirements planning studies, so that hospitals may be better prepared for situations that are likely to strain their available resources. The current model is a prototype designed to present the basic structural elements of a requirements planning analysis. Some simple illustrative experiments establish the model's general capabilities. With additional investment in model enhancement and calibration, this prototype could be developed into a useful planning tool for hospital administrators and health care policy makers.

Resilience of Adapting Networks: Results from a Stylized Infrastructure Model

Beyeler, Walter E.; Vugrin, Eric D.; Forden, Geoffrey E.; Aamir, Munaf S.; Verzi, Stephen J.; Outkin, Alexander V.

Adaptation is believed to be a source of resilience in systems. It has been difficult to measure the contribution of adaptation to resilience, unlike other resilience mechanisms such as restoration and recovery. One difficulty comes from treating adaptation as a deus ex machina that is interjected after a disruption. This provides no basis for bounding possible adaptive responses. We can bracket the possible effects of adaptation when we recognize that it occurs continuously, and is in part responsible for the current system’s properties. In this way the dynamics of the system’s pre-disruption structure provides information about post-disruption adaptive reaction. Seen as an ongoing process, adaptation has been argued to produce “robust-yet-fragile” systems. Such systems perform well under historical stresses but become committed to specific features of those stresses in a way that makes them vulnerable to system-level collapse when those features change. In effect adaptation lessens the cost of disruptions within a certain historical range, at the expense of increased cost from disruptions outside that range. Historical adaptive responses leave a signature in the structure of the system. Studies of ecological networks have suggested structural metrics that pick out systemic resilience in the underlying ecosystems. If these metrics are generally reliable indicators of resilience, they provide another strategy for gauging adaptive resilience. To progress in understanding how the process of adaptation and the property of resilience interrelate in infrastructure systems, we pose some specific questions: Does adaptation confer resilience? Does it confer resilience to novel shocks as well, or does it tune the system to fragility? Can structural features predict resilience to novel shocks? Are there policies or constraints on the adaptive process that improve resilience?

Using High Performance Computing to Examine the Processes of Neurogenesis Underlying Pattern Separation and Completion of Episodic Information

Aimone, James B.; Bernard, Michael L.; Vineyard, Craig M.; Verzi, Stephen J.

Adult neurogenesis in the hippocampus region of the brain is a neurobiological process that is believed to contribute to the brain's advanced abilities in complex pattern recognition and cognition. Here, we describe how realistic scale simulations of the neurogenesis process can offer both a unique perspective on the biological relevance of this process and confer computational insights that are suggestive of novel machine learning techniques. First, supercomputer based scaling studies of the neurogenesis process demonstrate how a small fraction of adult-born neurons have a uniquely larger impact in biologically realistic scaled networks. Second, we describe a novel technical approach by which the information content of ensembles of neurons can be estimated. Finally, we illustrate several examples of broader algorithmic impact of neurogenesis, including both extending existing machine learning approaches and novel approaches for intelligent sensing.

The impact of attitude resolve on population wide attitude change

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Vineyard, Craig M.; Lakkaraju, Kiran L.; Collard, Joseph; Verzi, Stephen J.

Attitudes play a critical role in informing resulting behavior. Extending previous work, we have developed a model of population-wide attitude change that captures social factors through a social network, cognitive factors through a cognitive network, and individual differences in influence. All three of these factors are supported by literature as playing a role in attitude and behavior change. In this paper we present a new computational model of attitude resolve that incorporates the effects of player interaction dynamics, using game theory in an integrated model of socio-cognitive strategy-based individual interaction, and provide preliminary experiments.

Augmented cognition tool for rapid military decision making

Vineyard, Craig M.; Verzi, Stephen J.; Taylor, Shawn E.; Dubicka, Irene D.; Bernard, Michael L.

This report describes the laboratory directed research and development work to model relevant areas of the brain that associate multi-modal information for long-term storage for the purpose of creating a more effective, and more automated, association mechanism to support rapid decision making. Using the biology and functionality of the hippocampus as an analogy or inspiration, we have developed an artificial neural network architecture to associate k-tuples (paired associates) of multimodal input records. The architecture is composed of coupled unimodal self-organizing neural modules that learn generalizations of unimodal components of the input record. Cross modal associations, stored as a higher-order tensor, are learned incrementally as these generalizations form. Graph algorithms are then applied to the tensor to extract multi-modal association networks formed during learning. Doing so yields a novel approach to data mining for knowledge discovery. This report describes the neurobiological inspiration, architecture, and operational characteristics of our model, and also provides a real world terrorist network example to illustrate the model's functionality.

Modeling cortical circuits

Rothganger, Fredrick R.; Rohrer, Brandon R.; Verzi, Stephen J.; Xavier, Patrick G.

The neocortex is perhaps the highest region of the human brain, where auditory and visual perception take place along with many important cognitive functions. An important research goal is to describe the mechanisms implemented by the neocortex. There is an apparent regularity in the structure of the neocortex [Brodmann 1909, Mountcastle 1957] which may help simplify this task. The work reported here addresses the problem of how to describe the putative repeated units ('cortical circuits') in a manner that is easily understood and manipulated, with the long-term goal of developing a mathematical and algorithmic description of their function. The approach is to reduce each algorithm to an enhanced perceptron-like structure and describe its computation using difference equations. We organize this algorithmic processing into larger structures based on physiological observations, and implement key modeling concepts in software which runs on parallel computing hardware.

Modeling aspects of human memory for scientific study

Bernard, Michael L.; Morrow, James D.; Taylor, Shawn E.; Verzi, Stephen J.; Vineyard, Craig M.

Working with leading experts in the field of cognitive neuroscience and computational intelligence, SNL has developed a computational architecture that represents neurocognitive mechanisms associated with how humans remember experiences in their past. The architecture represents how knowledge is organized and updated through information from individual experiences (episodes) via the cortical-hippocampal declarative memory system. We compared the simulated behavioral characteristics with those of humans measured under well-established experimental standards, controlling for unmodeled aspects of human processing, such as perception. We used this knowledge to create robust simulations of human memory behaviors that should help move the scientific community closer to understanding how humans remember information. These behaviors were experimentally validated against actual human subjects, and the results were published. An important outcome of the validation process will be the joining of specific experimental testing procedures from the field of neuroscience with computational representations from the field of cognitive modeling and simulation.

Yucca Mountain licensing support network archive assistant

Dunlavy, Daniel D.; Basilico, Justin D.; Verzi, Stephen J.; Bauer, Travis L.

This report describes the Licensing Support Network (LSN) Assistant--a set of tools for categorizing e-mail messages and documents, and investigating and correcting existing archives of categorized e-mail messages and documents. The two main tools in the LSN Assistant are the LSN Archive Assistant (LSNAA) tool for recategorizing manually labeled e-mail messages and documents and the LSN Realtime Assistant (LSNRA) tool for categorizing new e-mail messages and documents. This report focuses on the LSNAA tool. There are two main components of the LSNAA tool. The first is the Sandia Categorization Framework, which is responsible for providing categorizations for documents in an archive and storing them in an appropriate Categorization Database. The second is the actual user interface, which primarily interacts with the Categorization Database, providing a way for finding and correcting categorization errors in the database. A procedure for applying the LSNAA tool and an example use case of the LSNAA tool applied to a set of e-mail messages are provided. Performance results of the categorization model designed for this example use case are presented.
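
A minimal sketch of the categorization step such tools automate, using TF-IDF features and a linear classifier; the LSNAA's actual categorization framework, model, and categories are not reproduced here, and the documents and labels below are invented.

```python
# Generic text-categorization sketch; documents, labels, and categories
# are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["quarterly licensing report attached",
        "site geology survey results",
        "meeting minutes for licensing board",
        "borehole sample analysis"]
labels = ["admin", "technical", "admin", "technical"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(docs, labels)
print(model.predict(["new licensing correspondence"]))
```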

Simulating human behavior for national security human interactions

Bernard, Michael L.; Glickman, Matthew R.; Hart, Derek H.; Xavier, Patrick G.; Verzi, Stephen J.; Wolfenbarger, Paul W.

This 3-year research and development effort focused on what we believe is a significant technical gap in existing modeling and simulation capabilities: the representation of plausible human cognition and behaviors within a dynamic, simulated environment. Specifically, the intent of the "Simulating Human Behavior for National Security Human Interactions" project was to demonstrate an initial simulated human modeling capability that realistically represents intra- and inter-group interaction behaviors between simulated humans and human-controlled avatars as they respond to their environment. Significant progress was made towards simulating human behaviors through the development of a framework that produces realistic characteristics and movement. The simulated humans were created from models designed to be psychologically plausible by being based on robust psychological research and theory. Progress was also made towards enhancing Sandia National Laboratories' existing cognitive models to support culturally plausible behaviors that are important in representing group interactions. These models were implemented in the modular, interoperable, and commercially supported Umbra® simulation framework.
