Multi-objective optimization methods can be criticized for lacking a statistically valid measure of the quality and representativeness of a solution. This criticism applies especially to metaheuristic optimization approaches but can also apply to other methods that typically report only a small representative subset of a Pareto frontier. Here we present a method to address this deficiency based on random sampling of a solution space to determine, with a specified level of confidence, the fraction of the solution space that is surpassed by an optimization. The Superiority of Multi-Objective Optimization to Random Sampling (SMORS) method can evaluate quality and representativeness using dominance or other measures, e.g., a spacing measure for high-dimensional spaces. SMORS has been tested in a combinatorial optimization context using a genetic algorithm but could be useful for other optimization methods.
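The sketch below illustrates the core idea in a minimal form: draw random solutions, count how many are Pareto-dominated by the optimizer's front, and attach a one-sided confidence bound to that fraction. It assumes minimization on all objectives, and sample_solution() is a hypothetical stand-in for drawing and evaluating a random feasible solution; it is not the SMORS implementation itself.

```python
# Minimal sketch: estimate, with a confidence bound, the fraction of a randomly
# sampled solution space that is dominated by an optimizer's Pareto set.
import random
from statistics import NormalDist

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization on all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def sample_solution(n_obj=2):
    # Hypothetical stand-in: draw and evaluate a random feasible solution.
    return tuple(random.random() for _ in range(n_obj))

def fraction_surpassed(pareto_front, n_samples=10_000, confidence=0.95):
    """Estimate the fraction of random samples dominated by the Pareto front,
    plus a one-sided lower confidence bound (normal approximation to the binomial)."""
    hits = 0
    for _ in range(n_samples):
        s = sample_solution()
        if any(dominates(p, s) for p in pareto_front):
            hits += 1
    p_hat = hits / n_samples
    z = NormalDist().inv_cdf(confidence)
    lower = max(0.0, p_hat - z * (p_hat * (1 - p_hat) / n_samples) ** 0.5)
    return p_hat, lower

if __name__ == "__main__":
    front = [(0.1, 0.4), (0.2, 0.2), (0.4, 0.1)]   # toy Pareto set from an optimizer
    est, lower = fraction_surpassed(front)
    print(f"estimated fraction surpassed: {est:.3f} (95% lower bound: {lower:.3f})")
```

The normal approximation to the binomial is used for the bound; an exact (Clopper-Pearson) interval could be substituted when the hit count is small.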
Schedule Management Optimization (SMO) is a tool for automatically generating a schedule of project tasks. Project scheduling is traditionally achieved with commercial project management software or case-specific optimization formulations. Commercial software packages are useful tools for managing and visualizing copious amounts of project task data, but their ability to automatically generate optimized schedules is limited, and there are many real-world constraints and decision variables that commercial packages ignore. Case-specific optimization formulations effectively identify schedules that optimize one or more objectives for a specific problem, but they are unable to handle a diverse selection of scheduling problems. SMO enables practitioners to generate optimal project schedules automatically while considering a broad range of real-world problem characteristics. SMO has been designed to handle some of the most difficult scheduling problems -- those with resource constraints, multiple objectives, multiple inventories, and diverse ways of performing tasks. This report contains descriptions of the SMO modeling concepts and explains how they map to real-world scheduling considerations.
As system of systems (SoS) models become increasingly complex and interconnected, a new approach is needed to capture the effects of humans within the SoS. Many real-life events have shown the detrimental outcomes of failing to account for humans in the loop. This research introduces a novel, cross-disciplinary methodology for modeling humans interacting with technologies to perform tasks within an SoS, specifically within a layered physical security system use case. Metrics and formulations developed for this new way of looking at SoS, termed sociotechnical SoS, allow for the quantification of the interplay of effectiveness and efficiency seen in detection theory to measure the ability of a physical security system to detect and respond to threats. This methodology has been applied to a notional representation of a small military Forward Operating Base (FOB) as a proof of concept.
Upcoming weapon programs require an aggressive increase in Application Specific Integrated Circuit (ASIC) production at Sandia National Laboratories (SNL). SNL has developed unique modeling and optimization tools that have been instrumental in improving the productivity and efficiency of ASIC production, identifying optimal operational and tactical execution plans under resource constraints, and providing confidence in successful mission execution. With ten products, unprecedented levels of demand, a single set of shared resources, highly variable processes, and the need to synchronize tasks with external suppliers, scheduling is an integral part of successful manufacturing. The scheduler uses an iterative multi-objective genetic algorithm and a multi-dimensional performance evaluator. Schedule feasibility is assessed using a discrete event simulation (DES) that incorporates operational uncertainty, variability, and resource availability. The tools provide rapid scenario assessments and responses to variances in the operational environment, and have been used to inform major equipment investments and workforce planning decisions in multiple SNL facilities.
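As an illustration of the feasibility-assessment idea (and not the SNL tooling itself), the sketch below runs Monte Carlo replications of a tiny discrete event simulation in which tasks with stochastic durations compete for a shared resource pool, and estimates the probability that all tasks finish by a due date. The task list, duration distributions, and capacity value are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo replications of a tiny DES to estimate the
# probability that a task list finishes by a due date, given stochastic
# durations and a shared resource pool. All task data are illustrative.
import heapq
import random

TASKS = {                       # task: (min, mode, max) duration in days, resource units needed
    "wafer_fab": (10, 14, 22, 2),
    "assembly":  (5, 7, 12, 1),
    "test":      (3, 4, 8, 1),
}
CAPACITY = 2                    # shared resource units available at any time

def simulate_once():
    """One DES replication: start tasks as resources free up, return the makespan."""
    pending = list(TASKS.items())
    events = []                 # heap of (finish_time, resources_released)
    in_use, clock = 0, 0.0
    while pending or events:
        started = []
        for name, (lo, mode, hi, need) in pending:
            if in_use + need <= CAPACITY:
                dur = random.triangular(lo, hi, mode)
                heapq.heappush(events, (clock + dur, need))
                in_use += need
                started.append((name, (lo, mode, hi, need)))
        for task in started:
            pending.remove(task)
        clock, freed = heapq.heappop(events)   # advance to the next completion
        in_use -= freed
    return clock

def prob_on_time(due_date=30.0, replications=5000):
    return sum(simulate_once() <= due_date for _ in range(replications)) / replications

if __name__ == "__main__":
    print(f"P(finish by day 30) ~ {prob_on_time():.2f}")
```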
System-of-systems modeling has traditionally focused on physical systems rather than humans, but recent events have proved the necessity of considering the human in the loop. As technology becomes more complex and layered security continues to increase in importance, capturing humans and their interactions with technologies within the system-of-systems will be increasingly necessary. After an extensive job-task analysis, a novel type of system-of-systems simulation model has been created to capture the human-technology interactions on an extra-small forward operating base to better understand performance, key security drivers, and the robustness of the base. In addition to the model, an innovative framework for using detection theory to calculate d’ for individual elements of the layered security system, and for the entire security system as a whole, is under development.
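The standard detection-theory computation of d' is shown below as a minimal sketch: sensitivity is the difference of the inverse-normal-transformed hit and false-alarm rates, computed here for a few illustrative layers and, under an assumed independent OR-detection rule, for the system as a whole. The layer names, rates, and aggregation rule are assumptions for illustration, not results from the FOB model.

```python
# Minimal sketch of the standard detection-theory d' computation for individual
# layers and (under an independence assumption) a whole layered security system.
from math import prod
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """d' = z(hit rate) - z(false-alarm rate), via the inverse normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative layers of a notional security system: (hit rate, false-alarm rate).
layers = {
    "perimeter_sensor": (0.90, 0.20),
    "camera_operator":  (0.80, 0.10),
    "entry_guard":      (0.95, 0.30),
}

for name, (h, fa) in layers.items():
    print(f"{name}: d' = {d_prime(h, fa):.2f}")

# One simple whole-system view: a threat is detected if any layer detects it,
# assuming independent layers (the actual aggregation in the model may differ).
system_hit = 1 - prod(1 - h for h, _ in layers.values())
system_fa = 1 - prod(1 - fa for _, fa in layers.values())
print(f"system d' = {d_prime(system_hit, system_fa):.2f}")
```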
Our society is increasingly reliant on systems and interoperating collections of systems, known as systems of systems (SoS). Our national security is built on SoS, such as Army brigades, airport security, and nuclear weapons security. These SoS are often subject to changing budgets, changing missions (e.g., nation building, arms-control treaties), changing threats (e.g., asymmetric warfare, terrorism, WMDs), and changing natural environments (e.g., climate, weather, natural disasters). Can vital SoS adapt to these changing landscapes effectively and efficiently? This paper describes research at Sandia National Laboratories to develop metrics for measuring the adaptability of SoS. We report that we could not find a single or absolute adaptability metric, in large part due to lack of general objectives or structures of SoS. However, we do report a set of metrics that can be applied relatively, plus a method for combining the metrics into an adaptability index, a single value that can be used to compare SoS designs. We show in a test case that these metrics can distinguish good and poor performance under a variable mission space and an uncertain threat environment. The metrics are intended to support a long-range goal of creating an analytic capability to assist in the design and operation of adaptable systems and SoS.
Our society is increasingly reliant on systems and interoperating collections of systems, known as systems of systems (SoS). These SoS are often subject to changing missions (e.g., nation-building, arms-control treaties), threats (e.g., asymmetric warfare, terrorism), natural environments (e.g., climate, weather, natural disasters), and budgets. How well can SoS adapt to these types of dynamic conditions? This report details the results of a three-year Laboratory Directed Research and Development (LDRD) project aimed at developing metrics and methodologies for quantifying the adaptability of systems and SoS. Work products include: derivation of a set of adaptability metrics; a method for combining the metrics into a system of systems adaptability index (SoSAI) used to compare the adaptability of SoS designs; development of a prototype dynamic SoS (proto-dSoS) simulation environment which provides the ability to investigate the validity of the adaptability metric set; and two test cases that evaluate the usefulness of a subset of the adaptability metrics and SoSAI for distinguishing good from poor adaptability in an SoS. Intellectual property results include three patents pending: A Method For Quantifying Relative System Adaptability, Method for Evaluating System Performance, and A Method for Determining Systems Re-Tasking.
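The abstract does not give the SoSAI formula, so the sketch below shows one simple way such an index could be computed for comparing designs: min-max normalize each metric across the candidate designs and take a weighted sum. The metric names, values, and weights are hypothetical, and the actual SoSAI formulation may differ.

```python
# Illustrative sketch of combining adaptability metrics into a single index
# for ranking SoS designs; the weighted sum of min-max normalized metrics is
# an assumption, not the published SoSAI definition.
def normalize(values):
    """Min-max normalize one metric across designs (higher = better)."""
    lo, hi = min(values), max(values)
    return [0.5 if hi == lo else (v - lo) / (hi - lo) for v in values]

def adaptability_index(designs, weights):
    """Return {design: index} from per-design metric dicts and per-metric weights."""
    names = list(designs)
    index = {n: 0.0 for n in names}
    for metric, w in weights.items():
        scores = normalize([designs[d][metric] for d in names])
        for n, s in zip(names, scores):
            index[n] += w * s
    return index

# Hypothetical adaptability metrics for three candidate SoS designs.
designs = {
    "design_A": {"flexibility": 0.7, "robustness": 0.5, "re_tasking": 0.9},
    "design_B": {"flexibility": 0.4, "robustness": 0.8, "re_tasking": 0.6},
    "design_C": {"flexibility": 0.9, "robustness": 0.3, "re_tasking": 0.5},
}
weights = {"flexibility": 0.4, "robustness": 0.4, "re_tasking": 0.2}
print(adaptability_index(designs, weights))
```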
The globalization of today's supply chains (e.g., information and communication technologies, military systems, etc.) has created an emerging security threat that could degrade the integrity and availability of sensitive and critical government data, control systems, and infrastructures. Commercial-off-the-shelf (COTS) and even government-off-the-shelf (GOTS) products often are designed, developed, and manufactured overseas. Counterfeit items, from individual chips to entire systems, have been found in commercial and government sectors. Supply chain attacks can be initiated at any point during the product or system lifecycle, and can have detrimental effects on mission success. To date, there is a lack of analytics and decision support tools for analyzing supply chain security holistically and for performing tradeoff analyses to determine how to invest in or deploy possible mitigation options for supply chain security such that the return on investment is optimal with respect to cost, efficiency, and security. This paper discusses the development of a supply chain decision analytics framework that will assist decision makers and stakeholders in performing risk-based cost-benefit prioritization of security investments to manage supply chain risk. Key aspects of our framework include hierarchical supply chain representation, vulnerability and mitigation modeling, risk assessment, and optimization. This work is part of a long-term research effort on supply chain decision analytics for the trusted systems and communications research challenge.
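The kind of risk-based cost-benefit prioritization described can be illustrated with a toy optimization: under an additive risk-reduction model (an assumption for illustration, not the framework's actual vulnerability and mitigation model), pick the affordable set of mitigations that leaves the least residual risk. All names and numbers below are hypothetical.

```python
# Toy sketch of budget-constrained mitigation selection: exhaustive search over
# mitigation portfolios to minimize residual risk under an additive model.
from itertools import combinations

# (name, cost in $k, risk reduction in expected-loss units) -- hypothetical values
MITIGATIONS = [
    ("vendor_vetting",      120, 35.0),
    ("component_testing",   200, 55.0),
    ("secure_logistics",     80, 20.0),
    ("provenance_tracking", 150, 40.0),
]
BASELINE_RISK = 100.0
BUDGET = 300

def best_portfolio(mitigations, budget):
    """Exhaustive search (fine for a handful of options) for the affordable
    portfolio with the lowest residual risk."""
    best = (BASELINE_RISK, ())
    for r in range(len(mitigations) + 1):
        for combo in combinations(mitigations, r):
            cost = sum(c for _, c, _ in combo)
            if cost <= budget:
                residual = max(0.0, BASELINE_RISK - sum(dr for _, _, dr in combo))
                best = min(best, (residual, tuple(name for name, _, _ in combo)))
    return best

if __name__ == "__main__":
    residual, chosen = best_portfolio(MITIGATIONS, BUDGET)
    print(f"chosen mitigations: {chosen}, residual risk: {residual:.1f}")
```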
The International Data Centre (IDC) of the Comprehensive Nuclear-Test-Ban Treaty Organization relies on automatic data processing as the first step in identifying seismic events from seismic waveform data. However, more than half of the automatically identified seismic events are eliminated by IDC analysts. Here, an IDC dataset is analyzed to determine if the number of automatically generated false positives could be reduced. Data that could be used to distinguish false positives from analyst-accepted seismic events include the number of stations, the number of phases, the signal-to-noise ratio, and the pick error. An empirical method is devised to determine whether an automatically identified seismic event is acceptable, and the method is found to identify a significant number of the false positives in IDC data. This work could help reduce seismic analyst workload and improve the calibration of seismic monitoring stations, and could be extended to address identification of seismic events missed by automatic processing.
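A hedged sketch of what such an empirical acceptance screen could look like follows: simple thresholds on the number of defining stations, the number of associated phases, signal-to-noise ratio, and pick error. The threshold values and field names are illustrative assumptions, not the criteria derived in the study.

```python
# Illustrative screening rule: flag automatically built events as likely false
# positives using thresholds on simple bulletin attributes. Thresholds and
# field names are assumptions for illustration only.
def likely_false_positive(event,
                          min_stations=3,
                          min_phases=4,
                          min_snr=5.0,
                          max_pick_error=1.0):
    """Return True if the automatic event fails the quality screen."""
    return (event["n_stations"] < min_stations
            or event["n_phases"] < min_phases
            or event["mean_snr"] < min_snr
            or event["mean_pick_error"] > max_pick_error)

# Toy automatic bulletin: two events, one weakly constrained.
bulletin = [
    {"id": 1, "n_stations": 8, "n_phases": 14, "mean_snr": 12.3, "mean_pick_error": 0.4},
    {"id": 2, "n_stations": 2, "n_phases": 3,  "mean_snr": 4.1,  "mean_pick_error": 1.6},
]
for ev in bulletin:
    status = "screen out" if likely_false_positive(ev) else "keep for analyst review"
    print(f"event {ev['id']}: {status}")
```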
A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood, probability-based performance modeling; the other prong was to investigate more state-of-the-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models into a System of Systems analytic environment.
Terrestrial climate records and historical observations of the Sun suggest that the Sun undergoes aperiodic oscillations in radiative output and size over time periods of centuries and millennia. Such behavior can be explained by the solar convective zone acting as a nonlinear oscillator, forced at the sunspot-cycle frequency by variations in heliomagnetic field strength. A forced variant of the Lorenz equations can generate a time series with the same characteristics as the solar and climate records. The timescales and magnitudes of oscillations that could be caused by this mechanism are consistent with what is known about the Sun and terrestrial climate.
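For readers who want to experiment with the idea, the sketch below integrates a periodically forced Lorenz system with a fixed-step RK4 scheme. Forcing the Rayleigh-like parameter at a fixed "sunspot-cycle" period is one simple choice made here for illustration; the paper's actual forced variant and parameter values may differ.

```python
# Minimal sketch: RK4 integration of a periodically forced Lorenz system.
# Forcing rho sinusoidally is an illustrative assumption; all parameter values
# here are arbitrary, not the paper's.
import math

SIGMA, BETA = 10.0, 8.0 / 3.0
RHO0, FORCING_AMP, FORCING_PERIOD = 28.0, 4.0, 11.0   # arbitrary units

def deriv(t, state):
    x, y, z = state
    rho = RHO0 + FORCING_AMP * math.sin(2 * math.pi * t / FORCING_PERIOD)
    return (SIGMA * (y - x), x * (rho - z) - y, x * y - BETA * z)

def rk4_step(t, state, dt):
    k1 = deriv(t, state)
    k2 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k1)])
    k3 = deriv(t + dt / 2, [s + dt / 2 * k for s, k in zip(state, k2)])
    k4 = deriv(t + dt, [s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def integrate(t_end=200.0, dt=0.01, state=(1.0, 1.0, 1.0)):
    t, series = 0.0, []
    while t < t_end:
        state = rk4_step(t, state, dt)
        t += dt
        series.append((t, state[0]))   # track x(t) as the output time series
    return series

if __name__ == "__main__":
    series = integrate()
    print(f"{len(series)} samples; final x = {series[-1][1]:.3f}")
```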