Complex networks of information processing systems, or information supply chains, present challenges for performance analysis. We establish a mathematical setting in which a process within an information supply chain can be analyzed in terms of the functionality of the system's components. The principles of this methodology are rigorously defended and induce a model for determining the reliability of the various products in these networks. The model permits cycles in the network, provided the cycles do not contain negation. It is shown that our approach to reliability resolves the nonuniqueness caused by cycles in a probabilistic Boolean network. An iterative algorithm is given to find the reliability values of the model, using a process that can be fully automated. This automated method of discerning reliability is beneficial for systems managers: as a systems manager considers system modifications, such as replacing owned and maintained hardware with cloud computing resources, comparative analysis of system reliability becomes paramount. The model is extended to handle conditional knowledge about the network, allowing one to predict weaknesses in the system. Finally, to illustrate the model's flexibility across different forms, it is demonstrated on a system of components and subcomponents.
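As an illustration of the iterative computation described above, the following is a minimal sketch of a fixed-point reliability calculation on a cyclic network without negation. The network structure, the AND/OR combination rules, the independence assumption, and all reliability values are illustrative assumptions, not the paper's exact model.

```python
# Minimal sketch: iterative fixed-point computation of node reliabilities in a
# cyclic network without negation. AND = all inputs must function; OR = at
# least one input must function. Values and structure are made up.

def iterate_reliability(nodes, max_iter=1000, tol=1e-9):
    """nodes: dict mapping name -> (own_reliability, gate, list_of_inputs)."""
    r = {name: 0.0 for name in nodes}          # start from zero; updates are monotone
    for _ in range(max_iter):
        delta = 0.0
        for name, (own, gate, inputs) in nodes.items():
            if not inputs:
                new = own
            elif gate == 'AND':
                prod = 1.0
                for i in inputs:
                    prod *= r[i]
                new = own * prod                # all inputs work
            else:                               # 'OR'
                fail = 1.0
                for i in inputs:
                    fail *= (1.0 - r[i])
                new = own * (1.0 - fail)        # at least one input works
            delta = max(delta, abs(new - r[name]))
            r[name] = new
        if delta < tol:
            break
    return r

# Example: a small supply chain with a cycle between B and C.
example = {
    'A': (0.99, 'AND', []),
    'B': (0.95, 'AND', ['A', 'C']),
    'C': (0.97, 'OR',  ['A', 'B']),
    'D': (0.90, 'AND', ['B', 'C']),
}
print(iterate_reliability(example))
```

Because the update is monotone when negation is absent, iterating from zero converges to a least fixed point, which is one plausible reading of how cycles without negation avoid the nonuniqueness problem mentioned above.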
Methods are proposed to measure the sensitivity of a utility or value function to variations in attribute values for Multi-Criteria Decision Analyses that are based on functions which cannot be expressed as a weighted sum of the attribute values. These measures, called Factor Influence Metrics, can be used to examine the characteristics of the option-scoring algorithm and to help verify that the algorithm is consistent with the decision makers' value structure and processes.

ACKNOWLEDGEMENTS: This work is based on ideas developed with a team that included Sean Derosa (Sandia), Megan Keeling (ZRA), Trisha Miller (Sandia), Dustin Ward-Dahl (ZRA), and Lynn Yang (Sandia). The authors wish to thank Gregory Wyss (Sandia), Jason Reinhardt (Sandia), and John Lathrop (Decision Strategies LLC) for reviews and comments on drafts of this paper. We also wish to thank Noel Nachtigal and Rossitza Homan for their support and encouragement of this effort.
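To make the sensitivity idea in the Factor Influence Metrics abstract above concrete, here is a minimal sketch of a perturbation-based influence measure for a value function that is not a weighted sum of its attributes. The swing-based definition, the sample value function, and the attribute names are assumptions chosen for illustration, not the paper's actual metric.

```python
# Illustrative swing-based influence measure: sweep one attribute over its
# range with the others held at a baseline, and record the induced swing in
# the value function. This stands in for a Factor Influence Metric only as an
# assumed, simplified example.

def influence_metric(value_fn, baseline, ranges, attribute, samples=11):
    """Swing in value_fn as one attribute sweeps its range, others at baseline."""
    lo, hi = ranges[attribute]
    values = []
    for k in range(samples):
        x = dict(baseline)
        x[attribute] = lo + (hi - lo) * k / (samples - 1)
        values.append(value_fn(x))
    return max(values) - min(values)

# Example: a multiplicative (non-additive) value function over two attributes.
def value_fn(x):
    return (0.3 + 0.7 * x['cost_score']) * (0.5 + 0.5 * x['risk_score'])

baseline = {'cost_score': 0.5, 'risk_score': 0.5}
ranges = {'cost_score': (0.0, 1.0), 'risk_score': (0.0, 1.0)}
for attr in ranges:
    print(attr, round(influence_metric(value_fn, baseline, ranges, attr), 3))
```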
Sandia National Laboratories performed a two-year Laboratory Directed Research and Development project to develop a new collaborative risk assessment method that enables decision makers to fully consider the interrelationships between threat, vulnerability, and consequence. A five-step Total Risk Assessment Methodology was developed to enable interdisciplinary collaborative risk assessment by experts from these disciplines. The objective of this process is to promote effective risk management by enabling analysts to identify scenarios that are simultaneously achievable by an adversary, desirable to the adversary, and of concern to the system owner or to society. The basic steps are risk identification, collaborative scenario refinement and evaluation, scenario cohort identification and risk ranking, threat chain mitigation analysis, and residual risk assessment. The method is highly iterative, especially with regard to scenario refinement and evaluation. The Total Risk Assessment Methodology replaces subjective expert judgment with objective consideration of relative attack likelihood. The 'probability of attack' is not computed; instead, the relative likelihood of each scenario is assessed by identifying and analyzing scenario cohort groups, which are groups of scenarios, at both this target and other targets, with qualities comparable to the scenario being analyzed. Scenarios for the target under consideration and for other targets are placed into cohort groups through an established ranking process that reflects three factors: known targeting, achievable consequences, and the resources an adversary requires to have a high likelihood of success. The development of these target cohort groups implements, mathematically, the idea that adversaries actively choose among possible attack scenarios and avoid scenarios that would be significantly suboptimal for their objectives. An adversary who can choose among only a few comparable targets and scenarios (a small comparable-target cohort group) is more likely to attack the specific target under analysis because he perceives it to be a relatively unique attack opportunity; the opposite is also true. Thus, total risk is related to the number of targets in each scenario cohort group. This paper describes the Total Risk Assessment Methodology and illustrates it through an example.
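The cohort-group reasoning above lends itself to a toy numerical illustration. The sketch below assumes, purely for illustration, that the relative likelihood of an attack on a specific target falls as the size of its scenario cohort group grows, and that relative risk combines that likelihood with a consequence score; neither assumption is the methodology's actual scoring scheme.

```python
# Toy illustration of the cohort-size idea: if an adversary chooses among the
# n comparable scenarios in a cohort group, the relative likelihood that this
# specific target is attacked is taken here as 1/n. Both the 1/n rule and the
# likelihood * consequence combination are assumptions, not the methodology.

def relative_likelihood(cohort_size):
    return 1.0 / cohort_size if cohort_size > 0 else 0.0

scenarios = [
    # (name, cohort size, relative consequence score)
    ('scenario_1', 3, 0.9),    # few comparable targets -> higher likelihood
    ('scenario_2', 40, 0.9),   # many comparable targets -> likelihood diluted
    ('scenario_3', 12, 0.4),
]
for name, n, consequence in scenarios:
    print(name, round(relative_likelihood(n) * consequence, 3))
```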
We have created a logic-based, Turing-complete language for stochastic modeling. Since the inference scheme for this language is based on a variant of Pearl's loopy belief propagation algorithm, we call it Loopy Logic. Traditional Bayesian networks have limited expressive power, being essentially constrained to finite domains, as in the propositional calculus. Our language contains variables that can capture general classes of situations, events, and relationships. A first-order language is also able to reason about potentially infinite classes and situations using constructs such as hidden Markov models (HMMs). Our language uses Expectation-Maximization (EM) style learning of parameters. This fits naturally with the loopy belief propagation used for inference, since both can be viewed as iterative message-passing algorithms. We present the syntax and theoretical foundations of our Loopy Logic language. We then demonstrate three examples of stochastic modeling and diagnosis that explore the representational power of the language. A mechanical fault detection example shows how Loopy Logic can model time-series processes using an HMM variant; a digital circuit example exhibits its probabilistic modeling capabilities; and a parameter-fitting example demonstrates its power for learning unknown stochastic values.
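To illustrate the iterative message-passing style of inference that Loopy Logic builds on, the following is a minimal sketch of sum-product loopy belief propagation on a small cyclic pairwise network with binary variables. The graph, potentials, and fixed sweep count are illustrative assumptions; this is not the Loopy Logic implementation or its syntax.

```python
# Minimal sum-product loopy belief propagation on a three-variable cycle with
# binary variables. Unary potentials phi[v][x]; pairwise potentials stored per
# edge as psi[x_u][x_v]. All numbers are made up for illustration.

phi = {'A': [0.7, 0.3], 'B': [0.5, 0.5], 'C': [0.4, 0.6]}
edges = {('A', 'B'): [[1.2, 0.8], [0.8, 1.2]],
         ('B', 'C'): [[1.5, 0.5], [0.5, 1.5]],
         ('A', 'C'): [[1.1, 0.9], [0.9, 1.1]]}   # cycle A-B-C-A

def neighbors(v):
    return [u for e in edges for u in e if v in e and u != v]

# Messages m[(u, v)][x_v]: message from u to v, initialized uniformly.
m = {(u, v): [1.0, 1.0] for (a, b) in edges for (u, v) in [(a, b), (b, a)]}

for _ in range(50):                               # fixed number of sweeps
    new_m = {}
    for (u, v) in m:
        psi = edges.get((u, v))
        if psi is None:                           # edge stored the other way; transpose
            psi = [[edges[(v, u)][j][i] for j in range(2)] for i in range(2)]
        msg = []
        for xv in range(2):
            total = 0.0
            for xu in range(2):
                incoming = 1.0
                for w in neighbors(u):
                    if w != v:
                        incoming *= m[(w, u)][xu]
                total += phi[u][xu] * psi[xu][xv] * incoming
            msg.append(total)
        z = sum(msg)
        new_m[(u, v)] = [x / z for x in msg]      # normalize for stability
    m = new_m

# Approximate marginals (beliefs): unary potential times incoming messages.
for v in phi:
    belief = list(phi[v])
    for w in neighbors(v):
        for x in range(2):
            belief[x] *= m[(w, v)][x]
    z = sum(belief)
    print(v, [round(x / z, 3) for x in belief])
```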