Publications

Supply chain lifecycle decision analytics

Proceedings - International Carnahan Conference on Security Technology

Kao, Gio K.; Lin, Han W.; Eames, Brandon; Haas, Jason; Fisher, Alexis; Michalski, John T.; Blount, Jon; Hamlet, Jason; Lee, Erik; Gauthier, John H.; Wyss, Gregory; Helinski, Ryan H.; Franklin, Dustin R.

The globalization of today's supply chains (e.g., information and communication technologies, military systems, etc.) has created an emerging security threat that could degrade the integrity and availability of sensitive and critical government data, control systems, and infrastructures. Commercial-off-the-shelf (COTS) and even government-off-the-shelf (GOTS) products often are designed, developed, and manufactured overseas. Counterfeit items, from individual chips to entire systems, have been found in commercial and government sectors. Supply chain attacks can be initiated at any point during the product or system lifecycle and can have detrimental effects on mission success. To date, there is a lack of analytics and decision support tools for analyzing supply chain security holistically and for performing tradeoff analyses to determine how to invest in or deploy possible mitigation options for supply chain security such that the return on investment is optimal with respect to cost, efficiency, and security. This paper discusses the development of a supply chain decision analytics framework that will assist decision makers and stakeholders in performing risk-based cost-benefit prioritization of security investments to manage supply chain risk. Key aspects of our framework include hierarchical supply chain representation, vulnerability and mitigation modeling, risk assessment, and optimization. This work is part of a long-term research effort on supply chain decision analytics for the trusted systems and communications research challenge.
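
As a hedged illustration of the risk-based cost-benefit prioritization described above, the sketch below selects mitigations under a fixed budget using a simple risk-reduction-per-cost heuristic. The mitigation names, costs, risk-reduction values, and budget are hypothetical and are not taken from the paper.

# Hypothetical illustration of risk-based cost-benefit prioritization of
# supply chain security mitigations under a fixed budget. All names and
# numbers are invented for the example.
from dataclasses import dataclass

@dataclass
class Mitigation:
    name: str
    cost: float            # deployment cost (arbitrary units)
    risk_reduction: float  # expected reduction in mission risk (arbitrary units)

def prioritize(mitigations, budget):
    """Greedy selection by risk reduction per unit cost until the budget is spent."""
    selected, spent = [], 0.0
    for m in sorted(mitigations, key=lambda m: m.risk_reduction / m.cost, reverse=True):
        if spent + m.cost <= budget:
            selected.append(m)
            spent += m.cost
    return selected, spent

options = [
    Mitigation("supplier vetting", cost=3.0, risk_reduction=5.0),
    Mitigation("incoming-part authentication", cost=5.0, risk_reduction=7.0),
    Mitigation("trusted foundry sourcing", cost=9.0, risk_reduction=10.0),
]
chosen, spent = prioritize(options, budget=10.0)
for m in chosen:
    print(f"deploy: {m.name} (cost {m.cost}, risk reduction {m.risk_reduction})")
print(f"total cost: {spent}")

A full analysis would replace the greedy heuristic with optimization over the framework's hierarchical supply chain representation and its vulnerability and mitigation models.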

Cyber threat metrics

Mateski, Mark E.; Trevino, Cassandra M.; Veitch, Cynthia K.; Michalski, John T.; Harris, James M.; Maruoka, Les S.; Frye, Jason N.

Threats are generally much easier to list than to describe, and much easier to describe than to measure. As a result, many organizations list threats. Fewer describe them in useful terms, and still fewer measure them in meaningful ways. This is particularly true in the dynamic and nebulous domain of cyber threats - a domain that tends to resist easy measurement and, in some cases, appears to defy any measurement. We believe the problem is tractable. In this report we describe threat metrics and models for characterizing threats consistently and unambiguously. The purpose of this report is to support the Operational Threat Assessment (OTA) phase of risk and vulnerability assessment. To this end, we focus on the task of characterizing cyber threats using consistent threat metrics and models. In particular, we address threat metrics and models for describing malicious cyber threats to US Federal Civilian Executive Branch (FCEB) agencies and systems.
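
One hedged way to make such metrics concrete is to score an adversary on a small set of weighted attributes so that different threats become directly comparable. The attribute names, weights, and profiles below are illustrative assumptions, not the metrics defined in the report.

# Hypothetical threat-scoring sketch: rate an adversary on a few attributes
# (0-10) and combine them with weights into one comparable score.
WEIGHTS = {"resources": 0.3, "skill": 0.3, "access": 0.2, "intent": 0.2}

def threat_score(ratings):
    """Weighted average of attribute ratings; returns a value in [0, 10]."""
    return sum(WEIGHTS[a] * ratings[a] for a in WEIGHTS)

profiles = {
    "lone actor":     {"resources": 2, "skill": 5, "access": 3, "intent": 6},
    "organized crew": {"resources": 6, "skill": 7, "access": 5, "intent": 8},
}
for name, ratings in sorted(profiles.items(), key=lambda p: threat_score(p[1]), reverse=True):
    print(f"{name}: {threat_score(ratings):.1f}")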

A threat analysis framework as applied to critical infrastructures in the Energy Sector

Michalski, John T.; Duggan, David P.

The need to protect national critical infrastructure has led to the development of a threat analysis framework. The threat analysis framework can be used to identify the elements required to quantify threats against critical infrastructure assets and provide a means of distributing actionable threat information to critical infrastructure entities for the protection of infrastructure assets. This document identifies and describes five key elements needed to perform a comprehensive analysis of threat: the identification of an adversary, the development of generic threat profiles, the identification of generic attack paths, the discovery of adversary intent, and the identification of mitigation strategies.
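
A small data-structure sketch, with hypothetical field contents, of how the five elements might be recorded together for a single asset:

# Hypothetical record tying the framework's five elements to one asset.
# All field values are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class ThreatAnalysis:
    asset: str
    adversary: str                                       # 1: identified adversary class
    threat_profile: dict                                 # 2: generic threat profile attributes
    attack_paths: list = field(default_factory=list)     # 3: generic attack paths
    intent: str = ""                                     # 4: assessed adversary intent
    mitigations: list = field(default_factory=list)      # 5: candidate mitigation strategies

example = ThreatAnalysis(
    asset="substation control network",
    adversary="well-funded outsider",
    threat_profile={"resources": "high", "stealth": "moderate"},
    attack_paths=[["phish operator", "pivot to HMI", "issue breaker command"]],
    intent="disrupt service",
    mitigations=["network segmentation", "two-person command authorization"],
)
print(example.asset, "->", example.attack_paths[0])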

National SCADA Test Bed: FY05 Progress on Virtual Control System Environment (VCSE)

Van Leeuwen, Brian P.; Michalski, John T.; Lee, Erik L.

This document provides the status of the Virtual Control System Environment (VCSE) under development at Sandia National Laboratories. This development effort is funded by the Department of Energy's (DOE) National SCADA Test Bed (NSTB) Program. Specifically, the document presents a Modeling and Simulation (M&S) and software interface capability that supports the analysis of Process Control Systems (PCS) used in critical infrastructures. This document describes the development activities performed through June 2006 and the current status of the VCSE development task.

Initial activities performed by the development team included researching the needs of critical infrastructure systems that depend on PCS. A primary source describing the security needs of a critical infrastructure is the Roadmap to Secure Control Systems in the Energy Sector. A literature search of PCS analysis tools was performed, and we identified a void in system-wide PCS M&S capability: no existing tools provide a capability to simulate control system devices and the underlying supporting communication network. The design team identified the requirements for an analysis tool to fill this void. Since PCS are composed of multiple subsystems, a modular analysis framework was selected for the VCSE. The need for a framework to support the interoperability of multiple simulators with a PCS device model library was identified. The framework supports emulation of a system that is represented by models in a simulation interacting with actual hardware via a System-in-the-Loop (SITL) interface.

To identify specific features for the VCSE analysis tool, the design team created a questionnaire that briefly described the range of potential capabilities the analysis tool could include and requested feedback from potential industry users. This initial industry outreach was also intended to identify several industry users willing to participate in a dialog throughout the development process so that the usefulness of the VCSE to industry is maximized. Industry involvement will continue throughout the VCSE development process.

The team's activities have focused on creating a modeling and simulation capability that will support the analysis of PCS. An M&S methodology that is modular in structure was selected. The framework is able to support a range of model fidelities depending on the analysis being performed. In some cases, high-fidelity network communication protocol and device models are necessary, which can be accomplished by including a high-fidelity communication network simulator such as OPNET Modeler. In other cases, lower-fidelity models can be used, in which case the high-fidelity communication network simulator is not needed. In addition, the framework supports a range of control system device behavior models, from simple function models to very detailed vendor-specific models.

Included in the FY05 funding milestones was a demonstration of the framework. The development team created two scenarios that demonstrated the VCSE modular framework. The first demonstration provided a co-simulation using a high-fidelity communication network simulator interoperating with a custom-developed control system simulator and device library. The second scenario provided a system-in-the-loop demonstration that emulated a system with a virtual network segment interoperating with a real-device network segment.
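
A bare-bones sketch of the modular co-simulation idea: device behavior models exchange messages through a swappable network component, which could be a simple delay model, a high-fidelity simulator such as OPNET Modeler, or a system-in-the-loop bridge to real hardware. The class and method names are invented for the illustration and do not reflect the actual VCSE implementation.

# Hypothetical co-simulation skeleton: control-device models exchange messages
# through a swappable network component (a fixed-delay model here).
import heapq

class DelayNetwork:
    """Lowest-fidelity network model: fixed latency, no loss or congestion."""
    def __init__(self, latency=0.05):
        self.latency = latency
    def transit_time(self, msg):
        return self.latency

class RTU:
    """Toy remote terminal unit: replies to polls with a measurement."""
    def __init__(self, name):
        self.name = name
        self.value = 42.0
    def receive(self, msg, now, send):
        if msg == "poll":
            send(self.name, "master", str(self.value), now)

class Master:
    """Toy master station: logs whatever the RTUs report."""
    def receive(self, msg, now, send):
        print(f"t={now:.2f}s master received: {msg}")

def run(events, nodes, network, horizon=1.0):
    """Discrete-event loop over (time, destination, source, message) entries."""
    def send(src, dst, msg, now):
        heapq.heappush(events, (now + network.transit_time(msg), dst, src, msg))
    while events and events[0][0] <= horizon:
        now, dst, src, msg = heapq.heappop(events)
        nodes[dst].receive(msg, now, send)

nodes = {"rtu1": RTU("rtu1"), "master": Master()}
run([(0.0, "rtu1", "master", "poll")], nodes, DelayNetwork())

Swapping DelayNetwork for a bridge to an external network simulator, or for a real network interface, is what changes the fidelity without touching the device models.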

Final report for the mobile node authentication LDRD project

Michalski, John T.; Lanzone, Andrew J.

In hostile ad hoc wireless communication environments, such as battlefield networks, end-node authentication is critical. In a wired infrastructure, this authentication service is typically facilitated by a centrally located "authentication certificate generator" such as a Certificate Authority (CA) server. This centralized approach is ill-suited to meet the needs of mobile ad hoc networks, such as those required by military systems, because of their unpredictable connectivity and dynamic routing. There is a need for a secure and robust approach to mobile node authentication. Current mechanisms either assign a pre-shared key (shared by all participating parties) or require that each node retain a collection of individual keys used to communicate with other individual nodes. Both of these approaches have scalability issues and allow a single compromised node to jeopardize the entire mobile node community. In this report, we propose replacing the centralized CA with a distributed CA whose responsibilities are shared among a set of select network nodes. To that end, we develop a protocol that relies on threshold cryptography to perform the fundamental CA duties in a distributed fashion. The protocol is meticulously defined and is implemented in a series of detailed models. Using these models, mobile wireless scenarios were created on a communication simulator to test the protocol in an operational environment and to gather statistics on its scalability and performance.
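
The threshold idea can be illustrated with Shamir secret sharing: the CA's signing secret is split into n shares such that any k nodes can reconstruct (or jointly apply) it, while fewer than k learn nothing. The sketch below is a didactic k-of-n example over a prime field; it is not the protocol developed in this report, which also has to handle share distribution, partial signatures, and node compromise.

# Toy Shamir (k-of-n) secret sharing over a prime field. Didactic only:
# no real certificate signing is performed.
import random

P = 2**127 - 1  # Mersenne prime used as the field modulus

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))    # any 3 of the 5 shares recover 123456789
print(reconstruct(shares[1:4]))   # a different 3 shares work equally well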

Enhancements for distributed certificate authority approaches for mobile wireless ad hoc networks

Van Leeuwen, Brian P.; Anderson, William E.; Michalski, John T.

Mobile wireless ad hoc networks that are resistant to adversarial manipulation are necessary for distributed systems used in military and security applications. Critical to the successful operation of these networks, which operate in the presence of adversarial stressors, are robust and efficient information assurance methods. In this report we describe necessary enhancements for a distributed certificate authority (CA) used in secure wireless network architectures. Necessary cryptographic algorithms used in distributed CAs are described, and implementation enhancements of these algorithms in mobile wireless ad hoc networks are developed. The enhancements support a network's ability to detect compromised nodes and facilitate distributed CA services. We provide insight into the impacts the enhancements will have on network performance through timing diagrams and preliminary network simulation studies.
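
As one hedged illustration of the compromise-detection aspect, nodes that observe misbehavior might broadcast accusations, with a peer excluded from distributed CA duties once a threshold of distinct accusers is reached. The threshold and node names below are assumptions for the example, not the enhancements specified in the report.

# Hypothetical accusation counting: a node is excluded from distributed CA
# duties once at least `threshold` distinct peers have accused it.
from collections import defaultdict

class AccusationTracker:
    def __init__(self, threshold):
        self.threshold = threshold
        self.accusers = defaultdict(set)   # accused node -> set of accusing nodes

    def accuse(self, accuser, accused):
        if accuser != accused:             # ignore self-accusations
            self.accusers[accused].add(accuser)

    def excluded(self, node):
        return len(self.accusers[node]) >= self.threshold

tracker = AccusationTracker(threshold=2)
tracker.accuse("n1", "n4")
tracker.accuse("n2", "n4")
print(tracker.excluded("n4"))   # True: two distinct accusers
print(tracker.excluded("n1"))   # False: no accusations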

Final report for the Multiprotocol Label Switching (MPLS) control plane security LDRD project

Tarman, Thomas D.; Pierson, Lyndon G.; Michalski, John T.; Black, Stephen P.; Torgerson, Mark D.

As rapid Internet growth continues, global communications become more dependent on Internet availability for information transfer. Recently, the Internet Engineering Task Force (IETF) introduced a new protocol, Multiprotocol Label Switching (MPLS), to provide high-performance data flows within the Internet. MPLS emulates two major aspects of the Asynchronous Transfer Mode (ATM) technology. First, each initial IP packet is 'routed' to its destination based on previously known delay and congestion avoidance mechanisms. This allows for effective distribution of network resources and reduces the probability of congestion. Second, after route selection, each subsequent packet is assigned a label at each hop, which determines the output port for the packet to reach its final destination. These labels guide the forwarding of each packet at routing nodes more efficiently and with more control than traditional IP forwarding (based on complete address information in each packet) for high-performance data flows. Label assignment is critical to the prompt and accurate delivery of user data. However, the protocols for label distribution were not adequately secured. Thus, if an adversary compromises a node by intercepting and modifying labels, or more simply by injecting false labels into the packet-forwarding engine, the propagation of improperly labeled data flows could create instability in the entire network. In addition, some Virtual Private Network (VPN) solutions take advantage of this 'virtual channel' configuration to eliminate the need for user data encryption to provide privacy. VPNs relying on MPLS require accurate label assignment to maintain user data protection. This research developed a working distributed trust model that demonstrated how to deploy confidentiality, authentication, and non-repudiation in the global network label switching control plane. Simulation models and laboratory testbed implementations that demonstrated this concept were developed, and results from this research were transferred to industry via standards in the Optical Internetworking Forum (OIF).
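
A minimal sketch of the kind of protection argued for above: authenticating a label-binding advertisement so a receiving node can reject injected or modified labels. The message fields and pre-shared key are placeholders; the research addressed confidentiality, authentication, and non-repudiation in the control plane, which a bare HMAC over a binding does not fully provide.

# Illustrative only: authenticate a label-binding message with an HMAC so a
# receiving node can detect injected or altered label advertisements.
import hmac, hashlib, json

KEY = b"pre-shared control-plane key (placeholder)"

def make_binding(fec_prefix, label):
    msg = json.dumps({"fec": fec_prefix, "label": label}, sort_keys=True).encode()
    tag = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return msg, tag

def accept_binding(msg, tag):
    """Install the label only if the authentication tag verifies."""
    expected = hmac.new(KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

msg, tag = make_binding("10.1.2.0/24", 1037)
print(accept_binding(msg, tag))                             # True: untampered binding
print(accept_binding(msg.replace(b"1037", b"9999"), tag))   # False: modified label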

Network Security Mechanisms Utilizing Dynamic Network Address Translation LDRD Project

Jung, Carrie M.; Lee, Erik L.; Michalski, John T.

A new protocol technology is just starting to emerge from the laboratory environment. Its stated purpose is to provide an additional means by which networks, and the services that reside on them, can be protected from adversarial compromise. This report has a two-fold objective. The first is to provide the reader with an overview of this emerging Dynamic Defenses technology using Dynamic Network Address Translation (Dynat). This "structure overview," concentrated in the body of the report, describes the important attributes of the technology. The second objective is to provide a framework that can be used to help in the classification and assessment of the different types of dynamic defense technologies, along with their related capabilities and limitations. This information is primarily contained in the appendices.
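
A toy sketch of the dynamic-translation idea: the externally visible address for a host is derived from its real address, a shared secret, and the current time interval, so mappings rotate periodically and an observer's stale reconnaissance goes out of date. The derivation, rotation interval, and address range below are invented for the illustration.

# Toy dynamic network address translation: the externally visible address is
# derived from the real address, a shared secret, and the current time epoch.
import hashlib, time

SECRET = b"placeholder shared secret"
EPOCH_SECONDS = 60   # how often translations rotate

def translated_address(real_addr, now=None):
    epoch = int((now if now is not None else time.time()) // EPOCH_SECONDS)
    digest = hashlib.sha256(SECRET + real_addr.encode() + str(epoch).encode()).digest()
    # Map the hash into a 10.x.y.z address for the example.
    return f"10.{digest[0]}.{digest[1]}.{digest[2]}"

print(translated_address("192.168.1.20", now=0))      # epoch 0 mapping
print(translated_address("192.168.1.20", now=3600))   # later epoch: different address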

Final Report for the Quality of Service for Networks Laboratory Directed Research and Development Project

Eldridge, John M.; Tarman, Thomas D.; Brenkosh, Joseph P.; Dillinger, John D.; Michalski, John T.

The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (Telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will result in a degradation of the overall network. These access and distribution networks currently lack formal mechanisms to select Quality of Service (QoS) attributes for data transport. Network services with a requirement for expediency or consistent amounts of bandwidth cannot function properly in a communication environment without the implementation of a QoS structure. This report describes and implements such a structure, resulting in the ability to identify, prioritize, and police critical application flows.
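
As a concrete illustration of the 'police' step, a token-bucket policer is the classic mechanism for holding a flow to a contracted rate; the rate and burst values below are arbitrary examples and do not reproduce the QoS structure developed in the project.

# Simple token-bucket policer: packets conform while tokens are available;
# tokens refill at `rate` bytes per second up to `burst`. Values are examples.
class TokenBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0

    def conforms(self, packet_bytes, now):
        """Return True (forward) or False (drop or mark) for a packet of the given size."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True
        return False

policer = TokenBucket(rate=125000, burst=10000)   # 1 Mbit/s, 10 kB burst
for t in (0.0, 0.001, 0.002):
    print(policer.conforms(8000, t))   # first packet conforms; the burst is then exhausted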
