Today's networked systems rely on advanced security components such as next-generation firewalls (NGFW), intrusion detection systems (IDS), intrusion prevention systems (IPS), and network traffic classification methods. A fundamental requirement of these components and methods is network packet visibility through packet inspection. The computational mechanism used to achieve this visibility is Deep Packet Inspection (DPI), which looks deeper inside packets, beyond the IP address, port, and protocol fields. However, DPI is extremely expensive in processing cost and very challenging to implement on high-speed network systems. The fundamental scientific problem addressed in this research project is the application of greater packet visibility and inspection, at data rates above 40 Gbps, to secure computer network systems. This greater visibility and inspection enables detection of advanced content-based threats that exploit application vulnerabilities and are designed to bypass traditional security approaches such as firewalls and antivirus scanners. It is achieved through identification of the application protocol (e.g., HTTP, SMTP, Skype) and, in some cases, extraction and processing of the information contained in the packet payload. Analysis is then performed on the resulting DPI data to identify potentially malicious behavior. To obtain visibility into the application protocol and contents at high data rates, advanced DPI technologies and implementations are developed.
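As a toy illustration of the application-protocol identification described above (the signatures, labels, and function name are illustrative assumptions, not the project's implementation), a minimal signature-based DPI step matches known byte patterns at the start of a payload:

```python
# Hypothetical sketch of signature-based application-protocol
# identification, one simple form of DPI. Signatures are illustrative.
SIGNATURES = {
    b"GET ": "HTTP",
    b"POST ": "HTTP",
    b"EHLO": "SMTP",
    b"HELO": "SMTP",
}

def identify_protocol(payload: bytes) -> str:
    """Return a protocol label by matching known byte patterns at the
    start of the payload; 'UNKNOWN' when no signature matches."""
    for pattern, name in SIGNATURES.items():
        if payload.startswith(pattern):
            return name
    return "UNKNOWN"
```

Real DPI engines compile thousands of such patterns (often as regular expressions over reassembled streams) into automata so that classification keeps pace with line rate; this linear scan only conveys the idea.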
We demonstrate a new approach that yields exponential speedup of communication events in discrete event simulation (DES) of mobile wireless networks. With the explosive growth of wireless devices, including Internet of Things devices, it is important to be able to simulate wireless networks accurately at large scale. Unfortunately, current simulation techniques scale poorly and are inadequate for large-scale network simulation, especially at high aggregate transmission-event rates. This has limited many studies to much smaller network sizes than desired. We propose a method for attaining high-fidelity DES of mobile wireless networks that leverages (i) the path loss of transmissions and (ii) the relatively slow topology changes in a high-transmission-rate environment. Our approach averages a runtime of k(r)·O(log N) per event, where N is the number of simulated wireless nodes, r is a factor of the maximum transmission range, and k(r) is typically constant with k(r) ≪ N.
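The abstract's data structure is not reproduced here, but a minimal uniform-grid spatial index sketches how per-event work can be bounded by the k(r) nodes near a transmitter rather than all N nodes (the grid choice and all names are illustrative assumptions, not the paper's method):

```python
import math
from collections import defaultdict

class GridIndex:
    """Illustrative uniform-grid spatial index: a transmission event is
    delivered only to nodes in cells overlapping the transmission range,
    so per-event cost tracks k(r) rather than N."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(set)  # cell key -> nodes in that cell
        self.pos = {}                    # node -> (x, y)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, node, x, y):
        self.pos[node] = (x, y)
        self.buckets[self._key(x, y)].add(node)

    def in_range(self, x, y, r):
        """Nodes within Euclidean distance r of (x, y)."""
        span = int(math.ceil(r / self.cell))
        cx, cy = self._key(x, y)
        hits = []
        for dx in range(-span, span + 1):
            for dy in range(-span, span + 1):
                for node in self.buckets.get((cx + dx, cy + dy), ()):
                    nx, ny = self.pos[node]
                    if (nx - x) ** 2 + (ny - y) ** 2 <= r * r:
                        hits.append(node)
        return hits
```

With cell size on the order of the transmission range, a query touches a constant number of cells, so the cost per transmission event is dominated by the nodes actually in range.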
Wireless networking and mobile communications are increasing around the world and in all sectors of our lives. With increasing use, the density and complexity of these systems grow, with more base stations and advanced protocols to enable higher data throughput. The security of data transported over wireless networks must also evolve with the advancing technologies that enable more capable wireless networks. However, means for analyzing the effectiveness of security approaches and implementations used on wireless networks are lacking. In particular, a capability to analyze the lower-layer protocols (i.e., the link and physical layers) is a major challenge. An analysis approach that incorporates protocol implementations without the need for RF emissions is necessary. In this research paper, several emulation tools and custom extensions that enable an analysis platform to perform cyber security analysis of lower-layer wireless networks are presented. A use case of a published exploit in the 802.11 (i.e., WiFi) protocol family demonstrates the effectiveness of the described emulation platform.
The ability to simulate wireless networks at large scale for meaningful lengths of time is considerably lacking in today's network simulators. For this reason, much published work in this area limits simulation studies to fewer than 1,000 nodes and either over-simplifies channel characteristics or covers time scales much shorter than a day. In this report, we show that one can overcome these limitations and study problems of high practical consequence. This work presents two key contributions to high-fidelity simulation of large-scale wireless networks: (a) wireless simulations can be sped up by more than 100X in runtime using ideas from spatial indexing algorithms and clipping of negligible signals, and (b) clustering and a task-oriented programming paradigm can reduce inter-process communication in a parallel discrete event simulation, resulting in better scaling efficiency.
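One way to read the signal-clipping idea in contribution (a): a receiver whose predicted signal falls below a sensitivity floor can be skipped entirely. A back-of-envelope sketch using the standard free-space path-loss formula (the model, threshold, and function names are assumptions for illustration, not the report's implementation):

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 3.0e8  # speed of light, m/s
    return (20 * math.log10(distance_m) + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

def is_negligible(tx_dbm, distance_m, freq_hz, floor_dbm=-100.0):
    """Clip signals whose received power falls below a sensitivity floor,
    so the simulator skips event delivery to far-away receivers."""
    return tx_dbm - fspl_db(distance_m, freq_hz) < floor_dbm
```

For a 20 dBm transmitter at 2.4 GHz, a receiver 100 m away sees roughly -60 dBm (well above a -100 dBm floor), while one 100 km away falls below the floor and can be clipped, which is where the runtime savings come from.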
A cybersecurity training environment, or platform, provides an excellent foundation for a cyber protection team (CPT) to practice and enhance its cybersecurity skills, develop and learn new knowledge, and experience advanced and emergent cyber threat concepts in information security. The cyber training platform comprises components and usage methods similar to those of the system testbeds used for assessing system security posture and security devices. To elicit cyber behaviors similar to those in operational systems, a cyber training platform must incorporate operational realism for the system the cyber workforce is to protect. This realism is obtained by constructing training models that span a broad range of system- and device-level fidelity. For training purposes, however, the platform must go beyond computer network topology and computer host model fidelity: it must include realistic models of cyber intrusions and attacks to enable the realism necessary for training. In this position paper we discuss the benefits that such a cyber training platform provides, along with the challenges of creating, deploying, and maintaining the platform itself. With the current availability of networked information system emulation and virtualization technologies, coupled with the capability to federate with other system simulators and emulators, including those used for training, the creation of powerful cyber training platforms is possible.
Wireless systems and networks have experienced rapid growth over the last decade with the advent of smart devices for everyday use. These systems, which include smartphones, vehicular gadgets, and internet-of-things devices, are becoming ubiquitous and ever more important. They pose interesting research challenges for the design and analysis of new network protocols due to their large scale and complexity. In this work, we focus on the challenging aspect of simulating the inter-connectivity of many such devices in wireless networks. The quantitative study of large-scale wireless networks, with wireless device counts in the thousands, is a very difficult problem with no known acceptable solution. By necessity, simulations at this scale must approximate reality, but the algorithms employed in most modern network simulators can be improved for wireless network simulation. In this report, we present the advances we have made and propositions for continued progress toward a framework for high-fidelity simulation of wireless networks. This work is not complete, in that a final simulation framework tool has yet to be produced; however, we highlight the major bottlenecks and address them individually, with initial results showing promise.
Moving Target Defense (MTD) is based on the notion of controlling change across various system attributes with the objective of increasing uncertainty and complexity for attackers; the promise of MTD is that this increased uncertainty and complexity will raise the cost of attack efforts and thus prevent or limit network intrusions. As MTD increases the complexity of the system for the attacker, it also increases complexity and cost in the desired operation of the system. This introduced complexity may make network troubleshooting more difficult, cause network degradation or longer network outages, and, in the end, may not provide an adequate defense against an adversary. In this work, the authors continue MTD assessment and evaluation, this time focusing on application performance monitoring (APM) under the umbrella of defensive work factors, as well as the empirical assessment of a network-based MTD under Red Team (RT) attack. APM captures the impact of the MTD from the perspective of the user, whilst the RT element provides a means to test the defense under a series of attack steps based on the Lockheed Martin Cyber Kill Chain.
Moving Target Defense (MTD) has received significant focus in technical publications. These publications describe MTD approaches that periodically change some attribute of the computer network system. The attribute that is changed is, in most cases, one that an adversary attempts to learn through reconnaissance and may use to exploit the system. The fundamental mechanism an MTD uses to secure the system is to change system attributes such that the adversary never gains that knowledge and cannot execute an exploit before the attribute changes value. Thus, the MTD keeps the adversary from gaining the knowledge of attributes necessary to exploit the system. Most papers conduct theoretical analysis or basic simulations to assess the effectiveness of an MTD approach. More effective assessment of MTD approaches should include behavioral characteristics for both the defensive actor and the adversary; however, limited research exists on running actual attacks against an implemented system with the objective of determining the security benefits and total cost of deploying the MTD approach. This paper explores empirical assessment of MTD approaches through experimentation. The cyber kill chain is used to characterize the actions of the adversary and to identify which classes of attacks were successfully thwarted by the MTD approach and which were not. In this research paper, we identify the experiment environments and where experiment fidelity should be focused to evaluate the effectiveness of MTD approaches. Additionally, experimentation environments that support contemporary technologies used in MTD approaches, such as software-defined networking (SDN), are identified and discussed.
Moving Target Defense (MTD) is the concept of controlling change across multiple information system dimensions with the objective of increasing uncertainty and complexity for attackers. Increased uncertainty and complexity will increase the costs of malicious probing and attack efforts and thus prevent or limit network intrusion. As MTD increases the complexity of the system for the attacker, it also increases complexity in the desired operation of the system. This introduced complexity results in more difficult network troubleshooting and can cause network degradation or longer network outages. In this research paper, the authors describe the defensive work factor concept, which considers in detail the specific impact that an MTD approach has on computing and network resources. Measurement of impacts on system performance, along with identification of how network services (e.g., DHCP, DNS, in-place security mechanisms) are affected by the MTD approach, is presented. Also included is a case study of an MTD deployment and its defensive work factor costs: an actual experiment is constructed and metrics are described for the use case.
Packet-switched data communications networks that use distributed processing architectures have the potential to simplify the design and development of new, increasingly sophisticated satellite payloads. In addition, the use of reconfigurable logic may reduce the amount of redundant hardware required in space-based applications without sacrificing reliability. These concepts were studied using software modeling and simulation, and the results are presented in this report. Models of the commercially available SpaceWire packet-switched data interconnect protocol were developed and used to create simulations of data networks containing reconfigurable logic, with traffic flows for timing system distribution.
Cyber security analysis tools are necessary to evaluate the security, reliability, and resilience of networked information systems against cyber attack. It is common practice in modern cyber security analysis to separately utilize real systems (computers, routers, switches, firewalls), computer emulations (e.g., virtual machines), and simulation models to analyze the interplay between cyber threats and safeguards. In contrast, Sandia National Laboratories has developed new methods to combine these evaluation platforms into a cyber Live, Virtual, and Constructive (LVC) testbed. The combination of real, emulated, and simulated components enables the analysis of the security features and components of a networked information system. When performing cyber security analysis on a target system, it is critical to represent the subject security components realistically and in high fidelity. In some experiments, the security component may be the actual hardware and software, with all surrounding components represented in simulation or with surrogate devices. Sandia National Laboratories has developed a cyber LVC testbed that combines modeling and simulation capabilities with virtual machines and real devices to represent, in varying fidelity, secure networked information system architectures and devices. Using this capability, secure networked information system architectures can be represented in our testbed on a single computing platform. This provides an "experiment-in-a-box" capability. The result is rapidly produced, large-scale, relatively low-cost, multi-fidelity representations of networked information systems. These representations enable analysts to quickly investigate cyber threats and test protection approaches and configurations.
Cyber security analysis tools are necessary to evaluate the security, reliability, and resilience of networked information systems against cyber attack. It is common practice in modern cyber security analysis to separately utilize real systems (computers, routers, switches, firewalls), computer emulations (e.g., virtual machines), and simulation models to analyze the interplay between cyber threats and safeguards. In contrast, Sandia National Laboratories has developed novel methods to combine these evaluation platforms into a hybrid testbed of real, emulated, and simulated components. This combination enables the analysis of the security features and components of a networked information system. When performing cyber security analysis on a system of interest, it is critical to represent the subject security components realistically and in high fidelity. In some experiments, the security component may be the actual hardware and software, with all surrounding components represented in simulation or with surrogate devices. Sandia National Laboratories has developed a cyber testbed that combines modeling and simulation capabilities with virtual machines and real devices to represent, in varying fidelity, secure networked information system architectures and devices. Using this capability, secure networked information system architectures can be represented in our testbed on a single, unified computing platform. This provides an 'experiment-in-a-box' capability. The result is rapidly produced, large-scale, relatively low-cost, multi-fidelity representations of networked information systems. These representations enable analysts to quickly investigate cyber threats and test protection approaches and configurations.
This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses separately utilize real systems (computers, network routers, and other network equipment), computer emulations (e.g., virtual machines), and simulation models to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and behave, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is rapidly produced, large yet relatively low-cost, multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.
This document provides the status of the Virtual Control System Environment (VCSE) under development at Sandia National Laboratories. This development effort is funded by the Department of Energy's (DOE) National SCADA Test Bed (NSTB) Program. Specifically, the document presents a Modeling and Simulation (M&S) and software interface capability that supports the analysis of Process Control Systems (PCS) used in critical infrastructures. It describes the development activities performed through June 2006 and the current status of the VCSE development task. Initial activities performed by the development team included researching the needs of critical infrastructure systems that depend on PCS. A primary source describing the security needs of critical infrastructure is the Roadmap to Secure Control Systems in the Energy Sector. A literature search of PCS analysis tools was performed, and we identified a void in system-wide PCS M&S capability: no existing tools provide the capability to simulate control system devices together with the underlying supporting communication network. The design team identified the requirements for an analysis tool to fill this void. Since PCS comprise multiple subsystems, a modular analysis framework was selected for the VCSE. The need for a framework to support the interoperability of multiple simulators with a PCS device model library was identified. The framework supports emulation of a system that is represented by models in a simulation interacting with actual hardware via a System-in-the-Loop (SITL) interface. To identify specific features for the VCSE analysis tool, the design team created a questionnaire that briefly described the range of potential capabilities the tool could include and requested feedback from potential industry users.
This initial industry outreach was also intended to identify industry users willing to participate in a dialog throughout the development process, so that we maximize the usefulness of the VCSE to industry. Industry involvement will continue throughout the VCSE development process. The team's activities have focused on creating a modeling and simulation capability that supports the analysis of PCS. An M&S methodology that is modular in structure was selected. The framework supports a range of model fidelities depending on the analysis being performed. In some cases, high-fidelity network communication protocol and device models are necessary; these can be provided by including a high-fidelity communication network simulator such as OPNET Modeler. In other cases, lower-fidelity models suffice and the high-fidelity communication network simulator is not needed. In addition, the framework supports a range of control system device behavior models, from simple functional models to very detailed vendor-specific models. Included in the FY05 funding milestones was a demonstration of the framework. The development team created two scenarios that demonstrated the VCSE modular framework. The first demonstration provided a co-simulation using a high-fidelity communication network simulator interoperating with a custom-developed control system simulator and device library. The second scenario provided a system-in-the-loop demonstration that emulated a system with a virtual network segment interoperating with a real-device network segment.
This report describes an integrated approach for designing communication, sensing, and control systems for mobile distributed systems. Graph-theoretic methods are used to analyze the input/output reachability and the structural controllability and observability of a decentralized system. Embedded in each network node, this analysis will automatically reconfigure an ad hoc communication network for the sensing and control task at hand. The graph analysis can also be used to create optimal communication flow control based upon the spatial distribution of the network nodes. Edge coloring algorithms tell us that the minimum number of time slots in a planar network equals the maximum number of adjacent nodes (i.e., the maximum degree) of the undirected graph plus a small constant. Therefore, the more spread out the nodes are, the fewer time slots are needed for communication, and the smaller the latency between nodes. In a coupled system, this results in a more responsive sensor network and control system. Network protocols are developed to propagate this information, and distributed algorithms are developed to automatically adjust the number of time slots available for communication. These protocols and algorithms must be extremely efficient and updated only as network nodes move. In addition, queuing theory is used to analyze the delay characteristics of Carrier Sense Multiple Access (CSMA) networks. This report documents the analysis, simulation, and implementation of these algorithms performed under this Laboratory Directed Research and Development (LDRD) effort.
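The slot-assignment idea above can be sketched with a greedy edge coloring. Note the hedge: this simple greedy uses at most 2Δ−1 slots, while Vizing's theorem guarantees that Δ or Δ+1 suffice with a more careful algorithm; the sketch is illustrative, not the report's implementation:

```python
from collections import defaultdict

def assign_slots(edges):
    """Greedy edge coloring: each edge (communication link) receives the
    smallest slot not already used by another edge at either endpoint,
    so adjacent links never transmit in the same slot. This greedy rule
    uses at most 2*max_degree - 1 slots; Vizing's theorem shows that
    max_degree + 1 always suffice with a more careful algorithm."""
    used = defaultdict(set)  # node -> slots taken by its incident edges
    slots = {}
    for u, v in edges:
        s = 0
        while s in used[u] or s in used[v]:
            s += 1
        slots[(u, v)] = s
        used[u].add(s)
        used[v].add(s)
    return slots
```

Because the slot count is bounded by node degree rather than network size, spreading nodes out (lowering the maximum degree) directly reduces the TDMA frame length and hence the latency, as the abstract argues.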
Network-centric systems that depend on mobile wireless ad hoc networks for their information exchange require detailed analysis to support their development. In many cases, this critical analysis is best provided with high-fidelity system simulations that include the effects of network architectures and protocols. In this research, we developed a high-fidelity system simulation capability using an HLA federation. The HLA federation, consisting of the Umbra system simulator and OPNET Modeler network simulator, provides a means for the system simulator to both affect, and be affected by, events in the network simulator. Advances are also made in increasing the fidelity of the wireless communication channel and reducing simulation run-time with a dead reckoning capability. A simulation experiment is included to demonstrate the developed modeling and simulation capability.
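The dead-reckoning idea mentioned above can be sketched as follows: positions are extrapolated linearly between updates, and a state update crosses the federation boundary only when the prediction error exceeds a threshold (a hypothetical one-dimensional sketch, not the implementation used in the Umbra/OPNET federation):

```python
class DeadReckoner:
    """Illustrative dead-reckoning filter: a node's position is
    extrapolated linearly between updates, and a fresh state update is
    issued only when the true position drifts past a threshold, which
    cuts cross-simulator traffic in an HLA-style federation."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.pos = 0.0   # last published position
        self.vel = 0.0   # last published velocity
        self.t = 0.0     # time of last published state

    def predict(self, t):
        """Linear extrapolation from the last published state."""
        return self.pos + self.vel * (t - self.t)

    def maybe_update(self, t, true_pos, true_vel):
        """Publish (and resynchronize) only if the prediction error
        exceeds the threshold; otherwise suppress the update."""
        if abs(self.predict(t) - true_pos) > self.threshold:
            self.pos, self.vel, self.t = true_pos, true_vel, t
            return True
        return False
```

The run-time saving comes from the suppressed updates: federates exchange state only when extrapolation has drifted too far, instead of on every simulation tick.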
Mobile wireless ad hoc networks that are resistant to adversarial manipulation are necessary for distributed systems used in military and security applications. Critical to the successful operation of these networks, which operate in the presence of adversarial stressors, are robust and efficient information assurance methods. In this report we describe necessary enhancements for a distributed certificate authority (CA) used in secure wireless network architectures. The cryptographic algorithms required in distributed CAs are described, and implementation enhancements of these algorithms for mobile wireless ad hoc networks are developed. The enhancements support a network's ability to detect compromised nodes and facilitate distributed CA services. We provide insights into the impacts the enhancements will have on network performance, with timing diagrams and preliminary network simulation studies.
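Distributed CAs are commonly built on threshold cryptography. As an illustrative building block (not necessarily among the algorithms the report enhances), Shamir secret sharing lets any t of n nodes jointly reconstruct a key while fewer than t learn nothing about it:

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; all arithmetic is in GF(PRIME)

def split_secret(secret, threshold, shares):
    """Shamir secret sharing sketch: the secret is the constant term of
    a random degree-(threshold-1) polynomial; each node gets one point."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover_secret(points):
    """Lagrange interpolation at x = 0 over the prime field recovers the
    constant term (the secret) from any `threshold` distinct points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat).
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total
```

In a distributed-CA setting the shares would be of a signing key, and nodes would produce partial signatures rather than pooling shares directly; this sketch only shows the threshold property itself.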
Distributed denial of service (DoS) attacks on cyber-resources are complex problems that are difficult to completely define, characterize, and mitigate. We recognize the process-nature of DoS attacks and view them from multiple perspectives. Identification of opportunities for mitigation and further research may result from this attempt to characterize the DoS problem space. We examine DoS attacks from the point of view of (1) a high-level that establishes common terminology and a framework for discussing the DoS process, (2) layers of the communication stack, from attack origination to the victim of the attack, (3) specific network and computer elements, and (4) attack manifestations. We also examine DoS issues associated with wireless communications. Using this collection of views, one begins to see the DoS problem in a holistic way that may lead to improved understanding, new mitigation strategies, and fruitful research.
In high consequence systems, all layers of the protocol stack need security features. If network and data-link layer control messages are not secured, a network may be open to adversarial manipulation. The open nature of the wireless channel makes mobile ad hoc networks (MANETs) especially vulnerable to control plane manipulation. The objective of this research is to investigate MANET performance issues when cryptographic processing delays are applied at the data-link layer. The results of analysis are combined with modeling and simulation experiments to show that network performance in MANETs is highly sensitive to the cryptographic overhead.
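A back-of-envelope model (an assumption for illustration, not the paper's analysis) suggests why per-frame cryptographic delay can dominate: a fixed delay stretches each frame's service time, capping achievable throughput regardless of link rate:

```python
def effective_throughput_bps(payload_bits, link_rate_bps, crypto_delay_s):
    """Toy model: each data-link frame incurs a fixed cryptographic
    processing delay in addition to its transmission time, so the
    achievable throughput is payload / (tx_time + crypto_delay)."""
    tx_time = payload_bits / link_rate_bps
    return payload_bits / (tx_time + crypto_delay_s)
```

For a 12,000-bit frame on an 11 Mbps link, a 1 ms per-frame cryptographic delay roughly halves the effective throughput under this model, consistent with the abstract's claim that MANET performance is highly sensitive to cryptographic overhead.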
In this paper, we discuss several specific threats directed at the routing data of an ad hoc network. We address security issues that arise from wrapping authentication mechanisms around ad hoc routing data. We show that this bolt-on approach to security may make certain attacks more difficult, but still leaves the network routing data vulnerable. We also show that under a certain adversarial model, most existing routing protocols cannot be secured with the aid of digital signatures.