This report describes recent progress in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information systems' security, reliability, and resilience against cyber attack. Today's security analyses use real systems (computers, routers, and other network equipment), computer emulations (e.g., virtual machines), and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches into integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid networks that pass real traffic and behave, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost, multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.
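As a rough illustration of the configuration problem such environments address, the following Python sketch describes a small hybrid topology in which nodes are tagged by fidelity level and any cross-fidelity link must bridge between simulated events and live traffic. All names and the API shape are hypothetical illustrations, not the actual SEPIA tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Fidelity(Enum):
    SIMULATED = "simulated"   # discrete-event model: cheapest, most scalable
    EMULATED = "emulated"     # virtual machine running a real network stack
    PHYSICAL = "physical"     # real computer or router on the testbed

@dataclass
class Node:
    name: str
    fidelity: Fidelity

@dataclass
class Link:
    a: Node
    b: Node
    bandwidth_mbps: int

def needs_traffic_bridge(link: Link) -> bool:
    """A link whose endpoints differ in fidelity must pass packets through
    a gateway that converts between simulated events and live traffic."""
    return link.a.fidelity != link.b.fidelity

# Key nodes get high-fidelity representation; the bulk of the network is
# simulated for scale and cost.
server = Node("target-server", Fidelity.PHYSICAL)
edge = Node("edge-router", Fidelity.EMULATED)
cloud = Node("background-net", Fidelity.SIMULATED)

for link in (Link(server, edge, 1000), Link(edge, cloud, 1000)):
    print(link.a.name, "<->", link.b.name, "bridge:", needs_traffic_bridge(link))
```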
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wavelength Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have driven communication speeds to 10 Gigabit Ethernet and beyond. Naturally, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. To realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines two classes of all-optical logic (SEED and gain competition) and how each discrete logic element can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of the SEED and gain-competition devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in the context of a logic element, as opposed to device-level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid the overall functional design. Each black-box model of a SEED or gain-competition device takes certain parameters (reflectance, intensity, input response) and models the optical ripple and time-delay characteristics. These black-box models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. We found that a low-gate-count, cascadable encryption algorithm is most feasible given device and processing constraints. The modeling and simulation of optical designs using these components is proceeding in parallel with efforts to perfect the physical devices and their interconnect. We have applied these techniques to the development of a 'toy' algorithm that may pave the way for more robust optical algorithms. These design/modeling/simulation techniques are now ready to be applied to larger optical designs in advance of our ability to implement such systems in hardware.
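The flavor of these black-box models can be suggested with a short sketch. Here, as in the PSpice models, light intensity is carried as a scalar quantity (standing in for voltage), and a gate maps input intensities to an output intensity with a logic threshold, a residual 'ripple' level, and a propagation delay. This is an illustrative Python analogue with placeholder parameter values, not the project's PSpice models.

```python
from dataclasses import dataclass

@dataclass
class OpticalGate:
    threshold: float   # input intensity above which a signal reads as logic 1
    high_out: float    # output intensity for logic 1
    low_out: float     # residual "ripple" intensity for logic 0
    delay_ps: float    # propagation delay through the device

    def nand(self, i1: float, i2: float, t_ps: float) -> tuple[float, float]:
        """Map two input intensities to (output intensity, time valid)."""
        logic_high = not (i1 > self.threshold and i2 > self.threshold)
        out = self.high_out if logic_high else self.low_out
        return out, t_ps + self.delay_ps

# Cascading: one gate's output intensity feeds the next, so the logic-0
# ripple must stay safely below the next gate's threshold, and delays
# accumulate along the chain.
g = OpticalGate(threshold=0.5, high_out=1.0, low_out=0.1, delay_ps=20.0)
out1, t1 = g.nand(1.0, 1.0, 0.0)    # both inputs high -> logic 0 plus ripple
out2, t2 = g.nand(out1, 1.0, t1)    # next stage sees the ripple as logic 0
print(out1, t1, out2, t2)           # 0.1 20.0 1.0 40.0
```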
This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, "Principles of Faithful Execution in the Implementation of Trusted Objects" (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. We refer to the extended class loader and JVM collectively as the Sandia Faithfully Executing Java architecture (JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques that we intend to implement in hardware.
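The core idea behind a cryptographic class loader can be sketched as follows: each class file is stored with an authenticator, and the loader verifies (and, in the full scheme, decrypts) it before handing it to the execution engine. This Python sketch illustrates only the integrity check with a MAC; it is a hypothetical simplification, not the JavaFE implementation.

```python
import hashlib
import hmac

KEY = b"per-program secret key"     # placeholder; a real scheme needs key management

def protect(bytecode: bytes) -> bytes:
    """Developer side: append a MAC so any change to the bits is detectable.
    (The full scheme would also encrypt; omitted for brevity.)"""
    return bytecode + hmac.new(KEY, bytecode, hashlib.sha256).digest()

def faithful_load(blob: bytes) -> bytes:
    """Loader side: refuse to pass unverified bytecode to the execution engine."""
    bytecode, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, bytecode, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: refusing to load class")
    return bytecode

blob = protect(b"\xca\xfe\xba\xbe...")   # a (truncated) class file image
assert faithful_load(blob) == b"\xca\xfe\xba\xbe..."
try:
    faithful_load(b"\x00" + blob[1:])    # tampered bytecode is rejected
except ValueError as e:
    print(e)
```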
We begin with the following definitions.

Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy.

Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions) and sequence integrity (detection of changes in the locations of instructions within a program).

Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.)

FE is a necessary quality for computing: without it we cannot trust computations. In the early days of computing, FE came for free, since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a "closed shop": all of the software that was used there was developed there. When an organization bought a large computer from a vendor, the organization would run its own operating system on that computer and use only its own editors, its own compilers, its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
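The distinction between instruction integrity and sequence integrity can be made concrete with a small sketch: if each block's authenticator is computed over both its contents and its location, then modifying a block and relocating an unmodified block are both detectable. This is a hypothetical illustration of the two properties, not the scheme of the companion report.

```python
import hashlib
import hmac

KEY = b"trusted-volume key"          # placeholder key

def tag_block(addr: int, block: bytes) -> bytes:
    """Bind an instruction block's bits AND its location into one tag."""
    return hmac.new(KEY, addr.to_bytes(8, "big") + block, hashlib.sha256).digest()

program = {0x1000: b"push; call", 0x1010: b"ret"}
tags = {addr: tag_block(addr, block) for addr, block in program.items()}

# Instruction integrity: changing bits within a block is detected.
assert tag_block(0x1000, b"push; jmp") != tags[0x1000]
# Sequence integrity: relocating an unmodified block is also detected,
# because its tag was computed over the original address.
assert tag_block(0x1010, program[0x1000]) != tags[0x1000]
print("both classes of tampering are detectable")
```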
As rapid Internet growth continues, global communications become increasingly dependent on Internet availability for information transfer. Recently, the Internet Engineering Task Force (IETF) introduced a new protocol, Multiprotocol Label Switching (MPLS), to provide high-performance data flows within the Internet. MPLS emulates two major aspects of Asynchronous Transfer Mode (ATM) technology. First, each initial IP packet is 'routed' to its destination based on previously known delay and congestion-avoidance mechanisms. This allows for effective distribution of network resources and reduces the probability of congestion. Second, after route selection, each subsequent packet is assigned a label at each hop that determines the output port for the packet to reach its final destination. These labels guide the forwarding of each packet at routing nodes more efficiently, and with more control, than traditional IP forwarding (which is based on complete address information in each packet), enabling high-performance data flows. Label assignment is critical to the prompt and accurate delivery of user data. However, the protocols for label distribution were not adequately secured. Thus, if an adversary compromises a node by intercepting and modifying labels, or more simply by injecting false labels into the packet-forwarding engine, the propagation of improperly labeled data flows could create instability in the entire network. In addition, some Virtual Private Network (VPN) solutions take advantage of this 'virtual channel' configuration to eliminate the need for user data encryption to provide privacy. VPNs relying on MPLS require accurate label assignment to maintain user data protection. This research developed a working distributed trust model that demonstrated how to deploy confidentiality, authentication, and non-repudiation in the global network label-switching control plane. Simulation models and laboratory testbed implementations that demonstrated this concept were developed, and results from this research were transferred to industry via standards in the Optical Internetworking Forum (OIF).
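The following sketch illustrates the kind of protection at issue: a label binding is accepted into the forwarding engine only if its cryptographic authenticator verifies, so injected or modified labels are rejected. For brevity it uses a shared-key MAC, which provides authentication but not the non-repudiation discussed above (that would require digital signatures); the message fields are illustrative placeholders.

```python
import hashlib
import hmac
import json

TRUST_KEY = b"pairwise key between adjacent label-switching routers"  # placeholder

def bind_label(fec: str, label: int, hop: str) -> bytes:
    """Originate an authenticated label binding for a forwarding class."""
    msg = json.dumps({"fec": fec, "label": label, "hop": hop}).encode()
    return msg + hmac.new(TRUST_KEY, msg, hashlib.sha256).digest()

def accept_label(wire: bytes) -> dict:
    """Install a binding only if its authenticator verifies."""
    msg, tag = wire[:-32], wire[-32:]
    if not hmac.compare_digest(tag, hmac.new(TRUST_KEY, msg, hashlib.sha256).digest()):
        raise ValueError("label binding rejected: bad authenticator")
    return json.loads(msg)

wire = bind_label("10.0.0.0/8", label=42, hop="lsr-3")
print(accept_label(wire))                      # binding installed
try:
    accept_label(wire.replace(b"42", b"99"))   # adversarial label modification
except ValueError as e:
    print(e)
```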
The recent unprecedented growth of global network (Internet) usage has created an ever-increasing amount of congestion. Telecommunication companies (telcos) and Internet Service Providers (ISPs), which provide access and distribution through the network, are increasingly aware of the need to manage this growth. Congestion, if left unmanaged, will degrade the overall network. These access and distribution networks currently lack formal mechanisms for selecting Quality of Service (QoS) attributes for data transport. Network services that require expediency or consistent amounts of bandwidth cannot function properly in a communication environment without a QoS structure. This report describes and implements such a structure, which provides the ability to identify, prioritize, and police critical application flows.
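As an illustration of the 'police' step, the classic mechanism is a token bucket: a flow that has been identified and assigned a rate profile may send conforming packets at priority, while out-of-profile packets are dropped or demoted. The sketch below is a generic illustration with placeholder rates, not the report's implementation.

```python
class TokenBucket:
    """Police one identified flow against a committed rate and burst size."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes     # largest burst the flow may send
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, now: float, packet_bytes: int) -> bool:
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                 # in profile: forward at priority
        return False                    # out of profile: drop or demote

policer = TokenBucket(rate_bps=1_000_000, burst_bytes=1500)
for t in (0.000, 0.001, 0.002):        # three back-to-back 1500-byte packets
    print(t, policer.conforms(t, 1500))   # True, then False, False
```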
The next major performance plateau for high-speed, long-haul networks is at 10 Gbps. Data visualization, high-performance network storage, and Massively Parallel Processing (MPP) demand these (and higher) communication rates. MPP-to-MPP distributed processing applications and MPP-to-network-file-store applications already require single-conversation communication rates in the range of 10 to 100 Gbps. MPP-to-visualization-station applications can already utilize communication rates in the 1 to 10 Gbps range. This LDRD project examined some of the building blocks necessary for developing a 10 to 100 Gbps computer network architecture. These included technology areas such as OS bypass, Dense Wavelength Division Multiplexing (DWDM), IP switching and routing, optical amplifiers, inverse multiplexing of ATM, data encryption, and data compression; standards-body activities in the ATM Forum and the Optical Internetworking Forum (OIF); and proof-of-principle laboratory prototypes. This work has not only advanced the body of knowledge in the aforementioned areas, but has generally facilitated the rapid maturation of high-speed networking and communication technology by (1) participating in the development of pertinent standards and (2) promoting informal (and formal) collaboration with industrial developers of high-speed communication equipment.
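Of the building blocks listed, inverse multiplexing lends itself to a compact sketch: a single high-rate conversation is striped round-robin across several lower-rate channels and reassembled in order at the far end using sequence numbers. The Python below is a simplified illustration of the principle, not the ATM Forum's IMA protocol.

```python
from itertools import cycle

def stripe(cells: list[bytes], n_links: int) -> list[list[tuple[int, bytes]]]:
    """Distribute sequence-numbered cells round-robin over n_links."""
    links: list[list[tuple[int, bytes]]] = [[] for _ in range(n_links)]
    next_link = cycle(range(n_links))
    for seq, cell in enumerate(cells):
        links[next(next_link)].append((seq, cell))
    return links

def reassemble(links: list[list[tuple[int, bytes]]]) -> list[bytes]:
    """Merge the per-link queues back into sequence order."""
    merged = sorted((item for link in links for item in link), key=lambda x: x[0])
    return [cell for _, cell in merged]

cells = [f"cell-{i}".encode() for i in range(8)]
links = stripe(cells, n_links=3)       # e.g., three slower links carrying one flow
assert reassemble(links) == cells      # the conversation survives the striping
print(links)
```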
This document highlights the DisCom² (Distance Computing and Communication) team's activities at the 1999 Supercomputing conference (SC99) in Portland, Oregon. This conference is sponsored by the IEEE and ACM. Sandia, Lawrence Livermore, and Los Alamos National Laboratories have participated in the conference for eleven years. For the last four years the three laboratories have come together at the conference under the rubric of DOE's Accelerated Strategic Computing Initiative (ASCI). Communication support for the ASCI exhibit is provided by the ASCI DisCom² project. The DisCom² communication team uses this forum to demonstrate and focus communication and networking developments within the community. At SC99, DisCom² built a prototype of the next-generation ASCI network, demonstrated remote clustering techniques, demonstrated the capabilities of emerging terabit router products, demonstrated the latest technologies for delivering visualization data to scientific users, and demonstrated the latest encryption methods, including IP VPN technologies and ATM encryption research. The authors also coordinated the other production networking activities within the booth and between their demonstration partners on the exhibit floor. This paper documents those accomplishments, discusses the details of their implementation, and describes how these demonstrations support Sandia's overall strategies in ASCI networking.
A number of substantive modifications were made from Version 1.0 to Version 1.1 of the ATM Security Specification. To assist implementers in identifying these modifications, the authors propose to include a foreword to the Security 1.1 specification that lists them. Typically, a revised specification provides some mechanism for implementers to determine the modifications made from previous versions. Since the Security 1.1 specification does not include change bars or other mechanisms that specifically direct the reader to these modifications, the authors propose to include a modification table in a foreword to the document. This modification table should also be updated to include substantive modifications made at the San Francisco meeting.
This contribution provides Sandia's straw-ballot comments on the Security Version 1.1 specification, STR-SEC-02.01. Two major comments are addressed here: one pertains to potential problems with the use of the Security Association Section digital signature, and the other to potential inconsistencies in the allocation of relative identifiers in the initiating security agent.
As described in contribution AF99-0335, it is desirable to allow new security services and mechanisms to be negotiated while a connection is in progress. To do so, new "negotiation OAM cells" dedicated to security should be defined, along with acknowledgment cells that allow the negotiation OAM cells to be exchanged reliably. Remarks given at the New Orleans meeting regarding those cell formats are taken into account. This contribution presents baseline text describing the format of the negotiation and acknowledgment cells and the use of those cells. All modifications to the specification are tracked with Word's revision tools and are therefore reversible.
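To suggest what such a cell format involves, the sketch below packs a hypothetical negotiation payload into the standard 48-byte OAM cell payload (a 4-bit OAM type and 4-bit function type, a function-specific field, and a trailing CRC-10, with unused octets padded with 0x6A). The type code points and the session/sequence/proposal fields are invented placeholders, not the formats proposed in this contribution.

```python
import struct

OAM_TYPE_SECURITY = 0x8    # placeholder code point for a security OAM type
FUNC_NEGOTIATE = 0x1       # placeholder function type: negotiation cell
FUNC_ACK = 0x2             # placeholder function type: acknowledgment cell

def crc10(data: bytes) -> int:
    """Bitwise CRC-10, generator polynomial x^10 + x^9 + x^5 + x^4 + x + 1."""
    reg = 0
    for byte in data:
        reg ^= byte << 2               # align the byte with the 10-bit register
        for _ in range(8):
            reg <<= 1
            if reg & 0x400:
                reg ^= 0x633
    return reg & 0x3FF

def negotiation_payload(session: int, seq: int, proposal: bytes) -> bytes:
    """48-byte OAM payload: type/function nibbles, a session id, a sequence
    number for the reliable exchange, the padded proposal, and a CRC-10."""
    assert len(proposal) <= 41
    body = struct.pack(">BHH", (OAM_TYPE_SECURITY << 4) | FUNC_NEGOTIATE,
                       session, seq) + proposal.ljust(41, b"\x6a")
    return body + struct.pack(">H", crc10(body))   # CRC in the low 10 bits

def ack_payload(session: int, seq: int) -> bytes:
    """Acknowledgment confirming receipt of negotiation cell `seq`."""
    body = struct.pack(">BHH", (OAM_TYPE_SECURITY << 4) | FUNC_ACK,
                       session, seq).ljust(46, b"\x6a")
    return body + struct.pack(">H", crc10(body))

cell = negotiation_payload(session=7, seq=1, proposal=b"rekey: session key 2")
assert len(cell) == 48 and len(ack_payload(7, 1)) == 48
print(cell.hex())
```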