Publications

Results 1–25 of 28

An Analysis of Department of Defense Instruction 8500.2 'Information Assurance (IA) Implementation.'

Campbell, Philip L.

The Department of Defense (DoD) provides its standard for information assurance in its Instruction 8500.2, dated February 6, 2003. This Instruction lists 157 'IA Controls' for nine 'baseline IA levels.' Aside from distinguishing IA Controls that call for elevated levels of 'robustness' and grouping the IA Controls into eight 'subject areas,' 8500.2 does not examine the nature of this set of controls: it does not determine, for example, which controls do not vary in robustness, how this set of controls compares with other such sets, or even which controls are required for all nine baseline IA levels. This report analyzes (1) the IA Controls, (2) the subject areas, and (3) the baseline IA levels. For example, this report notes that there are only 109 core IA Controls (which this report refers to as 'ICGs'), that 43 of these core IA Controls apply without variation to all nine baseline IA levels, and that an additional 31 apply with variations. This report maps the IA Controls of 8500.2 to the controls in NIST 800-53 and ITGI's CoBIT. The result of this analysis and mapping serves as a companion to 8500.2. (An electronic spreadsheet accompanies this report.)
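
To make the counting concrete, the following minimal Python sketch shows the kind of bookkeeping such an analysis involves: tag each IA Control with the baseline IA levels it applies to and whether its text varies with robustness, then select the controls that apply, unchanged, to every level. The control identifiers, level names, and applicability data below are placeholders for illustration, not entries from 8500.2 or the report's spreadsheet.

```python
# Placeholder model of IA Controls versus baseline IA levels.
# The nine baseline IA levels are modeled here as 3 MAC x 3 confidentiality
# combinations; the control IDs and their applicability are invented.

BASELINE_LEVELS = [
    f"MAC-{mac}-{conf}"
    for mac in (1, 2, 3)
    for conf in ("Classified", "Sensitive", "Public")
]

# control id -> (levels it appears in, True if its wording varies by robustness)
controls = {
    "CTRL-001": (set(BASELINE_LEVELS), False),                     # everywhere, unchanged
    "CTRL-002": (set(BASELINE_LEVELS), True),                      # everywhere, but varies
    "CTRL-003": ({"MAC-1-Classified", "MAC-1-Sensitive"}, False),  # subset only
}

def core_controls_everywhere(controls, levels):
    """Controls that apply, without variation, to every baseline IA level."""
    return [cid for cid, (lvls, varies) in controls.items()
            if not varies and lvls >= set(levels)]

if __name__ == "__main__":
    print(core_controls_everywhere(controls, BASELINE_LEVELS))  # -> ['CTRL-001']
```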

Peirce, pragmatism, and the right way of thinking

Campbell, Philip L.

This report is a summary of and commentary on (a) the seven lectures that C. S. Peirce presented in 1903 on pragmatism and (b) a commentary by P. A. Turrisi, both of which are included in Pragmatism as a Principle and Method of Right Thinking: The 1903 Harvard Lectures on Pragmatism, edited by Turrisi [13]. Peirce is known as the founder of the philosophy of pragmatism, and these lectures, given near the end of his life, represent his mature thoughts on the philosophy. Peirce's decomposition of thinking into abduction, deduction, and induction is among the important points in the lectures.

Stephen Jay Kline on systems, or physics, complex systems, and the gap between

Campbell, Philip L.

At the end of his life, Stephen Jay Kline, longtime professor of mechanical engineering at Stanford University, completed a book on how to address complex systems. The title of the book is 'Conceptual Foundations of Multi-Disciplinary Thinking' (1995), but its topic is systems. Kline first establishes certain limits that are characteristic of our conscious minds. He then establishes a complexity measure for systems and uses that measure to develop a hierarchy of systems. He then argues that our minds, because of their characteristic limitations, are unable to model the complex systems in that hierarchy. Computers are of no help to us here. Our attempts at modeling these complex systems are based on the way we successfully model some simple systems, in particular 'inert, naturally-occurring' objects and processes such as those that are the focus of physics. But complex systems overwhelm such attempts. As a result, the best we can do in working with these complex systems is to use a heuristic, what Kline calls the 'Guideline for Complex Systems.' Kline documents the problems that have developed from 'oversimple' system models and from the inappropriate application of a system model from one domain to another. One prominent such problem is the Procrustean attempt to make the disciplines that deal with complex systems 'physics-like.' By Kline's complexity measure, physics deals with simple systems, not complex ones, and the models that physics has developed are inappropriate for complex systems. Kline documents a number of the wasteful and dangerous fallacies of this type.

Final report and documentation for the security enabled programmable switch for protection of distributed internetworked computers LDRD

Vanrandwyk, Jamie V.; Toole, Timothy J.; Durgin, Nancy A.; Pierson, Lyndon G.; Kucera, Brent D.; Robertson, Perry J.; Campbell, Philip L.

An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical to place security and monitoring software on all computing devices (e.g., printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast-moving worms. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device as he or she could if these functions were implemented on the end host. The SPS concept also provides protection to simple devices such as printers, scanners, and legacy hardware. This report also describes the development and internal architecture of the cryptographically protected processor in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way.
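
As an illustration of the kind of cryptographic protection described above for remote updates, here is a minimal Python sketch (using the third-party cryptography package) in which a new filtering image is delivered encrypted and authenticated together with a version number, so an adversary on the network can neither read it, tamper with it, nor roll the switch back to an older image. The message format, function names, and key handling are assumptions for illustration, not the SPS design.

```python
# Hypothetical packaging/verification of an SPS update image.
# Requires the third-party 'cryptography' package; key provisioning is assumed.

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag
import os, struct

class UpdateRejected(Exception):
    pass

def package_update(key: bytes, version: int, image: bytes) -> bytes:
    """Management-station side: encrypt and authenticate a new filtering image."""
    nonce = os.urandom(12)
    header = struct.pack(">I", version)          # version bound to the ciphertext as AAD
    ct = AESGCM(key).encrypt(nonce, image, header)
    return header + nonce + ct

def apply_update(key: bytes, blob: bytes, current_version: int) -> tuple[int, bytes]:
    """Switch side: refuse anything unauthenticated, stale, or replayed."""
    header, nonce, ct = blob[:4], blob[4:16], blob[16:]
    (version,) = struct.unpack(">I", header)
    if version <= current_version:
        raise UpdateRejected("stale or replayed update")
    try:
        image = AESGCM(key).decrypt(nonce, ct, header)
    except InvalidTag:
        raise UpdateRejected("authentication failure")
    return version, image

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)
    blob = package_update(key, version=2, image=b"new filter rules")
    print(apply_update(key, blob, current_version=1))
```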

The evolving story of information assurance at the DoD

Campbell, Philip L.

This document is a review of five documents on information assurance from the Department of Defense (DoD), namely 5200.40, 8510.1-M, 8500.1, 8500.2, and an 'interim' document on DIACAP [9]. The five documents divide into three sets: (1) 5200.40 & 8510.1-M, (2) 8500.1 & 8500.2, and (3) the interim DIACAP document. The first two sets describe the certification and accreditation process known as 'DITSCAP'; the last two sets describe the certification and accreditation process known as 'DIACAP' (the second set applies to both processes). Each set of documents describes (1) a process, (2) a systems classification, and (3) a measurement standard. Appendices in this report (a) list the Phases, Activities, and Tasks of DITSCAP, (b) note the discrepancies between 5200.40 and 8510.1-M concerning DITSCAP Tasks and the System Security Authorization Agreement (SSAA), (c) analyze the DIACAP constraints on role fusion and on reporting, (d) map terms shared across the documents, and (e) review three additional documents on information assurance, namely DCID 6/3, NIST 800-37, and COBIT®.

A Cobit primer

Campbell, Philip L.

COBIT is a set of documents that provides guidance for computer security. This report introduces COBIT by answering the following questions, after first defining acronyms and presenting definitions: 1. Why is COBIT valuable? 2. What is COBIT? 3. What documents are related to COBIT? (The answer to the last question constitutes the bulk of this report.) This report also provides a more detailed review of three documents. The first two documents--COBIT Security Baseline™ and COBIT Quickstart™--are initial documents, designed to get people started. The third document--Control Practices--is a 'final' document, so to speak, designed to take people all the way down into the details. Control Practices is the detail.

Securing mobile code

Beaver, Cheryl L.; Neumann, William D.; Link, Hamilton E.; Schroeppel, Richard C.; Campbell, Philip L.; Pierson, Lyndon G.; Anderson, William E.

If software is designed so that it can issue functions that move it from one computing platform to another, then that software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinion regarding how to secure mobile code: those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques, including Java, D'Agents, and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates in decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation that render an entire program, or a data segment on which a program depends, incomprehensible. The hope is to prevent, or at least slow down, reverse engineering efforts and to prevent goal-oriented attacks on the software and its execution. The field of obfuscation is still in a state of development, with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in-depth analysis of a technique called 'white-boxing'. We put forth some new attacks on and improvements to this method and demonstrate its implementation for various algorithms. We also examine cryptographic techniques to achieve obfuscation, including encrypted functions, and offer a new application to digital signature algorithms. To better understand the lack of security proofs for obfuscation techniques, we examine in detail general theoretical models of obfuscation. We explain the need for formal models in order to obtain provable security and the progress made in this direction thus far. Finally we tackle the problem of verifying remote execution. We introduce some methods of verifying remote exponentiation computations and offer some insight into generic computation checking.
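
As a toy illustration of the white-boxing idea mentioned above, the sketch below fuses a secret key byte into a lookup table at build time, so the program that ships never contains the key as a distinct value. This is only a sketch of the general data-encoding idea, not a secure white-box construction and not the specific technique analyzed in the report.

```python
# Toy white-boxing sketch: instead of storing key k and computing SBOX[x ^ k]
# at run time, ship a single fused table T with T[x] = SBOX[x ^ k].
# Real designs additionally encode table inputs/outputs and compose many tables.

SBOX = [(7 * x + 3) % 256 for x in range(256)]  # stand-in for a fixed public S-box (a bijection)

def build_fused_table(key_byte: int) -> list[int]:
    """Build-time step: precompute T[x] = SBOX[x XOR key_byte]; only T is shipped."""
    return [SBOX[x ^ key_byte] for x in range(256)]

def keyed_lookup_reference(x: int, key_byte: int) -> int:
    """The computation the developer wants to hide (key present in the clear)."""
    return SBOX[x ^ key_byte]

def keyed_lookup_whitebox(x: int, table: list[int]) -> int:
    """The obfuscated form: no key material present at run time."""
    return table[x]

if __name__ == "__main__":
    k = 0x5A                                   # secret fixed at build time
    T = build_fused_table(k)
    assert all(keyed_lookup_whitebox(x, T) == keyed_lookup_reference(x, k)
               for x in range(256))
    print("fused table reproduces the keyed lookup without exposing the key")
```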

A classification scheme for risk assessment methods

Campbell, Philip L.; Stamp, Jason E.

This report presents a classification scheme for risk assessment methods. This scheme, like all classification schemes, provides meaning by imposing a structure that identifies relationships. Our scheme is based on two orthogonal aspects: level of detail and approach. The resulting structure is shown in Table 1 and is explained in the body of the report. This report imposes structure on the set of risk assessment methods in order to reveal their relationships and thus optimize their usage. We present a two-dimensional structure in the form of a matrix, using three abstraction levels for the rows and three approaches for the columns. For each of the nine cells in the matrix we identify the method type by name and example. The matrix helps the user understand (1) what to expect from a given method, (2) how it relates to other methods, and (3) how best to use it. Each cell in the matrix represents a different arrangement of strengths and weaknesses. Those arrangements shift gradually as one moves through the matrix, each cell being optimal for a particular situation. The intention of this report is to enable informed use of the methods so that the method chosen is optimal for the situation given. The matrix, with type names in the cells, is introduced in Table 2 on page 13 below. Unless otherwise stated we use the word 'method' in this report to refer to a 'risk assessment method,' though often we use the full phrase. The terms 'risk assessment' and 'risk management' are close enough that we do not attempt to distinguish them in this report. The remainder of this report is organized as follows. In Section 2 we provide context for this report: what a 'method' is and where it fits. In Section 3 we present background for our classification scheme: what other schemes we have found, and the fundamental nature of methods and their necessary incompleteness. In Section 4 we present our classification scheme in the form of a matrix, then we present an analogy that should provide an understanding of the scheme, concluding with an explanation of the two dimensions and the nine types in our scheme. In Section 5 we present examples of each of our classification types. In Section 6 we present conclusions.
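
As a small illustration of how such a matrix might be used as a lookup aid, the Python sketch below indexes cells by (abstraction level, approach) and records, for each cell, a method type, an example, and the cell's strengths and weaknesses. All labels and entries are placeholders; the report's actual labels, type names, and examples appear in its Table 2.

```python
# Placeholder representation of a 3x3 classification matrix for risk
# assessment methods; none of the labels below come from the report.

from typing import NamedTuple

class Cell(NamedTuple):
    type_name: str      # name of the method type in this cell
    example: str        # an example method of that type
    strengths: str
    weaknesses: str

# (abstraction level, approach) -> cell contents (all placeholder values)
MATRIX = {
    ("high", "approach-A"): Cell("Type 1", "Example method 1",
                                 "quick, broad coverage", "little detail"),
    ("low",  "approach-C"): Cell("Type 9", "Example method 9",
                                 "fine-grained detail", "costly to apply"),
    # remaining seven cells omitted in this sketch
}

def recommend(level: str, approach: str) -> Cell:
    """Pick the method type whose cell matches the situation at hand."""
    return MATRIX[(level, approach)]

if __name__ == "__main__":
    cell = recommend("high", "approach-A")
    print(f"{cell.type_name} (e.g., {cell.example}): "
          f"strengths: {cell.strengths}; weaknesses: {cell.weaknesses}")
```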

Measures of effectiveness: an annotated bibliography

Campbell, Philip L.

The purpose of this report is to provide guidance, from the open literature, on developing a set of 'measures of effectiveness' (MoEs) and using them to evaluate a system. Approximately twenty papers and books are reviewed. The papers that provide the clearest understanding of MoEs are identified (Sproles [46], [48], [50]). The seminal work on value-focused thinking (VFT), an approach that bridges the gap between MoEs and a system, is also identified (Keeney [25]). Finally, three examples of the use of VFT in evaluating a system based on MoEs are identified (Jackson et al. [21], Kerchner & Deckro [27], and Doyle et al. [14]). Notes are provided on the papers and books to pursue in order to take this study to the next level of detail.

Prototyping Faithful Execution in a Java virtual machine

Campbell, Philip L.; Pierson, Lyndon G.; Tarman, Thomas D.

This report presents the implementation of a stateless scheme for Faithful Execution, the design for which is presented in a companion report, 'Principles of Faithful Execution in the Implementation of Trusted Objects' (SAND 2003-2328). We added a simple cryptographic capability to an already simplified class loader and its associated Java Virtual Machine (JVM) to provide a byte-level implementation of Faithful Execution. We refer to the extended class loader and JVM collectively as the Sandia Faithfully Executing Java architecture (JavaFE for short). This prototype is intended to enable exploration of more sophisticated techniques that we intend to implement in hardware.
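
The verify-before-execute idea behind the prototype can be sketched in a few lines of standard-library Python: the loader checks a MAC over the delivered code bytes and only then hands them to the runtime, analogous to a class loader verifying bytes before defining a class. The key handling, module format, and function names are illustrative assumptions, not the JavaFE design.

```python
# Minimal verify-before-execute sketch (standard library only).

import hashlib, hmac

class FaithfulLoadError(Exception):
    pass

def seal_module(key: bytes, source: bytes) -> bytes:
    """Producer side: append a MAC tag to the code bytes."""
    tag = hmac.new(key, source, hashlib.sha256).digest()
    return source + tag

def load_faithfully(key: bytes, sealed: bytes) -> dict:
    """Loader side: refuse to execute code whose MAC does not verify."""
    source, tag = sealed[:-32], sealed[-32:]
    expected = hmac.new(key, source, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise FaithfulLoadError("module failed integrity check; not executed")
    namespace: dict = {}
    exec(compile(source, "<sealed module>", "exec"), namespace)  # analogue of defineClass
    return namespace

if __name__ == "__main__":
    key = b"\x00" * 32   # demo key; a real system would provision this securely
    module = seal_module(key, b"def greet():\n    return 'loaded faithfully'\n")
    ns = load_faithfully(key, module)
    print(ns["greet"]())
```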

Principles of Faithful Execution in the implementation of trusted objects

Campbell, Philip L.; Pierson, Lyndon G.; Tarman, Thomas D.

We begin with the following definitions: Definition: A trusted volume is the computing machinery (including communication lines) within which data is assumed to be physically protected from an adversary. A trusted volume provides both integrity and privacy. Definition: Program integrity consists of the protection necessary to enable the detection of changes in the bits comprising a program as specified by the developer, for the entire time that the program is outside a trusted volume. For ease of discussion we consider program integrity to be the aggregation of two elements: instruction integrity (detection of changes in the bits within an instruction or block of instructions) and sequence integrity (detection of changes in the locations of instructions within a program). Definition: Faithful Execution (FE) is a type of software protection that begins when the software leaves the control of the developer and ends within the trusted volume of a target processor. That is, FE provides program integrity, even while the program is in execution. (As we will show below, FE schemes are a function of trusted volume size.) FE is a necessary quality for computing: without it we cannot trust computations. In the early days of computing FE came for free since the software never left a trusted volume. At that time the execution environment was the same as the development environment. In some circles that environment was referred to as a 'closed shop': all of the software that was used there was developed there. When an organization bought a large computer from a vendor, the organization would run its own operating system on that computer and use only its own editors, its own compilers, its own debuggers, and so on. However, with the continuing maturity of computing technology, FE becomes increasingly difficult to achieve.
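
The two elements of program integrity defined above can be illustrated with a minimal standard-library Python sketch: instruction integrity comes from a MAC over each block of instruction bytes, and sequence integrity from folding the block's position into that MAC, so that a modified block and an unmodified block replayed at the wrong position both fail verification. Block size, key handling, and names are assumptions for illustration, not an FE scheme from the report.

```python
# Per-block MAC with position binding: instruction + sequence integrity.

import hashlib, hmac

BLOCK = 16  # bytes of "instructions" per protected block (illustrative)

def protect(key: bytes, program: bytes) -> list[tuple[bytes, bytes]]:
    """Split the program into blocks and tag each with MAC(key, index || block)."""
    blocks = [program[i:i + BLOCK] for i in range(0, len(program), BLOCK)]
    return [(blk, hmac.new(key, idx.to_bytes(4, "big") + blk, hashlib.sha256).digest())
            for idx, blk in enumerate(blocks)]

def fetch(key: bytes, protected: list[tuple[bytes, bytes]], idx: int) -> bytes:
    """What the trusted volume does on each fetch: verify the tag, then execute."""
    blk, tag = protected[idx]
    expected = hmac.new(key, idx.to_bytes(4, "big") + blk, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise RuntimeError(f"faithful execution violated at block {idx}")
    return blk

if __name__ == "__main__":
    key = b"\x01" * 32
    prog = bytes(range(64))                     # stand-in for instruction bytes
    image = protect(key, prog)
    assert fetch(key, image, 2) == prog[32:48]  # unmodified block verifies
    image[1], image[2] = image[2], image[1]     # swap two blocks: a sequence attack
    try:
        fetch(key, image, 1)
    except RuntimeError as e:
        print(e)                                # reordering is detected
```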

Distributed Denial-of-Service Characterization

Draelos, Timothy J.; Torgerson, Mark D.; Berg, Michael J.; Campbell, Philip L.; Duggan, David P.; Van Leeuwen, Brian P.; Young, William F.; Young, Mary L.

Distributed denial-of-service (DoS) attacks on cyber-resources are complex problems that are difficult to completely define, characterize, and mitigate. We recognize the process nature of DoS attacks and view them from multiple perspectives. Identification of opportunities for mitigation and further research may result from this attempt to characterize the DoS problem space. We examine DoS attacks from the points of view of (1) a high-level view that establishes common terminology and a framework for discussing the DoS process, (2) the layers of the communication stack, from attack origination to the victim of the attack, (3) specific network and computer elements, and (4) attack manifestations. We also examine DoS issues associated with wireless communications. Using this collection of views, one begins to see the DoS problem in a holistic way that may lead to improved understanding, new mitigation strategies, and fruitful research.

Source Code Assurance Tool: Preliminary Functional Description

Craft, Richard L.; Espinoza, Juan E.; Campbell, Philip L.

This report provides a preliminary functional description of a novel software application, the Source Code Assurance Tool, which would assist a system analyst in the software assessment process. An overview is given of the tool's functionality and design, and of how the analyst would use it to assess a body of source code. This work was done as part of a Laboratory Directed Research and Development project.

Visual Structure Language

Campbell, Philip L.; Espinoza, Juan E.

In this paper we describe a new language, Visual Structure Language (VSL), designed to describe the structure of a program and explain its pieces. This new language is built on top of a general-purpose language, such as C. The language consists of three extensions: explanations, nesting, and arcs. Explanations are comments explicitly associated with code segments. These explanations can be nested, and arcs can be inserted between explanations to show data or control flow. The value of VSL is that it enables a developer to better control a code base. The developer can represent the structure via nested explanations, using arcs to indicate the flow of data and control. The explanations provide a 'second opinion' about the code, so that at any level the developer can confirm that the code operates as intended. We believe that VSL enables programmers to use in a computer language the same model they use in their heads when they conceptualize systems: a hierarchy of components.
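
Because the abstract names VSL's three extensions but not its concrete syntax, the sketch below uses a purely hypothetical comment notation (and Python rather than C, for consistency with the other sketches in this listing): '#:' opens an explanation tied to the code beneath it, indentation nests explanations, and '#->' records a data- or control-flow arc between explanations.

```python
# Hypothetical illustration of explanations, nesting, and arcs; this is NOT
# VSL's actual notation, only a mock-up of the three concepts.

#: [1] Produce a report for one customer.
#:     [1.1] Fetch the customer's orders.
#:     [1.2] Total the orders and format the report.
#:     #-> 1.1 to 1.2 (the order list flows into the total)
def customer_report(customer_id: int) -> str:
    orders = fetch_orders(customer_id)           # covered by explanation 1.1
    total = sum(amount for _, amount in orders)  # covered by explanation 1.2
    return f"customer {customer_id}: {len(orders)} orders, total {total}"

def fetch_orders(customer_id: int) -> list[tuple[str, float]]:
    """Stand-in data source so the sketch runs on its own."""
    return [("order-a", 12.50), ("order-b", 7.25)]

if __name__ == "__main__":
    print(customer_report(42))
```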

Source Code Assurance Tool: An Implementation

Campbell, Philip L.; Espinoza, Juan E.

We present the tool we built as part of a Laboratory Directed Research and Development (LDRD) project. This tool consists of a commercially available graphical editor front end combined with a back-end 'slicer.' The significance of the tool is that it shows how to slice across system components, an advance over slicing across program components.
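
The back-end slicing idea can be illustrated with a minimal Python sketch: given a dependence graph whose nodes belong to different system components (not just one program), a backward slice collects everything a chosen criterion transitively depends on. The graph, node names, and component prefixes are made up for illustration; they are not the tool's representation.

```python
# Backward slicing over a dependence graph that spans system components.

# node -> set of nodes it depends on (data or control dependence)
DEPENDS_ON = {
    "web:render_total":   {"app:compute_total"},
    "app:compute_total":  {"app:apply_discount", "db:read_prices"},
    "app:apply_discount": {"db:read_discounts"},
    "db:read_prices":     set(),
    "db:read_discounts":  set(),
    "web:render_banner":  set(),   # unrelated; should not appear in the slice
}

def backward_slice(criterion: str, depends_on: dict[str, set[str]]) -> set[str]:
    """All nodes, across components, that the criterion transitively depends on."""
    seen, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(depends_on.get(node, ()))
    return seen

if __name__ == "__main__":
    for node in sorted(backward_slice("web:render_total", DEPENDS_ON)):
        print(node)   # spans the web, app, and db components
```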

Final Report for the 10 to 100 Gigabit/Second Networking Laboratory Directed Research and Development Project

Witzke, Edward L.; Pierson, Lyndon G.; Tarman, Thomas D.; Dean, Leslie B.; Robertson, Perry J.; Campbell, Philip L.

The next major performance plateau for high-speed, long-haul networks is at 10 Gbps. Data visualization, high-performance network storage, and Massively Parallel Processing (MPP) demand these (and higher) communication rates. MPP-to-MPP distributed processing applications and MPP-to-Network File Store applications already require single-conversation communication rates in the range of 10 to 100 Gbps. MPP-to-Visualization Station applications can already utilize communication rates in the 1 to 10 Gbps range. This LDRD project examined some of the building blocks necessary for developing a 10 to 100 Gbps computer network architecture. These included technology areas such as OS bypass, Dense Wavelength Division Multiplexing (DWDM), IP switching and routing, optical amplifiers, inverse multiplexing of ATM, data encryption, and data compression; standards-body activities in the ATM Forum and the Optical Internetworking Forum (OIF); and proof-of-principle laboratory prototypes. This work has not only advanced the body of knowledge in the aforementioned areas but has also facilitated the rapid maturation of high-speed networking and communication technology by (1) participating in the development of pertinent standards and (2) promoting informal (and formal) collaboration with industrial developers of high-speed communication equipment.
