Big data analytics in a national security setting
Abstract not provided.
An increasing number of corporate security policies make it desirable to push security closer to the desktop. It is not practical or feasible to place security and monitoring software on all computing devices (e.g. printers, personal digital assistants, copy machines, legacy hardware). We have begun to prototype a hardware and software architecture that will enforce security policies by pushing security functions closer to the end user, whether in the office or home, without interfering with users' desktop environments. We are developing a specialized programmable Ethernet network switch to achieve this. Embodied in this device is the ability to detect and mitigate network attacks that would otherwise disable or compromise the end user's computing nodes. We call this device a 'Secure Programmable Switch' (SPS). The SPS is designed with the ability to be securely reprogrammed in real time to counter rapidly evolving threats such as fast moving worms, etc. This ability to remotely update the functionality of the SPS protection device is cryptographically protected from subversion. With this concept, the user cannot turn off or fail to update virus scanning and personal firewall filtering in the SPS device as he/she could if implemented on the end host. The SPS concept also provides protection to simple/dumb devices such as printers, scanners, legacy hardware, etc. This report also describes the development of a cryptographically protected processor and its internal architecture in which the SPS device is implemented. This processor executes code correctly even if an adversary holds the processor. The processor guarantees both the integrity and the confidentiality of the code: the adversary cannot determine the sequence of instructions, nor can the adversary change the instruction sequence in a goal-oriented way.
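A central claim above is that SPS updates are cryptographically protected from subversion, so a user or attacker cannot push unauthorized filter rules to the device. The report does not specify the scheme; the sketch below is a minimal illustration of the idea using an HMAC over the update image, where the shared `DEVICE_KEY` and both function names are assumptions (a deployed device would more plausibly verify a public-key signature from the update authority).

```python
import hmac
import hashlib

# Assumption: a per-device secret provisioned at manufacture. A real SPS
# would likely verify a public-key signature instead of a shared secret.
DEVICE_KEY = b"provisioned-at-manufacture"

def verify_update(image: bytes, tag: bytes) -> bool:
    """Accept the update image only if its authentication tag checks out."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(expected, tag)

def apply_update(image: bytes, tag: bytes) -> str:
    """Gate reprogramming of the switch on successful verification."""
    if not verify_update(image, tag):
        return "rejected"  # subverted or corrupted update is never applied
    # flash_new_filter_rules(image)  # device-specific step, omitted here
    return "applied"
```

Because verification happens inside the switch rather than on the end host, the user cannot bypass it the way host-based antivirus or firewall software can be disabled.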
We report on the work done in the late-start LDRD ''Using Emulation and Simulation to Understand the Large-Scale Behavior of the Internet.'' We describe the creation of a research platform that emulates many thousands of machines to be used for the study of large-scale Internet behavior. We describe a proof-of-concept simple attack we performed in this environment. We describe the successful capture of a Storm bot and, from the study of the bot and further literature search, establish large-scale aspects we seek to understand via emulation of Storm on our research platform in possible follow-on work. Finally, we discuss possible future work.
We present the forensic analysis repository for malware (FARM), a system for automating malware analysis. FARM leverages existing dynamic and static analysis tools and is designed in a modular fashion to provide future extensibility. We present our motivations for designing the system and give an overview of the system architecture. We also present several common scenarios that detail uses for FARM as well as illustrate how automated malware analysis saves time. Finally, we discuss future development of this tool.
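The abstract emphasizes that FARM is built in a modular fashion so existing dynamic and static tools can be leveraged and new ones added later. A minimal sketch of that kind of design, assuming a plugin-registry pattern (the module names, registry, and trivial strings pass below are illustrative, not FARM's actual API):

```python
from typing import Callable, Dict

# Registry mapping module names to analysis functions. New static or
# dynamic analyzers register themselves without changes to the core.
ANALYZERS: Dict[str, Callable[[bytes], dict]] = {}

def register(name: str):
    """Decorator that adds an analysis module to the registry."""
    def wrap(fn: Callable[[bytes], dict]):
        ANALYZERS[name] = fn
        return fn
    return wrap

@register("static-strings")
def strings_module(sample: bytes) -> dict:
    """Trivial static pass: pull printable ASCII runs from the sample."""
    runs, cur = [], bytearray()
    for b in sample:
        if 32 <= b < 127:
            cur.append(b)
        else:
            if len(cur) >= 4:
                runs.append(cur.decode())
            cur = bytearray()
    if len(cur) >= 4:
        runs.append(cur.decode())
    return {"strings": runs}

def analyze(sample: bytes) -> dict:
    """Run every registered module and collect their reports."""
    return {name: fn(sample) for name, fn in ANALYZERS.items()}
```

The payoff of this structure is the extensibility the abstract describes: wrapping an existing sandbox or disassembler behind the same one-function interface makes it a drop-in module.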
Proceedings of the ACM/IEEE 2005 Supercomputing Conference, SC'05
We present the design and implementation of InfoStar, an adaptive visual analytics platform for mobile devices such as PDAs, laptops, Tablet PCs and mobile phones. InfoStar extends the reach of visual analytics technology beyond the traditional desktop paradigm to provide ubiquitous access to interactive visualizations of information spaces. These visualizations are critical in addressing the knowledge needs of human agents operating in the field, in areas as diverse as business, homeland security, law enforcement, protective services, emergency medical services and scientific discovery. We describe an initial real world deployment of this technology, in which the InfoStar platform has been used to offer mobile access to scheduling and venue information to conference attendees at Supercomputing 2004. © 2005 IEEE.
Wireless computer networks are increasing exponentially around the world. They are being implemented in both the unlicensed radio frequency (RF) spectrum (IEEE 802.11a/b/g) and the licensed spectrum (e.g., Firetide [1] and Motorola Canopy [2]). Wireless networks operating in the unlicensed spectrum are by far the most popular wireless computer networks in existence. The open (i.e., non-proprietary) nature of the IEEE 802.11 protocols and the availability of ''free'' RF spectrum have encouraged many producers of enterprise and common off-the-shelf (COTS) computer networking equipment to jump into the wireless arena. Competition between these companies has driven down the price of 802.11 wireless networking equipment and has improved user experiences with such equipment. The end result has been an increased adoption of the equipment by businesses and consumers, the establishment of the Wi-Fi Alliance [3], and widespread use of the Alliance's ''Wi-Fi'' moniker to describe these networks. Consumers use 802.11 equipment at home to reduce the burden of running wires in existing construction, facilitate the sharing of broadband Internet services with roommates or neighbors, and increase their range of ''connectedness''. Private businesses and government entities (at all levels) are deploying wireless networks to reduce wiring costs, increase employee mobility, enable non-employees to access the Internet, and create an added revenue stream to their existing business models (coffee houses, airports, hotels, etc.).
Municipalities (Philadelphia; San Francisco; Grand Haven, MI) are deploying wireless networks so they can bring broadband Internet access to places lacking such access; offer limited-speed broadband access to impoverished communities; offer broadband in places, such as marinas and state parks, that are passed over by traditional broadband providers; and provide themselves with higher quality, more complete network coverage for use by emergency responders and other municipal agencies. In short, these Wi-Fi networks are being deployed everywhere. Considerable thought has gone, and continues to go, into cost-benefit analyses of wired vs. wireless networks and into issues such as how to effectively cover an office building or municipality, how to efficiently manage a large network of wireless access points (APs), and how to save money by replacing an Internet service provider (ISP) with 802.11 technology. In comparison, very little thought and money are being focused on wireless security and monitoring for security purposes.
Effective, high-performance, networked file systems and storage are needed to solve I/O bottlenecks between large compute platforms. Frequently, parallel techniques, such as PFTP, are employed to overcome the adverse effect of TCP's congestion avoidance algorithm in order to achieve reasonable aggregate throughput. These techniques can suffer from end-system bottlenecks due to the protocol processing overhead and memory copies involved in moving large amounts of data during I/O. Moreover, transferring data using PFTP requires manual operation, lacking the transparency to allow for interactive visualization and computational steering of large-scale simulations from distributed locations. This paper evaluates the emerging Internet SCSI (iSCSI) protocol [2] as the file/data transport so that remote clients can transparently access data through a distributed global file system available to local clients. We started our work characterizing the performance behavior of iSCSI in Local Area Networks (LANs). We then proceeded to study the effect of propagation delay on throughput using remote iSCSI storage and explored optimization techniques to mitigate the adverse effects of long delay in high-bandwidth Wide Area Networks (WANs). Lastly, we evaluated iSCSI in a Storage Area Network (SAN) for a Global Parallel Filesystem. We conducted our benchmark based on a typical usage model of large-scale scientific applications at Sandia. We demonstrated the benefit of high-performance parallel I/O to scientific applications at the IEEE 2004 Supercomputing Conference, using experiences and knowledge gained from this study.
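The reason propagation delay degrades iSCSI-over-TCP throughput on high-bandwidth WANs can be made concrete with a back-of-the-envelope calculation: a sender can have at most one window of data in flight per round trip, so the window must cover the bandwidth-delay product (BDP) to fill the pipe. The figures below are illustrative, not measurements from this study.

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the pipe."""
    return bandwidth_bps / 8 * rtt_s

def max_throughput_bps(window_bytes: float, rtt_s: float) -> float:
    """Throughput ceiling imposed by a fixed window over a given RTT."""
    return window_bytes * 8 / rtt_s

# Example: a 1 Gb/s WAN path with a 50 ms round-trip time needs
# bdp_bytes(1e9, 0.05) = 6.25 MB in flight, yet a classic 64 KB TCP
# window caps throughput near 10.5 Mb/s over that same path.
```

This is why the WAN optimizations mentioned above (e.g., window scaling or parallel streams) matter: without them, a long-delay path leaves most of the available bandwidth unused.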
Network administrators and security analysts often do not know what network services are being run in every corner of their networks. If they do have a vague grasp of the services running on their networks, they often do not know what specific versions of those services are running. Actively scanning for services and versions does not always yield complete results, and patch and service management, therefore, suffer. We present NetState, a system for monitoring, storing, and reporting application and operating system version information for a network. NetState gives security and network administrators the ability to know what is running on their networks while allowing for user-managed machines and complex host configurations. Our architecture uses distributed modules to collect network information and a centralized server that stores and issues reports on that collected version information. We discuss some of the challenges to building and operating NetState as well as the legal issues surrounding the promiscuous capture of network data. We conclude that this tool can solve some key problems in network management and has a wide range of possibilities for future uses.
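The collection modules described above learn versions passively, from traffic the hosts already send, rather than by active probing. A minimal sketch of that idea, assuming banner matching over captured payloads (the patterns, service names, and record format here are illustrative; NetState's actual modules and schema are not specified in this abstract):

```python
import re

# Illustrative banner patterns for a few self-announcing protocols.
BANNER_PATTERNS = {
    "http": re.compile(rb"Server:\s*([^\r\n]+)"),
    "ssh": re.compile(rb"^SSH-[\d.]+-([^\r\n]+)"),
}

def extract_versions(payload: bytes) -> dict:
    """Return {service: version banner} for every pattern that matches.

    In a NetState-style deployment, a distributed module would run this
    over promiscuously captured packets and forward each hit, with the
    source address and timestamp, to the central server.
    """
    found = {}
    for service, pattern in BANNER_PATTERNS.items():
        match = pattern.search(payload)
        if match:
            found[service] = match.group(1).decode(errors="replace")
    return found
```

Because the hosts volunteer this information in normal traffic, the approach covers user-managed machines that an active scanner might miss or be forbidden to probe, which is the gap the abstract identifies.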