Publications

29 Results
Biotechnology development for biomedical applications

Rempe, Susan R.; Rogers, David M.; Buerger, Stephen B.; Kuehl, Michael K.; Hatch, Anson H.; Abhyankar, Vinay V.; Mai, Junyu M.; Dirk, Shawn M.; Brozik, Susan M.; De Sapio, Vincent D.; Schoeniger, Joseph S.

Sandia's scientific and engineering expertise in the fields of computational biology, high-performance prosthetic limbs, biodetection, and bioinformatics has been applied to specific problems at the forefront of cancer research. Molecular modeling was employed to design stable mutations of the enzyme L-asparaginase with improved selectivity for asparagine over other amino acids, with the potential for improved cancer chemotherapy. New electrospun polymer composites with improved electrical conductivity and mechanical compliance have been demonstrated, with the promise of direct interfacing between the peripheral nervous system and the control electronics of advanced prosthetics. The capture of rare circulating tumor cells has been demonstrated on a microfluidic chip produced with a versatile fabrication process capable of integration with existing lab-on-a-chip and biosensor technology. Software tools have also been developed to increase the calculation speed of clustered heat maps for the display of relationships in large arrays of protein data. All these projects were carried out in collaboration with researchers at the University of Texas M. D. Anderson Cancer Center in Houston, TX.

Quantifying effectiveness of failure prediction and response in HPC systems: Methodology and example

Proceedings of the International Conference on Dependable Systems and Networks

Brandt, James M.; Chen, Frank X.; De Sapio, Vincent D.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, Philippe; Roe, Diana C.; Thompson, David; Wong, Matthew H.

Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. © 2010 IEEE.
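One simple way to frame a predictor's cost-benefit, in the spirit of the abstract (an illustrative sketch with hypothetical numbers, not the paper's methodology), is to weigh the work saved by true positives against the migration overhead paid on every positive:

```python
def predictor_cost_benefit(tp, fp, fn, lost_work_per_failure, migration_cost):
    """Estimate net compute-hours saved by acting on a failure predictor.

    Each true positive avoids the lost work of a failure but pays one
    migration; each false positive pays a migration for nothing; each
    miss (fn) neither saves nor costs anything beyond the baseline.
    """
    saved = tp * (lost_work_per_failure - migration_cost)
    wasted = fp * migration_cost
    return saved - wasted

# Hypothetical figures: 40 caught failures, 10 false alarms, 5 misses,
# 100 node-hours lost per unmitigated failure, 2 node-hours per migration.
net = predictor_cost_benefit(tp=40, fp=10, fn=5,
                             lost_work_per_failure=100, migration_cost=2)
```

Under these assumed numbers the predictor is worthwhile whenever the per-failure loss avoided exceeds the aggregate cost of acting on false alarms.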

The application of quaternions and other spatial representations to the reconstruction of re-entry vehicle motion

De Sapio, Vincent D.

The analysis of spacecraft kinematics and dynamics requires an efficient scheme for spatial representation. While the representation of displacement in three dimensional Euclidean space is straightforward, orientation in three dimensions poses particular challenges. The unit quaternion provides an approach that mitigates many of the problems intrinsic in other representation approaches, including the ill-conditioning that arises from computing many successive rotations. This report focuses on the computational utility of unit quaternions and their application to the reconstruction of re-entry vehicle (RV) motion history from sensor data. To this end they will be used in conjunction with other kinematic and data processing techniques. We will present a numerical implementation for the reconstruction of RV motion solely from gyroscope and accelerometer data. This will make use of unit quaternions due to their numerical efficacy in dealing with the composition of many incremental rotations over a time series. In addition to signal processing and data conditioning procedures, algorithms for numerical quaternion-based integration of gyroscope data will be addressed, as well as accelerometer triangulation and integration to yield RV trajectory. Actual processed flight data will be presented to demonstrate the implementation of these methods.
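The quaternion-based integration of gyroscope data described above can be sketched as follows (a minimal illustration under assumed conventions, not the report's implementation): each body-rate sample is converted to an incremental rotation quaternion, composed onto the running orientation, and renormalized to control the drift and ill-conditioning that motivate the quaternion representation.

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(omega_samples, dt, q0=None):
    """Compose body-frame angular-rate samples (rad/s) into an
    orientation quaternion over a time series."""
    q = np.array([1.0, 0.0, 0.0, 0.0]) if q0 is None else np.asarray(q0, float)
    for omega in omega_samples:
        rate = np.linalg.norm(omega)
        if rate > 0.0:
            angle = rate * dt
            axis = omega / rate
            dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        else:
            dq = np.array([1.0, 0.0, 0.0, 0.0])
        q = quat_mul(q, dq)        # compose the incremental rotation
        q /= np.linalg.norm(q)     # renormalize to keep |q| = 1
    return q
```

For example, 100 samples of a constant pi/2 rad/s rotation about the body z-axis over one second should yield the quaternion for a 90-degree rotation about z.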

A framework for graph-based synthesis, analysis, and visualization of HPC cluster job data

De Sapio, Vincent D.

The monitoring and system analysis of high performance computing (HPC) clusters is of increasing importance to the HPC community. Analysis of HPC job data can be used to characterize system usage and diagnose and examine failure modes and their effects. This analysis is not straightforward, however, due to the complex relationships that exist between jobs. These relationships are based on a number of factors, including shared compute nodes between jobs, proximity of jobs in time, etc. Graph-based techniques are particularly well suited to this problem and provide an effective means of discovering important relationships in job queuing and execution data. The efficacy of these techniques is rooted in the use of a semantic graph as a knowledge representation tool. In a semantic graph, job data, represented in a combination of numerical and textual forms, can be flexibly processed into edges, with corresponding weights, expressing relationships between jobs, nodes, users, and other relevant entities. This graph-based representation permits formal manipulation by a number of analysis algorithms. This report presents a methodology and software implementation that leverages semantic graph-based techniques for the system-level monitoring and analysis of HPC clusters based on job queuing and execution data. Ontology development and graph synthesis are discussed with respect to the domain of HPC job data. The framework automates the synthesis of graphs from a database of job information. It also provides a front end, enabling visualization of the synthesized graphs. Additionally, an analysis engine is incorporated that provides performance analysis, graph-based clustering, and failure prediction capabilities for HPC systems.
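As an illustrative sketch of the graph-synthesis idea (our own minimal example with hypothetical field names and weights, not the report's ontology or implementation), job records can be turned into a weighted graph by emitting an edge for each pair of jobs that share compute nodes, plus a lighter edge for temporal overlap:

```python
from collections import defaultdict
from itertools import combinations

def build_job_graph(jobs):
    """Synthesize a weighted job-relationship graph from job records.

    Edge weight accumulates 1.0 per shared compute node and 0.1 for
    any overlap in execution time (weights are illustrative).
    """
    edges = defaultdict(float)
    for a, b in combinations(jobs, 2):
        key = (a["id"], b["id"])
        shared = set(a["nodes"]) & set(b["nodes"])
        if shared:
            edges[key] += 1.0 * len(shared)
        overlap = min(a["end"], b["end"]) - max(a["start"], b["start"])
        if overlap > 0:
            edges[key] += 0.1  # weaker temporal-proximity relationship
    return dict(edges)

jobs = [
    {"id": "j1", "nodes": ["n1", "n2"], "start": 0,  "end": 10},
    {"id": "j2", "nodes": ["n2", "n3"], "start": 5,  "end": 15},
    {"id": "j3", "nodes": ["n4"],       "start": 20, "end": 30},
]
graph = build_job_graph(jobs)
```

A real semantic graph would also carry typed vertices for users, nodes, and queues; this sketch shows only the job-to-job projection.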

Using Cloud constructs and predictive analysis to enable pre-failure process migration in HPC systems

CCGrid 2010 - 10th IEEE/ACM International Conference on Cluster, Cloud, and Grid Computing

Brandt, James M.; Chen, F.; De Sapio, Vincent D.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, P.; Roe, D.; Thompson, D.; Wong, M.

Accurate failure prediction in conjunction with efficient process migration facilities including some Cloud constructs can enable failure avoidance in large-scale high performance computing (HPC) platforms. In this work we demonstrate a prototype system that incorporates our probabilistic failure prediction system with virtualization mechanisms and techniques to provide a whole system approach to failure avoidance. This work utilizes a failure scenario based on a real-world HPC case study. © 2010 IEEE.

Combining virtualization, resource characterization, and resource management to enable efficient high performance compute platforms through intelligent dynamic resource allocation

Proceedings of the 2010 IEEE International Symposium on Parallel and Distributed Processing, Workshops and Phd Forum, IPDPSW 2010

Brandt, James M.; Chen, F.; De Sapio, Vincent D.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, P.; Roe, D.; Thompson, D.; Wong, M.

Improved resource utilization and fault tolerance of large-scale HPC systems can be achieved through fine grained, intelligent, and dynamic resource (re)allocation. We explore components and enabling technologies applicable to creating a system to provide this capability: specifically 1) Scalable fine-grained monitoring and analysis to inform resource allocation decisions, 2) Virtualization to enable dynamic reconfiguration, 3) Resource management for the combined physical and virtual resources and 4) Orchestration of the allocation, evaluation, and balancing of resources in a dynamic environment. We discuss both general and HPC-centric issues that impact the design of such a system. Finally, we present our prototype system, giving both design details and examples of its application in real-world scenarios.
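The monitor-analyze-reallocate cycle enumerated above can be caricatured in a few lines (an illustrative sketch of the control-loop idea only; the thresholds, metric names, and decision rule are our assumptions, not the paper's system):

```python
def reallocation_step(metrics, hosts, threshold=0.9):
    """One pass of a monitor -> analyze -> reallocate loop.

    Flags hosts whose load metric exceeds a threshold and pairs each
    with the least-loaded other host as a migration target.
    """
    decisions = []
    for host, load in metrics.items():
        if load > threshold:
            target = min((h for h in hosts if h != host),
                         key=lambda h: metrics[h])
            decisions.append((host, target))  # (source, destination)
    return decisions

metrics = {"h1": 0.95, "h2": 0.30, "h3": 0.50}
plan = reallocation_step(metrics, list(metrics))
```

In the actual system this decision would be informed by the fine-grained monitoring and analysis components and executed via the virtualization layer.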

The OVIS analysis architecture

Brandt, James M.; De Sapio, Vincent D.; Gentile, Ann C.; Mayo, Jackson M.; Pebay, Philippe P.; Roe, Diana C.; Wong, Matthew H.

This report summarizes the current statistical analysis capability of OVIS and how it works in conjunction with the OVIS data readers and interpolators. It also documents how to extend these capabilities. OVIS is a tool for parallel statistical analysis of sensor data to improve system reliability. Parallelism is achieved using a distributed data model: many sensors on similar components (metaphorically sheep) insert measurements into a series of databases on computers reserved for analyzing the measurements (metaphorically shepherds). Each shepherd node then processes the sheep data stored locally and the results are aggregated across all shepherds. OVIS uses the Visualization Tool Kit (VTK) statistics algorithm class hierarchy to perform analysis of each process's data but avoids VTK's model aggregation stage, which uses the Message Passing Interface (MPI); this is because if a single process in an MPI job fails, the entire job will fail. Instead, OVIS uses asynchronous database replication to aggregate statistical models.

OVIS has several additional features beyond those present in VTK that, first, accommodate its particular data format and, second, improve the memory footprint and speed of the statistical analyses. First, because many statistical algorithms are multivariate in nature and sensor data is typically univariate, interpolation of data is required to provide simultaneous observations of metrics. Note that in this report, we will refer to a single value obtained from a sensor as a measurement, while a collection of multiple sensor values simultaneously present in the system is an observation. A base class for interpolation is provided that abstracts the operation of converting multiple sensor measurements into simultaneous observations. A concrete implementation is provided that performs piecewise constant temporal interpolation of multiple metrics across a single component.
Second, because calculations may summarize data too large to fit in memory, OVIS analyzes batches of observations at a time and aggregates the resulting intermediate intra-process models as it goes, before storing the final model for inter-process aggregation via database replication. This reduces the memory footprint of the analysis, interpolation, and the database client and server query processing. It also interleaves processing with the disk I/O required to fetch data from the database, further improving speed. This report documents how OVIS performs analyses and how to create additional analysis components that fetch measurements from the database, perform interpolation, or perform operations on streamed observations (such as model updates or assessments). The rest of this section outlines the OVIS analysis algorithm and is followed by sections specific to each subtask. Note that we are limiting our discussion for now to the creation of a model from a set of measurements, and not including the assessment of observations using a model. The same framework can be used for assessment but that use case is not detailed in this report.
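The piecewise-constant temporal interpolation described above, which turns per-sensor measurement streams into simultaneous observations, can be sketched as follows (illustrative Python with hypothetical metric names, not OVIS's VTK-based implementation):

```python
import bisect

def piecewise_constant_observations(metric_series, times):
    """Convert per-metric lists of (timestamp, value) measurements into
    simultaneous observations at the requested times, holding each metric
    at its most recent measured value (piecewise-constant interpolation).
    Metrics with no measurement yet at a given time map to None.
    """
    observations = []
    for t in times:
        obs = {}
        for name, samples in metric_series.items():
            stamps = [s for s, _ in samples]
            i = bisect.bisect_right(stamps, t) - 1  # last sample at or before t
            obs[name] = samples[i][1] if i >= 0 else None
        observations.append(obs)
    return observations

series = {
    "temp": [(0, 40.0), (10, 45.0)],   # univariate measurement streams
    "fan":  [(5, 3000), (12, 3200)],
}
obs = piecewise_constant_observations(series, [6, 11])
```

Here each observation is a dictionary of simultaneous metric values, the multivariate input the statistics algorithms require.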

Mechanical advantage: The Archimedean tradition of acquiring geometric insight from mechanical metaphor

History of Mechanism and Machine Science

De Sapio, Vincent D.; De Sapio, Robin

Archimedes’ genius was derived in no small part from his ability to effortlessly interpret problems in both geometric and mechanical ways. We explore, in a modern context, the application of mechanical reasoning to geometric problem solving. The general form of this inherently Archimedean approach is described, and its specific use is demonstrated with regard to the problem of finding the geodesics of a surface. Archimedes’ approach to thinking about problems may be his greatest contribution, and in that spirit we present some work related to teaching Archimedes’ ideas at an elementary level. The aim is to cultivate the same sort of creative problem solving employed by Archimedes in young students with nascent mechanical reasoning skills.
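One classical instance of this mechanical metaphor (our illustration of the general idea, not necessarily the chapter's worked example): a particle confined to a smooth surface $f(\mathbf{x}) = 0$, subject to no force other than the constraint reaction, traces a geodesic of that surface:

```latex
m\,\ddot{\mathbf{x}} = \lambda\,\nabla f(\mathbf{x}), \qquad f(\mathbf{x}) = 0 .
```

Because the constraint force $\lambda\,\nabla f$ is everywhere normal to the surface, the particle's speed is constant and the curvature vector of its path has no tangential component, which is precisely the defining property of a geodesic. The geometric problem of shortest paths is thus solved by reasoning about free mechanical motion.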
