Publications


Selecting an informative/discriminating multivariate response for inverse prediction

Journal of Quality Technology

Thomas, Edward V.; Lewis, John R.; Anderson-Cook, Christine M.; Burr, Tom; Hamada, Michael S.

Inverse prediction is important in a variety of scientific and engineering applications, such as predicting the properties/characteristics of an object from multiple measurements obtained from it. Inverse prediction can be accomplished by inverting parameterized forward models that relate the measurements (responses) to the properties/characteristics of interest. Sometimes forward models are computational/science based, but often they are empirically based response-surface models obtained from controlled experimentation. For empirical models, it is important that the experiments provide a sound basis for developing accurate forward models in terms of the properties/characteristics (factors). While nature dictates the causal relationships between factors and responses, experimenters can control the complexity, accuracy, and precision of the forward models through their selection of factors, factor levels, and the set of trials performed. Recognizing the uncertainty in the estimated forward models leads to an errors-in-variables approach for inverse prediction. The forward models (whether experimentally estimated or science based) can also be used to analyze how well candidate responses complement one another for inverse prediction over the factor space of interest; some responses may prove complementary, others redundant or noninformative. Simple analyses and examples illustrate how an informative and discriminating subset of responses can be selected from among the candidates when the number of responses that can be acquired during inverse prediction is limited by difficulty, expense, and/or availability of material.
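
As a rough illustration of the ideas above, the following sketch fits simple empirical forward models for two candidate responses and then inverts them jointly by variance-weighted least squares. All numbers, noise levels, and the linear model form are illustrative assumptions, and the sketch ignores the forward-model uncertainty that the paper's errors-in-variables treatment accounts for.

```python
# Hypothetical sketch of joint inverse prediction from two responses.
import numpy as np

rng = np.random.default_rng(0)
x_design = np.linspace(0.0, 1.0, 11)                 # designed factor levels
# Two candidate responses with assumed sensitivities and noise levels
y1 = 2.0 + 3.0 * x_design + rng.normal(0, 0.10, 11)
y2 = 1.0 - 1.5 * x_design + rng.normal(0, 0.05, 11)

# Estimated forward models: y_j ~ a_j + b_j * x
X = np.column_stack([np.ones(11), x_design])
(a1, b1), _, _, _ = np.linalg.lstsq(X, y1, rcond=None)
(a2, b2), _, _, _ = np.linalg.lstsq(X, y2, rcond=None)

# Inverse prediction: the x minimizing the variance-weighted squared residuals
y_new = np.array([3.4, 0.35])                        # newly measured responses
a, b = np.array([a1, a2]), np.array([b1, b2])
w = 1.0 / np.array([0.10, 0.05]) ** 2                # inverse response variances
x_hat = np.sum(w * b * (y_new - a)) / np.sum(w * b ** 2)
print(f"inverse-predicted x = {x_hat:.3f}")
```

A response whose sensitivity b_j is small relative to its noise contributes little to x_hat, which is the sense in which a candidate response can be noninformative.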


Solving Inverse Radiation Transport Problems with Multi-Sensor Data in the Presence of Correlated Measurement and Modeling Errors

Thomas, Edward V.; Stork, Chris L.

Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is traditionally solved by finding the set of transport model parameter values that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. The traditional implicit (often inaccurate) assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. Here, an alternative method that accounts for correlated errors between channels is described and illustrated using an inverse problem based on the combination of gamma and neutron multiplicity counting measurements.
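
A minimal sketch of such a fit, assuming a generic forward model and a full channel-error covariance Sigma (neither is from the paper): the objective generalizes the traditional per-channel weighting, which corresponds to a diagonal Sigma.

```python
# Sketch: generalized least squares with correlated channel errors.
import numpy as np
from scipy.optimize import minimize

def gls_objective(theta, y_obs, forward, Sigma_inv):
    """r^T Sigma^{-1} r with r = y_obs - forward(theta); the off-diagonal
    terms of Sigma account for correlated measurement/model errors."""
    r = y_obs - forward(theta)
    return r @ Sigma_inv @ r

# Toy forward model: two parameters scaling two fixed template signatures.
templates = np.array([[1.0, 0.8, 0.5, 0.2],
                      [0.1, 0.3, 0.6, 0.9]])
forward = lambda theta: theta @ templates

rng = np.random.default_rng(1)
channels = np.arange(4)
Sigma = 0.01 * 0.6 ** np.abs(np.subtract.outer(channels, channels))
y_obs = forward(np.array([2.0, 1.0])) + rng.multivariate_normal(np.zeros(4), Sigma)

res = minimize(gls_objective, x0=np.ones(2),
               args=(y_obs, forward, np.linalg.inv(Sigma)))
print("estimated parameters:", res.x)
```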


A Statistical Perspective on Highly Accelerated Testing

Thomas, Edward V.

Highly accelerated life testing has been heavily promoted at Sandia (and elsewhere) as a means to rapidly identify product weaknesses caused by flaws in the product's design or manufacturing process. During product development, a small number of units are forced to fail at high stress. The failed units are then examined to determine the root causes of failure. The identification of the root causes of product failures exposed by highly accelerated life testing can instigate changes to the product's design and/or manufacturing process that result in a product with increased reliability. It is widely viewed that this qualitative use of highly accelerated life testing (often associated with the acronym HALT) can be useful. However, highly accelerated life testing has also been proposed as a quantitative means for "demonstrating" the reliability of a product where unreliability is associated with loss of margin via an identified and dominating failure mechanism. The dominant failure mechanism is assumed to be accelerated by changing the level of a stress factor related to that mechanism. In extreme cases, a minimal number of units (often from a pre-production lot) are subjected to a single stress that is highly accelerated relative to normal use. If no (or sufficiently few) units fail at this high stress level, some might claim that a certain level of reliability has been demonstrated (relative to normal use conditions). Underlying this claim are assumptions regarding the level of knowledge of the relationship between the stress level and the probability of failure. The primary purpose of this document is to discuss (from a statistical perspective) the efficacy of using accelerated life testing protocols (and, in particular, "highly accelerated" protocols) to make quantitative inferences concerning the performance of a product (e.g., reliability) when in fact there is lack of knowledge and uncertainty concerning the assumed relationship between the stress level and performance. In addition, this document contains recommendations for conducting more informative accelerated tests.
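
The fragility of such claims can be made concrete with the classical success-run bound. In the hypothetical sketch below, zero failures in n units demonstrate a lower confidence bound on reliability at the test stress; translating that bound to use conditions requires an assumed acceleration factor, and the demonstrated use reliability moves with that assumption.

```python
# Hypothetical numbers; the simple scaling of failure probability by an
# acceleration factor AF is itself an assumption of the kind discussed above.
n, C = 20, 0.90
R_L_stress = (1 - C) ** (1.0 / n)    # success-run bound at the test stress (~0.891)
p_stress = 1 - R_L_stress

for AF in (1, 10, 100):              # assumed acceleration factors
    p_use = p_stress / AF            # assumed stress-to-use relationship
    print(f"assumed AF = {AF:>3}: demonstrated use reliability >= {1 - p_use:.4f}")
```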


Detectability of Neuronal Currents in Human Brain with Magnetic Resonance Spectroscopy

Jones, Howland D.; Thomas, Edward V.; Harper, Jason C.

Magnetic resonance spectroscopy has been used in a high-risk, high-payoff search for neuronal current (NC) signals in the free induction decay (FID) data from the visual cortex of human subjects during visual stimulation. If successful, this approach could make possible the detection of neuronal currents in the brain at high spatial and temporal resolution. Our initial experiments indicated a statistically significant change in FIDs acquired with the NC present relative to FIDs with the NC absent, and this signal was consistent with the presence of NC. Unfortunately, two follow-on experiments were unable to confirm or replicate the positive findings of the first experiment. However, even if the result from the first experiment were evidence of NC in the FID, its effect is clearly so small that a true NC imaging experiment would not be possible with the instrumentation and experimental protocol used here.


Accounting for correlated errors in inverse radiation transport problems

Thomas, Edward V.; Stork, Chris L.; Mattingly, John K.

Inverse radiation transport focuses on identifying the configuration of an unknown radiation source given its observed radiation signatures. The inverse problem is solved by finding the set of transport model variables that minimizes a weighted sum of the squared differences by channel between the observed signature and the signature predicted by the hypothesized model parameters. The weights per channel are inversely proportional to the sum of the variances of the measurement and model errors at a given channel. In the current treatment, the implicit assumption is that the errors (differences between the modeled and observed radiation signatures) are independent across channels. In this paper, an alternative method that accounts for correlated errors between channels is described and illustrated for inverse problems based on gamma spectroscopy.
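
The effect of ignoring the correlation can be seen in a small sketch (hypothetical covariances, not values from the paper): the traditional per-channel weights use only the diagonal of the total error covariance, and the resulting goodness-of-fit statistic differs from the one computed with the full covariance.

```python
# Sketch: diagonal-weight vs. full-covariance chi-square on correlated residuals.
import numpy as np

rng = np.random.default_rng(2)
n = 8
corr = 0.7 ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
Sigma_meas = 0.04 * np.eye(n)        # independent counting noise (assumed)
Sigma_model = 0.09 * corr            # channel-correlated model error (assumed)
Sigma = Sigma_meas + Sigma_model     # total error covariance

r = rng.multivariate_normal(np.zeros(n), Sigma)   # residuals: modeled - observed

chi2_diag = np.sum(r ** 2 / np.diag(Sigma))       # traditional weighting
chi2_full = r @ np.linalg.solve(Sigma, r)         # accounts for correlation
print(f"diagonal chi2 = {chi2_diag:.2f}, full-covariance chi2 = {chi2_full:.2f}")
```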


Final report for the endowment of simulator agents with human-like episodic memory LDRD

Forsythe, James C.; Speed, Ann S.; Lippitt, Carl E.; Schaller, Mark J.; Xavier, Patrick G.; Thomas, Edward V.; Schoenwald, David A.

This report documents work undertaken to endow the cognitive framework under development at Sandia National Laboratories with a human-like memory for specific life episodes. Capabilities were demonstrated in three separate problem areas. The first year of the project developed a capability whereby simulated robots used a record of shared experience to perform surveillance of a building to detect a source of smoke. The second year focused on simulations of social interactions, providing a queriable record of interactions from which a time series of events could be constructed and reconstructed. The third year addressed tools to promote desktop productivity, creating a capability to query episodic logs in real time and allowing a model of the user to build on itself based on observations of the user's behavior.


High throughput instruments, methods, and informatics for systems biology

Davidson, George S.; Sinclair, Michael B.; Thomas, Edward V.; Werner-Washburne, Margaret; Boyack, Kevin W.; Wylie, Brian N.; Haaland, David M.; Timlin, Jerilyn A.; Keenan, Michael R.

High throughput instruments and analysis techniques are required to make good use of the genomic sequences that have recently become available for many species, including humans. These instruments and methods must work with tens of thousands of genes simultaneously and must be able to identify the small subsets of those genes that are implicated in the observed phenotypes or, for instance, in responses to therapies. Microarrays are one such high throughput method and continue to find increasingly broad application. This project has improved microarray technology in several important areas. First, we developed a hyperspectral scanner, which has discovered and diagnosed numerous flaws in techniques broadly employed by microarray researchers. Second, we used a series of statistically designed experiments to identify and correct errors in our microarray data, dramatically improving the accuracy, precision, and repeatability of microarray gene expression measurements. Third, our research developed new informatics techniques to identify genes with significantly different expression levels. Finally, natural language processing techniques were applied to improve our ability to make use of online literature annotating the important genes. In combination, this research has improved the reliability and precision of laboratory methods and instruments, while also enabling substantially faster analysis and discovery.
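
As a generic sketch of the third item, identifying significantly differentially expressed genes (synthetic data; not the project's actual pipeline), per-gene t-tests can be combined with a Benjamini-Hochberg false-discovery-rate correction:

```python
# Sketch: per-gene differential expression with an FDR correction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_genes, n_reps = 5000, 6
control = rng.normal(0.0, 1.0, (n_genes, n_reps))
treated = rng.normal(0.0, 1.0, (n_genes, n_reps))
treated[:50] += 2.0                  # 50 truly differentially expressed genes

t, p = stats.ttest_ind(treated, control, axis=1)

# Benjamini-Hochberg: find the largest k with p_(k) <= (k/m) * q
q, m = 0.05, n_genes
order = np.argsort(p)
passed = p[order] <= q * (np.arange(1, m + 1) / m)
n_sig = passed.nonzero()[0].max() + 1 if passed.any() else 0
print(f"{n_sig} genes declared significant at FDR {q}")
```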


Experimental design and analysis for accelerated degradation tests with Li-ion cells

Thomas, Edward V.; Jungst, Rudolph G.; Roth, Emanuel P.; Doughty, Daniel H.

This document describes a general protocol (involving both experimental and data analytic aspects) that is designed to be a roadmap for rapidly obtaining a useful assessment of the average lifetime (at some specified use conditions) that might be expected from cells of a particular design. The proposed experimental protocol involves a series of accelerated degradation experiments. Through the acquisition of degradation data over time specified by the experimental protocol, an unambiguous assessment of the effects of accelerating factors (e.g., temperature and state of charge) on various measures of the health of a cell (e.g., power fade and capacity fade) will result. In order to assess cell lifetime, it is necessary to develop a model that accurately predicts degradation over a range of the experimental factors. In general, it is difficult to specify an appropriate model form without some preliminary analysis of the data. Nevertheless, assuming that the aging phenomenon relates to a chemical reaction with simple first-order rate kinetics, a data analysis protocol is also provided to construct a useful model that relates performance degradation to the levels of the accelerating factors. This model can then be used to make an accurate assessment of the average cell lifetime. The proposed experimental and data analysis protocols are illustrated with a case study involving the effects of accelerated aging on the power output from Gen-2 cells. For this case study, inadequacies of the simple first-order kinetics model were observed. However, a more complex model allowing for the effects of two concurrent mechanisms provided an accurate representation of the experimental data.
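
A minimal sketch of the kind of model described above, assuming first-order kinetics with an Arrhenius rate; the data, parameter values, and the 20% fade criterion are all made up for illustration and are not the Gen-2 results.

```python
# Hypothetical first-order-kinetics aging model with an Arrhenius rate.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314e-3  # gas constant, kJ/(mol*K)

def first_order_fade(X, log_A, Ea, D_inf):
    """fade(t, T) = D_inf * (1 - exp(-k(T) * t)), k(T) = exp(log_A - Ea/(R*T))."""
    t, T = X
    k = np.exp(log_A - Ea / (R * T))
    return D_inf * (1.0 - np.exp(-k * t))

# Synthetic accelerated-aging data: weeks on test at three temperatures (K).
t = np.tile(np.arange(4, 44, 4, dtype=float), 3)
T = np.repeat([298.15, 318.15, 338.15], 10)
rng = np.random.default_rng(0)
true = (12.0, 40.0, 0.5)                       # log_A, Ea (kJ/mol), D_inf (assumed)
y = first_order_fade((t, T), *true) + rng.normal(0, 0.005, t.size)

popt, pcov = curve_fit(first_order_fade, (t, T), y, p0=(10.0, 30.0, 0.4))
log_A, Ea, D_inf = popt

# Predicted time (weeks) to 20% power fade at 25 C -- the "lifetime" estimate.
k_use = np.exp(log_A - Ea / (R * 298.15))
t_life = -np.log(1.0 - 0.20 / D_inf) / k_use
print(f"fitted Ea = {Ea:.1f} kJ/mol, predicted life = {t_life:.0f} weeks")
```

When the simple first-order model is inadequate, as in the case study, a second exponential term for a concurrent mechanism can be added to the model function and fit the same way.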


Experiments on Adaptive Techniques for Host-Based Intrusion Detection

Draelos, Timothy J.; Collins, Michael J.; Duggan, David P.; Thomas, Edward V.

This research explores four experiments with adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their use of reinforcement learning, which allows learning of exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which is unable to detect novel exploits, and anomaly detection, which flags too many events, including many that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.
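
As a toy sketch of a statistical anomaly-detection baseline of the kind mentioned above (synthetic feature vectors standing in for the preprocessed BSM audit data), a Mahalanobis-distance detector trained on clean sessions can flag sessions whose score exceeds a threshold set from the training data:

```python
# Sketch: Mahalanobis-distance anomaly detection on session feature vectors.
import numpy as np

rng = np.random.default_rng(4)
clean = rng.normal(0.0, 1.0, (500, 10))      # training data: clean sessions only

mu = clean.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(clean, rowvar=False))

def mahalanobis2(x):
    d = x - mu
    return d @ cov_inv @ d

# Threshold at the 99th percentile of training scores; flag anything above it.
scores = np.array([mahalanobis2(x) for x in clean])
threshold = np.quantile(scores, 0.99)

session = rng.normal(3.0, 1.0, 10)           # a shifted, "exploit-like" session
print("anomalous" if mahalanobis2(session) > threshold else "clean")
```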


Filling Source Feedthrus with Alumina/Molybdenum CND50 Cermet: Experimental, Theoretical, and Computational Approaches

Stuecker, John N.; Cesarano, Joseph C.; Shollenberger, K.A.; Roach, R.A.; Torczynski, J.R.; Thomas, Edward V.; Van Ornum, David J.

This report is a summary of the work completed in FY00 for science-based characterization of the processes used to fabricate cermet vias in source feedthrus. In particular, studies were completed to characterize the CND50 cermet slurry, characterize solvent imbibition, and identify critical via-filling variables. These three areas of interest are important to several processes pertaining to the production of neutron generator tubes. Rheological characterizations of CND50 slurries prepared with 94ND2 and Sandi94 primary powders were also compared. The 94ND2 powder was formerly produced at the GE Pinellas Plant, and Sandi94 is the replacement powder produced at CeramTec. Processing variables that may affect the via-filling process were also studied, including the effect of solids loading in the CND50 slurry, the effect of milling time, and the effect of Nuosperse (a slurry "conditioner"). Imbibition characterization included a combination of experimental, theoretical, and computational strategies to determine solvent migration through complex shapes, specifically vias in the source feedthru component. Critical factors were determined using a controlled set of experiments designed to identify the variables that influence the occurrence of defects within the cermet-filled via. These efforts were pursued to increase part-production reliability, understand selected fundamental issues that impact the production of slurry-filled parts, and validate the ability of the computational fluid dynamics code GOMA to simulate these processes. Suggestions are made for improving the slurry filling of source feedthru vias.


Low-Power Public Key Cryptography

Beaver, Cheryl L.; Draelos, Timothy J.; Hamilton, Victoria A.; Schroeppel, Richard C.; Gonzales, Rita A.; Miller, Russell D.; Thomas, Edward V.

This report presents research on public key digital signature algorithms for cryptographic authentication in low-power, low-computation environments. We assessed algorithms for suitability based on their signature size and their computation and storage requirements. We evaluated a variety of general-purpose and special-purpose computing platforms to address issues such as memory, voltage requirements, and special functionality for low-power applications. In addition, we examined custom design platforms. We found that a custom design offers the most flexibility and can be optimized for specific algorithms. Furthermore, the entire platform can exist on a single Application Specific Integrated Circuit (ASIC) or can be integrated with commercially available components to produce the desired computing platform.


Approximate Public Key Authentication with Information Hiding

Thomas, Edward V.; Draelos, Timothy J.

This paper describes a solution to the problem of authenticating the shapes of statistically variant gamma spectra while simultaneously concealing the shapes and magnitudes of the sensitive spectra. The shape of a spectrum is given by the relative magnitudes and positions of its individual spectral elements. Class-specific linear orthonormal transformations of the measured spectra are used to produce output that meets both the authentication and concealment requirements. For purposes of concealment, the n-dimensional gamma spectra are transformed into n-dimensional output spectra that are effectively indistinguishable from Gaussian white noise (independent of the class). In addition, the proposed transformations are such that statistical authentication metrics computed on the transformed spectra are identical to those computed on the original spectra.
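
The metric-preserving property rests on standard linear algebra, sketched below with a random orthonormal matrix standing in for the paper's class-specific construction: for orthonormal Q, ||Qa - Qb|| = ||a - b||, so Euclidean authentication metrics are unchanged while the transformed spectrum conceals the original shape.

```python
# Sketch: an orthonormal transform preserves distances between spectra.
import numpy as np

rng = np.random.default_rng(5)
n = 128
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthonormal transform

reference = np.exp(-0.5 * ((np.arange(n) - 40.0) / 3.0) ** 2)  # toy "spectrum"
measured = reference + rng.normal(0, 0.01, n)

# The authentication metric is unchanged by the transform...
print(np.allclose(np.linalg.norm(measured - reference),
                  np.linalg.norm(Q @ measured - Q @ reference)))
# ...while Q @ measured itself resembles white noise to an observer without Q.
```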
