X-ray image acquisition of a device undergoing pyroshock [Slides]
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Optics InfoBase Conference Papers
We present a deep learning image reconstruction method called AirNet-SNL for sparse-view computed tomography. It combines iterative reconstruction and convolutional neural networks with end-to-end training. Our model reduces streak artifacts from filtered back-projection with limited data, and it trains on randomly generated shapes. This work shows promise for generalizing learned image reconstruction.
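A minimal sketch of the general idea of CNN-based refinement of a sparse-view filtered back-projection (FBP) image is shown below. This is illustrative only and is not the actual AirNet-SNL architecture; the layer sizes and the residual formulation are assumptions.

```python
# Illustrative sketch: a small residual CNN that cleans up a streaky FBP image.
# Not the AirNet-SNL architecture; layer counts and widths are assumptions.
import torch
import torch.nn as nn

class StreakSuppressionCNN(nn.Module):
    """Maps an artifact-laden FBP reconstruction to a cleaned image."""
    def __init__(self, channels=32, depth=4):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, fbp_image):
        # Learn the residual (streak artifacts) and subtract it from the input.
        return fbp_image - self.net(fbp_image)

# End-to-end training against randomly generated phantoms would pair each
# phantom (ground truth) with its sparse-view FBP reconstruction:
#   loss = torch.nn.functional.mse_loss(model(fbp_batch), phantom_batch)
```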
Journal of Nondestructive Evaluation, Diagnostics and Prognostics of Engineering Systems
X-ray phase contrast imaging (XPCI) is a nondestructive evaluation technique that enables high-contrast detection of low-attenuation materials that are largely transparent in traditional radiography. Extending a grating-based Talbot-Lau XPCI system to three-dimensional imaging with computed tomography (CT) imposes two motion requirements: the analyzer grating must translate transverse to the optical axis to capture image sets for XPCI reconstruction, and the sample must rotate to capture angular data for CT reconstruction. The choice of data acquisition algorithm determines the order of movement and positioning of these two stages and is instrumental to collecting high-fidelity data for reconstruction. We investigate how data acquisition influences XPCI CT by comparing two simple data acquisition algorithms and determine that capturing a full phase-stepping image set for a CT projection before rotating the sample results in higher quality data.
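The two loop orderings being compared can be summarized as below. This is a schematic sketch with hypothetical stage and detector handles (rotation_stage, grating_stage, detector); the actual motion-control interface is not specified in the abstract.

```python
# Schematic of the two acquisition orderings (hypothetical hardware handles).

def phase_steps_then_rotate(rotation_stage, grating_stage, detector,
                            angles, grating_positions):
    """Capture a full phase-stepping set at each CT angle, then rotate the sample."""
    data = {}
    for theta in angles:
        rotation_stage.move_to(theta)
        data[theta] = []
        for x_g in grating_positions:
            grating_stage.move_to(x_g)
            data[theta].append(detector.capture())
    return data

def rotate_then_phase_step(rotation_stage, grating_stage, detector,
                           angles, grating_positions):
    """Capture a full CT rotation at each grating position, then step the grating."""
    data = {}
    for x_g in grating_positions:
        grating_stage.move_to(x_g)
        data[x_g] = {}
        for theta in angles:
            rotation_stage.move_to(theta)
            data[x_g][theta] = detector.capture()
    return data
```

The paper's finding corresponds to the first ordering: completing the phase-stepping set per projection before rotating the sample yields higher quality data.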
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings of SPIE - The International Society for Optical Engineering
High-quality image products in an X-Ray Phase Contrast Imaging (XPCI) system can be produced with proper system hardware and data acquisition. However, it may be possible to further increase the quality of the image products by addressing subtleties and imperfections in both hardware and the data acquisition process. Noting that addressing these issues entirely in hardware and data acquisition may not be practical, a more prudent approach is to determine the balance of how the apparatus may reasonably be improved and what can be accomplished with image post-processing techniques. Given a proper signal model for XPCI data, image processing techniques can be developed to compensate for many of the image quality degradations associated with higher-order hardware and data acquisition imperfections. However, processing techniques also have limitations and cannot entirely compensate for sub-par hardware or inaccurate data acquisition practices. Understanding system and image processing technique limitations enables balancing between hardware, data acquisition, and image post-processing. In this paper, we present some of the higher-order image degradation effects we have found associated with subtle imperfections in both hardware and data acquisition. We also discuss and demonstrate how a combination of hardware, data acquisition processes, and image processing techniques can increase the quality of XPCI image products. Finally, we assess the requirements for high-quality XPCI images and propose reasonable system hardware modifications and the limits of certain image processing techniques.
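For context, the grating-based phase-stepping signal model that underlies this kind of processing is shown below; the notation is generic and the exact retrieval procedure used in the paper may differ.

```latex
% Per-pixel phase-stepping signal as the analyzer grating is translated by x_g
% over one grating period p; superscripts s and r denote sample and reference scans.
I^{s,r}(x_g) \approx a_0^{s,r}\left[1 + V^{s,r}\cos\!\left(\frac{2\pi x_g}{p} + \phi^{s,r}\right)\right]

% Retrieved image products: transmission, differential phase, and dark field.
T = \frac{a_0^{s}}{a_0^{r}}, \qquad
\Delta\phi = \phi^{s} - \phi^{r}, \qquad
D = \frac{V^{s}}{V^{r}}
```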
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings of SPIE - The International Society for Optical Engineering
Sandia National Laboratories has developed a method that applies machine learning methods to high-energy spectral X-ray computed tomography data to identify material composition for every reconstructed voxel in the field-of-view. While initial experiments led by Koundinyan et al. demonstrated that supervised machine learning techniques perform well in identifying a variety of classes of materials, this work presents an unsupervised approach that differentiates isolated materials with highly similar properties and can be applied to spectral computed tomography data to identify materials more accurately than traditional approaches. Additionally, if regions of the spectrum for multiple voxels become unusable due to artifacts, this method can still reliably perform material identification. This enhanced capability can tremendously impact fields in security, industry, and medicine that leverage non-destructive evaluation for detection, verification, and validation applications.
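A sketch of the general approach of unsupervised per-voxel material labeling from spectral CT data is given below. K-means is used purely for illustration; the paper's actual unsupervised method and preprocessing are not specified here and may differ.

```python
# Illustrative sketch: cluster per-voxel energy spectra from a spectral CT
# reconstruction into material classes (K-means chosen only for illustration).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def label_voxels_by_spectrum(spectral_volume, n_materials):
    """spectral_volume: array of shape (nz, ny, nx, n_energy_bins)."""
    nz, ny, nx, nbins = spectral_volume.shape
    spectra = spectral_volume.reshape(-1, nbins)
    # Normalizing each spectrum emphasizes spectral shape over magnitude,
    # which helps separate materials with similar overall attenuation.
    spectra = normalize(spectra, norm="l2")
    labels = KMeans(n_clusters=n_materials, n_init=10).fit_predict(spectra)
    return labels.reshape(nz, ny, nx)
```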
Abstract not provided.
AIP Conference Proceedings
This work aims to validate previous exploratory work done to characterize materials by matching their attenuation profiles using a multichannel radiograph given an initial energy spectrum. The experiment was performed to evaluate the effects of noise, which had been ignored in simulation, on the resulting attenuation profiles. Spectrum measurements have also been collected from various materials of interest. Additionally, a MATLAB optimization algorithm has been applied to these candidate spectrum measurements in order to extract an estimate of the attenuation profile. Being able to characterize materials through this nondestructive method has an extensive range of applications for a wide variety of fields, including quality assessment, industry, and national security.
Proceedings of SPIE - The International Society for Optical Engineering
Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.
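One of the metrics described, the ability to distinguish two materials with similar absorption, can be computed per energy bin and for the energy-integrated (grayscale-equivalent) data as sketched below; the specific metric definitions used in the paper may differ.

```python
# Illustrative contrast-to-noise ratio (CNR) between two materials, evaluated
# per energy bin and for the energy-integrated data.
import numpy as np

def cnr(region_a, region_b):
    """CNR between two regions of interest (arrays of reconstructed values)."""
    return abs(region_a.mean() - region_b.mean()) / np.sqrt(region_a.var() + region_b.var())

def compare_binned_vs_integrated(roi_a_bins, roi_b_bins):
    """roi_*_bins: arrays of shape (n_bins, n_voxels) from binned reconstructions."""
    per_bin = [cnr(a, b) for a, b in zip(roi_a_bins, roi_b_bins)]
    integrated = cnr(roi_a_bins.sum(axis=0), roi_b_bins.sum(axis=0))
    return per_bin, integrated
```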
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
2014 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2014
Conventional CPU-based algorithms for Computed Tomography reconstruction lack the computational efficiency necessary to process large, industrial datasets in a reasonable amount of time. Specifically, a single-pass, trillion volumetric pixel (voxel) reconstruction requires months to complete on a high-performance CPU-based workstation. An optimized, single-workstation multi-GPU approach has shown performance increases of 2-3 orders of magnitude; however, reconstruction of future-size, trillion-voxel datasets can still take an entire day to complete. This paper details an approach that further decreases runtime and allows for more diverse workstation environments by using a cluster of GPU-capable workstations. Due to the irregularity of the reconstruction tasks throughout the volume, using a cluster of multi-GPU nodes requires inventive topological structuring and data partitioning to avoid network bottlenecks and achieve optimal GPU utilization. This paper covers the cluster layout and non-linear weighting scheme used in this high-performance multi-GPU CT reconstruction algorithm and presents experimental results from reconstructing two large-scale datasets to evaluate this approach's performance and applicability to future-size datasets. Specifically, our approach yields up to a 20 percent improvement for large-scale data.
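The flavor of the data-partitioning problem can be illustrated as below: contiguous z-slabs of the volume are assigned to nodes so that estimated per-slab work is balanced rather than slab counts. This greedy sketch is only illustrative and is not the paper's actual non-linear weighting scheme.

```python
# Illustrative cost-balanced partition of volume slices among GPU nodes
# (not the paper's actual weighting scheme).
import numpy as np

def partition_slices(slice_costs, n_nodes):
    """Greedy contiguous partition of per-slice cost estimates among n_nodes."""
    slice_costs = np.asarray(slice_costs, dtype=float)
    target = slice_costs.sum() / n_nodes
    bounds, running, assigned = [], 0.0, 0
    for z, cost in enumerate(slice_costs):
        running += cost
        if running >= target and assigned < n_nodes - 1:
            bounds.append(z + 1)
            running, assigned = 0.0, assigned + 1
    return np.split(np.arange(slice_costs.size), bounds)
```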
2014 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2014
This exploratory work investigates the feasibility of extracting linear attenuation functions with respect to energy from a multi-channel radiograph of an object of interest composed of a homogeneous material by simulating the entire imaging system combined with a digital phantom of the object of interest and leveraging this information along with the acquired multi-channel image. This synergistic combination of information allows for improved estimates on not only the attenuation for an effective energy, but for the entire spectrum of energy that is coincident with the detector elements. Material composition identification from radiographs would have wide applications in both medicine and industry. This work will focus on industrial radiography applications and will analyse a range of materials that vary in attenuative properties. This work shows that using iterative solvers holds encouraging potential to fully solve for the linear attenuation profile for the object and material of interest when the imaging system is characterized with respect to initial source x-ray energy spectrum, scan geometry, and accurate digital phantom.
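The underlying forward model that the iterative solver inverts can be written as below; the notation (source spectrum S, detector response D, binned attenuation values) is illustrative rather than taken from the paper.

```latex
% Polychromatic forward model for one detector element viewing a homogeneous
% object of path length t; S(E) is the source spectrum, D(E) the detector
% response, and \mu(E) the unknown energy-dependent linear attenuation.
I = \int S(E)\, D(E)\, e^{-\mu(E)\, t}\, \mathrm{d}E
\;\approx\; \sum_{k} S_k\, D_k\, e^{-\mu_k t}
```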
Abstract not provided.
Abstract not provided.
Abstract not provided.
Proceedings of SPIE - The International Society for Optical Engineering
This work will investigate the imaging capabilities of the Multix multi-channel linear array detector and its potential suitability for big-data industrial and security applications compared with currently deployed detector technology. Multi-channel imaging data holds huge promise not only for finer resolution in materials classification, but also for materials identification and elevated data quality in various radiography and computed tomography applications. The potential pitfall is the signal quality contained within individual channels, as well as the exposure and acquisition time required to obtain images comparable to those of traditional configurations. This work will present results of these detector technologies as they pertain to a subset of materials of interest to the industrial and security communities; namely, water, copper, lead, polyethylene, and tin.
Abstract not provided.
Abstract not provided.
We are studying PMDI polyurethane with a fast catalyst, such that filling and polymerization occur simultaneously. The foam is over-packed to twice or more of its free rise density to reach the density of interest. Our approach is to combine model development closely with experiments to discover new physics, to parameterize models, and to validate the models once they have been developed. The model must be able to represent the expansion, filling, curing, and final foam properties. PMDI is a chemically blown foam, where carbon dioxide is produced via the reaction of water and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. A new kinetic model is developed and implemented, which follows a simplified mathematical formalism that decouples these two reactions. The model predicts the polymerization reaction via condensation chemistry, where vitrification and glass transition temperature evolution must be included to correctly predict this quantity. The foam gas generation kinetics are determined by tracking the molar concentration of both water and carbon dioxide. Understanding the thermal history and loads on the foam due to exothermicity and oven heating is very important to the results, since the kinetics and material properties are all very sensitive to temperature. The conservation equations, including the equations of motion, an energy balance, and three rate equations, are solved via a stabilized finite element method. We assume generalized-Newtonian rheology that is dependent on the cure, gas fraction, and temperature. The conservation equations are combined with a level set method to determine the location of the free surface over time. Results from the model are compared to experimental flow visualization data and post-test CT data for the density. Several geometries are investigated, including a mock encapsulation part, two configurations of a mock structural part, and a bar geometry to specifically test the density model. We have found that the model predicts both average density and filling profiles well. However, it under-predicts density gradients, especially in the gravity direction. Thoughts on model improvements are also discussed.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
We are developing computational models to elucidate the expansion and dynamic filling process of a polyurethane foam, PMDI. The polyurethane of interest is chemically blown, where carbon dioxide is produced via the reaction of water, the blowing agent, and isocyanate. The isocyanate also reacts with polyol in a competing reaction, which produces the polymer. Here we detail the experiments needed to populate a processing model and provide parameters for the model based on these experiments. The model entails solving the conservation equations, including the equations of motion, an energy balance, and two rate equations for the polymerization and foaming reactions, following a simplified mathematical formalism that decouples these two reactions. Parameters for the polymerization kinetics model are reported based on infrared spectrophotometry. Parameters describing the gas generating reaction are reported based on measurements of volume, temperature and pressure evolution with time. A foam rheology model is proposed and parameters determined through steady-shear and oscillatory tests. Heat of reaction and heat capacity are determined through differential scanning calorimetry. Thermal conductivity of the foam as a function of density is measured using a transient method based on the theory of the transient plane source technique. Finally, density variations of the resulting solid foam in several simple geometries are directly measured by sectioning and sampling mass, as well as through x-ray computed tomography. These density measurements will be useful for model validation once the complete model is implemented in an engineering code.
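The two decoupled rate equations referred to above can be written generically as below. These are illustrative Arrhenius-type forms only; the reaction orders, terms, and fitted parameters of the actual model are reported in the paper and may differ.

```latex
% Illustrative decoupled rate equations (generic Arrhenius forms, not the
% paper's fitted model): polymerization extent \alpha_p and CO2 generation
% from the water-isocyanate blowing reaction.
\frac{d\alpha_p}{dt} = A_p\, e^{-E_p/RT}\,\left(1-\alpha_p\right)^{n_p}

\frac{d[\mathrm{CO_2}]}{dt} = A_g\, e^{-E_g/RT}\,[\mathrm{H_2O}]\,[\mathrm{NCO}]
```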
Abstract not provided.
Abstract not provided.
Proceedings of SPIE - The International Society for Optical Engineering
Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile for the energy range of a given Bremsstrahlung profile. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.
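A minimal sketch of the constrained-optimization step is shown below, assuming transmission measurements through several known thicknesses of a homogeneous object and a characterized source spectrum; function names, the least-squares formulation, and the absence of regularization are assumptions, not the paper's exact approach.

```python
# Illustrative constrained fit of a binned attenuation profile mu[k] from
# transmission measurements through known thicknesses of a homogeneous object.
import numpy as np
from scipy.optimize import least_squares

def fit_attenuation_profile(spectrum, thicknesses, measured):
    """spectrum: per-bin source weights S_k; measured: intensity per thickness.
    In practice more measurements than bins (or regularization) are needed."""
    def residuals(mu):
        predicted = np.array([np.sum(spectrum * np.exp(-mu * t)) for t in thicknesses])
        return predicted - measured
    mu0 = np.full(spectrum.size, 0.1)           # crude initial guess
    result = least_squares(residuals, mu0, bounds=(0.0, np.inf))
    return result.x
```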
Proceedings of SPIE - The International Society for Optical Engineering
This paper will investigate energy efficiency for various real-world industrial computed-tomography reconstruction algorithms, in both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction is based on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches [1] realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and other metric improvements were realized on the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
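The three metrics named above follow directly from a run's wall-clock time, average power draw, and a throughput figure; the sketch below shows the standard definitions (the throughput unit, e.g. voxels reconstructed per second, is an assumption).

```python
# Standard definitions of the reported metrics from runtime, power, and work done.
def energy_metrics(runtime_s, avg_power_w, work_done):
    energy_j = avg_power_w * runtime_s              # total energy consumption (joules)
    perf_per_watt = (work_done / runtime_s) / avg_power_w
    energy_delay_product = energy_j * runtime_s     # penalizes slow, power-hungry runs
    return energy_j, perf_per_watt, energy_delay_product
```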
Proceedings of SPIE - The International Society for Optical Engineering
This work describes a high-performance approach to radiograph (i.e., X-ray image for this work) simulation for arbitrary objects. The generation of radiographs is more generally known as the forward projection imaging model. The formation of radiographs is very computationally expensive and is not typically attempted for large-scale applications such as industrial radiography. The approach described in this work revolves around a single GPU-based implementation that performs the attenuation calculation in a massively parallel environment. Additionally, further performance gains are realized by exploiting GPU-specific hardware. Early results show that using a single GPU can increase computational performance by three orders of magnitude for volumes of 1000³ voxels and images with 1000² pixels.
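A CPU reference sketch of the forward-projection model for a single 2-D slice and parallel-beam geometry is given below; the GPU implementation parallelizes this calculation per ray/pixel. The geometry and helper names are illustrative assumptions, not the paper's implementation.

```python
# Reference sketch of radiograph simulation for one slice: rotate the attenuation
# map, integrate along rays, and apply Beer-Lambert attenuation.
import numpy as np
from scipy.ndimage import rotate

def simulate_radiograph(mu_slice, angle_deg, pixel_size, i0=1.0):
    """Parallel-beam line integrals of attenuation mu_slice at one view angle."""
    rotated = rotate(mu_slice, angle_deg, reshape=False, order=1)
    path_integrals = rotated.sum(axis=0) * pixel_size   # integrate along columns
    return i0 * np.exp(-path_integrals)                  # transmitted intensity
```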
Abstract not provided.
Abstract not provided.
Abstract not provided.
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General-purpose GPU (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur, such as memory latencies, that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
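For reference, the per-slice backprojection loop that such a GPU implementation maps onto threads (e.g., one thread per voxel) looks like the CPU sketch below; this is a generic nearest-neighbor filtered-backprojection step for illustration, not the project's CUDA code.

```python
# CPU reference sketch of voxel-driven backprojection for one reconstruction slice.
import numpy as np

def backproject_slice(filtered_sinogram, angles_rad, n):
    """filtered_sinogram: (n_angles, n_detector); returns an n x n slice."""
    xs = np.arange(n) - n / 2.0
    xx, yy = np.meshgrid(xs, xs)
    slice_img = np.zeros((n, n))
    center = filtered_sinogram.shape[1] / 2.0
    for proj, theta in zip(filtered_sinogram, angles_rad):
        # Detector coordinate of every voxel at this view angle (nearest neighbor).
        t = xx * np.cos(theta) + yy * np.sin(theta) + center
        idx = np.clip(t.astype(int), 0, proj.size - 1)
        slice_img += proj[idx]
    return slice_img * (np.pi / len(angles_rad))
```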
Abstract not provided.
To compute the radiography properties of various materials, the flux profiles of X-ray sources must be characterized. This report describes the characterization of X-ray beam profiles from a Kimtron industrial 450 kVp radiography system with a Comet MXC-45 HP/11 bipolar oil-cooled X-ray tube. The empirical method described here uses a detector response function to derive photon flux profiles based on data collected with a small cadmium telluride detector. The flux profiles are then reduced to a simple parametric form that enables computation of beam profiles for arbitrary accelerator energies.
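The core unfolding step implied by this approach can be sketched as below, where the detector response matrix maps true photon energies to measured pulse heights; how that matrix is constructed is system-specific and not shown, and the non-negative least-squares choice is an assumption for illustration.

```python
# Illustrative spectral unfolding: recover a photon flux profile from a measured
# pulse-height distribution using a detector response matrix.
import numpy as np
from scipy.optimize import nnls

def unfold_flux(response_matrix, measured_counts):
    """response_matrix[i, j]: probability that a photon in true-energy bin j is
    recorded in measured-energy bin i; returns flux per true-energy bin."""
    flux, _residual = nnls(response_matrix, measured_counts)
    return flux
```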
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Abstract not provided.
Explosive growth in photovoltaic markets has fueled new creative approaches that promise to cut costs and improve reliability of system components. However, market demands require rapid development of these new and innovative technologies in order to compete with more established products and capture market share. Oftentimes, diagnostics that assist in R&D do not exist or have not been applied due to the innovative nature of the proposed products. Some diagnostics such as IR imaging, electroluminescence, light IV, dark IV, x-rays, and ultrasound have been employed in the past and continue to serve in the development of new products; however, innovative products with new materials, unique geometries, and previously unused manufacturing processes require additional or improved test capabilities. This fast-track product development cycle requires diagnostic capabilities to provide the information that confirms the integrity of manufacturing techniques and provides the feedback that can spawn confidence in process control, reliability, and performance. This paper explores the use of digital radiography and computed tomography (CT) with other diagnostics to support photovoltaic R&D and manufacturing applications.
A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.
Abstract not provided.
Abstract not provided.
Proposed for publication in Chemical Engineering Science.
Abstract not provided.
Abstract not provided.
Abstract not provided.
High Performance Structures and Materials
The investigation of the liquefaction and flow behavior of a thermally decomposing removable epoxy foam (REF) was discussed. It was concluded that the behavior of REF can vary greatly depending on both physical and thermal boundary conditions as well as on decomposition chemistry. It was shown that the foam regression away from a heated surface generally involves two moving boundaries, a fluid-solid interface and a fluid-vapor interface. During thermal decomposition, the physical and chemical behaviors of the foams are coupled and can significantly affect heat transfer rates to the encapsulated units.
A Chemical-structure-based PolyUrethane Foam (CPUF) decomposition model has been developed to predict the fire-induced response of rigid, closed-cell polyurethane foam-filled systems. The model, developed for the B-61 and W-80 fireset foam, is based on a cascade of bond-breaking reactions that produce CO2. Percolation theory is used to dynamically quantify polymer fragment populations of the thermally degrading foam. The partition between condensed-phase polymer fragments and gas-phase polymer fragments (i.e. vapor-liquid split) was determined using a vapor-liquid equilibrium model. The CPUF decomposition model was implemented into the finite element (FE) heat conduction codes COYOTE and CALORE, which support chemical kinetics and enclosure radiation. Elements were removed from the computational domain when the calculated solid mass fractions within the individual finite elements decreased below a set criterion. Element removal, referred to as "element death," creates a radiation enclosure (assumed to be non-participating) as well as a decomposition front, which separates the condensed-phase encapsulant from the gas-filled enclosure. All of the chemistry parameters as well as thermophysical properties for the CPUF model were obtained from small-scale laboratory experiments. The CPUF model was evaluated by comparing predictions to measurements. The validation experiments included several thermogravimetric experiments at pressures ranging from ambient pressure to 30 bars. Larger, component-scale experiments were also used to validate the foam response model. The effects of heat flux, bulk density, orientation, embedded components, confinement, and pressure were measured and compared to model predictions. Uncertainties in the model results were evaluated using a mean value approach. The measured mass loss in the TGA experiments and the measured location of the decomposition front were within the 95% prediction limit determined using the CPUF model for all of the experiments where the decomposition gases were vented sufficiently. The CPUF model results were not as good for the partially confined radiant heat experiments where the vent area was regulated to maintain pressure. Liquefaction and flow effects, which are not considered in the CPUF model, become important when the decomposition gases are confined.
Abstract not provided.
Journal of Materials Research
The goal of this work is to develop techniques for measuring gradients in particle concentration within filled polymers, such as encapsulant. A high concentration of filler particles is added to such materials to tailor physical properties such as thermal expansion coefficient. Sedimentation and flow-induced migration of particles can produce concentration gradients that are most severe near material boundaries. Therefore, techniques for measuring local particle concentration should be accurate near boundaries. Particle gradients in an alumina-filled epoxy resin are measured with a spatial resolution of 0.2 mm using an x-ray beam attenuation technique, but an artifact related to the finite diameter of the beam reduces accuracy near the specimen's edge. Local particle concentration near an edge can be measured more reliably using microscopy coupled with image analysis. This is illustrated by measuring concentration profiles of glass particles having 40 µm median diameter using images acquired by a confocal laser fluorescence microscope. The mean of the measured profiles of volume fraction agrees to better than 3% with the expected value, and the shape of the profiles agrees qualitatively with simple theory for sedimentation of monodisperse particles. Extending this microscopy technique to smaller, micron-scale filler particles used in encapsulant for microelectronic devices is illustrated by measuring the local concentration of an epoxy resin containing 0.41 volume fraction of silica.
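The conversion from measured beam attenuation to local filler volume fraction follows the standard Beer-Lambert mixture relation sketched below; the symbols (path length L, filler and resin attenuation coefficients) are illustrative notation, not taken from the paper.

```latex
% Beer-Lambert mixture relation for a two-phase filled polymer of path length L,
% with filler volume fraction \phi and attenuation coefficients \mu_f and \mu_r:
I = I_0\, e^{-\left[\phi\,\mu_{f} + (1-\phi)\,\mu_{r}\right] L}
\quad\Longrightarrow\quad
\phi = \frac{\tfrac{1}{L}\ln\!\left(I_0/I\right) - \mu_{r}}{\mu_{f} - \mu_{r}}
```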