Publications

104 Results

A Process to Colorize and Assess Visualizations of Noisy X-Ray Computed Tomography Hyperspectral Data of Materials with Similar Spectral Signatures

2021 IEEE Nuclear Science Symposium and Medical Imaging Conference Record, NSS/MIC 2021 and 28th International Symposium on Room-Temperature Semiconductor Detectors, RTSD 2022

Clifford, Joshua M.; Kemp, Emily K.; Limpanukorn, Ben L.; Jimenez, Edward S.

Dimension reduction techniques have frequently been used to summarize information from high-dimensional hyperspectral data, usually in an effort to classify or visualize the materials contained in the hyperspectral image. The main challenge in applying these techniques to Hyperspectral Computed Tomography (HCT) data is that if the materials in the field of view are of similar composition, then it can be difficult for a visualization of the hyperspectral image to differentiate between the materials. We propose novel alternative methods of preprocessing and summarizing HCT data in a single colorized image and novel measures to assess desired qualities in the resultant colored image, such as the contrast between different materials and the consistency of color within the same object. Proposed processes in this work include a new majority-voting method for multi-level thresholding, binary erosion, median filters, PAM clustering for grouping pixels into objects (of homogeneous materials), mean/median assignment along the spectral dimension for representing the underlying signature, UMAP or GLMs to assign colors, and quantitative coloring assessment with the developed measures. Strengths and weaknesses of various combinations of methods are discussed. These results have the potential to enable more robust material identification methods for HCT data, with wide use in industrial, medical, and security-based applications for detection and quantification, including visualization methods to assist with rapid human interpretability of these complex hyperspectral signatures.
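The summarization and coloring steps described above can be sketched in a few lines. The fragment below is illustrative only: it computes a median spectral signature per segmented object and maps a 3-D embedding of those signatures to RGB, substituting a plain PCA (via SVD) for the paper's UMAP/GLM color assignment; all function names are hypothetical.

```python
import numpy as np

def median_signatures(spectra, labels):
    """Median spectrum per object label (the median-assignment step
    along the spectral dimension)."""
    return {k: np.median(spectra[labels == k], axis=0) for k in np.unique(labels)}

def colorize(signatures):
    """Map each object's signature to RGB via a 3-component PCA
    (a stand-in for the UMAP/GLM color assignment), rescaling each
    color channel to [0, 1]."""
    keys = sorted(signatures)
    X = np.array([signatures[k] for k in keys])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    emb = Xc @ Vt[:3].T                              # 3-D embedding
    lo, hi = emb.min(axis=0), emb.max(axis=0)
    rgb = (emb - lo) / np.where(hi > lo, hi - lo, 1.0)
    return dict(zip(keys, rgb))
```

Contrast between materials and color consistency within an object can then be scored directly on the resulting per-object colors.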


AirNet-SNL: End-to-end training of iterative reconstruction and deep neural network regularization for sparse-data XPCI CT

Optics InfoBase Conference Papers

Lee, Dennis J.; Mulcahy-Stanislawczyk, Johnathan M.; Jimenez, Edward S.; West, Roger D.; Goodner, Ryan; Epstein, Collin E.; Thompson, Kyle R.; Dagel, Amber L.

We present a deep learning image reconstruction method called AirNet-SNL for sparse view computed tomography. It combines iterative reconstruction and convolutional neural networks with end-to-end training. Our model reduces streak artifacts from filtered back-projection with limited data, and it trains on randomly generated shapes. This work shows promise to generalize learning image reconstruction.
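The combination of iterative reconstruction and learned regularization can be illustrated with a toy unrolled scheme. The sketch below interleaves Landweber data-consistency steps with a fixed smoothing operator standing in for AirNet-SNL's trained convolutional network; it illustrates unrolling in general, not the paper's architecture, and the step rule and smoother are assumptions.

```python
import numpy as np

def landweber_step(x, A, b, step):
    """Data-consistency update: x + step * A^T (b - A x)."""
    return x + step * A.T @ (b - A @ x)

def smooth(x):
    """Placeholder regularizer: neighbor averaging. AirNet-SNL trains a
    CNN here; this fixed filter only mimics the role it plays."""
    return 0.5 * x + 0.25 * (np.roll(x, 1) + np.roll(x, -1))

def unrolled_recon(A, b, n_iter=50):
    """Unroll n_iter (data consistency -> regularizer) stages; in the
    paper the whole unrolled chain is trained end-to-end."""
    step = 1.0 / np.linalg.norm(A, ord=2) ** 2   # keeps Landweber stable
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = smooth(landweber_step(x, A, b, step))
    return x
```

End-to-end training would replace `smooth` with a network and learn `step` jointly across all stages.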


Big-Data Multi-Energy Iterative Volumetric Reconstruction Methods for As-Built Validation & Verification Applications

Jimenez, Edward S.

This document archives the results developed by the Laboratory Directed Research and Development (LDRD) project sponsored by Sandia National Laboratories (SNL). In this work, it is shown that SNL has developed the first known high-energy hyperspectral computed tomography system for industrial and security applications. The main results gained from this work include dramatic beam-hardening artifact reduction by using the hyperspectral reconstruction as a bandpass filter without the need for any other computation or pre-processing; additionally, this work demonstrated the ability to use supervised and unsupervised learning methods on the hyperspectral reconstruction data for the application of materials characterization and identification, which is not possible using traditional computed tomography systems or approaches.
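The bandpass idea above is simple to express: keep only a band of the hyperspectral axis, discarding the low-energy channels that dominate beam hardening, with no other computation. A hypothetical sketch (the axis order of angle, detector, energy and the channel indices are assumptions):

```python
import numpy as np

def bandpass_channels(hcts_sinogram, lo, hi):
    """Treat the hyperspectral axis as a bandpass filter: keep only the
    energy channels in [lo, hi) and sum them, discarding the low-energy
    channels that dominate beam hardening. No other pre-processing is
    involved. Axis order (angle, detector, energy) is an assumption."""
    return hcts_sinogram[..., lo:hi].sum(axis=-1)
```

The banded sinogram can then be fed to any standard reconstruction routine.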


Passenger baggage object database (PBOD)

AIP Conference Proceedings

Gittinger, Jaxon M.; Suknot, April S.; Jimenez, Edward S.; Spaulding, Terry W.; Wenrich, Steven A.

Detection of anomalies of interest in x-ray images is an ever-evolving problem that requires the rapid development of automatic detection algorithms. Automatic detection algorithms are developed using machine learning techniques, which would otherwise require developers to obtain the x-ray machine used to create the training images and to compile all associated metadata for those images by hand. The Passenger Baggage Object Database (PBOD) and its data acquisition application were designed and developed for acquiring and persisting 2-D and 3-D x-ray image data and associated metadata. PBOD was specifically created to capture simulated airline passenger "stream of commerce" luggage data, but could be applied to other areas of x-ray imaging that utilize machine-learning methods.


Unsupervised learning methods to perform material identification tasks on spectral computed tomography data

Proceedings of SPIE - The International Society for Optical Engineering

Gallegos, Isabel O.; Koundinyan, Srivathsan P.; Suknot, April S.; Jimenez, Edward S.; Thompson, Kyle R.; Goodner, Ryan N.

Sandia National Laboratories has developed a method that applies machine learning methods to high-energy spectral X-ray computed tomography data to identify material composition for every reconstructed voxel in the field-of-view. While initial experiments led by Koundinyan et al. demonstrated that supervised machine learning techniques perform well in identifying a variety of classes of materials, this work presents an unsupervised approach that differentiates isolated materials with highly similar properties and can be applied to spectral computed tomography data to identify materials more accurately than traditional approaches. Additionally, if regions of the spectrum for multiple voxels become unusable due to artifacts, this method can still reliably perform material identification. This enhanced capability can tremendously impact fields in security, industry, and medicine that leverage non-destructive evaluation for detection, verification, and validation applications.


Material identification with multichannel radiographs

AIP Conference Proceedings

Collins, Noelle M.; Jimenez, Edward S.; Thompson, Kyle R.

This work aims to validate previous exploratory work done to characterize materials by matching their attenuation profiles using a multichannel radiograph given an initial energy spectrum. The experiment was performed in order to evaluate the effects of noise on the resulting attenuation profiles, which was ignored in simulation. Spectrum measurements have also been collected from various materials of interest. Additionally, a MATLAB optimization algorithm has been applied to these candidate spectrum measurements in order to extract an estimate of the attenuation profile. Being able to characterize materials through this nondestructive method has an extensive range of applications for a wide variety of fields, including quality assessment, industry, and national security.
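The estimation problem underlying this approach can be sketched as fitting the polychromatic Beer-Lambert model I(t) = Σ_E S(E) e^(-μ(E) t) to measurements at several thicknesses. The code below uses a plain projected gradient descent as a stand-in for the MATLAB optimization algorithm mentioned above; the energy discretization, initialization, and step size are illustrative assumptions.

```python
import numpy as np

def forward(mu, spectrum, thicknesses):
    """Polychromatic Beer-Lambert model: I(t) = sum_E S(E) exp(-mu(E) t)."""
    return np.exp(-np.outer(thicknesses, mu)) @ spectrum

def fit_attenuation(spectrum, thicknesses, measured, n_iter=20000, lr=0.01):
    """Estimate mu(E) by projected gradient descent on the squared
    residual; a simple stand-in for the optimizer used in the paper."""
    mu = np.full_like(spectrum, 0.5)
    for _ in range(n_iter):
        E = np.exp(-np.outer(thicknesses, mu))               # (T, n_E)
        r = E @ spectrum - measured                          # residuals
        grad = -(thicknesses[:, None] * E * spectrum).T @ r  # d(loss)/d(mu)
        mu = np.maximum(mu - lr * grad, 0.0)                 # attenuation >= 0
    return mu
```

Noise enters through `measured`; in practice the exponential model makes this fit ill-conditioned, which is one reason the paper evaluates noise effects explicitly.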


Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications

Proceedings of SPIE - The International Society for Optical Engineering

Jimenez, Edward S.; Thompson, Kyle R.; Stohn, Adriana S.; Goodner, Ryan N.

Sandia National Laboratories has recently developed the capability to acquire multi-channel radiographs for multiple research and development applications in industry and security. This capability allows x-ray radiographs or sinogram data to be acquired at up to 300 keV with up to 128 channels per pixel. This work will investigate whether multiple quality metrics for computed tomography can actually benefit from binned projection data compared to traditionally acquired grayscale sinogram data. Features and metrics to be evaluated include the ability to distinguish between two different materials with similar absorption properties, artifact reduction, and signal-to-noise for both raw data and reconstructed volumetric data. The impact of this technology on non-destructive evaluation, national security, and industry is wide-ranging and has the potential to improve upon many inspection methods such as dual-energy methods, material identification, object segmentation, and computer vision on radiographs.


Cluster-based approach to a multi-GPU CT reconstruction algorithm

2014 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2014

Orr, Laurel J.; Jimenez, Edward S.; Thompson, Kyle R.

Conventional CPU-based algorithms for Computed Tomography reconstruction lack the computational efficiency necessary to process large, industrial datasets in a reasonable amount of time. Specifically, processing time for a single-pass, trillion volumetric pixel (voxel) reconstruction requires months to reconstruct using a high performance CPU-based workstation. An optimized, single workstation multi-GPU approach has shown performance increases by 2-3 orders-of-magnitude; however, reconstruction of future-size, trillion voxel datasets can still take an entire day to complete. This paper details an approach that further decreases runtime and allows for more diverse workstation environments by using a cluster of GPU-capable workstations. Due to the irregularity of the reconstruction tasks throughout the volume, using a cluster of multi-GPU nodes requires inventive topological structuring and data partitioning to avoid network bottlenecks and achieve optimal GPU utilization. This paper covers the cluster layout and non-linear weighting scheme used in this high-performance multi-GPU CT reconstruction algorithm and presents experimental results from reconstructing two large-scale datasets to evaluate this approach's performance and applicability to future-size datasets. Specifically, our approach yields up to a 20 percent improvement for large-scale data.
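The data-partitioning problem described above can be illustrated with a simple cost-balanced slab split. The sketch below is not the paper's non-linear weighting scheme, only a generic stand-in: it assumes per-slice reconstruction costs are known and assigns contiguous slabs of roughly equal total cost to each node.

```python
def partition_slices(costs, n_nodes):
    """Split contiguous slice indices into n_nodes slabs of roughly
    equal total cost. An illustrative stand-in for the paper's
    non-linear weighting scheme: per-slice costs are assumed known, and
    every node receives at least one slice."""
    target = sum(costs) / n_nodes
    slabs, start, acc = [], 0, 0.0
    for i, c in enumerate(costs):
        acc += c
        left = len(costs) - (i + 1)        # slices not yet assigned
        need = n_nodes - (len(slabs) + 1)  # slabs still to be opened
        if len(slabs) < n_nodes - 1 and left >= need and (acc >= target or left == need):
            slabs.append((start, i + 1))   # half-open slab [start, i+1)
            start, acc = i + 1, 0.0
    slabs.append((start, len(costs)))
    return slabs
```

Because reconstruction cost varies non-linearly across the volume, equal-cost slabs are generally unequal in slice count, which is what keeps the GPUs uniformly busy.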


Object composition identification via mediated-reality supplemented radiographs

2014 IEEE Nuclear Science Symposium and Medical Imaging Conference, NSS/MIC 2014

Jimenez, Edward S.; Orr, Laurel J.; Thompson, Kyle R.

This exploratory work investigates the feasibility of extracting linear attenuation functions with respect to energy from a multi-channel radiograph of an object of interest composed of a homogeneous material by simulating the entire imaging system combined with a digital phantom of the object of interest and leveraging this information along with the acquired multi-channel image. This synergistic combination of information allows for improved estimates on not only the attenuation for an effective energy, but for the entire spectrum of energy that is coincident with the detector elements. Material composition identification from radiographs would have wide applications in both medicine and industry. This work will focus on industrial radiography applications and will analyse a range of materials that vary in attenuative properties. This work shows that using iterative solvers holds encouraging potential to fully solve for the linear attenuation profile for the object and material of interest when the imaging system is characterized with respect to initial source x-ray energy spectrum, scan geometry, and accurate digital phantom.


Exploration of available feature detection and identification systems and their performance on radiographs

Proceedings of SPIE - The International Society for Optical Engineering

Wantuch, Andrew C.; Vita, Joshua V.; Jimenez, Edward S.; Bray, Iliana E.

Despite object detection, recognition, and identification being very active areas of computer vision research, many of the available tools to aid in these processes are designed with only photographs in mind. Although some algorithms used specifically for feature detection and identification may not take explicit advantage of the colors available in the image, they still under-perform on radiographs, which are grayscale images. We are especially interested in the robustness of these algorithms, specifically their performance on a preexisting database of X-ray radiographs in compressed JPEG form, with multiple ways of describing pixel information. We will review various aspects of the performance of available feature detection and identification systems, including MATLAB's Computer Vision Toolbox, VLFeat, and OpenCV, on our non-ideal database. In the process, we will explore possible reasons for the algorithms' lessened ability to detect and identify features from the X-ray radiographs.


Hybrid object detection system for x-ray radiographs

Proceedings of SPIE - The International Society for Optical Engineering

Vita, Joshua V.; Wantuch, Andrew C.; Jimenez, Edward S.; Bray, Iliana E.

While object detection is a relatively well-developed field with respect to visible light photographs, there are significantly fewer algorithms designed to work with other imaging modalities. X-ray radiographs have many unique characteristics that introduce additional challenges that can cause common image processing and object detection algorithms to begin to fail. Examples of these problematic attributes include the fact that radiographs are only represented in gray scale with similar textures and that transmission overlap occurs when multiple objects are overlaid on top of each other. In this paper we not only analyze the effectiveness of common object detection techniques as applied to our specific database, but also outline how we combined various techniques to improve overall performance. While significant strides have been made towards developing a robust object detection algorithm for use with the given database, it is still a work in progress. Further research will be needed in order to deal with the specific obstacles posed by radiographs and X-ray imaging systems. Success in this project would have disruptive repercussions in fields ranging from medical imaging to manufacturing quality assurance and national security.


Developing imaging capabilities of multi-channel detectors comparable to traditional x-ray detector technology for industrial and security applications

Proceedings of SPIE - The International Society for Optical Engineering

Jimenez, Edward S.; Collins, Noelle M.; Holswade, Erica A.; Devonshire, Madison L.; Thompson, Kyle R.

This work will investigate the imaging capabilities of the Multix multi-channel linear array detector and its potential suitability for big-data industrial and security applications compared to currently deployed detector technology. Multi-channel imaging data holds great promise not only for finer-resolution materials classification, but also for materials identification and elevated data quality in various radiography and computed tomography applications. The potential pitfall is the signal quality contained within individual channels, as well as the exposure and acquisition time required to obtain images comparable to those of traditional configurations. This work will present results for these detector technologies as they pertain to a subset of materials of interest to the industrial and security communities; namely, water, copper, lead, polyethylene, and tin.


Exploring the feasibility of traditional image querying tasks for industrial radiographs

Proceedings of SPIE - The International Society for Optical Engineering

Bray, Iliana E.; Tsai, Stephany J.; Jimenez, Edward S.

Although there have been great strides in object recognition with optical images (photographs), there has been comparatively little research into object recognition for X-ray radiographs. Our exploratory work contributes to this area by creating an object recognition system designed to recognize components from a related database of radiographs. Object recognition for radiographs must be approached differently than for optical images, because radiographs have much less color-based information to distinguish objects, and they exhibit transmission overlap that alters perceived object shapes. The dataset used in this work contained more than 55,000 intermixed radiographs and photographs, all in a compressed JPEG form and with multiple ways of describing pixel information. For this work, a robust and efficient system is needed to combat problems presented by properties of the X-ray imaging modality, the large size of the given database, and the quality of the images contained in said database. We have explored various pre-processing techniques to clean the cluttered and low-quality images in the database, and we have developed our object recognition system by combining multiple object detection and feature extraction methods. We present the preliminary results of the still-evolving hybrid object recognition system.


Exploring mediated reality to approximate X-ray attenuation coefficients from radiographs

Proceedings of SPIE - The International Society for Optical Engineering

Jimenez, Edward S.; Orr, Laurel J.; Morgan, Megan L.; Thompson, Kyle R.

Estimation of the x-ray attenuation properties of an object with respect to the energy emitted from the source is a challenging task for traditional Bremsstrahlung sources. This exploratory work attempts to estimate the x-ray attenuation profile for the energy range of a given Bremsstrahlung profile. Previous work has shown that calculating a single effective attenuation value for a polychromatic source is not accurate due to the non-linearities associated with the image formation process. Instead, we completely characterize the imaging system virtually and utilize an iterative search method/constrained optimization technique to approximate the attenuation profile of the object of interest. This work presents preliminary results from various approaches that were investigated. The early results illustrate the challenges associated with these techniques and the potential for obtaining an accurate estimate of the attenuation profile for objects composed of homogeneous materials.


Irregular large-scale computed tomography on multiple graphics processors improves energy-efficiency metrics for industrial applications

Proceedings of SPIE - The International Society for Optical Engineering

Jimenez, Edward S.; Goodman, Eric G.; Park, Ryeojin; Orr, Laurel J.; Thompson, Kyle R.

This paper will investigate energy-efficiency for various real-world industrial computed-tomography reconstruction algorithms, with both CPU- and GPU-based implementations. This work shows that the energy required for a given reconstruction depends on performance and problem size. There are many ways to describe performance and energy efficiency, thus this work will investigate multiple metrics including performance-per-watt, energy-delay product, and energy consumption. This work found that irregular GPU-based approaches realized tremendous savings in energy consumption when compared to CPU implementations while also significantly improving the performance-per-watt and energy-delay product metrics. Additional energy savings and further metric improvements were realized in the GPU-based reconstructions by improving storage I/O through a parallel MIMD-like modularization of the compute and I/O tasks.
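The metrics named above are straightforward to compute from runtime, average power draw, and problem size; a minimal sketch (units and the voxel-based work measure are assumptions):

```python
def energy_metrics(runtime_s, avg_power_w, voxels):
    """Compute energy (J), performance-per-watt (voxels/s/W), and the
    energy-delay product (J*s) for one reconstruction run. Treating
    reconstructed voxels as the unit of work is an assumption."""
    energy_j = avg_power_w * runtime_s
    return {
        "energy_j": energy_j,
        "perf_per_watt": voxels / runtime_s / avg_power_w,
        "energy_delay_product": energy_j * runtime_s,
    }
```

The energy-delay product penalizes slow-but-frugal configurations, which is why a fast GPU run can win on it even at a higher instantaneous power draw.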


A high-performance GPU-based forward-projection model for computed tomography applications

Proceedings of SPIE - The International Society for Optical Engineering

Perez, Ismael P.; Bauerle, Matthew; Jimenez, Edward S.; Thompson, Kyle R.

This work describes a high-performance approach to radiograph (i.e. X-ray image for this work) simulation for arbitrary objects. The generation of radiographs is more generally known as the forward projection imaging model. The formation of radiographs is very computationally expensive and is not typically approached for large-scale applications such as industrial radiography. The approach described in this work revolves around a single GPU-based implementation that performs the attenuation calculation in a massively parallel environment. Additionally, further performance gains are realized by exploiting the GPU-specific hardware. Early results show that using a single GPU can increase computational performance by three orders-of-magnitude for volumes of 1000³ voxels and images with 1000² pixels.
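The forward projection being accelerated is, at its core, a set of attenuation line integrals through the volume. The toy CPU projector below samples parallel rays with nearest-neighbor interpolation and sums along each ray; on a GPU, each (angle, detector pixel) pair becomes one thread and texture hardware performs the interpolation. The geometry and sampling choices here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def forward_project(volume, angles):
    """Toy parallel-beam projector: for each angle, march parallel rays
    through a square 2-D volume with nearest-neighbor sampling and sum,
    approximating each detector pixel's attenuation line integral."""
    n = volume.shape[0]
    c = (n - 1) / 2.0
    ts = np.linspace(-c, c, 2 * n)      # sample positions along a ray
    step = ts[1] - ts[0]
    det = np.arange(n) - c              # detector pixel offsets
    sino = np.zeros((len(angles), n))
    for a, th in enumerate(angles):
        d, s = np.cos(th), np.sin(th)   # ray direction
        for j, u in enumerate(det):     # one ray per detector pixel
            xi = np.rint(c - u * s + ts * d).astype(int)
            yi = np.rint(c + u * d + ts * s).astype(int)
            ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
            sino[a, j] = volume[xi[ok], yi[ok]].sum() * step
    return sino
```

The two inner loops are what the massively parallel GPU implementation flattens into independent threads.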


High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications

Jimenez, Edward S.; Orr, Laurel J.; Thompson, Kyle R.

The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing Unit (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.


Triangle finding: How graph theory can help the semantic web

CEUR Workshop Proceedings

Jimenez, Edward S.; Goodman, Eric G.

RDF data can be thought of as a graph where the subject and objects are vertices and the predicates joining them are edge attributes. Despite decades of research in graph theory, very little of this work has been applied to RDF data sets and it has been largely ignored by the Semantic Web research community. We present a case study of triangle finding, where existing algorithms from graph theory provide excellent complexity bounds, growing at a significantly slower rate than algorithms used within existing RDF triple stores. In order to scale to large volumes of data, the Semantic Web community should look to the many existing graph algorithms.
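The triangle-finding case study can be made concrete with the classic edge-iterator algorithm, which intersects neighbor sets per edge; with hashed sets it grows roughly as O(m^1.5) on sparse graphs, a far better rate than a naive three-way self-join over triples. A minimal sketch (undirected edges, each listed once):

```python
def count_triangles(edges):
    """Edge-iterator triangle counting: for each undirected edge (u, v),
    intersect the neighbor sets of u and v; each triangle is found once
    per edge, hence the division by 3."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    found = sum(len(adj[u] & adj[v]) for u, v in edges)
    return found // 3
```

In RDF terms, vertices are subjects/objects and the same intersection replaces a join over three triple patterns.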


Scalable hashing for shared memory supercomputers

Proceedings of 2011 SC - International Conference for High Performance Computing, Networking, Storage and Analysis

Goodman, Eric G.; Lemaster, M.N.; Jimenez, Edward S.

Hashing is a fundamental technique in computer science to allow O(1) insert and lookups of items in an associative array. Here we present several thread coordination and hashing strategies and compare and contrast their performance on large, shared memory symmetric multiprocessor machines, each possessing between a half and a full terabyte of memory. We show how our approach can be used as a key kernel for fundamental paradigms such as dynamic programming and MapReduce. We further show that a set of approaches yields close to linear speedup for both uniform random and more difficult power law distributions. This scalable performance is in spite of the fact that our set of approaches is not completely lock-free. Our experimental results utilize and compare an SGI Altix UV with 4 Xeon processors (32 cores) and a Cray XMT with 128 processors. On the scale of data we addressed, on the order of 5 billion integers, we show that the Altix UV far exceeds the performance of the Cray XMT for power law distributions. However, the Cray XMT exhibits greater scalability.
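One of the simpler thread-coordination strategies in this space is lock striping: hash each key to one of a fixed pool of locks so that unrelated inserts rarely contend. The sketch below is a generic illustration in that spirit, not the paper's implementation, and like the paper's approaches it is not lock-free.

```python
import threading

class StripedHash:
    """Associative array with lock striping: keys hash to one of
    n_stripes locks, so threads contend only when they touch the same
    stripe. A generic sketch, not the strategies benchmarked in the
    paper."""
    def __init__(self, n_stripes=64):
        self._locks = [threading.Lock() for _ in range(n_stripes)]
        self._buckets = [dict() for _ in range(n_stripes)]

    def _stripe(self, key):
        return hash(key) % len(self._locks)

    def insert(self, key, value):
        i = self._stripe(key)
        with self._locks[i]:
            self._buckets[i][key] = value

    def lookup(self, key, default=None):
        i = self._stripe(key)
        with self._locks[i]:
            return self._buckets[i].get(key, default)
```

With enough stripes, contention drops toward zero for uniform keys, while skewed (power-law) keys concentrate on a few stripes; that asymmetry is exactly what the paper's distribution comparison probes.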


High-performance computing applied to semantic databases

Jimenez, Edward S.; Goodman, Eric G.

To date, the application of high-performance computing resources to Semantic Web data has largely focused on commodity hardware and distributed memory platforms. In this paper we make the case that more specialized hardware can offer superior scaling and close to an order of magnitude improvement in performance. In particular we examine the Cray XMT. Its key characteristics, a large, global shared-memory, and processors with a memory-latency tolerant design, offer an environment conducive to programming for the Semantic Web and have engendered results that far surpass current state of the art. We examine three fundamental pieces requisite for a fully functioning semantic database: dictionary encoding, RDFS inference, and query processing. We show scaling up to 512 processors (the largest configuration we had available), and the ability to process 20 billion triples completely in-memory.
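The first of those pieces, dictionary encoding, replaces each RDF term with a compact integer ID so that triples become integer tuples and joins become integer comparisons. A serial sketch of that kernel follows; the paper's contribution is parallelizing this on shared-memory hardware, so this code is only illustrative.

```python
class Dictionary:
    """RDF dictionary encoding: map each term (URI or literal) to a
    compact integer ID, with a reverse table for decoding results."""
    def __init__(self):
        self._ids = {}
        self._terms = []

    def encode(self, term):
        if term not in self._ids:
            self._ids[term] = len(self._terms)
            self._terms.append(term)
        return self._ids[term]

    def decode(self, i):
        return self._terms[i]

def encode_triples(dictionary, triples):
    """Turn (subject, predicate, object) string triples into int tuples."""
    return [tuple(dictionary.encode(t) for t in triple) for triple in triples]
```

Long URIs shrink to machine words, which is what makes holding tens of billions of triples in memory plausible.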
