Publications

Performance evaluation of two optical architectures for task-specific compressive classification

Optical Engineering

Redman, Brian J.; Dagel, Amber L.; Galiardi, Meghan A.; LaCasse, Charles F.; Quach, Tu-Thach Q.; Birch, Gabriel C.

Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers. However, machine learning classification algorithms do not require the same data representation used by humans. We investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum of an optimized compressive sensing matrix, and an architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these systems built for compressed classification of the Modified National Institute of Standards and Technology (MNIST) dataset. Both architectures achieve classification accuracies within 3% of the optimized sensing matrix for compression ranging from 98.85% to 99.87%. In the presence of noise, the performance of the systems with 98.85% compression was between that of an F/2 and an F/4 imaging system.

Efficient Generalized Boundary Detection Using a Sliding Information Distance

IEEE Transactions on Signal Processing

Field, Richard; Quach, Tu-Thach Q.; Ting, Christina T.

We present a general machine learning algorithm for boundary detection in signals, based on an efficient, accurate, and robust approximation of the universal normalized information distance. Our approach uses an adaptive sliding information distance (SLID) combined with a wavelet-based approach for peak identification to locate the boundaries. Special emphasis is placed on developing an adaptive formulation of SLID to handle general signals with multiple unknown and/or drifting section lengths. Although specialized algorithms may outperform SLID when domain knowledge is available, these algorithms are limited to specific applications and do not generalize; SLID excels in such general settings. We demonstrate the versatility and efficacy of SLID on a variety of signal types, including synthetically generated sequences of tokens, binary executables for reverse engineering applications, and time series of seismic events.

Optimizing a Compressive Imager for Machine Learning Tasks

Conference Record - Asilomar Conference on Signals, Systems and Computers

Redman, Brian J.; Calzada, Daniel; Wingo, Jamie; Quach, Tu-Thach Q.; Galiardi, Meghan; Dagel, Amber L.; LaCasse, Charles F.; Birch, Gabriel C.

Images are often not the optimal data form for performing machine learning tasks such as scene classification. Compressive classification can reduce the size, weight, and power of a system by selecting the minimum information needed while maximizing classification accuracy. In this work we present designs and simulations of prism arrays which realize sensing matrices using a monolithic element. The sensing matrix is optimized using a neural network architecture to maximize classification accuracy on the MNIST dataset while accounting for the blurring caused by the size of each prism. Simulated optical hardware performance for a range of prism sizes is reported.
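
The core idea can be sketched as a trainable linear sensing layer (the optically realized sensing matrix) followed by a small digital classifier. The following is a minimal illustration assuming PyTorch; the measurement count, layer widths, and training details are placeholders rather than the architecture used in the paper:

```python
import torch
import torch.nn as nn

# Hypothetical compressive classifier: a trainable sensing matrix
# (realized optically in the paper) followed by a small digital back end.
class CompressiveClassifier(nn.Module):
    def __init__(self, n_pixels=28 * 28, n_measurements=16, n_classes=10):
        super().__init__()
        # The sensing matrix: maps the scene to a few measurements.
        self.sensing = nn.Linear(n_pixels, n_measurements, bias=False)
        # Classifier operating on the compressed measurements.
        self.classifier = nn.Sequential(
            nn.Linear(n_measurements, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        x = x.view(x.size(0), -1)      # flatten images
        y = self.sensing(x)            # compressive measurements
        return self.classifier(y)

# Example: 16 measurements from a 784-pixel image is roughly 98% compression.
model = CompressiveClassifier()
dummy = torch.rand(8, 1, 28, 28)       # stand-in for MNIST digits
logits = model(dummy)
print(logits.shape)                    # torch.Size([8, 10])
```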

Generalized Boundary Detection Using Compression-based Analytics

ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings

Ting, Christina T.; Field, Richard V.; Quach, Tu-Thach Q.; Bauer, Travis L.

We present a new method for boundary detection within sequential data using compression-based analytics. Our approach is to approximate the information distance between two adjacent sliding windows within the sequence. Large values in the distance metric are indicative of boundary locations. A new algorithm is developed, referred to as sliding information distance (SLID), that provides a fast, accurate, and robust approximation to the normalized information distance. A modified smoothed z-score algorithm is used to locate peaks in the distance metric, indicating boundary locations. A variety of data sources are considered, including text and audio, to demonstrate the efficacy of our approach.
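
The underlying mechanism can be illustrated with a toy sketch that slides two adjacent windows over a byte sequence and computes a compression-based distance between them; here zlib stands in for the compressor, the window and step sizes are arbitrary, and a simple arg-max replaces the smoothed z-score peak detector described above:

```python
import zlib

def c(data: bytes) -> int:
    """Compressed length as a practical stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy = c(x), c(y)
    return (c(x + y) - min(cx, cy)) / max(cx, cy)

def sliding_distance(data: bytes, window: int = 256, step: int = 32):
    """NCD between adjacent windows at each offset; peaks suggest boundaries."""
    return [(i, ncd(data[i - window:i], data[i:i + window]))
            for i in range(window, len(data) - window + 1, step)]

# Toy example: two sections with clearly different byte statistics.
data = b"abcd" * 1000 + b"wxyz0123" * 500
boundary, peak = max(sliding_distance(data), key=lambda t: t[1])
print("suspected boundary near byte", boundary)
```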

Characterization of 3D printed computational imaging element for use in task-specific compressive classification

Proceedings of SPIE - The International Society for Optical Engineering

Birch, Gabriel C.; Redman, Brian J.; Dagel, Amber L.; Kaehr, Bryan J.; Dagel, Daryl D.; LaCasse, Charles F.; Quach, Tu-Thach Q.; Galiardi, Meghan

We investigate the feasibility of additively manufacturing optical components to accomplish task-specific classification in a computational imaging device. We report on the design, fabrication, and characterization of a non-traditional optical element that physically realizes an extremely compressed, optimized sensing matrix. The compression is achieved by designing an optical element that only samples the regions of object space most relevant to the classification algorithms, as determined by machine learning algorithms. The design process for the proposed optical element converts the optimal sensing matrix to a refractive surface composed of a minimized set of non-repeating, unique prisms. The optical elements are 3D printed using a Nanoscribe, which uses two-photon polymerization for high-precision printing. We describe the design of several computational imaging prototype elements. We characterize these components, including surface topography, surface roughness, and angle of prism facets of the as-fabricated elements.

Sparse Data Acquisition on Emerging Memory Architectures

IEEE Access

Quach, Tu-Thach Q.; Agarwal, Sapan A.; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

Emerging memory devices, such as resistive crossbars, have the capacity to store large amounts of data in a single array. Acquiring the data stored in large-capacity crossbars in a sequential fashion can become a bottleneck. We present practical methods, based on sparse sampling, to quickly acquire sparse data stored on emerging memory devices that support the basic summation kernel, reducing the acquisition time from linear to sub-linear. The experimental results show that at least an order of magnitude improvement in acquisition time can be achieved when the data are sparse. In addition, we show that the energy cost associated with our approach is competitive with that of the sequential method.
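
A software analogue of the idea, assuming the array can evaluate signed weighted summations over its contents, is to treat each summation as one row of a compressed-sensing measurement matrix and recover the sparse contents with a standard solver. The sketch below uses scikit-learn's orthogonal matching pursuit and illustrative sizes; it is not the acquisition procedure from the paper:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, k, m = 1024, 10, 128            # array size, nonzero entries, measurements

# Hypothetical crossbar contents: a long, mostly-zero data vector.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

# Each measurement is one signed weighted summation over the array,
# the kind of summation kernel a crossbar read can evaluate in one step.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = A @ x                          # m << n summations

# Recover the sparse contents from the few summations.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
omp.fit(A, y)
print("max reconstruction error:", np.max(np.abs(omp.coef_ - x)))
```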

Task-specific computational refractive element via two-photon additive manufacturing

Optics InfoBase Conference Papers

Redman, Brian J.; Dagel, Amber L.; Kaehr, Bryan; LaCasse, Charles F.; Birch, Gabriel C.; Quach, Tu-Thach Q.; Galiardi, Meghan A.

We report on the design and fabrication of a computational imaging element used within a compressive task-specific imaging system. Fabrication via two-photon 3D printing is reported, as well as characterization of the fabricated element.

Polarimetric synthetic-aperture-radar change-type classification with a hyperparameter-free open-set classifier

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Koch, Mark W.; West, Roger D.; Riley, Robert; Quach, Tu-Thach Q.

Synthetic aperture radar (SAR) is a remote sensing technology that can operate 24/7: it is an all-weather system that can image at any time except in the most extreme conditions. Coherent change detection (CCD) in SAR can identify minute changes, such as vehicle tracks, that occur between images taken at different times. Building on polarimetric SAR capabilities, researchers have developed decompositions that allow one to automatically classify the scattering type in a single polarimetric SAR (PolSAR) image set. We extend that work to CCD in PolSAR images to identify the type of change, such as change caused by no-return regions, trees, or ground. This work could then be used as a preprocessor for algorithms that automatically detect tracks.

Computing with spikes: The advantage of fine-grained timing

Neural Computation

Verzi, Stephen J.; Rothganger, Fredrick R.; Parekh, Ojas D.; Quach, Tu-Thach Q.; Miner, Nadine E.; Vineyard, Craig M.; James, Conrad D.; Aimone, James B.

Neural-inspired spike-based computing machines often claim to achieve considerable advantages in terms of energy and time efficiency by using spikes for computation and communication. However, fundamental questions about spike-based computation remain unanswered. For instance, how much advantage do spike-based approaches have over conventional methods, and under what circumstances does spike-based computing provide a comparative advantage? Simply implementing existing algorithms using spikes as the medium of computation and communication is not guaranteed to yield an advantage. Here, we demonstrate that spike-based communication and computation within algorithms can increase throughput, and they can decrease energy cost in some cases. We present several spiking algorithms, including sorting a set of numbers in ascending/descending order, as well as finding the maximum, minimum, or median of a set of numbers. We also provide an example application: a spiking median-filtering approach for image processing providing a low-energy, parallel implementation. The algorithms and analyses presented here demonstrate that spiking algorithms can provide performance advantages and offer efficient computation of fundamental operations useful in more complex algorithms.
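
As an illustration of how spike timing alone can carry out a fundamental operation, the toy sketch below simulates a time-to-first-spike sort in ordinary Python: "neuron" i fires at a time step equal to its (non-negative integer) value, and reading the spikes in arrival order yields the sorted list. This captures only the timing intuition, not the paper's spiking circuits or energy analysis:

```python
def spike_sort(values):
    """Toy discrete-time simulation of a time-to-first-spike sort: neuron i
    fires at time step values[i], and recording spikes in arrival order
    returns the values in ascending order.  Requires non-negative integers;
    runtime scales with the largest value rather than n log n."""
    order, fired = [], [False] * len(values)
    for t in range(max(values) + 1):           # advance the global clock
        for i, v in enumerate(values):
            if not fired[i] and v == t:        # neuron i spikes now
                order.append(v)
                fired[i] = True
    return order

print(spike_sort([5, 2, 9, 1, 7]))             # -> [1, 2, 5, 7, 9]
```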

Sparse coding for N-gram feature extraction and training for file fragment classification

IEEE Transactions on Information Forensics and Security

Wang, Felix W.; Quach, Tu-Thach Q.; Wheeler, Jason W.; Aimone, James B.; James, Conrad D.

File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features, such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, contiguous sequences of bytes, of different sizes. These dictionaries can then be used to estimate n-gram frequencies for a given file fragment, for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers, such as support vector machines, over multiple file types. Experimentally, we achieved significantly better classification results than existing methods, especially when the features were used to supplement existing hand-engineered features.
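
A minimal sketch of the feature-extraction step, assuming scikit-learn and purely illustrative parameters (n-gram size, dictionary size, sparsity penalty), might look as follows; the classifier training and the actual file-fragment corpora are omitted:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

def ngram_matrix(fragment: bytes, n: int = 4) -> np.ndarray:
    """Stack the overlapping n-grams of a file fragment as rows
    (one sample per n-gram, each byte scaled to [0, 1])."""
    grams = [list(fragment[i:i + n]) for i in range(len(fragment) - n + 1)]
    return np.asarray(grams, dtype=float) / 255.0

# Stand-in 'fragment'; in practice these come from carved files on disk.
fragment = bytes(rng.integers(0, 256, 4096, dtype=np.uint8))
X = ngram_matrix(fragment, n=4)

# Learn a sparse dictionary over n-grams; the sparse codes (or their
# aggregate activations) then serve as features for a classifier.
dico = MiniBatchDictionaryLearning(n_components=32, alpha=0.5,
                                   batch_size=256, random_state=0)
codes = dico.fit_transform(X)
features = np.abs(codes).sum(axis=0)   # one feature vector per fragment
print(features.shape)                  # (32,)
```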

Optical systems for task-specific compressive classification

Proceedings of SPIE - The International Society for Optical Engineering

Birch, Gabriel C.; Quach, Tu-Thach Q.; Galiardi, Meghan; LaCasse, Charles F.; Dagel, Amber L.

Advancements in machine learning (ML) and deep learning (DL) have enabled imaging systems to perform complex classification tasks, opening numerous problem domains to solutions driven by high-quality imagers coupled with algorithmic elements. However, current ML and DL methods for target classification typically rely upon algorithms applied to data measured by traditional imagers. This design paradigm fails to let the ML and DL algorithms influence the sensing device itself and treats the optimization of the sensor and the algorithm as separate, sequential steps. Additionally, the current paradigm narrowly investigates traditional images, and therefore traditional imaging hardware, as the primary means of data collection. We investigate alternative architectures for computational imaging systems optimized for specific classification tasks, such as digit classification. This involves a holistic approach to the design of the system, from the imaging hardware to the algorithms. Techniques to find optimal compressive representations of training data are discussed, and the most useful object-space information is evaluated. Methods to translate task-specific compressed data representations into non-traditional computational imaging hardware are described, followed by simulations of such imaging devices coupled with algorithmic classification using ML and DL techniques. Our approach allows for inexpensive, efficient sensing systems. Reduced storage and bandwidth are achievable as well, since the data representations are compressed measurements, which is especially important for high-data-volume systems.

Convolutional networks for vehicle track segmentation

Journal of Applied Remote Sensing

Quach, Tu-Thach Q.

Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images of the same scene taken at different times, rely on simple and fast models to label track pixels. These models, however, are unable to capture natural track features, such as continuity and parallelism. More powerful but computationally expensive models can be used in offline settings. We present an approach that uses dilated convolutional networks consisting of a series of 3×3 convolutions to segment vehicle tracks. The design of our networks considers the fact that remote sensing applications tend to operate at low power and have limited training data. As a result, we aim for small and efficient networks that can be trained end-to-end to learn natural track features entirely from limited training data. We demonstrate that our six-layer network, trained on just 90 images, is computationally efficient and improves the F-score on a standard dataset to 0.992, up from 0.959 obtained by the current state-of-the-art method.
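
A network in this spirit can be sketched in a few lines of PyTorch. The dilation schedule, channel counts, and training procedure below are illustrative assumptions, not the published six-layer configuration:

```python
import torch
import torch.nn as nn

# A small dilated fully-convolutional network in the spirit described above
# (six 3x3 convolutional layers with growing dilation).
def make_track_segmenter(channels=16):
    layers, in_ch = [], 1
    for dilation in (1, 1, 2, 4, 8):
        layers += [nn.Conv2d(in_ch, channels, 3, padding=dilation,
                             dilation=dilation),
                   nn.ReLU(inplace=True)]
        in_ch = channels
    layers += [nn.Conv2d(in_ch, 1, 3, padding=1)]   # per-pixel track logit
    return nn.Sequential(*layers)

net = make_track_segmenter()
ccd = torch.rand(1, 1, 256, 256)        # stand-in for a CCD image
track_prob = torch.sigmoid(net(ccd))    # per-pixel track probability map
print(track_prob.shape)                 # torch.Size([1, 1, 256, 256])
```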

Scalable Track Detection in SAR CCD Images

Chow, James G.; Quach, Tu-Thach Q.

Existing methods to detect vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times of the same scene, rely on simple, fast models to label track pixels. These models, however, are often too simple to capture natural track features such as continuity and parallelism. We present a simple convolutional network architecture consisting of a series of 3-by-3 convolutions to detect tracks. The network is trained end-to-end to learn natural track features entirely from data. The network is computationally efficient and improves the F-score on a standard dataset to 0.988, up from 0.907 obtained by the current state-of-the-art method.

Vehicle Track Segmentation Using Higher Order Random Fields

IEEE Geoscience and Remote Sensing Letters

Quach, Tu-Thach Q.

We present an approach to segment vehicle tracks in coherent change detection images, a product of combining two synthetic aperture radar images taken at different times. The approach uses multiscale higher order random field models to capture track statistics, such as curvatures and their parallel nature, that are not currently utilized in existing methods. These statistics are encoded as 3-by-3 patterns at different scales. The model can complete disconnected tracks often caused by sensor noise and various environmental effects. Coupling the model with a simple classifier, our approach is effective at segmenting salient tracks. We improve the F-measure on a standard vehicle track data set to 0.963, up from 0.897 obtained by the current state-of-the-art method.

Data Inferencing on Semantic Graphs (DISeG) Final Report

Wendt, Jeremy D.; Quach, Tu-Thach Q.; Zage, David J.; Field, Richard V.; Wells, Randall W.; Soundarajan, Sucheta S.; Cruz, Gerardo C.

The Data Inferencing on Semantic Graphs (DISeG) project was a two-year investigation of applying inferencing techniques (focusing on belief propagation) to social graphs, with a focus on semantic graphs (also called multi-layer graphs). While working this problem, we developed a new directed version of inferencing we call Directed Propagation (Chapters 2 and 4) and identified new semantic graph sampling problems (Chapter 3).

Vehicle track detection in CCD imagery via conditional random field

Conference Record - Asilomar Conference on Signals, Systems and Computers

Malinas, Rebecca; Quach, Tu-Thach Q.; Koch, Mark W.

Coherent change detection (CCD) can indicate subtle scene changes in synthetic aperture radar (SAR) imagery, such as vehicle tracks. Automatic track detection in SAR CCD is difficult due to various sources of low coherence other than the track activity we wish to detect. Existing methods require user cues or explicit modeling of track structure, which limit algorithms' ability to find tracks that do not fit the model. In this paper, we present a track detection approach based on a pixel-level labeling of the image via a conditional random field classifier, with features based on radial derivatives of local Radon transforms. Our approach requires no modeling of track characteristics and no user input, other than a training phase for the unary cost of the conditional random field. Experiments show that our method can successfully detect both parallel and single tracks in SAR CCD as well as correctly declare when no tracks are present.

Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding

Frontiers in Neuroscience

Agarwal, Sapan A.; Quach, Tu-Thach Q.; Parekh, Ojas D.; Hsia, Alexander H.; DeBenedictis, Erik; James, Conrad D.; Marinella, Matthew J.; Aimone, James B.

The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational properties of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an N × N crossbar, these two kernels can be O(N) more energy efficient than a conventional digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm when run with finite precision. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
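
The two kernels can be written down directly in NumPy as an idealized software model of the crossbar (no device noise, nonlinearity, or finite precision):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Conductance matrix stored in an N x N resistive crossbar (arbitrary units).
G = rng.uniform(0.1, 1.0, size=(N, N))

# Parallel read: applying a voltage vector to the rows and sensing the
# column currents computes a vector-matrix multiply in one step.
v = rng.uniform(0.0, 1.0, size=N)
i_out = v @ G                     # what the analog array ideally computes

# Parallel write: a rank-1 update programmed by driving rows with x and
# columns with y (modeled here as an ideal additive conductance change).
x = rng.uniform(0.0, 0.1, size=N)
y = rng.uniform(0.0, 0.1, size=N)
G += np.outer(x, y)

print(i_out.shape, G.shape)       # (64,) (64, 64)
```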

A diffusion model for maximizing influence spread in large networks

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Quach, Tu-Thach Q.; Wendt, Jeremy D.

Influence spread is an important phenomenon that occurs in many social networks. Influence maximization is the corresponding problem of finding the most influential nodes in these networks. In this paper, we present a new influence diffusion model, based on pairwise factor graphs, that captures dependencies and directions of influence among neighboring nodes. We use an augmented belief propagation algorithm to efficiently compute influence spread on this model so that the direction of influence is preserved. Due to its simplicity, the model can be used on large graphs with high-degree nodes, making the influence maximization problem practical on large, real-world graphs. Using large Flixster and Epinions datasets, we provide experimental results showing that our model predictions match well with ground-truth influence spreads, far better than other techniques. Furthermore, we show that the influential nodes identified by our model achieve significantly higher influence spread compared to other popular models. The model parameters can easily be learned from basic, readily available training data. In the absence of training, our approach can still be used to identify influential seed nodes.
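
For intuition, a drastically simplified directed propagation over edge probabilities can be sketched as below; this is an illustrative stand-in using networkx and a made-up default edge weight, not the paper's pairwise factor graph model or augmented belief propagation algorithm:

```python
import networkx as nx

def propagate_influence(graph, seeds, weight="w", max_hops=3):
    """Simplified directed influence propagation: each node's activation
    probability combines its in-neighbors' probabilities through edge
    weights, swept outward from the seeds for a few hops."""
    p = {v: 0.0 for v in graph}
    for s in seeds:
        p[s] = 1.0
    for _ in range(max_hops):
        updated = dict(p)
        for v in graph:
            if v in seeds:
                continue
            stays_inactive = 1.0
            for u in graph.predecessors(v):
                stays_inactive *= 1.0 - p[u] * graph[u][v].get(weight, 0.1)
            updated[v] = 1.0 - stays_inactive
        p = updated
    return p

g = nx.gnp_random_graph(50, 0.08, seed=1, directed=True)
spread = propagate_influence(g, seeds=[0, 1])
print("expected spread:", round(sum(spread.values()), 2))
```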

Low-Level track finding and completion using random fields

IS and T International Symposium on Electronic Imaging Science and Technology

Quach, Tu-Thach Q.; Malinas, Rebecca; Koch, Mark W.

Coherent change detection (CCD) images, which are products of combining two synthetic aperture radar (SAR) images taken at different times of the same scene, can reveal subtle surface changes such as those made by tire tracks. These images, however, have low texture and are noisy, making it difficult to automate track finding. Existing techniques either require user cues and can only trace a single track, or make use of templates that are difficult to generalize to different types of tracks, such as those made by motorcycles or by vehicles of different sizes. This paper presents an approach to automatically identify vehicle tracks in CCD images. We identify high-quality track segments and leverage the constrained Delaunay triangulation (CDT) to find completion track segments. We then impose global continuity and track smoothness using a binary random field on the resulting CDT graph to determine which edges belong to real tracks. Experimental results show that our algorithm outperforms existing state-of-the-art techniques in both accuracy and speed.

The energy scaling advantages of RRAM crossbars

2015 4th Berkeley Symposium on Energy Efficient Electronic Systems, E3S 2015 - Proceedings

Agarwal, Sapan A.; Parekh, Ojas D.; Quach, Tu-Thach Q.; James, Conrad D.; Aimone, James B.; Marinella, Matthew J.

As transistors start to approach fundamental limits and Moore's law slows down, new devices and architectures are needed to enable continued performance gains. New approaches based on RRAM (resistive random access memory) or memristor crossbars can enable the processing of large amounts of data [1, 2]. One of the most promising applications for RRAM crossbars is brain-inspired or neuromorphic computing [3, 4].

A model-based approach to finding tracks in SAR CCD images

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Quach, Tu-Thach Q.; Malinas, Rebecca; Koch, Mark W.

Combining multiple synthetic aperture radar (SAR) images taken at different times of the same scene produces coherent change detection (CCD) images that can detect small surface changes such as tire tracks. The resulting CCD images can be used in an automated approach to identify and label tracks. Existing techniques have limited success due to the noisy nature of these CCD images. In particular, existing techniques require some user cues and can only trace a single track. This paper presents an approach to automatically identify and label multiple tracks in CCD images. We use an explicit objective function that utilizes the Bayesian information criterion to find the simplest set of curves that explains the observed data. Experimental results show that it is capable of identifying tracks under various scenes and can correctly declare when no tracks are present.

Extracting hidden messages in steganographic images

Digital Investigation

Quach, Tu-Thach Q.

The eventual goal of steganalytic forensics is to extract the hidden messages embedded in steganographic images. A promising technique that partially addresses this problem is steganographic payload location, an approach that reveals the message bits, but not their logical order. It works by finding modified pixels, or residuals, left as an artifact of the embedding process. This technique is successful against simple least-significant-bit (LSB) steganography and group-parity steganography. The actual messages, however, remain hidden, as no logical order can be inferred from the located payload. This paper establishes an important result addressing this shortcoming: we show that the expected mean residuals contain enough information to logically order the located payload, provided that the size of the payload in each stego image is not fixed. The located payload can be ordered as prescribed by the mean residuals to obtain the hidden messages without knowledge of the embedding key, exposing the vulnerability of these embedding algorithms. We provide experimental results to support our analysis.
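
A toy sketch of the residual-accumulation idea is shown below, with a 3x3 median filter standing in for the cover estimator and synthetic stego images; the paper's actual estimators and the mean-residual ordering step are not reproduced here:

```python
import numpy as np
from scipy.ndimage import median_filter

def mean_residuals(stego_images):
    """Average the residuals between each stego image and a simple cover
    estimate; pixels with consistently large mean residual are candidate
    payload locations."""
    acc = np.zeros_like(stego_images[0], dtype=float)
    for img in stego_images:
        cover_est = median_filter(img.astype(float), size=3)
        acc += np.abs(img.astype(float) - cover_est)
    return acc / len(stego_images)

# Toy data: smooth covers with the LSBs flipped at the same 100 locations.
rng = np.random.default_rng(0)
locs = (rng.integers(0, 64, 100), rng.integers(0, 64, 100))
base = np.add.outer(np.arange(64), np.arange(64))      # smooth gradient cover
stegos = []
for _ in range(200):
    cover = base + rng.integers(0, 2, (64, 64))        # mild cover noise
    stego = cover.copy()
    stego[locs] ^= 1                                   # simple LSB embedding
    stegos.append(stego)

r = mean_residuals(stegos)
top = np.unravel_index(np.argsort(r, axis=None)[-100:], r.shape)
print("located payload pixels:", len(set(zip(*top)) & set(zip(*locs))))
```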

Cover estimation and payload location using Markov random fields

Proceedings of SPIE - The International Society for Optical Engineering

Quach, Tu-Thach Q.

Payload location is an approach to find the message bits hidden in steganographic images, but not necessarily their logical order. Its success relies primarily on the accuracy of the underlying cover estimators and can be improved if more estimators are used. This paper presents an approach based on a Markov random field to estimate the cover image given a stego image. It uses pairwise constraints to capture the natural two-dimensional statistics of cover images and forms a basis for more sophisticated models. Experimental results show that it is competitive with current state-of-the-art estimators and can locate payload embedded by simple LSB steganography and group-parity steganography. Furthermore, when combined with existing estimators, payload location accuracy improves significantly.

A general model of resource production and exchange in systems of interdependent specialists

Beyeler, Walter E.; Glass, Robert J.; Finley, Patrick D.; Quach, Tu-Thach Q.

Infrastructures are networks of dynamically interacting systems designed for the flow of information, energy, and materials. Under certain circumstances, disturbances from a targeted attack or natural disasters can cause cascading failures within and between infrastructures that result in significant service losses and long recovery times. Reliable interdependency models that can capture such multi-network cascading do not exist. The research reported here has extended Sandia's infrastructure modeling capabilities by: (1) addressing interdependencies among networks, (2) incorporating adaptive behavioral models into the network models, and (3) providing mechanisms for evaluating vulnerability to targeted attack and unforeseen disruptions. We have applied these capabilities to evaluate the robustness of various systems, and to identify factors that control the scale and duration of disruption. This capability lays the foundation for developing advanced system security solutions that encompass both external shocks and internal dynamics.
