Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers. However, machine learning classification algorithms do not require the same data representation used by humans. We investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, in which each prism and neutral density filter pair realizes one datum of an optimized compressive sensing matrix, and a second architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these systems built for compressed classification of the Modified National Institute of Standards and Technology (MNIST) dataset. Both architectures achieve classification accuracies within 3% of the optimized sensing matrix for compression ratios ranging from 98.85% to 99.87%. In the presence of noise, the performance of the systems at 98.85% compression was between that of F/2 and F/4 imaging systems.
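To make the compressive classification pipeline concrete, the sketch below (a minimal illustration, not the authors' implementation) measures a 28x28 object space through a non-negative sensing matrix standing in for the optimized matrix realized by the prism and neutral density filter array, then classifies directly from the compressed measurements; nine measurements of 784 pixels numerically correspond to the 98.85% compression quoted above. The surrogate data and linear classifier are illustrative placeholders.

```python
# Minimal sketch of compressive classification: a sensing matrix Phi maps an
# N-pixel scene to M << N measurements, and a classifier operates on the
# measurements directly. A random Phi stands in for the optimized matrix the
# prism / neutral-density-filter array would realize in hardware.
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 28 * 28          # MNIST-sized object space
n_measurements = 9          # e.g. a 3x3 measurement array -> 98.85% compression
compression = 1.0 - n_measurements / n_pixels
print(f"compression: {compression:.2%}")

# Surrogate data: random "images" from two classes with different means.
x0 = rng.normal(0.0, 1.0, size=(500, n_pixels))
x1 = rng.normal(0.5, 1.0, size=(500, n_pixels))
X = np.vstack([x0, x1])
y = np.array([0] * 500 + [1] * 500)

# Non-negative sensing matrix (attenuation-only optics cannot realize negative weights).
Phi = rng.uniform(0.0, 1.0, size=(n_measurements, n_pixels))

# Optical measurement: each detector element integrates an attenuated sub-aperture.
M = X @ Phi.T

# Simple linear classifier on the compressed measurements (least-squares fit).
w, *_ = np.linalg.lstsq(np.c_[M, np.ones(len(M))], 2 * y - 1, rcond=None)
pred = (np.c_[M, np.ones(len(M))] @ w > 0).astype(int)
print(f"training accuracy on compressed measurements: {(pred == y).mean():.2%}")
```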
Images are often not the optimal data form for performing machine learning tasks such as scene classification. Compressive classification can reduce the size, weight, and power of a system by selecting the minimum information while maximizing classification accuracy. In this work we present designs and simulations of prism arrays that realize sensing matrices using a monolithic element. The sensing matrix is optimized using a neural network architecture to maximize classification accuracy on the MNIST dataset while accounting for the blurring caused by the size of each prism. Simulated optical hardware performance for a range of prism sizes is reported.
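The following is a hedged sketch of how such a joint optimization could be set up: a learnable sensing matrix is blurred by a fixed box kernel that models the finite prism footprint, and its rows are trained end to end with a small classifier. The 3x3 prism array, five-pixel blur width, and synthetic stand-in data are assumptions for illustration, not the configuration reported in the paper.

```python
# Hedged sketch (not the authors' code) of jointly optimizing a sensing matrix and a
# classifier while a fixed box blur on the matrix rows models the finite prism size.
# Synthetic random data stands in for MNIST so the example is self-contained.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
H = W = 28                   # object-space grid (MNIST-sized)
n_meas, n_classes = 9, 10    # assumed 3x3 prism array and 10 digit classes
prism_px = 5                 # assumed prism footprint in pixels (odd for easy padding)

phi = torch.randn(n_meas, H * W, requires_grad=True)   # learnable sensing matrix
head = torch.nn.Linear(n_meas, n_classes)              # classifier on the measurements
opt = torch.optim.Adam([phi, *head.parameters()], lr=1e-2)

# Fixed box kernel modeling the spatial averaging of a single prism facet.
kernel = torch.ones(1, 1, prism_px, prism_px) / prism_px**2

def blurred_phi(p):
    """Apply the prism-size blur to every row of the sensing matrix."""
    rows = p.view(n_meas, 1, H, W)
    rows = F.conv2d(rows, kernel, padding=prism_px // 2)
    return rows.reshape(n_meas, H * W)

# Surrogate training set: random images and labels (replace with MNIST in practice).
x = torch.rand(256, H * W)
y = torch.randint(0, n_classes, (256,))

for step in range(200):
    opt.zero_grad()
    meas = x @ blurred_phi(phi).t()        # simulated blurred optical measurement
    loss = F.cross_entropy(head(meas), y)  # classification objective drives the matrix
    loss.backward()
    opt.step()
print(f"final training loss: {loss.item():.3f}")
```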
We investigate the feasibility of additively manufacturing optical components to accomplish task-specific classification in a computational imaging device. We report on the design, fabrication, and characterization of a non-traditional optical element that physically realizes an extremely compressed, optimized sensing matrix. The compression is achieved by designing an optical element that samples only the regions of object space most relevant to the classification task, as determined by machine learning algorithms. The design process for the proposed optical element converts the optimal sensing matrix into a refractive surface composed of a minimized set of non-repeating, unique prisms. The optical elements are 3D printed using a Nanoscribe, which uses two-photon polymerization for high-precision printing. We describe the design of several computational imaging prototype elements. We characterize these components, including the surface topography, surface roughness, and prism facet angles of the as-fabricated elements.
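As a rough, hypothetical illustration of how a sensing-matrix sample might be translated into a prism facet, the sketch below maps a sampled object-space location to a wedge angle through the thin-prism deviation relation delta ≈ (n - 1)·alpha. The refractive index, element-to-detector distance, and geometry are illustrative assumptions, not the as-fabricated design values.

```python
# Hedged sketch of one plausible mapping from sensing-matrix sample locations to
# prism facet angles, assuming the thin-prism deviation relation delta ~ (n - 1) * alpha.
# The geometry (element-to-detector distance, detector position) and the refractive
# index of the printed polymer are illustrative assumptions, not the authors' values.
import numpy as np

n_polymer = 1.52        # assumed refractive index of the printed material
z_det = 10.0            # assumed element-to-detector distance (mm)

def facet_angle(sample_xy, detector_xy):
    """Wedge angle (radians) steering light from one sampled object-space
    location onto the detector position assigned to that matrix row."""
    dx = detector_xy[0] - sample_xy[0]
    dy = detector_xy[1] - sample_xy[1]
    deviation = np.arctan(np.hypot(dx, dy) / z_det)   # required beam deviation
    return deviation / (n_polymer - 1.0)              # thin-prism approximation

# Example: a facet 2 mm off-axis that must steer its sample to the detector center.
print(np.degrees(facet_angle((2.0, 0.0), (0.0, 0.0))), "degrees")
```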
In this effort, random noise data augmentation is compared to phenomenologically inspired data augmentation for a target detection task, evaluated on the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model "MegaScene" simulated hyperspectral dataset. Random data augmentation is commonly used in the machine learning literature to improve model generalization. While random perturbations of an input may work well in certain fields such as image classification, they can be unhelpful in other applications such as hyperspectral target detection. For instance, random noise augmentation may not be beneficial when the applied noise distribution does not match the underlying physical signal processes or sensor noise. In the context of a low-noise sensor, augmentation mimicking material mixing and other practical spectral modulations is likely to be more effective when used to train a target detector. It is therefore important to utilize a data augmentation strategy that emulates the natural variability in observed spectra. To validate this claim, a small fully connected neural network architecture is first trained using an ideal hemispheric reflectance materials dataset as a trivial baseline. That dataset is then augmented using Gaussian random noise, and the model is retrained and again applied to MegaScene. Finally, augmentation is instead performed using phenomenological insight and used to retrain and reevaluate the model. In this work, the phenomenological augmentation implements only simple and commonly encountered spectral permutations, namely linear mixing and shadowing. The augmented models are compared with the baseline model in terms of detection performance at low constant false alarm rates (CFAR).
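The sketch below contrasts the two augmentation strategies on a stand-in reflectance spectrum: independent Gaussian noise per band versus the phenomenological permutations named above, linear mixing with a background spectrum followed by a wavelength-independent shadowing factor. The mixing fractions, shadowing range, and noise level are illustrative assumptions.

```python
# Hedged sketch contrasting the two augmentation strategies on reflectance spectra.
# The mixing fractions, shadowing factor, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_bands = 200
target = rng.uniform(0.2, 0.8, n_bands)              # stand-in target reflectance spectrum
backgrounds = rng.uniform(0.0, 1.0, (50, n_bands))   # stand-in background spectra

def augment_gaussian(spectrum, sigma=0.02):
    """Random-noise augmentation: i.i.d. Gaussian perturbation of each band."""
    return spectrum + rng.normal(0.0, sigma, spectrum.shape)

def augment_phenomenological(spectrum, backgrounds):
    """Phenomenological augmentation: linear mixing with a background spectrum
    followed by a wavelength-independent shadowing (illumination scaling) factor."""
    f = rng.uniform(0.5, 1.0)                  # target fill fraction
    bg = backgrounds[rng.integers(len(backgrounds))]
    mixed = f * spectrum + (1.0 - f) * bg      # linear mixing model
    shadow = rng.uniform(0.3, 1.0)             # shadowing attenuates the whole spectrum
    return shadow * mixed

noisy = np.stack([augment_gaussian(target) for _ in range(1000)])
phenom = np.stack([augment_phenomenological(target, backgrounds) for _ in range(1000)])
print(f"per-band std, Gaussian augmentation:         {noisy.std(axis=0).mean():.4f}")
print(f"per-band std, phenomenological augmentation: {phenom.std(axis=0).mean():.4f}")
```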
We report on the design and fabrication of a computational imaging element used within a compressive task-specific imaging system. Fabrication via two-photon 3D printing is reported, as well as characterization of the fabricated element.
Advancements in machine learning (ML) and deep learning (DL) have enabled imaging systems to perform complex classification tasks, opening numerous problem domains to solutions driven by high quality imagers coupled with algorithmic elements. However, current ML and DL methods for target classification typically rely upon algorithms applied to data measured by traditional imagers. This design paradigm fails to enable the ML and DL algorithms to influence the sensing device itself, and treats the optimization of the sensor and algorithm as separate, sequential elements. Additionally, this current paradigm narrowly investigates traditional images, and therefore traditional imaging hardware, as the primary means of data collection. We investigate alternative architectures for computational imaging systems optimized for specific classification tasks, such as digit classification. This involves a holistic approach to the design of the system, from the imaging hardware to the algorithms. Techniques to find optimal compressive representations of training data are discussed, and the most useful object-space information is evaluated. Methods to translate task-specific compressed data representations into non-traditional computational imaging hardware are described, followed by simulations of such imaging devices coupled with algorithmic classification using ML and DL techniques. Our approach allows for inexpensive, efficient sensing systems. Reduced storage and bandwidth are achievable as well, since the data representations are compressed measurements, which is especially important for high data volume systems.
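As one simple, generic illustration of finding a compressive representation of training data and ranking object-space information, the sketch below uses a PCA-style truncated SVD purely as a stand-in for the task-optimized techniques discussed here: surrogate training images are projected onto a few leading components, and each object-space pixel is scored by its contribution to that projection.

```python
# Hedged illustration of one simple way to find a compressive linear representation
# of training data and rank object-space regions by usefulness: a truncated SVD of
# the training matrix, with per-pixel importance scored from the leading components.
# This PCA-style projection is a stand-in for the task-optimized matrices in the text.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 28 * 28))       # surrogate training images (flattened)
X -= X.mean(axis=0)

k = 9                                      # number of compressive measurements
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Phi = Vt[:k]                               # k x N compressive projection

codes = X @ Phi.T                          # compressed representation of the data
importance = (Phi**2).sum(axis=0)          # per-pixel contribution to the projection
top_pixels = np.argsort(importance)[::-1][:10]
print("retained variance fraction:", (s[:k]**2).sum() / (s**2).sum())
print("most informative object-space pixels:", top_pixels)
```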
Channeled spectropolarimetry measures the spectrally resolved Stokes parameters. A key aspect of this technique is to accurately reconstruct the Stokes parameters from a modulated measurement of the channeled spectropolarimeter. The state-of-the-art reconstruction algorithm uses the Fourier transform to extract the Stokes parameters from channels in the Fourier domain. While this approach is straightforward, it can be sensitive to noise and channel cross-talk, and it imposes bandwidth limitations that cut off high frequency details. To overcome these drawbacks, we present a reconstruction method called compressed channeled spectropolarimetry. In our proposed framework, reconstruction in channeled spectropolarimetry is an underdetermined problem, where we take N measurements and solve for 3N unknown Stokes parameters. We formulate an optimization problem by creating a mathematical model of the channeled spectropolarimeter with inspiration from compressed sensing. We show that our approach offers greater noise robustness and reconstruction accuracy compared with the Fourier transform technique in simulations and experimental measurements. By demonstrating more accurate reconstructions, we push performance to the native resolution of the sensor, allowing more information to be recovered from a single measurement of a channeled spectropolarimeter.
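The sketch below illustrates the underdetermined reconstruction idea on a toy problem: N modulated spectral measurements are inverted for 3N Stokes unknowns by regularized least squares. The single-carrier forward model and the smoothness (Tikhonov) penalty are simplifying assumptions standing in for the full channeled spectropolarimeter model and the compressed sensing style regularization used in the paper.

```python
# Hedged sketch of the underdetermined reconstruction idea: N modulated measurements,
# 3N unknown spectrally resolved Stokes parameters, recovered by regularized inversion
# of a forward model. A simplified single-carrier modulation and a smoothness penalty
# stand in for the authors' full channeled-spectropolarimeter model and regularizer.
import numpy as np

rng = np.random.default_rng(0)
N = 256                                  # spectral samples
sigma = np.arange(N)
carrier = 2 * np.pi * 0.2 * sigma        # assumed modulation frequency

# Ground-truth slowly varying Stokes spectra (illustrative).
t = sigma / N
S_true = np.stack([1.0 + 0.2 * np.sin(2 * np.pi * t),
                   0.5 * np.cos(2 * np.pi * t),
                   0.3 * np.sin(4 * np.pi * t)])

# Forward model A: each measurement mixes S0, S1, S2 at one spectral sample.
A = np.zeros((N, 3 * N))
A[np.arange(N), np.arange(N)] = 0.5
A[np.arange(N), N + np.arange(N)] = 0.5 * np.cos(carrier)
A[np.arange(N), 2 * N + np.arange(N)] = 0.5 * np.sin(carrier)
y = A @ S_true.ravel() + rng.normal(0, 1e-3, N)

# Smoothness regularizer: finite differences within each Stokes component.
D1 = np.eye(N, k=1)[:-1] - np.eye(N)[:-1]
D = np.kron(np.eye(3), D1)
lam = 1e-1
S_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y).reshape(3, N)
print("relative reconstruction error:",
      np.linalg.norm(S_hat - S_true) / np.linalg.norm(S_true))
```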
Physical unclonable functions (PUFs) are devices which are easily probed but difficult to predict. Optical PUFs have been discussed within the literature, with traditional optical PUFs typically using spatial light modulators, coherent illumination, and scattering volumes; however, these systems can be large, expensive, and difficult to keep aligned in practical conditions. We propose and demonstrate a new kind of optical PUF based on computational imaging and compressive sensing to address these challenges with traditional optical PUFs. This work describes the design, simulation, and prototyping of this computational optical PUF (COPUF) that utilizes incoherent polychromatic illumination passing through an additively manufactured refracting optical polymer element. We demonstrate the ability to pass information through a COPUF using a variety of sampling methods, including the use of compressive sensing. The sensitivity of the COPUF system is also explored. We explore non-traditional PUF configurations enabled by the COPUF architecture. The double COPUF system, which employs two serially connected COPUFs, is proposed and analyzed as a means to authenticate and communicate between two entities that have previously agreed to communicate. This configuration enables estimation of a message inversion key without the calculation of individual COPUF inversion keys at any point in the PUF life cycle. Our results show that it is possible to construct inexpensive optical PUFs using computational imaging. This could lead to new uses of PUFs in places where electrical PUFs cannot be utilized effectively, such as low-cost tags and seals, and potentially as authenticating and communicating devices.
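A minimal sketch of the challenge-response idea follows, assuming a calibrated linear response: a random matrix stands in for the measurement matrix of the 3D printed refracting element, a bit pattern encoded on the illumination produces a detector response, and the message is recovered with an inversion key estimated from calibration. All dimensions and noise levels are illustrative assumptions.

```python
# Hedged sketch of the challenge-response idea behind a computational optical PUF:
# a calibrated measurement matrix A stands in for the 3D-printed refracting element,
# a challenge illumination pattern produces a detector response y = A x, and the
# "inversion key" here is simply the pseudo-inverse of the calibrated matrix.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_det = 64, 256                    # assumed illumination and detector dimensions

A_puf = rng.uniform(0, 1, (n_det, n_in)) # calibrated (device-specific) response matrix
key = np.linalg.pinv(A_puf)              # inversion key estimated from calibration

message = rng.integers(0, 2, n_in)       # bit pattern encoded on the illumination
response = A_puf @ message + rng.normal(0, 0.01, n_det)   # noisy optical measurement
decoded = (key @ response > 0.5).astype(int)
print("bit errors:", int(np.sum(decoded != message)))
```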
Hyperspectral imaging polarimetry enables both the spectrum and its spectrally resolved state of polarization to be measured. This information is important for identifying material properties for various applications in remote sensing and agricultural monitoring. We describe the design and performance of a ruggedized, field deployable hyperspectral imaging polarimeter, designed for wavelengths spanning the visible to near-infrared (450 to 800 nm). An entrance slit was used to sample the scene in a pushbroom scanning mode across a 30 deg vertical by 110 deg horizontal field-of-view. Furthermore, athermalized achromatic retarders were implemented in a channel spectrum generator to measure the linear Stokes parameters. The mechanical and optical layout of the system and its peripherals, in addition to the results of the sensor's spectral and polarimetric calibration, are provided. Finally, field measurements are also provided and an error analysis is conducted. With its present calibration, the sensor has an absolute polarimetric error of 2.5% RMS and a relative spectral error of 2.3% RMS.
Channeled linear imaging polarimeters measure the two-dimensional distribution of the linear Stokes parameters. A key aspect of this technique is to accurately reconstruct the Stokes parameters from a snapshot, modulated measurement of the channeled linear imaging polarimeter. The state-of-the-art reconstruction takes the Fourier transform of the measurement to separate the Stokes parameters into channels. While straightforward, this approach is sensitive to channel cross-talk and imposes bandwidth limitations that cut off high frequency details. To overcome these drawbacks, we present a reconstruction method called compressed channeled linear imaging polarimetry. In this framework, reconstruction in channeled linear imaging polarimetry is an underdetermined problem, where we measure N pixels and recover 3N Stokes parameters. We formulate an optimization problem by creating a mathematical model of the channeled linear imaging polarimeter with inspiration from compressed sensing. Through simulations, we show that our approach mitigates artifacts seen in Fourier reconstruction, including image blurring, degradation, and ringing artifacts caused by windowing and channel cross-talk. By demonstrating more accurate reconstructions, we push performance to the native resolution of the sensor, allowing more information to be recovered from a single measurement of a channeled linear imaging polarimeter.
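The sketch below is a simplified two-dimensional analogue of this formulation, not the authors' algorithm: each pixel mixes the three linear Stokes parameters through an assumed spatial carrier, and the 3N unknowns are recovered from N pixels by gradient descent on a matrix-free least-squares objective with a smoothness penalty. The carrier frequency, penalty weight, and step size are assumptions.

```python
# Hedged 2D sketch: each pixel of the modulated image mixes S0, S1, S2 through a
# spatial carrier, and 3N unknowns are recovered from N pixels by gradient descent
# on a regularized, matrix-free least-squares objective. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32
xx = np.arange(W)[None, :] * np.ones((H, 1))
yy = np.arange(H)[:, None] * np.ones((1, W))
phase = 2 * np.pi * 0.25 * xx                  # assumed spatial carrier (cycles/pixel)

# Illustrative ground-truth linear Stokes images.
S_true = np.stack([1.0 + 0.0 * xx,
                   0.5 * np.sin(2 * np.pi * yy / H),
                   0.3 * np.cos(2 * np.pi * xx / W)])

def forward(S):
    """Channeled measurement model: intensity modulated by the spatial carrier."""
    return 0.5 * (S[0] + S[1] * np.cos(phase) + S[2] * np.sin(phase))

I_meas = forward(S_true) + rng.normal(0, 1e-3, (H, W))

def grad_smooth(S):
    """Gradient of 0.5 * sum of squared forward differences along both image axes."""
    g = np.zeros_like(S)
    d = np.diff(S, axis=1)
    g[:, :-1, :] -= d
    g[:, 1:, :] += d
    d = np.diff(S, axis=2)
    g[:, :, :-1] -= d
    g[:, :, 1:] += d
    return g

S = np.zeros_like(S_true)
lam, lr = 0.1, 0.5
for _ in range(2000):
    r = forward(S) - I_meas                    # data residual
    g_data = 0.5 * np.stack([r, r * np.cos(phase), r * np.sin(phase)])
    S -= lr * (g_data + lam * grad_smooth(S))
print("relative reconstruction error:",
      np.linalg.norm(S - S_true) / np.linalg.norm(S_true))
```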
Lensless imaging systems have the potential to provide new capabilities in lower size and weight configurations than traditional imaging systems. Lensless imagers frequently utilize computational imaging techniques, which move the complexity of the system away from optical subcomponents and into a calibration process whereby the measurement matrix is estimated. We report on the design, simulation, and prototyping of a lensless imaging system that utilizes a 3D printed, optically transparent random scattering element. We present end-to-end system simulations, which include the calibration process as well as the data processing algorithm used to generate an image from the raw data. These simulations utilize GPU-based raytracing software and parallelized minimization algorithms to bring complete system simulation times down to the order of seconds. Hardware prototype results are presented, and practical lessons, such as the effect of sensor noise on reconstructed image quality, are discussed. System performance metrics are proposed and evaluated to discuss image quality in a manner that is relatable to traditional image quality metrics. Various hardware instantiations are discussed.
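A hedged sketch of the calibrate-then-reconstruct workflow: the measurement matrix is estimated column by column from detector responses to calibration patterns, and an image is recovered from raw data by Tikhonov-regularized least squares. The scene and detector dimensions, noise level, and regularization weight are illustrative assumptions, and the closed-form solve stands in for the parallelized minimization algorithms described above.

```python
# Hedged sketch of the calibration-plus-reconstruction workflow for a lensless
# computational imager: the measurement matrix is estimated from detector responses
# to calibration patterns, then an image is recovered from raw data by regularized
# least squares. Dimensions and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_scene, n_det = 16 * 16, 1024

# "True" transfer matrix of the random scattering element (unknown to the user).
A_true = rng.uniform(0, 1, (n_det, n_scene))

# Calibration: display one basis pattern at a time and record the detector response.
noise = 1e-3
A_cal = np.empty_like(A_true)
for j in range(n_scene):
    e = np.zeros(n_scene)
    e[j] = 1.0
    A_cal[:, j] = A_true @ e + rng.normal(0, noise, n_det)

# Measurement of an unknown scene and Tikhonov-regularized reconstruction.
scene = rng.uniform(0, 1, n_scene)
y = A_true @ scene + rng.normal(0, noise, n_det)
lam = 1e-2
x_hat = np.linalg.solve(A_cal.T @ A_cal + lam * np.eye(n_scene), A_cal.T @ y)
print("relative reconstruction error:",
      np.linalg.norm(x_hat - scene) / np.linalg.norm(scene))
```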
Computational imagers fundamentally enable new optical hardware through the use of both physical and algorithmic elements. We report on the creation of a static lensless computational imaging system enabled by this paradigm.
We report on the design of a refracting prism array element for use in a computational lensless imaging system. The technique discussed enables creation of a refracting element that maximizes signal on a detector region.
The modeling and simulation of non-traditional imaging systems require holistic consideration of the end-to-end system. We demonstrate this approach through a tolerance analysis of a random scattering lensless imaging system.
Compact snapshot imaging polarimeters have been demonstrated in the literature to provide Stokes parameter estimates for spatially varying scenes using polarization gratings. However, the demonstrated system does not employ aggressive modulation frequencies to take full advantage of the bandwidth available to the focal plane array. A snapshot imaging Stokes polarimeter is described and demonstrated through simulation results. The simulation studies the challenges of using a maximum-bandwidth configuration for a snapshot polarization-grating-based polarimeter, such as the fringe contrast attenuation that results from higher modulation frequencies. Similar simulation results are generated and compared for a microgrid polarimeter. Microgrid polarimeters, another type of spatially modulated polarimeter, are instruments in which pixelated polarizers are superimposed onto a focal plane array; the most common design uses a 2x2 superpixel of polarizers, which maximally uses the available bandwidth of the focal plane array.
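As a back-of-the-envelope illustration of the fringe contrast attenuation mentioned above for the polarization grating configuration, the sketch below evaluates the sinc factor that results from integrating a spatial sinusoidal modulation over a finite pixel aperture; contrast drops as the carrier approaches the focal plane array Nyquist frequency of 0.5 cycles per pixel. The unit fill factor is an assumption.

```python
# Hedged illustration of why aggressive carrier frequencies attenuate fringe contrast
# in a spatial-carrier (polarization grating) polarimeter: integrating a sinusoidal
# modulation over a finite pixel aperture multiplies the fringe amplitude by a sinc
# factor, which drops as the carrier approaches Nyquist (0.5 cycles/pixel).
import numpy as np

def pixel_fringe_contrast(carrier_cycles_per_pixel, fill_factor=1.0):
    """Fringe-amplitude attenuation from averaging a sinusoid over the pixel aperture."""
    return np.abs(np.sinc(carrier_cycles_per_pixel * fill_factor))

for f in (0.125, 0.25, 0.5):
    print(f"carrier {f:5.3f} cyc/px -> relative fringe contrast "
          f"{pixel_fringe_contrast(f):.3f}")
```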