A computationally efficient radiative transport model is presented that predicts a camera measurement and accounts for the light reflected and blocked by an object in a scattering medium. The model is in good agreement with experimental data acquired at the Sandia National Laboratory Fog Chamber Facility (SNLFC) and is applicable in computational imaging to detect, localize, and image objects hidden in scattering media. Here, a statistical approach was implemented to study object detection limits in fog.
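As an illustration of such a statistical approach, the sketch below frames object detection as a Neyman-Pearson hypothesis test on a camera measurement with additive Gaussian noise. The signal level, noise floor, and false-alarm rate are hypothetical placeholders, not values from the SNLFC experiments.

```python
import numpy as np
from scipy.stats import norm

# Illustrative detection-limit calculation: an object in fog perturbs the
# camera measurement by a signal that decays with optical depth, and we ask
# when that perturbation is detectable against measurement noise.

def detection_probability(signal, sigma, p_fa=1e-3):
    """Probability of detection for a Gaussian test statistic at a fixed
    false-alarm rate (Neyman-Pearson threshold)."""
    threshold = norm.isf(p_fa) * sigma
    return norm.sf((threshold - signal) / sigma)

# Assumed (hypothetical) numbers: the object-induced signal attenuates as
# exp(-optical_depth); the noise floor sigma is set by the camera.
sigma = 1.0
signal0 = 100.0                          # object signal at zero optical depth
optical_depths = np.linspace(0.0, 10.0, 50)
p_d = detection_probability(signal0 * np.exp(-optical_depths), sigma)

# Call the detection limit the optical depth where P_D falls below 0.5.
limit = optical_depths[np.argmax(p_d < 0.5)]
print(f"illustrative detection limit: optical depth ~ {limit:.1f}")
```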
Performing terrain classification with data from heterogeneous imaging modalities is a challenging problem, and the challenge is further compounded by very high spatial resolution (in this paper, much less than one meter). At very high resolution, many additional complications arise, such as geometric differences between imaging modalities and heightened pixel-by-pixel variability due to inhomogeneity within terrain classes. In this paper we consider the fusion of very high resolution hyperspectral imaging (HSI) and polarimetric synthetic aperture radar (PolSAR) data. We introduce a framework that utilizes the probabilistic feature fusion (PFF) one-class classifier for data fusion and demonstrate the effect of making pixelwise, superpixel, and pixelwise-voting (within a superpixel) terrain classification decisions. We show that fusing the imaging modality data sets, combined with pixelwise voting within the spatial extent of superpixels, yields a robust terrain classification framework that strikes a good balance between quantitative and qualitative results.
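A minimal sketch of feature-level fusion for one-class terrain classification follows. The PFF classifier itself is specific to this work, so scikit-learn's IsolationForest serves here as a generic one-class stand-in, and the randomly generated HSI and PolSAR features are placeholders for coregistered pixel data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Fuse coregistered HSI and PolSAR pixels by concatenating their feature
# vectors, then fit a one-class model on training pixels drawn from the
# target terrain class. IsolationForest is a generic stand-in for PFF.

rng = np.random.default_rng(0)
n_train, n_test = 500, 200
hsi_dim, polsar_dim = 30, 6            # illustrative feature dimensions

hsi_train = rng.normal(0.0, 1.0, (n_train, hsi_dim))
polsar_train = rng.normal(0.0, 1.0, (n_train, polsar_dim))
fused_train = np.hstack([hsi_train, polsar_train])   # feature-level fusion

model = IsolationForest(random_state=0).fit(fused_train)

# Pixelwise decision: score each fused test pixel against the class model.
fused_test = rng.normal(0.0, 1.0, (n_test, hsi_dim + polsar_dim))
pixel_decisions = model.predict(fused_test)          # +1 in-class, -1 out
print("in-class fraction:", np.mean(pixel_decisions == 1))
```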
Random scattering and absorption of light by tiny particles in aerosols, like fog, reduce situational awareness and cause unacceptable downtime for critical systems or operations. Computationally efficient light transport models are desired for computational imaging to improve remote sensing capabilities in degraded optical environments. To this end, we have developed a model based on a weak angular dependence approximation to the Boltzmann or radiative transfer equation that appears to be applicable in both the moderate and highly scattering regimes, thereby covering the applicability domains of both the small-angle and diffusion approximations. An analytic solution was derived and validated using experimental data acquired at the Sandia National Laboratory Fog Chamber Facility. The evolution of the fog particle density and size distribution was measured and used to determine macroscopic absorption and scattering properties using Mie theory. A three-band (0.532, 1.55, and 9.68 μm) transmissometer with lock-in amplifiers enabled changes in fog density of over an order of magnitude to be measured, owing to the increased transmission at longer wavelengths, covering both the moderate and highly scattering regimes. The meteorological optical range parameter is shown to be about 0.6 times the transport mean free path length, suggesting an improved physical interpretation of this parameter.
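The ~0.6 factor can be checked with a back-of-the-envelope calculation: under Koschmieder's 2% contrast definition, MOR = ln(50)/β_ext, while the transport mean free path is 1/(μ_a + μ_s(1 − g)). The sketch below assumes a monodisperse droplet population with illustrative Mie efficiencies typical of visible light in fog, not the measured SNLFC values.

```python
import numpy as np

# Relate the meteorological optical range (MOR) to the transport mean free
# path using Koschmieder's 2% contrast definition, MOR = ln(50)/beta_ext.
# Droplet number density and Mie efficiencies below are illustrative values
# for fog at visible wavelengths, not the measured SNLFC parameters.

radius = 5e-6                      # droplet radius [m]
number_density = 1e8               # droplets per m^3
q_sca, q_abs, g = 2.0, 0.0, 0.85   # Mie efficiencies and asymmetry parameter

cross_section = np.pi * radius**2
mu_s = number_density * q_sca * cross_section    # scattering coefficient [1/m]
mu_a = number_density * q_abs * cross_section    # absorption coefficient [1/m]

mor = np.log(50.0) / (mu_s + mu_a)               # Koschmieder relation
l_transport = 1.0 / (mu_a + mu_s * (1.0 - g))    # transport mean free path

print(f"MOR = {mor:.2f} m, l* = {l_transport:.2f} m, "
      f"ratio = {mor / l_transport:.2f}")
# With negligible absorption the ratio reduces to ln(50)*(1 - g) ~ 0.59 for
# g = 0.85, consistent with the ~0.6 factor reported above.
```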
Deciding on an imaging modality for terrain classification can be a challenging problem. A given sensing modality may discriminate some terrain classes well but perform poorly on other classes that a different sensor can easily separate. The most effective terrain classification will utilize the abilities of multiple sensing modalities; the challenge is then determining how to combine their information in a meaningful and useful way. In this paper, we introduce a framework for effectively combining data from optical and polarimetric synthetic aperture radar sensing modalities. We demonstrate the fusion framework for two vegetation classes and two ground classes and show that fusing data from both imaging modalities has the potential to improve terrain classification over either modality alone.
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers. However, machine learning classification algorithms do not require the same data representation used by humans. We investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prism and neutral density filter pairs, where each pair realizes one datum from an optimized compressive sensing matrix, and an architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these systems built for compressed classification of the Modified National Institute of Standards and Technology (MNIST) dataset. Both architectures achieve classification accuracies within 3% of the optimized sensing matrix for compression ranging from 98.85% to 99.87%. The performance of the systems with 98.85% compression was between that of an F/2 and an F/4 imaging system in the presence of noise.
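A compact way to see the compressed-classification pipeline is to project images through a sensing matrix and train a classifier on the few resulting measurements. In the sketch below, a random Gaussian matrix stands in for the optimized sensing matrix, and scikit-learn's 8x8 digits dataset stands in for MNIST to keep the example self-contained, so the accuracy is only illustrative.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Compressed classification sketch: y = A^T x reduces each 64-pixel image
# to a handful of measurements, and a classifier is trained on y alone.

X, y = load_digits(return_X_y=True)          # 64 pixels per image
n_measurements = 8                           # 64 -> 8: 87.5% compression
rng = np.random.default_rng(0)
A = rng.normal(size=(X.shape[1], n_measurements))

X_train, X_test, y_train, y_test = train_test_split(X @ A, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print(f"accuracy with {n_measurements} measurements: "
      f"{clf.score(X_test, y_test):.3f}")
```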
This communication reports progress towards the development of computational sensing and imaging methods that utilize highly scattered light to extract information at greater depths in degraded visual environments, such as fog, for improved situational awareness. As light propagates through fog, information is lost due to random scattering and absorption by micrometer-sized water droplets. Computational diffuse optical imaging shows promise for interpreting the detected scattered light, enabling greater depth penetration than current methods. Developing this capability requires verification and validation of diffusion models of light propagation in fog. We report models that were developed and compared to experimental data captured at the Sandia National Laboratory Fog Chamber Facility. The diffusion approximation to the radiative transfer equation was found to predict light propagation in fog under the appropriate conditions.
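For reference, the infinite-medium point-source solution of the diffusion approximation can be evaluated in a few lines; the optical coefficients below are illustrative fog-like values rather than the measured chamber parameters.

```python
import numpy as np

# Infinite-medium point-source solution of the diffusion approximation:
#   phi(r) = P * exp(-mu_eff * r) / (4 * pi * D * r)
# with D = 1 / (3 * (mu_a + mu_s')) and mu_eff = sqrt(mu_a / D).

mu_a = 0.01                   # absorption coefficient [1/m] (illustrative)
mu_s_prime = 1.5              # reduced scattering coefficient [1/m]
power = 1.0                   # isotropic source power [W]

D = 1.0 / (3.0 * (mu_a + mu_s_prime))    # diffusion coefficient [m]
mu_eff = np.sqrt(mu_a / D)               # effective attenuation [1/m]

r = np.linspace(0.5, 20.0, 5)            # source-detector distances [m]
phi = power * np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)
for ri, fi in zip(r, phi):
    print(f"r = {ri:5.2f} m  fluence rate = {fi:.3e} W/m^2")
```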
There are several factors that should be considered for robust terrain classification. We address the issue of high pixelwise variability within terrain classes from remote sensing modalities when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, and then makes a superpixel-level terrain classification decision by majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
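A minimal sketch of the superpixel voting step is shown below, using SLIC segmentation from scikit-image. The random image and random per-pixel labels stand in for real imagery and for the pixelwise PFF decisions.

```python
import numpy as np
from skimage.segmentation import slic

# Superpixel majority voting over per-pixel class decisions. Stand-ins:
# a random RGB image replaces the remote sensing data, and random integer
# labels replace the pixelwise classifier output.

rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))                 # stand-in imagery
pixel_labels = rng.integers(0, 4, (128, 128))     # stand-in pixelwise decisions

segments = slic(image, n_segments=200, compactness=10)

# Assign each superpixel the majority class of the pixels it contains.
voted = np.empty_like(pixel_labels)
for seg_id in np.unique(segments):
    mask = segments == seg_id
    voted[mask] = np.bincount(pixel_labels[mask]).argmax()
```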
Images are often not the optimal data form for machine learning tasks such as scene classification. Compressive classification can reduce the size, weight, and power of a system by selecting the minimum information while maximizing classification accuracy. In this work we present designs and simulations of prism arrays that realize sensing matrices using a monolithic element. The sensing matrix is optimized using a neural network architecture to maximize classification accuracy on the MNIST dataset while accounting for the blurring caused by the size of each prism. Simulated optical hardware performance for a range of prism sizes is reported.
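One simple way to fold the prism-size blur into such a simulation is to model each prism as a local average over the pattern it realizes. In the sketch below, a uniform (box) filter approximates that blur, and a random matrix stands in for the network-optimized sensing matrix; both the prism sizes and the deviation metric are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Each row of the sensing matrix is a pattern over the 28x28 image plane;
# a prism of width w averages that pattern over a w x w neighborhood. We
# quantify how far the blurred matrix drifts from the ideal one.

rng = np.random.default_rng(0)
n_measurements, side = 10, 28
ideal = rng.normal(size=(n_measurements, side, side))

for prism_px in (1, 2, 4, 7):
    blurred = uniform_filter(ideal, size=(1, prism_px, prism_px))
    err = np.linalg.norm(blurred - ideal) / np.linalg.norm(ideal)
    print(f"prism ~ {prism_px} px: relative deviation {err:.2f}")
```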