A computationally efficient radiative transport model is presented that predicts a camera measurement and accounts for the light reflected and blocked by an object in a scattering medium. The model is in good agreement with experimental data acquired at the Sandia National Laboratory Fog Chamber Facility (SNLFC). The model is applicable in computational imaging to detect, localize, and image objects hidden in scattering media. Here, a statistical approach was implemented to study object detection limits in fog.
Performing terrain classification with data from heterogeneous imaging modalities is a very challenging problem. The challenge is further compounded by very high spatial resolution. (In this paper we consider very high spatial resolution to be much less than a meter.) At very high resolution many additional complications arise, such as geometric differences in imaging modalities and heightened pixel-by-pixel variability due to inhomogeneity within terrain classes. In this paper we consider the fusion of very high resolution hyperspectral imaging (HSI) and polarimetric synthetic aperture radar (PolSAR) data. We introduce a framework that utilizes the probabilistic feature fusion (PFF) one-class classifier for data fusion and demonstrate the effect of making pixelwise, superpixel, and pixelwise voting (within a superpixel) terrain classification decisions. We show that fusing imaging modality data sets, combined with pixelwise voting within the spatial extent of superpixels, yields a robust terrain classification framework that strikes a good balance between quantitative and qualitative results.
Random scattering and absorption of light by tiny particles in aerosols, like fog, reduce situational awareness and cause unacceptable down-time for critical systems or operations. Computationally efficient light transport models are desired for computational imaging to improve remote sensing capabilities in degraded optical environments. To this end, we have developed a model based on a weak angular dependence approximation to the Boltzmann or radiative transfer equation that appears to be applicable in both the moderate and highly scattering regimes, thereby covering the applicability domain of both the small angle and diffusion approximations. An analytic solution was derived and validated using experimental data acquired at the Sandia National Laboratory Fog Chamber facility. The evolution of the fog particle density and size distribution was measured and used to determine macroscopic absorption and scattering properties using Mie theory. A three-band (0.532, 1.55, and 9.68 μm) transmissometer with lock-in amplifiers enabled changes in fog density of over an order of magnitude to be measured due to the increased transmission at longer wavelengths, covering both the moderate and highly scattering regimes. The meteorological optical range parameter is shown to be about 0.6 times the transport mean free path length, suggesting an improved physical interpretation of this parameter.
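The reported relation between the meteorological optical range (MOR) and the transport mean free path can be illustrated with the standard Koschmieder definition of MOR (the range at which contrast falls to 5%) and the textbook definition of the transport mean free path. The optical coefficients below are illustrative fog-like values chosen for the sketch, not the paper's measurements:

```python
import math

def mor(beta_ext, contrast=0.05):
    # Koschmieder relation: MOR = ln(1/C) / beta_ext, with C = 5% contrast
    return math.log(1.0 / contrast) / beta_ext

def transport_mfp(mu_s, mu_a, g):
    # Transport mean free path: l* = 1 / (mu_a + mu_s * (1 - g)),
    # where g is the scattering anisotropy factor
    return 1.0 / (mu_a + mu_s * (1.0 - g))

# Illustrative values (assumed, not measured): units of 1/m and m
mu_s, mu_a, g = 0.03, 1e-4, 0.80
beta = mu_s + mu_a                      # extinction coefficient

print(f"MOR    = {mor(beta):.1f} m")
print(f"l*     = {transport_mfp(mu_s, mu_a, g):.1f} m")
print(f"MOR/l* = {mor(beta) / transport_mfp(mu_s, mu_a, g):.2f}")
```

For this choice of anisotropy (g = 0.80) the ratio comes out near 0.6, consistent with the value reported above; the ratio varies with g and with the relative size of absorption.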
Deciding on an imaging modality for terrain classification can be a challenging problem. For some terrain classes a given sensing modality may discriminate well, but may not have the same performance on other classes that a different sensor may be able to easily separate. The most effective terrain classification will leverage the strengths of multiple sensing modalities. The challenge of utilizing multiple sensing modalities is then determining how to combine the information in a meaningful and useful way. In this paper, we introduce a framework for effectively combining data from optical and polarimetric synthetic aperture radar sensing modalities. We demonstrate the fusion framework for two vegetation classes and two ground classes and show that fusing data from both imaging modalities has the potential to improve terrain classification relative to either modality alone.
This communication reports progress towards the development of computational sensing and imaging methods that utilize highly scattered light to extract information at greater depths in degraded visual environments like fog for improved situational awareness. As light propagates through fog, information is lost due to random scattering and absorption by micrometer-sized water droplets. Computational diffuse optical imaging shows promise for interpreting the detected scattered light, enabling greater depth penetration than current methods. Developing this capability requires verification and validation of diffusion models of light propagation in fog. We report models that were developed and compared to experimental data captured at the Sandia National Laboratory Fog Chamber facility. The diffusion approximation to the radiative transfer equation was found to predict light propagation in fog under the appropriate conditions.
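The diffusion approximation referenced above has a standard closed-form solution for an isotropic point source in an infinite homogeneous medium, which makes the modeling idea concrete. This is a textbook sketch with illustrative optical properties, not the measured fog values or the models of the paper:

```python
import math

def fluence(r, mu_a, mu_s_prime, power=1.0):
    """Fluence rate (W/m^2) at distance r (m) from an isotropic point
    source under the diffusion approximation to the radiative transfer
    equation, in an infinite homogeneous medium."""
    # Diffusion coefficient and effective attenuation coefficient
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = math.sqrt(mu_a / D)
    return power * math.exp(-mu_eff * r) / (4.0 * math.pi * D * r)

# Illustrative (assumed) properties: mu_a = 0.01/m, mu_s' = 1.0/m
for r in (1.0, 2.0, 5.0):
    print(f"r = {r:>3} m  fluence = {fluence(r, 0.01, 1.0):.4e}")
```

The exponential factor captures the bulk attenuation while the 1/r factor captures geometric spreading of the diffuse light; fitting such solutions to detected intensities is the basic mechanism behind diffuse optical imaging.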
There are several factors that should be considered for robust terrain classification. We address the issue of high pixel-wise variability within terrain classes from remote sensing modalities, when the spatial resolution is less than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, then makes a superpixel-level terrain classification decision by the majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
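The superpixel-level voting step described above can be sketched directly: given per-pixel class decisions (produced upstream by a classifier such as PFF, which is not reproduced here) and a superpixel segmentation (e.g. from SLIC), each superpixel takes the majority class of its member pixels:

```python
import numpy as np

def superpixel_vote(pixel_labels, superpixels):
    """Majority-vote terrain decision within each superpixel.

    pixel_labels: (H, W) int array of per-pixel class decisions
    superpixels:  (H, W) int array of superpixel ids
    Returns an (H, W) array where every pixel carries its
    superpixel's majority-vote class."""
    out = np.empty_like(pixel_labels)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        votes = np.bincount(pixel_labels[mask])  # count class occurrences
        out[mask] = votes.argmax()               # majority class wins
    return out
```

Voting suppresses isolated pixel-wise errors caused by within-class variability: a few misclassified pixels inside an otherwise homogeneous superpixel are overruled by the majority.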
The detection, location, and identification of suspected underground nuclear explosions (UNEs) are global security priorities that rely on integrated analysis of multiple data modalities for uncertainty reduction in event analysis. Vegetation disturbances may provide complementary signatures that can confirm or build on the observables produced by prompt sensing techniques such as seismic or radionuclide monitoring networks. For instance, the emergence of non-native species in an area may be indicative of anthropogenic activity or changes in vegetation health may reflect changes in the site conditions resulting from an underground explosion. Previously, we collected high spatial resolution (10 cm) hyperspectral data from an unmanned aerial system at a legacy underground nuclear explosion test site and its surroundings. These data consist of visible and near-infrared wavebands over 4.3 km² of high desert terrain along with high spatial resolution (2.5 cm) RGB context imagery. In this work, we employ various spectral detection and classification algorithms to identify and map vegetation species in an area of interest containing the legacy test site. We employ a frequentist framework for fusing multiple spectral detections across various reference spectra captured at different times and sampled from multiple locations. The spatial distribution of vegetation species is compared to the location of the underground nuclear explosion. We find a difference in species abundance within a 130 m radius of the center of the test site.
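The abstract does not specify the frequentist fusion rule used. One standard frequentist way to fuse independent per-spectrum detection p-values is Fisher's combined probability test, sketched here purely as an illustration of the idea, not as the paper's actual method:

```python
import math

def fisher_combine(p_values):
    """Fisher's combined probability test: fuse k independent detection
    p-values into one. Under the null hypothesis (no target present)
    the statistic X = -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom."""
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    # Chi-square survival function has a closed form for even df = 2k
    half = x / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
```

Two marginal detections reinforce each other: fusing p = 0.05 with p = 0.05 yields a combined p-value well below 0.05, which is the qualitative behavior any detection-fusion rule should exhibit.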
We present simulation results quantitatively showing that circularly polarized light persists in transmission through several real-world and model fog environments better than linearly polarized light over broad wavelength ranges from the visible through the infrared. We present results for polydisperse particle distributions from realistic and measured fog environments, comparing the polarization persistence of linear and circular polarization. Using a polarization-tracking Monte Carlo program, we simulate polarized light propagation through four MODTRAN fog models (moderate and heavy radiation fog and moderate and heavy advection fog) and four real-world measured fog particle distributions (Garland measured radiation and advection fogs, Kunkel measured advection fog, and Sandia National Laboratories’ Fog Facility’s fog). Simulations were performed for each fog environment with wavelengths ranging from 0.4 to 12 μm for increasing optical thicknesses of 5, 10, and 15 (increasing fog density or sensing range). Circular polarization persists better than linear polarization across all optical wavelength bands from the visible to the long-wave infrared in nearly all fog types at all optical thicknesses. Throughout our analysis, we show that if even a small percentage of a fog’s particle size distribution is made up of large particles, those particles dominate the scattering process. In nearly all real-world fog situations, these large particles and their dominant scattering characteristics are present. Larger particles are predominantly forward-scattering and contribute to the superior persistence of circular polarization over broad wavelength ranges and optical thicknesses/ranges. Circularly polarized light can transmit over 30% more signal in its intended state compared to linearly polarized light through real-world fog environments. This work broadens the understanding of how circular polarization persists through natural fog particle distributions with natural variations in mode particle radius and single or bimodal characteristics.
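The persistence comparison above rests on standard Stokes-vector polarimetry: a polarization-tracking Monte Carlo program propagates the Stokes vector S = [I, Q, U, V] through scattering events, and the surviving degrees of linear and circular polarization are read off at the detector. The Stokes values below are illustrative, not simulation outputs:

```python
import math

def degrees_of_polarization(S):
    """Degree of linear (DoLP) and circular (DoCP) polarization
    from a Stokes vector S = [I, Q, U, V]."""
    I, Q, U, V = S
    dolp = math.sqrt(Q * Q + U * U) / I   # linear polarization fraction
    docp = abs(V) / I                     # circular polarization fraction
    return dolp, docp

# Illustrative (assumed) detected Stokes vector after multiple scattering:
# linear content largely randomized, circular content better preserved
dolp, docp = degrees_of_polarization([1.0, 0.05, 0.0, 0.40])
print(f"DoLP = {dolp:.2f}, DoCP = {docp:.2f}")
```

Comparing DoLP and DoCP of the transmitted beam for matched launch conditions is what quantifies the "persists better" claims in the abstract.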