Ship tracks are quasi-linear cloud patterns produced by the interaction of ship emissions with low boundary-layer clouds. They are visible throughout the diurnal cycle in imagery from space-borne assets such as the Advanced Baseline Imager (ABI) aboard the National Oceanic and Atmospheric Administration (NOAA) Geostationary Operational Environmental Satellites (GOES-R series). However, complex atmospheric dynamics often make it difficult to identify and characterize the formation and evolution of these tracks. Ship tracks have the potential to increase a cloud's albedo and reduce the impact of global warming. It is therefore important to study these patterns to better understand the complex interactions between aerosols and clouds, to improve our climate models, and to examine the efficacy of climate interventions such as marine cloud brightening. Over the course of this 3-year project, we have developed novel data-driven techniques that advance our ability to assess the effects of ship emissions on marine environments and the risks of future marine cloud brightening efforts. The three main innovative technical contributions documented here are a method for tracking aerosol injections using optical flow, a stochastic simulation model of track formation, and an automated detection algorithm for efficient identification of ship tracks in large datasets.
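To make the optical-flow tracking idea concrete, the sketch below advects a set of seed pixels marking an aerosol injection from one satellite frame to the next using dense optical flow. The use of OpenCV's Farneback estimator, the function name advect_seed_points, and all parameter values are illustrative assumptions, not the project's actual pipeline.

```python
# Illustrative sketch only: follow an aerosol injection between two successive
# frames (e.g., normalized ABI radiances) with dense optical flow.
# The Farneback estimator and all parameters below are assumptions.
import numpy as np
import cv2


def advect_seed_points(frame_t0, frame_t1, seed_points):
    """Advect (row, col) seed locations from frame_t0 to frame_t1.

    frame_t0, frame_t1 : 2-D float arrays (successive satellite frames).
    seed_points        : (N, 2) array of (row, col) pixel locations.
    """
    # The Farneback flow estimator expects 8-bit single-channel images.
    f0 = cv2.normalize(frame_t0, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    f1 = cv2.normalize(frame_t1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    # Dense flow field: flow[..., 0] is column (x) displacement,
    # flow[..., 1] is row (y) displacement, in pixels.
    flow = cv2.calcOpticalFlowFarneback(f0, f1, None, 0.5, 3, 15, 3, 5, 1.2, 0)

    rows = seed_points[:, 0].astype(int)
    cols = seed_points[:, 1].astype(int)
    new_cols = cols + flow[rows, cols, 0]
    new_rows = rows + flow[rows, cols, 1]
    return np.stack([new_rows, new_cols], axis=1)
```

In a pipeline of this kind, repeating the step frame to frame yields a trajectory for the injected aerosol that can be compared against the apparent motion of the surrounding cloud field.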
A new method is introduced for combining information from multiple sources to support one-class classification. The contributing sources may represent measurements taken by different sensors of the same physical entity, repeated measurements by a single sensor, or numerous features computed from a single measured image or signal. The approach draws on the theory of statistical hypothesis testing and applies Fisher's technique for combining p-values, modified to handle non-independent sources. Classifier outputs take the form of fused p-values, which may be used to gauge the consistency of unknown entities with one or more class hypotheses. The approach enables rigorous assessment of classification uncertainties and allows classifier decisions to be traced back to the constituent sources, both of which are important for high-consequence decision support. Application of the technique is illustrated on two challenge problems, one for skin segmentation and the other for terrain labeling. The method is particularly effective for relatively small training samples.
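As a point of reference, the sketch below shows the classical, independent-source form of Fisher's method for fusing per-source p-values into a single consistency score; the paper's modification for non-independent sources is not reproduced here, and the function name is hypothetical.

```python
# Classical Fisher combination of k independent p-values: a minimal sketch.
# Under the null hypothesis, -2 * sum(log p_i) follows a chi-squared
# distribution with 2k degrees of freedom.
import numpy as np
from scipy.stats import chi2


def fisher_fused_pvalue(p_values):
    """Fuse per-source p-values into a single class-consistency p-value."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.sum(np.log(p))  # Fisher's combined statistic
    dof = 2 * p.size                      # chi-squared degrees of freedom
    return chi2.sf(statistic, dof)        # fused p-value (survival function)


# Example: three sources, each only weakly inconsistent with the class
# hypothesis, fuse to stronger combined evidence against it.
print(fisher_fused_pvalue([0.08, 0.12, 0.05]))  # ~0.018
```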
In previous research, two-pass repeat-geometry synthetic aperture radar (SAR) coherent change detection (CCD) has predominantly used the sample degree of coherence as a measure of the temporal change occurring between two complex-valued image collects. Such coherence-based CCD approaches tend to indicate temporal change where none has occurred in areas of the image that have a low clutter-to-noise power ratio. Instead of employing the sample coherence magnitude as a change metric, in this paper we derive a new maximum-likelihood (ML) temporal change estimate, the complex reflectance change detection (CRCD) metric, for SAR coherent temporal change detection. The CRCD estimator is a surprisingly simple expression, easy to implement, and optimal in the ML sense. As a result, it produces improved results on the coherent image pairs that we have tested.
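For reference, the sketch below computes the conventional windowed sample coherence magnitude that earlier coherence-based CCD work uses as its change metric; the CRCD estimator derived in the paper is not reproduced here, and the function name, window size, and numerical guard are illustrative assumptions.

```python
# Conventional sample degree of coherence between two co-registered,
# complex-valued SAR images f and g, estimated over a sliding window.
# This is the baseline metric that coherence-based CCD thresholds;
# it is not the CRCD estimator derived in the paper.
import numpy as np
from scipy.ndimage import uniform_filter


def sample_coherence_magnitude(f, g, window=5):
    """Windowed estimate |gamma_hat| in [0, 1]; low values suggest change."""
    def local_mean(x):
        # uniform_filter works on real arrays, so filter real/imaginary parts separately.
        return (uniform_filter(np.real(x), size=window)
                + 1j * uniform_filter(np.imag(x), size=window))

    cross = local_mean(f * np.conj(g))
    power_f = local_mean(np.abs(f) ** 2).real
    power_g = local_mean(np.abs(g) ** 2).real
    # Small constant guards against division by zero in empty regions.
    return np.abs(cross) / np.sqrt(power_f * power_g + 1e-12)
```

Low clutter-to-noise regions decorrelate even without scene change, which is why a coherence map alone tends to flag false change there.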
In an effort to improve classification of terrain features in fully polarimetric SAR images, this paper explores the utility of combining the results of two state-of-the-art decompositions with a semi-supervised classification algorithm to classify each pixel in an image. Each pixel is either assigned a pre-determined class label or marked as unknown.
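A minimal sketch of such a pixel-labeling step appears below, under assumptions of ours: per-pixel features from the two decompositions are stacked, a graph-based semi-supervised classifier from scikit-learn (LabelSpreading) propagates the training labels, and pixels whose predicted class probability falls below a threshold are reported as unknown. The classifier choice, threshold, and function name are illustrative, not the paper's specific algorithm.

```python
# Illustrative sketch: semi-supervised pixel labeling with an explicit
# "unknown" outcome. LabelSpreading and the 0.9 confidence threshold are
# assumptions for illustration, not the paper's algorithm.
import numpy as np
from sklearn.semi_supervised import LabelSpreading

UNKNOWN = -1  # sentinel for pixels the classifier will not commit to


def classify_pixels(features, seed_labels, confidence=0.9):
    """features    : (n_pixels, n_features) stack of decomposition outputs.
    seed_labels : (n_pixels,) ints; -1 marks pixels without a training label."""
    model = LabelSpreading(kernel="knn", n_neighbors=7)
    model.fit(features, seed_labels)  # entries equal to -1 are treated as unlabeled

    proba = model.predict_proba(features)
    labels = model.classes_[np.argmax(proba, axis=1)]
    # Pixels where no class reaches the confidence threshold stay unknown.
    labels[np.max(proba, axis=1) < confidence] = UNKNOWN
    return labels
```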