Publications

53 Results


Vehicle track detection in synthetic aperture radar imagery

Malinas, Rebecca; Quach, Tu T.; Koch, Mark W.

The various technologies presented herein relate to detecting one or more vehicle tracks in radar imagery. A CCD image can be generated from a first SAR image and a second SAR image captured for a common scene, wherein the second SAR image may include a vehicle track that is not present in the first SAR image. A Radon transform (RT) process can be applied to each pixel in the CCD image, and further, a radial derivative (RDRT) can be determined for each pixel from RT values derived for each pixel. Each pixel can be labelled as being related to a track, or not, based upon a unary cost obtained from the RDRT value of that pixel, combined with a probability of the pixel label based upon labels applied to neighboring pixels. A labelled representation of the CCD image can be generated based upon the determination of “track” or “not track”.
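
For readers who want a feel for the Radon-transform step, here is a minimal Python sketch (not the patented implementation) that computes a local Radon transform around a pixel of a CCD image and takes its radial derivative; the window size, angle count, and toy CCD data are assumptions made for illustration.

```python
# Minimal sketch, assuming a NumPy/scikit-image environment: local Radon
# transform and its radial derivative for one pixel neighborhood of a CCD
# image. Window size, angle sampling, and toy data are illustrative.
import numpy as np
from skimage.transform import radon

def rdrt_response(ccd, row, col, half_win=16, n_angles=36):
    """Peak |radial derivative| of the Radon transform of the window centered
    at (row, col); a large value suggests a linear (track-like) structure."""
    patch = ccd[row - half_win:row + half_win, col - half_win:col + half_win]
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(patch, theta=theta, circle=False)  # rows index radial offset
    radial_deriv = np.gradient(sinogram, axis=0)        # derivative along the radial axis
    return np.abs(radial_deriv).max()

# Toy CCD image: high coherence everywhere except along a faint diagonal track.
rng = np.random.default_rng(0)
ccd = rng.uniform(0.6, 1.0, size=(128, 128))
for i in range(40, 90):
    ccd[i, i] = 0.1
print(rdrt_response(ccd, 64, 64))    # pixel on the track: typically a large response
print(rdrt_response(ccd, 20, 100))   # pixel off the track: typically smaller
```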


Open set recognition of aircraft in aerial imagery using synthetic template models

Proceedings of SPIE - The International Society for Optical Engineering

Bapst, Aleksander B.; Tran, Jonathan; Koch, Mark W.; Moya, Mary M.; Swahn, Robert

Fast, accurate and robust automatic target recognition (ATR) in optical aerial imagery can provide game-changing advantages to military commanders and personnel. ATR algorithms must reject non-targets with a high degree of confidence in a world with an infinite number of possible input images. Furthermore, they must learn to recognize new targets without requiring massive data collections. Whereas most machine learning algorithms classify data in a closed set manner by mapping inputs to a fixed set of training classes, open set recognizers incorporate constraints that allow for inputs to be labelled as unknown. We have adapted two template-based open set recognizers to use computer generated synthetic images of military aircraft as training data, to provide a baseline for military-grade ATR: (1) a frequentist approach based on probabilistic fusion of extracted image features, and (2) an open set extension to the one-class support vector machine (SVM). These algorithms both use histograms of oriented gradients (HOG) as features as well as artificial augmentation of both real and synthetic image chips to take advantage of minimal training data. Our results show that open set recognizers trained with synthetic data and tested with real data can successfully discriminate real target inputs from non-targets. However, there is still a requirement for some knowledge of the real target in order to calibrate the relationship between synthetic template and target score distributions. We conclude by proposing algorithm modifications that may improve the ability of synthetic data to represent real data.
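
The flavor of the second recognizer can be sketched as follows, assuming scikit-image and scikit-learn: HOG features feed a one-class SVM whose decision score is thresholded so that low-scoring chips are rejected as unknown. The chip size, HOG parameters, nu value, and random stand-in data are illustrative, not the paper's settings.

```python
# Hedged sketch of an open-set, HOG + one-class-SVM recognizer; parameters
# and stand-in data are assumptions, not the paper's configuration.
import numpy as np
from skimage.feature import hog
from sklearn.svm import OneClassSVM

def hog_features(chips):
    return np.array([hog(c, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for c in chips])

# Synthetic training chips stand in for rendered aircraft templates.
rng = np.random.default_rng(1)
train_chips = rng.random((50, 64, 64))        # would be synthetic target renders
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(hog_features(train_chips))

test_chips = rng.random((5, 64, 64))          # would be real image chips
scores = model.decision_function(hog_features(test_chips))
labels = np.where(scores > 0.0, "target", "unknown")   # open-set rejection
print(list(zip(np.round(scores, 3), labels)))
```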


Rapid abstract perception to enable tactical unmanned system operations

Proceedings of SPIE - The International Society for Optical Engineering

Buerger, Stephen P.; Parikh, Anup N.; Spencer, Steven J.; Koch, Mark W.

As unmanned systems (UMS) proliferate for security and defense applications, autonomous control system capabilities that enable them to perform tactical operations are of increasing interest. These operations, in which UMS must match or exceed the performance and speed of people or manned assets, even in the presence of dynamic mission objectives and unpredictable adversary behavior, are well beyond the capability of even the most advanced control systems demonstrated to date. In this paper we deconstruct the tactical autonomy problem, identify the key technical challenges, and place them into context with the autonomy taxonomy produced by the US Department of Defense's Autonomy Community of Interest. We argue that two key capabilities beyond the state of the art are required to enable an initial fieldable capability: rapid abstract perception in appropriate environments, and tactical reasoning. We summarize our work to date in tactical reasoning, and present initial results from a new research program focused on abstract perception in tactical environments. This approach seeks to apply semantic labels to a broad set of objects via three core thrusts. First, we use physics-based multi-sensor fusion to enable generalization from imperfect and limited training data. Second, we pursue methods to optimize sensor perspective to improve object segmentation, mapping and, ultimately, classification. Finally, we assess the potential impact of using sensors that have not traditionally been used by UMS to perceive their environment, for example hyperspectral imagers, on the ability to identify objects. Our technical approach and initial results are presented.


Terrain classification using single-pol synthetic aperture radar

Advances in Engineering Research

Koch, Mark W.; Moya, Mary M.; Steinbach, Ryan M.

Synthetic aperture radar (SAR) is a remote sensing technology that can operate day or night, except in extreme weather conditions. SAR can provide surveillance by making multiple passes over a wide area. For object-based intelligence, it is convenient to use these multiple passes to segment and classify the SAR images into objects that identify various terrains and man-made structures that we call "static features." Our approach is unique in that we have multiple SAR passes of an area over a long period of time (on the order of weeks). From these many SAR images of the same area, we can combine SAR images from different times to create a variety of SAR products. For example, we introduce a novel SAR image product that captures how different regions decorrelate at different rates. From these many SAR products, we extract superpixels, or groups of connected pixels that describe a homogeneous region. Using the pixels contained within a superpixel, we develop a series of one-class classification algorithms based on a goodness-of-fit metric that classifies terrains of interest in each SAR product for each superpixel. To combine the results from many SAR products we use P-value fusion. The result is a classification and a confidence about the different classes. To enforce spatial consistency, we represent the confidence labeling of the superpixels as a conditional random field and infer the most likely labeling by maximizing the posterior probability of the random field. The result is a colorized SAR image where each color represents a different terrain class.
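
The P-value fusion step might look like the following sketch, which uses Fisher's method as a stand-in combiner since the abstract does not name the exact rule; the class names and p-values are invented for illustration.

```python
# Illustrative sketch of the P-value fusion step, using Fisher's method as a
# stand-in combiner. Each SAR product contributes a goodness-of-fit p-value
# per class for a superpixel; fusion yields one combined p-value per class.
import numpy as np
from scipy.stats import chi2

def fisher_fusion(pvalues):
    """Combine independent p-values with Fisher's method."""
    pvalues = np.asarray(pvalues, dtype=float)
    stat = -2.0 * np.sum(np.log(pvalues))
    return chi2.sf(stat, df=2 * pvalues.size)

def classify_superpixel(pvalues_by_class):
    fused = {cls: fisher_fusion(p) for cls, p in pvalues_by_class.items()}
    best = max(fused, key=fused.get)          # class the superpixel fits best
    return best, fused

# Toy example: three SAR products voting on one superpixel.
print(classify_superpixel({
    "road":       [0.40, 0.55, 0.35],
    "vegetation": [0.02, 0.08, 0.01],
    "water":      [0.001, 0.003, 0.002],
}))
```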


Low-Level track finding and completion using random fields

IS&T International Symposium on Electronic Imaging Science and Technology

Quach, Tu T.; Malinas, Rebecca; Koch, Mark W.

Coherent change detection (CCD) images, which are products of combining two synthetic aperture radar (SAR) images taken at different times of the same scene, can reveal subtle surface changes such as those made by tire tracks. These images, however, have low texture and are noisy, making it difficult to automate track finding. Existing techniques either require user cues and can only trace a single track, or make use of templates that are difficult to generalize to different types of tracks, such as those made by motorcycles or vehicles of different sizes. This paper presents an approach to automatically identify vehicle tracks in CCD images. We identify high-quality track segments and leverage the constrained Delaunay triangulation (CDT) to find completion track segments. We then impose global continuity and track smoothness using a binary random field on the resulting CDT graph to determine edges that belong to real tracks. Experimental results show that our algorithm outperforms existing state-of-the-art techniques in both accuracy and speed.
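
The completion idea can be illustrated with the sketch below, in which SciPy's unconstrained Delaunay triangulation stands in for the constrained (CDT) variant used in the paper; the segments and edge bookkeeping are toy assumptions.

```python
# Minimal sketch, with scipy.spatial.Delaunay standing in for a constrained
# Delaunay triangulation: endpoints of detected track segments are
# triangulated, and triangulation edges that are not already detected segments
# become candidate completion edges for the random-field stage.
import numpy as np
from scipy.spatial import Delaunay

segments = np.array([[[0, 0], [5, 1]],     # each row: one detected segment
                     [[7, 2], [12, 3]],    # given by its two (x, y) endpoints
                     [[14, 3], [20, 5]]], dtype=float)
points = segments.reshape(-1, 2)
known = {(0, 1), (2, 3), (4, 5)}           # endpoint-index pairs of real segments

candidates = set()
for simplex in Delaunay(points).simplices:
    for i in range(3):                     # collect every edge of every triangle
        a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
        candidates.add((a, b))
completion = candidates - known            # edges that could bridge the gaps
print(sorted(completion))                  # input to the binary random field
```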


Road segmentation using multipass single-pol synthetic aperture radar imagery

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Koch, Mark W.; Moya, Mary M.; Chow, James G.; Goold, Jeremy; Malinas, Rebecca

Synthetic aperture radar (SAR) is a remote sensing technology that can truly operate 24/7. It's an all-weather system that can operate at any time except in the most extreme conditions. By making multiple passes over a wide area, a SAR can provide surveillance over a long time period. For high-level processing it is convenient to segment and classify the SAR images into objects that identify various terrains and man-made structures that we call 'static features.' In this paper we concentrate on automatic road segmentation. This not only serves as a surrogate for finding other static features, but road detection in and of itself is important for aligning SAR images with other data sources. We introduce a novel SAR image product that captures how different regions decorrelate at different rates. We also show how a modified Kolmogorov-Smirnov test can be used to model the static features even when the independent observation assumption is violated.
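
One common way to adapt a Kolmogorov-Smirnov test to correlated observations is to replace the sample count with an effective number of independent samples. The sketch below illustrates that idea; the lag-1 autocorrelation correction and the toy data are assumptions, and the paper's exact modification may differ.

```python
# Hedged sketch of a KS goodness-of-fit test with an effective-sample-size
# correction for correlated data; not the paper's exact modification.
import numpy as np
from scipy.stats import kstwobign, norm

def modified_ks_pvalue(samples, cdf=norm.cdf):
    """KS p-value using an effective sample size to soften the i.i.d. assumption."""
    samples = np.asarray(samples, dtype=float)
    x = np.sort(samples)
    n = x.size
    d = np.max(np.abs(np.arange(1, n + 1) / n - cdf(x)))   # KS statistic
    r1 = np.corrcoef(samples[:-1], samples[1:])[0, 1]       # lag-1 autocorrelation
    n_eff = n * (1 - r1) / (1 + r1) if r1 > 0 else n        # effective sample size
    return kstwobign.sf(np.sqrt(n_eff) * d)                 # asymptotic p-value

rng = np.random.default_rng(2)
iid = rng.normal(size=500)
ar = np.convolve(rng.normal(size=504), np.ones(5) / 5, mode="valid")  # correlated draws
print(modified_ks_pvalue(iid))             # standard-normal fit, i.i.d. samples
print(modified_ks_pvalue(ar / ar.std()))   # same fit, but correlated samples
```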


Single-Pol Synthetic Aperture Radar Terrain Classification using Multiclass Confidence for One-Class Classifiers

Sandia journal manuscript; Not yet accepted for publication

Koch, Mark W.; Steinbach, Ryan M.; Moya, Mary M.

Except in the most extreme conditions, synthetic aperture radar (SAR) is a remote sensing technology that can operate day or night. A SAR can provide surveillance over a long time period by making multiple passes over a wide area. For object-based intelligence it is convenient to segment and classify the SAR images into objects that identify various terrains and man-made structures that we call “static features.” In this paper we introduce a novel SAR image product that captures how different regions decorrelate at different rates. Using superpixels and their first two moments we develop a series of one-class classification algorithms using a goodness-of-fit metric. P-value fusion is used to combine the results from different classes. We also show how to combine multiple one-class classifiers to get a confidence about a classification. This can be used by downstream algorithms such as a conditional random field to enforce spatial constraints.


Building detection in SAR imagery

Proceedings of SPIE - The International Society for Optical Engineering

Steinbach, Ryan M.; Koch, Mark W.; Moya, Mary M.; Goold, Jeremy

Current techniques for building detection in Synthetic Aperture Radar (SAR) imagery can be computationally expensive and/or enforce stringent requirements for data acquisition. We present a technique that is effective and efficient at determining an approximate building location from multi-pass single-pol SAR imagery. This approximate location provides focus-of-attention to specific image regions for subsequent processing. The proposed technique assumes that for the desired image, a preprocessing algorithm has detected and labeled bright lines and shadows. Because we observe that buildings produce bright lines and shadows with predetermined relationships, our algorithm uses a graph clustering technique to find groups of bright lines and shadows that create a building. The nodes of the graph represent bright line and shadow regions, while the arcs represent the relationships between the bright lines and shadows. Constraints based on angle of depression and the relationship between connected bright lines and shadows are applied to remove unrelated arcs. Once the related bright lines and shadows are grouped, their locations are combined to provide an approximate building location. Experimental results are presented to demonstrate the outcome of this technique.
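
The grouping step can be sketched with a general-purpose graph library, as below; networkx, the centroid coordinates, and the specific geometric constraint are assumptions made for illustration, not the paper's implementation.

```python
# Illustrative sketch: bright-line and shadow regions are graph nodes, arcs
# link a bright line to a shadow only when a simple geometric constraint holds
# (here: the shadow sits a short distance "below" the line), and connected
# components of the result are reported as approximate buildings.
import networkx as nx

bright_lines = {"b0": (10, 10), "b1": (40, 12)}            # region centroids (row, col)
shadows      = {"s0": (14, 10), "s1": (44, 13), "s2": (90, 80)}

G = nx.Graph()
G.add_nodes_from(bright_lines)
G.add_nodes_from(shadows)
for b, (br, bc) in bright_lines.items():
    for s, (sr, sc) in shadows.items():
        if 0 < sr - br < 8 and abs(sc - bc) < 5:           # keep only plausible pairs
            G.add_edge(b, s)

buildings = [group for group in nx.connected_components(G) if len(group) > 1]
print(buildings)    # e.g. [{'b0', 's0'}, {'b1', 's1'}]; s2 is left ungrouped
```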


Building Detection in SAR Imagery

Steinbach, Ryan M.; Koch, Mark W.; Moya, Mary M.; Goold, Jeremy

Current techniques for building detection in Synthetic Aperture Radar (SAR) imagery can be computationally expensive and/or enforce stringent requirements for data acquisition. We present a technique that is effective and efficient at determining an approximate building location. This approximate location can be used to extract a portion of the SAR image on which to perform a more robust detection. The proposed technique assumes that for the desired image, bright lines and shadows (SAR artifact effects) have been approximately labeled. These labels are enhanced and used to locate buildings, provided the related bright lines and shadows can be grouped. To find which of the bright lines and shadows are related, all of the bright lines are connected to all of the shadows. This allows the problem to be solved from a connected-graph viewpoint, where the nodes are the bright lines and shadows and the arcs are the connections between them. Constraints based on angle of depression and the relationship between connected bright lines and shadows are applied to remove unrelated arcs. Once the related bright lines and shadows are grouped, their locations are combined to provide an approximate building location. Experimental results are provided showing the outcome of the technique.


Superpixel segmentation using multiple SAR image products

Proceedings of SPIE - The International Society for Optical Engineering

Koch, Mark W.; Perkins, David N.; West, Roger D.

Sandia National Laboratories produces copious amounts of high-resolution, single-polarization Synthetic Aperture Radar (SAR) imagery, much more than available researchers and analysts can examine. Automating the recognition of terrains and structures in SAR imagery is highly desired. The optical image processing community has shown that superpixel segmentation (SPS) algorithms divide an image into small compact regions of similar intensity. Applying these SPS algorithms to optical images can reduce image complexity, enhance statistical characterization and improve segmentation and categorization of scene objects. SPS algorithms typically require high SNR (signal-to-noise-ratio) images to define segment boundaries accurately. Unfortunately, SAR imagery contains speckle, a product of coherent image formation, which complicates the extraction of superpixel segments and could preclude their use. Some researchers have developed modified SPS algorithms that discount speckle for application to SAR imagery. We apply two widely-used SPS algorithms to speckle-reduced SAR image products, both single SAR products and combinations of multiple SAR products, which include both single polarization and multi-polarization SAR images. To evaluate the quality of resulting superpixels, we compute research-standard segmentation quality measures on the match between superpixels and hand-labeled ground-truth, as well as statistical characterization of the radar-cross-section within each superpixel. Results of this quality analysis determine the best input/algorithm/parameter set for SAR imagery. Simple Linear Iterative Clustering provides faster computation time, superpixels that conform to scene-relevant structures, direct control of average superpixel size and more uniform superpixel sizes for improved statistical estimation which will facilitate subsequent terrain/structure categorization and segmentation into scene-relevant regions. © 2014 SPIE.
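
A minimal sketch of the SLIC stage, assuming scikit-image, is shown below; the median-filter despeckling, segment count, compactness, and toy image are illustrative choices.

```python
# Minimal sketch: SLIC superpixels on a speckle-reduced (here, median-filtered)
# single-channel image standing in for a despeckled SAR product.
import numpy as np
from scipy.ndimage import median_filter
from skimage.segmentation import slic

rng = np.random.default_rng(3)
sar_like = rng.gamma(shape=1.0, scale=1.0, size=(200, 200))   # speckle-like intensities
sar_like[60:140, 60:140] *= 4.0                               # a brighter "structure"

despeckled = median_filter(sar_like, size=5)
labels = slic(despeckled, n_segments=300, compactness=0.1,
              channel_axis=None, start_label=0)
print(labels.shape, labels.max() + 1)                         # label map, superpixel count

# Per-superpixel statistics (e.g. mean intensity) feed later classification:
means = [despeckled[labels == k].mean() for k in range(labels.max() + 1)]
print(np.round(means[:5], 3))
```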


One-class multiple-look fusion: A theoretical comparison of different approaches with examples from infrared video

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Koch, Mark W.

Multiple-look fusion is quickly becoming more important in statistical pattern recognition. With increased computing power and memory one can make many measurements on an object of interest using, for example, video imagery or radar. By obtaining more views of an object, a system can make decisions with lower missed detection and false alarm errors. There are many approaches for combining information from multiple looks and we mathematically compare and contrast the sequential probability ratio test, Bayesian fusion, and Dempster-Shafer theory of evidence. Using a consistent probabilistic framework we demonstrate the differences and similarities between the approaches and show results for an application in infrared video classification. © 2013 IEEE.
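
Of the three schemes compared, the sequential probability ratio test is easy to sketch: log-likelihood ratios from successive looks are accumulated until a threshold set by the desired error rates is crossed. The Gaussian look model below is an illustrative assumption.

```python
# Sketch of Wald's sequential probability ratio test over multiple looks,
# assuming a simple Gaussian model for each look's score.
import numpy as np
from scipy.stats import norm

def sprt(looks, alpha=0.01, beta=0.01, target_mean=1.0, clutter_mean=0.0, sigma=1.0):
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for t, x in enumerate(looks, start=1):
        llr += norm.logpdf(x, target_mean, sigma) - norm.logpdf(x, clutter_mean, sigma)
        if llr >= upper:
            return "target", t      # decision and number of looks used
        if llr <= lower:
            return "clutter", t
    return "undecided", len(looks)

rng = np.random.default_rng(4)
print(sprt(rng.normal(1.0, 1.0, size=50)))   # looks drawn from the target model
print(sprt(rng.normal(0.0, 1.0, size=50)))   # looks drawn from the clutter model
```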


Automatic recognition of malicious intent indicators

Koch, Mark W.; Nguyen, Hung D.; Giron, Casey; Yee, Mark L.; Drescher, Steven M.

A major goal of next-generation physical protection systems is to extend defenses far beyond the usual outer-perimeter-fence boundaries surrounding protected facilities. Mitigation of nuisance alarms is among the highest priorities. A solution to this problem is to create a robust capability to Automatically Recognize Malicious Indicators of intruders. In extended defense applications, it is not enough to distinguish humans from all other potential alarm sources as human activity can be a common occurrence outside perimeter boundaries. Our approach is unique in that it employs a stimulus to determine a malicious intent indicator for the intruder. The intruder's response to the stimulus can be used in an automatic reasoning system to decide the intruder's intent.


Distributed Sensor Fusion in Water Quality Event Detection

Journal of Water Resources Planning and Management

Koch, Mark W.; Mckenna, Sean A.

To protect drinking water systems, a contamination warning system can use in-line sensors to indicate possible accidental and deliberate contamination. Currently, reporting of an incident occurs when data from a single station detects an anomaly. This paper proposes an approach for combining data from multiple stations to reduce false background alarms. By considering the location and time of individual detections as points resulting from a random space-time point process, Kulldorff's scan test can find statistically significant clusters of detections. Using EPANET to simulate contaminant plumes of varying sizes moving through a water network with varying amounts of sensing nodes, it is shown that the scan test can detect significant clusters of events. Also, these significant clusters can reduce the false alarms resulting from background noise and the clusters can help indicate the time and source location of the contaminant. Fusion of monitoring station results within a moderately sized network shows that false alarm errors are reduced by three orders of magnitude using the scan test. © 2011 ASCE.
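
The scan-test scoring can be sketched as below, where each candidate space-time cylinder is scored with Kulldorff's Poisson log-likelihood ratio; the grid layout, window sizes, and omission of the Monte Carlo significance step are simplifying assumptions.

```python
# Hedged sketch of the space-time scan idea: each candidate cylinder (a block
# of neighboring stations over a short time window) is scored against the
# expected count; real use adds Monte Carlo replications for significance.
import numpy as np

def poisson_llr(c_in, mu_in, c_total):
    """Kulldorff log-likelihood ratio for c_in observed vs. mu_in expected
    detections inside a cylinder, out of c_total detections overall."""
    if c_in <= mu_in:
        return 0.0
    c_out, mu_out = c_total - c_in, c_total - mu_in
    llr = c_in * np.log(c_in / mu_in)
    if c_out > 0:
        llr += c_out * np.log(c_out / mu_out)
    return llr

# detections[i, j] = alarms reported by station i during hour j
rng = np.random.default_rng(5)
detections = rng.poisson(0.2, size=(10, 24))
detections[3:6, 10:14] += rng.poisson(2.0, size=(3, 4))   # an injected contamination cluster

total = detections.sum()
baseline = total / detections.size                        # expected alarms per station-hour
scores = [(poisson_llr(detections[i:i + 3, j:j + 4].sum(), baseline * 12, total), i, j)
          for i in range(8) for j in range(21)]           # 3-station x 4-hour cylinders
print(max(scores))   # best-scoring cylinder should sit near stations 3-5, hours 10-13
```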


Learning a detection map for a network of unattended ground sensors

Koch, Mark W.; Nguyen, Hung D.

We have developed algorithms to automatically learn a detection map of a deployed sensor field for a virtual presence and extended defense (VPED) system without a priori knowledge of the local terrain. The VPED system is an unattended network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has the ability to detect and classify moving targets at a limited range. By using a network of pods we can form a virtual perimeter with each pod responsible for a certain section of the perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being installed at a site by a mobile deployment unit (MDU). The MDU will wear a GPS unit, so the system not only knows when it can detect the MDU, but also the MDU's location. In this paper, we demonstrate how to handle anisotropic sensor configurations, geography, and soil conditions.
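
A stripped-down version of the map-learning idea is sketched below: MDU positions are binned onto a grid and each cell's detection probability is estimated as detections over visits. The grid size and the synthetic anisotropic ground truth are assumptions.

```python
# Illustrative sketch of learning a detection map from the deployment walk;
# the grid size and the simulated anisotropic detectability are assumptions.
import numpy as np

rng = np.random.default_rng(8)
grid = 20                                       # 20 x 20 cells around one sensor pod
visits = np.zeros((grid, grid))
hits = np.zeros((grid, grid))

for _ in range(5000):                           # simulated MDU walk samples
    r, c = rng.integers(0, grid, size=2)        # GPS position binned to a grid cell
    dy, dx = r - grid / 2, c - grid / 2
    p_detect = np.exp(-(dx**2 / 40.0 + dy**2 / 10.0))   # anisotropic true detectability
    visits[r, c] += 1
    hits[r, c] += rng.random() < p_detect       # did the pod detect the MDU here?

detection_map = np.where(visits > 0, hits / np.maximum(visits, 1), np.nan)
print(np.round(detection_map[grid // 2], 2))    # one row of the learned map, through the pod
```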


A rapidly deployable virtual presence extended defense system

2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2009

Koch, Mark W.; Giron, Casey; Nguyen, Hung D.

We have developed algorithms for a virtual presence and extended defense (VPED) system that automatically learns the detection map of a deployed sensor field without a priori knowledge of the local terrain. The VPED system is a network of sensor pods, with each pod containing acoustic and seismic sensors. Each pod has a limited detection range, but a network of pods can form a virtual perimeter. The site's geography and soil conditions can affect the detection performance of the pods. Thus, a network in the field may not have the same performance as a network designed in the lab. To solve this problem we automatically estimate a network's detection performance as it is being constructed. We demonstrate results using simulated and real data. © 2009 IEEE.


Distributed network fusion for water quality

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Koch, Mark W.; Mckenna, Sean A.

To protect drinking water systems, a contamination warning system can use in-line sensors to detect accidental and deliberate contamination. Currently, detection of an incident occurs when data from a single station detects an anomaly. This paper considers the possibility of combining data from multiple locations to reduce false alarms and help determine the contaminant's injection source and time. If we consider the location and time of individual detections as points resulting from a random space-time point process, we can use Kulldorff's scan test to find statistically significant clusters of detections. Using EPANET, we simulate a contaminant moving through a water network and detect significant clusters of events. We show these significant clusters can distinguish true events from random false alarms and the clusters help identify the time and source of the contaminant. Fusion results show reduced errors with only 25% more sensors needed over a nonfusion approach. © 2008 ASCE.


Recognition using gait

Koch, Mark W.

Gait, or an individual's manner of walking, is one approach for recognizing people at a distance. Studies in psychophysics and medicine indicate that humans can recognize people by their gait and have found twenty-four different components to gait that, taken together, make it a unique signature. Besides not requiring close sensor contact, gait also does not necessarily require a cooperative subject. Using video data of people walking in different scenarios and environmental conditions, we develop and test an algorithm that uses shape and motion to identify people from their gait. The algorithm uses dynamic time warping to match stored templates against an unknown sequence of silhouettes extracted from a person walking. While results under similar constraints and conditions are very good, the algorithm quickly degrades with varying conditions such as surface and clothing.
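
The matching core, dynamic time warping of a probe silhouette sequence against stored templates, can be sketched as follows; the toy per-frame features and the nearest-template decision rule are illustrative assumptions.

```python
# Compact sketch of template matching by dynamic time warping; the per-frame
# feature vectors here are toy stand-ins for silhouette shape descriptors.
import numpy as np

def dtw_distance(template, probe):
    """Classic O(n*m) DTW over sequences of per-frame feature vectors."""
    n, m = len(template), len(probe)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(template[i - 1] - probe[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(6)
gallery = {"alice": rng.random((30, 2)), "bob": rng.random((30, 2))}
probe = np.vstack([gallery["alice"], gallery["alice"][-5:]])   # same gait, different pacing
scores = {name: dtw_distance(tmpl, probe) for name, tmpl in gallery.items()}
print(min(scores, key=scores.get))    # identity with the smallest warped distance
```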


Practical measures of confidence for acoustic identification of ground vehicles

Proceedings of SPIE - The International Society for Optical Engineering

Haschke, Greg B.; Koch, Mark W.; Malone, Kevin T.

An unattended ground sensor (UGS) that attempts to perform target identification without providing some corresponding estimate of confidence level is of limited utility. In this context, a confidence level is a measure of probability that the detected vehicle is of a particular target class. Many identification methods attempt to match features of a detected vehicle to each of a set of target templates. Each template is formed empirically from features collected from vehicles known to be members of the particular target class. The nontarget class is inherent in this formulation and must be addressed in providing a confidence level. Often, it is difficult to adequately characterize the nontarget class empirically by feature collection, so assumptions must be made about the nontarget class. An analyst tasked with deciding how to use the confidence level of the classifier decision should have an accurate understanding of the meaning of the confidence level given. This paper compares several definitions of confidence level by considering the assumptions that are made in each, how these assumptions affect the meaning, and giving examples of implementing them in a practical acoustic UGS.


A 2D range Hausdorff approach for 3D face recognition

Russ, Trina D.; Koch, Mark W.; Little, Charles Q.

This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.
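
A hedged sketch of the range-image idea follows: a distance transform of the template's valid pixels lets each probe pixel be scored in constant time, giving linear overall cost. The tolerances and toy depth images are assumptions, and the published algorithm differs in detail.

```python
# Hedged sketch (not the exact published algorithm): both faces are assumed
# registered onto the same range-image grid, a distance transform of the
# template's valid pixels is precomputed, and each probe pixel is scored in
# O(1), giving O(N) overall instead of an O(N^2) point search.
import numpy as np
from scipy.ndimage import distance_transform_edt

def hausdorff_fraction(probe, template, max_pixel_dist=2.0, max_depth_diff=3.0):
    """Fraction of valid probe pixels with a template pixel nearby in both
    image position and depth; NaN marks invalid (background) pixels."""
    template_valid = ~np.isnan(template)
    dist, inds = distance_transform_edt(~template_valid, return_indices=True)
    ri, ci = inds                                   # nearest valid template pixel
    probe_valid = ~np.isnan(probe)
    near = dist[probe_valid] <= max_pixel_dist
    nearest_depth = template[ri[probe_valid], ci[probe_valid]]
    depth_ok = np.abs(probe[probe_valid] - nearest_depth) <= max_depth_diff
    return np.mean(near & depth_ok)

# Toy "faces": a quadratic depth bump, with the probe shifted by one pixel.
y, x = np.mgrid[0:64, 0:64]
r2 = (x - 32) ** 2 + (y - 32) ** 2
template = np.where(r2 < 400, r2 / 40.0, np.nan)
probe = np.roll(template, 1, axis=1)
print(hausdorff_fraction(probe, template))          # close to 1.0 for a near-match
```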


A prescreener for 3D face recognition using radial symmetry and the Hausdorff fraction

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops

Koudelka, Melissa L.; Koch, Mark W.; Russ, Trina D.

Face recognition systems require the ability to efficiently scan an existing database of faces to locate a match for a newly acquired face. The large number of faces in real world databases makes computationally intensive algorithms impractical for scanning entire databases. We propose the use of more efficient algorithms to “prescreen” face databases, determining a limited set of likely matches that can be processed further to identify a match. We use both radial symmetry and shape to extract five features of interest on 3D range images of faces. These facial features determine a very small subset of discriminating points which serve as input to a prescreening algorithm based on a Hausdorff fraction. We show how to compute the Hausdorff fraction in linear O(n) time using a range image representation. Our feature extraction and prescreening algorithms are verified using the FRGC v1.0 3D face scan data. Results show 97% of the extracted facial features are within 10 mm or less of manually marked ground truth, and the prescreener has a rank 6 recognition rate of 100%.


3D facial recognition: A quantitative analysis

Proceedings - International Carnahan Conference on Security Technology

Russ, Trina D.; Koch, Mark W.; Little, Charles Q.

Two-dimensional facial recognition has traditionally been an attractive biometric; however, the accuracy of 2D facial recognition (FR) is performance-limited and insufficient when confronted with extensive numbers of people to screen and identify, and the numerous appearances that a 2D face can exhibit. In efforts to overcome many of the issues limiting 2D FR technology, researchers are beginning to focus their attention on 3D FR technology. In this paper, an analysis of a 3D FR system being developed at Sandia National Laboratories is performed. The study involves the use of 200 subjects on which verification (one-to-one) matches are performed using a single probe database (one correct match per subject) and 30 subjects on which identification matches are performed. The system is evaluated in terms of probability of detection (Pd) and probability of false accepts (FAR). The results presented will aid in providing an initial understanding of the performance of 3D FR. © 2004 IEEE.


A 2D range Hausdorff approach to 3D facial recognition

Koch, Mark W.; Little, Charles Q.

This paper presents a 3D facial recognition algorithm based on the Hausdorff distance metric. The standard 3D formulation of the Hausdorff matching algorithm has been modified to operate on a 2D range image, enabling a reduction in computation from O(N²) to O(N) without large storage requirements. The Hausdorff distance is known for its robustness to data outliers and inconsistent data between two data sets, making it a suitable choice for dealing with the inherent problems in many 3D datasets due to sensor noise and object self-occlusion. For optimal performance, the algorithm assumes a good initial alignment between probe and template datasets. However, to minimize the error between two faces, the alignment can be iteratively refined. Results from the algorithm are presented using 3D face images from the Face Recognition Grand Challenge database version 1.0.


A novel window based method for approximating the Hausdorff in 3D range imagery

Koch, Mark W.

Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.


Markov sequential pattern recognition: dependency and the unknown class

Koch, Mark W.; Haschke, Greg B.; Malone, Kevin T.

The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process we show how to use the effective amount of independent information to modify the decision process, so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness of fit (GOF) classifiers into the Markov SPRT, and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.


Syndrome Surveillance Using Parametric Space-Time Clustering

Koch, Mark W.; Mckenna, Sean A.; Bilisoly, Roger L.

As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a “health index” of the people living in this area. As a surrogate to bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.


Feature discovery in gray level imagery for one-class object recognition

IEEE International Conference on Neural Networks - Conference Proceedings

Koch, Mark W.

Feature extraction transforms an object's image representation to an alternate reduced representation. In one-class object recognition, we would like this alternate representation to give improved discrimination between the object and all possible non-objects and improved generalization between different object poses. Feature selection can be time-consuming and difficult to optimize so we have investigated unsupervised neural networks for feature discovery. We first discuss an inherent limitation in competitive type neural networks for discovering features in gray level images. We then show how Sanger's Generalized Hebbian Algorithm (GHA) removes this limitation and describe a novel GHA application for learning object features that discriminate the object from clutter. Using a specific example, we show how these features are better at distinguishing the target object from other non-target objects with Carpenter's ART 2-A as the pattern classifier.
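
Sanger's Generalized Hebbian Algorithm itself is compact enough to sketch; the update rule below is standard GHA, while the toy data, learning rate, and feature count are assumptions for illustration.

```python
# Small sketch of Sanger's Generalized Hebbian Algorithm: the weight rows move
# toward the leading principal components of the inputs.
import numpy as np

def gha(samples, n_features=2, lr=0.01, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_features, samples.shape[1]))
    for _ in range(epochs):
        for x in samples:
            y = W @ x
            # Sanger's rule: dW = lr * (y x^T - LT[y y^T] W), LT = lower triangle
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Toy "gray-level patches": 16-dimensional vectors dominated by two directions.
rng = np.random.default_rng(7)
basis = rng.normal(size=(2, 16))
basis /= np.linalg.norm(basis, axis=1, keepdims=True)
samples = rng.normal(size=(500, 2)) @ (basis * np.array([[3.0], [1.0]]))
samples += 0.05 * rng.normal(size=samples.shape)

W = gha(samples)
print(np.round(W @ W.T, 2))    # rows approach orthonormal principal directions
```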


Detecting residue on a printed circuit board: An application of the Boundary Contour/Feature Contour System

Koch, Mark W.

We have developed a video detection algorithm for measuring the residue left on a printed circuit board after a soldering process. Oblique lighting improves the contrast between the residue and the board substrate, but also introduces an illumination gradient. The algorithm uses the Boundary Contour System/Feature Contour System to produce an idealized clean board image by discounting the illuminant, detecting trace boundaries, and filling the trace and substrate regions. The algorithm then combines the original input image and ideal image using mathematical models of the normal and inverse Weber Law to enhance the residue on the traces and substrate. The paper includes results for a clean board and one with residue.
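
Only the final Weber-law combination step is simple enough to sketch here; the BCS/FCS stage that produces the idealized image is not reproduced, and the toy images and epsilon are assumptions.

```python
# Hedged sketch of only the combination step: given the original image and an
# idealized clean-board image, a Weber-style ratio highlights residue relative
# to the local ideal brightness. The BCS/FCS stage is not reproduced here.
import numpy as np

def weber_residue(original, ideal, eps=1e-6):
    """Weber-style contrast of the original against the idealized clean board."""
    return np.abs(original - ideal) / (ideal + eps)

ideal = np.full((8, 8), 100.0)          # toy clean-board region (uniform substrate)
original = ideal.copy()
original[3:5, 3:5] = 130.0              # a patch of residue
print(np.round(weber_residue(original, ideal), 2))
```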
