Publications

Tracking of streaking targets in video frames

IEEE Aerospace Conference Proceedings

Finelli, Andrew; Willett, Peter; Bar-Shalom, Yaakov; Melgaard, David K.; Byrne, Raymond

A method for tracking streaking targets (targets whose signatures are spread across multiple pixels in a focal plane array) is developed. The outputs of a bank of matched filters are thresholded and then used for measurement extraction. The use of the Deep Target Extractor (DTE, previously called the MLPMHT) allows for tracking in the very low observable (VLO) environment common when a streaking target is present. A definition of moving target signal-to-noise ratio (MT-SNR) is also presented as a metric for trackability. The extraction algorithm and the DTE are then tested across several variables, including trajectory, MT-SNR, and streak length. The DTE and measurement extraction process perform remarkably well on these data in this difficult tracking environment.
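
The following is a minimal sketch of the matched-filter-bank idea described above, assuming unit-energy line kernels at a few candidate streak orientations and a simple global threshold; the kernel shapes, angles, and threshold are illustrative choices, not the paper's implementation.

```python
# Minimal matched-filter bank for streak detection (illustrative sketch).
import numpy as np
from scipy.ndimage import rotate, convolve

def streak_kernels(length=9, angles=(0, 45, 90, 135)):
    """Build unit-energy line kernels, one per candidate streak orientation."""
    base = np.zeros((length, length))
    base[length // 2, :] = 1.0                      # horizontal line segment
    kernels = []
    for a in angles:
        k = rotate(base, a, reshape=False, order=1)
        kernels.append(k / np.linalg.norm(k))       # normalize filter energy
    return kernels

def extract_measurements(frame, kernels, threshold):
    """Threshold the maximum response over the filter bank at each pixel."""
    responses = np.stack([convolve(frame, k) for k in kernels])
    best = responses.max(axis=0)                    # best-matching orientation
    return np.argwhere(best > threshold)            # candidate streak pixels
```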

Evaluation of urban vehicle tracking algorithms

IEEE Aerospace Conference Proceedings

Love, Joshua A.; Hansen, Ross L.; Melgaard, David K.; Karelitz, David B.; Pitts, Todd A.; Byrne, Raymond H.

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, blob tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment. The algorithms considered are: random sample consensus (RANSAC), Markov chain Monte Carlo data association (MCMCDA), tracklet inference from factor graphs, and a proximity tracker. Each algorithm was tested on a combination of real and simulated data and evaluated against a common set of metrics.
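
As a concrete illustration of the simplest of the evaluated approaches, here is a hypothetical nearest-neighbor proximity tracker with a fixed association gate; the greedy assignment and the `gate` parameter are illustrative simplifications, not the implementation evaluated in the report.

```python
# Greedy nearest-neighbor association with a fixed gate (illustrative sketch).
import numpy as np

def associate(tracks, detections, gate=5.0):
    """Assign each track to its nearest unclaimed detection within the gate."""
    assignments = {}
    free = set(range(len(detections)))
    for tid, pos in tracks.items():
        if not free:
            break
        dists = {j: np.linalg.norm(detections[j] - pos) for j in free}
        j, d = min(dists.items(), key=lambda kv: kv[1])
        if d <= gate:
            assignments[tid] = j                    # track tid -> detection j
            free.remove(j)
    return assignments, free   # matched pairs, plus unmatched detections
```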

Large scale tracking algorithms

Byrne, Raymond H.; Hansen, Ross L.; Love, Joshua A.; Melgaard, David K.; Pitts, Todd A.; Karelitz, David B.; Zollweg, Joshua D.; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

Hyperspectral imaging of microalgae using two-photon excitation

Jones, Howland D.; Sinclair, Michael B.; Luk, Ting S.; Collins, Aaron M.; Garcia, Omar F.; Melgaard, David K.; Timlin, Jerilyn A.; Reichardt, Thomas A.

A considerable amount of research is being conducted on microalgae, since microalgae are becoming a promising source of renewable energy. Most of this research is centered on lipid production in microalgae because microalgae produce triacylglycerol, which is ideal for biodiesel fuels. Although we are interested in research to increase lipid production in algae, we are also interested in research to sustain healthy algal cultures in large-scale biomass production farms or facilities. The early detection of fluctuations in algal health, productivity, and invasive predators must be developed to ensure that algae are an efficient and cost-effective source of biofuel. Therefore, we are developing technologies to monitor the health of algae using spectroscopic measurements in the field. To do this, we have proposed to spectroscopically monitor large algal cultivations using LIDAR (Light Detection And Ranging) remote sensing technology. Before we can deploy this type of technology, we must first characterize the spectral bio-signatures that are related to algal health. Recently, we have adapted our confocal hyperspectral imaging microscope at Sandia to have two-photon excitation capabilities using a Chameleon tunable laser. We are using this microscope to understand the spectroscopic signatures necessary to characterize microalgae at the cellular level prior to using these signatures to classify the health of bulk samples, with the eventual goal of using LIDAR to monitor large-scale ponds and raceways. By imaging algal cultures using a tunable laser to excite at several different wavelengths, we will be able to select the optimal excitation/emission wavelengths needed to characterize algal cultures. To analyze the hyperspectral images generated from this two-photon microscope, we are using Multivariate Curve Resolution (MCR) algorithms to extract the spectral signatures and their associated relative intensities from the data. For this presentation, I will show our two-photon hyperspectral imaging results on a variety of microalgae species and show how these results can be used to characterize algal ponds and raceways.
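
For illustration, here is a compact sketch of an MCR alternating-least-squares loop, assuming the hyperspectral image has been unfolded to a pixels-by-wavelengths matrix D that factors as D ≈ C Sᵀ with nonnegative concentrations C and spectra S; the random initialization and fixed iteration count are simplifications of practical MCR implementations.

```python
# MCR-ALS with nonnegativity constraints (illustrative sketch).
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, n_components, n_iter=50, seed=0):
    """Alternately solve for concentrations C and spectra S, D ~ C @ S.T."""
    rng = np.random.default_rng(seed)
    S = rng.random((D.shape[1], n_components))      # initial pure spectra
    for _ in range(n_iter):
        # Nonnegative least squares, row by row (fine for small matrices).
        C = np.array([nnls(S, d)[0] for d in D])    # one pixel at a time
        S = np.array([nnls(C, d)[0] for d in D.T])  # one channel at a time
    return C, S
```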

Scene kinetics mitigation using factor analysis with derivative factors

Scholand, Andrew J.; Larson, K.W.; Melgaard, David K.

Line of sight jitter in staring sensor data combined with scene information can obscure critical information for change analysis or target detection. Consequently, the jitter effects must be significantly reduced before the data analysis. Conventional principal component analysis (PCA) has been used to obtain basis vectors for background estimation; however, PCA requires image frames that contain the jitter variation that is to be modeled. Since jitter is usually chaotic and asymmetric, a data set containing all the variation without the changes to be detected is typically not available. An alternative approach, Scene Kinetics Mitigation, first obtains an image of the scene. Then it computes derivatives of that image in the horizontal and vertical directions. The basis set for estimation of the background and the jitter consists of the image and its derivative factors. This approach has several advantages, including: (1) only a small number of images are required to develop the model, (2) the model can estimate backgrounds with jitter different from the input training images, (3) the method is particularly effective for sub-pixel jitter, and (4) the model can be developed from images before the change detection process. In addition, the scores from projecting the factors on the background provide estimates of the jitter magnitude and direction for registration of the images. In this paper we will present a discussion of the theoretical basis for this technique, provide examples of its application, and discuss its limitations.
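
A minimal sketch of the derivative-factor basis, assuming a reference scene image is available: each frame is projected onto the image and its horizontal and vertical derivatives, so the least-squares scores approximate the sub-pixel shift, following the first-order Taylor expansion implicit in the approach. Variable names are illustrative.

```python
# Derivative-factor background estimation (illustrative sketch).
import numpy as np

def build_factors(scene):
    """Basis columns: the scene image and its x and y derivatives."""
    dy, dx = np.gradient(scene.astype(float))
    return np.column_stack([scene.ravel(), dx.ravel(), dy.ravel()])

def estimate_background(frame, factors):
    """Project a frame onto the factors; scores encode sub-pixel jitter."""
    scores, *_ = np.linalg.lstsq(factors, frame.ravel(), rcond=None)
    background = factors @ scores
    # scores[1] and scores[2] approximate the jitter in x and y.
    return background.reshape(frame.shape), scores
```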

Application specific compression: final report

Melgaard, David K.; Lewis, Phillip J.; Lee, David S.; Carlson, Jeffrey J.; Byrne, Raymond H.; Harrison, Carol D.

With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
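
A minimal wavelet-thresholding sketch using the PyWavelets package, assuming a one-dimensional signal; the wavelet choice, decomposition level, and keep-fraction threshold rule are illustrative, not the settings used in the study.

```python
# Wavelet compression by coefficient thresholding (illustrative sketch).
import numpy as np
import pywt

def wavelet_compress(signal, wavelet="db4", level=4, keep=0.2):
    """Zero all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([np.abs(c) for c in coeffs])
    cutoff = np.quantile(flat, 1.0 - keep)          # e.g. zero 80% of coeffs
    thresholded = [pywt.threshold(c, cutoff, mode="hard") for c in coeffs]
    return pywt.waverec(thresholded, wavelet)       # reconstructed signal
```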

Weighting hyperspectral image data for improved multivariate curve resolution results

Journal of Chemometrics

Jones, Howland D.; Haaland, David M.; Sinclair, Michael B.; Melgaard, David K.; Van Benthem, Mark V.; Pedroso, M.C.

The combination of hyperspectral confocal fluorescence microscopy and multivariate curve resolution (MCR) provides an ideal system for improved quantitative imaging when multiple fluorophores are present. However, the presence of multiple noise sources limits the ability of MCR to accurately extract pure-component spectra when there is high spectral and/or spatial overlap between multiple fluorophores. Previously, MCR results were improved by weighting the spectral images for Poisson-distributed noise, but additional noise sources are often present. We have identified and quantified all the major noise sources in hyperspectral fluorescence images. Two primary noise sources were found: Poisson-distributed noise and detector-read noise. We present methods to quantify detector-read noise variance and to empirically determine the electron multiplying CCD (EMCCD) gain factor required to compute the Poisson noise variance. We have found that properly weighting spectral image data to account for both noise sources improved MCR accuracy. In this paper, we demonstrate three weighting schemes applied to a real hyperspectral corn leaf image and to simulated data based upon this same image. MCR applied to both real and simulated hyperspectral images weighted to compensate for the two major noise sources greatly improved the extracted pure emission spectra and their concentrations relative to MCR with either unweighted or Poisson-only weighted data. Thus, properly identifying and accounting for the major noise sources in hyperspectral images can serve to improve the MCR results. These methods are very general and can be applied to the multivariate analysis of spectral images whenever CCD or EMCCD detectors are used. Copyright © 2008 John Wiley & Sons, Ltd.
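A short sketch of the per-channel weighting described above, assuming the EMCCD variance model in which each channel's variance is the gain-scaled Poisson (shot) noise plus the detector-read-noise variance; the gain and read-variance values are placeholders to be measured for a given detector.

```python
# Per-channel noise weighting for weighted MCR (illustrative sketch).
import numpy as np

def noise_weights(counts, emccd_gain=2.0, read_variance=25.0):
    """Return weights ~ 1/sigma per channel for weighted least squares."""
    variance = emccd_gain * np.clip(counts, 0, None) + read_variance
    return 1.0 / np.sqrt(variance)

# Weighted data (rows = pixels, columns = channels) before MCR:
# D_weighted = noise_weights(D) * D
```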

Effect of processing parameters on temperature profiles, fluid flow, and pool shape in the ESR process

LMPC 2005 - Proceedings of the 2005 International Symposium on Liquid Metal Processing and Casting

Viswanathan, Srinath; Melgaard, David K.; Patel, Ashish D.; Evans, David G.

A numerical model of the ESR process was used to study the effect of the various process parameters on the resulting temperature profiles, flow field, and pool shapes. The computational domain included the slag and ingot, while the electrode, crucible, and cooling water were considered as external boundary conditions. The model considered heat transfer, fluid flow, solidification, and electromagnetic effects. The predicted pool profiles were compared with experimental results obtained over a range of processing parameters from an industrial-scale 718 alloy ingot. The shape of the melt pool was marked by dropping nickel balls down the annulus of the crucible during melting. Thermocouples placed in the electrode monitored the electrode and slag temperature as melting progressed. The cooling water temperature and flow rate were also monitored. The resulting ingots were sectioned and etched to reveal the ingot macrostructure and the shape of the melt pool. Comparisons of the predicted and experimentally measured pool profiles show excellent agreement. The effect of processing parameters, including the slag cap thickness, on the temperature distribution and flow field are discussed. The results of a sensitivity study of thermophysical properties of the slag are also discussed.

A demonstration of melt rate control during VAR of "cracked" electrodes

Journal of Materials Science

Williamson, Rodney L.; Beaman, J.J.; Melgaard, David K.; Shelmidine, G.J.; Patel, A.D.; Adasczik, C.B.

A particularly challenging problem associated with vacuum arc remelting occurs when trying to maintain accurate control of electrode melt rate as the melt zone passes through a transverse crack in the electrode. As the melt zone approaches the crack, poor heat conduction across the crack drives the local temperature in the electrode tip above its steady-state value, causing the controller to cut back on melting current in response to an increase in melting efficiency. The difficulty arises when the melt zone passes through the crack and encounters the relatively cold metal on the other side, giving rise to an abrupt drop in melt rate. This extremely dynamic melting situation is very difficult to handle using standard load-cell based melt rate control, resulting in large melt rate excursions. We have designed and tested a new generation melt rate controller that is capable of controlling melt rate through crack events. The controller is designed around an accurate dynamic melting model that uses four process variables: electrode tip thermal boundary layer, electrode gap, electrode mass and melting efficiency. Tests, jointly sponsored by the Specialty Metals Processing Consortium and Sandia National Laboratories, were performed at Carpenter Technology Corporation wherein two 0.43 m diameter Pyromet® 718 electrodes were melted into 0.51 m diameter ingots. Each electrode was cut approximately halfway through its diameter with an abrasive saw to simulate an electrode crack. Relatively accurate melt rate control through the cuts was demonstrated despite the observation of severe arc disturbances and loss of electrode gap control. Subsequent to remelting, one ingot was sectioned in the "as cast" condition, whereas the other was forged to 0.20 m diameter billet. Macrostructural characterization showed solidification white spots in regions affected by the cut in the electrode.

Model based gap and melt rate control for VAR of Ti-6Al-4V

Journal of Materials Science

Beaman, J.J.; Williamson, Rodney L.; Melgaard, David K.; Shelmidine, G.J.; Hamel, J.C.

A new controller has been designed for vacuum arc remelting titanium alloys based on an accurate, low order, nonlinear, melting model. The controller adjusts melting current and electrode drive speed to match estimated gap and melt rate with operator supplied reference values. Estimates of gap and melt rate are obtained by optimally combining predictions from the model with measurements of voltage, current, and electrode position. Controller tests were carried out at Timet Corporation's Henderson Technical Laboratory in Henderson, Nevada. Previous test results were used to correlate measured gap to voltage and current. A controller test melt was performed wherein a 0.279 m diameter Ti-6Al-4V electrode was melted into 0.356 m diameter ingot. Commanded melt rate was varied from 20 to 90 g/s and commanded gap was held at 1.5 cm. Because no measure of electrode weight was available on the test furnace, electrode position data were analyzed and the results used to determine the actual melt rate. A gap-voltage-current factor space model was used to check estimated gap. The controller performed well, and both melt rate and electrode gap control were successfully demonstrated.

Exploration of new multivariate spectral calibration algorithms

Haaland, David M.; Melgaard, David K.

A variety of multivariate calibration algorithms for quantitative spectral analyses were investigated and compared, and new algorithms were developed in the course of this Laboratory Directed Research and Development project. We were able to demonstrate the ability of the hybrid classical least squares/partial least squares (CLS/PLS) calibration algorithms to maintain calibrations in the presence of spectrometer drift and to transfer calibrations between spectrometers from the same or different manufacturers. These methods were found to be as good as or better than the commonly used partial least squares (PLS) method in prediction ability. We also present the theory for an entirely new class of algorithms labeled augmented classical least squares (ACLS) methods. New factor selection methods are developed and described for the ACLS algorithms. These factor selection methods are demonstrated using near-infrared spectra collected from a system of dilute aqueous solutions. The ACLS algorithm is also shown to provide improved ease of use and better prediction ability than PLS when transferring calibrations between near-infrared calibrations from the same manufacturer. Finally, simulations incorporating either ideal or realistic errors in the spectra were used to compare the prediction abilities of the new ACLS algorithm with that of PLS. We found that in the presence of realistic errors with non-uniform spectral error variance across spectral channels or with spectral errors correlated between frequency channels, ACLS methods generally outperformed the more commonly used PLS method. These results demonstrate the need for realistic error structure in simulations when the prediction abilities of various algorithms are compared. The combination of equal or superior prediction ability and the ease of use of the ACLS algorithms make the new ACLS methods the preferred algorithms to use for multivariate spectral calibrations.
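
A minimal sketch of the augmented CLS idea, assuming calibration spectra A (samples x channels) with known concentrations C (samples x analytes): pure-component spectra are estimated by CLS, and the spectral model is then augmented with the leading SVD factors of the CLS residuals. The factor-selection rule here (a fixed count) is a placeholder for the methods developed in the report.

```python
# Augmented classical least squares (ACLS) calibration and prediction sketch.
import numpy as np

def acls_calibrate(A, C, n_aug=2):
    """Estimate pure spectra via CLS, then augment with residual factors."""
    K, *_ = np.linalg.lstsq(C, A, rcond=None)      # CLS pure-component spectra
    residuals = A - C @ K
    _, _, Vt = np.linalg.svd(residuals, full_matrices=False)
    return np.vstack([K, Vt[:n_aug]])              # augmented spectral model

def acls_predict(spectrum, K_aug, n_analytes):
    """Fit a measured spectrum; keep only the analyte coefficients."""
    coefs, *_ = np.linalg.lstsq(K_aug.T, spectrum, rcond=None)
    return coefs[:n_analytes]
```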

Comparisons of prediction abilities of augmented classical least squares and partial least squares with realistic simulated data: effects of uncorrelated and correlated errors with nonlinearities

Proposed for publication in Applied Spectroscopy.

Melgaard, David K.; Haaland, David M.

A manuscript describing the work summarized below has been submitted to Applied Spectroscopy. Comparisons of prediction models from the new ACLS and PLS multivariate spectral analysis methods were conducted using simulated data with deviations from the idealized model. Simulated uncorrelated concentration errors and uncorrelated and correlated spectral noise were included to evaluate the methods in situations representative of experimental data. The simulations were based on pure spectral components derived from real near-infrared spectra of multicomponent dilute aqueous solutions containing glucose, urea, ethanol, and NaCl in the concentration range of 0-500 mg/dL. The statistical significance of differences was evaluated using the Wilcoxon signed rank test. The prediction abilities with nonlinearities present were similar for both calibration methods, although concentration noise, number of samples, and spectral noise distribution sometimes affected one method more than the other. In the case of ideal errors and in the presence of nonlinear spectral responses, the differences between the standard errors of prediction of the two methods were sometimes statistically significant, but the differences were always small in magnitude. Importantly, spectral residual augmented classical least squares (SRACLS) was found to be competitive with PLS when component concentrations were only known for a single component. Thus, SRACLS has a distinct advantage over standard CLS methods, which require that all spectral components be included in the model. In contrast to simulations with ideal error, SRACLS often generated models with superior prediction performance relative to PLS when the simulations were more realistic and included non-uniform and/or correlated errors. Since the generalized ACLS algorithm is compatible with the PACLS method, which allows rapid updating of models during prediction, the powerful combination of PACLS with ACLS is very promising for rapidly maintaining and transferring models for system drift, spectrometer differences, and unmodeled components without the need for recalibration. The comparisons under different noise assumptions in the simulations obtained during this investigation emphasize the need to use realistic simulations when making comparisons between various multivariate calibration methods. Clearly, the conclusions about the relative performance of the various methods were found to depend on how realistic the spectral errors were in the simulated data. Results demonstrating the simplicity and power of ACLS relative to PLS are presented in the following section.
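
For the significance testing mentioned above, a hypothetical comparison of paired per-sample prediction errors from two methods using the Wilcoxon signed rank test might look like this; the error values are placeholders.

```python
# Paired significance test on prediction errors (illustrative sketch).
import numpy as np
from scipy.stats import wilcoxon

errors_acls = np.array([0.8, 1.1, 0.9, 1.0, 0.7, 1.2])   # placeholder errors
errors_pls = np.array([1.0, 1.3, 0.9, 1.2, 0.8, 1.4])
stat, p_value = wilcoxon(errors_acls, errors_pls)
print(f"Wilcoxon statistic={stat}, p={p_value:.3f}")
```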

Optimal Estimation of Electrode Gap During Vacuum Arc Remelting

Metallurgical and Materials Transactions B

Williamson, Rodney L.; Melgaard, David K.

Electrode gap is a very important parameter for the safe and successful control of vacuum arc remelting (VAR), a process used extensively throughout the specialty metals industry for the production of nickel-base alloys and aerospace titanium alloys. Optimal estimation theory has been applied to the problem of estimating electrode gap, and a filter has been developed based on a model of the gap dynamics. Taking into account the uncertainty in the process inputs and noise in the measured process variables, the filter provides corrected estimates of electrode gap that have error variances two to three orders of magnitude less than estimates based solely on measurements for the sample times of interest. This is demonstrated through simulations and confirmed by tests on the VAR furnace at Sandia National Laboratories. Furthermore, the estimates are inherently stable against common process disturbances that affect electrode gap measurement and melting rate. This is not only important for preventing (or minimizing) the formation of solidification defects during VAR of nickel-base alloys, but also for high current processing of titanium alloys, where loss of gap control can lead to a catastrophic, explosive failure of the process.
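
A scalar Kalman-filter sketch of the optimal-estimation idea, assuming simple gap dynamics (the gap grows with electrode burn-off and shrinks as the electrode is driven down) and a noisy gap-related measurement; the noise covariances and rates are illustrative, not the furnace model used in the paper.

```python
# Scalar Kalman filter for electrode gap (illustrative sketch).
import numpy as np

def kalman_gap(measurements, drive_speed, burnoff_rate, dt=1.0,
               q=1e-4, r=1e-1, g0=1.5, p0=1.0):
    g, p, estimates = g0, p0, []
    for z, v in zip(measurements, drive_speed):
        # Predict: gap change = burn-off minus electrode advance.
        g = g + dt * (burnoff_rate - v)
        p = p + q
        # Update with the noisy gap measurement z.
        k = p / (p + r)
        g = g + k * (z - g)
        p = (1.0 - k) * p
        estimates.append(g)
    return np.array(estimates)
```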

Calibration-free electrical conductivity measurements for highly conductive slags

Metallurgical Transactions B

Van Den Avyle, James A.; Melgaard, David K.

This research involves the measurement of the electrical conductivity (κ) of the ESR (electroslag remelting) slag (60 wt.% CaF₂ - 20 wt.% CaO - 20 wt.% Al₂O₃) used in the decontamination of radioactive stainless steel. The electrical conductivity is measured with an improved high-accuracy height-differential technique that requires no calibration. This method consists of making continuous AC impedance measurements over several successive depth increments of the coaxial cylindrical electrodes in the ESR slag. The electrical conductivity is then calculated from the slope of the plot of inverse impedance versus the depth of the electrodes in the slag. The improvements on the existing technique include an enlarged electrochemical cell geometry and the capability of measuring high-precision depth increments and the associated impedances. These improvements allow this technique to be used for measuring the electrical conductivity of highly conductive slags such as the ESR slag. The volatilization rate and the volatile species of the ESR slag, measured through thermogravimetric (TG) and mass spectrometry analysis, respectively, reveal that the ESR slag composition remains essentially the same throughout the electrical conductivity experiments.
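
A sketch of the calibration-free calculation, assuming AC impedance Z measured at successive electrode immersion depths: the conductivity is proportional to the slope of 1/Z versus depth, with a geometric cell factor (for coaxial cylinders, ln(b/a)/2π) converting the slope to κ. The `cell_factor` value here is a placeholder.

```python
# Conductivity from the slope of 1/Z vs. immersion depth (illustrative sketch).
import numpy as np

def conductivity_from_depths(depths_m, impedances_ohm, cell_factor=1.0):
    """Fit a line to 1/Z vs. depth; scale the slope by the cell geometry."""
    inv_z = 1.0 / np.asarray(impedances_ohm)
    slope, _ = np.polyfit(np.asarray(depths_m), inv_z, 1)
    return cell_factor * slope    # S/m once the geometric factor is applied
```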

New prediction-augmented classical least squares (PACLS) methods: Application to unmodeled interferents

Applied Spectroscopy

Haaland, David M.; Melgaard, David K.

A significant improvement to the classical least squares (CLS) multivariate analysis method has been developed. The new method, called prediction-augmented classical least squares (PACLS), removes the restriction for CLS that all interfering spectral species must be known and their concentrations included during the calibration. The authors demonstrate that PACLS can correct inadequate CLS models if spectral components left out of the calibration can be identified and if their spectral shapes can be derived and added during a PACLS prediction step. The new PACLS method is demonstrated for a system of dilute aqueous solutions containing urea, creatinine, and NaCl analytes with and without temperature variations. The authors demonstrate that if CLS calibrations are performed using only a single analyte's concentration, then there is little, if any, prediction ability. However, if pure-component spectra of analytes left out of the calibration are independently obtained and added during PACLS prediction, then the CLS prediction ability is corrected and predictions become comparable to those of a CLS calibration that contains all analyte concentrations. It is also demonstrated that constant-temperature CLS models can be used to predict variable-temperature data by employing the PACLS method augmented by the spectral shape of a temperature change of the water solvent. In this case, PACLS can also be used to predict sample temperature with a standard error of prediction of 0.07 °C even though the calibration data did not contain temperature variations. The PACLS method is also shown to be capable of modeling system drift to maintain a calibration in the presence of spectrometer drift.
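
A minimal sketch of the PACLS prediction step, assuming a CLS calibration matrix of pure-component spectra: independently obtained spectral shapes (e.g., an unmodeled interferent or the water temperature-difference spectrum) are appended only at prediction time. Inputs and names are illustrative.

```python
# PACLS: augment a CLS model at prediction time (illustrative sketch).
import numpy as np

def pacls_predict(spectrum, K_calibrated, added_shapes):
    """Fit with calibrated pure spectra plus prediction-time shapes."""
    K = np.vstack([K_calibrated, added_shapes])     # add unmodeled shapes
    coefs, *_ = np.linalg.lstsq(K.T, spectrum, rcond=None)
    # Only the first rows correspond to the calibrated analytes.
    return coefs[: K_calibrated.shape[0]]
```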
