Defects in manufactured parts can have disastrous consequences, particularly for the high-consequence parts developed at Sandia. Flaws in as-built parts can be identified by nondestructive means, such as X-ray computed tomography (CT). However, because of artifacts and complex imagery, the task of analyzing CT images falls to humans. Human analysis is inherently unreproducible and unscalable, and it can easily miss subtle flaws. We hypothesized that deep learning methods could improve defect identification, increase the number of parts that can be analyzed effectively, and do so reproducibly. We pursued two methods: 1) generating a defect-free version of a scan and looking for differences (PandaNet), and 2) using pre-trained models to develop a statistical model of normality (the Feature-based Anomaly Detection System, or FADS). Both PandaNet and FADS provide good results, scale well, and can identify anomalies in imagery. In particular, FADS enables zero-shot (training-free) identification of defects at minimal computational cost and expert time. It requires significantly less computation than prior approaches while achieving comparable results. The core concept of FADS has also shown utility beyond anomaly detection by providing feature extraction for downstream tasks.
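To make the FADS concept concrete, the following is a minimal sketch assuming a frozen, pre-trained ResNet-18 backbone and Mahalanobis-distance scoring against a Gaussian fit to features of defect-free scans; the backbone choice, the use of pooled per-image features, and the scoring rule are illustrative assumptions rather than the published FADS details.

```python
# Sketch of a FADS-style, training-free detector: features from a frozen
# pre-trained backbone define a statistical model of "normal" imagery.
# Backbone choice and Mahalanobis scoring are assumptions for illustration.
import torch
import torchvision.models as models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 512-d pooled features
backbone.eval()

@torch.no_grad()
def features(batch):                # batch: (N, 3, 224, 224); replicate
    return backbone(batch)          # grayscale CT to 3 channels upstream

@torch.no_grad()
def fit_normal_model(defect_free_batch):
    """Gaussian over features of defect-free scans; no training needed."""
    f = features(defect_free_batch)
    mu = f.mean(dim=0)
    cov = torch.cov(f.T) + 1e-3 * torch.eye(f.shape[1])  # regularized
    return mu, torch.linalg.inv(cov)

@torch.no_grad()
def anomaly_score(batch, mu, cov_inv):
    """Mahalanobis distance of each image's features from the nominal model."""
    d = features(batch) - mu
    return torch.einsum('ni,ij,nj->n', d, cov_inv, d).sqrt()

nominal = torch.rand(16, 3, 224, 224)     # stand-in for defect-free slices
mu, cov_inv = fit_normal_model(nominal)
print(anomaly_score(torch.rand(2, 3, 224, 224), mu, cov_inv))
```

Because the backbone is never fine-tuned, the only cost is a forward pass per image plus one covariance estimate, which is what makes this kind of approach zero-shot.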
We propose a nonlinear manifold learning technique based on deep convolutional autoencoders that is appropriate for model order reduction of physical systems in complex geometries. Convolutional neural networks have proven highly advantageous for compressing data arising from systems exhibiting a slowly decaying Kolmogorov n-width. However, these networks are restricted to data on structured meshes, whereas unstructured meshes are often required to analyze real systems with complex geometry. Our custom graph convolution operators, built from the differential operators available for a given spatial discretization, effectively extend the application space of deep convolutional autoencoders to systems with arbitrarily complex geometry that are typically discretized using unstructured meshes. Because the convolution operators are constructed from the spatial derivative operators of the underlying discretization, the method is particularly well suited to data arising from the solution of partial differential equations. We demonstrate the method on examples from heat transfer and fluid mechanics and show better than an order of magnitude improvement in accuracy over linear methods.
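As a sketch of how discretization-supplied derivative operators can drive a graph convolution, the layer below applies precomputed sparse operators (random stand-ins for d/dx and d/dy here) to nodal fields and mixes the results with a learned weight; the layer structure and the stand-in operators are illustrative, not the paper's exact architecture.

```python
# Sketch of a derivative-operator-based graph convolution over an
# unstructured mesh. The sparse operators would come from the spatial
# discretization; random stand-ins are used here for illustration.
import torch
import torch.nn as nn

class DerivativeGraphConv(nn.Module):
    def __init__(self, ops, in_ch, out_ch):
        super().__init__()
        self.ops = ops   # sparse (n_nodes, n_nodes) derivative operators
        # Learned mixing of the identity plus each operator's output.
        self.mix = nn.Linear((len(ops) + 1) * in_ch, out_ch)

    def forward(self, x):                       # x: (n_nodes, in_ch)
        parts = [x] + [torch.sparse.mm(D, x) for D in self.ops]
        return torch.relu(self.mix(torch.cat(parts, dim=1)))

n = 500                                         # nodes in the mesh
idx = torch.randint(0, n, (2, 4 * n))           # stand-in sparsity pattern
Dx = torch.sparse_coo_tensor(idx, torch.randn(4 * n), (n, n)).coalesce()
Dy = torch.sparse_coo_tensor(idx, torch.randn(4 * n), (n, n)).coalesce()

layer = DerivativeGraphConv([Dx, Dy], in_ch=1, out_ch=16)
u = torch.randn(n, 1)                           # nodal field, e.g. temperature
print(layer(u).shape)                           # torch.Size([500, 16])
```

Stacking such layers, with pooling between them, would yield an encoder-decoder that plays the role of a convolutional autoencoder on an arbitrary mesh.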
Automatic detection of defects in as-built parts is a challenging task due to the large number of potential manufacturing flaws that can occur. X-ray computed tomography (CT) can produce high-quality images of the parts in a nondestructive manner. The images, however, are grayscale, often contain artifacts and noise, and require expert interpretation to spot flaws. For anomaly detection to be reproducible and cost-effective, an automated method is needed to find potential defects. Traditional supervised machine learning techniques fail in the high-reliability parts regime due to large class imbalance: there are often many more examples of well-built parts than of defective ones. This, coupled with the expense of obtaining labeled data, motivates research into unsupervised techniques. In particular, we built upon the AnoGAN and f-AnoGAN work of T. Schlegl et al. to create a new architecture called PandaNet. PandaNet learns an encoding function to a latent space of defect-free components and a decoding function to reconstruct the original image. We restrict the training data to defect-free components so that the encode-decode operation cannot learn to reproduce defects well. The difference between the reconstruction and the original image highlights anomalies that can be used for defect detection. In our work with CT images, PandaNet successfully identifies cracks, voids, and high-Z inclusions. Beyond CT, we demonstrate that PandaNet works successfully, with little to no modification, on a variety of common 2-D defect datasets in both color and grayscale.
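The encode-decode idea can be sketched with a generic convolutional autoencoder trained only on defect-free images, with the anomaly map given by the absolute reconstruction error; the network below is a minimal illustration, not the PandaNet architecture itself.

```python
# Sketch of reconstruction-based anomaly detection: an autoencoder
# trained only on defect-free images cannot reproduce defects well,
# so large reconstruction error flags potential flaws.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (N, 1, H, W), H, W div. by 4
        return self.dec(self.enc(x))

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(defect_free_batch):         # pixel values scaled to [0, 1]
    loss = F.mse_loss(model(defect_free_batch), defect_free_batch)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def anomaly_map(image):                    # image: (1, 1, H, W)
    return (image - model(image)).abs()    # large values flag defects
```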
Deep learning segmentation models are known to be sensitive to the scale, contrast, and distribution of pixel values when applied to computed tomography (CT) images. For material samples, scans are often obtained from a variety of scanning equipment and resolutions, resulting in domain shift. The ability of segmentation models to generalize to examples from these shifted domains depends on how well the distribution of the training data represents the overall distribution of the target data. We present a method to overcome the challenges posed by domain shift. Our results indicate that a deep learning model trained on one domain can accurately segment similar materials at different resolutions when its binary predictions are refined using uncertainty quantification (UQ). We apply this technique to a set of unlabeled CT scans of woven composite materials, with clear qualitative improvement of the binary segmentations over the original deep learning predictions. In contrast to prior work, our technique enables refined segmentations without the expense of the additional training time and parameters associated with deep learning models used to address domain shift.
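One plausible realization of the refinement step is sketched below, assuming Monte Carlo dropout as the uncertainty estimate and a local majority vote as the refinement rule; the segmentation model, thresholds, and neighborhood size are assumptions, since the abstract does not fix these details.

```python
# Sketch of UQ-driven refinement of binary segmentations. `model` is
# any dropout-equipped segmentation network returning per-pixel logits
# of shape (N, 1, H, W); MC dropout and majority voting are assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def mc_dropout_stats(model, image, passes=20):
    """Mean foreground probability and its std. dev. under dropout."""
    model.train()                 # keep dropout layers active at inference
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(passes)])
    return probs.mean(dim=0), probs.std(dim=0)

def refine(mean_prob, std, thresh=0.5, uncertain=0.15, k=5):
    """Replace uncertain pixels with the majority label of their k x k
    neighborhood, leaving confident pixels unchanged."""
    binary = (mean_prob > thresh).float()
    local = F.avg_pool2d(binary, k, stride=1, padding=k // 2)  # local mean
    return torch.where(std > uncertain, (local > 0.5).float(), binary)
```

Note that no new parameters are learned and no retraining occurs, consistent with the abstract's claim of refining segmentations without the cost of additional deep learning models.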