Additively manufactured metamaterials such as lattices offer unique physical properties, including high specific strength and stiffness. However, additively manufactured parts, including lattices, exhibit higher variability in their mechanical properties than wrought materials, placing more stringent demands on inspection, part quality verification, and product qualification. Previous research on anomaly detection has primarily focused on in-situ monitoring of the additive manufacturing process or post-process (ex-situ) x-ray computed tomography. In this work, we show that convolutional neural networks (CNNs) can directly predict the energy required to compressively deform gyroid and octet truss metamaterials using only optical images. Exploiting the tiled nature of engineered lattices, the relatively small data set (43 to 48 lattices) can be augmented by systematically subdividing each original image into many smaller sub-images. At test time, the CNN predictions from these sub-images are combined using an ensemble-like technique to estimate the deformation work of the entire lattice. This approach provides a fast and inexpensive screening tool for predicting the properties of 3D-printed lattices. Importantly, this artificial intelligence strategy goes beyond ‘inspection’, since it accurately estimates product performance metrics rather than merely detecting defects.
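The tiling-plus-ensemble idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the tile size, stride, and averaging rule are hypothetical choices, and the trained CNN is replaced by a stand-in function.

```python
import numpy as np

def tile_image(image, tile, stride):
    """Systematically subdivide an image into square sub-images.
    `tile` (side length) and `stride` (overlap control) are
    illustrative parameters, not values from the paper."""
    h, w = image.shape
    tiles = []
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            tiles.append(image[y:y + tile, x:x + tile])
    return tiles

def ensemble_predict(tiles, predict):
    """Combine per-tile predictions by simple averaging, one possible
    ensemble-like rule for estimating a whole-lattice property."""
    return float(np.mean([predict(t) for t in tiles]))

# Stand-in for a trained CNN regressor: here just the mean pixel value.
fake_cnn = lambda t: t.mean()

image = np.arange(64, dtype=float).reshape(8, 8)
tiles = tile_image(image, tile=4, stride=4)   # 4 non-overlapping sub-images
work_estimate = ensemble_predict(tiles, fake_cnn)
```

In practice each sub-image would be scored by the trained CNN, and the per-tile estimates pooled into a single deformation-work prediction for the full lattice.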
This paper reports on a near-zero-power inertial wakeup sensor system that supports digital weighting of inputs and protects against false positives caused by mechanical shocks. It improves upon existing work by combining the selectivity and sensitivity (Q-amplification) of resonant MEMS sensors with the flexibility of digital signal processing while consuming less than 10 nW. The target application is unattended sensing for perimeter monitoring and machinery health monitoring, where the extended battery life afforded by the low power consumption eliminates the need for power cables. For machinery health monitoring, the signals of interest are stationary but may contain spurious mechanical shocks.
Automatic detection of defects in as-built parts is a challenging task due to the large number of potential manufacturing flaws that can occur. X-ray computed tomography (CT) can produce high-quality images of the parts in a non-destructive manner. The images, however, are grayscale valued, often contain artifacts and noise, and require expert interpretation to spot flaws. For anomaly detection to be reproducible and cost-effective, an automated method is needed to find potential defects. Traditional supervised machine learning techniques fail in the high-reliability parts regime due to large class imbalance: there are often many more examples of well-built parts than of defective parts. This, coupled with the expense of obtaining labeled data, motivates research into unsupervised techniques. In particular, we build upon the AnoGAN and f-AnoGAN work of T. Schlegl et al. to create a new architecture called PandaNet. PandaNet learns an encoding function to a latent space of defect-free components and a decoding function to reconstruct the original image. We restrict the training data to defect-free components so that the encode-decode operation cannot learn to reproduce defects well. The difference between the reconstruction and the original image highlights anomalies that can be used for defect detection. In our work with CT images, PandaNet successfully identifies cracks, voids, and high-Z inclusions. Beyond CT, we demonstrate PandaNet working successfully, with little to no modification, on a variety of common 2-D defect datasets in both color and grayscale.
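The reconstruction-difference principle behind this family of models can be illustrated with a toy example. The sketch below is not PandaNet: the learned encoder/decoder is replaced by a crude local-mean "reconstruction" that, like a model trained only on defect-free parts, cannot reproduce a localized anomaly, so the residual map lights up at the defect.

```python
import numpy as np

def reconstruct(image):
    """Stand-in for a learned encode-decode pass: a 3x3 mean filter.
    A real model would be trained on defect-free scans; this placeholder
    merely fails to reproduce sharp, localized anomalies."""
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 3, x:x + 3].mean()
    return out

def anomaly_map(image, threshold):
    """Pixelwise reconstruction error; large values flag candidate defects.
    `threshold` is an illustrative tuning parameter."""
    err = np.abs(image - reconstruct(image))
    return err, err > threshold

# Defect-free background with one bright "inclusion" pixel.
img = np.zeros((8, 8))
img[4, 4] = 10.0
err, mask = anomaly_map(img, threshold=5.0)  # mask is True only at the defect
```

The same residual-thresholding step applies whether the reconstruction comes from this toy filter or from a trained autoencoder.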
This report details the data collected from plate impact experiments performed at the Ballistics Launch Tube (BLT) in May 2019. The experiments consisted of 62 shots of copper projectiles (cylindrical and ogive) impacting 1/4", 1/2", and 3/4" aluminum plates at varying velocities. An additional 14 shots of copper cylinders on a 1" steel plate were fired at varying velocities as a Taylor anvil test. We recorded videos of the impact events and resulting fragmentation using a multi-view system of three high-speed cameras. The purpose of these tests was to collect high-quality data from the multi-view camera system and create digital representations of the deformed target, projectile, and fragments. These data are intended to serve as a validation data set for high-fidelity simulation codes. This report covers the experimental setup, diagnostics, and collected data. Data processing and analysis are underway and will be discussed in a separate report.
Deep learning segmentation models are known to be sensitive to the scale, contrast, and distribution of pixel values when applied to Computed Tomography (CT) images. For material samples, scans are often obtained from a variety of scanning equipment and at different resolutions, resulting in domain shift. The ability of segmentation models to generalize to examples from these shifted domains depends on how well the distribution of the training data represents the overall distribution of the target data. We present a method to overcome the challenges posed by domain shift. Our results indicate that we can leverage a deep learning model trained on one domain to accurately segment similar materials at different resolutions by refining binary predictions using uncertainty quantification (UQ). We apply this technique to a set of unlabeled CT scans of woven composite materials, with clear qualitative improvement of the binary segmentations over the original deep learning predictions. In contrast to prior work, our technique enables refined segmentations without the additional training time and parameters associated with deep learning models built to address domain shift.
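One way such a UQ-driven refinement can work is sketched below. This is a generic illustration under assumed choices, not the paper's method: per-pixel uncertainty is taken as the binary predictive entropy of the model's output probabilities, and pixels above an uncertainty threshold `tau` are re-labeled by a 3x3 neighborhood majority vote. Both the entropy score and the voting rule are hypothetical.

```python
import numpy as np

def entropy(p):
    """Binary predictive entropy (in nats) as a simple per-pixel
    uncertainty score for a probability map `p`."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def refine(prob, tau):
    """Threshold `prob` into a binary segmentation, then re-label pixels
    whose uncertainty exceeds `tau` by majority vote of their 3x3
    neighborhood. `tau` and the vote are illustrative choices."""
    seg = prob > 0.5
    uncertain = entropy(prob) > tau
    padded = np.pad(seg, 1, mode="edge")
    out = seg.copy()
    h, w = seg.shape
    for y in range(h):
        for x in range(w):
            if uncertain[y, x]:
                out[y, x] = padded[y:y + 3, x:x + 3].mean() > 0.5
    return out

prob = np.full((5, 5), 0.9)       # confident "material" predictions...
prob[2, 2] = 0.45                 # ...except one uncertain pixel
refined = refine(prob, tau=0.5)   # the uncertain pixel joins its neighbors
```

The key point the sketch shares with the paper is that the refinement operates on the trained model's existing outputs, adding no new learned parameters or retraining.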