Deep learning segmentation models are known to be sensitive to the scale, contrast, and distribution of pixel values when applied to Computed Tomography (CT) images. For material samples, scans are often acquired with a variety of scanning equipment and at different resolutions, resulting in domain shift. The ability of segmentation models to generalize to examples from these shifted domains depends on how well the distribution of the training data represents the overall distribution of the target data. We present a method to overcome the challenges posed by domain shift. Our results indicate that a deep learning model trained on one domain can be leveraged to accurately segment similar materials at different resolutions by refining binary predictions using uncertainty quantification (UQ). We apply this technique to a set of unlabeled CT scans of woven composite materials and observe clear qualitative improvement of the binary segmentations over the original deep learning predictions. In contrast to prior work, our technique produces refined segmentations without the additional training time and parameters required by deep learning models designed to address domain shift.
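As an illustration of the kind of UQ-based refinement described above, the sketch below estimates per-pixel uncertainty from repeated stochastic forward passes (e.g., Monte Carlo dropout) and relabels unreliable pixels by a local majority vote over confident neighbors. This is a minimal sketch under assumed choices, not the paper's actual procedure: the function name `refine_segmentation`, the use of the predictive standard deviation as the uncertainty measure, the threshold of 0.2, and the 5x5 neighborhood are all illustrative assumptions.

```python
# Minimal sketch of UQ-based segmentation refinement. Assumes a stack of
# foreground-probability maps from T stochastic forward passes of the
# trained model is already available. All parameter values are
# illustrative assumptions, not the paper's settings.
import numpy as np
from scipy import ndimage

def refine_segmentation(prob_samples: np.ndarray,
                        uncertainty_threshold: float = 0.2,
                        neighborhood: int = 5) -> np.ndarray:
    """Refine a binary segmentation using per-pixel uncertainty.

    prob_samples: (T, H, W) array of foreground probabilities from T
        stochastic forward passes (e.g., with dropout active at test time).
    """
    mean_prob = prob_samples.mean(axis=0)    # predictive mean per pixel
    uncertainty = prob_samples.std(axis=0)   # predictive std as a UQ proxy
    binary = mean_prob > 0.5                 # initial binary prediction

    # Pixels whose uncertainty exceeds the threshold are treated as unreliable.
    uncertain = uncertainty > uncertainty_threshold

    # Count confident foreground votes and confident pixels in a local window.
    confident_fg = (binary & ~uncertain).astype(float)
    confident_px = (~uncertain).astype(float)
    kernel = np.ones((neighborhood, neighborhood))
    fg_votes = ndimage.convolve(confident_fg, kernel, mode="nearest")
    total_votes = ndimage.convolve(confident_px, kernel, mode="nearest")

    # Relabel unreliable pixels by the majority label of confident neighbors;
    # pixels with no confident neighbors keep their original label.
    refined = binary.copy()
    has_votes = uncertain & (total_votes > 0)
    refined[has_votes] = fg_votes[has_votes] / total_votes[has_votes] > 0.5
    return refined
```

In practice, `prob_samples` could be produced by running the trained network T times on the same scan with dropout enabled at inference time; any other sampling scheme that yields a distribution of per-pixel probabilities would work with the same refinement step.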