03 Jun 2022
Status: this preprint is currently under review for the journal AMT.

Neural network processing of holographic images

John Stephen Schreck1, Gabrielle Gantos1, Matthew Hayman2, Aaron Bansemer3, and David John Gagne1
  • 1National Center for Atmospheric Research (NCAR), Computational and Information Systems Lab, Boulder, CO, USA
  • 2National Center for Atmospheric Research (NCAR), Earth Observing Lab, Boulder, CO, USA
  • 3National Center for Atmospheric Research (NCAR), Mesoscale and Microscale Meteorology Lab, Boulder, CO, USA

Abstract. HOLODEC, an airborne cloud particle imager, captures holographic images of a fixed volume of cloud to characterize the types and sizes of cloud particles, such as water droplets and ice crystals. Cloud particle properties include position, diameter, and shape. In this work we evaluate the potential for processing HOLODEC data by leveraging a combination of GPU hardware and machine learning, with the eventual goal of improving HOLODEC processing speed and performance. We present a hologram processing algorithm, HolodecML, that uses a neural network segmentation model together with computational parallelization to achieve these goals. HolodecML is trained on synthetically generated holograms based on a model of the instrument and predicts masks around particles found within reconstructed images. From these masks, the position and size of the detected particles can be characterized in three dimensions. To successfully process real holograms, we find that we must apply a series of image-corrupting transformations and noise to the synthetic images used in training.
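The image-corruption idea described above can be sketched as a simple augmentation step applied to each synthetic hologram before training. The code below is a minimal illustration, not the paper's actual pipeline: the function name `corrupt_hologram` and the specific corruptions (a separable box blur plus additive Gaussian noise) are hypothetical stand-ins for the transformations the authors apply.

```python
import numpy as np

def corrupt_hologram(image, noise_std=0.05, blur_passes=1, rng=None):
    """Degrade a synthetic hologram so it better resembles real detector output.

    `image` is a 2-D float array scaled to [0, 1]. The corruption types and
    parameters here are illustrative only, not HolodecML's exact recipe.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.astype(float)
    # Cheap separable box blur to soften idealized diffraction fringes.
    for _ in range(blur_passes):
        out = (np.roll(out, 1, axis=0) + out + np.roll(out, -1, axis=0)) / 3.0
        out = (np.roll(out, 1, axis=1) + out + np.roll(out, -1, axis=1)) / 3.0
    # Additive Gaussian noise mimics sensor/readout noise in the real probe.
    out = out + rng.normal(0.0, noise_std, size=out.shape)
    return np.clip(out, 0.0, 1.0)
```

In a training loop this transform would be applied on the fly to each synthetic image, so the segmentation model never sees an ideal, noise-free hologram.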

In this evaluation, HolodecML matched the standard processing method in position and size estimation performance but improved particle detection by nearly 20 % on several thousand manually labeled HOLODEC images. However, the detection improvement occurred only when image corruption was applied to the simulated images during training, thereby mimicking non-ideal conditions in the actual probe. The trained model also learned to differentiate artifacts and other impurities in the HOLODEC images from the particles, even though no such objects were present in the training data set; by contrast, the standard processing method struggled to separate particles from artifacts. HolodecML also leverages GPUs and parallel computing, enabling large processing speed gains over serial, CPU-only evaluation. Our results demonstrate that the machine-learning-based framework may offer a path to both improving and accelerating hologram processing. The novelty of the training approach, which leverages noise as a means of parameterizing non-ideal aspects of the HOLODEC detector, could be applied in other domains where the theoretical model cannot fully describe the real-world operation of the instrument and accurate truth data required for supervised learning cannot be obtained from real-world observations.
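The parallel-processing gains mentioned above come from the fact that hologram reconstruction evaluates many independent depth planes, which can be fanned out across workers (or batched along a tensor axis on a GPU). The sketch below illustrates only that fan-out pattern; `reconstruct_plane` is a hypothetical placeholder, since real reconstruction would propagate the hologram's angular spectrum to each depth.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def reconstruct_plane(args):
    hologram, z = args
    # Placeholder: a real implementation would propagate the hologram
    # field to depth z. Here we return a dummy slice labeled by depth.
    return z, np.full_like(hologram, z, dtype=float)

def reconstruct_volume(hologram, depths, workers=4):
    # Each depth plane is independent of the others, so the planes can
    # be reconstructed in parallel; on a GPU the same structure maps to
    # batching planes along one axis of a tensor.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(reconstruct_plane,
                                ((hologram, z) for z in depths)))
    return [results[z] for z in depths]
```

The same decomposition applies whether the per-plane work is optical propagation or neural network inference on the reconstructed slice.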


Status: open (until 08 Jul 2022)




Total article views: 214 (including HTML, PDF, and XML), calculated since 03 Jun 2022.
  • HTML: 175
  • PDF: 32
  • XML: 7
  • Total: 214
  • BibTeX: 2
  • EndNote: 2

Viewed (geographical distribution)

Total article views: 188 (including HTML, PDF, and XML), thereof 188 with geography defined and 0 of unknown origin.
Latest update: 02 Jul 2022
Short summary
We show promising results for a new machine-learning-based paradigm for processing field-acquired cloud droplet holograms. The approach is fast, scalable, and takes advantage of GPUs and other heterogeneous computing platforms. It combines transfer and active learning by using synthetic data for training and a small set of hand-labeled data for refinement and validation. Artificial noise applied during synthetic training enables optimized models for real-world situations.