Volume 15, issue 19
https://doi.org/10.5194/amt-15-5793-2022
© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.
Neural network processing of holographic images
John S. Schreck
CORRESPONDING AUTHOR
Computational and Information Systems Laboratory, National Center for Atmospheric Research (NCAR), Boulder, CO, USA
Gabrielle Gantos
Computational and Information Systems Laboratory, National Center for Atmospheric Research (NCAR), Boulder, CO, USA
Matthew Hayman
CORRESPONDING AUTHOR
Earth Observing Lab, National Center for Atmospheric Research (NCAR), Boulder, CO, USA
Aaron Bansemer
Mesoscale and Microscale Meteorology Laboratory, National Center for Atmospheric Research (NCAR), Boulder, CO, USA
David John Gagne
Computational and Information Systems Laboratory, National Center for Atmospheric Research (NCAR), Boulder, CO, USA
Related authors
Matthew Hayman, Robert A. Stillwell, Adam Karboski, and Scott M. Spuler
EGUsphere, https://doi.org/10.5194/egusphere-2025-3523, 2025
This preprint is open for discussion and under review for Atmospheric Measurement Techniques (AMT).
Short summary
A new processing method for lidar data obtained from rapidly changing laser pulse lengths enables measurement of atmospheric water vapor from the ground up to 6 km. The technique blends all captured data to reveal hidden water vapor structures, especially near the surface. This solution offers continuous, high-resolution insights, key for improving weather forecasts. It showcases how flexible laser technology can enhance atmospheric observation.
Luke Colberg, Kevin S. Repasky, Matthew Hayman, Robert A. Stillwell, and Scott M. Spuler
EGUsphere, https://doi.org/10.5194/egusphere-2025-1989, 2025
This preprint is open for discussion and under review for Atmospheric Measurement Techniques (AMT).
Short summary
Two methods were developed to measure the mixed layer height, an important variable for weather forecasting and air quality studies. An aerosol-based method and a thermodynamic method were tested using a lidar system that can measure vertical profiles of aerosols, humidity, and temperature. Each method performed best under different conditions. Together, they provide a path toward more reliable mixed layer height monitoring with a single instrument.
Robert A. Stillwell, Adam Karboski, Matthew Hayman, and Scott M. Spuler
EGUsphere, https://doi.org/10.5194/egusphere-2025-1288, 2025
Short summary
A method is introduced to expand the observational capability of a diode-laser-based lidar system. This method allows the lidar transmitter to change the laser pulse characteristics from one shot to the next. We use this capability to lower the minimum altitude of observation and to observe clouds with higher range resolution.
Nina Maherndl, Manuel Moser, Imke Schirmacher, Aaron Bansemer, Johannes Lucke, Christiane Voigt, and Maximilian Maahn
Atmos. Chem. Phys., 24, 13935–13960, https://doi.org/10.5194/acp-24-13935-2024, 2024
Short summary
It is not clear why ice crystals in clouds occur in clusters. Here, airborne measurements of clouds in mid-latitudes and high latitudes are used to study the spatial variability of ice. Further, we investigate the influence of riming, which occurs when liquid droplets freeze onto ice crystals. We find that riming enhances the occurrence of ice clusters. In the Arctic, riming leads to ice clustering at spatial scales of 3–5 km. This is due to updrafts and not higher amounts of liquid water.
Sue Ellen Haupt, Branko Kosović, Larry K. Berg, Colleen M. Kaul, Matthew Churchfield, Jeffrey Mirocha, Dries Allaerts, Thomas Brummet, Shannon Davis, Amy DeCastro, Susan Dettling, Caroline Draxl, David John Gagne, Patrick Hawbecker, Pankaj Jha, Timothy Juliano, William Lassman, Eliot Quon, Raj K. Rai, Michael Robinson, William Shaw, and Regis Thedin
Wind Energ. Sci., 8, 1251–1275, https://doi.org/10.5194/wes-8-1251-2023, 2023
Short summary
The Mesoscale to Microscale Coupling team, part of the U.S. Department of Energy Atmosphere to Electrons (A2e) initiative, has studied several important challenges in coupling mesoscale models to microscale models. Lessons learned and best practices are described in the context of the cases studied, with the aim of enabling further deployment of wind energy. The paper also points to code, assessment tools, and data for testing the methods.
Sachin Patade, Deepak Waman, Akash Deshmukh, Ashok Kumar Gupta, Arti Jadav, Vaughan T. J. Phillips, Aaron Bansemer, Jacob Carlin, and Alexander Ryzhkov
Atmos. Chem. Phys., 22, 12055–12075, https://doi.org/10.5194/acp-22-12055-2022, 2022
Short summary
This modeling study focuses on the influence of multiple groups of primary biological aerosol particles, acting as ice nuclei, on cloud properties and precipitation. This was done by implementing a more realistic scheme for biological ice-nucleating particles in the aerosol–cloud model. Results show that biological ice-nucleating particles play a limited role in altering the ice phase and precipitation in deep convective clouds.
Willem J. Marais and Matthew Hayman
Atmos. Meas. Tech., 15, 5159–5180, https://doi.org/10.5194/amt-15-5159-2022, 2022
Short summary
For atmospheric science and weather prediction, it is important to make water vapor measurements in real time. A low-cost lidar instrument, the micropulse differential absorption lidar (MPD), has been developed by Montana State University and the National Center for Atmospheric Research. We developed an advanced signal-processing method to extend the scientific capability of the instrument. With the new method we show that the maximum altitude at which the MPD can make water vapor measurements can be extended up to 8 km.
Scott M. Spuler, Matthew Hayman, Robert A. Stillwell, Joshua Carnes, Todd Bernatsky, and Kevin S. Repasky
Atmos. Meas. Tech., 14, 4593–4616, https://doi.org/10.5194/amt-14-4593-2021, 2021
Short summary
Continuous water vapor and temperature profiles are critically needed for improved understanding of the lower atmosphere and potential advances in weather forecasting skill. To address this observation need, an active remote sensing technology based on a diode-laser-based lidar architecture is being developed. We discuss the details of the lidar architecture and analyze how it addresses a national-scale profiling network's need to provide continuous thermodynamic observations.
Cited articles
Agrawal, S., Barrington, L., Bromberg, C., Burge, J., Gazen, C., and Hickey, J.: Machine learning for precipitation nowcasting from radar images, arXiv [preprint], https://doi.org/10.48550/arXiv.1912.12132, 11 December 2019. a
Albrecht, B., Ghate, V., Mohrmann, J., Wood, R., Zuidema, P., Bretherton, C., Schwartz, C., Eloranta, E., Glienke, S., Donaher, S., Sarkar, M., McGibbon, J., Nugent, A. D., Shaw, R. A., Fugal, J., Minnis, P., Paliknoda, R., Lussier, L., Jensen, J., Vivekanandan, J., Ellis, S., Tsai, P., Rilling, R., Haggerty, J., Campos, T., Stell, M., Reeves, M., Beaton, S., Allison, J., Stossmeister, G., Hall, S., and Schmidt, S.: Cloud System Evolution in the Trades (CSET): Following the evolution of boundary layer cloud systems with the NSF–NCAR GV, B. Am. Meteorol. Soc., 100, 93–121, 2019. a
Berman, M., Triki, A. R., and Blaschko, M. B.: The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 18–22 June 2018, Salt Lake City, Utah, USA, 4413–4421, https://doi.org/10.1109/CVPR.2018.00464, 2018. a
Bernauer, F., Hürkamp, K., Rühm, W., and Tschiersch, J.: Snow event classification with a 2D video disdrometer – A decision tree approach, Atmos. Res., 172, 186–195, 2016. a
Chaurasia, A. and Culurciello, E.: Linknet: Exploiting encoder representations for efficient semantic segmentation, Proceedings of the IEEE Visual Communications and Image Processing (VCIP), 10–13 December 2017, St. Petersburg, FL, USA, IEEE, 1–4, https://doi.org/10.1109/VCIP.2017.8305148, 2017. a
Chen, L.-C., Papandreou, G., Schroff, F., and Adam, H.: Rethinking atrous convolution for semantic image segmentation, arXiv [preprint], https://doi.org/10.48550/arXiv.1706.05587, 17 June 2017. a, b
Chen, L.-C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation, Proceedings of the European Conference on Computer Vision (ECCV), 8–14 September 2018, Munich, Germany, 833–851, https://doi.org/10.1007/978-3-030-01234-2_49, 2018. a, b
Chen, Y., Li, J., Xiao, H., Jin, X., Yan, S., and Feng, J.: Dual path networks, Proceedings of the 31st International Conference on Neural Information Processing Systems, 4–9 December 2017, Long Beach, CA, USA, Adv. Neur. In., 30, 4470–4478, https://dl.acm.org/doi/10.5555/3294996.3295200 (last access: 11 October 2022), 2017. a
Chollet, F.: Xception: Deep learning with depthwise separable convolutions, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21–26 July 2017, Honolulu, Hawaii, USA, 1800–1807, https://doi.org/10.1109/CVPR.2017.195, 2017. a
Computational and Information Systems Laboratory (CISL): Cheyenne: HPE/SGI ICE XA System (NCAR Community Computing), Tech. rep., National Center for Atmospheric Research, https://doi.org/10.5065/D6RX99HX, 2020. a
Desai, N., Liu, Y., Glienke, S., Shaw, R. A., Lu, C., Wang, J., and Gao, S.: Vertical Variation of Turbulent Entrainment Mixing Processes in Marine Stratocumulus Clouds Using High-Resolution Digital Holography, J. Geophys. Res.-Atmos., 126, e2020JD033527, https://doi.org/10.1029/2020JD033527, 2021. a
Fugal, J. P., Shaw, R. A., Saw, E. W., and Sergeyev, A. V.: Airborne digital holographic system for cloud particle measurements, Appl. Optics, 43, 5987–5995, 2004. a
Fugal, J. P., Schulz, T. J., and Shaw, R. A.: Practical methods for automated reconstruction and characterization of particles in digital in-line holograms, Meas. Sci. Technol., 20, 075501, https://doi.org/10.1088/0957-0233/20/7/075501, 2009. a, b, c
Glienke, S., Kostinski, A., Fugal, J., Shaw, R., Borrmann, S., and Stith, J.: Cloud droplets to drizzle: Contribution of transition drops to microphysical and optical properties of marine stratocumulus clouds, Geophys. Res. Lett., 44, 8002–8010, 2017. a
Glienke, S., Kostinski, A. B., Shaw, R. A., Larsen, M. L., Fugal, J. P., Schlenczek, O., and Borrmann, S.: Holographic observations of centimeter-scale nonuniformities within marine stratocumulus clouds, J. Atmos. Sci., 77, 499–512, 2020. a
Grazioli, J., Tuia, D., Monhart, S., Schneebeli, M., Raupach, T., and Berne, A.: Hydrometeor classification from two-dimensional video disdrometer data, Atmos. Meas. Tech., 7, 2869–2882, https://doi.org/10.5194/amt-7-2869-2014, 2014. a
He, K., Zhang, X., Ren, S., and Sun, J.: Deep residual learning for image recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27 June–1 July 2016, Las Vegas, Nevada, USA, 770–778, https://doi.org/10.1109/CVPR.2016.90, 2016a. a
He, K., Zhang, X., Ren, S., and Sun, J.: Deep Residual Learning for Image Recognition, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27 June–1 July 2016, Las Vegas, Nevada, USA, 770–778, https://doi.org/10.1109/CVPR.2016.90, 2016b. a
He, K., Gkioxari, G., Dollár, P., and Girshick, R.: Mask R-CNN, Proceedings of the IEEE International Conference on Computer Vision (ICCV), 22–29 October 2017, Venice, Italy, 2961–2969, https://doi.org/10.1109/ICCV.2017.322, 2017. a
Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H.: MobileNets: Efficient convolutional neural networks for mobile vision applications, arXiv [preprint], https://doi.org/10.48550/arXiv.1704.04861, 17 April 2017. a
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q.: Densely connected convolutional networks, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21–26 July 2017, Honolulu, Hawaii, USA, 4700–4708, https://doi.org/10.1109/CVPR.2017.243, 2017. a
Li, H., Xiong, P., An, J., and Wang, L.: Pyramid attention network for semantic segmentation, arXiv [preprint], https://doi.org/10.48550/arXiv.1805.10180, 25 May 2018. a, b, c
Lin, T.-Y., Dollár, P., Girshick, R., He, K., Hariharan, B., and Belongie, S.: Feature Pyramid Networks for Object Detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21–26 July 2017, Honolulu, Hawaii, USA, 936–944, https://doi.org/10.1109/CVPR.2017.106, 2017a. a
Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P.: Focal loss for dense object detection, Proceedings of the IEEE International Conference on Computer Vision (ICCV), 22–29 October 2017, Venice, Italy, 2999–3007, https://doi.org/10.1109/ICCV.2017.324, 2017b. a
Ravuri, S., Lenc, K., Willson, M., Kangin, D., Lam, R., Mirowski, P., Fitzsimons, M., Athanassiadou, M., Kashem, S., Madge, S., and Prudden, R.: Skillful Precipitation Nowcasting using Deep Generative Models of Radar, Nature, 597, 672–677, https://doi.org/10.1038/s41586-021-03854-z, 2021. a
Redmon, J., Divvala, S., Girshick, R., and Farhadi, A.: You only look once: Unified, real-time object detection, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 26 June–1 July 2016, Las Vegas, Nevada, USA, 779–788, https://doi.org/10.1109/CVPR.2016.91, 2016. a
Ren, S., He, K., Girshick, R., and Sun, J.: Faster R-CNN: Towards real-time object detection with region proposal networks, Adv. Neur. In., 28, 91–99, 2015. a
Ronneberger, O., Fischer, P., and Brox, T.: U-Net: Convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention, 5–9 October 2015, Munich, Germany, Springer, 234–241, https://doi.org/10.1007/978-3-319-24574-4_28, 2015. a, b, c
Rumelhart, D. E., Hinton, G. E., and Williams, R. J.: Learning representations by back-propagating errors, Nature, 323, 533–536, 1986. a
Salehi, S. S. M., Erdogmus, D., and Gholipour, A.: Tversky loss function for image segmentation using 3D fully convolutional deep networks, International workshop on machine learning in medical imaging, 10 September 2017, Quebec City, Quebec, Canada, Springer, 379–387, https://doi.org/10.1007/978-3-319-67389-9_44, 2017. a
Schreck, J. S. and Gagne, D. J.: Earth Computing Hyperparameter Optimization, GitHub [code], https://github.com/NCAR/echo-opt (last access: 11 October 2022), 2021. a
Schreck, J. S., Gantos, G., Hayman, M., Bansemer, A., and Gagne, D. J.: Data sets used in “Neural network processing of holographic images”, Zenodo [data set], https://doi.org/10.5281/zenodo.6347222, 2022a. a, b
Schreck, J. S., Hayman, M., Gantos, G., Bansemer, A., and Gagne, D. J.: NCAR/holodec-ml: v0.1 (v0.1), Zenodo [code], https://doi.org/10.5281/zenodo.7186527, 2022b. a
Sha, Y., Gagne II, D. J., West, G., and Stull, R.: Deep-learning-based gridded downscaling of surface meteorological variables in complex terrain. Part I: Daily maximum and minimum 2-m temperature, J. Appl. Meteorol. Clim., 59, 2057–2073, 2020. a
Shao, S., Mallery, K., Kumar, S. S., and Hong, J.: Machine learning holography for 3D particle field imaging, Opt. Express, 28, 2987–2999, https://doi.org/10.1364/OE.379480, 2020. a, b, c, d
Shaw, R.: Holographic Detector for Clouds (HOLODEC) particle-by-particle data, Version 1.0, UCAR/NCAR – Earth Observing Laboratory, https://doi.org/10.26023/EVRR-1K5Q-350V, 2021. a
Shelhamer, E., Long, J., and Darrell, T.: Fully Convolutional Networks for Semantic Segmentation, IEEE T. Pattern Anal., 39, 640–651, https://doi.org/10.1109/TPAMI.2016.2572683, 2017. a
Shimobaba, T., Takahashi, T., Yamamoto, Y., Endo, Y., Shiraki, A., Nishitsuji, T., Hoshikawa, N., Kakue, T., and Ito, T.: Digital holographic particle volume reconstruction using a deep neural network, Appl. Optics, 58, 1900–1906, https://doi.org/10.1364/AO.58.001900, 2019. a, b
Simonyan, K. and Zisserman, A.: Very deep convolutional networks for large-scale image recognition, arXiv [preprint], https://doi.org/10.48550/arXiv.1409.1556, 4 September 2014. a
Spuler, S. M. and Fugal, J.: Design of an in-line, digital holographic imaging system for airborne measurement of clouds, Appl. Optics, 50, 1405–1412, 2011. a
Tan, M. and Le, Q.: EfficientNet: Rethinking model scaling for convolutional neural networks, International Conference on Machine Learning, 9–15 June 2019, Long Beach, CA, USA, PMLR, 6105–6114, http://proceedings.mlr.press/v97/tan19a.html (last access: 11 October 2022), 2019. a
Touloupas, G., Lauber, A., Henneberger, J., Beck, A., and Lucchi, A.: A convolutional neural network for classifying cloud particles recorded by imaging probes, Atmos. Meas. Tech., 13, 2219–2239, https://doi.org/10.5194/amt-13-2219-2020, 2020. a, b
Wilks, D. S.: Statistical Methods in the Atmospheric Sciences, 3rd edn., Academic Press, ISBN 0123850223, 2011. a
Xie, W., Liu, D., Yang, M., Chen, S., Wang, B., Wang, Z., Xia, Y., Liu, Y., Wang, Y., and Zhang, C.: SegCloud: a novel cloud image segmentation model using a deep convolutional neural network for ground-based all-sky-view camera observation, Atmos. Meas. Tech., 13, 1953–1961, https://doi.org/10.5194/amt-13-1953-2020, 2020. a
Yan, W., Zhang, Y., Abbeel, P., and Srinivas, A.: VideoGPT: Video Generation using VQ-VAE and Transformers, arXiv [preprint], https://doi.org/10.48550/arXiv.2104.10157, 20 April 2021. a
Yuan, K., Meng, G., Cheng, D., Bai, J., Xiang, S., and Pan, C.: Efficient cloud detection in remote sensing images using edge-aware segmentation network and easy-to-hard training strategy, Proceedings of the IEEE International Conference on Image Processing (ICIP), 17–20 September 2017, Beijing, China, IEEE, 61–65, https://doi.org/10.1109/ICIP.2017.8296243, 2017. a
Zhang, J., Liu, P., Zhang, F., and Song, Q.: CloudNet: Ground-based cloud classification with deep convolutional neural network, Geophys. Res. Lett., 45, 8665–8672, 2018. a
Zhao, H., Shi, J., Qi, X., Wang, X., and Jia, J.: Pyramid scene parsing network, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 21–26 July 2017, Honolulu, Hawaii, USA, 6230–6239, https://doi.org/10.1109/CVPR.2017.660, 2017. a, b
Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., and Liang, J.: Unet++: A nested u-net architecture for medical image segmentation, Deep learning in medical image analysis and multimodal learning for clinical decision support, Springer, 3–11, https://doi.org/10.1007/978-3-030-00889-5_1, 2018. a, b
Short summary
We show promising results for a new machine-learning-based paradigm for processing field-acquired cloud droplet holograms. The approach is fast and scalable and leverages GPUs and other heterogeneous computing platforms. It combines transfer and active learning, using synthetic data for training and a small set of hand-labeled data for refinement and validation. Artificial noise applied during synthetic training enables models optimized for real-world situations.
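As a rough illustration of the noise-augmentation idea described above (a minimal sketch, not code from the paper or the NCAR/holodec-ml repository; the function name and parameter values are hypothetical), one might corrupt each synthetic hologram with a randomly drawn level of Gaussian noise before it is passed to the network:

```python
# Hypothetical sketch: add random Gaussian noise to synthetic hologram images
# during training so that models transfer better to field-acquired holograms.
# Names and parameter values are illustrative only.
import numpy as np

def add_training_noise(image, noise_sd_range=(0.0, 0.2), rng=None):
    """Return a copy of a synthetic hologram with zero-mean Gaussian noise added.

    The noise standard deviation is drawn uniformly from noise_sd_range
    (expressed as a fraction of the image's own standard deviation), so each
    training example sees a different corruption level.
    """
    rng = rng or np.random.default_rng()
    sd = rng.uniform(*noise_sd_range) * image.std()
    noisy = image + rng.normal(0.0, sd, size=image.shape)
    # Keep the corrupted image within the original dynamic range.
    return np.clip(noisy, image.min(), image.max())

# Example: corrupt a small batch of stand-in "synthetic holograms" before a training step.
rng = np.random.default_rng(0)
batch = rng.random((4, 512, 512))
noisy_batch = np.stack([add_training_noise(img, rng=rng) for img in batch])
print(noisy_batch.shape)  # (4, 512, 512)
```

The uniform noise range above is only a placeholder; an actual training pipeline would choose noise characteristics appropriate to the instrument being emulated.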