15 Feb 2022
Status: a revised version of this preprint is currently under review for the journal AMT.

Segmentation-Based Multi-Pixel Cloud Optical Thickness Retrieval Using a Convolutional Neural Network

Vikas Nataraja1, Sebastian Schmidt1,2, Hong Chen1,2, Takanobu Yamaguchi3,4, Jan Kazil3,4, Graham Feingold3, Kevin Wolf1, and Hironobu Iwabuchi5
  • 1Laboratory for Atmospheric and Space Physics (LASP), University of Colorado, Boulder, CO 80303, USA
  • 2Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, CO 80303, USA
  • 3Cooperative Institute for Research in Environmental Sciences (CIRES), University of Colorado Boulder, CO 80309, USA
  • 4National Oceanic and Atmospheric Administration (NOAA), Chemical Sciences Laboratory, Boulder, CO 80305, USA
  • 5Center for Atmospheric and Oceanic Studies, Graduate School of Science, Tohoku University, Sendai, Miyagi 980-8578, Japan

Abstract. We introduce a new machine learning approach to retrieve cloud optical thickness (COT) fields from visible passive imagery. In contrast to the heritage Independent Pixel Approximation (IPA), our Convolutional Neural Network (CNN) retrieval takes the spatial context of a pixel into account, and thereby reduces artifacts arising from net horizontal photon transfer, commonly known as independent pixel (IP) bias. The CNN maps radiance fields acquired by imaging radiometers at a single wavelength channel to COT fields. It is trained with a low-complexity, and therefore fast, U-Net architecture in which the mapping is implemented as a segmentation problem with 36 COT classes. As a training data set, we use a single radiance channel (600 nm) generated from a 3D radiative transfer model using Large Eddy Simulations (LES) from the Sulu Sea. We evaluate the CNN model under various conditions, varying cloud aspect ratio and morphology, and use cloud morphology metrics to quantify the performance of the retrievals. Additionally, we test the general applicability of the CNN at a new geographic location with LES data from the equatorial Atlantic. Results indicate that the CNN is broadly successful in overcoming the IP bias and outperforms IPA retrievals across all morphologies. In the Atlantic, the CNN tends to overestimate the COT but shows promise in regions with high cloud fractions and high optical thicknesses, despite being outside the general training envelope. This work is intended as a baseline for future implementations of the CNN that can enable generalization to different regions, scales, wavelengths, and sun-sensor geometries with limited training.
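The segmentation framing described above requires discretizing the continuous COT field into 36 classes before training. The paper does not spell out its bin edges here, so the sketch below is a minimal, hypothetical illustration of that step: a clear-sky class plus log-spaced COT bins over an assumed range of 0.1 to 100 (the function names, bin range, and spacing are this example's assumptions, not the authors' exact choices).

```python
import numpy as np

N_CLASSES = 36

# Assumed discretization: class 0 = clear sky / very low COT, classes 1..35
# cover log-spaced bins between COT = 0.1 and COT = 100. The real paper's
# edges may differ; this only illustrates the segmentation framing.
edges = np.logspace(np.log10(0.1), np.log10(100.0), N_CLASSES)

def cot_to_class(cot):
    """Map a COT field (any-shaped array) to integer class labels in [0, 35]."""
    cot = np.asarray(cot, dtype=float)
    # np.digitize returns 0 for values below edges[0] and len(edges) above
    # edges[-1]; clip so saturated pixels land in the top class.
    return np.clip(np.digitize(cot, edges), 0, N_CLASSES - 1)

def class_to_cot(labels):
    """Approximate inverse: return a representative COT per class
    (0 for the clear-sky class, bin centers otherwise)."""
    centers = np.concatenate([[0.0], 0.5 * (edges[:-1] + edges[1:])])
    return centers[np.asarray(labels)]
```

With labels in this form, the U-Net's final layer would output a 36-channel per-pixel class probability map, trained with a standard categorical cross-entropy loss; `class_to_cot` then converts the predicted label map back to a COT field for evaluation against the LES truth.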

Status: final response (author comments only)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on amt-2022-45', Anonymous Referee #1, 04 Mar 2022
    • AC1: 'Reply on RC1', Vikas Nataraja, 20 Apr 2022
  • RC2: 'Comment on amt-2022-45', Anonymous Referee #2, 05 May 2022

Total article views: 523 (including HTML, PDF, and XML)
  • HTML: 350
  • PDF: 157
  • XML: 16
  • Total: 523
  • BibTeX: 7
  • EndNote: 7
Views and downloads (calculated since 15 Feb 2022)

Viewed (geographical distribution)

Total article views: 520 (including HTML, PDF, and XML). Of these, 520 have a defined geographic origin and 0 are of unknown origin.
Latest update: 02 Jul 2022
Short summary
A Convolutional Neural Network (CNN) is introduced to retrieve Cloud Optical Thickness (COT) from passive cloud imagery. The CNN, trained on Large Eddy Simulations from the Sulu Sea, learns from spatial information at multiple scales to reduce cloud inhomogeneity effects. By considering the spatial context of a pixel, the CNN outperforms the traditional Independent Pixel Approximation (IPA) across several cloud morphology metrics.