https://doi.org/10.5194/amt-2021-100

  06 May 2021

Review status: this preprint is currently under review for the journal AMT.

Inpainting Radar Missing Data Regions with Deep Learning

Andrew Geiss and Joseph C. Hardin
  • Pacific Northwest National Laboratory, Richland, WA, USA

Abstract. Missing and low-quality data regions are a frequent problem for weather radars. They can stem from a variety of sources: beam blockage, instrument failure, near-ground blind zones, and many others. Filling in missing data regions is often useful for estimating local atmospheric properties and for applying high-level data processing schemes, such as feature detection and tracking, without the need for preprocessing and error-handling steps. Interpolation schemes are typically used for this task, though they tend to produce unrealistically smooth results that are not representative of the atmospheric turbulence and variability usually resolved by weather radars. Recently, Generative Adversarial Networks (GANs) have achieved impressive results in the area of photo inpainting. Here, they are demonstrated as a tool for infilling radar missing data regions. These neural networks are capable of extending large-scale cloud and precipitation features that border missing data regions into those regions while hallucinating plausible small-scale variability. In other words, they can inpaint missing data with accurate large-scale features and plausible local small-scale features. This method is demonstrated on a scanning C-band radar and a vertically pointing Ka-band radar that were deployed as part of the Cloud Aerosol and Complex Terrain Interactions (CACTI) field campaign. Three missing data scenarios are explored: infilling low-level blind zones and short outage periods for the Ka-band radar, and infilling beam blockage areas for the C-band radar. Two deep learning based approaches are tested: a Convolutional Neural Network (CNN) that optimizes pixel-level error, and a GAN that optimizes a combined pixel-level error and adversarial loss. Both deep learning approaches significantly outperform traditional inpainting schemes under several pixel-level and perceptual quality metrics.
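The combined objective mentioned in the abstract — pixel-level reconstruction error over the missing region plus an adversarial term — can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function name, the choice of an L1 pixel loss, the non-saturating adversarial term, and the 0.01 weight are all assumptions for demonstration.

```python
import numpy as np

def combined_inpainting_loss(pred, target, mask, d_score, adv_weight=0.01):
    """Generator loss for GAN-based inpainting (illustrative sketch).

    pred, target : 2-D arrays of radar reflectivity (e.g. dBZ)
    mask         : 1 inside the missing-data region, 0 elsewhere
    d_score      : discriminator's estimated probability, in (0, 1),
                   that the inpainted scene is real
    """
    # Pixel-level L1 error, averaged over the missing-data region only.
    pixel_l1 = np.abs((pred - target) * mask).sum() / max(mask.sum(), 1.0)
    # Non-saturating adversarial term: the generator is rewarded when
    # the discriminator scores its output as realistic (d_score -> 1).
    adversarial = -np.log(d_score + 1e-8)
    return pixel_l1 + adv_weight * adversarial

# Toy example: a 4x4 scene with a 2x2 missing block.
target = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0
pred = target + mask                 # off by 1 dBZ inside the hole
loss = combined_inpainting_loss(pred, target, mask, d_score=0.9)
```

A CNN trained on pixel-level error alone corresponds to `adv_weight=0`; the adversarial term is what pushes the generator toward realistic small-scale texture rather than the smooth fields an interpolator would produce.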

Andrew Geiss and Joseph C. Hardin

Status: open (extended)

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on amt-2021-100', Andrew Black, 03 Aug 2021 reply
  • EC1: 'Comment on amt-2021-100', Gianfranco Vulpiani, 24 Sep 2021 reply

Viewed

Total article views: 515 since 06 May 2021 (HTML: 339, PDF: 170, XML: 6; Supplement: 16; BibTeX: 4; EndNote: 3)

Viewed (geographical distribution)

Total article views: 487 (including HTML, PDF, and XML), of which 487 with geography defined and 0 with unknown origin.
Latest update: 24 Sep 2021
Short summary
Radars can suffer from missing or poor-quality data regions for several reasons: beam blockage, instrument failure, near-ground blind zones, etc. Here, we demonstrate how deep convolutional neural networks can be used to fill in radar missing data regions, and show that they can significantly outperform conventional approaches in terms of realism and accuracy.