<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing with OASIS Tables v3.0 20080202//EN" "journalpub-oasis3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:oasis="http://docs.oasis-open.org/ns/oasis-exchange/table" xml:lang="en" dtd-version="3.0" article-type="research-article"><?xmltex \makeatother\@nolinetrue\makeatletter?>
  <front>
    <journal-meta><journal-id journal-id-type="publisher">AMT</journal-id><journal-title-group>
    <journal-title>Atmospheric Measurement Techniques</journal-title>
    <abbrev-journal-title abbrev-type="publisher">AMT</abbrev-journal-title><abbrev-journal-title abbrev-type="nlm-ta">Atmos. Meas. Tech.</abbrev-journal-title>
  </journal-title-group><issn pub-type="epub">1867-8548</issn><publisher>
    <publisher-name>Copernicus Publications</publisher-name>
    <publisher-loc>Göttingen, Germany</publisher-loc>
  </publisher></journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.5194/amt-14-7729-2021</article-id><title-group><article-title>Inpainting radar missing data regions with deep learning</article-title><alt-title>Inpainting radar missing data regions with deep learning</alt-title>
      </title-group><?xmltex \runningtitle{Inpainting radar missing data regions with deep learning}?><?xmltex \runningauthor{A.~Geiss and J.~C.~Hardin}?>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes">
          <name><surname>Geiss</surname><given-names>Andrew</given-names></name>
          <email>andrew.geiss@pnnl.gov</email>
        </contrib>
        <contrib contrib-type="author" corresp="no">
          <name><surname>Hardin</surname><given-names>Joseph C.</given-names></name>
          
        <ext-link>https://orcid.org/0000-0002-8489-4763</ext-link></contrib>
        <aff id="aff1"><institution>Pacific Northwest National Laboratory, Atmospheric Sciences and Global Change Division, Richland, WA, USA</institution>
        </aff>
      </contrib-group>
      <author-notes><corresp id="corr1">Andrew Geiss (andrew.geiss@pnnl.gov)</corresp></author-notes><pub-date><day>9</day><month>December</month><year>2021</year></pub-date>
      
      <volume>14</volume>
      <issue>12</issue>
      <fpage>7729</fpage><lpage>7747</lpage>
      <history>
        <date date-type="received"><day>11</day><month>April</month><year>2021</year></date>
           <date date-type="rev-request"><day>6</day><month>May</month><year>2021</year></date>
           <date date-type="rev-recd"><day>23</day><month>October</month><year>2021</year></date>
           <date date-type="accepted"><day>3</day><month>November</month><year>2021</year></date>
      </history>
      <permissions>
        <copyright-statement>Copyright: © 2021 Andrew Geiss</copyright-statement>
        <copyright-year>2021</copyright-year>
      <license license-type="open-access"><license-p>This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this licence, visit <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">https://creativecommons.org/licenses/by/4.0/</ext-link></license-p></license></permissions><self-uri xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021.html">This article is available from https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021.html</self-uri><self-uri xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021.pdf">The full text article is available as a PDF file from https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021.pdf</self-uri>
      <abstract><title>Abstract</title>

      <p id="d1e87">Missing and low-quality data regions are a frequent problem for weather radars. They stem from a variety of sources: beam blockage, instrument failure, near-ground blind zones, and many others. Filling in missing data regions is often useful for estimating local atmospheric properties and for applying high-level data processing schemes without the need for preprocessing and error-handling steps – feature detection and tracking, for instance. Interpolation schemes are typically used for this task, though they tend to produce unrealistically smooth results that are not representative of the atmospheric turbulence and variability usually resolved by weather radars. Recently, generative adversarial networks (GANs) have achieved impressive results in the area of photo inpainting. Here, they are demonstrated as a tool for infilling radar missing data regions. These neural networks are capable of extending large-scale cloud and precipitation features that border missing data regions into those regions while hallucinating plausible small-scale variability. In other words, they can inpaint missing data with accurate large-scale features and plausible local small-scale features. This method is demonstrated on a scanning C-band radar and a vertically pointing Ka-band radar that were deployed as part of the Cloud, Aerosol, and Complex Terrain Interactions (CACTI) field campaign. Three missing data scenarios are explored: infilling low-level blind zones and short outage periods for the Ka-band radar and infilling beam blockage areas for the C-band radar. Two deep-learning-based approaches are tested, a convolutional neural network (CNN) and a GAN, which optimize pixel-level error and combined pixel-level error and adversarial loss, respectively. Both deep-learning approaches significantly outperform traditional inpainting schemes under several pixel-level and perceptual quality metrics.</p>
  </abstract>
    </article-meta>
  </front>
<body>
      

<sec id="Ch1.S1" sec-type="intro">
  <label>1</label><title>Introduction</title>
      <p id="d1e99">Missing data regions are a common problem for weather radars and can arise for many reasons. One of the most common for scanning radars is beam blockage. This occurs when terrain or nearby objects like buildings and trees obstruct the radar beam, resulting in a wedge-shaped blind zone behind the object. This is a particularly large problem in regions with substantial terrain like the western portion of the United States, for instance <xref ref-type="bibr" rid="bib1.bibx40 bib1.bibx43" id="paren.1"/>. Scanning radars can suffer from many other data quality issues where contiguous regions of missing or low-quality data may need to be inpainted. Some examples are interference from solar radiation at dawn and dusk <xref ref-type="bibr" rid="bib1.bibx28" id="paren.2"/>, ground clutter <xref ref-type="bibr" rid="bib1.bibx19 bib1.bibx20" id="paren.3"/> and super-refraction <xref ref-type="bibr" rid="bib1.bibx30" id="paren.4"/>, and echoes off wind farms <xref ref-type="bibr" rid="bib1.bibx22" id="paren.5"/>. Complete beam extinction due to attenuation by large storms can similarly cause large missing data regions for high-frequency radars. Several computational approaches exist to infill partial beam blockage cases <xref ref-type="bibr" rid="bib1.bibx26 bib1.bibx45" id="paren.6"/>. Another option is to use a radar network where multiple radars are installed on opposite sides of terrain <xref ref-type="bibr" rid="bib1.bibx43" id="paren.7"/>. More recently, deep-learning-based data fusion techniques <xref ref-type="bibr" rid="bib1.bibx39" id="paren.8"/> have been developed to enhance the coverage of radar networks by emulating radar observations based on data from satellite imagers and other instruments; these techniques have promise for combating beam blockage and large missing data regions for scanning radars. 
Finally, in the absence of additional data (from unblocked sweeps at higher elevation angles or other instruments), beam blockages can be filled in through traditional interpolation.</p>
      <p id="d1e127">In addition to beam blockage for scanning radars, we also examine simulated missing data scenarios for vertically pointing radars. Specifically, we examine two missing data<?pagebreak page7730?> scenarios for the Department of Energy Atmospheric Radiation Measurement (DOE-ARM) program's Ka-band zenith radar (KaZR). This instrument collects cloud and precipitation information in a vertical profile as weather passes over the radar and is used to generate time vs. height plots that are frequently used for atmospheric research. The first scenario is a simulated instrument failure, where data are unavailable for up to several minutes. The second scenario is a low-level blind zone. The low-level blind zone is of particular relevance because the KaZR operates with both a burst pulse mode and a pulse-compressed linear frequency-modulated chirped pulse mode. When operating in chirped pulse mode, data in the lower range gates are unavailable because of a receiver protection blanking region required by the longer pulse length <xref ref-type="bibr" rid="bib1.bibx41" id="paren.9"/>. Even the short burst pulse has a blind region near the surface based on the pulse width of the radar. Low-level blind zones are also relevant to space-borne precipitation radars like the Tropical Rainfall Measuring Mission (TRMM) and the Global Precipitation Measurement mission (GPM) instruments, which can be blind at lower levels due to surface echoes and beam attenuation <xref ref-type="bibr" rid="bib1.bibx29 bib1.bibx36" id="paren.10"/>.</p>
      <p id="d1e136">Robust methods for inpainting missing radar data have many possible uses. Accurate inpainting can provide more useful operational meteorology products for dissemination to the public <xref ref-type="bibr" rid="bib1.bibx44" id="paren.11"/> or for use in nowcasting <xref ref-type="bibr" rid="bib1.bibx33 bib1.bibx1" id="paren.12"/> or aviation <xref ref-type="bibr" rid="bib1.bibx39" id="paren.13"/>. Furthermore, research applications often involve sophisticated, high-level processing of radar data – for feature detection and tracking, for instance <xref ref-type="bibr" rid="bib1.bibx9" id="paren.14"/>. Producing radar products for research purposes in which missing and low-quality data regions have been repaired could significantly accelerate research projects by reducing or eliminating the need for researchers to develop their own code for error handling and data quality issues. Ideally, an inpainting scheme for radar data should produce results that are accurate at the pixel level but also visually appealing, physically consistent, and plausible.</p>
      <p id="d1e151">Image inpainting has long been an area of research in the fields of computer vision and image processing. The image inpainting problem involves restoring a missing or damaged region in an image by filling it with plausible data. Common applications include digital photo editing, restoration of damaged photographs, and recovery of information lost during image compression and transmission. Many approaches with a range of application-specific advantages and varying levels of complexity exist <xref ref-type="bibr" rid="bib1.bibx24" id="paren.15"/>. Image inpainting schemes can be broken into several categories. Texture-synthesis-based approaches assume self-similarity in images and copy textures found in the undamaged region of the image into the missing data region <xref ref-type="bibr" rid="bib1.bibx7" id="paren.16"/>. Structure-based methods seek to extend large-scale structures into the missing data region from its boundaries and often focus on isophotes (lines of constant pixel intensity) that intersect the boundary <xref ref-type="bibr" rid="bib1.bibx6" id="paren.17"/>. Diffusion-based methods diffuse boundary or isophote information through the missing data region, typically by solving a partial differential equation within the region: the Laplace equation <xref ref-type="bibr" rid="bib1.bibx3" id="paren.18"/> or the Navier–Stokes equations <xref ref-type="bibr" rid="bib1.bibx4" id="paren.19"/>, for instance. There are also sparse-representation and multi-resolution methods that are typically geared towards inpainting specific image classes, pictures of faces, for example <xref ref-type="bibr" rid="bib1.bibx35" id="paren.20"/>. Finally, there are many mixed approaches, like that used by <xref ref-type="bibr" rid="bib1.bibx5" id="text.21"/>, that combine concepts from two or more of these categories. 
Image inpainting is a large sub-field of image processing, and for a more detailed overview see <xref ref-type="bibr" rid="bib1.bibx24" id="text.22"/> or <xref ref-type="bibr" rid="bib1.bibx13" id="text.23"/>.</p>
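As a concrete illustration of the diffusion-based category (a generic sketch, not an algorithm from this paper), a minimal Laplace-equation inpainter can be written as a Jacobi iteration that repeatedly replaces each missing pixel with the mean of its four neighbors:

```python
import numpy as np

def laplace_inpaint(img, mask, iters=500):
    """Fill masked pixels by solving the Laplace equation via Jacobi iteration.

    img:  2-D array of pixel values; values under the mask are ignored.
    mask: boolean array, True where data are missing.
    """
    out = img.copy()
    out[mask] = img[~mask].mean()  # neutral initial guess inside the hole
    for _ in range(iters):
        # four-neighbor average, with image edges handled by replicate padding
        p = np.pad(out, 1, mode="edge")
        nb = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:])
        out[mask] = nb[mask]       # update only the missing-data region
    return out
```

Because the solution is harmonic, this scheme extends smooth gradients across the hole but cannot reproduce texture, which is exactly the smoothing limitation discussed later for interpolation-style benchmarks.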
      <p id="d1e183">In recent years, deep convolutional neural networks (CNNs) have revolutionized the area of image inpainting research. The earliest applications of CNNs to the image inpainting problem involved using autoencoders that optimized pixel-level loss <xref ref-type="bibr" rid="bib1.bibx23" id="paren.24"/>. Research in this area began to accelerate rapidly, however, after the introduction of generative adversarial networks (GANs) for image processing and synthesis by <xref ref-type="bibr" rid="bib1.bibx12" id="text.25"/>. GANs allow CNN-based inpainting schemes to hallucinate plausible small-scale variability, including textures and sharp edges, in the inpainted regions. GAN-based algorithms can produce extremely realistic results, to the point that it may not be obvious that inpainting has been performed. GAN-based inpainting involves training two CNNs side by side. One is the generator network, which takes a damaged image as input and attempts to fill in missing data regions. The second is a discriminator network that takes either undamaged images or outputs from the generator and attempts to classify them as real or fake, typically optimizing a binary cross-entropy <xref ref-type="bibr" rid="bib1.bibx12" id="paren.26"/> or Wasserstein <xref ref-type="bibr" rid="bib1.bibx2" id="paren.27"/> loss function. <xref ref-type="bibr" rid="bib1.bibx32" id="text.28"/> first performed inpainting with a GAN, using a combination of an <inline-formula><mml:math id="M1" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> loss, which optimizes pixel-level errors and ensures that the inpainting CNN can reproduce large-scale structures (though they may be blurry), and an adversarial loss, enforced by a discriminator network, which ensures that the inpainted regions look realistic and forces the generator to produce sharp features and small-scale variability. 
<xref ref-type="bibr" rid="bib1.bibx42" id="text.29"/> expanded on this by using a combination of adversarial loss and feature loss based on the internal activations of image classification CNNs <xref ref-type="bibr" rid="bib1.bibx27" id="paren.30"/>. There has been a significant amount of subsequent work focusing on altered loss functions, incorporating updated CNN architectures like U-Net <xref ref-type="bibr" rid="bib1.bibx34" id="paren.31"/> and other applications of these CNN training paradigms. An example is the image-to-image translation introduced by <xref ref-type="bibr" rid="bib1.bibx21" id="text.32"/>, who used a combined <inline-formula><mml:math id="M2" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> and adversarial loss function (also used here). The research in this area is far too broad to include a comprehensive overview here, so please refer to <xref ref-type="bibr" rid="bib1.bibx8" id="text.33"/> for a more in-depth review of deep-learning-based inpainting.</p>
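The combined objective described above can be sketched as follows. This is a generic illustration, not the authors' code; the weighting lam=100 follows the pix2pix convention and is an assumption, not necessarily the value used in this paper:

```python
import numpy as np

def generator_loss(fake, real, disc_fake, lam=100.0):
    """Combined l1 + adversarial loss for an inpainting generator.

    fake:      generator output (array)
    real:      ground-truth image (array)
    disc_fake: discriminator's estimated probability that `fake` is real, in (0, 1]
    lam:       weight on the pixel-level term (100 follows pix2pix; illustrative)
    """
    l1 = np.abs(fake - real).mean()      # pixel-level term: keeps large scales accurate
    adv = -np.log(max(disc_fake, 1e-7))  # adversarial term: reward fooling the discriminator
    return lam * l1 + adv
```

The l1 term alone yields blurry but accurate reconstructions; the adversarial term alone yields sharp but unconstrained output. Their sum trades the two off, which is the motivation for the combined loss used later in the paper.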
      <?pagebreak page7731?><p id="d1e240">In this study, we experiment with applying state-of-the-art deep-learning-based image inpainting schemes to fill in several types of missing data regions simulated for two of the ARM program radars. The majority of past image inpainting research is focused on restoring missing regions in photographs. As a result, GAN-based methods are heavily optimized to produce visually appealing and plausible results and are not necessarily good at reproducing the ground-truth image in terms of pixel-level accuracy. Therefore, two CNN-based inpainting paradigms are investigated: one that optimizes only the pixel-level mean absolute error (<inline-formula><mml:math id="M3" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> loss) and one that optimizes combined <inline-formula><mml:math id="M4" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> and adversarial loss.</p>
</sec>
<sec id="Ch1.S2">
  <label>2</label><title>Data</title>
      <p id="d1e273">The data used here are from two US DOE Atmospheric Radiation Measurement (ARM) program radars that were deployed as part of the Cloud, Aerosol, and Complex Terrain Interactions (CACTI) field campaign from October 2018 to April 2019 <xref ref-type="bibr" rid="bib1.bibx38" id="paren.34"/>. The field campaign deployed a large suite of instruments near the Sierras de Córdoba mountain range in Argentina, including multiple radars, lidars, imagers, rain gauges, and many others, with the primary goals of investigating the influence of orography, surface fluxes, aerosols, and thermodynamics on boundary layer clouds and on the initiation and development of convection. The radars were deployed in a region just east of the mountain range and were able to observe frequent warm boundary layer cloud and a range of convective systems at various points in their lifetime.</p>
<sec id="Ch1.S2.SS1">
  <label>2.1</label><title>KaZR</title>
      <p id="d1e286">The Ka-band zenith radar (KaZR) is a 35 GHz vertically pointing cloud radar that has been deployed at many of the ARM sites around the world <xref ref-type="bibr" rid="bib1.bibx41" id="paren.35"/>. It is a Doppler radar that produces time vs. height observations of cloud and precipitation and operates in both a burst and a pulse-compressed linear frequency-modulated chirped pulse mode. Here, we have used the quality-controlled burst pulse-mode reflectivity, mean Doppler velocity, and spectrum width fields from the CACTI field campaign from 15 October 2018 to 30 April 2019 <xref ref-type="bibr" rid="bib1.bibx14" id="paren.36"/>. A minimum reflectivity of <inline-formula><mml:math id="M5" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">10</mml:mn></mml:mrow></mml:math></inline-formula> dBZ was used as a mask, and any reflectivity, velocity, or spectrum width observations in pixels with reflectivity below this threshold were ignored. The CNN takes samples that have 256 time observations and 256 vertical range gates as inputs. The radar has a sampling interval of 2 s and vertical resolution of 30 m, and this corresponds to about 8.5 min by 7.5 km. Because there are frequent time periods when there is no cloud over the radar, and training/evaluating the CNN on blank samples is not useful, time periods longer than 256 samples when there was no weather observed were removed from the dataset. 
Because there is some noise in the data, and samples without weather still contain some pixels with reflectivity greater than <inline-formula><mml:math id="M6" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">10</mml:mn></mml:mrow></mml:math></inline-formula> dBZ, “no weather” periods were defined as periods that do not contain any <inline-formula><mml:math id="M7" display="inline"><mml:mrow><mml:mn mathvariant="normal">10</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">10</mml:mn></mml:mrow></mml:math></inline-formula>-pixel (<inline-formula><mml:math id="M8" display="inline"><mml:mrow><mml:mn mathvariant="normal">300</mml:mn><mml:mspace linebreak="nobreak" width="0.125em"/><mml:mtext>m</mml:mtext><mml:mo>×</mml:mo></mml:mrow></mml:math></inline-formula> 20 s) block with all reflectivity values <inline-formula><mml:math id="M9" display="inline"><mml:mrow><mml:mo>&gt;</mml:mo><mml:mo>-</mml:mo><mml:mn mathvariant="normal">10</mml:mn></mml:mrow></mml:math></inline-formula> dBZ. After this filtering, the dataset is split into a training set containing the first 80 % of the data and a test set containing the last 20 % (with respect to time). The KaZR operated continuously throughout the field campaign <xref ref-type="bibr" rid="bib1.bibx16" id="paren.37"/>.</p>
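The "no weather" screening described above can be sketched as follows. This is a generic reimplementation of the stated rule (any 10 × 10-pixel block with all reflectivities above the threshold counts as weather), not the authors' code:

```python
import numpy as np

def contains_weather(refl, thresh=-10.0, block=10):
    """Return True if any block x block region has all reflectivities above thresh.

    Mirrors the screening rule in the text: a time period is treated as
    containing weather only when such a fully cloudy block exists, which
    makes the test robust to isolated noisy pixels above the threshold.
    """
    above = (refl > thresh).astype(float)
    # 2-D moving sum computed as two separable 1-D "valid" convolutions
    k = np.ones(block)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, above)
    full = np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, rows)
    return bool((full >= block * block).any())
```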
</sec>
<sec id="Ch1.S2.SS2">
  <label>2.2</label><title>C-SAPR2</title>
      <p id="d1e364">The C-band Scanning ARM Precipitation Radar 2 (C-SAPR2) is a 5.7 GHz scanning precipitation radar. Here, we have used reflectivity, radial velocity, and spectrum width data from plan position indicator (PPI) scans. The data have a 1<inline-formula><mml:math id="M10" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> azimuth and 100 m range resolution  <xref ref-type="bibr" rid="bib1.bibx15" id="paren.38"/>.  The PPI scans used were preprocessed using the “Taranis” software package <xref ref-type="bibr" rid="bib1.bibx17" id="paren.39"/>. Taranis provides quality control and produces a suite of useful geophysical parameters using the radar's dual-polarization observations. This dataset has not yet been made publicly available but will be in the near future. The C-SAPR2 suffered a hardware failure in February 2019 and was no longer able to rotate in the azimuth, so PPI scans are only available from 15 October 2018 to 2 March 2019 <xref ref-type="bibr" rid="bib1.bibx16" id="paren.40"/>. For the majority of the field campaign, the C-SAPR2 scan strategy involved performing a series of PPI scans at consecutively increasing elevation angles, followed by a vertical scan, followed by a series of range height indicator scans. The whole process takes about 15 min. PPI scans at subsequent elevations are often similar because they are observed in quick succession, so the training set used here contains only one sweep from each volume scan. The sweeps used to construct the inpainting dataset were selected randomly from the five sweeps in each volume scan that contained the most weather (most observations with reflectivity <inline-formula><mml:math id="M11" display="inline"><mml:mrow><mml:mo>&gt;</mml:mo><mml:mn mathvariant="normal">0</mml:mn></mml:mrow></mml:math></inline-formula> dBZ). 
Many of these still did not contain weather, and inpainting empty scans is not useful, so ultimately 1500 scans that contained weather were retained. As with the KaZR data, the first 80 % were used for training, and the last 20 % were used for testing. Finally, pixels with reflectivity below a threshold of 0 dBZ were blanked out.</p>
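The sweep-selection step described above can be sketched like this. The helper below is hypothetical and illustrative only; the authors' exact procedure may differ in detail:

```python
import numpy as np

rng = np.random.default_rng(0)

def pick_sweep(volume, thresh=0.0, top_k=5):
    """From a (sweeps, azimuth, range) volume scan, pick one sweep at random
    from among the top_k sweeps containing the most weather, where "weather"
    is counted as pixels with reflectivity above thresh (dBZ)."""
    counts = (volume > thresh).sum(axis=(1, 2))  # weather-pixel count per sweep
    top = np.argsort(counts)[-top_k:]            # indices of the top_k weather-rich sweeps
    return int(rng.choice(top))
```

Randomizing among the weather-rich sweeps, rather than always taking the single best one, keeps consecutive volume scans from contributing near-duplicate samples to the training set.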
</sec>
</sec>
<sec id="Ch1.S3">
  <label>3</label><title>Methods</title>
      <p id="d1e404">Two deep-learning approaches are demonstrated here to inpaint missing data regions. The first involves using a single CNN. Radar data are intentionally degraded by removing randomly sized chunks of data. The exact manner in which the data are handled depends on the case, and more information is provided in Sect. <xref ref-type="sec" rid="Ch1.S4"/>. The CNN is tasked with taking the degraded radar data along with a mask indicating the position of the missing data as an input and minimizing the<?pagebreak page7732?> mean absolute pixel-wise error between its output and the original (non-degraded) data. This is referred to as the “<inline-formula><mml:math id="M12" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN” throughout the paper because it optimizes the <inline-formula><mml:math id="M13" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> norm of the difference between its outputs and the ground truth. The second approach involves training two CNNs adversarially: one that performs inpainting and one that discriminates between infilled radar data and ground-truth radar data. The inpainting network attempts to minimize both the mean absolute pixel error and the likelihood that the discriminator labels its output as inpainted data. The inpainting network is provided an additional random noise seed as input and is allowed to hallucinate plausible small-scale variability in its outputs. This CNN is referred to as a conditional generative adversarial network (“CGAN”) later in the paper. The exact neural networks and training procedure are described in more detail below.</p>
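The degradation step for generating training pairs might look like the following sketch. Rectangular missing-data chunks are assumed here purely for illustration; the mask geometry in the paper varies with the missing data scenario:

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(sample, max_frac=0.5):
    """Remove a randomly sized rectangular chunk from an (H, W, C) radar sample.

    Returns the degraded data with the missing region zeroed out, concatenated
    with the binary mask (1 = missing) that the CNN receives as an extra
    input channel.
    """
    h, w, _ = sample.shape
    mh = rng.integers(1, int(h * max_frac) + 1)  # mask height
    mw = rng.integers(1, int(w * max_frac) + 1)  # mask width
    i = rng.integers(0, h - mh + 1)              # top-left corner of the chunk
    j = rng.integers(0, w - mw + 1)
    mask = np.zeros((h, w, 1), dtype=sample.dtype)
    mask[i:i + mh, j:j + mw] = 1.0
    degraded = sample * (1.0 - mask)             # blank the missing-data region
    return np.concatenate([degraded, mask], axis=-1)
```

The original, non-degraded sample serves as the regression target, so training pairs can be generated on the fly from undamaged scenes.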
<sec id="Ch1.S3.SS1">
  <label>3.1</label><title>Convolutional neural network</title>
      <p id="d1e438">Inpainting is done with a Unet++-style CNN <xref ref-type="bibr" rid="bib1.bibx46" id="paren.41"/>. This is based on a previous neural network architecture called a “Unet” <xref ref-type="bibr" rid="bib1.bibx34" id="paren.42"/>. These CNNs map a 2-D gridded input to an output with the same dimensions and were originally developed for image segmentation tasks. A Unet is composed of a spatial down-sampling and a corresponding up-sampling branch. The down-sampling branch is composed of a series of “blocks” consisting of multiple convolutional layers, and the input data undergo <inline-formula><mml:math id="M14" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">2</mml:mn></mml:mrow></mml:math></inline-formula> down-sampling as they pass through each block, while the number of feature channels is increased. This process trades spatial information for feature information. The down-sampling branch is followed by an up-sampling branch that increases the spatial dimensions and decreases the feature dimension. A key aspect of the Unet is that it also includes skip connections between these two branches, where the output from each down-sampling block is provided as additional input to the up-sampling block with the corresponding spatial resolution. This makes these networks particularly good at combining large-scale contextual information with pixel-level information. The Unet++ extends this idea by constructing each skip connection across the Unet from a set of densely connected <xref ref-type="bibr" rid="bib1.bibx18" id="paren.43"/> convolutional layers and including intermediate up- and down-sampling connections (the inclusion of intermediate down-sampling connections is a slight difference from the original Unet++ as described by <xref ref-type="bibr" rid="bib1.bibx18" id="altparen.44"/>). A diagram of the inpainting CNN is shown in Fig. <xref ref-type="fig" rid="Ch1.F1"/>. 
The discriminator network consists of seven consecutive densely connected blocks <xref ref-type="bibr" rid="bib1.bibx18" id="paren.45"/> that consist of four convolutional layers with rectified linear unit transfer functions followed by <inline-formula><mml:math id="M15" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">2</mml:mn></mml:mrow></mml:math></inline-formula> down-sampling and 0.1 dropout layers. The final output is a classification produced by a one-neuron layer with a logistic (sigmoid) transfer function. A diagram of this CNN is included in the Supplement.
<?xmltex \hack{\newpage}?></p>
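The down-sampling bookkeeping in this architecture can be made concrete with a small helper. The values c0 = 32 and g = 2 below are illustrative assumptions, not necessarily the hyperparameters used in the paper:

```python
def unet_level_shapes(n, m, c0, g, levels):
    """Spatial size and channel count at each depth of a Unet-style encoder.

    Each level halves both spatial dimensions (the 2x2 down-sampling) while
    the channel count grows by a factor g, trading spatial for feature
    information as described in the text.
    """
    return [(n >> l, m >> l, int(c0 * g ** l)) for l in range(levels + 1)]
```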

      <?xmltex \floatpos{t}?><fig id="Ch1.F1" specific-use="star"><?xmltex \currentcnt{1}?><?xmltex \def\figurename{Figure}?><label>Figure 1</label><caption><p id="d1e486">A diagram of the Unet++-style CNN <xref ref-type="bibr" rid="bib1.bibx46" id="paren.46"/> used for inpainting. For the KaZR cases, <inline-formula><mml:math id="M16" display="inline"><mml:mrow><mml:mi>N</mml:mi><mml:mo>×</mml:mo><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn mathvariant="normal">256</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">256</mml:mn></mml:mrow></mml:math></inline-formula>. For the C-SAPR2 cases, <inline-formula><mml:math id="M17" display="inline"><mml:mrow><mml:mi>N</mml:mi><mml:mo>×</mml:mo><mml:mi>M</mml:mi><mml:mo>=</mml:mo><mml:mn mathvariant="normal">1024</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">128</mml:mn></mml:mrow></mml:math></inline-formula>. <inline-formula><mml:math id="M18" display="inline"><mml:mi mathvariant="normal">ℓ</mml:mi></mml:math></inline-formula> represents the number of spatial down-sampling operations, <inline-formula><mml:math id="M19" display="inline"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> is the number of output channels from the convolutional layers at the highest spatial resolution level, and <inline-formula><mml:math id="M20" display="inline"><mml:mi>g</mml:mi></mml:math></inline-formula> determines the rate at which the number of channels increases for lower resolutions.</p></caption>
          <?xmltex \igopts{width=355.659449pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f01.png"/>

        </fig>

</sec>
<sec id="Ch1.S3.SS2">
  <label>3.2</label><title>Data processing</title>
      <p id="d1e572">Several preprocessing and post-processing operations need to be applied to the radar data so that they are suitable for use with the CNNs. Firstly, the data need to be standardized to a consistent input range of [<inline-formula><mml:math id="M21" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn><mml:mo>,</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:math></inline-formula>]. The various fields used have different scales, and this ensures that they have similar weighting when computing loss. Furthermore, the CNN uses a <inline-formula><mml:math id="M22" display="inline"><mml:mi>tanh⁡</mml:mi></mml:math></inline-formula> transfer function after the last layer, which ensures that the CNN outputs are limited to the same range as the inputs. Separate standardization procedures were used for each of the fields. For reflectivity, data were clipped to a range of <inline-formula><mml:math id="M23" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">10</mml:mn></mml:mrow></mml:math></inline-formula> to 40 dBZ for KaZR data and a range of 0 to 60 dBZ for C-SAPR2 data. In practice, we found that the <inline-formula><mml:math id="M24" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN and many of the inpainting schemes that were used as benchmarks tended to smooth reflectivity values near cloud edges. However, most cloud edges in the dataset involve a sharper gradient in reflectivity, and this smoothing is a result of the inpainting and interpolation schemes struggling to correctly position the cloud edge. 
To mitigate this, the reflectivity data were linearly mapped to a range of [<inline-formula><mml:math id="M25" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">0.5</mml:mn></mml:mrow></mml:math></inline-formula>, 1.0], and pixels with reflectivity below the minimum threshold were assigned a value of <inline-formula><mml:math id="M26" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.0</mml:mn></mml:mrow></mml:math></inline-formula>, leaving a gap of 0.5 between clear and cloudy values. This gap helped the CNN and some of the diffusion- and interpolation-based inpainting schemes to produce sharp edges at the boundaries of precipitation and cloud regions. No such gap was used for the other two fields, but, as a post-processing step after inpainting, these two fields were masked so that all locations with reflectivity outputs <inline-formula><mml:math id="M27" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mo>-</mml:mo><mml:mn mathvariant="normal">0.5</mml:mn></mml:mrow></mml:math></inline-formula> were considered clear pixels. For velocity, a de-aliasing scheme was first applied (see Sect. 
<xref ref-type="sec" rid="Ch1.S3.SS3"/>), then velocity values were linearly scaled by a factor of (8 ms<inline-formula><mml:math id="M28" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>)<inline-formula><mml:math id="M29" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> for the KaZR data and (28 ms<inline-formula><mml:math id="M30" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>)<inline-formula><mml:math id="M31" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> for the C-SAPR2 data. Note that the instruments have Nyquist velocities of 8 ms<inline-formula><mml:math id="M32" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> and 16.5 ms<inline-formula><mml:math id="M33" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> respectively and that the scanning radar is more likely to observe large velocities because the vertical component of velocity is typically smaller than the horizontal component for atmospheric motions. Finally, a <inline-formula><mml:math id="M34" display="inline"><mml:mi>tanh⁡</mml:mi></mml:math></inline-formula> function was applied to bound all the velocity inputs in a range of [<inline-formula><mml:math id="M35" display="inline"><mml:mo lspace="0mm">-</mml:mo></mml:math></inline-formula>1,1]. 
Note that the unfolded velocities will often exceed the Nyquist velocity, so the <inline-formula><mml:math id="M36" display="inline"><mml:mi>tanh⁡</mml:mi></mml:math></inline-formula> function was used to bound the transformed velocity data to [<inline-formula><mml:math id="M37" display="inline"><mml:mo lspace="0mm">-</mml:mo></mml:math></inline-formula>1,1] instead of clipping. This nonlinearly compresses large velocity values toward <inline-formula><mml:math id="M38" display="inline"><mml:mrow><mml:mo>±</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:math></inline-formula> but ensures that distinct large values remain distinguishable from each other. The spectrum width data were clipped to a range of [0,2.5] ms<inline-formula><mml:math id="M39" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> for KaZR and [0,5.5] ms<inline-formula><mml:math id="M40" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula> for C-SAPR2 and then linearly mapped to a range of [<inline-formula><mml:math id="M41" display="inline"><mml:mo lspace="0mm">-</mml:mo></mml:math></inline-formula>1,1]. The inverse of each of these operations was performed to map the CNN outputs back to the range of the original dataset.</p>
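The per-field standardization above can be summarized in a short sketch (a minimal NumPy illustration with thresholds taken from the text; the function names, and the use of the KaZR thresholds as defaults, are our own):

```python
import numpy as np

def standardize_reflectivity(z_dbz, z_min=-10.0, z_max=40.0):
    """Map reflectivity to [-0.5, 1.0]; clear pixels (below z_min) are set
    to -1.0, leaving the 0.5 gap that encourages sharp cloud edges.
    Defaults are the KaZR thresholds; C-SAPR2 uses 0 to 60 dBZ."""
    clear = z_dbz < z_min
    z = np.clip(z_dbz, z_min, z_max)
    out = -0.5 + 1.5 * (z - z_min) / (z_max - z_min)
    out[clear] = -1.0
    return out

def standardize_velocity(v, v_scale=8.0):
    """Scale by (v_scale m/s)^-1, then softly bound with tanh so unfolded
    velocities beyond the Nyquist range stay distinguishable.
    v_scale is 8 for KaZR and 28 for C-SAPR2."""
    return np.tanh(v / v_scale)

def standardize_spectrum_width(w, w_max=2.5):
    """Clip to [0, w_max] m/s and map linearly to [-1, 1].
    w_max is 2.5 for KaZR and 5.5 for C-SAPR2."""
    return 2.0 * np.clip(w, 0.0, w_max) / w_max - 1.0
```

The inverse mappings follow by straightforward algebra, except within the clipped ranges, where the original values are unrecoverable.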
      <p id="d1e795">When degrading data for CNN training and testing, missing data regions for each field were created by setting all reflectivity and spectrum width values in the region to <inline-formula><mml:math id="M42" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:math></inline-formula> and all velocity values to 0. Large regions of clear pixels naturally exist in the training data of course, so an additional<?pagebreak page7733?> mask channel is provided as an input to the CNN to indicate the regions that need to be inpainted. The mask has values of 1 in missing data regions and 0 elsewhere. Because values outside the missing data region are known, there is no point in returning CNN outputs for these areas. The final operation performed by the CNN takes the output from the last convolutional layer (with a <inline-formula><mml:math id="M43" display="inline"><mml:mi>tanh⁡</mml:mi></mml:math></inline-formula> transfer function) and uses the mask to combine the known data from the input with the CNN outputs in the missing data region. The exact operation is
            <disp-formula id="Ch1.E1" content-type="numbered"><label>1</label><mml:math id="M44" display="block"><mml:mrow><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo><mml:mo>=</mml:mo><mml:mi>m</mml:mi><mml:msup><mml:mi>G</mml:mi><mml:mo>′</mml:mo></mml:msup><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo><mml:mo>+</mml:mo><mml:mo>(</mml:mo><mml:mn mathvariant="normal">1</mml:mn><mml:mo>-</mml:mo><mml:mi>m</mml:mi><mml:mo>)</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
          where <inline-formula><mml:math id="M45" display="inline"><mml:mrow><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the CNN output for input pixel <inline-formula><mml:math id="M46" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math id="M47" display="inline"><mml:mrow><mml:msup><mml:mi>G</mml:mi><mml:mo>′</mml:mo></mml:msup></mml:mrow></mml:math></inline-formula> is the output from the last convolution + <inline-formula><mml:math id="M48" display="inline"><mml:mi>tanh⁡</mml:mi></mml:math></inline-formula> layer, and <inline-formula><mml:math id="M49" display="inline"><mml:mi>m</mml:mi></mml:math></inline-formula> is the corresponding pixel in the mask. In initial experiments that used a mask with a sharp edge (transition from 1 to 0) around the inpainting region, the CNN outputs tended to contain noticeable artifacts at the edges. To help ensure that the features produced by the CNN at the edges of the inpainting region matched the features just outside of the region in the input data, an <inline-formula><mml:math id="M50" display="inline"><mml:mi>n</mml:mi></mml:math></inline-formula>-pixel buffer region was included where the values in the mask decrease linearly from 1 to 0. During training, <inline-formula><mml:math id="M51" display="inline"><mml:mi>n</mml:mi></mml:math></inline-formula> is randomly selected from a range of 1–17 to improve the robustness of the trained CNN. The result is that the final CNN outputs in this buffer region are a weighted average of the CNN output and the known input data. This significantly reduced artifacts near the edge of the inpainted region. Finally, in the CGAN case, an additional random seed was provided as a CNN input that allows the CNN to hallucinate plausible small-scale variability. 
Here, this seed was included as an additional input channel containing random values sampled from a Gaussian distribution with a standard deviation of 0.5. We found that, after training, the CNNs generally did not rely on this random seed, however; this is discussed in more detail in Sect. <xref ref-type="sec" rid="Ch1.S5"/>. In summary, the input channels to the inpainting CNN are [<inline-formula><mml:math id="M52" display="inline"><mml:mo lspace="0mm">-</mml:mo></mml:math></inline-formula>1,1] standardized reflectivity, velocity, and spectrum width data, a [0,1] mask indicating the inpainting region with smoothed edges, and a channel of random seed data for the generator when training as a CGAN.</p>
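A minimal sketch of the mask softening and output compositing described in this section (illustrative NumPy code; placing the linear ramp just outside the hard missing-data region, and using 4-connected dilation, are our assumptions):

```python
import numpy as np

def ramp_mask(hard_mask, n):
    """Soften a binary {0,1} mask so values decay linearly from 1 to 0
    over an n-pixel buffer around the missing-data region."""
    m = hard_mask.astype(float)
    grown = hard_mask.astype(bool)
    for step in range(1, n + 1):
        # dilate the region by one pixel in each cardinal direction
        d = grown.copy()
        d[1:, :] |= grown[:-1, :]
        d[:-1, :] |= grown[1:, :]
        d[:, 1:] |= grown[:, :-1]
        d[:, :-1] |= grown[:, 1:]
        ring = d & ~grown          # pixels newly reached at this distance
        m[ring] = 1.0 - step / (n + 1)
        grown = d
    return m

def composite_output(g_prime, x, mask):
    """Eq. (1): G(x) = m * G'(x) + (1 - m) * x, so known data pass
    through unchanged and buffer pixels get a weighted average."""
    return mask * g_prime + (1.0 - mask) * x
```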
</sec>
<sec id="Ch1.S3.SS3">
  <label>3.3</label><title>Doppler velocity folding</title>
      <p id="d1e940">The Doppler velocity data from both KaZR and C-SAPR2 contain velocity folding. Doppler radars can only unambiguously resolve radial velocities of plus or minus a maximum value known as the Nyquist velocity (<inline-formula><mml:math id="M53" display="inline"><mml:mrow><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>). <inline-formula><mml:math id="M54" display="inline"><mml:mrow><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is a function of the frequency and range resolution of the radar. Velocities that exceed the Nyquist velocity are mapped periodically back into this range, so that velocities slightly larger than <inline-formula><mml:math id="M55" display="inline"><mml:mrow><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> are mapped to values slightly above (smaller magnitude) <inline-formula><mml:math id="M56" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>. 
The velocity data used here have <inline-formula><mml:math id="M57" display="inline"><mml:mrow><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">8</mml:mn><mml:mspace linebreak="nobreak" width="0.125em"/><mml:msup><mml:mtext>ms</mml:mtext><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M58" display="inline"><mml:mrow><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">16.5</mml:mn><mml:mspace linebreak="nobreak" width="0.125em"/><mml:msup><mml:mtext>ms</mml:mtext><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> for KaZR and C-SAPR2 respectively. Despite the smaller Nyquist velocity, the KaZR data are generally less susceptible to aliasing because vertical velocities in the atmosphere are typically smaller (in<?pagebreak page7734?> magnitude) than horizontal velocities. In practice, instances of velocity folding manifest as large jumps in velocity near the scale of <inline-formula><mml:math id="M59" display="inline"><mml:mrow><mml:mo>±</mml:mo><mml:mn mathvariant="normal">2</mml:mn><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>, and because real-world meteorological conditions are extremely unlikely to cause jumps in velocity of this magnitude over such a small spatial scale, velocity folding is often easily detectable in contiguous cloud and precipitation regions. Correcting folding is much more difficult than simply detecting it however. Many automated unfolding algorithms exist, and this is still an active area of research.</p>
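This wrap-around behavior can be expressed compactly as a modular mapping into the Nyquist interval; a one-line sketch (illustrative, not part of the authors' processing code):

```python
def fold(v, v_max):
    """Map a true radial velocity into the unambiguous Nyquist interval
    [-v_max, v_max), as the radar would observe it."""
    return (v + v_max) % (2.0 * v_max) - v_max
```

For example, with `v_max = 8` m s<sup>-1</sup>, a true velocity of 9 m s<sup>-1</sup> is observed as -7 m s<sup>-1</sup>, i.e., just above the negative Nyquist velocity.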
      <p id="d1e1053">We initially attempted to train the inpainting CNN on the velocity data without applying any unfolding scheme, but it struggled to adequately inpaint regions where folding had occurred. As noted above, the large jumps at the boundaries of aliased regions are extreme and nonphysical, and while the CNN could reproduce large velocity-folded regions in its outputs, it tended to smooth the change in velocity at the region boundaries over several pixels leading to a smoother transition and thus a result that most velocity unfolding algorithms that rely on detecting these jumps would fail on. We ultimately chose to implement a 2-D flood-filling-based de-aliasing algorithm that is usable for both the KaZR and C-SAPR2 data. The unfolding algorithm takes the velocity data, the Nyquist velocity, and a mask indicating clear pixels. For C-SAPR2, one sweep is processed at a time, and for KaZR, each netCDF file retrieved from ARM is processed individually (typically about 20 min of data each, though this is variable). The algorithm first breaks the velocity data into a set of contiguous regions that do not contain aliasing. This is done by first detecting the edges of regions with non-aliased velocity data by flagging all pixels that have velocity data where there is either a jump in velocity between that pixel and a neighboring pixel that exceeds <inline-formula><mml:math id="M60" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.1</mml:mn><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> or there is a neighboring pixel that does not have velocity data. These region edge pixels are then used as seed points for a flood-fill algorithm which is applied iteratively until no seed points remain (every time a region is filled, all seed points contained in that region get removed from the list). 
The regions are then processed from largest to smallest: if a region has no neighbors, its velocity remains unaltered, and it is removed from the list of regions; if it does have neighbors, the largest neighboring region is identified, and the mean change in velocity across the border between the two is used to correct the smaller region's velocity by adding or subtracting the appropriate multiple of <inline-formula><mml:math id="M61" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:msub><mml:mi>V</mml:mi><mml:mi mathvariant="normal">max</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>. The smaller of the two regions is then integrated into the larger. This process continues until the list of contiguous velocity regions is emptied. This approach does have some failure modes, typically associated with contiguous regions of aliased velocity that do not have any neighboring regions. We note that many other de-aliasing schemes exist (<xref ref-type="bibr" rid="bib1.bibx25" id="altparen.47"/> provide a KaZR-specific scheme, for instance) and may be worth investigating in future work, but this approach was sufficient for the CNNs trained here.</p>
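As a much-simplified illustration of the correction step only, a 1-D analogue that splits a ray at super-Nyquist jumps and shifts each segment by a multiple of 2<i>V</i><sub>max</sub> relative to the segment before it might look like the following (illustrative only; the actual scheme is 2-D, flood-fill based, and processes regions largest-first):

```python
import numpy as np

def unfold_1d(vel, v_max, thresh=1.1):
    """1-D sketch of region-based unfolding: split the ray wherever the
    neighbor-to-neighbor jump exceeds thresh * v_max, then shift each
    segment by the multiple of 2*v_max that removes the jump against the
    already-corrected segment to its left."""
    vel = np.array(vel, dtype=float)
    # indices where a new (possibly aliased) segment begins
    breaks = np.where(np.abs(np.diff(vel)) > thresh * v_max)[0] + 1
    segments = np.split(np.arange(vel.size), breaks)
    for prev, cur in zip(segments[:-1], segments[1:]):
        jump = vel[cur[0]] - vel[prev[-1]]
        k = np.round(jump / (2.0 * v_max))   # number of Nyquist wraps
        vel[cur] -= k * 2.0 * v_max
    return vel
```

For example, with `v_max = 8`, the folded ray `[6, 7, -7, -6]` unfolds to `[6, 7, 9, 10]`.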
</sec>
<sec id="Ch1.S3.SS4">
  <label>3.4</label><title>Training</title>
      <p id="d1e1094">The neural networks were trained using two different loss functions. In the first case, they were trained using a pixel-level mean absolute error (MAE) loss, also known as <inline-formula><mml:math id="M62" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> loss. This can be written as
            <disp-formula id="Ch1.E2" content-type="numbered"><label>2</label><mml:math id="M63" display="block"><mml:mrow><mml:mi mathvariant="script">L</mml:mi><mml:mo>=</mml:mo><mml:msub><mml:mi mathvariant="double-struck">E</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>[</mml:mo><mml:mo>|</mml:mo><mml:mi>y</mml:mi><mml:mo>-</mml:mo><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo><mml:mo>|</mml:mo><mml:mo>]</mml:mo><mml:mo>.</mml:mo></mml:mrow></mml:math></disp-formula></p>
      <p id="d1e1150">Here, <inline-formula><mml:math id="M64" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> is the true pixel value, and <inline-formula><mml:math id="M65" display="inline"><mml:mrow><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the CNN output pixel value. We chose to limit the pixel-level loss so that it is only computed on pixels that are part of the infilled region (and the buffer pixels surrounding it; see Sect. <xref ref-type="sec" rid="Ch1.S3.SS2"/>) because the CNN is constructed to exactly reproduce the pixel values in the region with good data and because we used different sized missing data regions during training and wanted them to have equal weighting when computing gradients and batch loss. The CNNs trained with MAE loss tend to produce more conservative results within the inpainted regions than CGANs (fewer details and extreme pixel values), but they are still particularly good at localizing and preserving sharp edges and larger structures and can outperform conventional inpainting and interpolation methods. In initial experiments, the mean squared error or <inline-formula><mml:math id="M66" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> loss was used, but this led to extremely smoothed features in the inpainted region. We also trained CNNs as conditional generative adversarial networks (CGANs), using a combination of <inline-formula><mml:math id="M67" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> and adversarial loss:

                <disp-formula specific-use="gather" content-type="numbered"><mml:math id="M68" display="block"><mml:mtable displaystyle="true"><mml:mlabeledtr id="Ch1.E3"><mml:mtd><mml:mtext>3</mml:mtext></mml:mtd><mml:mtd><mml:mrow><mml:mstyle class="stylechange" displaystyle="true"/><?xmltex \hack{\hbox\bgroup\fontsize{9.9}{9.9}\selectfont$\displaystyle}?><mml:msub><mml:mi mathvariant="script">L</mml:mi><mml:mi mathvariant="normal">G</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mi mathvariant="italic">λ</mml:mi><mml:msub><mml:mi mathvariant="double-struck">E</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi></mml:mrow></mml:msub><mml:mo>[</mml:mo><mml:mo>|</mml:mo><mml:mi>y</mml:mi><mml:mo>-</mml:mo><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo><mml:mo>|</mml:mo><mml:mo>]</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mi mathvariant="double-struck">E</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi></mml:mrow></mml:msub><mml:mo>[</mml:mo><mml:mi>log⁡</mml:mi><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo><mml:mo>)</mml:mo><mml:mo>)</mml:mo><mml:mo>]</mml:mo><?xmltex \hack{$\egroup}?></mml:mrow></mml:mtd></mml:mlabeledtr><mml:mlabeledtr id="Ch1.E4"><mml:mtd><mml:mtext>4</mml:mtext></mml:mtd><mml:mtd><mml:mrow><mml:mstyle displaystyle="true" class="stylechange"/><?xmltex \hack{\hbox\bgroup\fontsize{9.3}{9.3}\selectfont$\displaystyle}?><mml:msub><mml:mi mathvariant="script">L</mml:mi><mml:mi mathvariant="normal">D</mml:mi></mml:msub><mml:mo>=</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mi 
mathvariant="double-struck">E</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi></mml:mrow></mml:msub><mml:mo>[</mml:mo><mml:mi>log⁡</mml:mi><mml:mo>(</mml:mo><mml:mi>D</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>y</mml:mi><mml:mo>)</mml:mo><mml:mo>)</mml:mo><mml:mo>]</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mi mathvariant="double-struck">E</mml:mi><mml:mrow><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi></mml:mrow></mml:msub><mml:mo>[</mml:mo><mml:mi>log⁡</mml:mi><mml:mo>(</mml:mo><mml:mn mathvariant="normal">1</mml:mn><mml:mo>-</mml:mo><mml:mi>D</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo><mml:mo>)</mml:mo><mml:mo>)</mml:mo><mml:mo>]</mml:mo><?xmltex \hack{$\egroup}?><mml:mo>,</mml:mo></mml:mrow></mml:mtd></mml:mlabeledtr></mml:mtable></mml:math></disp-formula>

            where <inline-formula><mml:math id="M69" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="script">L</mml:mi><mml:mi mathvariant="normal">G</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M70" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="script">L</mml:mi><mml:mi mathvariant="normal">D</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> are the generator and discriminator losses respectively, <inline-formula><mml:math id="M71" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula> is the input radar data and mask, <inline-formula><mml:math id="M72" display="inline"><mml:mi>z</mml:mi></mml:math></inline-formula> is the random seed input, <inline-formula><mml:math id="M73" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> is the ground-truth data, <inline-formula><mml:math id="M74" display="inline"><mml:mrow><mml:mi>G</mml:mi><mml:mo>(</mml:mo><mml:mi>x</mml:mi><mml:mo>,</mml:mo><mml:mi>z</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:math></inline-formula> is the generator output, <inline-formula><mml:math id="M75" display="inline"><mml:mi>D</mml:mi></mml:math></inline-formula> is the discriminator classification, and <inline-formula><mml:math id="M76" display="inline"><mml:mi mathvariant="italic">λ</mml:mi></mml:math></inline-formula> is a constant used to weight the MAE and adversarial components of the generator loss. Refer to <xref ref-type="bibr" rid="bib1.bibx12" id="text.48"/> for a description of the adversarial loss and <xref ref-type="bibr" rid="bib1.bibx21" id="text.49"/> for more discussion of the conditional adversarial loss function in Eqs. (<xref ref-type="disp-formula" rid="Ch1.E3"/>) and (<xref ref-type="disp-formula" rid="Ch1.E4"/>).</p>
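Equations (2)-(4) translate directly into code; a NumPy sketch (in practice these would be framework tensor operations; the restriction of the pixel-level loss to the inpainted region and its buffer, described in the text, is shown in `masked_mae`, and the function names are illustrative):

```python
import numpy as np

def masked_mae(y, g_out, loss_mask):
    """Eq. (2) restricted to the inpainted region and its buffer,
    normalized by mask area so different-sized missing-data regions
    weight batch loss and gradients equally."""
    return np.sum(loss_mask * np.abs(y - g_out)) / np.sum(loss_mask)

def generator_loss(y, g_out, d_fake, lam, eps=1e-7):
    """Eq. (3): weighted l1 term plus the adversarial term
    -E[log D(x, G(x, z))]; d_fake holds discriminator outputs in (0, 1)."""
    return lam * np.mean(np.abs(y - g_out)) - np.mean(np.log(d_fake + eps))

def discriminator_loss(d_real, d_fake, eps=1e-7):
    """Eq. (4): standard log loss on real and generated samples."""
    return -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
```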
      <p id="d1e1498">An Adam optimizer was used for training with <inline-formula><mml:math id="M77" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="italic">β</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">0.9</mml:mn></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M78" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="italic">β</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">0.999</mml:mn></mml:mrow></mml:math></inline-formula>, and <inline-formula><mml:math id="M79" display="inline"><mml:mrow><mml:mi mathvariant="italic">ϵ</mml:mi><mml:mo>=</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">7</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> for the <inline-formula><mml:math id="M80" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> case and <inline-formula><mml:math id="M81" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="italic">β</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub><mml:mo>=</mml:mo><mml:mn mathvariant="normal">0.5</mml:mn></mml:mrow></mml:math></inline-formula> for the CGAN case. Other training details depend on the scenario and are summarized in Table <xref ref-type="table" rid="Ch1.T1"/>. Table <xref ref-type="table" rid="Ch1.T1"/> shows specific information about the CNN hyper-parameters, the batch size and number of batches used during training, and when the learning rate was decreased during training for each of the inpainting scenarios. 
For the <inline-formula><mml:math id="M82" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> cases, an initial learning rate of <inline-formula><mml:math id="M83" display="inline"><mml:mrow><mml:mn mathvariant="normal">5</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">4</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> was used and was reduced by a factor of 10 twice during training after a set number of batches. For the CGAN cases, an initial learning rate of <inline-formula><mml:math id="M84" display="inline"><mml:mrow><mml:mn mathvariant="normal">1</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">4</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> was used for the generator network, and a learning rate of <inline-formula><mml:math id="M85" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.5</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">4</mml:mn></mml:mrow></mml:msup></mml:mrow></mml:math></inline-formula> was used for the discriminator network. These were both manually reduced by a factor of 10 during training based on monitoring<?pagebreak page7735?> the adversarial loss and sample outputs for several randomly selected cases from the training set.</p>
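For the l1 cases, this schedule amounts to a piecewise-constant learning rate; a sketch with defaults taken from the l1 KaZR rows of Table 1 (the CGAN reductions were performed manually and are not modeled here):

```python
def lr_schedule(batch, base_lr=5e-4, drop_batches=(4e5, 4.75e5)):
    """Piecewise-constant learning rate: start at base_lr and divide by
    10 after each batch count in drop_batches."""
    lr = base_lr
    for b in drop_batches:
        if batch >= b:
            lr /= 10.0
    return lr
```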

<?xmltex \floatpos{t}?><table-wrap id="Ch1.T1" specific-use="star"><?xmltex \currentcnt{1}?><label>Table 1</label><caption><p id="d1e1649">Table of neural network hyper-parameters and training parameters. <inline-formula><mml:math id="M86" display="inline"><mml:mrow><mml:mi>N</mml:mi><mml:mo>×</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M87" display="inline"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M88" display="inline"><mml:mi>g</mml:mi></mml:math></inline-formula>, and “Depth” are hyper-parameters that define the size and shape of the CNN and are shown in Fig. <xref ref-type="fig" rid="Ch1.F1"/>. (“Depth” refers to the number of down-sampling operations or the maximum value of <inline-formula><mml:math id="M89" display="inline"><mml:mi mathvariant="normal">ℓ</mml:mi></mml:math></inline-formula> from Fig. <xref ref-type="fig" rid="Ch1.F1"/>.) “Batch size”, “Training batches”, and “LR reduction at batches” refer to the number of samples per mini-batch, the total number of mini-batches/weight updates during training, and the batch number after which a learning rate reduction was performed, respectively.</p></caption><oasis:table frame="topbot"><oasis:tgroup cols="9">
     <oasis:colspec colnum="1" colname="col1" align="left"/>
     <oasis:colspec colnum="2" colname="col2" align="left"/>
     <oasis:colspec colnum="3" colname="col3" align="right"/>
     <oasis:colspec colnum="4" colname="col4" align="right"/>
     <oasis:colspec colnum="5" colname="col5" align="right"/>
     <oasis:colspec colnum="6" colname="col6" align="right"/>
     <oasis:colspec colnum="7" colname="col7" align="right"/>
     <oasis:colspec colnum="8" colname="col8" align="right"/>
     <oasis:colspec colnum="9" colname="col9" align="right"/>
     <oasis:thead>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Case</oasis:entry>
         <oasis:entry colname="col2">Loss</oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M90" display="inline"><mml:mrow><mml:mi>N</mml:mi><mml:mo>×</mml:mo><mml:mi>M</mml:mi></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4"><inline-formula><mml:math id="M91" display="inline"><mml:mrow><mml:msub><mml:mi>C</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col5"><inline-formula><mml:math id="M92" display="inline"><mml:mi>g</mml:mi></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col6">Depth</oasis:entry>
         <oasis:entry colname="col7">Batch size</oasis:entry>
         <oasis:entry colname="col8">Training batches</oasis:entry>
         <oasis:entry colname="col9">LR reduction at batches</oasis:entry>
       </oasis:row>
     </oasis:thead>
     <oasis:tbody>
       <oasis:row>
         <oasis:entry colname="col1">KaZR outage</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M93" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M94" display="inline"><mml:mrow><mml:mn mathvariant="normal">256</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">256</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">8</oasis:entry>
         <oasis:entry colname="col5">2</oasis:entry>
         <oasis:entry colname="col6">5</oasis:entry>
         <oasis:entry colname="col7">8</oasis:entry>
         <oasis:entry colname="col8"><inline-formula><mml:math id="M95" display="inline"><mml:mrow><mml:mn mathvariant="normal">5</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col9"><inline-formula><mml:math id="M96" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M97" display="inline"><mml:mrow><mml:mn mathvariant="normal">4.75</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">KaZR outage</oasis:entry>
         <oasis:entry colname="col2">Eq. (<xref ref-type="disp-formula" rid="Ch1.E3"/>)</oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M98" display="inline"><mml:mrow><mml:mn mathvariant="normal">256</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">256</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">14</oasis:entry>
         <oasis:entry colname="col5">1.75</oasis:entry>
         <oasis:entry colname="col6">5</oasis:entry>
         <oasis:entry colname="col7">16</oasis:entry>
         <oasis:entry colname="col8"><inline-formula><mml:math id="M99" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.6</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col9"><inline-formula><mml:math id="M100" display="inline"><mml:mrow><mml:mn mathvariant="normal">0.3</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M101" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.25</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">KaZR blind zone</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M102" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M103" display="inline"><mml:mrow><mml:mn mathvariant="normal">256</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">256</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">8</oasis:entry>
         <oasis:entry colname="col5">2</oasis:entry>
         <oasis:entry colname="col6">5</oasis:entry>
         <oasis:entry colname="col7">8</oasis:entry>
         <oasis:entry colname="col8"><inline-formula><mml:math id="M104" display="inline"><mml:mrow><mml:mn mathvariant="normal">5</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col9"><inline-formula><mml:math id="M105" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M106" display="inline"><mml:mrow><mml:mn mathvariant="normal">4.75</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">KaZR blind zone</oasis:entry>
         <oasis:entry colname="col2">Eq. (<xref ref-type="disp-formula" rid="Ch1.E3"/>)</oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M107" display="inline"><mml:mrow><mml:mn mathvariant="normal">256</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">256</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">14</oasis:entry>
         <oasis:entry colname="col5">1.75</oasis:entry>
         <oasis:entry colname="col6">5</oasis:entry>
         <oasis:entry colname="col7">16</oasis:entry>
         <oasis:entry colname="col8"><inline-formula><mml:math id="M108" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.3</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col9"><inline-formula><mml:math id="M109" display="inline"><mml:mrow><mml:mn mathvariant="normal">0.7</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M110" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.1</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">C-SAPR2 blockage</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M111" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M112" display="inline"><mml:mrow><mml:mn mathvariant="normal">1024</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">128</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">10</oasis:entry>
         <oasis:entry colname="col5">2</oasis:entry>
         <oasis:entry colname="col6">6</oasis:entry>
         <oasis:entry colname="col7">8</oasis:entry>
         <oasis:entry colname="col8"><inline-formula><mml:math id="M113" display="inline"><mml:mrow><mml:mn mathvariant="normal">5</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col9"><inline-formula><mml:math id="M114" display="inline"><mml:mrow><mml:mn mathvariant="normal">4</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula>, <inline-formula><mml:math id="M115" display="inline"><mml:mrow><mml:mn mathvariant="normal">4.75</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">5</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">C-SAPR2 blockage</oasis:entry>
         <oasis:entry colname="col2">Eq. (<xref ref-type="disp-formula" rid="Ch1.E3"/>)</oasis:entry>
         <oasis:entry colname="col3"><inline-formula><mml:math id="M116" display="inline"><mml:mrow><mml:mn mathvariant="normal">1024</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">128</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">12</oasis:entry>
         <oasis:entry colname="col5">1.75</oasis:entry>
         <oasis:entry colname="col6">6</oasis:entry>
         <oasis:entry colname="col7">16</oasis:entry>
         <oasis:entry colname="col8"><inline-formula><mml:math id="M117" display="inline"><mml:mrow><mml:mn mathvariant="normal">6.5</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">4</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col9"><inline-formula><mml:math id="M118" display="inline"><mml:mrow><mml:mn mathvariant="normal">1.9</mml:mn><mml:mo>×</mml:mo><mml:msup><mml:mn mathvariant="normal">10</mml:mn><mml:mn mathvariant="normal">4</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
     </oasis:tbody>
   </oasis:tgroup></oasis:table></table-wrap>

      <p id="d1e2291">“Data augmentation” schemes apply random transformations to training samples and are often used when training deep CNNs: they increase the diversity of the training data and can improve skill and reduce overfitting. Many of the data augmentations commonly used for images, however, cannot be applied to radar data without producing physically impossible samples. Here, we have carefully selected several data augmentations that yield physically plausible samples. For KaZR, training samples were drawn from the training set at random start times (as opposed to chopping the dataset into discrete <inline-formula><mml:math id="M119" display="inline"><mml:mrow><mml:mn mathvariant="normal">256</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">256</mml:mn></mml:mrow></mml:math></inline-formula> samples prior to training), and the samples were randomly flipped with respect to time. For cloud features embedded in the large-scale flow, flipping with respect to time produces a physically plausible sample that approximates the same cloud feature embedded in large-scale flow in the opposite direction. For weather features whose shape is heavily determined by the large-scale flow (e.g. fall streaks), this results in an unlikely but still very realistic-looking sample. For C-SAPR2, random rotations with respect to azimuth and random flips with respect to azimuth were used during training. These transforms approximate different start azimuths or a different coordinate convention for the sweeps and do not alter the physical structure of the weather.</p>
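For concreteness, the two augmentation strategies can be sketched in a few lines of NumPy. This is a minimal sketch, not the training code used in the paper; the array layouts (range gates by time for KaZR, azimuth by range for C-SAPR2) and the function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_kazr(archive, width=256):
    """Draw a (range gate x time) patch at a random start time and
    randomly reverse it in time (hypothetical array layout)."""
    t0 = rng.integers(0, archive.shape[1] - width)
    patch = archive[:, t0:t0 + width].copy()
    if rng.random() < 0.5:
        patch = patch[:, ::-1]  # approximates large-scale flow in the opposite direction
    return patch

def augment_csapr2(sweep):
    """Randomly rotate and flip an (azimuth x range) sweep in azimuth;
    this leaves the physical structure of the weather unchanged."""
    sweep = np.roll(sweep, rng.integers(sweep.shape[0]), axis=0)  # new start azimuth
    if rng.random() < 0.5:
        sweep = sweep[::-1, :]  # opposite azimuth coordinate convention
    return sweep
```

Both transforms only reindex the data, so every augmented sample contains exactly the observed values.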
</sec>
</sec>
<sec id="Ch1.S4">
  <label>4</label><title>Results</title>
      <?pagebreak page7736?><p id="d1e2315">In this section, the results of the CNN-based inpainting are compared to several common inpainting techniques. We first introduce these benchmark schemes and then discuss the error metrics used. Finally, results for each of the three inpainting scenarios are discussed separately. For brevity, only one sample case is shown for each of the inpainting scenarios, but many more have been made available online <xref ref-type="bibr" rid="bib1.bibx11" id="paren.50"/>. The examples shown in the paper were chosen blindly but not randomly: we picked cases based on the ground truth, but without consulting the CNN output, to ensure that the examples included a sufficient amount of cloud and precipitation to be of interest.
<?xmltex \hack{\newpage}?></p>
<sec id="Ch1.S4.SS1">
  <label>4.1</label><title>Benchmark inpainting schemes</title>
      <p id="d1e2329">CNN output was compared to several more conventional inpainting schemes of varying complexity. Examples of each of these schemes applied to the KaZR inpainting scenarios are shown in Fig. <xref ref-type="fig" rid="Ch1.F2"/>. The same preprocessing and post-processing used for the inpainting CNNs, described in Sect. <xref ref-type="sec" rid="Ch1.S3.SS2"/>, is used for the benchmark schemes. We found that, in practice, this also helped the benchmark schemes generate sharper borders near cloud edges. Because the KaZR low-level blind zone scenario has only a single boundary carrying information that can be used for inpainting (the upper boundary), a different set of benchmark schemes, applicable to this type of scenario, was used for this case. The first three inpainting schemes below were used for both the KaZR outage and C-SAPR2 beam blockage scenarios, which have two to three information-bearing boundaries, while the last three were used for low-level inpainting.</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F2" specific-use="star"><?xmltex \currentcnt{2}?><?xmltex \def\figurename{Figure}?><label>Figure 2</label><caption><p id="d1e2338">Example outputs demonstrating each of the benchmark inpainting schemes applied to KaZR reflectivity data. Panels <bold>(b)</bold>–<bold>(d)</bold> demonstrate the schemes used for both the KaZR outage (shown) and C-SAPR2 beam blockage (not shown) scenarios. Panels <bold>(f)</bold>–<bold>(h)</bold> show the schemes used for the low-level blind zone scenario. Ground truth is shown in panels <bold>(a)</bold> and <bold>(e)</bold> (left column).</p></caption>
          <?xmltex \igopts{width=497.923228pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f02.png"/>

        </fig>

      <p id="d1e2366"><italic>Linear interpolation</italic>. This is simple 1-dimensional linear interpolation between opposite boundaries of the missing data region. For the KaZR data, it is done with respect to time, and for the C-SAPR2 data, it is done with respect to azimuth angle. This approach typically performs well in terms of MAE but produces unrealistic results. It also does not account for variability with respect to height for KaZR or range for C-SAPR2.</p>
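A minimal NumPy sketch of this baseline (rows are range gates; columns are time for KaZR or azimuth for C-SAPR2). The function name is hypothetical, and it assumes the missing region does not touch the first or last column.

```python
import numpy as np

def inpaint_linear(field, mask):
    """1-D linear interpolation across the missing-data region, applied
    independently to each row. `mask` is True where data are missing."""
    out = field.astype(float).copy()
    cols = np.arange(field.shape[1])
    for i in range(field.shape[0]):
        miss = mask[i]
        if miss.any():
            # interpolate missing columns from the surrounding valid columns
            out[i, miss] = np.interp(cols[miss], cols[~miss], out[i, ~miss])
    return out
```

Because each row is treated independently, the result varies smoothly in time (or azimuth) but inherits no 2-D structure.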
      <p id="d1e2372"><italic>Laplace</italic>. This scheme involves numerically solving the 2-dimensional Laplace equation on the interior of the missing data region. Firstly, we use linear interpolation to define the values on any missing boundaries (in the KaZR outage case, we interpolate the values for the bottom and top range gates for instance), then
            <disp-formula id="Ch1.E5" content-type="numbered"><label>5</label><mml:math id="M120" display="block"><mml:mrow><mml:mn mathvariant="normal">0</mml:mn><mml:mo>=</mml:mo><mml:mi mathvariant="italic">α</mml:mi><mml:msup><mml:mi mathvariant="normal">∇</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msup><mml:mi>z</mml:mi></mml:mrow></mml:math></disp-formula>
          is solved numerically on the interior of the inpainting region. Here, <inline-formula><mml:math id="M121" display="inline"><mml:mi>z</mml:mi></mml:math></inline-formula> is the field being inpainted, <inline-formula><mml:math id="M122" display="inline"><mml:msup><mml:mi mathvariant="normal">∇</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msup></mml:math></inline-formula> is the Laplacian with respect to time–height for KaZR or azimuth–range for C-SAPR2, and <inline-formula><mml:math id="M123" display="inline"><mml:mrow><mml:mi mathvariant="italic">α</mml:mi><mml:mo>=</mml:mo><mml:mstyle displaystyle="false"><mml:mfrac style="text"><mml:mn mathvariant="normal">1</mml:mn><mml:mn mathvariant="normal">50</mml:mn></mml:mfrac></mml:mstyle></mml:mrow></mml:math></inline-formula> is the diffusivity. The solution is found to a tolerance of 0.0001 using an explicit forward differencing scheme with a nine-point stencil, which was sufficient for this application. This method produces particularly smoothed results in the missing data regions, though it is better than the other methods at accurately reproducing large 2-D structures (linear interpolation, for instance, only considers 1-D variability).</p>
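The explicit forward-differencing iteration can be sketched as follows. This is a sketch under stated assumptions: a standard nine-point Laplacian stencil (the paper does not specify the weights), boundary values held fixed, and a masked region that does not touch the array edges (since `np.roll` wraps around).

```python
import numpy as np

def inpaint_laplace(field, mask, alpha=1/50, tol=1e-4, max_iter=10000):
    """Iterate z <- z + alpha * lap(z) inside `mask` until the largest
    update falls below `tol`, holding data outside the mask fixed."""
    z = field.astype(float).copy()
    m = mask.astype(bool)
    for _ in range(max_iter):
        # nine-point Laplacian: weighted sum of the 8 neighbours minus centre
        lap = (0.5 * (np.roll(z, 1, 0) + np.roll(z, -1, 0)
                      + np.roll(z, 1, 1) + np.roll(z, -1, 1))
               + 0.25 * (np.roll(z, (1, 1), (0, 1)) + np.roll(z, (1, -1), (0, 1))
                         + np.roll(z, (-1, 1), (0, 1)) + np.roll(z, (-1, -1), (0, 1)))
               - 3.0 * z)
        dz = alpha * lap
        z[m] += dz[m]
        if np.abs(dz[m]).max() < tol:
            break
    return z
```

At convergence the interior satisfies the discrete Laplace equation, so the fill is maximally smooth given the boundary values.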
      <p id="d1e2426"><italic>Telea</italic>. The Telea inpainting scheme <xref ref-type="bibr" rid="bib1.bibx37" id="paren.51"/> is a fast inpainting algorithm that marches inward from the boundary of the missing data region, combining concepts from isophote-based inpainting and 2-D averaging-based diffusion. We used the implementation in the Python OpenCV (cv2) package with a radius of 16 pixels. This approach also produces fairly smoothed results and cannot produce small-scale variability such as turbulence in the missing data regions, but it does a better job than the Laplace method of extending larger sharp features such as cloud edges.</p>
      <p id="d1e2434"><italic>Repeat</italic>. This is the simplest inpainting scheme used here: the data at the upper boundary of the low-level blind zone are repeated down to the lowest range gate.</p>
      <p id="d1e2439"><italic>Marching average</italic>. Each horizontal line in the missing data region is inpainted from the uppermost level downwards; the assigned values are the average of the data within 4 pixels above. This method is conceptually similar to downward repetition of the boundary data but generates a smoothed result.</p>
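The marching average can be sketched as follows. This is a sketch under one interpretive assumption: the "average within 4 pixels above" is taken column-wise, and previously inpainted rows are included as the march proceeds downward.

```python
import numpy as np

def inpaint_marching_average(field, n_missing, window=4):
    """Fill the lowest `n_missing` rows (rows ordered top of atmosphere
    to ground) from the top down; each filled row is the column-wise
    mean of up to `window` rows above it, including rows already filled.
    The "repeat" scheme corresponds to copying the single boundary row
    downward instead of averaging."""
    out = field.astype(float).copy()
    top = out.shape[0] - n_missing  # first missing row
    for i in range(top, out.shape[0]):
        out[i] = out[max(0, i - window):i].mean(axis=0)
    return out
```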
      <p id="d1e2444"><italic>Efros and Leung</italic>. <xref ref-type="bibr" rid="bib1.bibx7" id="text.52"/> present a texture synthesis model for image extrapolation that is well suited for inpainting missing data regions on the edges of images. Here, the process also involves marching from the highest missing data level downwards and filling in the missing data region one level at a time. First, a dictionary of exemplars is built by sampling <inline-formula><mml:math id="M124" display="inline"><mml:mrow><mml:mn mathvariant="normal">5</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">5</mml:mn></mml:mrow></mml:math></inline-formula>-pixel patches in the area up to 64 pixels (192 m) above the missing data region with a stride of 2 pixels. Each pixel is filled by computing the pixel-wise mean squared error between the surrounding <inline-formula><mml:math id="M125" display="inline"><mml:mrow><mml:mn mathvariant="normal">5</mml:mn><mml:mo>×</mml:mo><mml:mn mathvariant="normal">5</mml:mn></mml:mrow></mml:math></inline-formula> patch (not considering pixels yet to be inpainted) and the dictionary and filling the missing pixel with the value of the center pixel from the closest matching patch. This scheme is designed to generate realistic textures in the inpainted region.</p>
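The exemplar-matching procedure above can be sketched compactly. This is a simplified sketch, not the implementation used in the paper: it pads the array edges by replication so that border columns can be matched, and the function name is hypothetical.

```python
import numpy as np

def efros_leung_fill(field, n_missing, patch=5, lookup=64, stride=2):
    """March downward through the blind zone, filling each pixel from the
    centre of the best-matching exemplar (pixel-wise squared error over
    already-known pixels only)."""
    h, w = field.shape
    r = patch // 2
    top = h - n_missing  # first missing row
    # dictionary of exemplars from up to `lookup` rows above the missing region
    dic = np.array([field[i:i + patch, j:j + patch]
                    for i in range(max(0, top - lookup), top - patch + 1, stride)
                    for j in range(0, w - patch + 1, stride)])
    P = np.pad(field.astype(float), r, mode='edge')
    K = np.zeros_like(P, dtype=bool)
    K[:top + r, :] = True  # everything above the blind zone is known
    for i in range(top, h):
        for j in range(w):
            nb = P[i:i + patch, j:j + patch]  # neighbourhood of pixel (i, j)
            kb = K[i:i + patch, j:j + patch]
            # squared error against each exemplar, over known pixels only
            err = (((dic - nb) ** 2) * kb).reshape(len(dic), -1).sum(1)
            P[i + r, j + r] = dic[err.argmin(), r, r]
        K[i + r, :] = True  # the finished row becomes usable context
    return P[r:-r, r:-r]
```

Copying exemplar centres rather than averaging them is what lets the scheme reproduce texture instead of smoothing it away.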
</sec>
<sec id="Ch1.S4.SS2">
  <label>4.2</label><title>Error metrics</title>
      <p id="d1e2484">Three error metrics are used to evaluate the inpainting results. They have been selected to provide a comprehensive evaluation of the quality of the outputs that considers the quality of the spatial variability and distribution of values generated by the inpainting schemes in addition to pixel-level error.</p>
      <p id="d1e2487"><italic>Mean absolute error</italic>. The mean absolute error (MAE) is the primary error metric used. Panels (a)–(c) in Figs. <xref ref-type="fig" rid="Ch1.F4"/>, <xref ref-type="fig" rid="Ch1.F6"/>, and <xref ref-type="fig" rid="Ch1.F8"/> show MAE for the three fields analyzed. MAE was chosen because it is simple, its values are dimensional and easy to interpret, and it can be used directly in the loss function of the CNNs. Past CNN-based image inpainting work <xref ref-type="bibr" rid="bib1.bibx21" id="paren.53"/> has noted that MAE is preferable to other pixel-level losses because it better preserves sharp edges and small-scale variability. The MAE results for the test set are dimensional; absolute error for velocities, for instance, is shown in units of meters per second. It is important to note, however, that because these errors are averaged over the missing data region, they depend both on the amount of cloud present in each sample and on the inpainting scheme's ability to localize cloud edges (including many samples with little to no cloud in the test set would dramatically reduce MAE, because inpainting no-data regions is trivial for all of the schemes). The reported MAEs are therefore mostly useful as a relative estimate of the skill of the different inpainting schemes rather than as an absolute estimate of, for instance, the difference between output reflectivity and ground truth for a particular sample. In practice, MAE is particularly good for estimating how accurately an algorithm reproduces the large-scale features in<?pagebreak page7737?> the missing data regions, but it is not a good estimator of the plausibility of the output. 
Indeed, while the <inline-formula><mml:math id="M126" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN is optimized on MAE and outperforms all other schemes under this metric, it produces smooth-looking outputs, and so two other error metrics are used to evaluate the accuracy of the small-scale variability and the distribution of the inpainted data.</p>
      <p id="d1e2512"><italic>Earth mover's distance</italic>. The earth mover's distance (EMD), also known as the Wasserstein metric, measures the similarity between probability density functions. Here, it is used to estimate the similarity of the distribution of values in the inpainting scheme outputs and the ground truth. The EMD imagines probability density functions (PDFs) as piles of “dirt” and computes the “work” necessary to transform one distribution into another: the product of the amount moved and the distance. For 1-dimensional distributions, this is simply the integral of the absolute difference between the PDFs. There are some important notes about our usage here: firstly, no-data regions are included in the computation. For reflectivity and spectrum width, they are assigned the minimum value for the field, and for velocity they are assigned 0 ms<inline-formula><mml:math id="M127" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. This was necessary because the inpainting schemes do not necessarily produce cloud and precipitation features in the inpainted region of the exact same size (number of pixels) as the ground truth, but the area under the two PDFs that the EMD compares needs to be equal. Secondly, the EMD values are dimensional but do not convey physical meaning unless compared to other EMD scores, so here we normalize the EMD and present it as the percentage of the worst possible EMD (lower values are better). Formally, the EMD is computed as
            <disp-formula id="Ch1.E6" content-type="numbered"><label>6</label><mml:math id="M128" display="block"><mml:mrow><?xmltex \hack{\hbox\bgroup\fontsize{9.8}{9.8}\selectfont$\displaystyle}?><mml:mi mathvariant="normal">EMD</mml:mi><mml:mo>=</mml:mo><mml:mfenced open="(" close=")"><mml:mstyle displaystyle="true"><mml:mfrac style="display"><mml:mn mathvariant="normal">100</mml:mn><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mtext>max</mml:mtext></mml:msub><mml:mo>-</mml:mo><mml:msub><mml:mi>z</mml:mi><mml:mtext>min</mml:mtext></mml:msub></mml:mrow></mml:mfrac></mml:mstyle></mml:mfenced><mml:munderover><mml:mo movablelimits="false">∫</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mtext>min</mml:mtext></mml:msub></mml:mrow><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mtext>max</mml:mtext></mml:msub></mml:mrow></mml:munderover><mml:mfenced open="|" close="|"><mml:mrow><mml:mspace width="0.25em" linebreak="nobreak"/><mml:munderover><mml:mo movablelimits="false">∫</mml:mo><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mtext>min</mml:mtext></mml:msub></mml:mrow><mml:mi>z</mml:mi></mml:munderover><mml:mfenced close=")" open="("><mml:mrow><mml:msub><mml:mi>Z</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub><mml:mo>(</mml:mo><mml:mi>y</mml:mi><mml:mo>)</mml:mo><mml:mo>-</mml:mo><mml:msub><mml:mi>Z</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msub><mml:mo>(</mml:mo><mml:mi>y</mml:mi><mml:mo>)</mml:mo></mml:mrow></mml:mfenced><mml:mi mathvariant="normal">d</mml:mi><mml:mi>y</mml:mi></mml:mrow></mml:mfenced><mml:mi mathvariant="normal">d</mml:mi><mml:mi>z</mml:mi><mml:mo>,</mml:mo><?xmltex \hack{$\egroup}?></mml:mrow></mml:math></disp-formula>
          where <inline-formula><mml:math id="M129" display="inline"><mml:mi>z</mml:mi></mml:math></inline-formula> and <inline-formula><mml:math id="M130" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> integrate over the range of reflectivity values, <inline-formula><mml:math id="M131" display="inline"><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mtext>max</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M132" display="inline"><mml:mrow><mml:msub><mml:mi>z</mml:mi><mml:mtext>min</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> are the maximum and minimum reflectivity, and <inline-formula><mml:math id="M133" display="inline"><mml:mi>Z</mml:mi></mml:math></inline-formula>s represent PDFs of reflectivity data. EMD for the reflectivity field is shown in panel (d) of Figs. <xref ref-type="fig" rid="Ch1.F4"/>, <xref ref-type="fig" rid="Ch1.F6"/>, and <xref ref-type="fig" rid="Ch1.F8"/>. Plots of EMD for the other fields are included in the Supplement.</p>
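As a worked illustration, Eq. (6) reduces to a short discrete computation: the inner integral is the difference of the two cumulative distribution functions, and the outer integral sums its absolute value. A sketch; the histogram bin count is an assumption, not a value from the paper.

```python
import numpy as np

def emd_percent(truth, output, z_min, z_max, bins=100):
    """EMD between the value histograms of two fields, normalized to a
    percentage of the worst possible EMD (lower is better)."""
    edges = np.linspace(z_min, z_max, bins + 1)
    dz = edges[1] - edges[0]
    p1, _ = np.histogram(truth, edges, density=True)
    p2, _ = np.histogram(output, edges, density=True)
    # inner integral of Eq. (6): difference of the two CDFs
    cdf_diff = np.cumsum((p1 - p2) * dz)
    # outer integral, scaled by 100 / (z_max - z_min)
    return 100.0 / (z_max - z_min) * np.abs(cdf_diff).sum() * dz
```

Two identical distributions score 0; two point masses at opposite ends of the range approach the worst-case score of 100.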
      <p id="d1e2674"><italic>Power spectral density (PSD)</italic>. The PSD is used to estimate the ability of the inpainting schemes to produce plausible small-scale variability. This is an important error metric to consider because many inpainting approaches produce smooth results in the inpainted region that, while they may optimize pixel-level error reasonably well, are not representative of most atmospheric phenomena, which often contain small-scale turbulent features. Because some of the schemes behave differently along the different dimensions (time–height or range–azimuth), we separately compute PSD in each dimension. Here, the PSD is computed as
            <disp-formula id="Ch1.E7" content-type="numbered"><label>7</label><mml:math id="M134" display="block"><mml:mrow><mml:mi mathvariant="normal">PSD</mml:mi><mml:mo>=</mml:mo><mml:mn mathvariant="normal">10</mml:mn><mml:msub><mml:mi>log⁡</mml:mi><mml:mn mathvariant="normal">10</mml:mn></mml:msub><mml:mfenced open="(" close=")"><mml:mrow><mml:msub><mml:mover accent="true"><mml:mrow><mml:mfenced close=")" open="("><mml:mrow><mml:msub><mml:mi mathvariant="script">F</mml:mi><mml:mi>x</mml:mi></mml:msub><mml:mo mathvariant="italic">{</mml:mo><mml:mi>z</mml:mi><mml:mo mathvariant="italic">}</mml:mo></mml:mrow></mml:mfenced></mml:mrow><mml:mo mathvariant="normal">‾</mml:mo></mml:mover><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:mfenced><mml:mo>,</mml:mo></mml:mrow></mml:math></disp-formula>
          where <inline-formula><mml:math id="M135" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="script">F</mml:mi><mml:mi>x</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is the fast Fourier transform with respect to dimension <inline-formula><mml:math id="M136" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula>, <inline-formula><mml:math id="M137" display="inline"><mml:mrow><mml:msub><mml:mover accent="true"><mml:mrow><mml:mo>(</mml:mo><mml:mo>⋅</mml:mo><mml:mo>)</mml:mo></mml:mrow><mml:mo mathvariant="normal">‾</mml:mo></mml:mover><mml:mi>y</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula> is the mean with respect to dimension <inline-formula><mml:math id="M138" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula>, and <inline-formula><mml:math id="M139" display="inline"><mml:mi>z</mml:mi></mml:math></inline-formula> is the reflectivity. For horizontal reflectivity PSD computed on the KaZR blind-zone case, for instance, <inline-formula><mml:math id="M140" display="inline"><mml:mi>z</mml:mi></mml:math></inline-formula> is reflectivity, <inline-formula><mml:math id="M141" display="inline"><mml:mi>x</mml:mi></mml:math></inline-formula> is time, and <inline-formula><mml:math id="M142" display="inline"><mml:mi>y</mml:mi></mml:math></inline-formula> is height. In the following plots of PSD, the ground-truth PSD is shown as a black line, and ideally the inpainting schemes yield PSD curves close to this black line.<?pagebreak page7738?> When viewing the results for this error metric, it is important to note that some of the simplest techniques, linear interpolation and repeating boundary conditions, appear to produce realistic power spectra. However, they are only able to do so in one dimension, because they simply copy the realistic frequency information from the boundary into the missing data region. Along the other dimension, they do not produce any useful small-scale variability. 
As with the EMD, the PSD metric cannot handle no-data regions well, so these regions are filled with the same default values described for EMD before computing the PSD. In Figs. 4, 6, and 8, mean PSD for the reflectivity field is shown in panels (e) and (f). Plots of the PSD for velocity and spectrum width are included in the Supplement.</p>
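Equation (7) can be sketched in a few lines of NumPy. This is a minimal sketch assuming the usual squared-magnitude reading of the spectral power; the function name is hypothetical, and a tiny offset is added inside the logarithm only to avoid log(0) in empty bins.

```python
import numpy as np

def mean_psd(z, axis=0):
    """Power of the FFT along `axis`, averaged over the other dimension
    of a 2-D field, returned in decibels (Eq. 7)."""
    power = np.abs(np.fft.rfft(z, axis=axis)) ** 2
    return 10.0 * np.log10(power.mean(axis=1 - axis) + 1e-30)
```

For a time-height KaZR sample, `axis=1` gives the horizontal (time) spectrum averaged over height, and `axis=0` the vertical spectrum averaged over time.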
</sec>
<sec id="Ch1.S4.SS3">
  <label>4.3</label><title>KaZR outage scenario</title>
      <p id="d1e2801">In this scenario, we simulate bad or missing data for a 16 to 168 s period. While such an outage is less common than the blind-zone or blockage scenarios, it is possible, particularly when radars multi-task between different scan modes and, rather than operating continuously at zenith, revisit it regularly. The outage case also provides an example of how the CNN-based inpainting schemes behave when information is available on two boundaries. The CNN outputs for a sample case drawn from the test set are shown in Fig. <xref ref-type="fig" rid="Ch1.F3"/>. This sample shows a 1 min missing data period, denoted by the vertical dashed lines; a 16 s buffer period is used before and after the missing data, where the CNN outputs are blended with the ground-truth data to avoid edge artifacts. This scenario (Fig. <xref ref-type="fig" rid="Ch1.F3"/>a–c) was chosen because it contained complex cloud and precipitation features, including multi-layer cloud, areas of heavy precipitation, and many regions with turbulent motion. The outputs from the <inline-formula><mml:math id="M143" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN are shown in Fig. <xref ref-type="fig" rid="Ch1.F3"/>d–f. They are clearly smoothed, and, qualitatively speaking, even if the markers denoting the outage period were not shown, it would be obvious that inpainting had been performed. This is particularly evident for the cloud near the top of the sample (around 6 km). In the ground truth (panels a–c), this cloud does not extend through the missing data region, but this is not made obvious by the information on the boundaries, and the CNN has clearly struggled to localize the edges of this cloud. Nonetheless, the CNN outputs are considerably more appealing than those from any of the baseline inpainting schemes. 
The <inline-formula><mml:math id="M144" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN is particularly good at extending diagonal features like fall streaks across the missing data region and identifying an appropriate location for the upper boundary of the lower cloud. While some smoothing of the data has occurred, the results are much sharper than those achieved with the other schemes.</p>
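The buffer-period blending mentioned above can be sketched as a simple linear cross-fade between observations and CNN output. This is a sketch, not the paper's implementation: the function name is hypothetical, the ramp is assumed linear, and the missing region is assumed to lie away from the array edges (the 16 s buffer corresponds to a fixed number of KaZR time columns).

```python
import numpy as np

def blend_edges(truth, cnn_out, miss_sl, buffer=8):
    """Replace the missing columns `miss_sl` with CNN output and linearly
    blend over `buffer` columns on either side to avoid edge artifacts."""
    out = truth.astype(float).copy()
    a, b = miss_sl.start, miss_sl.stop
    out[:, a:b] = cnn_out[:, a:b]
    w = np.linspace(0.0, 1.0, buffer + 2)[1:-1]  # strictly 0 < w < 1 ramp
    # fade from observations into CNN output before the gap, and back after
    out[:, a - buffer:a] = (1 - w) * truth[:, a - buffer:a] + w * cnn_out[:, a - buffer:a]
    out[:, b:b + buffer] = w[::-1] * cnn_out[:, b:b + buffer] + (1 - w[::-1]) * truth[:, b:b + buffer]
    return out
```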

      <?xmltex \floatpos{t}?><fig id="Ch1.F3" specific-use="star"><?xmltex \currentcnt{3}?><?xmltex \def\figurename{Figure}?><label>Figure 3</label><caption><p id="d1e2834">An example of inpainting a KaZR outage/missing data period. <bold>(a</bold>–<bold>c)</bold> Ground truth, <bold>(d</bold>–<bold>f)</bold> conventional CNN inpainting, and <bold>(g</bold>–<bold>i)</bold> CGAN inpainting.</p></caption>
          <?xmltex \igopts{width=355.659449pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f03.png"/>

        </fig>

      <?xmltex \floatpos{t}?><fig id="Ch1.F4" specific-use="star"><?xmltex \currentcnt{4}?><?xmltex \def\figurename{Figure}?><label>Figure 4</label><caption><p id="d1e2864">Error metrics computed on the KaZR outage inpainting scenario. Panels <bold>(a)</bold>–<bold>(c)</bold> show mean absolute pixel errors. Panel <bold>(d)</bold> shows the earth mover's distance. Panels <bold>(e)</bold> and <bold>(f)</bold> show power spectral density.</p></caption>
          <?xmltex \igopts{width=369.885827pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f04.png"/>

        </fig>

      <p id="d1e2889">The improvement over conventional inpainting schemes is clear in Fig. <xref ref-type="fig" rid="Ch1.F4"/>a–c, where pixel-wise MAE is shown for each of the fields as a function of the duration of the missing data period. The <inline-formula><mml:math id="M145" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>-optimizing CNN is the best-performing scheme for each field. It also produces the most reasonable histogram of reflectivity outputs according to the EMD metric shown in Fig. <xref ref-type="fig" rid="Ch1.F4"/>d. The margin of improvement for these two metrics increases with the size of the missing data region. Interestingly, the linear interpolation approach, which is one of the least sophisticated inpainting schemes, performs the second best in most cases. The vertical and horizontal PSD plots shown in Fig. <xref ref-type="fig" rid="Ch1.F4"/>e and f demonstrate, however, the major limitation of the <inline-formula><mml:math id="M145a" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN (and of the other inpainting schemes): a tendency to over-smooth outputs. In panels (e)–(f), the black curves represent the ground truth, and proximity to the black curve is better. The inpainting schemes typically have lower PSD than the ground truth, and this is most evident at high frequencies, meaning the inpainting methods do not produce a realistic amount of small-scale variability. The linear interpolation approach has a reasonable PSD curve in the vertical, but this is because it simply copies vertical frequency information from the boundaries of the missing data region. It is the worst performer in the horizontal (time) dimension.</p>
      <p id="d1e2909">The only approach that performs well in both the horizontal and vertical component of PSD is the CGAN. The output from the CGAN (Fig. <xref ref-type="fig" rid="Ch1.F3"/>g–i) is significantly more realistic than the <inline-formula><mml:math id="M146" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN, to the degree that it may not be obvious that inpainting was performed without close examination. This is because it has generated plausible small-scale variability in addition to inpainting the large-scale features successfully. For instance, for the lower cloud and precipitation region in this sample, it has successfully generated a plausible and sharp upper boundary for the cloud and has extended large reflectivity features that span the missing data region across, much like the output from the <inline-formula><mml:math id="M147" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN. In addition, it has added small-scale features, like jaggedness to the upper edge of the cloud, or turbulent motion visible in the vertical velocity field in the upper half of the cloud. The most noticeable unrealistic feature that is frequently produced by the CGAN is a tendency to introduce faint linear structures and checkerboard patterns. This is a well-documented problem for generative adversarial networks in general and is a byproduct of the convolutions in the CNN <xref ref-type="bibr" rid="bib1.bibx31" id="paren.54"/>. The results for the higher cloud are of particular interest. Here, the CGAN has opted to connect the cloud features on either side of the missing data region and includes positive velocities and high spectrum width, indicating cloud-top turbulence. 
These features are not present in the ground-truth data but are realistic and consistent with cloud features commonly observed during the field campaign. Because the features are not an exact match for the ground truth the scheme's MAE and EMD scores suffer, but they are still comparable to the scores of other inpainting schemes.</p>
</sec>
<sec id="Ch1.S4.SS4">
  <label>4.4</label><title>KaZR Blind Zone Scenario</title>
      <?pagebreak page7740?><p id="d1e2948">In this case, we simulate missing low-level KaZR data. This type of inpainting has multiple potential applications: Firstly, the KaZR operates in both a burst and chirped pulse mode. The chirped mode provides higher sensitivity but does not retrieve usable near-surface data <xref ref-type="bibr" rid="bib1.bibx41" id="paren.55"/>. Secondly, space-borne radars like GPM are unable to retrieve near-ground data due to interference from the surface <xref ref-type="bibr" rid="bib1.bibx36" id="paren.56"/> and the size of the blind zone depends on terrain and surface type. The CNN-based inpainting techniques presented here could easily be modified to work with such datasets. Here, KaZR burst pulse-mode data are artificially degraded, and all values below a randomly selected level from 0.21–2.01 km are removed. In this scenario, we also use a small buffer region (30–510 m) near the top of the missing data region, where observations are smoothly merged with the CNN output.</p>
      <p id="d1e2957">A sample from the test set is shown in Fig. <xref ref-type="fig" rid="Ch1.F5"/>. The ground-truth data (panels a–c) show a cloud (left side) with a fall streak extending diagonally downward towards the missing data region that weakens closer to the ground. Only about half of the missing data region contains cloud and precipitation, and so inpainting this particular case will involve accurately guessing the location of the cloud edge. The output from the <inline-formula><mml:math id="M148" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN is shown in panels (d)–(f). This scheme does a particularly good job of extending the large-scale features downwards diagonally into the missing data region, which is consistent with the ground truth. Note the leftmost cloud edge in the ground truth intersects the top of the missing data region at around 1 min but reaches the ground around 4.5 min. The <inline-formula><mml:math id="M149" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN does a particularly good job of capturing this. It does not, however, introduce sharp features or small-scale variability. This is perhaps most noticeable in the spectrum width field, where values in the blind zone only range from about 0–0.5 ms<inline-formula><mml:math id="M150" display="inline"><mml:msup><mml:mi/><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1</mml:mn></mml:mrow></mml:msup></mml:math></inline-formula>. On the other hand, the CGAN does a much better job of including sharp and realistic features in the missing data region. 
Notably, in this case it chooses to increase the intensity of the main fall streak in the example all the way to the surface, with corresponding high values of reflectivity, negative velocity, and increased spectrum width. This is not consistent with the ground truth, but it is a plausible scenario.</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F5" specific-use="star"><?xmltex \currentcnt{5}?><?xmltex \def\figurename{Figure}?><label>Figure 5</label><caption><p id="d1e2998">An example of inpainting a KaZR low-level blind zone. <bold>(a</bold>–<bold>c)</bold> Ground truth, <bold>(d</bold>–<bold>f)</bold> conventional CNN inpainting, and <bold>(g</bold>–<bold>i)</bold> CGAN inpainting.</p></caption>
          <?xmltex \igopts{width=355.659449pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f05.png"/>

        </fig>

      <?xmltex \floatpos{t}?><fig id="Ch1.F6" specific-use="star"><?xmltex \currentcnt{6}?><?xmltex \def\figurename{Figure}?><label>Figure 6</label><caption><p id="d1e3029">Error metrics computed on the KaZR low-level blind zone. Panels <bold>(a)</bold>–<bold>(c)</bold> show mean absolute pixel errors. Panel <bold>(d)</bold> shows the earth mover's distance. Panels <bold>(e)</bold> and <bold>(f)</bold> show power spectral density.</p></caption>
          <?xmltex \igopts{width=384.112205pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f06.png"/>

        </fig>

      <p id="d1e3053">The error metrics for the low-level inpainting case are shown in Fig. <xref ref-type="fig" rid="Ch1.F6"/>. As in the simulated KaZR outage case discussed above, the classical CNN approach outperforms all of the baseline inpainting schemes in both MAE and EMD (panels a–d), typically only having about <inline-formula><mml:math id="M151" display="inline"><mml:mrow><mml:mn mathvariant="normal">1</mml:mn><mml:mo>/</mml:mo><mml:mn mathvariant="normal">2</mml:mn></mml:mrow></mml:math></inline-formula>–<inline-formula><mml:math id="M152" display="inline"><mml:mrow><mml:mn mathvariant="normal">1</mml:mn><mml:mo>/</mml:mo><mml:mn mathvariant="normal">5</mml:mn></mml:mrow></mml:math></inline-formula> of the pixel-level error. This alone is a significant improvement and indicates that in the future, a deep-learning-based approach may be best for filling in missing low-level radar data. Interestingly, in this case the CGAN performs about as well as the <inline-formula><mml:math id="M153" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN in terms of the MAE and EMD. We hypothesize that this is due to the fact that there is only one boundary with information that can be used by the inpainting algorithms in this case. Compared to the KaZR outage case (Fig. <xref ref-type="fig" rid="Ch1.F4"/>) the MAE for all of the algorithms has increased substantially in this case, though the MAE for the CGAN less so. This may be because the other inpainting algorithms relied heavily on having two boundaries to accurately place the edges of inpainted clouds, and because the second boundary is not available in this case, inability to correctly place cloud edges and large-scale features becomes a large contributor to MAE. 
Meanwhile, a large contributor to the MAE of the CGAN is likely its tendency to generate small-scale pixel-level variability which would not necessarily be different between the two KaZR inpainting scenarios. At the same time, it performs comparably to the classical CNN when placing large-scale features. Power spectral density is shown in panels (e)–(f). Unsurprisingly, the CGAN (purple) is the scheme that consistently performs well for both horizontal and vertical components of PSD. Interestingly, the <inline-formula><mml:math id="M154" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN performs well in the vertical component but worse in the horizontal component. The “repeat” scheme does well in the horizontal component of PSD because it simply copies frequency information from the boundary of the missing data region but cannot produce any variability in the vertical. Finally, the “Efros” scheme actually produces too much high-frequency variability in the horizontal component. This results from the structure of the dataset. The scheme is simply copying textures from the area above the missing data region, and the PSD reflects those textures. In conclusion, deep learning can considerably improve upon existing schemes for infilling missing low-level radar data. Furthermore, for this specific application, there appears to be little reason not to use a CGAN-based approach that can generate plausible small-scale variability because it does not lead to significant increases in pixel-level error compared to the <inline-formula><mml:math id="M155" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN.</p>
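One plausible way to compute the horizontal and vertical PSD curves used in these comparisons is sketched below. This is an illustrative reimplementation, not the authors' exact procedure (windowing and normalization choices are assumptions):

```python
import numpy as np


def mean_psd(field, axis):
    """Mean 1D power spectral density of a 2D (height x time) field.

    FFT along `axis` (0 = vertical, 1 = horizontal/time), then average the
    squared spectral magnitudes over the other dimension. Comparing these
    curves between inpainted output and ground truth reveals whether a
    scheme reproduces realistic small-scale variability.
    """
    spec = np.fft.rfft(field, axis=axis)
    return (np.abs(spec) ** 2).mean(axis=1 - axis)


# toy example: an over-smoothed "inpainted" field (here, a constant fill)
# has essentially no power at nonzero frequencies, unlike the truth
rng = np.random.default_rng(1)
truth = rng.normal(size=(64, 64))
oversmoothed = np.full_like(truth, truth.mean())
psd_truth = mean_psd(truth, axis=1)        # horizontal (time) component
psd_smooth = mean_psd(oversmoothed, axis=1)
```

The deficit of the over-smoothed field at the high-frequency end of such curves is exactly the behavior panels (e)–(f) diagnose.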
</sec>
<sec id="Ch1.S4.SS5">
  <label>4.5</label><title>Scanning radar beam blockage scenario</title>
      <p id="d1e3126">In this scenario, beam blockage is simulated using data from the C-SAPR2 scanning  radar. Beam blockage due to nearby objects (like trees, buildings, or terrain) is a common problem for scanning radars, particularly at lower elevation angles. Of the three missing data scenarios used in this paper, this one is likely the most widely applicable due to the prevalence of scanning radars in operational systems. The ability to accurately fill in missing data due to beam blockage is useful for operational weather radars, as it can provide more consistent inputs for weather models that ingest radar data, like nowcasting systems, and could be used to generate more appealing radar products for dissemination to the public. For the C-SAPR2 radar specifically, inpainting beam blockage areas would allow for easier application of high-level processing that might be used for research, such as feature detection and tracking. In particular, we would want an infilling system in these cases to accurately represent the distribution of the weather without straying too far from the ground truth. Furthermore, C-SAPR2 is deployed at ARM sites which may be more likely to suffer from beam blockage due to nearby objects because there is typically a large suite of other instruments deployed nearby and because clearing potential nearby blockages (tree removal or re-grading) or building a large structure to raise the antenna may not be an option.<?pagebreak page7741?> Missing data of this form can also be caused by attenuation of the radar wave due to strong precipitation, which is a common problem for C-band and higher frequency radars.</p>
      <p id="d1e3129">Here, we simulate beam blockages of 8–42<inline-formula><mml:math id="M156" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> starting anywhere from 1.6–25.6 km from the radar. Again, a buffer of variable size from 1–17 pixels during training and 8 pixels during testing is used around the simulated blockage region to merge inpainted data and observations via Eq. (<xref ref-type="disp-formula" rid="Ch1.E1"/>). Unlike the other two inpainting scenarios, this case has three boundaries with observations. The buffer at the corners where these boundaries meet is defined as the outer product of two vectors that linearly decrease from 1–0. The C-SAPR2 data used are defined in polar coordinates with 1<inline-formula><mml:math id="M157" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> resolution in the azimuth and 100 m range resolution. While it may be possible to inpaint the blockage region by applying a smaller CNN multiple times, it is preferable to inpaint the entire blockage region at once to avoid introducing edge artifacts in the middle of the inpainted area. It is also desirable to perform inpainting at the native resolution of the radar data to avoid introducing artifacts or degrading the data quality when converting to a different coordinate system. Because of this, we chose to use a slightly different configuration of the CNN for the beam blockage case. The inputs are trimmed to 1024 (range gates) <inline-formula><mml:math id="M158" display="inline"><mml:mrow><mml:mo>×</mml:mo><mml:mn mathvariant="normal">128</mml:mn></mml:mrow></mml:math></inline-formula> (<inline-formula><mml:math id="M159" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula>) so that the whole scan is not processed at once but the entire blockage region is. 
Trimming to 128<inline-formula><mml:math id="M160" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> in the azimuth, as opposed to processing the full scan, was largely due to memory limitations during training. It also makes the trained CNN more versatile, however, because it does not require a scan spanning a full 360<inline-formula><mml:math id="M161" display="inline"><mml:msup><mml:mi/><mml:mo>∘</mml:mo></mml:msup></mml:math></inline-formula> in azimuth. The location and the size of the blockages were chosen randomly during training, but the samples are all rotated in the azimuth so that the blockage appears in the center of the inputs to the CNNs.</p>
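The blockage simulation and centering described above can be sketched as follows. The widths (8–42°), start ranges (1.6–25.6 km at 100 m gates), 128° crop, and corner buffer follow the text, but the sampling details, the NaN masking, and all names are illustrative assumptions:

```python
import numpy as np


def make_blockage_sample(scan, rng):
    """Simulate a beam blockage in a polar scan (range gates x azimuth deg).

    Draws a random blockage width, start range, and azimuth, then rotates
    the scan so the blockage sits at the center of a 128-deg crop, as the
    training samples are prepared in the text.
    """
    n_rng, n_az = scan.shape                       # e.g. 1024 gates x 360 deg
    width = int(rng.integers(8, 43))               # blockage width, 8-42 deg
    start_gate = int(rng.integers(16, 257))        # 1.6-25.6 km at 100 m gates
    az0 = int(rng.integers(0, n_az))               # azimuth where the blockage begins
    # rotate so the blockage is centered, then crop to 128 deg of azimuth
    rolled = np.roll(scan, 64 - az0 - width // 2, axis=1)[:, :128]
    mask = np.zeros_like(rolled, dtype=bool)
    mask[start_gate:, 64 - width // 2 : 64 - width // 2 + width] = True
    return np.where(mask, np.nan, rolled), mask


# corner of the merge buffer: outer product of two linear ramps from 1 to 0
corner_weight = np.outer(np.linspace(1, 0, 8), np.linspace(1, 0, 8))

degraded, mask = make_blockage_sample(np.ones((1024, 360)), np.random.default_rng(0))
```

A real pipeline would also carry velocity and spectrum width fields through the same mask.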
      <p id="d1e3190">This scenario differs from the KaZR inpainting scenarios because the CNN operates directly on the polar data. We made this choice so that the data did not have to be re-gridded, which would sacrifice resolution near the radar and could potentially introduce unwanted artifacts. Conventional CNNs may not be well-suited for operating on polar data however. This is because the convolutional filters assume that the data are translationally invariant and that the features they learn are applicable everywhere in the image. The observed size (number of pixels occupied) of a weather feature of a fixed physical size can change drastically in polar coordinates depending on its distance from the antenna due to spread of the wave as it propagates from the radar however.<?pagebreak page7742?> This may mean that the CNNs have difficulty learning efficient physical representations of common weather features because physically similar features can appear quite different depending on their location. Put another way, CNNs may require more filters to learn the same representations in polar coordinates than they would need to represent the same objects in Cartesian space. Nonetheless, training the CNN to process C-SAPR2 data went smoothly, and the CNN learned to outperform conventional inpainting schemes by a comparable margin to the KaZR scenarios. Developing CNN architectures better suited to working in polar (or spherical) coordinate systems may be an important area of research in the future.</p>
      <p id="d1e3193">Example outputs of the beam blockage scenario are shown in Fig. <xref ref-type="fig" rid="Ch1.F7"/>. The ground-truth data are shown in panels (a)–(c), and the dashed lines represent the region where data were removed to simulate beam blockage. This sample shows some diverse cloud and precipitation structure, with heavy precipitation occurring in the left portion of the missing data region and weaker precipitation on the right, in addition to a clear sky area and smaller precipitating feature closer to the radar. The <inline-formula><mml:math id="M162" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>-optimizing CNN outputs are shown in panels (d)–(f), and the CGAN outputs are shown in panels (g)–(i). Both neural networks do a good job of extending large-scale features into the missing data region, the area of negative radial velocities near the left edge of the blockage for instance. Again, the output from the <inline-formula><mml:math id="M163" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN is too smooth, and it is obvious that inpainting has been performed. The CGAN introduces plausible small-scale variability in each of the fields however, and, qualitatively speaking, the output looks realistic, to the point that it may be difficult to notice that a beam blockage has been filled in if it were not for the dashed lines in the figure. We have made other samples from the test set available for download. Compared to the KaZR scenarios, the CGAN has a much larger tendency to produce qualitatively low-quality results when inpainting C-SAPR2 data. 
There are many possible reasons for this, but the most likely seems to simply be the fact that the C-SAPR2 data are significantly different than those collected by KaZR and contain different types of structures that may be more difficult for the CNN to represent. Particularly notable are cases where the observed data are spatially smooth. In these cases, the CGAN has a tendency to introduce too much variability and sometimes edge artifacts in the inpainted region, and this makes it obvious that inpainting was performed. These cases are not common in the dataset however, and it appears that the CGAN did not learn to generalize to them. Edge artifacts at the sides of the blockage region were more common in the C-SAPR2 case. Similar-looking radial lines are fairly common in the original C-SAPR2 data, and this may have made it difficult for the discriminator network to differentiate between naturally occurring radial features and edge artifacts.</p>

      <?xmltex \floatpos{t}?><fig id="Ch1.F7" specific-use="star"><?xmltex \currentcnt{7}?><?xmltex \def\figurename{Figure}?><label>Figure 7</label><caption><p id="d1e3223">An example of inpainting a C-SAPR2 beam blockage. <bold>(a</bold>–<bold>c)</bold> Ground truth, <bold>(d</bold>–<bold>f)</bold> conventional CNN inpainting, and <bold>(g</bold>–<bold>i)</bold> CGAN inpainting.</p></caption>
          <?xmltex \igopts{width=369.885827pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f07.png"/>

        </fig>

      <?xmltex \floatpos{t}?><fig id="Ch1.F8" specific-use="star"><?xmltex \currentcnt{8}?><?xmltex \def\figurename{Figure}?><label>Figure 8</label><caption><p id="d1e3253">Error metrics computed on the C-SAPR2 beam blockage. Panels <bold>(a)</bold>–<bold>(c)</bold> show mean absolute pixel errors. Panel <bold>(d)</bold> shows the earth mover's distance. Panels <bold>(e)</bold> and <bold>(f)</bold> show power spectral density.</p></caption>
          <?xmltex \igopts{width=384.112205pt}?><graphic xlink:href="https://amt.copernicus.org/articles/14/7729/2021/amt-14-7729-2021-f08.png"/>

        </fig>

      <p id="d1e3277">As in the other two scenarios, the <inline-formula><mml:math id="M164" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN significantly outperforms the baseline inpainting schemes in terms of both pixel-level MAE and EMD (Fig. <xref ref-type="fig" rid="Ch1.F8"/>a–d). The CGAN performs worse than the other schemes in terms of MAE, though this<?pagebreak page7743?> is expected because it introduces small-scale variability that, while plausible, is not necessarily in the correct location and inflates the MAE. On the other hand, the CGAN performs reasonably well in terms of EMD and dramatically outperforms all of the other schemes in reproducing the ground-truth PSD (panels e–f). Again, note that the linear interpolation scheme only appears to perform well in terms of PSD when computed along the range dimension because it copies real variability from the edge of the blockage region. In panel (e), there appear to be ringing artifacts in the CGAN PSD curve (purple). They are evenly spaced in frequency, and we hypothesize that they are related to weak checkerboard-like artifacts in the CNN output that result from the convolutional filters. In summary, the CNNs are effective for inpainting beam blockage regions for C-SAPR2 and likely will be for other scanning radars. The <inline-formula><mml:math id="M165" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>-optimizing CNN performs extremely well for pixel-level errors, and the CGAN produces outputs with realistic power spectra, which may be preferable if a more visually appealing or physically plausible output is desired.</p>
</sec>
<sec id="Ch1.S4.SS6">
  <label>4.6</label><title>Importance of the random seed</title>
      <p id="d1e3312">After finishing training, we found that the outputs from the CGANs were not dependent on the random seed that was supplied with the inputs. Traditionally, generative adversarial networks are designed to take a random vector as an input that represents a latent space. The GANs use this random data to introduce variability in their outputs. All three of the CGANs trained here do not appear to use the random input however (changing the random data for a given inpainting case does not alter the output). Nonetheless, they are still able to produce a diverse set of outputs with plausible small-scale variability, which seems to imply that the CGANs leverage the natural variability present in the radar data in the unmasked part of the scan to generate diverse outputs. Furthermore, the small-scale content in the inpainted regions changes as the size of the blockage region (and hence what observations are available to the CNN) is changed and is not a perfect match to ground truth, so the models have not over-fit. The authors of <xref ref-type="bibr" rid="bib1.bibx21" id="text.57"/> make a similar observation. Rather than a random input, they use dropout layers at both training and test time to introduce some<?pagebreak page7744?> variability in the generator network. They note that while using the dropout at test time does produce some additional variability in the outputs, it is less than expected. Developing inpainting schemes, and CGANs that are appropriate for use with meteorological data that can adequately represent a range of variance in their outputs may be an important area of future research. When using CGANs for a task like nowcasting, for instance, the ability to quantify uncertainty in the CGAN outputs by generating multiple realizations would be valuable.</p>
</sec>
</sec>
<sec id="Ch1.S5" sec-type="conclusions">
  <label>5</label><title>Discussion and conclusions</title>
      <p id="d1e3328">In this work we demonstrated the capabilities of modern deep-learning-based inpainting schemes for filling in missing radar data regions. Two approaches were tested: a convolutional neural network (CNN), that optimizes pixel-level error, and a conditional generative adversarial network (CGAN), that is capable of generating realistic outputs. The CNNs were compared to conventional inpainting and interpolation schemes, and CNN-based inpainting generally provides superior results in terms of pixel-level error, the distribution of the output data, and their ability to generate realistic power spectra.</p>
      <p id="d1e3331">The inpainting results for the two types of CNN make clear that a trade-off exists between pixel-level accuracy and physical realism. The <inline-formula><mml:math id="M166" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN was able to outperform all other schemes in terms of pixel-level errors but ultimately produced outputs that are smooth and are not representative of realistic atmospheric variability. On the other hand, realistic inpainting can be achieved using a CGAN approach, which can generate plausible cloud and precipitation structures to the degree that it may be difficult to notice that inpainting has been performed without a close inspection of the outputs. This is exemplified by the PSD curves computed on the CGAN output, which showed that CGANs can closely mimic the variability in the training data across spatial scales. Ultimately, this trade-off between pixel-level accuracy and physical realism is a fundamental limitation of the inpainting problem: the true small-scale variability in a missing data region is not recoverable, and the problem of filling it in is ill-posed because multiple physically plausible solutions exist. In other words, one must choose whether pixel accuracy or realistic features are of greater importance given their task. An MAE of zero represents an exact match to the original data, and so some might argue that, for scientific datasets, pixel- or observation-level errors are a priority. 
An alternative view, however, is that in practice, the inpainted output from the <inline-formula><mml:math id="M167" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN is always smoothed and unrealistic to the degree that we can say with near certainty that it is not representative of what actually occurred. The CGAN can at least provide a plausible result that is unlikely to be true but cannot immediately be dismissed as incorrect.</p>
      <?pagebreak page7745?><p id="d1e3356">The choice to use a <inline-formula><mml:math id="M168" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula>- or CGAN-style CNN for inpainting will ultimately be task-dependent and should be made with caution. If used for operational meteorology, and in particular, in scenarios where extreme weather is present, the CGAN's ability to hallucinate plausible weather features could pose a danger, if it hallucinates extreme weather in a location without it or vice versa, for example. In this type of application, a <inline-formula><mml:math id="M169" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN is likely a better, more conservative choice because it is unlikely to hallucinate an important feature. For generating visually appealing broadcast meteorology products that are not used for diagnosing severe weather or infilling blockages so that high-level processing can be applied, a CGAN may be a better choice. The CGAN's ability to generate very plausible turbulence and small-scale features that seamlessly integrate with their surroundings means it may be a good choice for a task like repairing a damaged scientific dataset. In this type of application, interpolation or an <inline-formula><mml:math id="M170" display="inline"><mml:mrow><mml:msub><mml:mi mathvariant="normal">ℓ</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> CNN might introduce unrealistic features that interfere with analysis of the data. In either case, our results demonstrate that CNN-based inpainting schemes can significantly outperform their conventional counterparts for filling in missing or damaged radar data. 
Finally, while the capabilities of these schemes were demonstrated here on radar data, it should be noted that none of the CNN-based methods themselves utilize anything unique about radar and have significant potential for application to other instrument data streams or even model data.</p>
</sec>

      
      </body>
    <back><notes notes-type="codedataavailability"><title>Code and data availability</title>

      <p id="d1e3396">The code used for this project is available at <ext-link xlink:href="https://doi.org/10.5281/zenodo.5643624" ext-link-type="DOI">10.5281/zenodo.5643624</ext-link> <xref ref-type="bibr" rid="bib1.bibx10" id="paren.58"/>.
Additional sample outputs for test set cases are available online at <ext-link xlink:href="https://doi.org/10.5281/zenodo.5744857" ext-link-type="DOI">10.5281/zenodo.5744857</ext-link> <xref ref-type="bibr" rid="bib1.bibx11" id="paren.59"/>.
The KaZR data are available from the Atmospheric Radiation Measurement program data discovery tool at <ext-link xlink:href="https://doi.org/10.5439/1615726" ext-link-type="DOI">10.5439/1615726</ext-link> and have filenames “corkazrcfrgeqcM1.b1” <xref ref-type="bibr" rid="bib1.bibx14" id="paren.60"/>.
The CSAPR-2 data used in this study were processed using the Taranis software package, which is currently in development and will be made publicly available in the near future <xref ref-type="bibr" rid="bib1.bibx17" id="paren.61"/>; some information can be found here: <uri>https://asr.science.energy.gov/meetings/stm/presentations/2019/776.pdf</uri> (last access: 19 March 2021). The original CSAPR-2 data from the CACTI field campaign (without Taranis processing) can also be found using the ARM data discovery tool and have filenames “corcsapr2cfrppiqcM1.b1” <xref ref-type="bibr" rid="bib1.bibx14" id="paren.62"/>.</p>
  </notes><app-group>
        <supplementary-material position="anchor"><p id="d1e3427">The supplement related to this article is available online at: <inline-supplementary-material xlink:href="https://doi.org/10.5194/amt-14-7729-2021-supplement" xlink:title="zip">https://doi.org/10.5194/amt-14-7729-2021-supplement</inline-supplementary-material>.</p></supplementary-material>
        </app-group><notes notes-type="authorcontribution"><title>Author contributions</title>

      <p id="d1e3436">AG performed experiments and wrote the manuscript, and JCH conceived the idea and contributed to the manuscript.</p>
  </notes><notes notes-type="competinginterests"><title>Competing interests</title>

      <p id="d1e3442">The contact author has declared that neither they nor their co-author have any competing interests.</p>
  </notes><notes notes-type="disclaimer"><title>Disclaimer</title>

      <p id="d1e3448">Publisher’s note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.</p>
  </notes><notes notes-type="financialsupport"><title>Financial support</title>

      <p id="d1e3454">This research has been supported by the U.S. Department of Energy (grant no. DE-AC05-76RL01830) and by the Department of Energy's Atmospheric Radiation Measurement program.</p>
  </notes><notes notes-type="reviewstatement"><title>Review statement</title>

      <p id="d1e3460">This paper was edited by Gianfranco Vulpiani and reviewed by Andrew Black and one anonymous referee.</p>
  </notes><ref-list>
    <title>References</title>

      <ref id="bib1.bibx1"><?xmltex \def\ref@label{{Agrawal et~al.(2019)Agrawal, Barrington, Bromberg, Burge, Gazen, and
Hickey}}?><label>Agrawal et al.(2019)Agrawal, Barrington, Bromberg, Burge, Gazen, and
Hickey</label><?label Agrawal2019?><mixed-citation>Agrawal, S., Barrington, L., Bromberg, C., Burge, J., Gazen, C., and Hickey,
J.: Machine learning for precipitation nowcasting from radar images, arXiv [preprint], <ext-link xlink:href="https://arxiv.org/abs/1912.12132">arXiv:1912.12132</ext-link>, 11 December 2019.</mixed-citation></ref>
      <ref id="bib1.bibx2"><?xmltex \def\ref@label{{Arjovsky et~al.(2017)Arjovsky, Chintala, and Bottou}}?><label>Arjovsky et al.(2017)Arjovsky, Chintala, and Bottou</label><?label Arjovsky2017?><mixed-citation>Arjovsky, M., Chintala, S., and Bottou, L.: Wasserstein generative adversarial networks, in: International conference on machine learning, PMLR, 70, 214–223, available at: <uri>https://proceedings.mlr.press/v70/arjovsky17a.html</uri> (last access: 22 March 2020), 2017.</mixed-citation></ref>
      <ref id="bib1.bibx3"><?xmltex \def\ref@label{{Bertalmio et~al.(2000)Bertalmio, Sapiro, Caselles, and
Ballester}}?><label>Bertalmio et al.(2000)Bertalmio, Sapiro, Caselles, and
Ballester</label><?label Bertalmio2000?><mixed-citation>Bertalmio, M., Sapiro, G., Caselles, V., and Ballester, C.: Image Inpainting,
in: Proceedings of the 27th annual conference on Computer graphics and
interactive techniques, SIGGRAPH '00, Addison-Wesley Publishing Co., USA, 417–424, <ext-link xlink:href="https://doi.org/10.1145/344779.344972" ext-link-type="DOI">10.1145/344779.344972</ext-link>, 2000.</mixed-citation></ref>
      <ref id="bib1.bibx4"><?xmltex \def\ref@label{{{Bertalmio} et~al.(2001){Bertalmio}, {Bertozzi}, and
{Sapiro}}}?><label>Bertalmio et al.(2001)Bertalmio, Bertozzi, and
Sapiro</label><?label Bertalmio2001?><mixed-citation>Bertalmio, M., Bertozzi, A. L., and Sapiro, G.: Navier-stokes, fluid
dynamics, and image and video inpainting, in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001, IEEE CVPR, 1, <ext-link xlink:href="https://doi.org/10.1109/CVPR.2001.990497" ext-link-type="DOI">10.1109/CVPR.2001.990497</ext-link>, 2001.</mixed-citation></ref>
      <ref id="bib1.bibx5"><?xmltex \def\ref@label{{Bugeau et~al.(2010)Bugeau, Bertalmío, Caselles, and
Sapiro}}?><label>Bugeau et al.(2010)Bugeau, Bertalmío, Caselles, and
Sapiro</label><?label Bugeau2010?><mixed-citation>Bugeau, A., Bertalmío, M., Caselles, V., and Sapiro, G.: A Comprehensive
Framework for Image Inpainting, IEEE T. Image Process., 19,
2634–2645, <ext-link xlink:href="https://doi.org/10.1109/TIP.2010.2049240" ext-link-type="DOI">10.1109/TIP.2010.2049240</ext-link>, 2010.</mixed-citation></ref>
      <ref id="bib1.bibx6"><?xmltex \def\ref@label{{{Criminisi} et~al.(2004){Criminisi}, {Perez}, and
{Toyama}}}?><label>Criminisi et al.(2004)Criminisi, Perez, and
Toyama</label><?label Criminsi2004?><mixed-citation>Criminisi, A., Perez, P., and Toyama, K.: Region filling and object
removal by exemplar-based image inpainting, IEEE T. Image Process., 13, 1200–1212, <ext-link xlink:href="https://doi.org/10.1109/TIP.2004.833105" ext-link-type="DOI">10.1109/TIP.2004.833105</ext-link>, 2004.</mixed-citation></ref>
      <ref id="bib1.bibx7"><?xmltex \def\ref@label{{Efros and Leung(1999)}}?><label>Efros and Leung(1999)</label><?label Efros1999?><mixed-citation>Efros, A. A. and Leung, T. K.: Texture Synthesis by Non-parametric Sampling,
IEEE I. Conf. Comp. Vis., 2, 1033–1038, <ext-link xlink:href="https://doi.org/10.1109/ICCV.1999.790383" ext-link-type="DOI">10.1109/ICCV.1999.790383</ext-link>, 1999.</mixed-citation></ref>
      <ref id="bib1.bibx8"><?xmltex \def\ref@label{{Elharrouss et~al.(2020)Elharrouss, Almaadeed, Al-Maadeed, and
Akbari}}?><label>Elharrouss et al.(2020)Elharrouss, Almaadeed, Al-Maadeed, and
Akbari</label><?label Elharrouss2019?><mixed-citation>Elharrouss, O., Almaadeed, N., Al-Maadeed, S., and Akbari, Y.: Image
Inpainting: A Review, Neural Process. Lett., 51, 2007–2028,
<ext-link xlink:href="https://doi.org/10.1007/s11063-019-10163-0" ext-link-type="DOI">10.1007/s11063-019-10163-0</ext-link>, 2020.</mixed-citation></ref>
      <ref id="bib1.bibx9"><?xmltex \def\ref@label{{Feng et~al.(2018)Feng, Leung, Houze~Jr, Hagos, Hardin, Yang, Han, and Fan}}?><label>Feng et al.(2018)Feng, Leung, Houze Jr, Hagos, Hardin, Yang, Han, and Fan</label><?label Feng2018?><mixed-citation>Feng, Z., Leung, L. R., Houze Jr., R. A., Hagos, S., Hardin, J., Yang, Q., Han, B., and Fan, J.: Structure and evolution of mesoscale convective systems: Sensitivity to cloud microphysics in convection-permitting simulations over the United States, J. Adv. Model. Earth Sy., 10, 1470–1494, <ext-link xlink:href="https://doi.org/10.1029/2018MS001305" ext-link-type="DOI">10.1029/2018MS001305</ext-link>, 2018.</mixed-citation></ref>
      <?pagebreak page7746?><ref id="bib1.bibx10"><?xmltex \def\ref@label{{{Geiss} and {Hardin}(2021a)}}?><label>Geiss and Hardin(2021a)</label><?label Geiss2021a?><mixed-citation>Geiss, A. and Hardin, J. C.: avgeiss/radar_inpainting: AMT Supplementary Code, Zenodo [code], <ext-link xlink:href="https://doi.org/10.5281/zenodo.5643624" ext-link-type="DOI">10.5281/zenodo.5643624</ext-link>, 2021a.</mixed-citation></ref>
      <ref id="bib1.bibx11"><?xmltex \def\ref@label{{{Geiss} and {Hardin}(2021b)}}?><label>Geiss and Hardin(2021b)</label><?label Geiss2021b?><mixed-citation>Geiss, A. and Hardin, J. C.: Additional Inpainting Examples, Zenodo [code], <ext-link xlink:href="https://doi.org/10.5281/zenodo.5744857" ext-link-type="DOI">10.5281/zenodo.5744857</ext-link>, 2021b.</mixed-citation></ref>
      <ref id="bib1.bibx12"><?xmltex \def\ref@label{{Goodfellow et~al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu,
Warde-Farley, Ozair, Courville, and Bengio}}?><label>Goodfellow et al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu,
Warde-Farley, Ozair, Courville, and Bengio</label><?label Goodfellow2014?><mixed-citation>Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D.,
Ozair, S., Courville, A., and Bengio, Y.: Generative Adversarial Networks,
arXiv [preprint], <ext-link xlink:href="https://arxiv.org/abs/1406.2661">arXiv:1406.2661</ext-link>, 10 June 2014.</mixed-citation></ref>
      <ref id="bib1.bibx13"><?xmltex \def\ref@label{{{Guillemot} and {Le Meur}(2014)}}?><label>Guillemot and Le Meur(2014)</label><?label Guillemot2014?><mixed-citation>Guillemot, C. and Le Meur, O.: Image Inpainting: Overview and Recent
Advances, IEEE Signal Proc. Mag., 31, 127–144,
<ext-link xlink:href="https://doi.org/10.1109/MSP.2013.2273004" ext-link-type="DOI">10.1109/MSP.2013.2273004</ext-link>, 2014.</mixed-citation></ref>
      <ref id="bib1.bibx14"><?xmltex \def\ref@label{{Hardin et~al.(2019a)Hardin, Hunzinger, Schuman, Matthews, Bharadwaj, Varble, Johnson, Giangrande}}?><label>Hardin et al.(2019a)Hardin, Hunzinger, Schuman, Matthews, Bharadwaj, Varble, Johnson, Giangrande</label><?label Hardin2019a?><mixed-citation>Hardin, J., Hunzinger, A., Schuman, E., Matthews, A., Bharadwaj, N., Varble, A., Johnson, K., and Giangrande, S.: Ka ARM Zenith Radar (KAZRCFRGEQC), Atmospheric Radiation Measurement (ARM) User Facility, ARM [dataset], <ext-link xlink:href="https://doi.org/10.5439/1615726" ext-link-type="DOI">10.5439/1615726</ext-link>, 2019a.</mixed-citation></ref>
      <ref id="bib1.bibx15"><?xmltex \def\ref@label{{Hardin~et~al.(2019b)Hardin, Hunzinger, Schuman, Matthews, Bharadwaj, Varble, Johnson, Giangrande}}?><label>Hardin et al.(2019b)Hardin, Hunzinger, Schuman, Matthews, Bharadwaj, Varble, Johnson, Giangrande</label><?label Hardin2019b?><mixed-citation>Hardin, J., Hunzinger, A., Schuman, E., Matthews, A., Bharadwaj, N., Varble, A., Johnson, K., and Giangrande, S.: C-Band Scanning ARM Precipitation Radar (CSAPR2CFRPPIQC), Atmospheric Radiation Measurement (ARM) User Facility, ARM [dataset], <ext-link xlink:href="https://doi.org/10.5439/1615604" ext-link-type="DOI">10.5439/1615604</ext-link>, 2019b.</mixed-citation></ref>
      <ref id="bib1.bibx16"><?xmltex \def\ref@label{{Hardin et~al.(2020)Hardin, Hunzinger, Schuman,
Matthews, Bharadwaj, Varble, Johnson, and Giangrande}}?><label>Hardin et al.(2020)Hardin, Hunzinger, Schuman,
Matthews, Bharadwaj, Varble, Johnson, and Giangrande</label><?label Hardin2020?><mixed-citation>Hardin, J., Hunzinger, A., Schuman, E., Matthews, A., Bharadwaj, N., Varble,
A., Johnson, K., and Giangrande, S.: CACTI Radar b1 Processing: Corrections, Calibrations, and Processing Report, U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research, Technical Report, DOE/SC-ARM-TR-244, 1–55, <uri>https://arm.gov/publications/brochures/doe-sc-arm-tr-244.pdf</uri>, last access: 19 May 2020.</mixed-citation></ref>
      <ref id="bib1.bibx17"><?xmltex \def\ref@label{{Hardin et~al.(2021)Hardin, Bharadwaj, Giangrande, Varble, and
Feng}}?><label>Hardin et al.(2021)Hardin, Bharadwaj, Giangrande, Varble, and
Feng</label><?label taranis?><mixed-citation>Hardin, J., Bharadwaj, N., Giangrande, S., Varble, A., and Feng, Z.: Taranis: Advanced Precipitation and Cloud Products for ARM Radars, in preparation, 2021.</mixed-citation></ref>
      <ref id="bib1.bibx18"><?xmltex \def\ref@label{{{Huang} et~al.(2017){Huang}, {Liu}, {Van Der Maaten}, and
{Weinberger}}}?><label>Huang et al.(2017)Huang, Liu, Van Der Maaten, and
Weinberger</label><?label Huang2017?><mixed-citation>Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q.: Densely Connected Convolutional Networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017, IEEE CVPR, 2261–2269, <ext-link xlink:href="https://doi.org/10.1109/CVPR.2017.243" ext-link-type="DOI">10.1109/CVPR.2017.243</ext-link>, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx19"><?xmltex \def\ref@label{{Hubbert et~al.(2009a)Hubbert, Dixon, Ellis, and
Meymaris}}?><label>Hubbert et al.(2009a)Hubbert, Dixon, Ellis, and
Meymaris</label><?label Hubbert2009a?><mixed-citation>Hubbert, J. C., Dixon, M., Ellis, S. M., and Meymaris, G.: Weather Radar Ground Clutter. Part I: Identification, Modeling, and Simulation, J. Atmos. Ocean. Techn., 26, 1165–1180, <ext-link xlink:href="https://doi.org/10.1175/2009JTECHA1159.1" ext-link-type="DOI">10.1175/2009JTECHA1159.1</ext-link>, 2009a.</mixed-citation></ref>
      <ref id="bib1.bibx20"><?xmltex \def\ref@label{{Hubbert et~al.(2009b)Hubbert, Dixon, and Ellis}}?><label>Hubbert et al.(2009b)Hubbert, Dixon, and Ellis</label><?label Hubbert2009b?><mixed-citation>Hubbert, J. C., Dixon, M., and Ellis, S. M.: Weather Radar Ground Clutter. Part II: Real-Time Identification and Filtering, J. Atmos. Ocean. Techn., 26, 1181–1197, <ext-link xlink:href="https://doi.org/10.1175/2009JTECHA1160.1" ext-link-type="DOI">10.1175/2009JTECHA1160.1</ext-link>, 2009b.</mixed-citation></ref>
      <ref id="bib1.bibx21"><?xmltex \def\ref@label{{{Isola} et~al.(2017){Isola}, {Zhu}, {Zhou}, and {Efros}}}?><label>Isola et al.(2017)Isola, Zhu, Zhou, and Efros</label><?label Isola2017?><mixed-citation>Isola, P., Zhu, J., Zhou, T., and Efros, A. A.: Image-to-Image
Translation with Conditional Adversarial Networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017, IEEE CVPR, 5967–5976, <ext-link xlink:href="https://doi.org/10.1109/CVPR.2017.632" ext-link-type="DOI">10.1109/CVPR.2017.632</ext-link>, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx22"><?xmltex \def\ref@label{{Isom et~al.(2009)Isom, Palmer, Secrest, Rhoton, Saxion, Allmon, Reed, Crum, and Vogt}}?><label>Isom et al.(2009)Isom, Palmer, Secrest, Rhoton, Saxion, Allmon, Reed, Crum, and Vogt</label><?label Isom2009?><mixed-citation>Isom, B., Palmer, R., Secrest, G., Rhoton, R., Saxion, D., Allmon, T., Reed,
J., Crum, T., and Vogt, R.: Detailed observations of wind turbine clutter
with scanning weather radars, J. Atmos. Ocean. Techn., 26, 894–910, <ext-link xlink:href="https://doi.org/10.1175/2008JTECHA1136.1" ext-link-type="DOI">10.1175/2008JTECHA1136.1</ext-link>, 2009.</mixed-citation></ref>
      <ref id="bib1.bibx23"><?xmltex \def\ref@label{{Jain and Seung(2008)}}?><label>Jain and Seung(2008)</label><?label Jain2008?><mixed-citation>Jain, V. and Seung, H.: Natural Image Denoising with Convolutional Networks,
in: Advances in Neural Information Processing Systems 21–Proceedings of the 2008 Conference, NIPS'08, Curran Associates Inc., Red Hook, NY, USA, 769–776, <uri>https://dl.acm.org/doi/10.5555/2981780.2981876</uri> (last access: 22 March 2020), 2008.</mixed-citation></ref>
      <ref id="bib1.bibx24"><?xmltex \def\ref@label{{Jam et~al.(2021)Jam, Kendrick, Walker, Drouard, Hsu, and
Yap}}?><label>Jam et al.(2021)Jam, Kendrick, Walker, Drouard, Hsu, and
Yap</label><?label Jam2021?><mixed-citation>Jam, J., Kendrick, C., Walker, K., Drouard, V., Hsu, J. G.-S., and Yap, M. H.: A comprehensive review of past and present image inpainting methods, Computer Vision and Image Understanding, 203, 103147, <ext-link xlink:href="https://doi.org/10.1016/j.cviu.2020.103147" ext-link-type="DOI">10.1016/j.cviu.2020.103147</ext-link>, 2021.</mixed-citation></ref>
      <ref id="bib1.bibx25"><?xmltex \def\ref@label{{Johnson et~al.(2020)Johnson, Fairless, and Giangrande}}?><label>Johnson et al.(2020)Johnson, Fairless, and Giangrande</label><?label Johnson2020?><mixed-citation>Johnson, K., Fairless, T., and Giangrande, S.: Ka-Band ARM Zenith Radar
Corrections (KAZRCOR, KAZRCFRCOR) Value-Added Products, Technical Report, U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research, 18 pp., available at:
<uri>https://www.arm.gov/publications/tech_reports/doe-sc-arm-tr-203.pdf</uri> (last access: 22 March 2021), 2020.</mixed-citation></ref>
      <ref id="bib1.bibx26"><?xmltex \def\ref@label{{Lang et~al.(2009)Lang, Nesbitt, and Carey}}?><label>Lang et al.(2009)Lang, Nesbitt, and Carey</label><?label Lang2009?><mixed-citation>Lang, T. J., Nesbitt, S. W., and Carey, L. D.: On the Correction of Partial
Beam Blockage in Polarimetric Radar Data, J. Atmos. Ocean. Techn., 26, 943–957, <ext-link xlink:href="https://doi.org/10.1175/2008JTECHA1133.1" ext-link-type="DOI">10.1175/2008JTECHA1133.1</ext-link>, 2009.</mixed-citation></ref>
      <ref id="bib1.bibx27"><?xmltex \def\ref@label{{{Ledig} et~al.(2017){Ledig}, {Theis}, {Huszár}, {Caballero},
{Cunningham}, {Acosta}, {Aitken}, {Tejani}, {Totz}, {Wang}, and
{Shi}}}?><label>Ledig et al.(2017)Ledig, Theis, Huszár, Caballero,
Cunningham, Acosta, Aitken, Tejani, Totz, Wang, and
Shi</label><?label Ledig2017?><mixed-citation>Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A.,
Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., and Shi, W.: Photo-Realistic Single Image Super-Resolution Using a Generative
Adversarial Network, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, IEEE CVPR, Honolulu, HI, USA, 21–26 July 2017, 105–114, <ext-link xlink:href="https://doi.org/10.1109/CVPR.2017.19" ext-link-type="DOI">10.1109/CVPR.2017.19</ext-link>, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx28"><?xmltex \def\ref@label{{Liu et~al.(2016)Liu, DiMego, Guan, Kumar, Keyser, Xu, Nai, Zhang,
Liu, Zhang, Howard, and Ator}}?><label>Liu et al.(2016)Liu, DiMego, Guan, Kumar, Keyser, Xu, Nai, Zhang,
Liu, Zhang, Howard, and Ator</label><?label Liu2016?><mixed-citation>Liu, S., DiMego, G., Guan, S., Kumar, V. K., Keyser, D., Xu, Q., Nai, K.,
Zhang, P., Liu, L., Zhang, J., Howard, K., and Ator, J.: WSR-88D Radar Data
Processing at NCEP, Weather Forecast., 31, 2047–2055,
<ext-link xlink:href="https://doi.org/10.1175/WAF-D-16-0003.1" ext-link-type="DOI">10.1175/WAF-D-16-0003.1</ext-link>, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx29"><?xmltex \def\ref@label{{Manabe and Ihara(1988)}}?><label>Manabe and Ihara(1988)</label><?label Manabe1988?><mixed-citation>Manabe, T. and Ihara, T.: A feasibility study of rain radar for the tropical rainfall measuring mission, 5. Effects of surface clutter on rain
measurements from satellite, J. Radio Res. Lab., 35, 163–181, <uri>http://www.nict.go.jp/publication/journal/35/145/Journal_Vol35_No145_pp163-181.pdf</uri> (last access: 22 March 2021), 1988.</mixed-citation></ref>
      <ref id="bib1.bibx30"><?xmltex \def\ref@label{{Moszkowicz et~al.(1994)Moszkowicz, Ciach, and
Krajewski}}?><label>Moszkowicz et al.(1994)Moszkowicz, Ciach, and
Krajewski</label><?label Moszkowicz1994?><mixed-citation>Moszkowicz, S., Ciach, G. J., and Krajewski, W. F.: Statistical Detection of
Anomalous Propagation in Radar Reflectivity Patterns, J. Atmos. Ocean. Techn., 11, 1026–1034,
<ext-link xlink:href="https://doi.org/10.1175/1520-0426(1994)011&lt;1026:SDOAPI&gt;2.0.CO;2" ext-link-type="DOI">10.1175/1520-0426(1994)011&lt;1026:SDOAPI&gt;2.0.CO;2</ext-link>, 1994.</mixed-citation></ref>
      <ref id="bib1.bibx31"><?xmltex \def\ref@label{{Odena et~al.(2016)Odena, Dumoulin, and Olah}}?><label>Odena et al.(2016)Odena, Dumoulin, and Olah</label><?label Odena2016?><mixed-citation>Odena, A., Dumoulin, V., and Olah, C.: Deconvolution and Checkerboard
Artifacts, Distill, <ext-link xlink:href="https://doi.org/10.23915/distill.00003" ext-link-type="DOI">10.23915/distill.00003</ext-link>, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx32"><?xmltex \def\ref@label{{Pathak et~al.(2016)Pathak, Krahenbuhl, Donahue, Darrell, and
Efros}}?><label>Pathak et al.(2016)Pathak, Krahenbuhl, Donahue, Darrell, and
Efros</label><?label Pathak2016?><mixed-citation>Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A. A.: Context Encoders: Feature Learning by Inpainting, in: 2016 IEEE Conference on
Computer Vision and Pattern Recognition, IEEE CVPR, Las Vegas, NV, USA, 27–30 June 2016, 2536–2544, <ext-link xlink:href="https://doi.org/10.1109/CVPR.2016.278" ext-link-type="DOI">10.1109/CVPR.2016.278</ext-link>, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx33"><?xmltex \def\ref@label{{Prudden et~al.(2020)Prudden, Adams, Kangin, Robinson, Ravuri,
Mohamed, and Arribas}}?><label>Prudden et al.(2020)Prudden, Adams, Kangin, Robinson, Ravuri,
Mohamed, and Arribas</label><?label Prudden2020?><mixed-citation>Prudden, R., Adams, S., Kangin, D., Robinson, N., Ravuri, S., Mohamed, S., and Arribas, A.: A review of radar-based nowcasting of precipitation and
applicable machine learning techniques, arXiv [preprint], <ext-link xlink:href="https://arxiv.org/abs/2005.04988">arXiv:2005.04988</ext-link>, 11 May 2020.</mixed-citation></ref>
      <ref id="bib1.bibx34"><?xmltex \def\ref@label{{Ronneberger et~al.(2015)Ronneberger, Fischer, and
Brox}}?><label>Ronneberger et al.(2015)Ronneberger, Fischer, and
Brox</label><?label Ronneberger2015?><mixed-citation>Ronneberger, O., Fischer, P., and Brox, T.: U-Net: Convolutional Networks for
Biomedical Image Segmentation, in: Medical Image Computing and
Computer-Assisted Intervention, MICCAI, 9351, 234–241,
<ext-link xlink:href="https://doi.org/10.1007/978-3-319-24574-4_28" ext-link-type="DOI">10.1007/978-3-319-24574-4_28</ext-link>, 2015.</mixed-citation></ref>
      <ref id="bib1.bibx35"><?xmltex \def\ref@label{{{Shih} et~al.(2003){Shih}, {Liang-Chen Lu}, {Ying-Hong Wang}, and
{Rong-Chi Chang}}}?><label>Shih et al.(2003)Shih, Liang-Chen Lu, Ying-Hong Wang, and
Rong-Chi Chang</label><?label Shih2003?><mixed-citation>Shih, T. K., Lu, L.-C., Wang, Y.-H., and <?pagebreak page7747?>Chang, R.-C.:
Multi-resolution image inpainting, in: 2003 International Conference on
Multimedia and Expo, ICME'03, Proceedings, Baltimore, MD, USA, 6–9 July 2003, 485, <ext-link xlink:href="https://doi.org/10.1109/ICME.2003.1220960" ext-link-type="DOI">10.1109/ICME.2003.1220960</ext-link>, 2003.</mixed-citation></ref>
      <ref id="bib1.bibx36"><?xmltex \def\ref@label{{{Tagawa} and {Okamoto}(2003)}}?><label>Tagawa and Okamoto(2003)</label><?label Tagawa2003?><mixed-citation>Tagawa, T. and Okamoto, K.: Calculations of surface clutter interference
with precipitation measurement from space by 35.5 GHz radar for Global
Precipitation Measurement Mission, in: 2003 IEEE International
Geoscience and Remote Sensing Symposium, IGARSS, Proceedings, Toulouse, France, Int. Geosci. Remote. Se., 21–25 July 2003, 5, 3172–3174, <ext-link xlink:href="https://doi.org/10.1109/IGARSS.2003.1294719" ext-link-type="DOI">10.1109/IGARSS.2003.1294719</ext-link>, 2003.</mixed-citation></ref>
      <ref id="bib1.bibx37"><?xmltex \def\ref@label{{Telea(2004)}}?><label>Telea(2004)</label><?label Telea2004?><mixed-citation>Telea, A.: An Image Inpainting Technique Based on the Fast Marching Method,
Journal of Graphics Tools, 9, 23–34, <ext-link xlink:href="https://doi.org/10.1080/10867651.2004.10487596" ext-link-type="DOI">10.1080/10867651.2004.10487596</ext-link>, 2004.</mixed-citation></ref>
      <ref id="bib1.bibx38"><?xmltex \def\ref@label{{Varble et~al.(2018)Varble, Nesbitt, Salio, Zipser, van~den Heever,
McFarquhar, Kollias, Kreidenweis, DeMott, Jensen, Houze, Rasmussen, Leung,
Romps, Gochis, Avila, Williams, and Borque}}?><label>Varble et al.(2018)Varble, Nesbitt, Salio, Zipser, van den Heever,
McFarquhar, Kollias, Kreidenweis, DeMott, Jensen, Houze, Rasmussen, Leung,
Romps, Gochis, Avila, Williams, and Borque</label><?label Varble2018?><mixed-citation>Varble, A., Nesbitt, S., Salio, P., Zipser, E., van den Heever, S., McFarquhar, G., Kollias, P., Kreidenweis, S., DeMott, P., Jensen, M., Houze Jr., R., Rasmussen, K., Leung, R., Romps, D., Gochis, D., Avila, E., Williams, C. R., and Borque, P.: Cloud, Aerosol, and Complex Terrain Interactions (CACTI), Science Plan, DOE Technical Report, DOE/SC-ARM-17-004, <uri>https://www.arm.gov/publications/programdocs/doe-sc-arm-17-004.pdf</uri> (last access: 15 May 2020), 2018.</mixed-citation></ref>
      <ref id="bib1.bibx39"><?xmltex \def\ref@label{{Veillette et~al.(2018)Veillette, Hassey, Mattioli, Iskenderian, and
Lamey}}?><label>Veillette et al.(2018)Veillette, Hassey, Mattioli, Iskenderian, and
Lamey</label><?label Veillette2018?><mixed-citation>Veillette, M. S., Hassey, E. P., Mattioli, C. J., Iskenderian, H., and Lamey,
P. M.: Creating Synthetic Radar Imagery Using Convolutional Neural Networks, J. Atmos. Ocean. Techn., 35, 2323–2338, <ext-link xlink:href="https://doi.org/10.1175/JTECH-D-18-0010.1" ext-link-type="DOI">10.1175/JTECH-D-18-0010.1</ext-link>, 2018.</mixed-citation></ref>
      <ref id="bib1.bibx40"><?xmltex \def\ref@label{{Westrick et~al.(1999)Westrick, Mass, and Colle}}?><label>Westrick et al.(1999)Westrick, Mass, and Colle</label><?label Westrick1999?><mixed-citation>Westrick, K. J., Mass, C. F., and Colle, B. A.: The Limitations of the WSR-88D Radar Network for Quantitative Precipitation Measurement over the Coastal Western United States, B. Am. Meteorol. Soc., 80, 2289–2298, <ext-link xlink:href="https://doi.org/10.1175/1520-0477(1999)080&lt;2289:TLOTWR&gt;2.0.CO;2" ext-link-type="DOI">10.1175/1520-0477(1999)080&lt;2289:TLOTWR&gt;2.0.CO;2</ext-link>, 1999.</mixed-citation></ref>
      <ref id="bib1.bibx41"><?xmltex \def\ref@label{{Widener et~al.(2012)Widener, Bharadwaj, and Johnson}}?><label>Widener et al.(2012)Widener, Bharadwaj, and Johnson</label><?label Widener2012?><mixed-citation>Widener, K., Bharadwaj, N., and Johnson, K.: Ka-Band ARM Zenith Radar (KAZR)
Instrument Handbook, DOE Office of Science Atmospheric Radiation Measurement (ARM) Program, United States, Technical Report, DOE/SC-ARM/TR-106, <ext-link xlink:href="https://doi.org/10.2172/1035855" ext-link-type="DOI">10.2172/1035855</ext-link>, 2012.</mixed-citation></ref>
      <ref id="bib1.bibx42"><?xmltex \def\ref@label{{{Yang} et~al.(2017){Yang}, {Lu}, {Lin}, {Shechtman}, {Wang}, and
{Li}}}?><label>Yang et al.(2017)Yang, Lu, Lin, Shechtman, Wang, and
Li</label><?label Yang2017?><mixed-citation>Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H.:
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis,
in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017, IEEE CVPR, 4076–4084, <ext-link xlink:href="https://doi.org/10.1109/CVPR.2017.434" ext-link-type="DOI">10.1109/CVPR.2017.434</ext-link>, 2017.</mixed-citation></ref>
      <ref id="bib1.bibx43"><?xmltex \def\ref@label{{Young et~al.(1999)Young, Nelson, Bradley, Smith, Peters-Lidard,
Kruger, and Baeck}}?><label>Young et al.(1999)Young, Nelson, Bradley, Smith, Peters-Lidard,
Kruger, and Baeck</label><?label Young1999?><mixed-citation>Young, C. B., Nelson, B. R., Bradley, A. A., Smith, J. A., Peters-Lidard,
C. D., Kruger, A., and Baeck, M. L.: An evaluation of NEXRAD precipitation
estimates in complex terrain, J. Geophys. Res.-Atmos., 104, 19691–19703, <ext-link xlink:href="https://doi.org/10.1029/1999JD900123" ext-link-type="DOI">10.1029/1999JD900123</ext-link>, 1999.</mixed-citation></ref>
      <ref id="bib1.bibx44"><?xmltex \def\ref@label{{Zhang et~al.(2011)Zhang, Howard, Langston, Vasiloff, Kaney, Arthur,
Cooten, Kelleher, Kitzmiller, Ding, Seo, Wells, and Dempsey}}?><label>Zhang et al.(2011)Zhang, Howard, Langston, Vasiloff, Kaney, Arthur,
Cooten, Kelleher, Kitzmiller, Ding, Seo, Wells, and Dempsey</label><?label Zhang2011?><mixed-citation>Zhang, J., Howard, K., Langston, C., Vasiloff, S., Kaney, B., Arthur, A.,
Cooten, S. V., Kelleher, K., Kitzmiller, D., Ding, F., Seo, D.-J., Wells, E., and Dempsey, C.: National Mosaic and Multi-Sensor QPE (NMQ) System:
Description, Results, and Future Plans, B. Am. Meteorol. Soc., 92, 1321–1338, <ext-link xlink:href="https://doi.org/10.1175/2011BAMS-D-11-00047.1" ext-link-type="DOI">10.1175/2011BAMS-D-11-00047.1</ext-link>, 2011.
</mixed-citation></ref><?xmltex \hack{\newpage}?>
      <ref id="bib1.bibx45"><?xmltex \def\ref@label{{Zhang et~al.(2013)Zhang, Zrnic, and Ryzhkov}}?><label>Zhang et al.(2013)Zhang, Zrnic, and Ryzhkov</label><?label Zhang2013?><mixed-citation>Zhang, P., Zrnic, D., and Ryzhkov, A.: Partial Beam Blockage Correction Using
Polarimetric Radar Measurements, J. Atmos. Ocean. Techn., 30, 861–872, <ext-link xlink:href="https://doi.org/10.1175/JTECH-D-12-00075.1" ext-link-type="DOI">10.1175/JTECH-D-12-00075.1</ext-link>, 2013.</mixed-citation></ref>
      <ref id="bib1.bibx46"><?xmltex \def\ref@label{{Zhou et~al.(2018)Zhou, Siddiquee, Tajbakhsh, and Liang}}?><label>Zhou et al.(2018)Zhou, Siddiquee, Tajbakhsh, and Liang</label><?label Zhou2018?><mixed-citation>Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., and Liang, J.: UNet++: A Nested
U-Net Architecture for Medical Image Segmentation, in: Deep Learning in
Medical Image Analysis and Multimodal Learning for Clinical Decision Support, DLMIA, 11045, 3–11, <ext-link xlink:href="https://doi.org/10.1007/978-3-030-00889-5_1" ext-link-type="DOI">10.1007/978-3-030-00889-5_1</ext-link>, 2018.</mixed-citation></ref>

  </ref-list></back>
    <!--<article-title-html>Inpainting radar missing data regions with deep learning</article-title-html>
<abstract-html/>
<ref-html id="bib1.bib1"><label>Agrawal et al.(2019)Agrawal, Barrington, Bromberg, Burge, Gazen, and
Hickey</label><mixed-citation>
Agrawal, S., Barrington, L., Bromberg, C., Burge, J., Gazen, C., and Hickey,
J.: Machine learning for precipitation nowcasting from radar images, arXiv [preprint], <a href="https://arxiv.org/abs/1912.12132" target="_blank">arXiv:1912.12132</a>, 11 December 2019.
</mixed-citation></ref-html>
<ref-html id="bib1.bib2"><label>Arjovsky et al.(2017)Arjovsky, Chintala, and Bottou</label><mixed-citation>
Arjovsky, M., Chintala, S., and Bottou, L.: Wasserstein generative adversarial networks, in: International conference on machine learning, PMLR, 70, 214–223, available at: <a href="https://proceedings.mlr.press/v70/arjovsky17a.html" target="_blank"/> (last access: 22 March 2020), 2017.
</mixed-citation></ref-html>
<ref-html id="bib1.bib3"><label>Bertalmio et al.(2000)Bertalmio, Sapiro, Caselles, and
Ballester</label><mixed-citation>
Bertalmio, M., Sapiro, G., Caselles, V., and Ballester, C.: Image Inpainting,
in: Proceedings of the 27th annual conference on Computer graphics and
interactive techniques, SIGGRAPH '00, Addison-Wesley Publishing Co., USA, 417–424, <a href="https://doi.org/10.1145/344779.344972" target="_blank">https://doi.org/10.1145/344779.344972</a>, 2000.
</mixed-citation></ref-html>
<ref-html id="bib1.bib4"><label>Bertalmio et al.(2001)Bertalmio, Bertozzi, and
Sapiro</label><mixed-citation>
Bertalmio, M., Bertozzi, A. L., and Sapiro, G.: Navier-stokes, fluid
dynamics, and image and video inpainting, in: Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001, IEEE CVPR, 1, <a href="https://doi.org/10.1109/CVPR.2001.990497" target="_blank">https://doi.org/10.1109/CVPR.2001.990497</a>, 2001.
</mixed-citation></ref-html>
<ref-html id="bib1.bib5"><label>Bugeau et al.(2010)Bugeau, Bertalmío, Caselles, and
Sapiro</label><mixed-citation>
Bugeau, A., Bertalmío, M., Caselles, V., and Sapiro, G.: A Comprehensive
Framework for Image Inpainting, IEEE T. Image Process., 19,
2634–2645, <a href="https://doi.org/10.1109/TIP.2010.2049240" target="_blank">https://doi.org/10.1109/TIP.2010.2049240</a>, 2010.
</mixed-citation></ref-html>
<ref-html id="bib1.bib6"><label>Criminisi et al.(2004)Criminisi, Perez, and
Toyama</label><mixed-citation>
Criminisi, A., Perez, P., and Toyama, K.: Region filling and object
removal by exemplar-based image inpainting, IEEE T. Image Process., 13, 1200–1212, <a href="https://doi.org/10.1109/TIP.2004.833105" target="_blank">https://doi.org/10.1109/TIP.2004.833105</a>, 2004.
</mixed-citation></ref-html>
<ref-html id="bib1.bib7"><label>Efros and Leung(1999)</label><mixed-citation>
Efros, A. A. and Leung, T. K.: Texture Synthesis by Non-parametric Sampling,
IEEE I. Conf. Comp. Vis., 2, 1033–1038, <a href="https://doi.org/10.1109/ICCV.1999.790383" target="_blank">https://doi.org/10.1109/ICCV.1999.790383</a>, 1999.
</mixed-citation></ref-html>
<ref-html id="bib1.bib8"><label>Elharrouss et al.(2020)Elharrouss, Almaadeed, Al-Maadeed, and
Akbari</label><mixed-citation>
Elharrouss, O., Almaadeed, N., Al-Maadeed, S., and Akbari, Y.: Image
Inpainting: A Review, Neural Process. Lett., 51, 2007–2028,
<a href="https://doi.org/10.1007/s11063-019-10163-0" target="_blank">https://doi.org/10.1007/s11063-019-10163-0</a>, 2020.
</mixed-citation></ref-html>
<ref-html id="bib1.bib9"><label>Feng et al.(2018)Feng, Leung, Houze Jr, Hagos, Hardin, Yang, Han, and Fan</label><mixed-citation>
Feng, Z., Leung, L. R., Houze Jr., R. A., Hagos, S., Hardin, J., Yang, Q., Han, B., and Fan, J.: Structure and evolution of mesoscale convective systems: Sensitivity to cloud microphysics in convection-permitting simulations over the United States, J. Adv. Model. Earth Sy., 10, 1470–1494, <a href="https://doi.org/10.1029/2018MS001305" target="_blank">https://doi.org/10.1029/2018MS001305</a>, 2018.
</mixed-citation></ref-html>
<ref-html id="bib1.bib10"><label>Geiss and Hardin(2021a)</label><mixed-citation>
Geiss, A. and Hardin, J. C.: avgeiss/radar_inpainting: AMT Supplementary Code, Zenodo [code], <a href="https://doi.org/10.5281/zenodo.5643624" target="_blank">https://doi.org/10.5281/zenodo.5643624</a>, 2021a.
</mixed-citation></ref-html>
<ref-html id="bib1.bib11"><label>Geiss and Hardin(2021b)</label><mixed-citation>
Geiss, A. and Hardin, J. C.: Additional Inpainting Examples, Zenodo [code], https://doi.org/10.5281/zenodo.5744857, 2021b.
</mixed-citation></ref-html>
<ref-html id="bib1.bib12"><label>Goodfellow et al.(2014)Goodfellow, Pouget-Abadie, Mirza, Xu,
Warde-Farley, Ozair, Courville, and Bengio</label><mixed-citation>
Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D.,
Ozair, S., Courville, A., and Bengio, Y.: Generative Adversarial Networks,
arXiv, <a href="https://arxiv.org/abs/1406.2661" target="_blank">arXiv:1406.2661</a>, 10 June 2014.
</mixed-citation></ref-html>
<ref-html id="bib1.bib13"><label>Guillemot and Le Meur(2014)</label><mixed-citation>
Guillemot, C. and Le Meur, O.: Image Inpainting : Overview and Recent
Advances, IEEE Signal Proc. Mag., 31, 127–144,
<a href="https://doi.org/10.1109/MSP.2013.2273004" target="_blank">https://doi.org/10.1109/MSP.2013.2273004</a>, 2014.
</mixed-citation></ref-html>
<ref-html id="bib1.bib14"><label>Hardin et al.(2019a)Hardin, Hunzinger, Schuman, Matthews, Bharadwaj, Varble, Johnson, Giangrande</label><mixed-citation>
Hardin, J., Hunzinger, A., Schuman, E., Matthews, A., Bharadwaj, N., Varble, A., Johnson, K., and Giangrande, S.: Ka ARM Zenith Radar (KAZRCFRGEQC), Atmospheric Radiation Measurement (ARM) User Facility, ARM [dataset], <a href="https://doi.org/10.5439/1615726" target="_blank">https://doi.org/10.5439/1615726</a>, 2019a.
</mixed-citation></ref-html>
<ref-html id="bib1.bib15"><label>Hardin et al.(2019b)Hardin, Hunzinger, Schuman, Matthews, Bharadwaj, Varble, Johnson, Giangrande</label><mixed-citation>
Hardin, J., Hunzinger, A., Schuman, E., Matthews, A., Bharadwaj, N., Varble, A., Johnson, K., and Giangrande, S: C-Band Scanning ARM Precipitation Radar (CSAPR2CFRPPIQC), Atmospheric Radiation Measurement (ARM) User Facility, ARM [dataset], <a href="https://doi.org/10.5439/1615604" target="_blank">https://doi.org/10.5439/1615604</a>, 2019b.
</mixed-citation></ref-html>
<ref-html id="bib1.bib16"><label>Hardin et al.(2020)Hardin, Hunzinger, Schuman,
Matthews, Bharadwaj, Varble, Johnson, and Giangrande</label><mixed-citation>
Hardin, J., Hunzinger, A., Schuman, E., Matthews, A., Bharadwaj, N., Varble,
A., Johnson, K., and Giangrande, S.: CACTI Radar b1 Processing: Corrections, Calibrations, and Processing Report, U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research, Technical Report, DOE/SC-ARM-TR-244, 1–55, <a href="https://arm.gov/publications/brochures/doe-sc-arm-tr-244.pdf" target="_blank"/>, last access: 19 May 2020.
</mixed-citation></ref-html>
<ref-html id="bib1.bib17"><label>Hardin et al.(2021)Hardin, Bharadwaj, Giangrande, Varble, and
Feng</label><mixed-citation>
Hardin, J., Bharadwaj, N., Giangrande, S., Varble, A., and Feng, Z.: Taranis:
Advanced Precipitation and Cloud Products for ARM Radars, in preparation, 2021.
</mixed-citation></ref-html>
<ref-html id="bib1.bib18"><label>Huang et al.(2017)Huang, Liu, Van Der Maaten, and
Weinberger</label><mixed-citation>
Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K. Q.: Densely Connected Convolutional Networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017, IEEE CVPR, 2261–2269, <a href="https://doi.org/10.1109/CVPR.2017.243" target="_blank">https://doi.org/10.1109/CVPR.2017.243</a>, 2017.
</mixed-citation></ref-html>
<ref-html id="bib1.bib19"><label>Hubbert et al.(2009a)Hubbert, Dixon, Ellis, and
Meymaris</label><mixed-citation>
Hubbert, J. C., Dixon, M., Ellis, S. M., and Meymaris, G.: Weather Radar Ground Clutter. Part I: Identification, Modeling, and Simulation, J. Atmos. Ocean. Techn., 26, 1165–1180, <a href="https://doi.org/10.1175/2009JTECHA1159.1" target="_blank">https://doi.org/10.1175/2009JTECHA1159.1</a>, 2009a.
</mixed-citation></ref-html>
<ref-html id="bib1.bib20"><label>Hubbert et al.(2009b)Hubbert, Dixon, and Ellis</label><mixed-citation>
Hubbert, J. C., Dixon, M., and Ellis, S. M.: Weather Radar Ground Clutter. Part II: Real-Time Identification and Filtering, J. Atmos. Ocean. Techn., 26, 1181–1197, <a href="https://doi.org/10.1175/2009JTECHA1160.1" target="_blank">https://doi.org/10.1175/2009JTECHA1160.1</a>, 2009b.
</mixed-citation></ref-html>
<ref-html id="bib1.bib21"><label>Isola et al.(2017)Isola, Zhu, Zhou, and Efros</label><mixed-citation>
Isola, P., Zhu, J., Zhou, T., and Efros, A. A.: Image-to-Image
Translation with Conditional Adversarial Networks, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017, IEEE CVPR, 5967–5976, <a href="https://doi.org/10.1109/CVPR.2017.632" target="_blank">https://doi.org/10.1109/CVPR.2017.632</a>, 2017.
</mixed-citation></ref-html>
<ref-html id="bib1.bib22"><label>Isom et al.(2009)Isom, Palmer, Secrest, Rhoton, Saxion, Allmon, Reed, Crum, and Vogt</label><mixed-citation>
Isom, B., Palmer, R., Secrest, G., Rhoton, R., Saxion, D., Allmon, T., Reed,
J., Crum, T., and Vogt, R.: Detailed observations of wind turbine clutter
with scanning weather radars, J. Atmos. Ocean. Techn., 26, 894–910, <a href="https://doi.org/10.1175/2008JTECHA1136.1" target="_blank">https://doi.org/10.1175/2008JTECHA1136.1</a>, 2009.
</mixed-citation></ref-html>
<ref-html id="bib1.bib23"><label>Jain and Seung(2008)</label><mixed-citation>
Jain, V. and Seung, H.: Natural Image Denoising with Convolutional Networks,
in: Advances in Neural Information Processing Systems 21–Proceedings of the 2008 Conference, NIPS'08, Curran Associates Inc., Red Hook, NY, USA, 769–776, <a href="https://dl.acm.org/doi/10.5555/2981780.2981876" target="_blank">https://dl.acm.org/doi/10.5555/2981780.2981876</a> (last access: 22 March 2020), 2008.
</mixed-citation></ref-html>
<ref-html id="bib1.bib24"><label>Jam et al.(2021)Jam, Kendrick, Walker, Drouard, Hsu, and
Yap</label><mixed-citation>
Jam, J., Kendrick, C., Walker, K., Drouard, V., Hsu, J. G.-S., and Yap, M. H.: A comprehensive review of past and present image inpainting methods, Computer Vision and Image Understanding, 203, 103147, <a href="https://doi.org/10.1016/j.cviu.2020.103147" target="_blank">https://doi.org/10.1016/j.cviu.2020.103147</a>, 2021.
</mixed-citation></ref-html>
<ref-html id="bib1.bib25"><label>Johnson et al.(2020)Johnson, Fairless, and Giangrande</label><mixed-citation>
Johnson, K., Fairless, T., and Giangrande, S.: Ka-Band ARM Zenith Radar
Corrections (KAZRCOR, KAZRCFRCOR) Value-Added Products, Technical Report, U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research, 18 pp., available at:
<a href="https://www.arm.gov/publications/tech_reports/doe-sc-arm-tr-203.pdf" target="_blank">https://www.arm.gov/publications/tech_reports/doe-sc-arm-tr-203.pdf</a> (last access: 22 March 2021), 2020.
</mixed-citation></ref-html>
<ref-html id="bib1.bib26"><label>Lang et al.(2009)Lang, Nesbitt, and Carey</label><mixed-citation>
Lang, T. J., Nesbitt, S. W., and Carey, L. D.: On the Correction of Partial
Beam Blockage in Polarimetric Radar Data, J. Atmos. Ocean. Techn., 26, 943–957, <a href="https://doi.org/10.1175/2008JTECHA1133.1" target="_blank">https://doi.org/10.1175/2008JTECHA1133.1</a>, 2009.
</mixed-citation></ref-html>
<ref-html id="bib1.bib27"><label>Ledig et al.(2017)Ledig, Theis, Huszár, Caballero,
Cunningham, Acosta, Aitken, Tejani, Totz, Wang, and
Shi</label><mixed-citation>
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A.,
Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., and Shi, W.: Photo-Realistic Single Image Super-Resolution Using a Generative
Adversarial Network, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, IEEE CVPR, Honolulu, HI, USA, 21–26 July 2017, 105–114, <a href="https://doi.org/10.1109/CVPR.2017.19" target="_blank">https://doi.org/10.1109/CVPR.2017.19</a>, 2017.
</mixed-citation></ref-html>
<ref-html id="bib1.bib28"><label>Liu et al.(2016)Liu, DiMego, Guan, Kumar, Keyser, Xu, Nai, Zhang,
Liu, Zhang, Howard, and Ator</label><mixed-citation>
Liu, S., DiMego, G., Guan, S., Kumar, V. K., Keyser, D., Xu, Q., Nai, K.,
Zhang, P., Liu, L., Zhang, J., Howard, K., and Ator, J.: WSR-88D Radar Data
Processing at NCEP, Weather Forecast., 31, 2047–2055,
<a href="https://doi.org/10.1175/WAF-D-16-0003.1" target="_blank">https://doi.org/10.1175/WAF-D-16-0003.1</a>, 2016.
</mixed-citation></ref-html>
<ref-html id="bib1.bib29"><label>Manabe and Ihara(1988)</label><mixed-citation>
Manabe, T. and Ihara, T.: A feasibility study of rain radar for the tropical rainfall measuring mission, 5. Effects of surface clutter on rain
measurements from satellite, J. Radio Res. Lab., 35, 163–181, <a href="http://www.nict.go.jp/publication/journal/35/145/Journal_Vol35_No145_pp163-181.pdf" target="_blank">http://www.nict.go.jp/publication/journal/35/145/Journal_Vol35_No145_pp163-181.pdf</a> (last access: 22 March 2021), 1988.
</mixed-citation></ref-html>
<ref-html id="bib1.bib30"><label>Moszkowicz et al.(1994)Moszkowicz, Ciach, and
Krajewski</label><mixed-citation>
Moszkowicz, S., Ciach, G. J., and Krajewski, W. F.: Statistical Detection of
Anomalous Propagation in Radar Reflectivity Patterns, J. Atmos. Ocean. Techn., 11, 1026–1034,
<a href="https://doi.org/10.1175/1520-0426(1994)011&lt;1026:SDOAPI&gt;2.0.CO;2" target="_blank">https://doi.org/10.1175/1520-0426(1994)011&lt;1026:SDOAPI&gt;2.0.CO;2</a>, 1994.
</mixed-citation></ref-html>
<ref-html id="bib1.bib31"><label>Odena et al.(2016)Odena, Dumoulin, and Olah</label><mixed-citation>
Odena, A., Dumoulin, V., and Olah, C.: Deconvolution and Checkerboard
Artifacts, Distill, <a href="https://doi.org/10.23915/distill.00003" target="_blank">https://doi.org/10.23915/distill.00003</a>, 2016.
</mixed-citation></ref-html>
<ref-html id="bib1.bib32"><label>Pathak et al.(2016)Pathak, Krahenbuhl, Donahue, Darrell, and
Efros</label><mixed-citation>
Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A. A.: Context Encoders: Feature Learning by Inpainting, in: 2016 IEEE Conference on
Computer Vision and Pattern Recognition, IEEE CVPR, Las Vegas, NV, USA, 27–30 June 2016, 2536–2544, <a href="https://doi.org/10.1109/CVPR.2016.278" target="_blank">https://doi.org/10.1109/CVPR.2016.278</a>, 2016.
</mixed-citation></ref-html>
<ref-html id="bib1.bib33"><label>Prudden et al.(2020)Prudden, Adams, Kangin, Robinson, Ravuri,
Mohamed, and Arribas</label><mixed-citation>
Prudden, R., Adams, S., Kangin, D., Robinson, N., Ravuri, S., Mohamed, S., and Arribas, A.: A review of radar-based nowcasting of precipitation and
applicable machine learning techniques, arXiv [preprint], <a href="https://arxiv.org/abs/2005.04988" target="_blank">arXiv:2005.04988</a>, 11 May 2020.
</mixed-citation></ref-html>
<ref-html id="bib1.bib34"><label>Ronneberger et al.(2015)Ronneberger, Fischer, and
Brox</label><mixed-citation>
Ronneberger, O., Fischer, P., and Brox, T.: U-Net: Convolutional Networks for
Biomedical Image Segmentation, in: Medical Image Computing and
Computer-Assisted Intervention, MICCAI, 9351, 234–241,
<a href="https://doi.org/10.1007/978-3-319-24574-4_28" target="_blank">https://doi.org/10.1007/978-3-319-24574-4_28</a>, 2015.
</mixed-citation></ref-html>
<ref-html id="bib1.bib35"><label>Shih et al.(2003)Shih, Lu, Wang, and
Chang</label><mixed-citation>
Shih, T. K., Lu, L.-C., Wang, Y.-H., and Chang, R.-C.:
Multi-resolution image inpainting, in: 2003 International Conference on
Multimedia and Expo, ICME'03, Proceedings, Baltimore, MD, USA, 6–9 July 2003, 485, <a href="https://doi.org/10.1109/ICME.2003.1220960" target="_blank">https://doi.org/10.1109/ICME.2003.1220960</a>, 2003.
</mixed-citation></ref-html>
<ref-html id="bib1.bib36"><label>Tagawa and Okamoto(2003)</label><mixed-citation>
Tagawa, T. and Okamoto, K.: Calculations of surface clutter interference
with precipitation measurement from space by 35.5&thinsp;GHz radar for Global
Precipitation Measurement Mission, in: 2003 IEEE International
Geoscience and Remote Sensing Symposium, IGARSS, Proceedings, Toulouse, France, Int. Geosci. Remote. Se., 21–25 July 2003, 5, 3172–3174, <a href="https://doi.org/10.1109/IGARSS.2003.1294719" target="_blank">https://doi.org/10.1109/IGARSS.2003.1294719</a>, 2003.
</mixed-citation></ref-html>
<ref-html id="bib1.bib37"><label>Telea(2004)</label><mixed-citation>
Telea, A.: An Image Inpainting Technique Based on the Fast Marching Method,
Journal of Graphics Tools, 9, 23–34, <a href="https://doi.org/10.1080/10867651.2004.10487596" target="_blank">https://doi.org/10.1080/10867651.2004.10487596</a>, 2004.
</mixed-citation></ref-html>
<ref-html id="bib1.bib38"><label>Varble et al.(2018)Varble, Nesbitt, Salio, Zipser, van den Heever,
McFarquhar, Kollias, Kreidenweis, DeMott, Jensen, Houze, Rasmussen, Leung,
Romps, Gochis, Avila, Williams, and Borque</label><mixed-citation>
Varble, A., Nesbitt, S., Salio, P., Zipser, E., van den Heever, S., McFarquhar, G., Kollias, P., Kreidenweis, S., DeMott, P., Jensen, M., Houze Jr., R., Rasmussen, K., Leung, R., Romps, D., Gochis, D., Avila, E., Williams, C. R., and Borque, P.: Cloud, Aerosol, and Complex Terrain Interactions (CACTI), Science Plan, DOE Technical Report, DOE/SC-ARM-17-004, <a href="https://www.arm.gov/publications/programdocs/doe-sc-arm-17-004.pdf" target="_blank">https://www.arm.gov/publications/programdocs/doe-sc-arm-17-004.pdf</a> (last access: 15 May 2020), 2018.
</mixed-citation></ref-html>
<ref-html id="bib1.bib39"><label>Veillette et al.(2018)Veillette, Hassey, Mattioli, Iskenderian, and
Lamey</label><mixed-citation>
Veillette, M. S., Hassey, E. P., Mattioli, C. J., Iskenderian, H., and Lamey,
P. M.: Creating Synthetic Radar Imagery Using Convolutional Neural Networks, J. Atmos. Ocean. Techn., 35, 2323–2338, <a href="https://doi.org/10.1175/JTECH-D-18-0010.1" target="_blank">https://doi.org/10.1175/JTECH-D-18-0010.1</a>, 2018.
</mixed-citation></ref-html>
<ref-html id="bib1.bib40"><label>Westrick et al.(1999)Westrick, Mass, and Colle</label><mixed-citation>
Westrick, K. J., Mass, C. F., and Colle, B. A.: The Limitations of the WSR-88D Radar Network for Quantitative Precipitation Measurement over the Coastal Western United States, B. Am. Meteorol. Soc., 80, 2289–2298, <a href="https://doi.org/10.1175/1520-0477(1999)080&lt;2289:TLOTWR&gt;2.0.CO;2" target="_blank">https://doi.org/10.1175/1520-0477(1999)080&lt;2289:TLOTWR&gt;2.0.CO;2</a>, 1999.
</mixed-citation></ref-html>
<ref-html id="bib1.bib41"><label>Widener et al.(2012)Widener, Bharadwaj, and Johnson</label><mixed-citation>
Widener, K., Bharadwaj, N., and Johnson, K.: Ka-Band ARM Zenith Radar (KAZR)
Instrument Handbook, DOE Office of Science Atmospheric Radiation Measurement (ARM) Program, United States, Technical Report, DOE/SC-ARM/TR-106, <a href="https://doi.org/10.2172/1035855" target="_blank">https://doi.org/10.2172/1035855</a>, 2012.
</mixed-citation></ref-html>
<ref-html id="bib1.bib42"><label>Yang et al.(2017)Yang, Lu, Lin, Shechtman, Wang, and
Li</label><mixed-citation>
Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., and Li, H.:
High-Resolution Image Inpainting Using Multi-scale Neural Patch Synthesis,
in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017, IEEE CVPR, 4076–4084, <a href="https://doi.org/10.1109/CVPR.2017.434" target="_blank">https://doi.org/10.1109/CVPR.2017.434</a>, 2017.
</mixed-citation></ref-html>
<ref-html id="bib1.bib43"><label>Young et al.(1999)Young, Nelson, Bradley, Smith, Peters-Lidard,
Kruger, and Baeck</label><mixed-citation>
Young, C. B., Nelson, B. R., Bradley, A. A., Smith, J. A., Peters-Lidard,
C. D., Kruger, A., and Baeck, M. L.: An evaluation of NEXRAD precipitation
estimates in complex terrain, J. Geophys. Res.-Atmos., 104, 19691–19703, <a href="https://doi.org/10.1029/1999JD900123" target="_blank">https://doi.org/10.1029/1999JD900123</a>, 1999.
</mixed-citation></ref-html>
<ref-html id="bib1.bib44"><label>Zhang et al.(2011)Zhang, Howard, Langston, Vasiloff, Kaney, Arthur,
Cooten, Kelleher, Kitzmiller, Ding, Seo, Wells, and Dempsey</label><mixed-citation>
Zhang, J., Howard, K., Langston, C., Vasiloff, S., Kaney, B., Arthur, A.,
Cooten, S. V., Kelleher, K., Kitzmiller, D., Ding, F., Seo, D.-J., Wells, E., and Dempsey, C.: National Mosaic and Multi-Sensor QPE (NMQ) System:
Description, Results, and Future Plans, B. Am. Meteorol. Soc., 92, 1321–1338, <a href="https://doi.org/10.1175/2011BAMS-D-11-00047.1" target="_blank">https://doi.org/10.1175/2011BAMS-D-11-00047.1</a>, 2011.
</mixed-citation></ref-html>
<ref-html id="bib1.bib45"><label>Zhang et al.(2013)Zhang, Zrnic, and Ryzhkov</label><mixed-citation>
Zhang, P., Zrnic, D., and Ryzhkov, A.: Partial Beam Blockage Correction Using
Polarimetric Radar Measurements, J. Atmos. Ocean. Techn., 30, 861–872, <a href="https://doi.org/10.1175/JTECH-D-12-00075.1" target="_blank">https://doi.org/10.1175/JTECH-D-12-00075.1</a>, 2013.
</mixed-citation></ref-html>
<ref-html id="bib1.bib46"><label>Zhou et al.(2018)Zhou, Siddiquee, Tajbakhsh, and Liang</label><mixed-citation>
Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N., and Liang, J.: UNet++: A Nested
U-Net Architecture for Medical Image Segmentation, in: Deep Learning in
Medical Image Analysis and Multimodal Learning for Clinical Decision Support, DLMIA, 11045, 3–11, <a href="https://doi.org/10.1007/978-3-030-00889-5_1" target="_blank">https://doi.org/10.1007/978-3-030-00889-5_1</a>, 2018.
</mixed-citation></ref-html>--></article>
