The Education and Research 3D Radiative Transfer Toolbox (EaR3T) – Towards the Mitigation of 3D Bias in Airborne and Spaceborne Passive Imagery Cloud Retrievals
- 1Department of Atmospheric and Oceanic Sciences, University of Colorado, Boulder, CO, USA
- 2Laboratory for Atmospheric and Space Physics, University of Colorado, Boulder, CO, USA
- 3Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, CO, USA
- 4NOAA Chemical Sciences Laboratory, Boulder, CO, USA
- 5Space Science and Engineering Center, University of Wisconsin–Madison, Madison, WI, USA
- 6Center for Atmospheric and Oceanic Studies, Tohoku University, Sendai, Miyagi, Japan
Abstract. We introduce the Education and Research 3D Radiative Transfer Toolbox (EaR3T) for quantifying and mitigating artifacts in atmospheric radiation science algorithms due to spatially inhomogeneous clouds and surfaces, and show the benefits of automated, realistic radiance and irradiance generation along extended satellite orbits, flight tracks from entire aircraft field missions, and synthetic data generation from model data. EaR3T is a modularized Python package that provides high-level interfaces to automate the process of 3D radiative transfer (RT) calculations. After introducing the package, we present initial findings from four applications, which are intended as blueprints for future in-depth scientific studies. The first two applications use EaR3T as a satellite radiance simulator for the NASA OCO-2 and MODIS missions, generating synthetic satellite observations with 3D-RT on the basis of cloud field properties from imagery-based retrievals and other input data. In the case of inhomogeneous cloud fields, we show that the synthetic radiances are often inconsistent with the original radiance measurements. This lack of radiance consistency points to biases in heritage imagery cloud retrievals due to sub-pixel resolution clouds and 3D-RT effects. They come to light because the simulator’s 3D-RT engine replicates processes in nature that conventional 1D-RT retrievals do not capture. We argue that 3D radiance consistency (closure) can serve as a metric for assessing the performance of a cloud retrieval in the presence of spatial cloud inhomogeneity, even with limited independent validation data. The other two applications show how airborne measured irradiance data can be used to independently validate imagery-derived cloud products via radiative closure in irradiance. This is accomplished by simulating downwelling irradiance from geostationary cloud retrievals of the Advanced Himawari Imager (AHI) along all below-cloud aircraft flight tracks of the Cloud, Aerosol and Monsoon Processes Philippines Experiment (CAMP2Ex, NASA 2019) and comparing the irradiances with the collocated airborne measurements. In contrast to isolated case studies in the past, EaR3T facilitates the use of observations from entire field campaigns for the statistical validation of satellite-derived irradiance. From the CAMP2Ex mission, we find a low bias of 10 % in the satellite-derived cloud transmittance, which we are able to attribute to a combination of the coarse resolution of the geostationary imager and 3D-RT biases. Finally, we apply a recently developed context-aware Convolutional Neural Network (CNN) cloud retrieval framework to high-resolution airborne imagery from CAMP2Ex and show that the retrieved cloud optical thickness fields lead to better 3D radiance consistency than the heritage independent pixel algorithm, opening the door to the future mitigation of 3D-RT cloud retrieval biases.
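As an aside for readers, the "radiance consistency (closure)" metric described in the abstract can be sketched in a few lines of Python. The function below is purely illustrative (the naming and metric choices are ours, not part of EaR3T): it scores a retrieval by how well radiances re-simulated from its cloud products reproduce the original measurements.

```python
import numpy as np

# Illustrative only (not part of EaR3T): quantify "radiance consistency"
# between measured radiances and radiances re-simulated with a 3D-RT engine
# from the retrieved cloud field. Small bias/RMSE = good closure; large
# discrepancies flag sub-pixel cloud and 3D-RT retrieval biases.
def radiance_closure(observed, simulated):
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    rel_bias = (simulated - observed).mean() / observed.mean()
    rel_rmse = np.sqrt(((simulated - observed) ** 2).mean()) / observed.mean()
    return rel_bias, rel_rmse
```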
Hong Chen et al.
Status: final response (author comments only)
-
RC1: 'Comment on amt-2022-143', Hartwig Deneke, 20 Jul 2022
The article describes a Python project for 3D radiative transfer, the EaR3T toolbox. While somewhat technical in scope, the article is generally well written, likely of interest to a wider scientific audience, and falls within the scope of AMT. There are, however, a few aspects which could be improved, which I list below. Hence, I recommend publication of the article after minor revisions.
* For reproducibility, I strongly recommend obtaining a DOI for the described version of the code in the GitHub repository, e.g. via Zenodo, see https://docs.github.com/en/repositories/archiving-a-github-repository/referencing-and-citing-content . While the article mentions “in the current version”, no clear information on the versioning of the code is given. This needs to be rectified; in particular, the article needs to clarify which version of the code is referred to.
* Usage of APP for application: why not App? It's used as an abbreviation, not as an acronym.
* As mentioned in the text, APP5 is not described, but it is included in Fig.1. I propose to also remove it from Fig.1. The description “four of which are described in this paper” at least for me raises the question why; maybe motivate this choice somewhat?
* Summary and Outlook: I do find the outlook somewhat too short / lacking a clear vision about the future development of the code. The following sentence also raises some questions: “EaR3T will continue to be an educational tool driven by graduate students.” I did not find anything indicating which parts of the code have actually been written by graduate students so far (who of the authors is at that stage?), given that several co-authors are rather senior. I also would assume that it takes someone with significant experience to maintain such a project in the long term. Please elaborate at least to some detail on these points.
Please also note the following minor language comments:
* L264: “MODIS is currently flying on …” I doubt this will change anytime soon, rephrase sentence?
* L265: “They are ...”: Please clarify “They”; I guess it refers to MODIS.
-
AC1: 'Reply on RC1', Hong Chen, 15 Dec 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-143/amt-2022-143-AC1-supplement.pdf
-
RC2: 'Comment on amt-2022-143', Anonymous Referee #2, 31 Aug 2022
The paper introduces the versatile EaR3T Radiative Transfer Toolbox and showcases several applications with a focus on analyzing and mitigating 3D effects in observations and simulations. I thank the authors for this overall very nice paper. I particularly find its education-targeted approach very interesting. I have a few, both general and specific, comments (see below) and recommend publishing the paper when these overall minor issues are addressed.
General comments
Some, as I understand, mandatory or highly recommended sections are missing, incl. Data availability and Author contributions.
As the other reviewer pointed out, it would be highly desirable if the code & examples used here were available from a long-term archive.
You characterize EaR3T as "automated" and running with "minimal user input", and state that "automation of EaR3T permits calculations at any time and over any region". Could you be a bit more specific about what you mean by that (automated)? On the one hand, please summarize what user input is actually needed (what is mandatory, what optional) and how the user has to provide/set it up. What does the user interface look like, how does it work? An example of a call/setup could be helpful, e.g. illustrating how the Tab2 settings (at least for one APP) are realized. Also, how is the model executed; e.g. how are the steps mentioned in L425f executed? Does the user have to do that with a script, or is there a ready-made script available, ...? Make clear what the "normal" way to do these things with EaR3T is.
On the other hand, "with minimal user input" implies that many parameters that need to be known for a radiative transfer model are implicitly assumed (aka "hardcoded") - which are these, and what settings/assumptions are made there? What effects do these assumptions have on the RT results?
Also, certain manual adjustments/user settings are obviously still necessary, e.g. the setting of an (appropriate) SZA. I lack an overview of the additional setup required beyond time & location. Also, I'd appreciate some remarks on the ease or difficulty of adapting the examples to more or less different applications, e.g. the use of different sensor channels.
One exemplary aspect here: you do not seem to consider, e.g., cloud vertical location & extent as "user input", although that needs to be specified somehow and at least partly seems to require an "educated guess" by the user, which in turn introduces a user-dependent source of uncertainty (see the illustrative sketch below).
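To make the reviewer's enumeration concrete, the kinds of settings at issue might be grouped as in the following purely hypothetical sketch (none of these names come from EaR3T): inputs the user supplies versus internal assumptions a toolbox might otherwise hardcode.

```python
# Purely illustrative (not the actual EaR3T interface): the kinds of settings
# the review asks to see documented, split into what the user must supply and
# what a toolbox might otherwise assume internally ("hardcoded").
user_input = {
    "date":       "2019-09-02",                 # drives data download and solar geometry
    "region":     (120.0, 17.0, 123.0, 20.0),   # lon/lat bounding box
    "wavelength": 650.0,                        # nm
    "sza":        29.3,                         # deg; set manually per the review
    "cloud_base": 1.0,                          # km; an "educated guess" by the user
    "cloud_top":  2.0,                          # km
}
internal_defaults = {
    "atmosphere_profile": "tropical",           # e.g. a standard profile
    "phase_function":     "mie",                # droplet scattering assumption
    "surface_brdf":       "lambertian",         # surface reflection assumption
    "photons":            1e8,                  # Monte Carlo photon budget
}
```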
The example scene contains different cloud types (from low to vertically extended). Here I would like to see a short discussion on the effect of the fixed cloud geometric thickness in the setup. How realistic is the 1 km-thick-clouds setting in APP1&2 (also considering that the clouds in this scene all seem to be high-level clouds according to Fig3c (CTH > 8 km))? How sensitive are the outputs to this, i.e. what errors can result from that? How realistic is that for the cumulonimbus contained in the scene? Similarly, for APP3&4, what uncertainties do the cloud location choices induce; how sensitive are the results to those choices (considering they might be off by a few km for individual clouds in a scene and that those seemingly need to be chosen by the user beforehand)? What about (frequently occurring) multi-level clouds?
In Sec. 3, I have some difficulties understanding how exactly cloud extinction, phase, and Reff are calculated from the input data. From which input data, specifically?
What input parameters are required for MCARaTS, specifically? How are they derived from the pre-processed cloud & surface properties - is this just a re-gridding, or does it require a(nother) parameter transformation? How would that (need to be) different for libRadtran?
Some equations would be helpful here. What determines the RT grid - in the horizontal as well as in the vertical? User input? Input data resolution?
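For context, one standard way such a pre-processing step could work (an assumption on our part for illustration, not a statement about how EaR3T/MCARaTS actually does it) is to spread each retrieved column optical thickness uniformly over an assumed geometric cloud layer, so that the vertical integral of extinction recovers the retrieved COT. A minimal, self-contained sketch with synthetic numbers:

```python
import numpy as np

# Minimal sketch (our assumptions, not the EaR3T implementation): build a 3D
# extinction field for a Monte Carlo RT engine from a retrieved 2D COT map,
# assuming a fixed cloud base/top (the fixed-thickness setup the review
# questions) and vertically uniform extinction within the layer.
nx, ny, nz = 64, 64, 20
dz_km = 0.5
z = np.arange(nz) * dz_km                                    # layer bottom heights [km]
cot = np.random.gamma(shape=2.0, scale=5.0, size=(nx, ny))   # stand-in COT map

cbh_km, cth_km = 8.0, 9.0                                    # assumed cloud base/top [km]
in_cloud = (z >= cbh_km) & (z < cth_km)                      # vertical cloud mask
thickness_km = in_cloud.sum() * dz_km

ext = np.zeros((nx, ny, nz))                                 # extinction [1/km]
ext[:, :, in_cloud] = (cot / thickness_km)[:, :, None]

# Sanity check: vertical integral of extinction recovers the input COT.
assert np.allclose(ext.sum(axis=2) * dz_km, cot)
```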
Sort out, explain, and correctly use your terminology: Specifically, for APP4 (and summary item c), disentangle IPA/2-stream vs. 3D/CNN (why does Fig11 talk about COT estimated by IPA, but Fig12 of COT estimated by 2-stream? That's the same data from the same method, isn't it?).
Avoid giving the impression that EaR3T is the only 3D-RT-capable model/tool(box) (e.g. around L848) - there is, and has been for a long while, a variety of those. libRadtran/MYSTIC, SASKTRAN (Bourassa08), McSCIA (Spada06), and SPARTA (Barlakas16) in the solar part of the spectrum, as well as e.g. ARTS (Buehler18) in the MW/IR region, are just a few that come to mind. Also, there's the WCRP's I3RC project (Cahalan04; https://www.wcrp-climate.org/modelling-wgcm-mip-catalogue/modelling-wgcm-mips-2/261-modelling-wgcm-catalogue-i3rc). Please put your model/toolbox in context with those.
Specific comments
Add axis labels incl. unit specifications to all figures, and colorbars to all 2D plots.
Tab1/L201f: Improve the color-coding explanation: I had to read it a couple of times to get, e.g., the difference between black- and blue-coded surface albedo and what the relation between the input data & pre-processing step color coding is.
L203f: Explicitly point out/summarize what input is required by MCARaTS.
Tab1: Why is radiance limited to 2D output?
L238: I like your scene selection and the corresponding reasoning a lot. However, the results are not (yet?) analysed and discussed explicitly for those diverse conditions. Maybe a few summarizing words on the performance depending on the different surface & cloud types could be added?
L290f: Clarify the relation of surface reflectance and surface albedo; how is the (RT-input?) albedo converted from the (observed) reflectance? Does it imply an assumption on the surface reflection type (Lambertian?)?
L310ff: Are 10 m winds really a good proxy for cloud-altitude winds? What about using AMVs for the wind correction?
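For reference, if a Lambertian assumption is indeed made (which is precisely the question raised above), the conversion is trivial, since an isotropic surface reflects radiance $I_r = A\,\mu_0 F_0/\pi$ regardless of viewing geometry, so the bidirectional reflectance factor equals the albedo:

```latex
\mathrm{BRF}(\theta_v,\phi_v)
  = \frac{\pi\, I_r(\theta_v,\phi_v)}{\mu_0 F_0}
  = A
  \qquad \text{(Lambertian surface with albedo } A\text{)}
```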
Fig3: Where do the fine white lines in the figure come from (e.g. at lon ~ -108.0 between lat ~ 37.4-37.6)? Correctional shift artifacts? Are those indeed treated as clear sky then?
L330: Why is only a slope fitted, not an offset, too? Looking at Fig4b, it seems to me that a steeper slope with a negative offset (passing through the two maxima) would fit the data better. Do you have any hypothesis on the origin of the two occurrence maxima? Would a surface-type-dependent fitting/transformation possibly be better?
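The distinction drawn here (slope-only versus slope-plus-offset fit) is easy to make concrete; the following sketch uses synthetic data, not the reflectance values from the manuscript:

```python
import numpy as np

# Sketch of the reviewer's point (illustrative data): compare a through-origin
# fit y = a*x (slope only) with a full linear fit y = a*x + b (slope + offset).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 0.6, 500)                    # e.g. observed reflectance
y = 1.1 * x - 0.02 + rng.normal(0, 0.02, x.size)  # synthetic "truth" with an offset

a_only, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)  # slope-only fit
a_b = np.polyfit(x, y, 1)                                 # slope + offset fit

print("slope-only:     a =", a_only[0])
print("slope + offset: a =", a_b[0], " b =", a_b[1])
```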
L332: Are the same fitted-a values used for the other two channels, or do you just refer to the same fitting procedure?
L440: I don't get that sentence. Does "each wavelength" here refer to monochromatic wavelengths or individual channels? What are the "hundreds of individual absorption coefficient profiles"? Why are they spectrally spaced for "each wavelength"?
L462: Again, what is the "target wavelength"? A channel?
L502f: Could you shortly mention how the parallelization can be applied (incl. install requirements & how to control the use of the parallelization) and whether the APPs in their current setup use this feature?
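As background to the parallelization question (a generic illustration, not the actual EaR3T/MCARaTS mechanism, which the review asks the authors to document): Monte Carlo RT is embarrassingly parallel, since photon batches with independent random seeds can be traced separately and their results combined, e.g.:

```python
from multiprocessing import Pool
import numpy as np

# Generic illustration of why Monte Carlo RT parallelizes well: photon
# batches are independent, so results from N workers can simply be averaged.
def run_batch(seed, n_photons=10_000):
    rng = np.random.default_rng(seed)
    # Stand-in for a photon-tracing kernel: returns a mean "transmittance".
    return rng.uniform(0.0, 1.0, n_photons).mean()

if __name__ == "__main__":
    with Pool(4) as pool:
        results = pool.map(run_batch, range(8))   # 8 independent batches
    print("combined estimate:", np.mean(results))
```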
Fig5: Why is this done on a latitude-averaged rather than on a footprint basis? As the text argues with it (for cloudy locations; L544), could you (also) provide an equivalent plot on a footprint basis? A difference plot could be helpful, too. Why is there no spread/uncertainty shading on the IPA results? What is the range of SZA in the observations within the plot range?
Fig6: Again, what is the range of SZA in the observations over the scene? What are the errors/uncertainties introduced by using a fixed SZA in the simulations?
L555: "This commonly known problem" - please add references.
L581ff: How can you be sure the simulation bias is (only or mainly) due to COT? What about the effects of further forward-model errors (e.g. errors in Reff, PFCT, the surface reflection model)?
L607ff: Please add (again?) what the domain size (Nx, Ny, Nz) of this simulation is.
L644f: What kind of interpolation is used?
L669f: Why does a low-biased AHI COT introduce a high bias in IPA-based simulations here, in contrast to APP1?
L662f & 700f: Shouldn't the comparison rather be made at the resolution of the input data, i.e. AHI resolution, i.e. integrating/averaging the SPN-S data? How does it look if that is done?
L722f: "We found that the bias [...] is partially caused by the coarse imager resolution" - in my understanding of the manuscript, that is rather a hypothesis. Have you tested it somehow (e.g. by comparing on an AHI-resolution basis, averaging/integrating the SPN-S obs to the AHI resolution)?
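The resolution-matched comparison proposed in the two comments above could look like the following sketch; the aircraft speed, sampling rate, and pixel size are assumptions for illustration, not values from the manuscript:

```python
import numpy as np

# Sketch of the proposed comparison (synthetic numbers): average the
# high-rate along-track SPN-S transmittance onto the coarse imager (AHI,
# ~2 km) footprint scale before comparing with satellite-derived values.
aircraft_speed_ms = 150.0   # assumed cruise speed [m/s]
spn_s_rate_hz = 1.0         # assumed sampling rate [Hz]
ahi_pixel_m = 2000.0        # nominal AHI sub-satellite resolution [m]
n_per_pixel = int(ahi_pixel_m / (aircraft_speed_ms / spn_s_rate_hz))  # ~13 samples

trans_1hz = np.clip(np.random.default_rng(1).normal(0.8, 0.2, 1300), 0, 1.3)
n_full = (trans_1hz.size // n_per_pixel) * n_per_pixel
trans_ahi_scale = trans_1hz[:n_full].reshape(-1, n_per_pixel).mean(axis=1)
```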
L872ff: "bias [...] was either due" - reformulate. The bias is quite surely not exclusively due to only one of these (as "either" implies). Those two are very likely the two main contributors (as stated above, I think that, regarding the role of the coarse resolution, this remains a hypothesis so far, as in my understanding you have not demonstrated it yet).
L686: "their distributions are completely different" - I do not agree at all. Apart from the Trans > 1 tail (and the equivalent higher peak in IPA at Trans ~ 0.9), they are fairly similar, particularly when compared to the observations.
Fig9: For my feeling(!), the observations have a surprisingly high amount of Trans > 1 (I'd by-eye-guess ~20 % of all data). Is that expected, i.e. is the phenomenon that common? Or could other things contribute (like calibration)?
Sec6: I'd like to see some characterizing stats of the CNN, e.g. histograms of COT and/or radiance of training and validation data and retrieval uncertainties for either data subset.
L739ff: Does that mean that for a more global/operational application, separate CNNs for different SZA (ranges) would need to be trained (SAA could still be handled by automated rotation, I assume)? What further work do you see for a more global application?
Fig10: Add indicators of the direction of the sun (or of north) in (a). Remove the colored dots in (a) and their mention in the caption if they are not otherwise explained and used (did I miss that in the text?). Is the (6.4 km)² area the one inside the black or the red rectangle in (b), or the circular area? Does the circular area in (b) correspond to the yellow-circled area in (a)? "Inside [...] shown instead" - formulate this more straightforwardly to ease understanding: inside the rectangle = regridded obs, circular area outside the rectangle = observed red-channel radiance at native image resolution? Why is the regridding done specifically to 100 m resolution, and how? The caption refers to solid black lines indicating a square area - I can't spot that; is it missing or badly visible?
Fig10, Fig12: Please make sure figure, caption, and text are consistent. E.g. the text mentions 12 edge pixels for Fig12, the caption 7 on each side.
Fig12: Would be nice to have the observation data (red rectangle region) repeated here in the same style as the simulations for easier comparison.
Fig13: Reconsider the colors used: black dots on low-histogram-value colors are badly discernible. Why are there dots outside the colored histogram area? Why are there no dots in the low-radiance corner, although the histogram colors indicate high density? Is that an interpolation effect of the plotting (better to plot the original binned data w/o interpolation/smoothing)? Are the data shown at native imagery or regridded image resolution (or mixed?)?
Fig14 & discussion (L821-824): First, together with the ambiguous plotting of the mean CRE lines in Fig14a, the formulation of the "minor finding" is not fully clear. Does it mean that meanCRE(COT_CNN) is similar for both IPA and 3D RT simulations, and the same applies for meanCRE(COT_IPA)? I.e., the mean of the solid black curve is similar to that of the dashed blue one, and the solid red similar to the dashed green? That would be in agreement with the following concluding sentence (L823f) and at least with the mean-indicating lines in Fig14b (Fig14a I can't judge, since the overlap of the lines renders the colors indistinguishable). While I buy that for Fig14b (i.e. CRE above clouds), from by-eye judgement I doubt it for Fig14a (CRE below clouds), since the blue dashed curve compared to the black solid curve seems to be shifted towards lower CRE (i.e. I expect mean(blue) < mean(black)), while the green dashed compared to the solid red seems shifted towards higher CRE (i.e. I expect mean(green) > mean(red)). Furthermore, in Fig14a both the black solid and dashed green curves on the one hand, as well as the red solid and blue dashed on the other, are very similar to each other (i.e. the two IPA and the two 3D RT simulations, respectively, regardless of COT retrieval method) - in both shape (that is, regardless of any other issues, regarding below-cloud CRE I do not agree with your L823 statement that the PDFs are very dissimilar) and location along the CRE axis. Is there anything wrong, maybe, with the color coding of either the PDFs or the mean-indication lines? Please check the color coding and your respective conclusions.
L904: Over what data is that median taken? The processed scene? Does that work satisfactorily, e.g. over bright surfaces like deserts or snow covered areas?
Technical corrections
Fig1: Make the figure larger (e.g. in landscape orientation) to make it better readable. Add color-coding info to the caption.
L203: "includes Monte" -> "includes the Monte"
L267: Are only channels 1 and 2 of the MODIS L1B product used? Please specify their wavelengths here.
L313: A reference to the correction effect figure in AppendixB would be useful.
L486f: I am unsure what the documentation sentence here is supposed to tell the reader. Maybe it's just that the "only" is out of place here? However, there is no other mention of (further) documentation elsewhere in the text, so this seems odd here.
L531: "scale the MYD09 field" - Please add, which parameter this refers to (surface albedo/reflectance, I assume).
Fig8a: The flight track line is hardly visible; make it thicker. The caption misses an explanation of what the thin and thick line sections are.
Fig9: "Vice versa for the green" - Misleading formulation. Like for yellow, PD(obs)>PD(sim) here, not PD(obs)<PD(sim) as "vice versa" implies.
L699: "the simulation histogram peaks" -> "the simulation histograms peak"
L728: "use a high-resolution imagery" - remove "a"
L801f: "By contrast" -> "In contrast"
Fig14: Please use larger font in legend for improved readability. Find a better way to indicate the overlapping mean-value lines - it's indiscernible which of the dashed lines lies where, particularly in (a).
L887: "introduce a warming bias" - I rather suggest "warm bias"; with CRE<0 the term "warming" feels odd.
L946: "in the black box" -> rather "in the inset"? (equivalently at L952)
L1093ff: Add info on where you intend to submit Schmidt et al., 2022 (same Special Issue?).
-
AC2: 'Reply on RC2', Hong Chen, 15 Dec 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-143/amt-2022-143-AC2-supplement.pdf