Title: The Education and Research 3D Radiative Transfer Toolbox (EaR3T) – Towards the Mitigation of 3D Bias in Airborne and Spaceborne Passive Imagery Cloud Retrievals
Authors: Chen et al.
The authors of this manuscript developed a modular Python package, EaR3T, which automates 3D radiative transfer (3D-RT) calculations. They illustrate the broad range of applications of this 3D-RT package through four examples of 3D radiance simulation and cloud retrieval.
The work is solid and clearly required tremendous effort. The package the authors developed is also very useful and has significant potential for 3D-RT-related applications.
However, I found the current structure of this manuscript difficult to follow. There are too many low-level technical details in the first four sections, which could severely distract readers from the primary scientific findings of the manuscript (see my major comments).
Also, there are a few places where more explanations and clarifications are needed (see my major comments). Considering all this, I recommend a major revision for this paper.
Comments on the paper structure:
1. Though I understand that it took tremendous effort to develop this package, I find the description in Section 2.1 Overview (especially L180-229 and Table 1) too technical and only loosely related to the scientific content. It would be better to move this part to an appendix and focus on the four applications that demonstrate the advantages of using a 3D-RT model in radiance simulation and cloud retrievals, which form the scientific core of this manuscript.
2. The same comment applies to Section “3. EaR3T Procedures” (L400-434) and to L488-509. You could create a separate user manual for your package; in the text itself, it would be better to focus on the scientific content and discuss only the input/output sources for your applications. For example, the existing description of Table 2 is sufficient.
Comments on the results:
L541: Figure 5. It might be better to indicate the cloudy and clear-sky regions in Figure 5. Also, showing the RGB image here would make it easier to pair your results with the cloudy and clear-sky areas of the RGB image. For the EaR3T IPA calculation, what is your input: column gas and temperature profiles, or still the 3D gas and temperature fields?
L544: “In the cloudy regions”: where exactly are the cloudy regions? They do not seem to be identified in the preceding text.
L534: In this context, you attribute the clear-sky 3D-RT radiance bias to the surface reflectance (red). But could diffuse radiance from nearby clouds also contribute to the biased simulations in the clear-sky regions? If so, how do you determine which component contributes more to the radiance bias?
I would suggest the following experiment: conduct a simulation over a large clear-sky region (so that there is no diffuse radiance from nearby clouds) to see whether the clear-sky 3D-RT still overestimates the radiance. If it does, you can attribute the bias to the surface reflectance. At present I cannot judge this, because I am not sure how large the clear-sky region in Figure 5 is.
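The attribution logic of this suggested experiment can be summarized with a toy back-of-the-envelope decomposition. All numbers below are hypothetical placeholders, not values from the manuscript, and the additivity of the two bias terms is itself an assumption:

```python
# Toy decomposition of the clear-sky radiance bias (hypothetical numbers).
# Assumption: the total clear-sky bias is approximately the sum of a
# surface-reflectance term and a cloud-adjacency (diffuse) term.

# Step 1: the bias measured over a large clear-sky region, far from any
# cloud, isolates the surface-reflectance contribution.
bias_far_from_cloud = 0.008   # 3D-RT minus observation (arbitrary units)

# Step 2: the bias measured in clear-sky pixels adjacent to clouds
# contains both contributions.
bias_near_cloud = 0.020

# Step 3: the difference estimates the cloud-adjacency contribution.
bias_cloud_adjacency = bias_near_cloud - bias_far_from_cloud
print(f"surface term: {bias_far_from_cloud:.3f}, "
      f"adjacency term: {bias_cloud_adjacency:.3f}")
```

If the adjacency term turned out to dominate, attributing the clear-sky bias solely to the surface reflectance would be inaccurate, which is why the large-clear-region simulation is worth doing.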
L582-584: “Since the MODIS reflectance is not self-consistent…of COT”. Here you have implicitly assumed that the EaR3T calculation is the truth. It would be good to discuss the input of the IPA calculation, especially how the input components of the EaR3T IPA differ from those of a standard plane-parallel 1D RT model, since those differences contribute to the simulation difference shown here. The comparison would be self-consistent if you used the same plane-parallel 1D RT model to retrieve those cloud parameters.
L790: Figure 13 shows that you are using a two-stream approximation. Have you tried larger stream numbers (at least four streams)? For your CNN trained on the EaR3T-based 3D radiance field, how many streams do you use? The comparison is fair only if both use the same number of streams.
L41: “In contrast to isolated case studies in the past, EaR3T… irradiance.”: This claim seems misleading. Even though RT calculations with plane-parallel RT models are made independently for each pixel, they are usually verified over large regions wherever observations are available, and against multiple sources. I recommend deleting this claim from the abstract.
L77: “Once the CNNs are trained”: remove “the”
L107: “cloud fields with minimal user input”: Please rephrase this sentence. It would be better to emphasize how the package automates the whole 3D-RT calculation rather than saying “with minimal user input”. Readers may interpret “with minimal user input” in different ways; one unwelcome interpretation is that the tool is used as a black box.
L148-149: “The code, along…”: You can move this sentence to the “Data and Code Availability” section, as required by EGU journals.
L248: Figure 2: Do the circles representing OCO-2 footprints show their actual spatial resolution, or are they just an illustration? The footprints in the figure are drawn as undistorted circles.
L306-307: Move the data source of the OCO-2 data to the “Data and Code Availability” section.
L388: “Only radiance data from the red channel were used in this paper.”: Is there a particular reason why you use only observations from the 626 nm band?
L410: Again, move the link to the “Data and Code Availability” section.
L478: “In addition to MCARaTS, planned solvers…”: Move this part to the “Summary and Conclusions” section as future work.
L607: “For technical references,”: change to “for the running time of the simulation,” or something more specific.