the Creative Commons Attribution 4.0 License.
Description and validation of the Japanese algorithm for radiative flux and heating rate products with all four EarthCARE instruments: Pre-launch test with A-Train
Abstract. This study describes the Level 2 (L2) atmospheric radiative flux and heating rate product developed by a Japanese team for the Earth Clouds, Aerosols, and Radiation Explorer (EarthCARE). The product offers vertical profiles of downward and upward longwave (LW) and shortwave (SW) radiative fluxes and their atmospheric heating rates. This paper describes the algorithm developed to generate the product, including the atmospheric radiative transfer model and input datasets, and its validation against radiative flux measurements. In the testing phase before the EarthCARE launch, we used A-Train data that provide input and output variables analogous to those of EarthCARE, so that the developed algorithm can be applied directly to EarthCARE after launch. The results include comparisons of radiative fluxes between radiative transfer simulations and satellite/ground-based observations, which quantify errors in computed radiative fluxes at the top of the atmosphere against Clouds and the Earth's Radiant Energy System (CERES) observations and their dependence on cloud type with varying thermodynamic phases. For SW fluxes, the bias was 24.4 Wm-2 and the root mean square error (RMSE) was 36.3 Wm-2 relative to the CERES observations at spatial and temporal scales of 5° and 1 month, respectively, whereas LW exhibited a bias of -10.7 Wm-2 and an RMSE of 14.2 Wm-2. For water clouds, SW exhibited a bias of -11.7 Wm-2 and an RMSE of 46.2 Wm-2, while LW showed a bias of 0.8 Wm-2 and an RMSE of 6.0 Wm-2. When ice clouds were included, the SW bias ranged from 58.7 to 81.5 Wm-2 and the RMSE from 72.8 to 91.6 Wm-2 depending on the ice-containing cloud type, while the corresponding LW bias ranged from -8.8 to -28.4 Wm-2 and the RMSE from 25.9 to 31.8 Wm-2, indicating that the primary source of error was ice-containing clouds.
The comparisons were further extended to various spatiotemporal scales to investigate the scale dependence of the flux errors. The SW component of this product exhibited an RMSE of approximately 30 Wm-2 at spatial and temporal scales of 40° and 40 days, respectively, whereas the RMSE of the LW component did not decrease significantly with increasing spatiotemporal scale. Radiative transfer simulations were also compared with ground-based observations of the surface downward SW and LW radiative fluxes at selected locations. The SW bias and RMSE are -17.6 Wm-2 and 172.0 Wm-2, respectively, which are larger than the corresponding LW values of -5.6 Wm-2 and 19.0 Wm-2.
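For readers reproducing the error statistics, the bias and RMSE quoted above follow the standard definitions (mean of computed-minus-observed, and the root of the mean squared difference). A minimal sketch, with purely illustrative flux values that are not taken from the product:

```python
import numpy as np

def bias_and_rmse(computed, observed):
    """Mean bias (computed minus observed) and root mean square error."""
    diff = np.asarray(computed, dtype=float) - np.asarray(observed, dtype=float)
    return diff.mean(), np.sqrt((diff ** 2).mean())

# Toy example with hypothetical TOA SW fluxes (Wm-2)
bias, rmse = bias_and_rmse([310.0, 295.0, 330.0], [300.0, 300.0, 300.0])
```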
Status: open (until 09 Jul 2024)
RC1: 'Comment on amt-2024-78', Anonymous Referee #1, 12 Jun 2024
Yamauchi et al.
The methods used to perform forward radiative transfer calculations in support of the EarthCARE mission, including the creation of the required inputs, are described. The algorithms are applied to A-Train data, which are then used to evaluate the radiative transfer calculations against CERES measurements at top of atmosphere (TOA) and against BSRN observations at the surface. The evaluation of the radiative fluxes at TOA is broken down by cloud type and over a range of spatial and time scales. The manuscript describes the algorithm used to produce the radiative flux and heating rate products, the evaluation using A-Train and BSRN data, and the resulting aerosol and cloud radiative effects. Although the manuscript describes the algorithms and evaluates the results, there are several items that require clarification.
General comments
- The evaluation of the algorithms using A-Train is useful, but it would be helpful if the text could be restructured so that it is clearer which inputs will be used when EarthCARE data are available and which are taken from A-Train. For example, in Section 2.1 there is a mixture of EarthCARE products and A-Train data, which is a bit challenging to parse.
- In the manuscript, evaluations of the RT products are compared with different datasets. It is easier to keep track of things if the products are introduced once and then consistently referred to by the same name after that point.
- The methodology used for the TOA flux evaluation, Section 3.2, is difficult to understand in detail, especially the analysis of fluxes by cloud phase. Detailed comments have been provided below.
Detailed comments
Line 47-48: This sentence should be rewritten to make it clearer that you are discussing space-based estimates of in-atmosphere and surface radiative fluxes. The current text is a bit confusing, at least to me, since I wondered why RT calculations were used for surface radiation fluxes instead of surface-based radiometers.
Line 75: References for EarthCARE instruments or products?
Line 86: The +/- 10 W/m^2 has a particular spatial scale (averaged over 100 km^2) and temporal scale (instantaneous) which should be noted or referenced here.
Line 126: Is this total cloud water content (liquid + ice) or liquid water content? If it is the former, how is it parsed into liquid and ice?
Line 140: Why is a constant sea surface albedo used?
Line 142: Significantly more detail is required about why the data was averaged, how the averaging was performed and the effect of the averaging on the resulting radiative transfer. Examples of some questions that should be addressed include:
- Why is the data not on the target 1 km along track grid for EarthCARE?
- What is the original resolution of the individual datasets (cloud, aerosol, surface and meteorological fields)?
- If I assume that the retrieved cloud profiles are meant to represent ~ 1 km footprint (line 180), how were the cloud properties averaged in the horizontal? Are the resulting cloud profiles on the 5 km grid assumed to be overcast and horizontally homogeneous or instead partial cloud (cloud fraction < 100%) and inhomogeneous?
- How was the data averaged in the vertical?
Line 151: Is the Voronoi ice particle shape consistent with the EarthCARE retrievals?
Line 154: The CERES product and its version that was used for evaluation should be specified.
Line 157: The method used to compute the diurnal fluxes should be explained. For example, is there a consistent method used for the CERES and the 2B-FLXHR-Lidar algorithms. Is the data in the product diurnal fluxes or instantaneous? Are diurnal fluxes computed for comparison with BSRN data? How was that done with the calculations and with the BSRN data?
Line 160: Is the analysis split to periods when MODIS was and was not available? This affects the availability of the COT constraint on the cloud properties.
Line 165: When averaging the RT results, they are an average of 20 km along orbit? I assume the CERES footprint is not just along orbit but roughly a 20x20 km footprint.
Line 176: Please indicate the value of specific heat content of air at constant pressure used in the calculation.
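For context on this comment: radiative heating rates are conventionally derived from the vertical divergence of the net radiative flux, dT/dt = -(1 / (rho c_p)) dF_net/dz, which is presumably the calculation at issue here. A minimal sketch under that assumption (the profile values and the c_p value of 1004 J kg-1 K-1 for dry air are illustrative, not taken from the manuscript):

```python
import numpy as np

CP_DRY_AIR = 1004.0  # specific heat of dry air at constant pressure, J kg^-1 K^-1

def heating_rate(net_flux, altitude, density):
    """Radiative heating rate dT/dt = -(1 / (rho * c_p)) * dF_net/dz.

    net_flux : net downward flux profile (W m^-2)
    altitude : heights of the levels (m), monotonically increasing
    density  : air density at the levels (kg m^-3)
    Returns the heating rate in K s^-1.
    """
    dF_dz = np.gradient(net_flux, altitude)  # vertical flux divergence
    return -dF_dz / (density * CP_DRY_AIR)

# Toy profile: net flux decreasing with height implies radiative heating
z = np.array([0.0, 1000.0, 2000.0])
f_net = np.array([100.0, 90.0, 80.0])
rho = np.array([1.2, 1.1, 1.0])
hr_per_day = heating_rate(f_net, z, rho) * 86400.0  # K per day
```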
Line 190: It would be clearer to refer to products used for comparison after they have been introduced earlier in the text. It is not clear what data is “the NASA CloudSat CALIPSO team”.
Line 195: Maybe more precise to call it “cloud top phase of MODIS”?
Line 202: What is the latitude resolution of the data shown in Figure 1 b-e? Is it 5 km?
Line 217: The 24.4 W/m^2 bias is significantly larger than 2B-FLXHR-Lidar.
Line 225: It would be good to indicate here the fraction of the full set of RT calculations that is used for the cloud type analysis. The text in this paragraph suggests that only data for which CloudSat, CALIPSO and MODIS are all available will be used. As noted for line 160, when the MODIS COT is not available, that constraint is removed from the cloud properties used for the RT calculations.
Line 229-235: The categories are confusing, at least to me, and I suggest some restructuring and rewriting of the text to try and clarify them. Summarizing my understanding of the current text: cloud phase based on CloudSat/CALIPSO data is "Water" when all layers are liquid phase, "Ice" when all layers are ice phase, and "Mixed" when both are present. However, the combined CloudSat/CALIPSO and MODIS cloud phase categories are defined only for single-layer clouds, resulting in the categories "Water/Water", "Water/Ice", "Ice/Water". Are the cloud phase categories unique? It is also not quite clear what constitutes a single layer for the analysis. Is it a single CloudSat/CALIPSO layer, or can it be multiple adjacent layers?
Summing the "N" values in Figures 3a, 3b and 3c does not result in a total "N" that matches the "N" shown in Figure 2a, so it is not clear whether the categories are unique.
Line 234: No need to restate MODIS(MOD) since it is done in line 229.
Line 237: How is the “Mixed” category a single layer cloud when it is defined as “a mixture of ice and water within the vertical profile”? This goes back to the comment about definition of a single cloud layer.
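To make my reading of the categorization concrete, the layer-based phase logic I summarized above could be sketched as follows (the function name and per-layer phase labels are hypothetical, purely to illustrate my understanding, not the authors' implementation):

```python
def profile_phase(layer_phases):
    """Categorize a cloudy profile from per-layer phase labels.

    layer_phases: non-empty list of 'water' or 'ice', one entry per cloudy layer.
    Returns 'Water' if all layers are liquid, 'Ice' if all are ice,
    and 'Mixed' if both phases occur within the vertical profile.
    """
    phases = set(layer_phases)
    if phases == {"water"}:
        return "Water"
    if phases == {"ice"}:
        return "Ice"
    return "Mixed"

# For single-layer clouds, this category would then be cross-classified
# against the MODIS cloud-top phase, e.g. CloudSat/CALIPSO 'Water' with
# MODIS 'Ice' giving the "Water/Ice" category.
```

Under this reading, a "Mixed" profile can never be a single layer with a single phase label, which is exactly the ambiguity my comment on line 237 raises.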
Line 239: It is stated that Figure 3 is the same as Figure 2 but broken down by cloud phase. This can be taken to mean that the data used to construct Figure 3 are derived from data averaged over 5 degrees and 1 month. If this assumption is correct, then it is unclear how to interpret the statement that the comparison was limited to points where clouds in the CERES footprint were of the same type, since that occurs on ~20 km and instantaneous data. When accumulated over space and time, wouldn't there be heterogeneity arising from the CERES footprint-level data, even for the same cloud type?
Line 243: Compared to what are the bias and RMSE relatively small? While not necessary to include in the paper, it would be helpful to have the cloud phase analysis applied to the 2B-FLXHR-Lidar product to provide a point of comparison for the results from the EarthCARE algorithm.
Line 312: Could the biases also be compared with computed surface fluxes from CERES and 2B-FLXHR-Lidar? While not direct observations they would increase the amount of data that could be used for comparison with the EarthCARE RT algorithm.
Line 319: It would be good to explicitly document how the aerosol and cloud radiative forcing is computed in Section 2.1 since it is part of the product output.
Figure 1: What is the wavelength for the extinction shown in panels “c”? Panel “e” is a bit hard to follow. Could it be split into a panel for SW and a panel for LW? For the current panel “e”, the “obs” legend markers at the bottom of the plot are barely visible. It would also be good to have panel “e” aligned along the x-axis with the panels above it. Also, it is quite challenging to compare the markers for the computed and observed fluxes since they are fluctuating significantly, perhaps a line plot would be better.
Figure 8: It is difficult to see any structure to the cloud forcing on the plots. It would be helpful to consider modifying the plots so that some of the structure can be seen.
Citation: https://doi.org/10.5194/amt-2024-78-RC1
Viewed
- HTML: 294
- PDF: 47
- XML: 10
- Total: 351
- BibTeX: 8
- EndNote: 7