Description and validation of the Japanese algorithm for radiative flux and heating rate products with all four EarthCARE instruments: Pre-launch test with A-Train
Abstract. This study describes the Level 2 (L2) atmospheric radiative flux and heating rate product developed by a Japanese team for the Earth Clouds, Aerosols and Radiation Explorer (EarthCARE). The product provides vertical profiles of downward and upward longwave (LW) and shortwave (SW) radiative fluxes and the corresponding atmospheric heating rates. This paper describes the algorithm developed to generate the product, including the atmospheric radiative transfer model and input datasets, and its validation against radiative flux measurements. In the pre-launch testing phase, we used A-Train data that provide input and output variables analogous to those of EarthCARE, so that the developed algorithm can be applied directly to EarthCARE after launch. The results include comparisons of radiative fluxes between the radiative transfer simulations and satellite/ground-based observations, which quantify errors in the computed top-of-atmosphere radiative fluxes against Clouds and the Earth's Radiant Energy System (CERES) observations and their dependence on cloud type with varying thermodynamic phases. For SW fluxes, the bias was 24.4 W m-2 and the root mean square error (RMSE) was 36.3 W m-2 relative to the CERES observations at spatial and temporal scales of 5° and 1 month, respectively, whereas the LW fluxes exhibited a bias of -10.7 W m-2 and an RMSE of 14.2 W m-2. When classified by cloud phase, SW fluxes for water clouds exhibited a bias of -11.7 W m-2 and an RMSE of 46.2 W m-2, while the LW counterparts showed a bias of 0.8 W m-2 and an RMSE of 6.0 W m-2. When ice clouds were included, the SW bias ranged from 58.7 to 81.5 W m-2 and the RMSE from 72.8 to 91.6 W m-2 depending on the ice-containing cloud type, while the corresponding LW bias ranged from -8.8 to -28.4 W m-2 and the RMSE from 25.9 to 31.8 W m-2, indicating that ice-containing clouds were the primary source of error. The comparisons were further extended to various spatiotemporal scales to investigate the scale dependence of the flux errors. The SW component of the product exhibited an RMSE of approximately 30 W m-2 at spatial and temporal scales of 40° and 40 days, respectively, whereas the RMSE of the LW component did not decrease significantly with increasing spatiotemporal scale. The radiative transfer simulations were also compared with ground-based observations of the surface downward SW and LW radiative fluxes at selected locations. The bias and RMSE for SW are -17.6 W m-2 and 172.0 W m-2, respectively, which are larger than the corresponding LW values of -5.6 W m-2 and 19.0 W m-2.
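For orientation, the following is a minimal sketch (not the authors' code; the function names, gridding choices, and input arrays are illustrative assumptions) of how computed and CERES-observed fluxes could be aggregated onto a 5° monthly grid and summarized as the bias and RMSE statistics quoted above:

```python
import numpy as np

def grid_monthly_mean(lat, lon, flux, grid_deg=5.0):
    """Average matched instantaneous footprint fluxes into lat/lon boxes
    (one month of data assumed); boxes with no samples become NaN."""
    lat_bins = np.arange(-90.0, 90.0 + grid_deg, grid_deg)
    lon_bins = np.arange(-180.0, 180.0 + grid_deg, grid_deg)
    sums, _, _ = np.histogram2d(lat, lon, bins=[lat_bins, lon_bins], weights=flux)
    counts, _, _ = np.histogram2d(lat, lon, bins=[lat_bins, lon_bins])
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

def bias_rmse(computed, observed):
    """Bias and RMSE over grid boxes where both fields are defined."""
    diff = computed - observed
    diff = diff[np.isfinite(diff)]
    return diff.mean(), np.sqrt(np.mean(diff ** 2))

# Hypothetical usage with matched footprints for one month:
# sim = grid_monthly_mean(lat, lon, flux_rt)      # radiative-transfer fluxes
# obs = grid_monthly_mean(lat, lon, flux_ceres)   # CERES-observed fluxes
# bias, rmse = bias_rmse(sim, obs)                # e.g. SW: ~24.4 and ~36.3 W m-2 in the paper
```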
Status: closed
-
RC1: 'Comment on amt-2024-78', Anonymous Referee #1, 12 Jun 2024
Description and validation of the Japanese algorithm for radiative flux and heating rate products with all four EarthCARE instruments: Pre-launch test with A-Train
Yamauchi et al.
The methods used to perform forward radiative transfer calculations in support of the EarthCARE mission, including the creation of the required inputs, are described. The algorithms are applied to A-Train data, which are then used to evaluate the radiative transfer calculations against CERES measurements at the top of atmosphere (TOA) and BSRN observations at the surface. The evaluation of the radiative fluxes at TOA is broken down by cloud type and over a range of spatial and temporal scales. The manuscript describes the algorithm used to produce the radiative flux and heating products, the evaluation using A-Train and BSRN data, and the resulting aerosol and cloud radiative effects. Although the manuscript describes the algorithms and evaluates the results, there are several items that require clarification.
General comments
- The evaluation of the algorithms using A-Train data is useful, but it would be helpful if the text could be restructured so that it is clearer which inputs will be used when EarthCARE data are available and which are currently taken from A-Train. For example, Section 2.1 contains a mixture of EarthCARE products and A-Train data that is a bit challenging to parse.
- In the manuscript, the RT products are compared with several different datasets. It is easier to keep track of them if each product is introduced once and then consistently referred to by the same name after that point.
- The methodology used for the TOA flux evaluation, Section 3.2, is difficult to understand in detail, especially the analysis of fluxes by cloud phase. Detailed comments have been provided below.
Detailed comments
Line 47-48: This sentence should be rewritten to make it clearer that you are discussing space-based estimates of in-atmosphere and surface radiative fluxes. The current text is a bit confusing, at least to me, since I wondered why RT calculations were used for surface radiation fluxes instead of surface-based radiometers.
Line 75: References for EarthCARE instruments or products?
Line 86: The +/- 10 W/m^2 has a particular spatial scale (averaged over 100 km^2) and temporal scale (instantaneous) which should be noted or referenced here.
Line 126: Is this total cloud water content (liquid + ice) or liquid water content? If it is the former, how is it parsed into liquid and ice?
Line 140: Why is a constant sea surface albedo used?
Line 142: Significantly more detail is required about why the data was averaged, how the averaging was performed and the effect of the averaging on the resulting radiative transfer. Examples of some questions that should be addressed include:
- Why is the data not on the target 1 km along track grid for EarthCARE?
- What is the original resolution of the individual datasets (cloud, aerosol, surface and meteorological fields)?
- If I assume that the retrieved cloud profiles are meant to represent ~ 1 km footprint (line 180), how were the cloud properties averaged in the horizontal? Are the resulting cloud profiles on the 5 km grid assumed to be overcast and horizontally homogeneous or instead partial cloud (cloud fraction < 100%) and inhomogeneous?
- How was the data averaged in the vertical?
Line 151: Is the Voronoi ice particle shape consistent with the EarthCARE retrievals?
Line 154: The CERES product and its version that was used for evaluation should be specified.
Line 157: The method used to compute the diurnal fluxes should be explained. For example, is a consistent method used for the CERES and 2B-FLXHR-Lidar algorithms? Is the data in the product diurnal or instantaneous fluxes? Are diurnal fluxes computed for comparison with BSRN data? How was that done with the calculations and with the BSRN data?
Line 160: Is the analysis split to periods when MODIS was and was not available? This affects the availability of the COT constraint on the cloud properties.
Line 165: When averaging the RT results, are they averaged over 20 km along the orbit? I assume the CERES footprint is not just along the orbit but roughly a 20x20 km footprint.
Line 176: Please indicate the value of the specific heat of air at constant pressure used in the calculation.
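For reference, the conventional net-flux-divergence form of the heating rate, which is presumably what Eq. (3) implements (an assumption; the manuscript's exact formulation may differ), shows where the specific heat at constant pressure enters; c_p ≈ 1004 J kg-1 K-1 is the standard dry-air value:

```latex
% Conventional radiative heating-rate relation (assumed form; Eq. (3) in the
% manuscript may differ in detail). F_net is the net downward flux.
\frac{\partial T}{\partial t}
  = \frac{1}{\rho\, c_p}\,\frac{\partial F_{\mathrm{net}}}{\partial z}
  = -\frac{g}{c_p}\,\frac{\partial F_{\mathrm{net}}}{\partial p},
\qquad F_{\mathrm{net}} = F^{\downarrow} - F^{\uparrow},
\qquad c_p \approx 1004~\mathrm{J\,kg^{-1}\,K^{-1}}\ \text{(dry air)}.
```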
Line 190: It would be clearer to refer to products used for comparison after they have been introduced earlier in the text. It is not clear what data “the NASA CloudSat CALIPSO team” refers to.
Line 195: Maybe more precise to call it “cloud top phase of MODIS”?
Line 202: What is the latitude resolution of the data shown in Figure 1 b-e? Is it 5 km?
Line 217: The 24.4 W/m^2 bias is significantly larger than that of 2B-FLXHR-Lidar.
Line 225: It would be good to indicate here the fraction of the full set of RT calculations that is used for the cloud type analysis. The text in this paragraph suggests that only data for which CloudSat, CALIPSO and MODIS are all available will be used. As noted in the comment on line 160, when the MODIS COT is not available, that constraint is removed from the cloud properties used for the RT calculations.
Line 229-235: The categories are confusing, at least to me, and I suggest some restructuring and rewriting of the text to clarify them. Summarizing my understanding of the current text: the cloud phase based on CloudSat/CALIPSO data is “Water” when all layers are liquid phase, “Ice” when all layers are ice phase, and “Mixed” when both are present. However, the combined CloudSat/CALIPSO and MODIS cloud phase categories are defined only for single-layer clouds, resulting in the categories “Water/Water”, “Water/Ice” and “Ice/Water”. Are the cloud phase categories unique? It is also not quite clear what constitutes a single layer for the analysis. Is it a single CloudSat/CALIPSO layer, or can it be multiple adjacent layers?
Summing the “N” values in Figures 3a, 3b and 3c does not give a total that matches the “N” shown in Figure 2a, so it is not clear whether the categories are unique.
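To make the above reading concrete, here is a minimal sketch of the categorization as understood from the text (illustrative only, not the authors' code; the handling of multi-layer and Mixed profiles and the input names are assumptions):

```python
def phase_category(layer_phases, n_cloud_layers, modis_top_phase=None):
    """One possible reading of the categorization described in lines 229-235.

    layer_phases: phases ("water"/"ice") of the cloudy bins in the
    CloudSat/CALIPSO profile; n_cloud_layers: number of distinct cloud
    layers; modis_top_phase: MODIS cloud-top phase ("Water"/"Ice"), if any."""
    phases = set(layer_phases)
    if phases == {"water"}:
        active = "Water"
    elif phases == {"ice"}:
        active = "Ice"
    else:
        active = "Mixed"
    # Combined active-sensor/MODIS categories appear to be defined only for
    # single-layer, single-phase clouds; everything else keeps the active label.
    if n_cloud_layers == 1 and modis_top_phase is not None and active != "Mixed":
        return f"{active}/{modis_top_phase}"   # e.g. "Water/Water", "Ice/Water"
    return active
```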
Line 234: No need to restate MODIS (MOD) since it is already defined in line 229.
Line 237: How is the “Mixed” category a single layer cloud when it is defined as “a mixture of ice and water within the vertical profile”? This goes back to the comment about definition of a single cloud layer.
Line 239: It is stated that Figure 3 is the same as Figure 2 but broken down by cloud phase. This can be taken to mean that the data used to construct Figure 3 are averaged over 5 degrees and 1 month. If this assumption is correct, then it is unclear how to interpret the statement that the comparison was limited to points where the clouds in the CERES footprint were of the same type, since that classification applies to ~20 km, instantaneous data. When accumulated over space and time, wouldn't there be heterogeneity arising from the CERES footprint-level data, even if it was the same cloud type?
Line 243: Compared to what are the bias and RMSE relatively small? While not necessary to include in the paper, it would be helpful to have the cloud phase analysis applied to the 2B-FLXHR-Lidar product to provide a point of comparison for the results from the EarthCARE algorithm.
Line 312: Could the biases also be compared with computed surface fluxes from CERES and 2B-FLXHR-Lidar? While not direct observations they would increase the amount of data that could be used for comparison with the EarthCARE RT algorithm.
Line 319: It would be good to explicitly document how the aerosol and cloud radiative forcing is computed in Section 2.1 since it is part of the product output.
Figure 1: What is the wavelength for the extinction shown in panel “c”? Panel “e” is a bit hard to follow. Could it be split into a panel for SW and a panel for LW? In the current panel “e”, the “obs” legend markers at the bottom of the plot are barely visible. It would also be good to have panel “e” aligned along the x-axis with the panels above it. It is also quite challenging to compare the markers for the computed and observed fluxes since they fluctuate significantly; perhaps a line plot would be better.
Figure 8: It is difficult to see any structure to the cloud forcing on the plots. It would be helpful to consider modifying the plots so that some of the structure can be seen.
Citation: https://doi.org/10.5194/amt-2024-78-RC1
-
AC1: 'Reply on RC1', Akira Yamauchi, 21 Sep 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-78/amt-2024-78-AC1-supplement.pdf
-
RC2: 'Comment on amt-2024-78', Anonymous Referee #2, 21 Jun 2024
Comments on Yamauchi et al. submitted to AMT
This study describes how the radiative transfer calculations are performed using the cloud and aerosol properties planned for the EarthCARE mission. The algorithm was tested using satellite products from the A-Train mission. When the computed TOA fluxes are compared with CERES satellite observations, the SW fluxes computed in this study are biased high by 24 W m-2, which is larger than the bias of the existing FLXHR-LIDAR product, although this study obtained a smaller RMSE. The bias of the computed LW fluxes in this study is comparable to that found in the FLXHR-LIDAR product. This algorithm was developed by the Japanese team in parallel with the European team, and these two algorithms can be compared to each other for further validation.
I feel that this study is important for the EarthCARE mission, and the developed algorithm will provide radiative flux quantities at the surface and in the atmosphere that cannot be directly measured from satellites. However, this manuscript needs further technical clarification and a fuller description of the algorithm. In particular, a detailed description is needed of how the aerosol and cloud profiles are represented in the radiative transfer model. In many places, a method is described in a single sentence without any references.
Specific Comments:
Line 85: It is not clear how the target accuracy was set at 10 W m-2. Certainly, the SW biases shown in this study are higher than this target. Is this target based on existing products? If so, please include the relevant references.
Line 113: Do these two sets of satellite products (CloudSat/CALIPSO/MODIS) and (CPR/ATLID/MSI) provide consistent cloud and aerosol parameters? The algorithm developed in this study was tested using the A-train products. Therefore, it is important whether those two sets of products have comparable parameters.
Line 123-124: It is not clear how the attenuated backscatter coefficient and depolarization ratio were used to derive the vertical profiles of the three aerosol types. Please include the relevant references or a description. In addition, what specifically are the vertical profiles of aerosol types? Are these aerosol extinction profiles, single-scattering albedos, and asymmetry parameters?
How was the MODIS COT used for constraining cloud radiative properties? Please provide detailed information.
Line 138: Is the GEOS-4,5 different from MERRA-1 or MERRA-2?
Line 140: Please provide the information about the surface emissivity assumptions used for RT calculations. Was the skin temperature from GEOS?
Line 143: For each 5 km area, is it assumed to be either completely clear or completely cloudy? I think the homogeneous assumption would be acceptable for most cases, but for partly cloudy cases this homogeneous assumption can cause positive SW biases, as discussed in earlier 3D cloud studies.
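As a toy numerical illustration of that homogeneity effect (the albedo curve a(tau) = tau/(tau + k) and all numbers below are purely illustrative, not the scheme used in the manuscript): because albedo is a concave function of optical depth, a single homogeneous cloud with the mean optical depth reflects more than the average over an inhomogeneous or partly cloudy scene, biasing reflected SW high.

```python
import numpy as np

def toy_albedo(tau, k=7.0):
    """Illustrative concave albedo-vs-optical-depth curve (not the RT model)."""
    return tau / (tau + k)

rng = np.random.default_rng(0)
tau = rng.gamma(shape=1.5, scale=8.0, size=100_000)   # variable cloud optical depths

albedo_homogeneous = toy_albedo(tau.mean())           # one cloud with the mean tau
albedo_inhomogeneous = toy_albedo(tau).mean()         # mean of the individual albedos

# Jensen's inequality for a concave curve: the homogeneous scene is brighter,
# so averaging cloud properties before the RT call tends to bias reflected SW high.
print(f"homogeneous: {albedo_homogeneous:.3f}, inhomogeneous: {albedo_inhomogeneous:.3f}")
```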
Line 154: Which CERES product was used for the observed TOA fluxes?
Line 154 or later in the result section: The surface radiation varies significantly by region, as the authors note. Therefore, it would be helpful to show where the ground sites are located.
Line 155: The results in this study were compared with two versions (R04 and R05) of the 2B-FLXHR-Lidar product. Therefore, it would be necessary to provide a brief description of how these versions differ.
Line 157: Were those four months used for the validations at both TOA and the surface? I was wondering why the sample size is so small for the ground comparison.
Eq. (3): I don’t see any comparison of heating rate profiles, besides the example shown in Fig. 1d.
Line 183: I believe that the CERES CCCM product also provides flux at 20 km resolution. Have the authors compared the results with what this product provides?
Line 216: Does it mean that each point in the scatter plots comes from monthly, 5-degree gridded values for four months in 2007?
Line 425: Figure 3 is the same as “Figs 2a and 2d” but separated by cloud type. I assume the 2B-FLXHR-Lidar fluxes are not included in Fig. 3. If so, are the scatter plots shown in Fig. 3a–3e subsets of Fig. 2a, and likewise are those in Figs. 3f–j subsets of Fig. 2d? Please clarify this in the figure caption. Why are some outliers shown in Fig. 3g not shown in Fig. 2d?
Line 244-246: If the consistent ice scattering model (i.e., Voronoi-type) was used for cloud retrievals and RT calculations, this would not be a problem. Please include more discussion about it.
Line 249: Was the sensitivity study using NASA MAC06S0 performed using a consistent ice scattering model between cloud retrievals and RT calculations?
Line 253: Regarding “LW bias by providing more accurate cloud detection through improved measurement instrumentation”: it is not clear what this statement specifically refers to. Please provide relevant references or expand the discussion.
Line 299: As mentioned earlier, it would be helpful if the authors could provide the location of the BSRN sites on a map.
Line 299: What is “a minor bias”?
Citation: https://doi.org/10.5194/amt-2024-78-RC2
-
AC2: 'Reply on RC2', Akira Yamauchi, 21 Sep 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-78/amt-2024-78-AC2-supplement.pdf
-
RC3: 'Comment on amt-2024-78', Anonymous Referee #3, 26 Jun 2024
The manuscript describes the algorithm used to compute top-of-atmosphere and surface radiative fluxes and radiative flux profiles in the atmosphere using measurements from the EarthCARE instruments CPR, ATLID, MSI, and BBR. The algorithm was developed by the EarthCARE Japanese group. The TOA flux is used to evaluate the retrieved properties by comparing it with fluxes derived from the BBR. EarthCARE's goal is to achieve a difference of less than 10 Wm-2. The authors test the algorithm using A-Train data. When instantaneous fluxes are averaged in 5-degree grids and over a month, the bias and RMS difference compared with CERES-derived fluxes are, respectively, 24 Wm-2 and 36 Wm-2 for shortwave and -11 Wm-2 and 14 Wm-2 for longwave. The purpose of the manuscript is to describe the algorithm and its evaluation. While the manuscript meets this goal, it does not provide new information beyond these purposes. Using 1D radiative transfer and comparing with observed TOA fluxes using A-Train data is not new. New science results are missing. In addition, given that similar products are available from the European team, the manuscript needs to highlight the unique aspects of the Japanese flux products, distinguishing them from the European products.
Major comments.
The authors describe the radiation budget, especially downward longwave radiation at the surface, in the introduction. However, given what is described in this manuscript, I do not see how the algorithm and data products will contribute to improving the global surface radiation budget, and downward longwave radiation in particular, beyond the level already reached with A-Train data. If the Japanese flux products are to improve surface radiation estimates, please describe how in the manuscript.
Similarly, the authors mention that aerosol and cloud vertical profiles affect the vertical profile of radiative fluxes. The number of aerosol types is increased from three to four in the algorithm. This is still fewer than the number of aerosol types used in CALIPSO algorithms (see for example Omar et al. 2009; Burton et al. 2012, 2013) and flux computations. Please provide thoughts on how the flux products described in this manuscript will improve our knowledge of vertical flux profiles.
The introduction provides some background on the surface radiation budget. EarthCARE data are, however, more likely to contribute to improving our knowledge of vertical flux profiles than to improving the global radiation budget.
The approach described in the manuscript has been used with A-Train data for at least 10 years. Could you describe the uniqueness of the data products? What do they offer scientifically that is not available from the European products and the A-Train products (e.g. FlxHR or CCCM)? Unless the authors describe this clearly, users who are not involved in the project will have little motivation to use the Japanese products.
It is not critical, but given the bias at TOA, how is the Japanese team going to achieve the EarthCARE goal of 10 Wm-2? In addition, this manuscript reveals that the Japanese flux algorithm is more primitive than the European flux algorithms. I think it is useful for the international community to have independent flux results from the Japanese and European teams. From this point of view, it would be useful if the authors provided their thoughts on how the international community will benefit from having the Japanese flux products in addition to the European ones.
Minor comments
Line 134: Could you explain what the Voronoi particles are?
Line 247: Could you justify reducing the optical thickness by 30%?
Line 253: If the authors claim EarthCARE instruments detect more clouds, then computed OLR is even lower, which increases the bias. Please explain why EarthCARE is expected to reduce the LW bias.
Line 330: Could you elaborate why aerosol radiative forcing is important in the upper atmosphere? Also, does the Japanese team retrieve aerosol properties everywhere all the time? What do you use when retrieved aerosol properties are not available (e.g. below clouds)?
References
Burton, S. P., and coauthors (2012). Aerosol classification using airborne high spectral resolution lidar measurements – methodology and examples, Atmos. Meas. Tech., 5, 73–98.
Burton, S. P., Ferrare, R. A., Vaughan, M. A., Omar, A. H., Rogers, R. R., Hostetler, C. A., and Hair, J. W., (2013). Aerosol classification from airborne HSRL and comparison with the CALIPSO vertical feature mask, Atmos. Meas. Tech., 6, 1397-1421.
Omar, A., Winker, D., Kittaka, C., Vaughan, M., Liu, Z., Hu, Y., Trepte, C., Rogers, R., Ferrare, R., Kuehn, R., Hostetler, C., (2009). The CALIPSO Automated Aerosol Classification and Lidar Ratio Selection Algorithm, J. Atmos. Oceanic Technol., 26, 1994–2014.
Citation: https://doi.org/10.5194/amt-2024-78-RC3
-
AC3: 'Reply on RC3', Akira Yamauchi, 21 Sep 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-78/amt-2024-78-AC3-supplement.pdf
-
RC4: 'Comment on amt-2024-78', Anonymous Referee #4, 23 Jul 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-78/amt-2024-78-RC4-supplement.pdf
-
AC4: 'Reply on RC4', Akira Yamauchi, 21 Sep 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-78/amt-2024-78-AC4-supplement.pdf