Cloud masks and cloud type classification using EarthCARE CPR and ATLID
Abstract. We develop the Japan Aerospace Exploration Agency (JAXA) level 2 cloud mask and cloud type classification algorithms for the Earth Clouds, Aerosols, and Radiation Explorer (EarthCARE), a joint JAXA and European Space Agency (ESA) satellite mission. Cloud profiling radar (CPR)-only, atmospheric lidar (ATLID)-only, and combined CPR–ATLID algorithms for the cloud mask and cloud particle type are described. The algorithms are developed and evaluated using ground-based data, space-borne data from CloudSat and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), and simulation data from a Japanese global cloud-resolving model, the Non-hydrostatic Icosahedral Atmospheric Model (NICAM), with the Joint Simulator. The algorithms are based on our algorithms for CloudSat and CALIPSO with several improvements. The cloud particle type for ATLID is derived from an attenuation–depolarization diagram trained using a 355 nm multiple-field-of-view multiple-scattering polarization lidar, with the diagram modified from that developed for CALIPSO. The retrieved cloud particle phases (ice, water, and mixed phases) and those reported in the NICAM output data are compared. We found that the agreement for the CPR-only, ATLID-only, and combined CPR–ATLID algorithms averaged roughly 80 %, 85 %, and 80 %, respectively, for 15 different cloud scenes corresponding to two EarthCARE orbits.
Status: final response (author comments only)
- RC1: 'Comment on amt-2024-103', Anonymous Referee #1, 23 Aug 2024
This paper concerns cloud masks and classification algorithms for two active sensors on the recently launched EarthCARE mission: the Cloud Profiling Radar (CPR) and the ATmospheric LIDar (ATLID). As well as single-sensor classification algorithms, there is one combining both instruments. The paper specifically deals with JAXA algorithms (my understanding is that there may be multiple algorithms from different teams, though this is not 100 % clear). These algorithms have heritage in the previous CloudSat and CALIPSO missions and in ground-based active sensing systems, so this study is mostly about adapting them to the new sensors (both how and why). Results from simulated retrievals are provided. So this is more incremental progress and adaptation to a new sensor than fundamentally new research.
The paper is in scope for the journal. I am not a developer of classification algorithms for active remote sensing, though I do use such data in my research, so it is possible there are some aspects I am missing here. I recommend that at least one other reviewer be someone who works on radar and lidar scene classification algorithms. The quality of writing and presentation is reasonable, although I feel there is some information missing from the study that I would expect to have been included. I therefore recommend minor revisions (but suggest the Editor defer to the opinion of reviewers involved in algorithm development if there is a split of opinion). I would be happy to review the revision if requested. Some specific comments in support of my recommendation are below:
Lines 25-26: It was initially surprising to me that the accuracies were so low, and that the joint CPR-ATLID algorithm had a lower accuracy than the ATLID-only accuracy. It was only while going through in more detail that I realized this was much more than a “cloudy or not” classifier, and that the number of classes is different for all three approaches. So in my view these accuracy numbers are not directly comparable to one another. I suggest revising the abstract to state the number of possible classifications for each of the three algorithms, and then the accuracies on simulated data. That will make the abstract more informative for the reader (as many people skim abstracts to get the big picture).
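(To make the point concrete with a sketch of my own, not taken from the manuscript: with k equiprobable classes, a random classifier already agrees 1/k of the time, so a chance-corrected score such as Cohen's kappa makes accuracies with different class counts comparable.)

```python
# Sketch: chance-corrected agreement, assuming uniform class frequencies.
def kappa(observed_agreement: float, n_classes: int) -> float:
    """Cohen's kappa with chance agreement p_e = 1/k for k equiprobable classes."""
    p_chance = 1.0 / n_classes
    return (observed_agreement - p_chance) / (1.0 - p_chance)

print(kappa(0.80, 2))   # 0.60  -- 80 % raw agreement with 2 classes
print(kappa(0.80, 10))  # ~0.78 -- the same 80 % with 10 classes
```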
Lines 85-86: So C1 is CPR, C2 is ATLID, C4 is both. It would be interesting to note why there is no C3 (or, if there is one, why it isn't discussed here). Also, it's not clear if the names C1, C2, and C4 are only used in the paper or if these are also the official product names.
Line 125: I would like to see a bit more detail about the derivation of this equation, since it seems key to phase separation. It must rely on some independent ground-truth knowledge of when you have ice vs. liquid – where does this come from? If it is only from the lidar itself, isn't this circular logic? Is it just that if you know the cloud is low, you know it cannot be ice phase due to the temperature?
Figure 2: I think a different color scale, and different horizontal binning (the current resolution is too fine which makes things noisy) would help to better show the split between the data under and over the curve.
Section 2-2-2: More detail is needed here. This is supposed to be an algorithm description, but it is more of a data format description and does not give quantitative specifics for anything – for example, how precipitating vs. non-precipitating classes are identified. We need more detail, and perhaps another figure showing an example of the melting layer calculation.
Section 2-2-3: More detail is needed on how exactly the C1 and C2 masks are combined, and what happens in the case of inconsistency (e.g. if one says liquid and the other says ice).
Section 3: More quantitative information is needed here. For example, instead of just giving accuracies, how about confusion matrices? These would allow us to see the proportion of data in each class and see, when classifications are wrong, what the typical misclassifications are.
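Something along these lines would suffice (a minimal sketch; the label set and arrays here are hypothetical, not from the manuscript):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical phase labels: 0 = clear, 1 = water, 2 = ice, 3 = mixed
truth     = np.array([0, 1, 2, 2, 3, 1, 0, 2])  # e.g. NICAM "truth"
retrieved = np.array([0, 1, 2, 1, 1, 1, 0, 2])  # e.g. algorithm output

cm = confusion_matrix(truth, retrieved, labels=[0, 1, 2, 3])
print(cm)                              # rows = truth, columns = retrieval
print(cm.diagonal() / cm.sum(axis=1))  # per-class hit rate (recall)
```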
Citation: https://doi.org/10.5194/amt-2024-103-RC1
- RC2: 'Comment on amt-2024-103', Anonymous Referee #2, 12 Oct 2024
- RC3: 'Comment on amt-2024-103', Anonymous Referee #3, 18 Oct 2024
The paper describes the cloud mask algorithms used for observations by the EarthCARE CPR and ATLID instruments. These algorithms are based on previous works developed primarily by the coauthors. This paper describes how they were adapted for use by the EarthCARE instruments. There are adequate references to these foundational works, though a few places should be revised to be clear that these are not new algorithms being introduced in this paper. Doing so will help readers understand the heritage and evolution of these algorithms in their adoption for EarthCARE. Some basic validation is provided in section 3 based on three simulated scenes and then two full orbits. Statistics are provided demonstrating the frequency of agreement in phase type between the cloud mask algorithms and the simulated truth. This is helpful information. Some qualitative discussion of the performance for these scenes would better illustrate the behavior of the algorithms. For example, there are instances where the NICAM simulation reported mixed-phase clouds that were not captured by the algorithm. It would be helpful to include a general statement about why this might occur. More information about the two full orbits should be included in the final validation assessment so readers can understand what types of scenes are represented. The content in the figures provided is suitable, with the optimum amount of detail. The colorbars on several figures should be revised for clarity. Several contain more colors than are plotted, making interpretation difficult.
Altogether, Atmospheric Measurement Techniques is a suitable journal for this manuscript. Given the importance of distributing EarthCARE algorithm information to the community, this is a valuable manuscript to publish. I recommend publishing after the following minor comments are addressed. These suggestions aim to improve the clarity of algorithm heritage and improve the interpretability of the results.
General comments:
Recommend adding a single table that shows the numerical value and description for the phases in each cloud mask. Right now the mapping of value to description is buried within three different sections of text, which makes the values difficult to find and to compare with one another. Having a table would be an effective reference for the reader.
The introduction should stress that the algorithms described in this paper are not entirely new. Rather, they are adaptations of existing algorithms for the EarthCARE instruments. For example, the ATLID-only cloud ice-water phase algorithm is an adaptation of the technique described in Yoshida et al., 2010. The combined CPR-ATLID cloud mask is an adaptation of the technique described in Kikuchi et al., 2017.
Specific comments:
Lines 21-22. “The algorithms are based on our algorithms for CloudSat and CALIPSO with several improvements.” What were the improvements? Based on the descriptions in section 3, the algorithms of Yoshida et al., 2010, Kikuchi et al., 2017, and Hagihara et al., 2010 were adapted for use by the EarthCARE instruments. The thresholds just needed to be adjusted to account for the different wavelengths and FOV compared to CALIOP & CPR. If this is the case, “with several improvements” should be rephrased to better represent how the algorithms were used.
Line 71. The capitalized letters are incorrect for CALIOP. It should be Cloud-Aerosol Lidar with Orthogonal Polarization.
Lines 88-92. Please make clear what is unique to the EarthCARE implementation versus what is the Hagihara et al., 2010/2014 implementation. The Hagihara papers both discuss thresholds and spatial continuity tests, so it is unclear what is new with the EarthCARE implementation.
Line 120. Should add a citation to Yoshida et al., 2010 here to show that this is where the δ[%] = 10 + 60X² expression comes from.
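For orientation, the boundary can be applied as follows (my own sketch; the definition of X follows the manuscript's attenuation–depolarization diagram and is taken as given here, and mapping the region above the curve to ice is my assumption, based on the usual convention that high depolarization indicates non-spherical ice particles):

```python
import numpy as np

def above_phase_boundary(delta_percent, x):
    """True where a point lies above the delta[%] = 10 + 60 X^2 curve.

    delta_percent: layer depolarization ratio [%]
    x: attenuation parameter of the diagram (defined in the manuscript)
    """
    return np.asarray(delta_percent) > 10.0 + 60.0 * np.asarray(x) ** 2
```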
Line 130. The phrase “spatial continuity tests (as a continuity filter) are introduced…” implies that these tests are newly introduced as part of the algorithm described in this paper. However, they were introduced in Yoshida et al., 2010. Please rephrase to clarify the heritage of the spatial continuity test. Also, there should be a short description of those tests added to this paper for the convenience of readers so they do not have to find that information in a separate paper.
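A generic illustration of such a filter is sketched below (this is not the specific test of Yoshida et al., 2010, whose details the authors should summarize; the connectivity rule and threshold here are arbitrary):

```python
import numpy as np
from scipy import ndimage

def continuity_filter(cloud_mask: np.ndarray, min_bins: int = 5) -> np.ndarray:
    """Drop connected cloudy regions smaller than `min_bins` bins.

    cloud_mask: 2-D boolean array (height x along-track).
    """
    labeled, n = ndimage.label(cloud_mask)               # connected regions
    sizes = ndimage.sum(cloud_mask, labeled, range(1, n + 1))
    keep_labels = np.nonzero(sizes >= min_bins)[0] + 1   # labels start at 1
    return np.isin(labeled, keep_labels)
```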
Figure 4. There are too many colors in the colorbar which makes it difficult to interpret. Since 8 colors are represented, there should only be 8 colors on the colormap. Additionally, the green and red colors used are not accessible for individuals with color vision deficiencies (8% of males, 0.5% of females). Recommend revising these color choices.
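For example, a discrete, colorblind-safe map can be built like this (a matplotlib sketch with a synthetic class field; the Okabe–Ito palette is one accessible choice):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Okabe-Ito palette: eight colors distinguishable under common
# color vision deficiencies; exactly one entry per plotted class.
okabe_ito = ["#000000", "#E69F00", "#56B4E9", "#009E73",
             "#F0E442", "#0072B2", "#D55E00", "#CC79A7"]
cmap = ListedColormap(okabe_ito)
norm = BoundaryNorm(np.arange(-0.5, 8.5), cmap.N)  # one bin per class 0..7

classes = np.random.randint(0, 8, size=(40, 120))  # synthetic class field
im = plt.pcolormesh(classes, cmap=cmap, norm=norm)
plt.colorbar(im, ticks=range(8))
plt.show()
```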
Lines 184-185. What are the rules for combining the classifications? What if they disagree? It looks like this is described in Kikuchi et al., 2017. A statement should be added letting the reader know that the procedure is detailed in Kikuchi et al., 2017.
Line 185. “…reclassify the particle type similar to the CloudSat and CALIPSO type” Is this referring to the CloudSat and CALIPSO cloud particle types in the official products, or does it refer to the classification types of Hagihara et al., 2010? Please clarify. Also, please check if there are other places in the manuscript where this type of phrasing could potentially confuse readers.
Line 186. “Kikuchi et al., 2016” should be Kikuchi et al., 2017.
Figure 5c, 8c, 10c. The colormap is ineffective at illustrating depolarization ratios in the range of 40-70%. It all looks green. Recommend choosing a different colormap, preferably a perceptually linear colormap that better represents the variation in magnitude.
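For instance (a sketch with synthetic data; 'viridis' is one perceptually uniform option):

```python
import numpy as np
import matplotlib.pyplot as plt

depol = np.random.uniform(0, 100, size=(40, 120))  # synthetic field [%]
im = plt.pcolormesh(depol, cmap="viridis", vmin=0, vmax=100)
plt.colorbar(im, label="Depolarization ratio [%]")
plt.show()
```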
Figure 6a-c. There are too many colors in the colormap which makes it difficult to read a specific value. For example, figure 6a only plots 8 values, so there should be only 8 values on the color scale. In addition, it is difficult to compare between the panels. Since types 1-4 are the same for each algorithm, they should be the same color so that at least those types can be compared. I recognize that there are different interpretations between the cloud masks for the higher values that make this impractical there. It would also help to have the descriptions of each numerical value in the caption so that the reader does not have to refer to three different sections within the text to remember what the numerical values mean.
Line 235. Why were the mixed phase layers around 6-8 km in NICAM not captured by the cloud mask algorithms? This is one obvious discrepancy that would be worth discussing in the text.
Line 234. Why is there a value C3? Does this mean to imply that C1, C2, C3 are the three phase types (1, 2, 3)? If so this is confusing because C1, C2, and C4 refer to distinct cloud mask algorithms elsewhere in the paragraph, but to phase in this sentence. Please clarify.
Figures 7, 9(b,c), 11(b,c). There are too many colors on the colormap, making interpretation difficult. There should be at most four colors on the colorbar. In addition, because there are only three phases being plotted, it would be much easier to interpret these figures if the phase type descriptions were on the colorbar instead of the numerical values. There should be adequate space for that text.
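Labeling the colorbar with phase names is straightforward in matplotlib (a sketch; the phase names and colors here are placeholders, not the product's actual categories):

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

phase_names = ["clear", "water", "ice", "mixed"]   # placeholder labels
cmap = ListedColormap(["#FFFFFF", "#0072B2", "#56B4E9", "#D55E00"])
norm = BoundaryNorm(np.arange(-0.5, 4.5), cmap.N)

phases = np.random.randint(0, 4, size=(40, 120))   # synthetic phase field
im = plt.pcolormesh(phases, cmap=cmap, norm=norm)
cbar = plt.colorbar(im, ticks=range(4))
cbar.ax.set_yticklabels(phase_names)               # words instead of numbers
plt.show()
```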
Line 271. Why were the mixed phase layers at low altitude in NICAM (Fig. 11c) missed by the cloud mask algorithms? This is worth discussing to help the reader understand the behavior of the algorithm.
Line 282. It would be valuable for the reader to see images of the two full orbits that were used in this characterization. This could be added as a supplement. Also, some detail about the scenes should be added to section 3.4. What parts of the planet were sampled? What types of clouds?
Lines 288-289. “The EarthCARE L2 cloud mask algorithms for C1, C2, and C4 are similar to those for CloudSat and CALIPSO.” Because this is the summary, what is meant by C1, C2, and C4 should be given again (CPR-only, ATLID-only, etc.). Also, the text should be more specific about what is meant by “similar to those for CloudSat and CALIPSO”. Those satellite missions have their own cloud mask algorithms that are different from what is described in this paper. A more accurate statement would be something like, “…are similar to the techniques of Hagihara et al., 2010 that were applied to CloudSat and CALIPSO data.”
Line 291. “For the C1 type for CPR, the algorithm developed for CloudSat was extended”. Please add the appropriate reference for clarity: “the Kikuchi et al., 2017 algorithm developed for CloudSat was extended…”
Citation: https://doi.org/10.5194/amt-2024-103-RC3
Viewed
HTML | PDF | XML | Total | BibTeX | EndNote
---|---|---|---|---|---
298 | 128 | 65 | 491 | 16 | 10