This work is distributed under the Creative Commons Attribution 4.0 License.
Cloud masks and cloud type classification using EarthCARE CPR and ATLID
Abstract. We develop the Japan Aerospace Exploration Agency (JAXA) level 2 cloud mask and cloud type classification algorithms for the Earth Clouds, Aerosols, and Radiation Explorer (EarthCARE), a joint JAXA and European Space Agency (ESA) satellite mission. Cloud profiling radar (CPR)-only, atmospheric lidar (ATLID)-only, and combined CPR–ATLID algorithms for the cloud mask and cloud particle type are described. The algorithms are developed and evaluated using ground-based data, space-borne data from CloudSat and the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO), and simulation data from a Japanese global cloud-resolving model, the Non-hydrostatic Icosahedral Atmospheric Model (NICAM), coupled with the Joint Simulator. The algorithms are based on our algorithms for CloudSat and CALIPSO, with several improvements. The cloud particle type for ATLID is derived from an attenuation–depolarization diagram trained using a 355 nm multiple-field-of-view multiple-scattering polarization lidar; this diagram differs from the one developed for CALIPSO. The retrieved cloud particle phases (ice, water, and mixed phases) are compared with those reported in the NICAM output data. We found that the agreement for the CPR-only, ATLID-only, and combined CPR–ATLID algorithms averaged roughly 80 %, 85 %, and 80 %, respectively, for 15 different cloud scenes corresponding to two EarthCARE orbits.
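As a rough illustration of the attenuation–depolarization phase separation mentioned in the abstract, here is a minimal Python sketch. The boundary curve, thresholds, and variable names are invented placeholders, not the trained diagram from the paper:

```python
import numpy as np

def classify_phase(atten_dB, depol):
    """Toy phase classification on an attenuation-depolarization diagram.

    The discrimination curve below is a made-up placeholder; the actual
    curve in the paper is trained on 355 nm multiple-field-of-view
    multiple-scattering polarization lidar data.
    """
    # Hypothetical boundary: water-cloud depolarization grows with
    # attenuation because of multiple scattering; ice sits above the curve.
    boundary = 0.05 + 0.02 * atten_dB
    return np.where(depol > boundary, "ice", "water")

# Example: two lidar-resolved cloud bins
print(classify_phase(np.array([2.0, 10.0]), np.array([0.30, 0.10])))
# -> ['ice' 'water']
```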
Status: open (until 18 Oct 2024)
RC1: 'Comment on amt-2024-103', Anonymous Referee #1, 23 Aug 2024
This paper concerns cloud mask and classification algorithms for the two active sensors on the recently launched EarthCARE mission: the Cloud Profiling Radar (CPR) and the ATmospheric LIDar (ATLID). As well as the single-sensor classification algorithms, there is one combining both instruments. The paper specifically deals with the JAXA algorithms (my understanding is that there may be multiple algorithms from different teams, though this is not 100 % clear). These algorithms have heritage in the previous CALIPSO–CloudSat missions and in ground-based active sensing systems, so this study is mostly about adapting them to the new sensors (both how and why). Results from simulated retrievals are provided. So this is more incremental progress and adaptation to new sensors than fundamentally new research.
The paper is in scope for the journal. I am not a developer of classification algorithms for active remote sensing, though I do use such data in my research, so it is possible there are some aspects I am missing here. I recommend that at least one other reviewer be someone who works on radar and lidar scene classification algorithms. The quality of writing and presentation is reasonable, although I feel some information is missing from the study that I would expect to have been included. I therefore recommend minor revisions (but suggest the Editor defer to the opinion of reviewers involved in algorithm development if there is a split of opinion). I would be happy to review the revision if requested. Some specific comments in support of my recommendation are below:
Lines 25-26: It was initially surprising to me that the accuracies were so low, and that the joint CPR–ATLID algorithm had a lower accuracy than the ATLID-only algorithm. It was only while going through the paper in more detail that I realized this is much more than a “cloudy or not” classifier, and that the number of classes differs among the three approaches. So in my view these accuracy numbers are not directly comparable to one another. I suggest revising the abstract to state the number of possible classifications for each of the three algorithms, and then the accuracies on simulated data. That would make the abstract more informative for the reader (as many people skim abstracts to get the big picture).
Lines 85-86: So C1 is CPR, C2 is ATLID, and C4 is both. It would be interesting to note why there is no C3 (or, if there is one, why it isn't discussed here). Also, it's not clear whether the names C1, C2, and C4 are only used in the paper or whether they are also the official product names.
Line 125: I would like to see a bit more detail about the derivation of this equation, since it seems key to phase separation. It must rely on some independent ground-truth knowledge of when you have ice vs. liquid – where does this come from? If it is only from the lidar itself, isn't this circular logic? Or is it simply that if you know the cloud is low, you know it cannot be ice phase because of the temperature?
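To make this question concrete: one plausible way to break the circularity would be to fit the discrimination boundary on samples labeled by an independent constraint (e.g., warm low clouds must be liquid). The sketch below is purely illustrative, with synthetic data and an invented linear boundary, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: (attenuation, depolarization) samples labeled
# by an independent constraint, e.g. cloud temperature > 0 degC => liquid.
atten = rng.uniform(0, 10, 500)
is_ice = rng.random(500) < 0.5
depol = np.where(
    is_ice,
    rng.normal(0.35, 0.08, 500),                        # ice: high depolarization
    0.05 + 0.02 * atten + rng.normal(0, 0.02, 500))     # water: multiple scattering

# Fit a simple linear boundary depol = a * atten + b that minimizes
# misclassification on the labeled samples (coarse grid search).
best = None
for a in np.linspace(0.0, 0.05, 51):
    for b in np.linspace(0.0, 0.3, 61):
        err = np.mean((depol > a * atten + b) != is_ice)
        if best is None or err < best[0]:
            best = (err, a, b)
print("error=%.3f  boundary: depol = %.3f*atten + %.3f" % best)
```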
Figure 2: I think a different color scale and coarser horizontal binning (the current resolution is too fine, which makes things noisy) would help to better show the split between the data under and over the curve.
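To illustrate what I mean, a minimal matplotlib sketch with synthetic stand-in data; the variable names and the overlaid curve are placeholders, not the quantities in the paper's Fig. 2:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
atten = rng.uniform(0, 10, 5000)
depol = 0.05 + 0.02 * atten + rng.normal(0, 0.05, 5000)

fig, ax = plt.subplots()
# Coarser horizontal binning smooths out sampling noise along x.
h = ax.hist2d(atten, depol, bins=[20, 60], cmap="viridis", cmin=1)
x = np.linspace(0, 10, 100)
ax.plot(x, 0.05 + 0.02 * x, "w--", label="discrimination curve")
ax.set_xlabel("attenuation (dB)")
ax.set_ylabel("depolarization ratio")
ax.legend()
fig.colorbar(h[3], ax=ax, label="counts")
plt.show()
```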
Section 2-2-2: More detail is needed here; this is supposed to be an algorithm description, but it reads more like a data format description and does not give quantitative specifics for anything. As one example, it is not explained how precipitating vs. non-precipitating classes are identified. We need more detail, and perhaps another figure showing an example of the melting-layer calculation.
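For instance, even a minimal worked example of a melting-level estimate would help. The sketch below (my own illustration, not the paper's C1 algorithm) finds the 0 °C crossing of a temperature profile by linear interpolation:

```python
import numpy as np

def melting_level(height_m, temp_K):
    """Height of the 0 degC crossing in a bottom-up sounding.

    A minimal sketch of the kind of melting-layer estimate the section
    could illustrate; the paper's actual calculation is not given here.
    """
    t_c = temp_K - 273.15
    crossings = np.where(np.diff(np.sign(t_c)) < 0)[0]
    if crossings.size == 0:
        return None  # profile entirely above or below freezing
    i = crossings[0]
    # Linear interpolation between the two levels bracketing 0 degC
    f = t_c[i] / (t_c[i] - t_c[i + 1])
    return height_m[i] + f * (height_m[i + 1] - height_m[i])

print(melting_level(np.array([0.0, 1000.0, 2000.0, 3000.0]),
                    np.array([288.15, 281.15, 272.15, 266.15])))
# -> ~1888.9 m
```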
Section 2-2-3: More detail is needed on how exactly the C1 and C2 masks are combined, and on what happens in the case of inconsistency (e.g., if one says liquid and the other says ice).
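As an example of the kind of rule that should be spelled out, here is a toy merge with an invented conflict-resolution policy; this is not the paper's C4 logic, just an illustration of the question:

```python
import numpy as np

def combine_masks(cpr_phase, atlid_phase):
    """Toy merge of CPR-only and ATLID-only phase masks.

    Invented precedence rule: where only one instrument detects cloud,
    take its phase; where both detect cloud but disagree on phase,
    flag the bin as 'mixed/uncertain'.
    """
    out = np.where(cpr_phase == "clear", atlid_phase, cpr_phase)
    conflict = ((cpr_phase != "clear") & (atlid_phase != "clear")
                & (cpr_phase != atlid_phase))
    return np.where(conflict, "mixed/uncertain", out)

cpr   = np.array(["clear", "ice",   "water"])
atlid = np.array(["ice",   "water", "water"])
print(combine_masks(cpr, atlid))  # ['ice' 'mixed/uncertain' 'water']
```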
Section 3: More quantitative information is needed here. For example, instead of just giving accuracies, how about confusion matrices? These would allow us to see the proportion of data in each class and, when classifications are wrong, what the typical misclassifications are.
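To illustrate the tabulation I mean, a minimal sketch (the class names and labels are invented; in the paper's setting the "truth" would be the NICAM phase labels):

```python
import numpy as np

def confusion_matrix(truth, pred, classes):
    """Count matrix: rows = true class, columns = predicted class."""
    idx = {c: i for i, c in enumerate(classes)}
    m = np.zeros((len(classes), len(classes)), dtype=int)
    for t, p in zip(truth, pred):
        m[idx[t], idx[p]] += 1
    return m

classes = ["ice", "water", "mixed"]
truth = ["ice", "ice", "water", "mixed", "water"]
pred  = ["ice", "water", "water", "ice", "water"]
print(confusion_matrix(truth, pred, classes))
```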
Citation: https://doi.org/10.5194/amt-2024-103-RC1