Total column optical depths retrieved from CALIPSO lidar ocean surface backscatter
Robert A. Ryan
Mark A. Vaughan
Sharon D. Rodier
Jason L. Tackett
John A. Reagan
Richard A. Ferrare
Johnathan W. Hair
John A. Smith
Brian J. Getzewich
Download
- Final revised paper (published on 14 Nov 2024)
- Preprint (discussion started on 07 Mar 2024)
Interactive discussion
Status: closed
- RC1: 'Comment on amt-2024-23', Anonymous Referee #1, 29 Apr 2024
- Original Submission
1.1. Recommendation
Major Revision
- Comments to Author:
Overall opinion: The paper reports an attempt to retrieve aerosol optical depth (AOD) from CALIPSO observations via an alternative, non-conventional technique based on a combination of ocean surface returns and modelled near-surface wind speeds over the ocean. The method is called the Ocean Derived Column Optical Depth (ODCOD) algorithm and is claimed to be new relative to previously introduced methods of this type. Beyond the aspects I elaborate on below (methodological unclarities, poor structure of the manuscript, questionable choice of statistical metrics, lack of numerical arguments in the abstract, multiple non-academic English formulations), most critically, the novelty of the method is not explained. What is the fundamental difference from the He et al. (2016) method for retrieving AOD using CALIPSO, or from the Venkata and Reagan (2016) approach? If there is a methodological difference, it has not been emphasized enough in the abstract.

Moreover, it is not explained why you cannot simply use the surface integrated attenuated backscatter signal included in the official CALIOP output and link it to near-surface ocean wind speed, such that you instead need to fit the CALIOP IRM. Does the official CALIOP algorithm have problems identifying the surface bin boundaries? I suspect the answer is hidden somewhere on page 9, among statements like "In an ideal signal, any measurement that is not part of the surface return would be completely zero and any measurement on or after the surface return onset would be non-zero. However, the measured downlinked samples before the surface return onset are rarely if ever actually zero so it is critical that whatever consecutive sample pair is selected is part of the surface return." Without going deeper into that, I think you should articulate this unique aspect and the advantages of your retrieval better in the revision. Please see the other comments below.
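For concreteness, the retrieval principle that all of these surface-return methods share can, to my understanding, be written schematically as (with $\rho_F$ the Fresnel reflectance of the surface and $\langle S^2\rangle$ the wave-slope variance parameterized by wind speed, e.g., via Hu et al., 2008):

$$
\gamma'_{\mathrm{surf}} \;=\; \frac{\rho_F}{4\pi \langle S^2\rangle}\, e^{-2\tau}
\quad\Longrightarrow\quad
\tau \;=\; -\frac{1}{2}\ln\!\left(\frac{4\pi \langle S^2\rangle\, \gamma'_{\mathrm{surf}}}{\rho_F}\right).
$$

The essential inputs are thus only the measured attenuated surface backscatter and a wind speed; the manuscript should state plainly where ODCOD departs from this shared scheme.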
2.1. Comments:
- Abstract: The paper is very long, but the abstract reports only five numbers quantifying statistical agreement with reference datasets. This reflects a general problem with your manuscript that was confirmed once I had read it in full. For instance, you discuss the noisiness of your retrievals depending on the shot-averaging strategy (lines 694 …), but you have not quantified the magnitude of this noise (or temporal variability?), and thus you have no numbers to report in the abstract. This systematic problem runs throughout the entire "Results" section and ultimately yields a critical lack of numerical arguments in your abstract. Moreover, I think the abstract is imbalanced: you give too much introduction-like information about the method itself and insufficient information about the findings from your study. Also, as mentioned above, this method does not look like a "new" method; rather, it is built on the heritage of previous CALIPSO-based works (see the comment about the introduction below). I also think clearer scientific implications of your method should be given at the end of the introduction, namely why this method is beneficial to scientists or users compared with previous ODCOD-like methods such as that of He et al. (2016), or with conventional extinction-based AOD retrievals. Lastly, it is not clear from the abstract whether you are retrieving only aerosol optical depth (AOD) or total optical depth including the molecular contribution.
- Introduction: The introduction reads like a section from a technical report on a CALIPSO product or from an algorithm development document, not like the introduction to a scientific study.
- I think you should include paragraphs describing the importance of retrieving AOD with spaceborne lidars and the gaps that remain in this research field, so that your attempt is clearly justified.
- Moreover, your paper is partially built on the heritage of previous works that exploited lidar surface returns over water (or, in some cases, land) to retrieve AOD. However, you have included only a selection of the works focused on this topic. I suggest you include, or at least briefly cover, the following studies: fundamental works demonstrating the relationship between ocean surface returns and wind speed [Barrick, 1968; Bufton et al., 1983; Menzies et al., 1998]; the mostly CALIPSO-based works on this topic [Josset et al., 2008, 2010, 2011, 2018; He et al., 2016]; and Aeolus-based studies [Li et al., 2010; Labzovskii et al., 2023]. You can find most of these works easily in Google Scholar by keywords or cross-references. The fundamental works serve as a useful introduction to your research niche; the CALIPSO-based works demonstrate what has already been done by previous ODCOD-like methods using CALIPSO (and hence which research gap you intend to close); and the Aeolus works indirectly indicate that CALIPSO is best suited for such retrievals because of the weakness of Fresnel reflection and of ocean surface returns at non-nadir incidence.
- Methodology: There is room for improvement in the methodological part of the paper.
- First, please avoid using unintroduced terms right away, such as "modelled surface return" or "idealized impulse return model", which may be confusing for a general reader.
- Second, the methodological description is not logically ordered. From a general reader's perspective, it would be more sensible to first describe the CALIOP instrument, its specifications, and the CALIOP optical parameters you used as input, and only then describe the methodology for retrieving optical depth from surface echoes. Otherwise, you assume that a general reader is familiar with the given lidar system, which is not true.
- Cloud screening issue. Although you discussed the inclusion of the quality flags through "additional screening" (2.2.3), I have not noticed any discussion of how you tackled thick-cloud cases and very hazy aerosol conditions. Suppose you have an undetected thick cloud: it will attenuate your surface echo not because the ocean roughness has changed, but because of atmospheric extinction. Many previous studies exploiting lidar surface returns have discussed this issue; Labzovskii et al. (2023), in their Aeolus-focused work, showed that the lidar surface return statistics are very different between cases where (a) clouds were not filtered and (b) clouds were filtered. The presence of clouds and high-aerosol-load cases will plague the statistics with attenuated, weakened, and noisy surface returns that are not suitable for ODCOD-like retrievals.
- Molecular contribution issue. As noted above, it is not clear whether you are retrieving only aerosol optical depth (AOD) or total optical depth including the molecular contribution. Please clarify this aspect and, if you retrieved AOD only, explain how you handled the molecular contribution (even if it is small).
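Schematically, what I expect to see stated (in my notation) is the decomposition

$$
\tau_{\mathrm{col}} \;=\; \tau_{\mathrm{mol}} + \tau_{\mathrm{O_3}} + \tau_{\mathrm{aer}},
$$

with the molecular and ozone terms computed from meteorological data and subtracted if AOD alone is being reported.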
- Line 484, "samples are averaged vertically prior to downlink, surface saturation can still go undetected": this is critical. In other words, are you stating that your quality-flagging procedure is not effective?
- If you refer to parameters totally unknown to a general reader, such as SIDR (worse still, an unintroduced acronym) or the MODIS confidence flag, please explain what they are and, ideally, where they can be found.
- I think the entire Section 3 belongs in the methodology as a sub-section, but under a different name. The title 'Performance Assessment' is currently misleading and sounds as though you are about to present results.
- Results:
- The first problem is that you start demonstrating your results with three-year seasonal medians. I assume you realize how many things are lumped together in these three years? Would it not be more logical to proceed from smaller temporal and spatial scales (orbit-based, or something similar)?
- In line with this recommendation, I think the validation results should come before the analysis of general patterns of AOD behavior from the ODCOD algorithm plotted on maps (Fig. 11). We first need to understand how biased your AOD retrieval is; only after understanding the actual validity of the ODCOD algorithm can we see how the ODCOD AOD is distributed regionally.
- The ODCOD-versus-reference comparison lacks detailed scrutiny. You report median/mean differences between hugely populated samples of three-year seasonal global aggregates. What could such differences potentially tell you? Month-specific correlations and biases would be much more appropriate for drawing any conclusions here. This applies to the comparisons against HSRL, MODIS, and SODA.
- Conclusions: I think the conclusions should be revisited after revision and better aligned with the results and the abstract. Do not forget to emphasize why your method of AOD retrieval is unique and how it differs from previous SIAB-based attempts to derive AOD from CALIOP data.
- Language and Format: Although I am not qualified to evaluate the language of the article (being a non-native English speaker), I still notice the use of a non-academic style in the writing. Contracted verb forms like "don't" and non-academic formulations like "…time delay tells us" or "this is unfortunate" appear throughout. Please raise the style of your writing to the minimum requirements of academic English.
Minor comments (line numbers at left refer to the preprint)
107 – Which concepts did they outline? Readers are not aware of them.
128 – Particulates = aerosol particles? Particulate matter?
144 – You can use the Aeolus-based work of Li et al. [2010] as a reference to justify this statement.
152 – Estimated, or taken from references?
154 – For clarity, add a few words on why you chose the Hu et al. (2008) approximation rather than earlier approximations such as that of Wu et al. or others.
172 – "Don't" → please do not use contracted verb forms like this in academic writing.
191 – To take the ratio of what to what?
197 – What is "the largest measurement of the suspected surface return"? Largest by magnitude? And in which parameter is the suspected surface return hidden?
200 – Even over mountainous regions? It is hard to draw my own conclusion here because you merely state what you did methodologically without explaining why you introduced such an assumption, here and in many other cases. Here and elsewhere, please follow the pattern: "…surface detection failure is suspected" in such a case because "abc… xyz…" → the reason why you think the assumption is valid, ideally supported by numerical arguments, references, or common sense.
210 – The time delay between what and what?
225 – Where did you find these biases? To make a statement like this, you need to rely either on your new results or on previous ones (with a reference).
233 – The knowledge of the total extinction of 532 nm light a few meters below the ocean surface does not come from the VR 2016 reference but from earlier studies. Please do not over-rely on references from a narrow pool of experts when citing results previously published by other studies more specifically focused on these topics.
236 – Can you find a reference to support this statement about the insignificance of the subsurface return at this wavelength? Perhaps the Josset et al. [2010] paper on an ocean surface reflectance model for different angles and wavelengths, or the Li et al. [2010] study that applied a similar model in an Aeolus pre-launch study testing different incidence angles. Also, what are the "two largest points"?
239 – Once again, a very crude assumption, with no attempt to justify it from previous works or through a sensitivity analysis.
241 – Do you refer to saturation or attenuation here? I am sure there is an optical term that pins this effect down.
245 – "Largest points of surface returns" is an ambiguous term even for lidar experts, so it will be a puzzle for general readers.
264 – Another crude assumption without justification. This seems to be a systematic problem in the paper.
286 – "Unfortunate"? This does not read well in an academic paper.
287 – Please introduce the commonly accepted term AOD (aerosol optical depth) for the reader and stick to it throughout the paper instead of "aerosol-only optical depth".
304 – Please refer to figures as either "Fig." or "Figure" consistently.
305 – Outside which range?
306 – Could it also be related to the effect of noise at low AOD values?
308 – What is a "statistically higher wind speed"? Higher from a significance point of view, or in some other sense?
319 – This raises the question of the skill of your AOD retrieval in terms of precision. What is the threshold below which an AOD output value can be assumed to be noise? AOD = 0.05? 0.03? Please reflect on this.
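To make the request concrete: from the $\tau = -\tfrac{1}{2}\ln(\cdot)$ relation sketched above, random errors propagate roughly as

$$
\sigma_\tau \;\approx\; \frac{1}{2}\,\frac{\sigma_{\gamma'}}{\gamma'_{\mathrm{surf}}},
$$

so, for example, a 10 % single-shot error in the surface return already implies a noise floor of about 0.05 in optical depth. A statement of this kind, with the authors' actual numbers, is what is needed.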
329 – "Mixture of retrievals": what is a mixture of retrievals? You use many unclear terms like this in the paper.
322 – What is a "bright" surface return in terms of backscattering? Strong backscattering?
329 – Do mean and median random uncertainties have statistical meaning for systems that are not systematically biased?
342 – This is a very large uncertainty, is it not? Can you find a reference to an organization or a study that recommends a minimum accuracy for AOD retrievals (e.g., WMO) and explain how your method can satisfy the minimum accuracy requirement for aerosol retrievals of this type?
346 – MERRA-2 is a model, and models do not provide instrumental uncertainties; I do not think your uncertainty assumption is valid here. Put simply, 2005 was almost 20 years ago, and ocean wind speed measurements have advanced significantly since then; see the scatterometer-based studies available in the literature. The modern requirement for wind speed uncertainty is around 3 m/s. Returning to the modelled data: their uncertainties can normally be inferred from bias-estimation studies that compare modelled data against true measurements.
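For scale, my own back-of-the-envelope estimate: assuming a low-wind slope-variance law $\langle S^2\rangle \propto \sqrt{U}$ (as in Hu et al., 2008), the wind-driven error in the retrieved optical depth is

$$
\sigma_\tau^{(U)} \;=\; \frac{1}{2}\left|\frac{\partial \ln\langle S^2\rangle}{\partial U}\right|\sigma_U \;=\; \frac{\sigma_U}{4U},
$$

so at $U = 5$ m/s a 1 m/s wind error alone maps into roughly 0.05 of optical depth; the assumed wind uncertainty therefore matters a great deal.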
365 – Please show this comparison, or refer to previous studies where you have done it.
369 – Give references to these products, as well as to the GMAO MERRA-2 10 m wind speeds.
379 – AMSR does not retrieve wind speeds over land, does it?
405 – What are the consequences of not including this uncertainty? What if one argues that it is critical?
420 – What about the potential attenuation of the surface return by thick clouds or very high aerosol loads? See the major comment about the methodology above.
454 – 470 – This information looks like it belongs in the methodological description.
477 – In peer review, "significantly" is in most cases read as a statistical term. Also, did I understand correctly that you filtered out all winds < 3 m/s from your analysis? If so, please mention this in the methodology as well.
487 – 490 – You state here that you applied statistical filtering of the data instead of physically based filtering because "surface saturation can still go undetected", yet value-based thresholds are introduced below without taking physical attenuation into account. Can you comment on this in your response to the major comment about the methodology above?
505 – During the study period, or for which period?
509 – What is SIDR? Please do not use unexplained acronyms.
517 – How did you come to the conclusion about the slight wind dependence?
529 – 537 – A general reader would not understand what "averaging the surface return before retrieving" means. You averaged SIAB before retrieving what? AOD? I assume you mean that the signal-to-noise ratio of the horizontally averaged SIAB is higher, making it more suitable for use in the final AOD retrieval and thereby reducing the resulting uncertainties. Alas, I can only assume, because the language is unclear.
541 – It seems that the methodological description continues from here, and extends much further into the results…
545 – "Significantly lower": statistically speaking? If so, please provide supporting arguments.
551 – What is the difference between the background particulate optical depth and the aerosol optical depth (AOD)?
558 – Why do you not use the CALIPSO scene classification instead of this? By applying lidar ratio assumptions, you nullify one of the main advantages of ocean-surface-return-based AOD retrievals with lidar: normally you do not need to assume a lidar ratio, so no bias contribution from this aspect is expected. Am I wrong?
570 – Did I miss the description of cloud filtering using CALIPSO data?
577 – General users have no idea what the 'confidence flag' in MODIS data is.
585 – I think you missed a better opportunity, in the introduction, to familiarize your reader with SODA-like approaches and other lidar-surface-return-based approaches.
600 – I think some kind of diagram is needed to understand the filtering pipeline, showing which (1) satellite, (2) parameter, and (3) criterion applied to that parameter you use to filter your data. A minimal sketch of what I mean is given below.
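To illustrate, here is such a filtering summary in code form; every field name and threshold is a placeholder of mine, not a value from the manuscript:

```python
# Minimal sketch of a screening pipeline; field names and thresholds
# are placeholders, not values taken from the manuscript.
def keep_profile(p):
    """Return True only if a profile passes all screening criteria."""
    return (
        p["n_cloud_layers"] == 0          # CALIPSO: no detected cloud layers
        and p["surface_depol"] < 0.10     # CALIPSO: depolarization screen
        and 0.0 < p["surface_iab"] < 0.5  # CALIPSO: plausible surface IAB
        and p["wind_speed_10m"] >= 3.0    # MERRA-2: low-wind cutoff
    )

profiles = [
    {"n_cloud_layers": 0, "surface_depol": 0.05,
     "surface_iab": 0.12, "wind_speed_10m": 6.2},
    {"n_cloud_layers": 2, "surface_depol": 0.05,
     "surface_iab": 0.02, "wind_speed_10m": 6.2},
]
kept = [p for p in profiles if keep_profile(p)]  # keeps only the first
```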
601 – 610 – These remarks about clouds seem out of place here; no AOD results have been shown yet.
614 – 620 – I see two problems here. First, you refer to some consistency of the AOD with MODIS, but a numerical agreement analysis is missing. Does "consistency" mean that the AOD is distributed somewhat similarly to MODIS? If so, where is the agreement highest and where is it lowest? Can you elaborate? Ideally there should be a numerical agreement analysis; at the very least, provide an articulated textual description of where the agreement with MODIS is highest and where it is lower, and add a figure to refer to when discussing this agreement (from Remer et al., 2008, I assume). Second, you attribute these AOD anomalies to hand-picked aerosol-related events from the literature. How did you ensure that these are the exact phenomena you are talking about? Following the same rationale, I could note that there are also frequent biomass burning events in the Amazon, but do we see their effects?
630 – Does this mean we are talking about a bias in your algorithm? I still do not understand how you intend to address the actual bias without introducing a validation AOD dataset such as MODIS, the original CALIPSO AOD retrieval, or AERONET (if some island cases are suitable for such a comparison). Can you reflect on this?
633 – 635 – What does the statistically significant difference between night and day observations tell us? A time-driven bias as well? If this is a serious issue that you are addressing in the paper, I think you need a dedicated ODCOD sub-section called 'Nighttime vs. daytime measurements'.
660 – 667 – "Some good quality low daytime data": please state how much, to make your experiment reproducible. It would also be desirable to illustrate this in a figure or table rather than only mentioning it in the text. The same applies to "modified the surface saturation filter": modified to what value? A single value, or a range of values to estimate the overall sensitivity? Please clarify.
669 – Did you state the original resolution of ODCOD earlier in the text (e.g., in the methodology)? Please point out where; if it is not there, a description of the resolution choices should be added to the methodology.
671 – Is it common practice to average SIAB horizontally? If not, provide some arguments; if so, provide references to previous CALIPSO-based studies.
680 – The language is unclear here; can you reformulate this sentence?
691 – In peer-reviewed studies, one normally describes a figure first and then provides the figure itself below.
695 – 700 – Please provide numerical arguments behind statements such as "noisy" (for example, compare the standard deviations in the text).
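As an illustration of the kind of quantification I am asking for (synthetic numbers, purely to show the comparison):

```python
import numpy as np

# Compare the scatter of single-shot retrievals against horizontally
# averaged ones; the series below is synthetic and purely illustrative.
rng = np.random.default_rng(0)
single_shot = 0.12 + 0.08 * rng.standard_normal(5000)  # noisy AOD series
averaged = single_shot.reshape(-1, 20).mean(axis=1)    # 20-shot averages

print(f"single-shot std: {single_shot.std():.3f}")  # ~0.080
print(f"20-shot avg std: {averaged.std():.3f}")     # ~0.080 / sqrt(20) = 0.018
```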
710 – What about correlations?
725 – A statistically significant difference here does not convey any valuable information, I think. What do you intend to infer from such a comparison? Also, mean differences may not be representative: you have three years of global data, populated seasonally in three-month blocks. Are you not interested in region-specific biases and correlations rather than in a median bias?
731 – Table 3 or 2? Also, why are you not interested in correlations? If your correlation is high, are you not interested in which AOD intervals, or perhaps which regions, contribute to the departure from perfect correlation, thereby degrading the linear relationship between ODCOD and the reference?
770 – 784 – Likewise, focus on correlations, seasonal biases, and regional differences, not on long-term mean differences.
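For example, the statistics I would expect per region and month are simply the following (synthetic inputs; `odcod` and `ref` stand for collocated AOD pairs within one region and month):

```python
import numpy as np

# Region- and month-resolved comparison: Pearson correlation plus mean
# bias for one region-month sample of collocated AOD pairs.
def region_month_stats(odcod, ref):
    r = np.corrcoef(odcod, ref)[0, 1]
    bias = float(np.mean(odcod - ref))
    return r, bias

rng = np.random.default_rng(1)
ref = rng.gamma(2.0, 0.07, 300)            # plausible AOD sample
odcod = ref + rng.normal(0.02, 0.05, 300)  # retrieval with bias + noise
print(region_month_stats(odcod, ref))      # roughly (0.9, 0.02)
```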
787 – What constitutes anomalous SODA data from a numerical point of view? Please indicate at least a range.
823 – "Result summary" is a very strange section name. Normally one would have "Discussion" followed by "Conclusions", or "Summary" then "Conclusions", or a combined "Discussion and Conclusions".
830 – I would still call it a discussion…
830 – 854 – This is a very odd paragraph. First, you say that future work could focus on comparing ODCOD with standard CALIOP AOD profiles, or with standard ODCOD-like SIAB-based AOD? It is unclear. If these are your future plans, simply move them, briefly, to the discussion.
855 – 910 – Please move this comparison to the validation section if possible. The structure of the manuscript becomes unconventional here: you showed results, then moved to a discussion of future work, and now you are showing new results again.
913 – Once again, it is unclear how this development differs from the He et al. (2016) or Venkata and Reagan (2016) attempts, apart from the fact that it will be the official Level 2 data product.
915 – "Multiyear" → name the exact years.
916 – Once again, I think testing the statistical significance of the difference between these datasets makes no sense from a remote sensing point of view. You can easily have two similar AOD density distributions with statistically insignificant differences (because AOD is distributed across the globe within the same range) but with large biases (due to regional differences) and a very low correlation with each other.
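A toy demonstration of this point (synthetic data):

```python
import numpy as np
from scipy import stats

# Two AOD fields drawn from the same global distribution but spatially
# scrambled relative to each other: identical distributions, no agreement.
rng = np.random.default_rng(2)
aod_a = rng.gamma(2.0, 0.07, 10000)
aod_b = rng.permutation(aod_a)  # same distribution, shuffled order

t_stat, p_val = stats.ttest_ind(aod_a, aod_b)
r = np.corrcoef(aod_a, aod_b)[0, 1]
print(f"t-test p-value: {p_val:.2f}")  # 1.00: means indistinguishable
print(f"correlation:    {r:+.2f}")     # ~0.00: no point-wise agreement
```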
931 – 940 – Do not over-generalize the conclusions about your own method (which, judging from the ample references provided here, you seem to do) using a very general rationale based on previous methods. Simply state, briefly, why your use of ODCOD is successful and useful for future studies. This conclusion should be based on the results of YOUR study, not on the general benefits of any SIAB-based AOD retrieval for CALIPSO.
Citation: https://doi.org/10.5194/amt-2024-23-RC1
- AC1: 'Reply on RC1', Robert Ryan, 16 Aug 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-23/amt-2024-23-AC1-supplement.pdf
- RC2: 'Comment on amt-2024-23', Anonymous Referee #2, 20 May 2024
This work essentially documents an algorithm that is now being used to produce aerosol column optical depth over oceans from CALIPSO surface returns and MERRA-2 10 m wind speed. The first part, which describes the method, is really better suited to an ATBD (Algorithm Theoretical Basis Document) than to a research paper. In general, it is a well-written and well-thought-out paper. A strength is that the authors present comparisons with MODIS, HSRL, SODA, and CALIPSO layer optical depths. Also important is their treatment of the CALIOP surface return: realizing that it can be saturated, and using a method that fits the instrument impulse response function to retrieve a more accurate measurement of the surface return magnitude. The filtering of suspect data based on surface signal magnitude, depolarization, and wind speed is well done.
However, I think the paper is too long; at around 50 pages it becomes a very tedious read. I would suggest breaking the paper into two parts: the first would present the ODCOD algorithm and the uncertainty analysis, and the second would present results and comparisons. This would allow a better and more in-depth description of the fitting of the surface return to the IRF (which I find confusing at present) and would let the results paper include more examples and comparisons. A comparison that should be added is against some island-based or coastal AERONET column optical depths.
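For what it is worth, my reading of the IRF-fitting idea, sketched with a Gaussian placeholder for the true CALIOP impulse response and entirely made-up numbers:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit an amplitude-scaled, time-shifted copy of the instrument impulse
# response to the samples around the surface spike. A Gaussian stands in
# for the true CALIOP IRF; all numbers are illustrative only.
def irf(t, t0, amplitude, width=0.1):
    """Idealized impulse response: a Gaussian placeholder."""
    return amplitude * np.exp(-0.5 * ((t - t0) / width) ** 2)

t = np.arange(0.0, 1.0, 0.02)  # sample times (arbitrary units)
truth = irf(t, 0.48, 5.0)      # "true" surface return
measured = truth + 0.05 * np.random.default_rng(3).standard_normal(t.size)

popt, _ = curve_fit(irf, t, measured, p0=(0.5, 1.0))  # fit t0 and amplitude
t0_fit, amp_fit = popt
print(f"fitted surface magnitude: {amp_fit:.2f} (true value 5.0)")
```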
- AC2: 'Reply on RC2', Robert Ryan, 16 Aug 2024
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2024-23/amt-2024-23-AC2-supplement.pdf