Sensitivity of aerosol optical depth trends using long term measurements of different sun-photometers
- 1 World Radiation Center, Physikalisch-Meteorologisches Observatorium Davos (PMOD/WRC), Dorfstrasse 33, 7260 Davos Dorf, Switzerland
- 2 Physics Department, ETH Zurich, Franscini-Platz 5, 8093 Zurich, Switzerland
Abstract. This work aims to assess differences in aerosol optical depth (AOD) trend estimations when using high-quality AOD measurements from two instruments with different technical characteristics and different operational (e.g. measurement frequency), calibration and processing protocols. The two types of Sun photometers are the CIMEL, which is part of AERONET (AErosol RObotic NETwork), and a Precision Filter Radiometer (PFR), which is part of the Global Atmosphere Watch Precision Filter Radiometer network. The analysis was performed for two wavelengths (500/501 nm and 870/862 nm for CIMEL/PFR) in Davos, Switzerland, for the period 2007–2019.
For the synchronous AOD measurements, more than 95 % of the CIMEL-PFR AOD differences are within the WMO accepted limits, showing very good measurement agreement and homogeneity in the calibration and post-correction procedures. The AOD trends for Davos over the 13-year analysis period were approximately -0.017 and -0.007 per decade at 501 nm and 862 nm (PFR), while the CIMEL-PFR trend differences were found to be 0.0005 and 0.0003 per decade, respectively. The linear trend difference at 870/862 nm is larger than the standard error of the linear fit. When monthly AODs are calculated using all PFR data (higher measurement frequency) and compared with the PFR measurements that are synchronous with the CIMEL, the trend differences are smaller than the standard error. The trend differences are also larger than the trend uncertainty attributed to the instrument measurement uncertainty, with the exception of the comparison between the two PFR datasets (high and low frequency) at 862 nm. Finally, when time-varying trends are calculated, their differences are within the trend uncertainties.
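As a rough illustration of the two checks summarized in the abstract, the sketch below (not the authors' code; the WMO limit of ±(0.005 + 0.010/m) and all variable names are assumptions made here) computes the fraction of synchronous CIMEL-PFR differences within the WMO traceability criterion and a simple per-decade least-squares trend.

```python
import numpy as np

def fraction_within_wmo(aod_a, aod_b, airmass):
    """Share of synchronous AOD differences within the assumed
    WMO traceability limit of +/-(0.005 + 0.010/m)."""
    diff = np.asarray(aod_a) - np.asarray(aod_b)
    limit = 0.005 + 0.010 / np.asarray(airmass)
    return float(np.mean(np.abs(diff) <= limit))

def trend_per_decade(years, monthly_aod):
    """Ordinary least-squares slope of a (de-seasonalized) monthly AOD
    series, expressed per decade."""
    slope, _ = np.polyfit(np.asarray(years), np.asarray(monthly_aod), 1)
    return 10.0 * slope
```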
Angelos Karanikolas, Natalia Kouremeti, Julian Gröbner, Luca Egli, and Stelios Kazadzis
Status: final response (author comments only)
CC1: 'Short Comment on amt-2022-181 - effect of optical airmass', Thomas Eck, 05 Jul 2022
I have a short comment related to the accuracy of the AOD analyzed in this paper. It is well known that sunphotometer measured AOD is proportional to the optical airmass (m) or pathlength through the atmosphere. The AOD error reduces by a factor of 1/m as m increases. This is reflected in your Figure 2 as the reduction in AOD differences between these two types of instruments as optical airmass increased. The most complete discussion of the accuracy of the AERONET measured AOD is given in Eck et al. (1999), where the uncertainty in measured AOD of field instruments is estimated to be 0.01 for airmass=1 (overhead sun) for visible and near-infrared wavelengths.
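For context, the 1/m scaling described above follows directly from the Beer-Lambert retrieval used in sun photometry. A standard derivation (not taken from the paper), with V the measured signal, V_0 the extraterrestrial calibration constant, m the optical air mass and τ the retrieved optical depth:

$$
V = V_0\, e^{-m\tau}
\;\;\Rightarrow\;\;
\tau = \frac{1}{m}\ln\frac{V_0}{V},
\qquad
\Delta\tau_{\mathrm{cal}} \approx \frac{1}{m}\,\frac{\Delta V_0}{V_0},
$$

so a given relative calibration error in V_0 contributes an AOD error that shrinks as the air mass grows.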
Therefore a potential additional analysis that could be added to this study to minimize the effects of calibration would be to utilize only data for m>3 for both instruments. Trends computed with this subset of data would therefore include only morning and afternoon data (excluding mid-day, although this would vary with season). In addition to a reduction of calibration biases between instruments by excluding mid-day data, there is also the added factor of excluding a significant portion of the mid-day data affected by fair weather cumulus clouds. All sunphotometer data sets are biased towards sampling low cloud fraction days with high atmospheric pressure. These days often show a diurnal cycle of cumulus cloud fraction related to the daily cycle of solar heating and associated convection and vertical mixing. Therefore an analysis of data with only m>3 or m>4 (in winter) would minimize the influence of a highly spatially and temporally variable cloud type on AOD (cloud edge contamination plus cloud influence on AOD itself; see Marshak et al., 2021), while also increasing AOD data accuracy. Of course the data sample size will decrease significantly for this large airmass subset of the data, but it should still provide for an additional informative aspect of this trend comparison for these two different instrument types which employ different measurement frequencies and cloud screening methodologies.
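A minimal sketch of the subsetting suggested in this comment (the DataFrame layout, column name and seasonal threshold handling are illustrative assumptions, not from the paper or the comment):

```python
import numpy as np
import pandas as pd

def large_airmass_subset(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only morning/afternoon observations: m > 4 in winter (DJF),
    m > 3 otherwise. Assumes a DatetimeIndex and an 'airmass' column."""
    m_min = np.where(df.index.month.isin([12, 1, 2]), 4.0, 3.0)
    return df[df["airmass"].to_numpy() > m_min]
```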
RC1: 'Comment on amt-2022-181', Anonymous Referee #1, 15 Jul 2022
General comments:
Overall, this is a well-written paper describing the inherent problems encountered when using multi-instrumental datasets for climate studies. This paper, despite being technical, addresses some important aspects for the scientific community, such as the suitability of merging databases from different instruments and the reliability of such products. AERONET-Cimel, as the largest network for the study of the climate impact of aerosols, and SKYNET-Prede, as the network most widespread in Asia, can combine their products to evaluate the impact that aerosols have on a global scale. In this regard, this paper introduces critical information to take into account for this purpose, with GAW-PFR as the reference instrument. In this work, the authors provide a detailed explanation of the intercomparison techniques (AERONET-Cimel versus GAW-PFR) as well as a precise description of linear trends and time-varying trends.
I consider that this manuscript fits perfectly into the scope of AMT and that the results presented here are relevant. There are only a few minor/technical remarks.
Minor/technical comments:
Abstract: The three last sentences of the abstract seem confusing and don’t help the reader to have an initial idea of the most relevant results of this work. One example can be found when the authors mention “all PFR data”, or the final sentence about time-varying trends. I consider the information in the abstract should be self-explanatory for the reader to understand at a first glance the most important outcomes of the paper.
Page 2, line 46: Please include a more suitable reference for the Cimel sunphotometers, i.e., Holben et al. (1998) and/or Giles et al. (2019).
Page 2, line 48: Please include the word “SKYNET” in this sentence for clarification.
Page 3, lines 85-87: It should be noted that this 13-year time series corresponds to Davos. Please also include a comma after “measurement frequency”.
Page 4, line 123: Are the three Cimel sunphotometers used in this study calibrated at Mauna Loa?
Page 4, line 24: There is a typo in Giles et al. (2019).
Page 5, lines 140-142: The authors enumerated in lines 70-73 different references in the literature aimed at introducing a threshold on the minimum amount of daily/monthly information needed to reliably perform statistical analysis. However, the authors finally set the threshold at 5 (3) to have a valid month (day) median. Is this threshold an empirical output of this specific study? Is this a recommendation for future studies?
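For reference, the screening rule being asked about could look like the sketch below (the thresholds are the ones quoted in the comment; the function and resampling details are illustrative assumptions):

```python
import pandas as pd

def monthly_medians(aod: pd.Series, min_per_day: int = 3, min_days: int = 5) -> pd.Series:
    """Daily medians from days with at least `min_per_day` measurements,
    then monthly medians from months with at least `min_days` valid days.
    Assumes `aod` has a DatetimeIndex of individual measurements."""
    daily = aod.resample("D").median()[aod.resample("D").count() >= min_per_day]
    return daily.resample("MS").median()[daily.resample("MS").count() >= min_days]
```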
Page 5, lines 151-153: The authors found 93.8% of Cimel-synchronous data to be cloud-free according to the PFR cloud screening algorithm. However, I don’t understand the next step. Do the authors calculate the AOD average (not the median value) of the complete data series and compare this value with the average of 93.8% of data? Please clarify.
Page 5, last sentence: The location of this sentence at the end of the section seems confusing. Do the authors have any reason to have placed it at the end of the section? Is it possible to include this information in Table 1?
Page 6, line 163: Can the authors explain briefly the way to de-seasonalize monthly medians?
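For reference, a common way to de-seasonalize monthly medians, offered here only as a hedged sketch of the standard climatology-subtraction approach and not necessarily the authors' exact procedure, is to subtract the multi-year mean (or median) of each calendar month:

```python
import pandas as pd

def deseasonalize(monthly: pd.Series) -> pd.Series:
    """Subtract each calendar month's multi-year mean (the seasonal
    climatology) from a monthly AOD series with a DatetimeIndex."""
    climatology = monthly.groupby(monthly.index.month).transform("mean")
    return monthly - climatology
```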
Page 7, line 196: Do the authors have any explanation for the different median/mean AODs in June at your site? Maybe it is related to the arrival of different air masses, and therefore some outliers are likely to occur at this time of year?
Page 8, line 207: Similarly to the last question, a clear departure of AOD differences from the WMO criterion of traceability is observed in 2019. Do you have any clue to explain this unexpected behaviour?
Page 9, Figure 3 caption and Table 3 caption: the coefficient of determination is sometimes written as an acronym and sometimes not. Please homogenize.
Page 11, Figure 5: Y-axis should be “de-seasonalized”, according to the text and figure caption.
Page 13, line 307: Is there an objective threshold for statistical significance in the DLM analysis? What does “low significance” mean?
Page 13, section 3.1.1 and conclusions: The fact that LSLR trends are not consistent (in terms of tendency and significance) with DLM trends makes the reader wonder about the suitability of including this analysis. The different tendencies can certainly be explained by the fact that trends are not monotonic, as stated in the paper, and this type of analysis (DLM) therefore seems more adequate to study the long-term fluctuation of a variable like AOD. However, the DLM results lack statistical significance. Have the authors checked for the existence of break-points in the de-seasonalized monthly AOD data series in Fig. 5? Looking at these figures, a real change in the trend during the last part of the period could be discerned, which could add weight to the positive trend found in the DLM analysis.
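One quick, non-authoritative way to look for the break-points suggested here, complementary to the paper's DLM analysis rather than a substitute for it, is a rolling-window least-squares slope of the de-seasonalized monthly series (window length and names below are illustrative assumptions):

```python
import numpy as np
import pandas as pd

def rolling_trend(deseasonalized: pd.Series, window: int = 60) -> pd.Series:
    """Least-squares slope (per decade) within a sliding window of `window`
    months; abrupt changes in this curve hint at possible break-points."""
    t = np.arange(window) / 12.0  # years within the window
    return deseasonalized.rolling(window).apply(
        lambda y: 10.0 * np.polyfit(t, y, 1)[0], raw=True
    )
```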
Pages 14-15, conclusions: In general, this section seems confusing for the reader, with some redundant information that might be removed. Some examples can be found at page 15, line 359 ("as mentioned earlier") or page 15, line 363 (redundant sentence about the trend uncertainty). Furthermore, the time sequence of the writing does not correspond to the timeline of this paper.
Page 14, line 346: What do the authors mean by a decline after mid-2000? Linear trends are observed to be negative in the whole 2007-2019 period.
Page 15, line 351: "Dynamic Linear Modeling" appears here without its acronym, although the acronym is used many times in this section (including DLM in lines 362, 364, 366 and 367).
Page 15, line 361: Please correct the typo “0.00 at 862 nm”
RC2: 'Comment on amt-2022-181', Anonymous Referee #2, 29 Jul 2022
General Comments
The paper is very well written and structured. The study is important, and it provides a method that can be applied to analyze products among different networks. A desirable outcome would be to apply this method to other sites where intercomparison campaigns are performed. The paper fits the objectives of the journal.
Technical corrections
In addition to what was already highlighted by RC1:
- Explain in the text, for readers who are not in this field, the level of the downloaded AERONET AOD data.
- Line 145: in this part of the text it is not clear what the PFRhf dataset is used for, even though it is explained clearly later.
- Line 207: any reason for the larger deviation in 2019?