Intercomparison of upper tropospheric and lower stratospheric water vapor measurements over the Asian Summer Monsoon during the StratoClim Campaign
- 1 Dept. of the Geophysical Sciences, University of Chicago, Chicago, IL, USA
- 2 LATMOS, UVSQ, Sorbonne Université, CNRS, IPSL, Guyancourt, France
- 3 Forschungszentrum Jülich, Institut für Energie und Klimaforschung (IEK-7), Germany
- 4 National Research Council of Italy, Institute of Atmospheric Sciences and Climate (CNR-ISAC), Rome, Italy
- 5 Institute for Atmospheric and Climate Science, ETH Zürich, Zürich, Switzerland
- 6 Central Aerological Observatory of RosHydroMet, Dolgoprudny, Russian Federation
- a now at: Dept. of Environmental Science and Engineering, California Institute of Technology, Pasadena, CA, USA
- b now at: Empa, Laboratory for Air Pollution/Environmental Technology, Dübendorf, Switzerland
Abstract. In situ measurements in the climatically important upper troposphere / lower stratosphere (UTLS) are critical for understanding controls on cloud formation, the entry of water into the stratosphere, and hydration/dehydration of the tropical tropopause layer. Accurate in situ measurement of water vapor in the UTLS, however, is difficult because of low water vapor concentrations (< 5 ppmv) and a challenging low-temperature/low-pressure environment. The StratoClim campaign out of Kathmandu, Nepal in July and August 2017, which made the first high-altitude aircraft measurements in the Asian Summer Monsoon (ASM), also provided an opportunity to intercompare three in situ hygrometers mounted on the M-55 Geophysica: ChiWIS (Chicago Water Isotope Spectrometer), FISH (Fast In situ Stratospheric Hygrometer), and FLASH (Fluorescent Lyman-α Stratospheric Hygrometer). Instrument agreement was very good, suggesting no intrinsic technique-dependent biases: ChiWIS measures by mid-infrared laser absorption spectroscopy and FISH and FLASH by Lyman-α induced fluorescence. In clear-sky UTLS conditions (H2O < 10 ppmv), mean differences between ChiWIS and FLASH were only −1.42 % and those between FISH and FLASH only −1.47 %. Agreement between ChiWIS and FLASH for in-cloud conditions is even tighter, at +0.74 %. In general, ChiWIS and FLASH agreed to better than 10 % for 92 % (87 %) of clear-sky (in-cloud) datapoints. Agreement between FISH and FLASH to within 10 % occurred in 78 % of clear-sky datapoints. Estimated realized instrumental precision in UTLS conditions was 0.05, 0.1, and 0.2 ppmv for ChiWIS, FISH, and FLASH, respectively. This level of accuracy and precision allows the confident detection of fine-scale spatial structures in UTLS water vapor required for understanding the role of convection and the ASM in the stratospheric water vapor budget.
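The pairwise agreement statistics quoted in the abstract (a mean relative difference between two hygrometers and the fraction of points agreeing to within 10 %) can be computed as in the following sketch. This is an illustrative example on synthetic data only, not the campaign dataset or the authors' actual analysis code.

```python
import numpy as np

def agreement_stats(h2o_a, h2o_b):
    """Pairwise agreement statistics between two hygrometer time series.

    Returns the mean relative difference of a with respect to b (in
    percent) and the fraction of points agreeing to within 10 %.
    """
    rel_diff = (h2o_a - h2o_b) / h2o_b        # point-by-point relative difference
    mean_diff_pct = 100.0 * np.mean(rel_diff)
    frac_within_10pct = np.mean(np.abs(rel_diff) < 0.10)
    return mean_diff_pct, frac_within_10pct

# Synthetic example: instrument "a" reads 2 % lower than "b" everywhere
h2o_b = np.linspace(3.0, 8.0, 100)            # ppmv, clear-sky UTLS range
h2o_a = 0.98 * h2o_b
mean_diff, frac = agreement_stats(h2o_a, h2o_b)
print(f"mean difference: {mean_diff:+.2f} %, within 10 %: {100 * frac:.0f} % of points")
```

In the real intercomparison the two series would first be matched in time and filtered to the relevant regime (e.g. clear-sky, H2O < 10 ppmv) before computing these statistics.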
Clare E. Singer et al.
Status: final response (author comments only)
-
RC1: 'Comment on amt-2022-13', Anonymous Referee #1, 29 Mar 2022
Review of “Intercomparison of upper tropospheric and lower stratospheric water vapor measurements over the Asian Summer Monsoon during the StratoClim Campaign” by Singer et al.
Summary:
The manuscript by Singer et al. describes the comparison of three hygrometers onboard the Geophysica high altitude research aircraft during the StratoClim field campaign in 2017. Water vapor measurements in the upper troposphere and lower stratosphere are difficult, and intercomparisons of different instruments are an essential tool to provide confidence in the observations of each. This study also includes comparisons with balloon-borne soundings and space-based satellite observations.
The paper is an important contribution for the understanding of the water vapor observations during that campaign. It is overall well written and the analysis is detailed and of high quality. I would recommend publication after some minor modifications.
Major comments:
The study uses v4.3 for the comparison with MLS water vapor measurements. The MLS team has already released v5.0, which addresses some significant biases and drifts that do affect the comparison with the in situ measurements. I would strongly suggest using v5 instead of v4.3, as the results would look slightly different.
Minor comments:
I would suggest moving Figure S6 from the supplement to the main document. It is quite helpful in understanding the discussion of the cloud determination.
Since the authors refer to Figure S9 twice, it too could possibly be moved to the main text.
Lines 246ff: It would be quite useful to have an overview table listing, for each flight, the total flight hours, total number of data points, and total number of measurements in cloud and out of cloud. Without this, the numbers of data points given here lack perspective.
The comparisons in Figure 8 look quite good. Nevertheless, there may be some sampling biases in this comparison, since the balloon-borne measurements are typically launched only in non-precipitating conditions, and MLS is sensitive to cloud contamination in the UTLS. A short discussion about how this potential sampling bias may influence the comparison could be helpful.
Lines 374: Not unexpectedly, the comparison was not blind and there have been many interactions between the different teams during the campaign. I have confidence in each team, but it would still be good to know whether, and to what extent, instrument calibrations were adjusted during the campaign based on input from the other teams.
Technical comments:
Line 150: delete the stray comma.
Lines 269f: Better write: “During the stair stepping, there were two moist layers around …”
In Figure 5: I would suggest making the vertical axes for potential temperature and H2O the same between the upper and lower group of panels.
-
AC1: 'Reply on RC1', Clare Singer, 12 May 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-13/amt-2022-13-AC1-supplement.pdf
-
RC2: 'Comment on amt-2022-13', Anonymous Referee #2, 04 Apr 2022
Review of: Intercomparison of upper tropospheric and lower stratospheric water vapor measurements over the Asian Summer Monsoon during the StratoClim Campaign by Clare E. Singer and colleagues; https://doi.org/10.5194/amt-2022-13
Overview: This paper presents an intercomparison between the 3 in situ water vapour instruments that flew on the Geophysica aircraft for the 2017 StratoClim experiment based in Kathmandu. Included in the discussion is a comparison with an average profile from a balloon-borne hygrometer launched near the Kathmandu airport and from the Microwave Limb Sounder satellite instrument. The paper is well written and finds good agreement between the aircraft instruments, and also with the average satellite and frost point balloon profiles. I recommend publication, but offer just a few comments below.
1) The abstract states " In clear-sky UTLS conditions (H2O < 10 ppmv), mean differences between ChiWIS and FLASH were only -1.42% and those between FISH and FLASH only -1.47%. Agreement between ChiWIS and FLASH for in-cloud conditions is even tighter, at +0.74%. In general, ChiWIS and FLASH agreed to better than 10% for 92% (87%) of clear-sky (in-cloud) datapoints." I'm a bit confused between the order 1% values noted in the first sentence, and the 10% value noted in the second. Is the second sentence valid for cases where H2O is larger than 10 ppmv? Perhaps instead, the uncertainty needs to be included in the first sentence (i.e., what's the standard deviation on the -1.42% for ChiWIS and FLASH?) Please make this clear. Also, include the range in the statement on line 253 on page 10.
2) Figures in supplement: It would be easier to view on a laptop screen if you include the values on the horizontal axis for Figures S2 and S4.
3) Re: MLS, the most recent retrieval is version 5, which at least partially corrected for an instrumental drift. It would be worthwhile redoing the comparisons with the new retrieval.
-
AC2: 'Reply on RC2', Clare Singer, 12 May 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-13/amt-2022-13-AC2-supplement.pdf
-
RC3: 'Comment on amt-2022-13', Anonymous Referee #3, 08 Apr 2022
Review of “Intercomparison of upper tropospheric and lower stratospheric water vapor measurements over the Asian Summer Monsoon during the StratoClim campaign", by C.E. Singer et al.
The paper directly compares in situ water vapor measurements by three instruments aboard the M-55 Geophysica aircraft during the StratoClim campaign: ChiWIS, FISH and FLASH-a. Of the three, ChiWIS is the newest instrument, making its debut on an aircraft. The data evaluation is fairly straightforward except that FISH measurements must be divided into "clear sky" and "in cloud" since it does not discriminate between water vapor and condensed-phase water entering its sampling inlet. The paper concludes finding "no intrinsic technique-dependent biases". The paper compares not only the mixing ratio data from each instrument, but also RH data calculated using ambient temperature measurements. These RH values are also used to demonstrate the "realism" of RH-ice values calculated from the mixing ratio data relative to theoretical limits.
Overall, the paper is fairly well written, and reasonably concise with 10 of 18 figures relegated to a supplement. Some of the figures in the supplement are discussed in the main body text, making it cumbersome to flip between the two documents. My only three significant concerns are the following, with minor comments to follow.
The near-zero bias values between instruments over the full campaign are misleading because a much larger positive bias for one flight effectively "cancels out" a similar magnitude negative bias for another flight. For example, consider the FISH-FLASH biases for F4 and F6 in Figure 3: F4 shows a negative bias of 10-15% and F6 a positive bias of 10-15%. When combined over the entire campaign, the two biases basically add to zero. I would like to suggest that summary bias statistics for each instrument pair, for each flight, including uncertainties, be provided in a table. Then the reader can see not only the large biases during specific flights, but also how these biases change in both magnitude and sign from one flight to the next. Yes, this information can be gleaned from Figure 3, but having the statistics readily available in a table will make them much more accessible.
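The per-flight summary table the referee requests, and the cancellation effect described here, can be illustrated with a simple groupby over per-point relative differences. This is a hypothetical sketch with invented numbers and column names, not the campaign data:

```python
import pandas as pd

# Hypothetical merged dataset: one row per sample, with the relative
# FISH-FLASH difference already computed point by point.
df = pd.DataFrame({
    "flight":   ["F4"] * 4 + ["F6"] * 4,
    "rel_diff": [-0.12, -0.10, -0.14, -0.11,   # F4: roughly -12 % bias
                  0.11,  0.13,  0.12,  0.10],  # F6: roughly +12 % bias
})

# Per-flight bias statistics (mean, std, N), as suggested in the review
per_flight = df.groupby("flight")["rel_diff"].agg(["mean", "std", "count"])
print(per_flight)

# Campaign-wide mean: the opposite-signed flight biases largely cancel
print(f"campaign mean: {100 * df['rel_diff'].mean():+.2f} %")
```

With biases of opposite sign and similar magnitude, the campaign-wide mean lands near zero even though neither flight individually agrees well, which is exactly the referee's concern about full-campaign statistics.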
I'm sure others have pointed out that the MLS4.3 data used in the paper have now been replaced by v5 that reduces some of the "wet" biases that had crept into the older v4 retrievals. Although I don't find it essential to replace the MLSv4 data in the paper (e.g., profile statistics in Figure 8), it should be stated that similar profile statistics based on MLSv5 data would generally shift them to lower mixing ratios by 0.2-0.3 ppm. The "dry bias" mentioned in L348 would become even greater, while the "wet bias" (L349) would be reduced by v5. The importance of this MLS data to the comparison with the aircraft instruments would become much greater if some quantitative "coincidence" criteria with the aircraft flights were mentioned. How far apart in time and space were these MLS and Geophysica measurements made?
I am somewhat baffled by the treatment (and lack of use) of the 11 different CFH profiles that were obtained during the campaign. Was the only use of this data to consolidate 11 individual profiles down to one "statistical" (mean ± stddev) profile to compare with the aircraft data? Were the individual CFH profiles of little use in comparing to the aircraft data for individual flights? For example, would CFH profiles in coincidence with F4 and F6 help explain why the bias between FISH and FLASH changed sign between the two flights? If there is a good reason why the CFH profiles are used only to create one "statistical" profile it would be beneficial to include such a statement, otherwise the reader is left wondering why the balloon profiles play so little role in the intercomparison.
Minor comments:
Lines 10-11 vs line 12: This confused me when I first read the abstract. Overall mean differences are on the order of 1%, but then "In general, ChiWIS and FLASH agreed to better than 10% for 92% ...". I thought there was probably a typographical error. However, after reading the paper, I now understand why the overall 1% bias values are so small - the topic of my first significant concern above. I suggest re-working the abstract text describing overall (full campaign) bias statistics to be less misleading, as I think full-campaign statistics are not very meaningful when there are large flight-to-flight differences in the biases.
L10-11: Need to include uncertainty values for mean differences
L13: change "to" to "within"
L15: The detection of "fine-scale spatial structures" depends not only on high precision (not accuracy), but also on instrument response time. This sentence is a bit of "grandstanding" relative to the contents of this paper.
L18: With mixing ratios > 10,000 ppm at Earth's surface, water vapor is not a "trace gas"
L20: "that roughly doubles the anthropogenic warming from carbon dioxide alone" is a very mis-interpretable statement without additional context.
L26: Mole fractions are not "concentrations".
L36: Is water vapor a "pollutant" in Earth's atmosphere?
L42: "low" is such a relative adjective. Most trace gases exist at mixing ratios <5 ppm, some < 5 ppt, so why is 5 ppm considered "low" for water vapor? It is because water vapor can also be found in the atmosphere at >10,000 ppm. Anytime an instrument is carried from the very wet lower troposphere to the UTLS it is a challenge to measure the very dry air there. That's why at 5 ppm the water vapor mixing ratio is low.
L46: I would change "Although" to "Because"
L55: NOAA frost point hygrometer is also known as the NOAA FPH.
L64: Voemel and Hall published what can be described as "detailed uncertainty analysis of water vapor measurements by the CFH and FPH, respectively," rather than "an intercomparison"
L74: remove "with" before "within"
L81: Knowing that none of these three instrument directly measures RH, how can RH values provide accuracy information when the RH uncertainties are a combination of the uncertainties of simultaneous WV and T measurements?
L83: the second occurrence of "in situ measurements" can be omitted
L90: There have been previous in situ measurements in the ASM. For example, see https://doi.org/10.1016/j.atmosres.2022.106093
L219: "finite" is extremely vague here. Is the time needed on the order of 10s of seconds? 100s of seconds? Please be more quantitative here.
L236: "of better 1.5%" makes no sense.
Figure 3: This figure intrigues me because there are no tails on both sides of the distributions. Do the normalized PDFs really drop from viewable values to unviewable values in these panels? Why does the y-scale go from 10^-3 to 10^-5 in one increment when all the other increments are a single factor of 10?
L253: These mean differences (and every single one in the paper) should be accompanied by an uncertainty value, most likely two standard deviations of the mean.
Figure 4: Yellow is never a good choice for colored markers in Figures. It is difficult to tell where the yellow markers begin and end in this figure. The terminology "heat map" is new to me. What does it mean?
L282: You don't need pressure to calculate RH, only WV partial pressure and ambient temperature.
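The referee's point, that RH over ice follows from partial pressure and temperature alone, can be sketched as below. The Murphy & Koop (2005) ice saturation vapor pressure parameterization is used purely as an example; it is not necessarily the formulation the paper adopted:

```python
import math

def esat_ice(T):
    """Saturation vapor pressure over ice (Pa), Murphy & Koop (2005)
    parameterization, valid for T > 110 K. T in kelvin."""
    return math.exp(9.550426 - 5723.265 / T
                    + 3.53068 * math.log(T) - 0.00728332 * T)

def rh_ice(e, T):
    """RH over ice (%) from water vapor partial pressure e (Pa) and
    ambient temperature T (K) -- no ambient pressure required."""
    return 100.0 * e / esat_ice(T)

# Example near tropopause temperatures. Note that ambient pressure only
# enters earlier, when converting a measured mixing ratio chi to partial
# pressure (e = chi * p); the RH formula itself needs only e and T.
print(f"e_sat,ice(200 K) = {esat_ice(200.0):.3f} Pa")
print(f"RH_ice = {rh_ice(0.08, 200.0):.0f} %")
```

This also makes the next comment concrete: any error in T propagates through the strong temperature dependence of esat_ice into the derived RH, independently of the water vapor measurement accuracy.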
Figures 6, 7: I understand the purpose of showing values of RH-ice here against the theoretical limit to assess WV measurement accuracy, but with anything but the highest accuracy T measurements used to derive RH values, calculated RH values have uncertainty contributions from the T measurements as well as the WV measurements. If RH-ice slightly exceeds the theoretical limit, is the WV accuracy to blame? or the T accuracy? or both?
What I do not understand is the point-by-point comparison of RH-ice values from the three hygrometers. How does this analysis shed any more (or different) light on the previous comparison of WV mixing ratios? RH values are a derived quantity, while WV mixing ratios are the direct measurements. What does Figure 8 add that Figure 2 hasn't already shown?
L368: This sentence is far too "grandstanding" for what is discussed in this paper. How can the attribution of "good agreement" for the full campaign data, knowing that individual flight biases are high in some cases, and flip from negative to positive from one flight to the next, be the product of these two things? Please tone this sentence down as it is far too overreaching. It also makes it sound like you all colluded after the flights to make sure the data from different instruments agree with one another. It is clear these data sets were not prepared or submitted under "blind" conditions, but "robust communication between instrument teams during subsequent data analysis" probably makes it sound more conspiratorial than it was.
L374: I think "major progress" overreaches the findings of this comparison paper. Some instrument pairs were found to have large (10-15%) mean biases during some flights. That's not new. The biases sometimes changed sign from one flight to the next. Definitely not new. When all the flights are combined, the large biases observed for some individual flights were diminished by similar biases of opposite sign for other flights. This is fortunate, but not necessarily new. If you can demonstrate small flight biases for instrument pairs that are consistent over a campaign, now THAT would represent "major progress".
-
AC3: 'Reply on RC3', Clare Singer, 12 May 2022
The comment was uploaded in the form of a supplement: https://amt.copernicus.org/preprints/amt-2022-13/amt-2022-13-AC3-supplement.pdf