This work is distributed under the Creative Commons Attribution 4.0 License.
The SPARC Water Vapor Assessment II: assessment of satellite measurements of upper tropospheric humidity
William G. Read
Gabriele Stiller
Stefan Lossow
Michael Kiefer
Farahnaz Khosrawi
Dale Hurst
Holger Vömel
Karen Rosenlof
Bianca M. Dinelli
Piera Raspollini
Gerald E. Nedoluha
John C. Gille
Yasuko Kasai
Patrick Eriksson
Christopher E. Sioris
Kaley A. Walker
Katja Weigel
John P. Burrows
Alexei Rozanov
Download
- Final revised paper (published on 09 Jun 2022)
- Supplement to the final revised paper
- Preprint (discussion started on 05 Nov 2021)
- Supplement to the preprint
Interactive discussion
Status: closed
-
RC1: 'Review of “The SPARC water vapor assessment II: Assessment of satellite measurements of upper tropospheric water vapor” by William Read et al.', Anonymous Referee #1, 30 Nov 2021
In this study, the authors present the results of a comprehensive intercomparison of water vapor based on twenty-one satellite-based measurements, frost point hygrometer balloon sondes, and Vaisala RS92 radiosondes in the upper troposphere, covering the pressure range from 300 to 100 hPa. The comparison methodologies used in this study are robust, and the results are well presented. This work will certainly be used as an essential reference by a wide scientific community. However, there are some technical details of the work that I would recommend the authors revisit before it is ready for final publication. Please find my general and specific comments below.
General Comments:
- I believe the authors have vast knowledge of water vapor measurements in the upper troposphere and stratosphere. General comments on the importance of long-term measurements of upper tropospheric and stratospheric water vapor for climate and climate change would be appreciated.
- I believe newer versions of data from some of the instruments used in this study, e.g., MLS, have recently become available. I think it would be useful to add some insight into how the retrieval version of each instrument was chosen and how sensitive (or insensitive) the results presented here are to the different data versions. A related comment about newer-generation instruments not included in this study, e.g., SAGE III on ISS, could potentially be added as well.
- On technical aspects, please revisit the color schemes of the line figures and try to find a way to emphasize the lines that are noteworthy. This will help identify the key results in this work more easily.
- I think adding references where it is relevant would greatly improve the richness of this manuscript. I think many of them are missing. Some of them are pointed out in my specific comments.
Specific Comments:
P1, Abstract: I would recommend using consistent nomenclature for water vapor instead of using three separate ones, e.g., humidity, water vapor and H2O. This applies to the rest of the manuscript.
P1, L10: Here, do ‘average ~30% agreement’ and ‘additional ~30% variability’ refer to the differences in the averages of the data?
P2, L17: increase in satellite missions -> increase in the number of satellite missions. It would be necessary to add a sentence here mentioning that not only the number of instruments but also the measurement techniques have improved since 2000.
P2, L17: It would be helpful to add a few references at the end of this sentence.
P2, L19: Again, citations are needed at the end of this sentence.
P2, Section 2: References for the Vaisala-RS92 are needed here. It would be helpful to add one more column at the end of Table 1 to show references for each satellite data set.
P4, L44: I would recommend rewriting this sentence for clarity.
P4, Section 3: I would recommend adding time periods for the comparison as each satellite measurement covers different time periods. Also how was the quality control done for each data set?
P4, L47: ‘the weighting…sounder’ – It would be useful to have this as a number in km.
P4, L49: I am curious to know if all instruments are prone to drifts in the water vapor retrievals or it only applies to a certain technique.
P4, L56: ‘Many scientific…’ - A few references for these types of studies would be useful here.
P6, L92: What are the time periods of data used in Fig. 3?
P7, Fig. 2: I think this figure looks crowded because of all the equations included in the figure. I am wondering if this figure and the explanation in the text (L80-90) can be simplified.
P7, L113: I am wondering why this is true only for the MIPAS-Oxford retrieval.
P7, L115: ‘support the above result’ – A reference for this statement would be useful.
P10, L156: I believe MLS v5 is publicly available. A relevant citation would be helpful here.
P12, Fig. 6: It would be helpful to add a comment about the outliers in this figure.
Figs. 6 & 7: It is hard to see the MLS-Aura lines (yellow) in these figures. Use a different color for MLS-Aura to emphasize the feature. This can be done by switching the color with another instrument.
P19, Fig. 12: Is the better agreement in Boulder related to the higher number of available measurements there compared to the other stations?
Figs. 14 & 15: I am wondering what the best-fit lines for 178 and 147 hPa would look like and if it is still useful to show.
P21, section 4.3: Time periods used in this comparison should be mentioned here as different satellites cover different time periods.
P21, L220: A reference for MERRA is needed here and elsewhere.
Figs. 16-18: It is interesting to note that only the MLS vs. AIRS comparison in Fig. 16 shows the second peak in the pdf distribution below 178 hPa when the air is dry (< 10 ppmv).
P23, L240: ‘and inter…down’ – A little more explanation about this would be helpful.
P23, Section 4.4: It would be helpful to add a paragraph describing the differences that one can expect between MLS and AIRS because of the differences in the measurement techniques and sampling patterns, etc. This is described in the first paragraph on Page 29, which can potentially be moved to the beginning of Section 4.4.
P32, Section 5: Figure 23 contains a lot of information and I think it is extremely useful. I think it would help to split this section into discussion and conclusion or subsections for simplicity. This section contains many details that need time to digest.
Citation: https://doi.org/10.5194/amt-2021-300-RC1 -
AC1: 'Reply on RC1', William Read, 09 Feb 2022
I have generally incorporated the recommendations presented except for the following.
1. We (the submitters to the special issue) met last summer and decided not to add new versions to the assessment. This is because a number of papers are already submitted and accepted, and the desire is for homogeneity in the data discussed in the special issue. So we decided that the issue will focus on the versions that were in place 2 years ago.
2. The line color scheme used was also a group decision made to be uniform across all the papers, thus to maintain uniformity, I will not change this.
I have incorporated the other changes, including the addition of references and the restructuring of sections as requested.
Citation: https://doi.org/10.5194/amt-2021-300-AC1
-
RC2: 'Comment on amt-2021-300', Anonymous Referee #2, 07 Dec 2021
Review of
"The SPARC water vapor assessment II: Assessment of satellite measurements of
upper tropospheric water vapor" by Read et al.
Overview
--------
The paper presents a comprehensive analysis of available satellite datasets
covering water vapour measurements in the upper troposphere, using balloon
frost point hygrometers as the main reference source, but also
satellite-to-satellite intercomparisons. The analysis is certainly comprehensive
and the authors have taken care to explain all the assumptions and caveats
associated with their results.

I have only a few significant suggestions (along with a number of minor points)
which I think may improve the paper, but I leave it to the authors to decide
whether or not to incorporate these.

1) While the problem that in-situ measurements sample just a single location
within the volume of the satellite measurement is discussed, it is slightly
waved off as an inevitable cause of discrepancy without any attempt to assess
the magnitude of H2O variability, apart from a reference to 20-30%
variability from MOZAIC. I think it would have been interesting to see, for
the BFH sites chosen, just how consistent the day-to-day values are when
averaged over, say, the vertical resolution presented in the plots. This
would give a useful upper limit to the contribution of local atmospheric
variability to the discrepancies (assuming that over 24 hours the wind blows
air a much larger distance than the satellite resolution) and could, perhaps,
replace the rather artificial example used for Fig 2.

2) (specifically the gridded map comparisons) Wouldn't you expect MLS to be more
'moist' than AIRS? Presumably MLS is sampling over the whole volume, cloudy
or not, while AIRS - even though nadir-viewing - is still restricted to
cloud-free, and therefore presumably, drier volumes? On the other hand I
would not expect AIRS to be retrieving H2O in the drier regions above clouds
either (assuming the presence of a cloud in nadir-viewing flags the whole
spectrum as unusable), while MLS would, so perhaps above cloud tops MLS
would become drier than AIRS. Perhaps this contributes to the MLS
'exaggeration' of AIRS?

3) The conclusions section is far too long (over 4 sides of text). I would
suggest moving much of the material to summary sections at the end of each
type of comparison and limit the conclusions to a discussion of Fig 23 and
recommendations.
Typos, minor comments & general pedantry
-----------------------------------------
P1 L1: SCIAMACHY is neither occultation nor passive thermal.
P1 L3: Another major problem with this region (less so for the microwave
instruments) is the presence of clouds.
P1 L4: 'per million BY volume' ?
P2 L16: 'And' should be capitalised, according to sparc-climate.org
P2 L18: 'there HAS been'
P2 L19: Can it be a 'reassessment' if these are 'new' resources which,
presumably, have not yet been assessed?
P2 L25: Table 1 actually has 25 entries under 'Data Set' rather than 21, since
you have split all four MIPAS data sets into 2.
P2 L28-30 and L34-38: I'd prefer these lists of data sites in table form, which
would be much easier to read and refer back to. Maybe even a map?
P3 Table 1: might be worth distinguishing between nadir and limb-viewing, which
are two very different techniques within the 'IR Thermal Emission'
category.
P4 L41: Perhaps mention here what units are used for H2O concentration (ppmv?)
If the original data were in different units, presumably they have all
been converted to the same at this stage?
P4 L45: Maybe a colon after 'methods' rather than a comma, and a new paragraph
for 'coincident comparisons' since the other two methods merit new
paragraphs.
P4 L61: 'sun-synchronous' is hyphenated here, but not in L51.
P4 L63: Section headers seem to have gone a little awry. S4 is titled
'Coincident Comparisons' but includes subsections on the two other
comparison methods.
P5 Fig1: Could probably have been incorporated into Table 1.
P6 L71: '20-30%' needs an en-dash in here, ie '--' in LaTeX.
P6 Fig2: not sure this figure is necessary to explain a Gaussian distribution
P6 L81-90: I take the point but I feel there ought to be a mathematically
robust method of showing this, otherwise how do we know it's not
just a characteristic of this particular sample?
P8 Fig 3: Although averaging kernels have not been applied to the BFH data it
may be instructive to superimpose the corresponding points if, say,
triangular averaging kernels of different widths had been applied since
this may account for some of the inability of the satellites to track
the extreme values (ie accounting for vertical but not horizontal
atmospheric variability).
P9 Fig 4: Use 'BFH' rather than 'FP' in plot titles (also for Fig 5).
It would be useful to give some indication of the FOV size of the
different instruments eg in the figure caption, and how this maps
(approximately) on to the vertical scale used here. This would give the
reader some idea of the degree of smoothing expected from the averaging
kernels.
P9 L120-127: In order to make the Figures 'standalone', most of this should be
in the caption to Fig 5 rather than here.
P14 L180: 'also HAVE a seasonal cycle'
P14 L189: An obvious question arising here is: how well do the Vaisala and BFH
instruments agree with each other? Presumably that's covered in the
Dirksen et al. paper, but no reason not to produce your own analysis or
reproduce theirs.
Fig 12: A more elegant way of displaying amplitude and phase would be polar
plots for each pressure level, amplitude indicated by length of
radius. In this way phase differences between low amplitude fits are not
exaggerated as they are when plotted separately from amplitude.
P21 L200 (and subsequently in this paragraph): hyphen introduced within RS92,
inconsistent with elsewhere (and the Vaisala web-site, which has no hyphen).
P21 L203: Unnecessary (and inconsistent) capitalisation of Satellite.
P21 L219: 'based' is superfluous here
P21 L220-225: As with Fig 5, I think it would be better to have this information
in the Figure caption.
P22 Fig14/15 captions: should make it clear that the scatter plot is only for MLS.
P22 Fig 15 caption: Sodankyla mis-spelt, also missing the umlaut over the
final 'a'.
P24 Fig 16 caption: some people might argue that it should be 'data ... are
shown', but I'm not one of those people.
P23 L228: Given that ACE-FTS only measures at local sunrise/sunset, while
MLS-Aura at local 13:45 (and presumably 01:45), how are there more
MLS-ACE coincidences at the equator than MLS-MIPAS?
P23 L241: You seem to have shifted to a different set of pressure surfaces for
the gridded maps (ie 50 hPa increments). Any reason for the change?
P23 L251: Space within '10ppmv' for consistency.
P24 L29: Again, a new pressure level, 175 hPa.
P29 L265: Perhaps semicolon instead of comma after 'maritime'
P29 L270: South Atlantic Anomaly should be capitalised (at least Atlantic should
be).
P30 Figure caption: also mention the time period used for averaging.
P32 L314 (also P33 L360 & P36 L424): Use LaTeX $-$ rather than hyphens for minus signs.
P34 L379: Comma after 'closing'.
P34 L381 & L393: 'Thermal lapse'? Do you mean negative temperature gradient?
P34 L393: Use en-dash: 10--20 ppmv
P36 L406: Comma after 'troposphere' (to match that after 'instruments')
P36 L431: I'm not sure this is due to forward model differences. In my
experience it is most likely due to the inverse models, ie
regularisation, a priori assumptions, convergence criteria etc.
Citation: https://doi.org/10.5194/amt-2021-300-RC2 -
AC2: 'Reply on RC2', William Read, 09 Feb 2022
Thank you for the review.
I have incorporated the changes requested, except for the following.
I am not convinced that looking at day-to-day variability of a volume-averaged measurement is the same as a measure of the variability within the volume, which is what I need to know. Also, the satellite overpass repeat time is approximately 5 days. Thus I haven't added this.
I agree that plotting the phase and amplitude information in a polar-style plot is a more elegant way to present this information. I have decided not to do this because it would further delay this already late publication by a month, and a separate panel would be needed for each height. The current format displays the information as a vertical profile.
All other concerns are addressed.
Citation: https://doi.org/10.5194/amt-2021-300-AC2