Validation of methane and carbon monoxide from Sentinel-5 Precursor using TCCON and NDACC-IRWG stations
Mahesh Kumar Sha
Bavo Langerock
Jean-François L. Blavier
Thomas Blumenstock
Tobias Borsdorff
Matthias Buschmann
Angelika Dehn
Martine De Mazière
Nicholas M. Deutscher
Dietrich G. Feist
Omaira E. García
David W. T. Griffith
Michel Grutter
James W. Hannigan
Frank Hase
Pauli Heikkinen
Christian Hermans
Laura T. Iraci
Pascal Jeseck
Nicholas Jones
Rigel Kivi
Nicolas Kumps
Jochen Landgraf
Alba Lorente
Emmanuel Mahieu
Maria V. Makarova
Johan Mellqvist
Jean-Marc Metzger
Isamu Morino
Tomoo Nagahama
Justus Notholt
Hirofumi Ohyama
Ivan Ortega
Mathias Palm
Christof Petri
David F. Pollard
Markus Rettinger
John Robinson
Sébastien Roche
Coleen M. Roehl
Amelie N. Röhling
Constantina Rousogenous
Matthias Schneider
Kei Shiomi
Dan Smale
Wolfgang Stremme
Kimberly Strong
Ralf Sussmann
Osamu Uchino
Voltaire A. Velazco
Corinne Vigouroux
Mihalis Vrekoussis
Pucai Wang
Thorsten Warneke
Tyler Wizenberg
Debra Wunch
Shoma Yamanouchi
Yang Yang
Minqiang Zhou
Download
- Final revised paper (published on 28 Sep 2021)
- Preprint (discussion started on 06 Apr 2021)
Interactive discussion
Status: closed
- RC1: 'Comment on amt-2021-36', Anonymous Referee #1, 22 Apr 2021
The manuscript “Validation of Methane and Carbon Monoxide from Sentinel-5 Precursor using TCCON and NDACC-IRWG stations”, submitted to Atmos. Meas. Tech. by Sha et al., describes the validation of two atmospheric data products generated operationally from the radiance spectra measured by the TROPOMI instrument onboard the Sentinel-5 Precursor (S5P) satellite. The manuscript covers an important topic appropriate for Atmos. Meas. Tech., contains new material, and is well written. I therefore recommend publication after the comments listed below have been considered by the authors.
General comments
The manuscript is very (too?) long: it consists of 84 pages including 37 figures. I do not insist on shortening it, but for readers (and reviewers) a split into two publications, one on methane and one on CO, would have been better, I think. Please consider this for a possible follow-on publication.
The paper covers several new and important aspects. Other validation papers addressing similar topics typically highlight the need (based on theoretical considerations) to apply one or more corrections to the satellite and/or the ground-based data before they can be meaningfully compared. These corrections primarily account for differences in vertical sensitivity (averaging kernels) and a priori profile assumptions in the retrievals. In addition, there are other corrections, especially corrections related to altitude, i.e., surface topography variations. Sha et al. also describe and apply these corrections, but in addition they present “direct comparison” results obtained without correction for vertical sensitivity and different a priori profiles (while still including an altitude correction).

I like this very much for several reasons. One reason is that one needs to know how large a correction is in order to judge whether the validation critically depends on (is dominated by) the correction. Another, related reason is that many users will use the data “as is”, and for this they need to know how good the data quality is without more or less complex corrections having been applied. The information contained in this manuscript provides users with relevant information on this aspect, which is good.

My interpretation of the results presented in this manuscript is that already a simple direct comparison gives meaningful validation results, as the differences with respect to results obtained with correction are small. As shown in the paper (Tables 4-7), the agreement between the satellite and the ground-based data is often (depending on the metric and on which data are compared) even better for the uncorrected data, i.e., for direct comparisons; e.g., Table 4 shows that all three parameters (correlation coefficient, mean bias and standard deviation of the bias) are better for the direct comparison (see “Mean all stations”). This is not what one would expect for a meaningful correction. How can it be that a correction leads to worse agreement between two data sets than a direct comparison without any correction? That this is the case (as shown in the paper) is very likely related to the fact that the correction is so small (compared to other, more relevant effects) that it is essentially “not significant” (probably the error of the correction is of the same order as, or even larger than, the correction itself). From the results shown in the paper I conclude that the effect of the correction is “plus/minus” in the sense that for certain aspects (or validation sites) the agreement gets better, while for others it gets worse, so that the overall effect is essentially zero. Do the authors agree with this interpretation of their results? In other words, do the authors agree with my conclusion that a meaningful validation of the S5P methane and CO products by comparison with TCCON and NDACC is also possible via simple direct comparison, i.e., that a meaningful validation does not require correcting the data for vertical sensitivity and/or different a priori profiles? Please comment on this in the conclusion section of the manuscript, as I think that statements related to this “correction aspect” would make the paper even more useful.
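As background for this point, the usual column-smoothing adjustment (in the spirit of Rodgers and Connor, 2003) can be sketched in a few lines of Python; all names, units and the pressure-weighting operator below are illustrative assumptions rather than the manuscript's actual implementation:

```python
import numpy as np

def smooth_with_column_ak(x_ref, x_apriori, col_ak, h):
    """Map a reference (e.g. ground-based) profile into the vertical
    sensitivity of a satellite column retrieval.

    x_ref     : reference mole-fraction profile           shape (nlev,)
    x_apriori : satellite a priori profile                shape (nlev,)
    col_ak    : satellite column averaging kernel         shape (nlev,)
    h         : pressure-weighting operator, sum(h) == 1  shape (nlev,)
    """
    c_apriori = np.sum(h * x_apriori)  # a priori column average
    # Column average the satellite would ideally report for x_ref:
    return c_apriori + np.sum(h * col_ak * (x_ref - x_apriori))
```

Comparing the satellite column against this smoothed value, instead of against the raw ground-based column, removes the contribution of vertical sensitivity and a priori differences; the question raised above is how much this matters in practice.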
The Abstract contains this sentence starting at line 6: “In this paper, we present for the first time the S5P CH4 and CO validation results …”. The sentence is long and lists a number of aspects, so it is not entirely clear what is meant by “for the first time”. Please modify this sentence to clarify whether this statement refers to only a subset (or only one) of the many aspects listed. In any case, this is not the first paper that reports on the validation of S5P CH4 and CO. The planned publication does not cite peer-reviewed publications relevant for the context of this publication. This is a major shortcoming and not easy to understand, taking into account that many of the listed co-authors should be aware of relevant publications which are not cited (e.g., due to co-authorship). It needs to be mentioned that there is at least one other algorithm and corresponding S5P methane and CO data set which is published, available, validated using TCCON and used to address important scientific applications, namely the S5P WFMD methane and CO data products generated by the University of Bremen. To address this, at least the following papers need to be cited (see References): Schneising et al., 2019, 2020a, 2020b; Vellalassery et al., 2021.
Because there is not only one S5P methane and CO product, please add in the abstract which products (including version numbers) are validated in this publication.
In this context, the paper of Lorente et al. (2021) is also relevant. This paper is mentioned in the Sha et al. manuscript, which is good. Lorente et al. (2021) also report on a bias-corrected S5P methane data product and show detailed validation results. Does the Sha et al. manuscript use a methane data product with the same bias correction as that used by Lorente et al. (2021)? If not, what is the difference between these products? Please add a short explanation so that it is clear to the readers how the two bias-corrected S5P methane products are related.
Specific comments
Abstract, page 2, line 20: Statement “We found that the required bias …”. There is no “required bias”, only a requirement that the bias should be less than a certain value. Please modify this sentence.
Abstract, page 3, line 33 and following: Sentence “The validation results for the clear-sky and cloud cases of S5P pixels are comparable to the validation results including all pixels …”. This may be true for the validation results, but is this really a statement about the quality of the S5P CO product under cloudy conditions, taking into account that the ground-based data are limited to cloud-free conditions (and the fact that S5P cannot look through clouds)?
Page 6, line 154 and following: Concerning the 7% scaling factor and the use of the TCCON CO data. I understand that the TCCON CO data used here are the products publicly available from the TCCON archive. Are these data already scaled, and if yes, how has the scaling factor been removed (e.g., is it a constant, or does the product also contain the unscaled values)? Please add this information on the content and use of the input data.
Page 8, beginning of Sect. 3: The S5P operational CO product needs to be (has been) transformed into another product before it can be compared with TCCON. The transformation depends on space and time, is large, and is not simply a conversion between physical units. It is unclear to me whether a meaningful validation is possible under these circumstances. At least it needs to be clearly stated that a direct comparison (validation) is not possible, as the S5P and TCCON products differ significantly.
Page 8, line 197: Sentence “Ps and TCH2O are taken from the S5P files”. Are these two quantities retrieved parameters, or where do they come from? How accurate are they?
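For orientation, converting a total column into a column-averaged dry-air mole fraction from Ps and the water vapour column is commonly done along the following lines; this minimal Python sketch (constant gravity, illustrative constants and names) shows the general approach and is not necessarily the formula used in the S5P processing:

```python
G = 9.80665        # standard gravity [m s-2]
M_DRY = 0.0289644  # molar mass of dry air [kg mol-1]
M_H2O = 0.0180153  # molar mass of water [kg mol-1]

def column_average_mole_fraction(tc_gas, tc_h2o, p_surf):
    """Convert a trace-gas total column [mol m-2] into a column-averaged
    dry-air mole fraction, given the water vapour column [mol m-2] and
    the surface pressure Ps [Pa].  Returns a value in ppb."""
    # Dry-air column: total air mass column (Ps/g) minus the water mass
    # column, converted to moles of dry air.
    n_dry = (p_surf / G - tc_h2o * M_H2O) / M_DRY
    return tc_gas / n_dry * 1e9
```

Because the result depends directly on Ps and TCH2O, errors in those two inputs propagate into the derived mole fraction, which is why their origin and accuracy matter here.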
Page 9, line 230 and following: I do not understand the colocation method. It is mentioned that an “effective location” of the FTIR sites is used, depending on the line of sight. Is this effective location a fixed point on the Earth's surface for a given satellite overpass? How is this location determined, taking into account that the FTIR observations cover a certain period and there may be gaps at certain times due to clouds etc.? If not, what does “a radius of 100 km” mean? What exactly are the “co-located pairs”? Are they an average of several FTIR observations and an average of several S5P retrievals? If a pair corresponds to an average of several S5P retrievals (as mentioned in the paper), then the standard deviation of the difference cannot be directly related to the required random error or precision (as done in the paper, see line 253 and following), as this requirement refers to the random error of single-ground-pixel retrievals and not to averages. Please clarify.
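To make the question concrete, a baseline colocation without any line-of-sight “effective location” could look like the Python sketch below; the 100 km radius is taken from the manuscript, while the fixed station coordinates, the one-hour time window and all names are illustrative assumptions:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres (inputs in degrees)."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def colocate(sat_lat, sat_lon, sat_time, site_lat, site_lon, fts_time,
             radius_km=100.0, max_dt_s=3600.0):
    """Pair each satellite pixel within radius_km of the (fixed) station
    with the closest-in-time FTIR measurement, if one exists within
    max_dt_s seconds.  All times are seconds since a common epoch."""
    pairs = []
    for i in np.where(haversine_km(sat_lat, sat_lon,
                                   site_lat, site_lon) <= radius_km)[0]:
        j = int(np.argmin(np.abs(fts_time - sat_time[i])))
        if abs(fts_time[j] - sat_time[i]) <= max_dt_s:
            pairs.append((i, j))
    return pairs
```

Whether the pairs are then averaged per overpass (as the manuscript suggests) or kept as individual pixel-measurement pairs changes what the standard deviation of the differences represents, which is exactly the ambiguity raised above.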
Page 17, line 515: Sentence “This result confirms the previously reported studies”. What exactly is confirmed here? The exact value, or only that it is negative? Please clarify.
Page 24, line 758: Sentence starting with “We found that the systematic difference between the S5P …”: The listed uncertainties of, for example, +/-0.57% for the bias-corrected data are not well justified, I think. These numbers are computed as mean values of the standard deviations of (averaged) S5P retrievals minus TCCON data per site. This uncertainty is related more to random errors than to systematic errors. I think that the standard deviation of the biases obtained for the various TCCON sites is more appropriate. I therefore recommend adding to all tables whose last row lists mean values (“Mean all stations”) another row with “Standard deviation all stations” and using these values as the uncertainties.
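The suggested statistic is simple to compute; in this minimal Python sketch the per-station bias values are purely hypothetical numbers for illustration:

```python
import numpy as np

# Hypothetical per-station mean biases in percent, for illustration only.
station_bias = np.array([-0.4, 0.1, -0.8, 0.3, -0.6, -0.2])

mean_all_stations = station_bias.mean()      # existing "Mean all stations" row
std_all_stations = station_bias.std(ddof=1)  # suggested extra row:
                                             # "Standard deviation all stations"
```

The spread of the station means captures how consistently the product is biased across the network, which is the systematic component the reviewer is after, whereas the per-site scatter of individual differences reflects random error.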
Page 39, Fig. 1: I suggest improving the x-axis annotation. If I understand correctly, for each co-location pair the relative difference is computed as (SAT-GB)/GB*100, i.e., as a percentage difference. On the left of Fig. 1 the mean of this difference is shown (for each site) and on the right the standard deviation of this difference. So “Mean difference [%]” and “Standard deviation of the difference [%]” would be clearer, I think. The annotation for the data in the middle is also potentially misleading, as the unit percent may suggest that a percentage difference is shown, whereas, if I understand correctly, the data show the absolute difference of two numbers, each of which has the unit percent. Perhaps this can be clarified in the figure caption.
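For clarity, the per-site quantities as read by the reviewer would be computed roughly as follows (a minimal Python sketch with assumed names):

```python
import numpy as np

def per_site_difference_stats(sat, gb):
    """Relative difference per co-location pair, (SAT - GB) / GB * 100,
    and its mean and standard deviation at one site, matching the
    suggested labels "Mean difference [%]" and "Standard deviation of
    the difference [%]"."""
    sat, gb = np.asarray(sat, float), np.asarray(gb, float)
    rel = (sat - gb) / gb * 100.0
    return rel.mean(), rel.std(ddof=1)
```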
Page 41, caption Fig. 3: Please add that the time resolution of the data shown here is weekly.
Typos
No typos have been identified.
References
Lorente, A., Borsdorff, T., Butz, A., Hasekamp, O., aan de Brugh, J., Schneider, A., Wu, L., Hase, F., Kivi, R., Wunch, D., Pollard, D. F., Shiomi, K., Deutscher, N. M., Velazco, V. A., Roehl, C. M., Wennberg, P. O., Warneke, T., and Landgraf, J.: Methane retrieved from TROPOMI: improvement of the data product and validation of the first 2 years of measurements, Atmos. Meas. Tech., 14, 665-684, https://doi.org/10.5194/amt-14-665-2021, 2021.
Schneising, O., Buchwitz, M., Reuter, M., Bovensmann, H., Burrows, J. P., Borsdorff, T., Deutscher, N. M., Feist, D. G., Griffith, D. W. T., Hase, F., Hermans, C., Iraci, L. T., Kivi, R., Landgraf, J., Morino, I., Notholt, J., Petri, C., Pollard, D. F., Roche, S., Shiomi, K., Strong, K., Sussmann, R., Velazco, V. A., Warneke, T., and Wunch, D.: A scientific algorithm to simultaneously retrieve carbon monoxide and methane from TROPOMI onboard Sentinel-5 Precursor, Atmos. Meas. Tech., 12, 6771-6802, https://doi.org/10.5194/amt-12-6771-2019, 2019.
Schneising, O., Buchwitz, M., Reuter, M., Bovensmann, H., and Burrows, J. P.: Severe Californian wildfires in November 2018 observed from space: the carbon monoxide perspective, Atmos. Chem. Phys., 20, 3317-3332, https://doi.org/10.5194/acp-20-3317-2020, 2020a.
Schneising, O., Buchwitz, M., Reuter, M., Vanselow, S., Bovensmann, H., and Burrows, J. P.: Remote sensing of methane leakage from natural gas and petroleum systems revisited, Atmos. Chem. Phys., 20, 9169-9182, https://doi.org/10.5194/acp-20-9169-2020, 2020b.
Vellalassery, A., Pillai, D., Marshall, J., Gerbig, C., Buchwitz, M., Schneising, O., and Ravi, A.: Using TROPOspheric Monitoring Instrument (TROPOMI) measurements and Weather Research and Forecasting (WRF) CO modelling to understand the contribution of meteorology and emissions to an extreme air pollution event in India, Atmos. Chem. Phys., 21, 5393-5414, https://doi.org/10.5194/acp-21-5393-2021, 2021.
Citation: https://doi.org/10.5194/amt-2021-36-RC1
- AC1: 'Reply on RC1', Mahesh Kumar Sha, 30 Jul 2021
- RC2: 'Comment on amt-2021-36', Anonymous Referee #2, 04 May 2021
A. General Comments
The advantage of the TROPOMI measurement is its capability to cover the entire globe in a single day with high spatial resolution. Readers are interested in its validation for data with large satellite zenith angles and in how accurate the fast L2 retrieval algorithm is. The present manuscript reads like a technical report. A research paper must be concise and needs an analysis of the root causes of the bias. The manuscript includes many redundant portions, which must be shortened. The abstract and the conclusion are also too long. Major revision is needed.
I have the following suggestions.
(1) Match-up conditions
As TROPOMI has a higher sampling density and spatial resolution, stricter match-up conditions can be applied than for existing instruments such as GOSAT and OCO-2. A radius of 100 km or 50 km is too large for sites located near urban areas, such as Saga.
(2) Summary table
There are several numbers for systematic errors in the abstract, the main text, and the conclusion. A summary table with the numbers and their conditions would help readers' understanding.
(3) NDACC
There are already plenty of match-up data with TCCON. A more detailed explanation of why NDACC data are additionally needed is required. Do the authors need more data at high-latitude stations with large solar zenith angles?
(4) Geometry dependency
The authors mention a solar zenith angle dependency. TROPOMI has wide cross-track coverage: is there also a viewing angle dependency? Is the bias due to a forward-calculation error of the radiative transfer model used in the L2 retrieval?
A discussion of how to reduce biases, such as the SZA-dependent and surface-albedo-dependent ones, would be useful for readers (a simple illustration is sketched below).
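One simple empirical approach of the kind hinted at here is to fit the observed bias as a function of a driver such as surface albedo and subtract the fitted value from the retrievals; the Python sketch below illustrates the idea only and is not the correction used in the operational product:

```python
import numpy as np

def fit_albedo_bias_model(albedo, bias, deg=2):
    """Fit a low-order polynomial to the satellite-minus-ground-based
    bias as a function of surface albedo, per co-located pair.
    Subtracting the fitted value from each retrieval is one simple
    form of empirical bias correction."""
    coeffs = np.polyfit(albedo, bias, deg)
    return lambda a: np.polyval(coeffs, a)  # predicted bias at albedo a
```

An analogous fit against SZA (or a joint fit against both drivers) would address the angle-dependent component.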
B. Specific Comments
(1) Abstract, page 2, line 12
A brief explanation of “QA” in the abstract is needed.
(2) Abstract, page 2, line 16; page 8, line 207, “smoothing uncertainty”
The term appears in the abstract, but in the main text it first appears on page 8. A brief explanation will help readers' understanding.
(3) Page 8, Line 209
Detailed explanations of the TCCON sites are not a main topic of this paper. The information is available in the TCCON wiki and is not needed in the main text.
(4) Page 9, line 241, “a priori alignment”
An explanation will help readers' understanding.
(5) Page 16, line 479, “sufficient number”
How many pixels are needed for robust statistics?
C. Technical Corrections
(1) Page 40, Figure 2; page 48, Figure 10; page 74, Figure 36
There are too many colors to identify each site.
Citation: https://doi.org/10.5194/amt-2021-36-RC2
- AC2: 'Reply on RC2', Mahesh Kumar Sha, 30 Jul 2021