Articles | Volume 16, issue 3
© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.
Advances in retrieving XCH4 and XCO from Sentinel-5 Precursor: improvements in the scientific TROPOMI/WFMD algorithm
- Final revised paper (published on 03 Feb 2023)
- Preprint (discussion started on 28 Oct 2022)
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor |
RC1: 'Comment on amt-2022-258', Anonymous Referee #2, 07 Nov 2022
- AC1: 'Response to Anonymous Referee #2', Oliver Schneising, 16 Dec 2022
RC2: 'Comment on amt-2022-258', Anonymous Referee #1, 14 Nov 2022
- AC2: 'Response to Anonymous Referee #1', Oliver Schneising, 16 Dec 2022
Peer review completion
AR: Author's response | RR: Referee report | ED: Editor decision
AR by Oliver Schneising on behalf of the Authors (16 Dec 2022) | Author's response | Author's tracked changes | Manuscript
ED: Referee Nomination & Report Request started (29 Dec 2022) by Joanna Joiner
RR by Anonymous Referee #2 (06 Jan 2023)
RR by Anonymous Referee #1 (12 Jan 2023)
ED: Publish subject to minor revisions (review by editor) (13 Jan 2023) by Joanna Joiner
AR by Oliver Schneising on behalf of the Authors (16 Jan 2023) | Author's response | Author's tracked changes | Manuscript
ED: Publish subject to technical corrections (18 Jan 2023) by Joanna Joiner
AR by Oliver Schneising on behalf of the Authors (18 Jan 2023) | Author's response | Manuscript
Review of “Advances in retrieving methane and carbon monoxide from TROPOMI onboard Sentinel-5 Precursor” by Schneising et al., AMTD
This paper presents updates to the TROPOMI/WFMD retrieval algorithm and its post-processors in application to XCH4 and XCO retrievals from the TROPOMI satellite instrument. The authors introduce changes they have made to the spectral fitting, auxiliary data, cloud filtering scheme, bias correction and destriping methods. A validation of the improved algorithm is presented based on a multi-year dataset of TROPOMI retrievals as well as on TCCON and surface measurements.
Overall, the paper is clearly structured and well written, and it addresses relevant scientific questions within the scope of AMT. Useful data science ideas with respect to cloud filtering, bias correction of satellite imagery and trend detection are presented. However, some methodological questions remain and I recommend publication only after these concerns have been addressed.
1) The title does not clearly reflect the contents of the paper. This work deals with algorithm updates to TROPOMI/WFMD. It does not introduce new retrieval concepts (the only change to the retrieval approach is an update in the handling of the spectral baseline during fitting). In addition, some algorithm updates proposed in this manuscript are discussed with a strong focus on XCH4 retrievals, some even disregarding XCO. The title should therefore be changed to focus on the content of the paper. I propose “TROPOMI/WFMD XCH4 v1.8: improvements in spectral fitting, auxiliary datasets and post-processing”
2) I am concerned that the authors do not appropriately cite work that they have presented in previous articles. Two examples stand out to me.
Firstly, section 3.2 does not provide significant new information on WFMD v1.8 retrievals, since the algorithm updates related to the digital elevation model have already been discussed in a recent AMT article by the same team (Hachmeister et al. 2022). In fact, Figure 4 of the present article is essentially identical to Figure 10 of Hachmeister et al. (2022). The authors should not publish these results twice. In my opinion, section 3.2 should therefore be removed.
Secondly, in line 68, please remove reference Buchwitz et al. 2017. That article is neither about TROPOMI XCH4 nor about WFMD retrieval configuration details and it is therefore an inappropriate self-reference in the context of that paragraph (“verifying or improving operational products…”, sensitivity to “details of the algorithm setup”).
3) The paper reaches substantial conclusions with respect to the performance enhancement due to the new destriping method and adapted cloud-filtering, but the work on the polynomial fit parameter update (section 3.1) does not go beyond a qualitative discussion of a case-study and that section requires more attention (see specific comments).
4) The article does not compare TROPOMI/WFMD results to the operational TROPOMI XCO and XCH4 products. I think that such a comparison would be a necessary addition to this manuscript.
5) Please comment on whether TROPOMI/WFMD and its post-processor will be made available to fellow scientists, which would greatly benefit the traceability of this work.
6) The authors occasionally go into detail on the performance of XCO with v1.8 of their algorithm, but they do not systematically conduct their analyses for both CH4 and CO. I think the article would be much clearer if CO was either dropped entirely from this work or if equal emphasis was placed on the two molecules.
7) The abstract should be edited to mention the TCCON analysis and the considerable improvement in filtering clouds above water surfaces.
Line 12: “…machine learning calibration… has been optimised.”; add a short statement describing the implemented updates.
Line 26/28: This sentence may be misread as saying that the lifetime of methane is 9 years shorter than that of CO2. Please rephrase.
Line 33: This occurs both through natural processes and resulting from human activities. -> This occurs both through natural processes and human activities.
Line 49: …TANSO FTS aboard GOSAT … retrieves CO2 and CH4, … -> … measures/observes CO2 and CH4 absorption lines, …
Line 63: add citations to the ATBDs for both operational products.
Line 64: add citation to https://amt.copernicus.org/preprints/amt-2022-255/
Line 100-105: To make this sentence more readable I suggest breaking it into two sentences.
Line 114: Is the only update a change with respect to the gridding of the underlying data? Please explain in the text.
Line 115: Was there a noticeable impact of the update in the meteorological reanalysis data on XCH4 results? Please elaborate.
Line 115: v1.5 -> v1.2 (or if v1.5 is correct, what is v1.5?)
Section 3.1 makes the case that increasing the degree of the polynomial which approximates the spectral baseline (from 2 to 3) is a general improvement to the XCH4 v1.8 product. I find that this section would be significantly more robust and less of a qualitative discussion if the following updates were made.
a) The motivation of this section is to highlight changes in the spectral fitting procedure, yet no actual spectral analysis is presented here. The proposed fit updates would be much more convincing if the effect on fit residuals was displayed, and if fit statistics (Chi2/RMS, convergence quality, etc.) for retrievals with and without the polynomial upgrade (in the Etosha pan and elsewhere) were included in the manuscript.
b) According to https://earthobservatory.nasa.gov/images/147221/cycles-of-wet-and-dry-in-etosha-pan, the wet season in the Etosha Pan occurs between October and March. The authors argue that wetland-related CH4 enhancements are visible in the images of the second row in Figure 2. However, this appears to be a measurement taken during the dry season. Additionally, the true color images from VIIRS do not show any obvious inundation in the region as far as I can tell. I suspect that the residual XCH4 enhancements in Figure 2c and 2f are therefore still retrieval artefacts. Spectral signal levels can probably be used to detect inundated TROPOMI pixels. Please explain what your arguments are based on and why you think the enhancements detected here are real.
Line 132: How do you know what a realistic level of XCH4 enhancement is over the Etosha pan? Has this been studied before? If not, please reword.
c) What is the significance of this spectral fit update in view of the post-processing steps following the retrieval, especially the machine learning calibration? How does the machine learning calibration affect the Etosha pan enhancements? Are plots in 2c, 2f prior to or after application of the machine learning calibration? What would 2c, 2f look like without the update in the polynomial, but with the updates in the machine learning calibration?
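The fit-statistics comparison asked for in point (a) could be as simple as comparing RMS residuals for the two baseline degrees. A minimal synthetic sketch of what I have in mind (all numbers fabricated; the real analysis would of course use actual WFMD fit residuals):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-1.0, 1.0, 400)   # scaled wavelength coordinate across a hypothetical fit window

# Toy "measured" baseline with a cubic component, mimicking a surface whose
# reflectance is not well captured by a quadratic (e.g. a salt pan)
baseline = 0.30 + 0.05 * x - 0.02 * x**2 + 0.03 * x**3
measured = baseline + rng.normal(0.0, 1e-3, x.size)

def baseline_rms(degree):
    """RMS residual after fitting a polynomial baseline of the given degree."""
    coeffs = np.polyfit(x, measured, degree)
    return np.sqrt(np.mean((measured - np.polyval(coeffs, x)) ** 2))

rms2, rms3 = baseline_rms(2), baseline_rms(3)
print(f"RMS (degree 2): {rms2:.2e}")
print(f"RMS (degree 3): {rms3:.2e}")
```

Tabulating such statistics (per scene type, with and without the update) would turn the qualitative case study into a quantitative argument.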
Section 3.1 (Polynomial fit parameters) contains work that is conceptually similar to https://amt.copernicus.org/preprints/amt-2022-255/. Please include that paper as a reference and add a few remarks concerning that article’s conclusions in view of your results.
Line 119-120: “… it has been noted …” - by whom?
Figure 2: What is algorithm version v1.5? It has not been introduced earlier. How does it differ from v1.2 (which I believe corresponds to your work published in Schneising et al. AMT 2019)? For clarity, I think it would be good to use v1.2 (Schneising et al 2019) for 2b and 2e.
The reflectance spectra from Moreira et al. 2014 and Tayebi et al. 2017 are a good find. I think it would be very informative to show a plot of WFMD residuals in comparison to those spectra.
I also suggest fixing the spectral baseline polynomial to a shape similar to the spectra from Tayebi et al. and Moreira et al. and observing what effect that may have on the retrievals.
Do surface reflectance induced biases, as observed for XCH4 retrievals, exist for XCO in your algorithm?
Line 184-186: Which dataset did you use for the surface roughness feature? Please add a reference in the manuscript.
With regard to the post-random-forest 3-step quality filter: What is the rationale of filtering for these three metrics separately instead of including RMS and spectral shift/squeeze as features in the random forest? Please clarify.
Generally, does the random forest still consider the 25 features listed in Schneising et al., AMT, 2019? Have you updated any other configurations of the machine learning code? Please explain in the text, and give some more introduction to your random forest set-up and updates you have made (e.g. near line 180).
Which features are most important for classifying clouds over water in your random forest classifier?
Line 184: You almost doubled the training dataset: Are the new training data exclusively ocean scenes? How did you select the new data for training?
Please go into detail why the filter underperforms over water. Intuitively, shouldn’t these scenes be advantageous for the cloud filter, because the underlying surface is more homogeneous? A Figure showing the performance of the filter (and its challenges) over water would be very helpful here.
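Regarding the feature-importance question above: for a scikit-learn-style random forest this is directly accessible via `feature_importances_`. A toy sketch with made-up stand-in features (I do not know the actual WFMD feature set beyond the 25 features of Schneising et al., 2019):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000

# Hypothetical per-scene features over water (names are placeholders)
reflectance = rng.uniform(0.0, 0.3, n)   # SWIR signal level
h2o_ratio = rng.uniform(0.5, 1.5, n)     # retrieved-to-prior water vapour ratio
roughness = rng.uniform(0.0, 1.0, n)     # surface roughness proxy

# Toy truth: clouds brighten the scene and distort the H2O ratio
cloudy = (reflectance + 0.2 * np.abs(h2o_ratio - 1.0)
          + rng.normal(0.0, 0.02, n)) > 0.2

X = np.column_stack([reflectance, h2o_ratio, roughness])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, cloudy)

for name, imp in zip(["reflectance", "h2o_ratio", "roughness"],
                     clf.feature_importances_):
    print(f"{name:12s} {imp:.3f}")
```

Reporting the top-ranked features for water scenes in this way would help readers understand why the filter struggles there.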
Paragraph from lines 216-230: How large is the loss of “good” measurements? If the numbers in Figure 5 are upper limits, how far from the upper limits is the actual loss (for the different cases you study in Fig. 5)?
Do you have any plans to include a cloud-shadow filter in your algorithm?
What impact do shadowy scenes have on the ensemble of your retrievals?
Line 216: “increases” -> increase
Line 233-234: I thought the introduction of the third-order baseline fit (section 3.1) was meant to resolve the albedo bias issue? If it remains, why bother making that adjustment if you are re-calibrating XCH4 in the post-processor anyway?
Why are you not conducting a machine learning calibration for XCO? Are the retrievals better? If yes, how are they better? Can the XCH4 “calibration” inform corrections to XCO?
I realise that it is standard practice in trace gas retrievals to carry out bias corrections. However, I have not seen a post-processing correction, or “calibration”, that brings in information from an a priori/model dataset again. This seems to defy the purpose of running a retrieval. The way this section is currently written makes it very hard for the reader to understand what impact the calibration has on the WFMD results and it raises many questions.
For example, how does the calibration affect XCH4 values in retrieved methane plumes?
How does it impact CH4 inversions on regional scales?
How far toward the prior XCH4 does this correction pull the retrieved values (how does the distribution of retrieved values change)?
How does the correction vary by latitude and season? Please elaborate in the manuscript.
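The “pull toward the prior” asked about above could be quantified by regressing the applied correction against the prior-minus-raw difference. A toy numpy sketch with fabricated values (a slope near 1 would mean the calibration largely restores the prior):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000

xch4_raw = rng.normal(1850.0, 15.0, n)            # retrieved XCH4 [ppb]
xch4_prior = xch4_raw + rng.normal(0.0, 10.0, n)  # model/prior XCH4 [ppb]
# Hypothetical calibrated product that moves 30% of the way to the prior
xch4_cal = xch4_raw + 0.3 * (xch4_prior - xch4_raw)

# Least-squares slope of correction vs. (prior - raw): 0 = no pull, 1 = full pull
slope = np.polyfit(xch4_prior - xch4_raw, xch4_cal - xch4_raw, 1)[0]
print(f"pull-toward-prior slope: {slope:.2f}")
```

Reporting such a slope (globally and, e.g., per latitude band) would make the impact of the calibration transparent.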
Line 240: Are you retrieving XCH4 from spectra at SZA>70? Are you taking the non-planar nature of the atmosphere into account for those cases? Do you trust your retrievals at such SZA values and what do they look like?
Line 253: Please explain why you subtract specifically 5 ppb (given that the SLIMCH4 bias at the three northernmost TCCON sites is greater than 5 ppb)? What is the mean Arctic bias and what do you consider “typical” bias levels?
Line 237: is r_cld the cloud flag from your cloud filter? Or is it from VIIRS?
When you include the across-track index in the calibration, what impact does that have on the striping pattern in XCH4? What is the reason to include the across-track dimension index if it does not sufficiently destripe the images?
Do you see any possibility to get the destriping done within this calibration scheme so that you do not need to run the wavelet procedure?
Figure 6: How did you choose the regions used in the training of the machine learning regressor? It looks like there is a latitude band missing (the tropics) - why did you exclude it? At which point in the calibration procedure do TCCON measurements actually come into play (other than being a validation source for the climatology you are using)? If the correction does not explicitly use TCCON data, please remove the TCCON stations from the map, because it misleads the reader into thinking TCCON data go directly into the correction. Please explain in the manuscript.
Figure 8: “Coiflet wavelets” are introduced in the caption for the first time. Please also add a short explanation in the main text before referring to coiflets in this plot.
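For readers unfamiliar with the approach, a short explanation could note that wavelet destriping typically damps the detail coefficients aligned with the stripe direction. A toy sketch with PyWavelets (the actual WFMD implementation and coiflet order are not known to me beyond the “coiflet” mention in the caption):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
scene = rng.normal(1850.0, 3.0, (64, 64))            # toy XCH4 scene [ppb]
striped = scene + rng.normal(0.0, 8.0, 64)[None, :]  # per-column stripe offsets

coeffs = pywt.wavedec2(striped, "coif3", level=3)
# Zero the vertical-detail bands, which carry the column-oriented stripes
cleaned = [coeffs[0]] + [(cH, np.zeros_like(cV), cD)
                         for (cH, cV, cD) in coeffs[1:]]
destriped = pywt.waverec2(cleaned, "coif3")[:64, :64]

def stripe_amp(img):
    """Spread of per-column means, a simple across-track stripe metric."""
    return np.std(img.mean(axis=0))

print(f"stripe amplitude before: {stripe_amp(striped):.1f} ppb")
print(f"stripe amplitude after:  {stripe_amp(destriped):.1f} ppb")
```

Note that the smooth (low-frequency) part of the stripe pattern survives in the approximation coefficients, which may be relevant when discussing residual striping.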
Lines 351-353: An analysis of the temporal development of atmospheric CO concentrations in comparison to other measurements would be very valuable here as well. Perhaps https://doi.org/10.1016/j.rse.2020.112275 or https://www.epa.gov/air-trends/carbon-monoxide-trends could be starting points for a comparison.
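For such a comparison, a simple anomaly-based trend estimate would already be informative. A sketch on a toy monthly series (the real input would be the WFMD CO record and the reference dataset; all numbers here are fabricated):

```python
import numpy as np

rng = np.random.default_rng(7)
months = np.arange(120)  # 10 hypothetical years of monthly means

# Toy XCO series: slow decline + seasonal cycle + noise [ppb]
xco = (90.0 - 0.05 * months
       + 5.0 * np.sin(2 * np.pi * months / 12)
       + rng.normal(0.0, 1.0, 120))

# Remove the mean seasonal cycle, then fit a linear trend to the anomalies
clim = xco.reshape(10, 12).mean(axis=0)
anom = xco - np.tile(clim, 10)
trend_per_month = np.polyfit(months, anom, 1)[0]
print(f"trend: {12 * trend_per_month:+.2f} ppb/yr")
```

Comparing such a trend (with uncertainty) against the EPA or satellite-based CO records suggested above would strengthen the temporal analysis.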
Line 455: Change citation Dlugokencky: Lan, X., K.W. Thoning, and E.J. Dlugokencky (2022): Trends in globally-averaged CH4, N2O, and SF6 determined from NOAA Global Monitoring Laboratory measurements. Version 2022-10, https://doi.org/10.15138/P8XG-AA10
This is the recommended citation (see bottom of page https://gml.noaa.gov/ccgg/trends_ch4/)