Evaluating the consistency between OCO-2 and OCO-3 XCO2 estimates derived from the NASA ACOS version 10 retrieval algorithm
Abstract. The version 10 (v10) Atmospheric Carbon Observations from Space (ACOS) Level 2 Full Physics (L2FP) retrieval algorithm has been applied to multi-year records of observations from NASA's Orbiting Carbon Observatory-2 and -3 sensors (OCO-2 and OCO-3, respectively) to provide estimates of the carbon dioxide (CO2) column-averaged dry-air mole fraction (XCO2). In this study, a number of improvements to the ACOS v10 L2FP algorithm are described. The post-processing quality filtering and bias correction of the XCO2 estimates against multiple truth proxies are also discussed. The OCO v10 data volumes and XCO2 estimates from the two sensors for the time period August 2019 through February 2022 are compared, highlighting differences in spatiotemporal sampling but demonstrating broad agreement between the two sensors where they overlap in time and space. A number of evaluation sources applied to both sensors suggest they are broadly similar in data and error characteristics. Mean OCO-3 differences relative to collocated OCO-2 data are approximately 0.2 ppm and −0.3 ppm for land and ocean observations, respectively. Comparison of XCO2 estimates to collocated Total Carbon Column Observing Network (TCCON) measurements shows root mean squared errors (RMSE) of approximately 0.8 ppm and 0.9 ppm for OCO-2 and OCO-3, respectively. An evaluation against XCO2 fields derived from atmospheric inversion systems that assimilated only near-surface CO2 observations (i.e., did not assimilate satellite CO2 measurements) yielded RMSEs of 1.0 ppm and 1.1 ppm for OCO-2 and OCO-3, respectively. Evaluation of errors in small areas, as well as biases across land-ocean crossings, also shows encouraging results for each sensor and in their agreement. Taken together, our results demonstrate a broad consistency of OCO-2 and OCO-3 XCO2 measurements, suggesting they may be used together for scientific analyses.
Thomas E. Taylor et al.
Status: final response (author comments only)
- RC1: 'Comment on amt-2022-329', Anonymous Referee #2, 08 Mar 2023
- RC2: 'Comment on amt-2022-329', Anonymous Referee #1, 09 Mar 2023
This paper evaluates the consistency between OCO-2 and OCO-3 xCO2 (column-averaged dry air mole fraction) retrievals using the ACOS v10 algorithm. The authors describe updates in the ACOS v10 algorithm as applied to the two instruments, as well as the retrieval data flow from pre-processors to posterior quality filtering and bias correction. Data volume and xCO2 retrievals are compared between OCO-2 and OCO-3. The satellite retrievals are also compared with truth proxies, including the ground-based TCCON network and global models that assimilate only near-surface CO2 observations. Additionally, the authors attempt to characterize retrieval errors over small areas and from retrievals across coastlines. From these analyses, the authors conclude that OCO-2 and OCO-3 ACOS v10 xCO2 retrievals are broadly consistent and can be used together in science studies. Overall, this is a comprehensive study on the quality of xCO2 retrievals from OCO-2/-3 and should be of interest to readers of Atmos. Meas. Tech. The paper is well-organized, well-written, and the figures are of good quality. The analysis methods used in the paper are mature and well-established. I would recommend that the paper be published in Atmos. Meas. Tech. after minor revisions.
The use of TCCON as truth proxy for retrieval evaluation does appear to be circular, as indicated by the authors. Have some other data sources (e.g., from airborne campaigns) been considered as truth proxies?
The authors discuss the updates from ACOS v8/v9 to v10, and I'm wondering what the impacts of these updates are on retrieval quality. Can the authors compare, for example, the RMSE of ACOS v8 vs. TCCON with that of ACOS v10 vs. TCCON?
Line 9: how do you interpret the differences between OCO-2 and OCO-3 in the context of precision/bias estimates against truth proxies? Do the OCO-2/-3 differences reflect or represent systematic errors or certain components of the systematic errors?
Line 100: spatial scale for precision and accuracy requirements?
Line 125: version 10 here refers to the L1B algorithm? This is a bit confusing. Is this part of level 1 or level 2?
Lines 163-164: specify the variability of the bias in the two versions?
Table 2: it appears that Table 2 is not mentioned in text.
Line 252: how is the initial median calculated (from the six models in Table 4)? Line 253 mentions “four models”.
Line 259: how are the model/TCCON profiles converted to the 20-layer ACOS profiles?
Table 7: what is considered truth xCO2 for SAA?
Line 290: course should be coarse?
Line 290: logarithm is taken for AODfine as well?
Line 322: FIn should be In.
Figure 3: Y axis spacing is not linear?
Section 4 – I’m wondering if Section 4 can be moved to the Appendix?
Figure 4 – Can you show the correlation between OCO-2 and OCO-3?
Figure 8 and Section 6.1 – what is the typical variability of individual soundings during each overpass? Do the comparisons change if the size of the domain is changed?
Line 482: course should be coarse?
Line 484: Table 4 and Table 5 list six models, not four.
Figure 9: the title of the right column in the figure is a bit confusing.
Figure 10 and line 514: northward propagation is not that obvious in the plot (especially Figure 10b), perhaps mark it in the figure?
Figure 11: describe in the caption what each data point represents in the scatter plots.