This work is distributed under the Creative Commons Attribution 4.0 License.
Using portable low-resolution spectrometers to evaluate Total Carbon Column Observing Network (TCCON) biases in North America
Nasrin Mostafavi Pak
Jacob K. Hedelius
Kaley A. Walker
- Final revised paper (published on 09 Mar 2023)
- Preprint (discussion started on 02 Dec 2022)
RC1: 'Comment on egusphere-2022-1331', Benedikt Herkommer, 20 Dec 2022
AC1: 'Reply on RC1', Nasrin Mostafavipak, 03 Feb 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-1331/egusphere-2022-1331-AC1-supplement.pdf
RC2: 'Comment on egusphere-2022-1331', David Griffith, 23 Dec 2022
Review for AMT (https://doi.org/10.5194/egusphere-2022-1331):
Pak et al., Using portable low-resolution spectrometers to evaluate TCCON biases in North America
Reviewer: David Griffith
This paper describes an intercomparison campaign between four EM27/SUN solar FT spectrometers at several TCCON sites in North America. It follows and extends previous studies by Hedelius et al. (AMT, 9, 3527–3546, 2016) and from the COCCON network (Frey et al., AMT, 12(3), 1513–1530, 2019) to further characterise these portable low-resolution spectrometers for the retrieval of atmospheric trace-gas column amounts from solar infrared spectra. In particular, the paper compares retrievals and assesses biases between the previous GGG2014 and the latest GGG2020 retrieval software for the EM27 spectra, and adds comparisons with simultaneously measured AirCore profiles. With the increasing use of portable low-resolution spectrometers such as the EM27/SUN for this purpose, the paper adds valuable further characterisation of the method and instruments and is well suited to publication in AMT after the reviewers' comments are addressed.
2.3 ILS measurements.
Although ILS measurements using open-path H2O spectra were made where possible, they are not used in any way in the retrievals. They seem to be used only qualitatively, as an indication of any possible change in alignment. The version of LINEFIT used (14.0, with the full 20-parameter model) differs from that recommended by Alberti et al. (2021) (14.8, simple 2-parameter fit). The ME at maximum OPD derived here is not as unique a measure of the ILS as the simple 2-parameter version, because it depends on the shape of the ME and phase vs. OPD curves and on the SNR, regularisation, and a priori assumptions. Thus the only real message from this section is that there was no major change of alignment for the 4 instruments (Fig. 2, lower), and the section could possibly be shortened to reflect that.
3.1.1. Airmass dependent correction factors (ADCFs).
Although the ADCF method is referenced, I think the formula/algorithm for the correction should be summarised here; there is otherwise no information in the paper about how the factors listed in Table 3 are applied and the reader has no idea how to interpret them. TCCON ADCFs also include a reference SZA around which the correction is expanded, and an exponent in the formula. It should be stated if/that these are fixed for EM27 analyses, and to which values. In line 267, perhaps reword slightly to make clearer that the derived ADCFs are obtained by fitting the long term 2018-2021 records from the 4 instruments to mean values for this dataset, and that these mean values are then applied to all further measurements.
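For concreteness, and to show the kind of summary I am asking for, here is a sketch of the symmetric airmass correction as I recall it from the GGG documentation; the reference SZA of 45° and the exponent of 3 are my assumptions about the fixed values, and the function name is mine, so this should be checked against the actual code before being quoted:

```python
import numpy as np

# Hypothetical sketch of a GGG-style airmass-dependent correction.
# Assumptions (mine, not from the paper): the reference SZA (45 deg) and
# the exponent p = 3 are held fixed, and only the ADCF is fitted per gas.
def apply_adcf(xgas, sza_deg, adcf, theta_ref=45.0, p=3):
    """Divide the raw Xgas by a symmetric airmass-dependence term."""
    g = lambda t: ((np.asarray(t) + 13.0) / (90.0 + 13.0)) ** p
    correction = 1.0 + adcf * (g(sza_deg) - g(theta_ref))
    return xgas / correction
```

At the reference SZA the correction is identically 1, which is the property that makes the reference angle worth stating explicitly in the paper.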
3.1.2 Instrument-instrument biases and “calibration”.
Are the bias corrections to the raw measurements applied multiplicatively or additively? As with the ADCF, it would be helpful to spell out near line 300 the algebraic algorithm used for this (and every) correction. In the COCCON network a similar process is followed, and though Frey 2019 is referenced, it is not discussed or compared. I think it would be helpful to add some text to compare the approach here with that of COCCON, for example to (hopefully) demonstrate that both groups see similar biases. Ultimately the research community that uses these data needs to know that biases between instruments not only within the N American and European-centred networks are minimised, but also that they also minimised between the networks. The two networks use different retrieval codes, so it is quite important to be confident they do not introduce any relative bias.
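As an illustration of the kind of algebraic statement I am asking for, here is a minimal sketch of a multiplicative instrument-to-instrument scaling; the variable names, the choice of a mean ratio, and the numbers are my own illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of a multiplicative calibration of one EM27/SUN against a
# reference unit, using coincident 10-minute binned Xgas values.
# All names and numbers here are illustrative only.
def scale_factor(x_instrument, x_reference):
    """Mean ratio of coincident binned Xgas values (reference / instrument)."""
    return np.mean(np.asarray(x_reference) / np.asarray(x_instrument))

x_ref = np.array([400.1, 400.3, 400.2])   # reference instrument bins (ppm)
x_tb  = np.array([400.5, 400.7, 400.6])   # instrument to be scaled (ppm)
k = scale_factor(x_tb, x_ref)
x_tb_corrected = k * x_tb                 # multiplicative correction
```

Whether the actual correction is this multiplicative form or an additive offset is exactly the point that should be spelled out near line 300.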
- Evaluation of TCCON biases against the EM27/SUNs (and Discussion)
This section could be improved and made more readable by reorganising it to separate the "comparison with TCCON" from the "discussion" of the interpretation of the time series, such as at Eureka. For example, the paragraph from L348 discusses the atmospheric interpretation of the Eureka time series, and the discussion then returns to the TCCON–EM27 biases in Fig. 10 and Table 6. Perhaps this paragraph could be merged into the current Section 4.1?
In the TCCON-EM27 bias section, since Xluft has been added to Fig 8, I would recommend also adding the mean Xluft values to Table 5 so the reader can do their own sanity check to see just how different from 1.000 they are.
- Conclusions and general.
I would find it helpful to make more explicit comparisons with similar studies on the COCCON network, for example by Frey et al 2019, to the extent allowed by the COCCON publications. At present COCCON is acknowledged but not further compared or discussed. I am sure the modelling community would like to be able to combine results from both this work (using GGG and EGI for analysis) and COCCON (using Proffast and independent post-processing) without fear of significant bias. Explicitly addressing this point of comparison would be particularly useful.
L45. “First and foremost…” Actually, TCCON stations “first and foremost” use consistent spectrometer hardware and data collection protocols to collect spectra, before using consistent data analysis methods.
L54: cm-1 not italicised
L82: The original design includes (not consists of) one InGaAs detector…
L104: use “there is” rather than the abbreviation “there’s” in formal text such as here.
L106: Please state here, maybe better in section 2.2 or later site-by-site that the EM27/SUNs were at the same height asl as the TCCON trackers, or if not, how the altitude-dependent pressure correction was applied.
L157: see L104 - it is
L 197: replace “high accuracy” with “traceable accuracy”
L198: add “calibrated” and “simultaneously” : …against calibrated airborne in situ profiles that are simultaneously collected at the TCCON sites.
L201: update ref to Tans 2009 – there was a recent and complete paper by Tans describing the theory behind Aircore.
L203: I think the aircore description is a little too brief - it is worth spelling out here that the aircore tube fills from one end so the earliest sample is compressed into the far end of the tube, and that diffusion is slow enough that the vertical profile is preserved along the tube length. “What about diffusion?” is the most common question I get when explaining Aircore to people.
L216: launches is presented … (not are)
L220 and L 225: why do you use the a priori GGG profile above the aircore ceiling rather than the scaled a priori profile after the fit? How significant is the difference? Similarly in Figure 3, why show the a priori profile and not the scaled profile after the fit?
L231: perhaps add “… spectroscopic linelist…” Linelist may be jargon to some readers.
L241: spell out GEOS-FPIT on first usage
L262: suggest reword to “… caused by spectroscopic and instrumental inaccuracies.” Or “errors”.
L278: replace “deteriorates” with “increases” or “actually increases”
L324: The aircore does not "float"; this term comes from balloon-borne occultation measurements. Perhaps replace "float" with "maximum".
L332: can you provide a reference to justify why you can state that the altitude errors in the aircore data are negligible?
L342: In the definition of Xluft, please indicate that Vdryair is calculated from measured surface pressure at the time of the measurement.
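For reference, the definition as I understand it (symbols are mine and should be checked against the paper's equations; $P_s$ is the measured surface pressure, $g$ gravity, $\mu$ mean molecular masses, $\mathrm{VC}$ vertical columns), making the surface-pressure dependence explicit:

```latex
X_{\mathrm{luft}} = \frac{0.2095\, V_{\mathrm{dryair}}}{\mathrm{VC}_{\mathrm{O_2}}},
\qquad
V_{\mathrm{dryair}} = \frac{P_s}{g\,\mu_{\mathrm{dry}}}
  - \mathrm{VC}_{\mathrm{H_2O}}\,\frac{\mu_{\mathrm{H_2O}}}{\mu_{\mathrm{dry}}}
```

Stating that $V_{\mathrm{dryair}}$ comes from $P_s$ at the time of measurement makes clear why Xluft is a sensitive check on the pressure data.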
Caption Figure 8 and 9. To be clear, I suggest you add “collocated” as in “Timeseries of EM27/SUN (tb) retrieved XCO2, XCH4, XCO and Xluft in color and co-located TCCON in…”.
L357: the meaning of this statement is not clear to me – reduced by a factor of 8 (2) relative to what? Single spectrum measurements, I assume – please state so. This is not so important, it is the averaging time that matters most.
L438: When using GGG2020, state explicitly whether the site-to-site biases agree everywhere for both high- and low-resolution TCCON measurements. (From Table 6, the maximum biases are 0.53 and 0.8 ppm respectively; these are not strictly less than the quoted variability of 0.5 ppm.)
AC2: 'Reply on RC2', Nasrin Mostafavipak, 03 Feb 2023
Publisher’s note: this comment is a copy of AC3 and its content was therefore removed.
AC3: 'Reply on RC2', Nasrin Mostafavipak, 03 Feb 2023
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2022/egusphere-2022-1331/egusphere-2022-1331-AC3-supplement.pdf
Overall, this is a sound piece of work dealing with the important task of comparing different TCCON sites (in North America) and the study fits well within the scope of AMT. I recommend publication after minor revisions.
In Section 2.3, you deal with ILS measurements. In Figure 2 you show the ILS measurements of the TCCON stations but do not classify them (are they good or bad?). Since they are not used for further data analysis, I do not see a strong need to show them here. Is there an acceptance level? The variation of 1% found for the fluctuations in EM27/SUN modulation efficiency (ascribed to variable humidity and room temperature) seems far too high; Alberti et al. (2022) demonstrated significantly better reproducibility (see Fig. 15 in that work). It is important to take the variable partial pressure of H2O into account in the analysis (it can be calculated from total pressure, path length, and H2O column) and to repeat the analysis until a self-consistent solution (for ILS, column, and partial pressure) is found. If this has been taken into account, I would suspect a problem in coupling the light source to the spectrometer in a reproducible manner.
You derive the airmass-dependent correction factors by comparing the measurements over the course of the day to the daily median, thereby assuming the variation is solely due to an airmass dependency. I doubt this assumption is valid, since it ignores intraday variability of GHGs, which seems plausible to me, especially in an urban area like Toronto.
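To make the aliasing concern concrete, here is a hypothetical sketch of the derivation as I understand it; the SZA basis function (with its reference angle and exponent) and the linear-fit details are my assumptions. Any real intraday GHG signal that happens to correlate with SZA would be absorbed into the fitted coefficient:

```python
import numpy as np

# Hypothetical sketch: fit the fractional deviation of each measurement
# from the daily median against a symmetric SZA function; the slope plays
# the role of the ADCF. Basis function and fit details are assumptions.
def fit_adcf(xgas, sza_deg, theta_ref=45.0, p=3):
    g = lambda t: ((np.asarray(t) + 13.0) / (90.0 + 13.0)) ** p
    y = xgas / np.median(xgas) - 1.0        # deviation from daily median
    x = g(sza_deg) - g(theta_ref)
    return np.polyfit(x, y, 1)[0]           # slope ~ ADCF

# Synthetic day with a known airmass artifact and no real GHG variation:
sza = np.array([30.0, 45.0, 60.0, 75.0])
g = lambda t: ((t + 13.0) / 103.0) ** 3
xgas = 400.0 * (1.0 + 0.01 * (g(sza) - g(45.0)))
est = fit_adcf(xgas, sza)                   # recovers ~0.01 in this clean case
```

The recovery works here only because the synthetic data contain no intraday GHG signal; with a real afternoon drawdown, the fit could not distinguish it from an airmass artifact.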
Next, in line 281 you write that you use the same method as GGG2014 to apply the correction factors. However, an explanation, or at least a specific citation, of how this is done in GGG2014 is missing.
You use the median for the comparison of the 10-minute bins of the different spectrometers, but you use the mean value to calculate the 10-minute bins. What are the reasons for choosing the mean or the median in the different situations?
In Section 3.1.2 you describe how the maximum biases are calculated, but the procedure is not clear to me. I understand that you take the 10-minute bins of the different spectrometers and then calculate the difference between the minimal and maximal bins, even though they are not temporally coincident. This, however, would fold the variation of the Xgas value itself into the maximum bias. Please clarify what is done there.
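A toy example of the two possible readings, with made-up numbers on a common 10-minute time grid:

```python
import numpy as np

# Two readings of "maximum bias" between instruments; all values invented.
xa = np.array([400.0, 400.4, 400.8])   # instrument A, 10-min bins (ppm)
xb = np.array([400.1, 400.5, 400.7])   # instrument B, same time grid (ppm)

# (a) coincident: largest |difference| between temporally matched bins
coincident_max = np.max(np.abs(xa - xb))     # ~0.1 ppm

# (b) non-coincident: max of one series minus min of the other, which
# folds the real temporal Xgas variation into the "bias"
noncoincident_max = xa.max() - xb.min()      # ~0.7 ppm
```

The two numbers differ by a factor of seven here purely because of the shared temporal trend, which is why the paper should state which quantity is reported.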
Lastly, you write in the introduction that you took a pressure sensor on the road trip to compare with the pressure measurements made on site. However, no detailed comparison is included in the paper. It would be nice to at least say a few words about the pressure comparison, or better, to show some results (maybe in a table?).
Line 48: I am not aware of TCCON calibrating surface pressure measurements. If it does, please provide a citation or an explanation of how this is done.
Line 93: The abbreviations used for the EM27/SUNs and the TCCON stations seem to be chosen randomly (e.g., it is unclear to me why the Armstrong Flight Research Center is abbreviated "df"). Furthermore, it is quite confusing for the reader which abbreviation denotes a TCCON station and which an EM27/SUN. Maybe add TCCON-xx to the TCCON sites, or vice versa.
Table 1: It would help to reduce the reader’s confusion with the abbreviations if you would add the abbreviations of the TCCON sites in the “Site” column.
Line 169–171: I am not sure I understand correctly what you are doing. This is what I understood: you use the Digiquartz data as a "standard" measuring at height a. For AFRC and Lamont you add a factor to bring the pressure of the TCCON station to the level of the "standard" to correct for height. Have you ever compared the pressure measurements of the "standard" with the pressure of the TCCON station? Otherwise, you cannot be sure whether the correction is only due to height or is also compensating for sensor biases.
Line 290: Alberti et al. (2022) would be good to mention in the list of citations, too.
Caption of Figure 6: I was confused by this plot at first, since I thought you were comparing something with the TCCON data, rather than only recording measurements at these sites. Maybe it is worth stating this explicitly.
Figure 8: Adding vertical lines separating the measurements of the different sites could help to make the plot clearer.
Line 365: State explicitly which instrument is the reference EM27/SUN (is it tb?). This would help the reader make quick comparisons with the figures.
Line 408: This sentence is more appropriate to Section 4 than to Section 4.1. Because it sums up the results of all the stations other than Eureka, it appears misplaced in a section treating the peculiarities of the Eureka station.
In the caption of Figure 1, Park Falls should be abbreviated "pa" instead of "oc".