Articles | Volume 19, issue 7
https://doi.org/10.5194/amt-19-2601-2026
© Author(s) 2026. This work is distributed under the Creative Commons Attribution 4.0 License.
On the capability of the Changing Atmosphere Infra-Red Tomography explorer (CAIRT) candidate mission to constrain O3 and H2O in the upper troposphere and lower stratosphere
Download
- Final revised paper (published on 17 Apr 2026)
- Preprint (discussion started on 27 Dec 2025)
- Supplement to the preprint
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-6130', Anonymous Referee #1, 26 Jan 2026
- AC1: 'Reply on RC1', Quentin Errera, 09 Mar 2026
- RC2: 'Comment on egusphere-2025-6130', Anonymous Referee #2, 27 Jan 2026
- AC2: 'Reply on RC2', Quentin Errera, 09 Mar 2026
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Quentin Errera on behalf of the Authors (10 Mar 2026)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (10 Mar 2026) by Zhao-Cheng Zeng
RR by Anonymous Referee #1 (29 Mar 2026)
ED: Publish as is (29 Mar 2026) by Zhao-Cheng Zeng
AR by Quentin Errera on behalf of the Authors (30 Mar 2026)
Referee comments
"On the Capability of the Changing Atmosphere Infra-Red Tomography explorer
(CAIRT) candidate mission to constrain O3 and H2O in the upper troposphere
and lower stratosphere"
Quentin Errera et al.
AMT, discussion started 27 Dec 2025
--------
SUMMARY
The CAIRT instrument was a limb-viewing infrared radiometer proposed for the
ESA EE11 mission, one of the two instruments considered at the final
selection. Although CAIRT was not selected, ESA encouraged continued
scientific studies.
This paper focuses on just one aspect of CAIRT which is to improve
measurements of O3 and H2O in the UTLS region, in this case via
incorporation of profiles into a data assimilation scheme. The
improvement is established not only with respect to a control but also
the long-established MLS instrument. For both instruments the
approach is to sample fields generated by an independent model at the
observation locations of the two instruments, apply the instrument
averaging kernels to simulate the measured profiles, and incorporate
these into a data assimilation scheme based on a different model.
The conclusions are that CAIRT shows improvement in the modelled
O3 and H2O fields, and is generally better than MLS, but, given the
set-up and CAIRT's much higher profile density, any other conclusion
would indeed have been surprising.
I have no major criticism of anything presented in the paper,
although overall I feel it includes too much unnecessary technical
detail on model settings and geolocation procedures, better suited
to appendices or supplementary data. But that's just my preference.
GENERAL COMMENTS
I would like the authors to clarify the following points
(see detailed comments below for further elaboration)
(a) There is mention of systematic errors but I would like a bit
more detail on what these are, how they were assessed and how they
were handled in the data assimilation.
(b) The handling of cloud-contaminated CAIRT data is unclear, particularly
how it relates to "superobbing".
I also offer the following suggestions, for the authors' consideration,
which I think would improve the paper
(c) There should be a plot of the limb spectrum as observed by CAIRT,
identifying features associated with the main molecular species.
(d) I appreciate that there is more to data assimilation than simply
interpolating retrieved fields, but it would have been interesting to
see how a simple gridding/binning of the CAIRT and MLS profiles on the
assimilation model grid compares with the data assimilation approach (as
an extra test case).
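To illustrate what such a baseline might involve, here is a minimal Python sketch of gridding/binning by simple averaging; all arrays and the grid are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical retrieved values at one pressure level: (latitude, value) pairs
lats = np.array([-8.2, -3.1, 0.4, 2.9, 7.7, 12.3])     # degrees
vals = np.array([310., 295., 288., 292., 305., 330.])  # e.g. O3 in ppbv

# Regular 5-degree latitude bins standing in for the assimilation model grid
edges = np.arange(-10., 16., 5.)
idx = np.digitize(lats, edges) - 1     # bin index for each profile

# Simple mean of all profiles falling in each bin; NaN where a bin is empty
binned = np.full(len(edges) - 1, np.nan)
for k in range(len(edges) - 1):
    in_bin = vals[idx == k]
    if in_bin.size:
        binned[k] = in_bin.mean()
```

Comparing such a binned field against the assimilation analysis would show how much the assimilation machinery itself contributes beyond the raw sampling density.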
(e) While the zonal mean plots can be interpreted in a number of different
ways according to the prejudices of the viewer, there should be a
summary table quantifying the improvement with a single number or two,
eg mean bias and SD, or something fancier such as information content
added by each instrument. But, anyway, a hard number.
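As a sketch of the kind of hard number meant here, the following computes a mean bias and standard deviation of differences between hypothetical co-located analysis and reference values (numbers invented purely for illustration):

```python
import numpy as np

# Hypothetical co-located values: assimilated analysis vs. reference "truth"
analysis = np.array([4.8, 5.1, 5.6, 4.9, 5.3])   # e.g. H2O in ppmv
truth = np.array([5.0, 5.0, 5.5, 5.0, 5.0])

diff = analysis - truth
mean_bias = diff.mean()      # single-number summary of systematic offset
sd = diff.std(ddof=1)        # single-number summary of random scatter
```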
DETAILED COMMENTS
P3 L76 "review"
Table 1: While I understand that the O3 uncertainty requirements were
expressed in ppbv, it would be useful to have some indication of actual
O3 concentrations in the UTLS region in order to put these values
into context (I eventually found this information by inspecting the
colour scales in Fig 5).
P4 L88: Is resolution 0.25cm-1 here specified before or after apodisation?
P4 L96: It would be interesting to know the minimum pixel size, ie before
averaging spectra what is the FOV size of an individual spectrum?
P4 L97: I don't think you can claim a "strong" heritage based on just a
single predecessor.
P4 L99: This could be misunderstood. MIPAS actually had several
detectors, covering different spectral ranges, all co-aligned to view
the same 'pixel'.
P4 L100: Also liable to be misunderstood - while MIPAS (and MLS)
produced profiles along a single track CAIRT would have produced
profiles along multiple tracks so it's unclear whether these figures
refer to improvements in sampling along just a single orbital track
or, say, total number of profiles per day.
P5 L121: "within which..."
P5 L127 & L138: Since there is quite a bit of discussion in the main text based
on what these figures in the supplementary data show, they should
have been included in the main body of the paper.
P8 L216: Here it is 0.2cm-1 sampling but on P4 L88 a figure 0.25cm-1 is
given for the instrument.
At the end of the paragraph it explains that this is after
apodisation. Knowing how this works, I assume it therefore
represents a convolution of spectra calculated on a
much-finer spacing with the apodised instrument lineshape,
and sampled at 0.2cm-1, but just reading "computed with
0.2cm-1 spectral sampling" a non-specialist might assume this
was the original fine grid spacing.
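To make the interpretation I am assuming explicit, this sketch convolves a fine-grid spectrum with a toy (Gaussian stand-in) apodised instrument line shape and then samples the result every 0.2 cm-1; all grids and widths are illustrative only:

```python
import numpy as np

# Fine-grid toy spectrum: a narrow line at 5 cm-1 on a 0.01 cm-1 grid
fine_dx = 0.01
nu = np.arange(0.0, 10.0, fine_dx)
spec = np.exp(-((nu - 5.0) / 0.05) ** 2)

# Toy apodised instrument line shape (a Gaussian stand-in), normalised
ils_nu = np.arange(-1.0, 1.0 + fine_dx, fine_dx)
ils = np.exp(-((ils_nu / 0.15) ** 2))
ils /= ils.sum()

# Convolve on the fine grid, then sample the result every 0.2 cm-1
conv = np.convolve(spec, ils, mode="same")
step = int(round(0.2 / fine_dx))    # 20 fine-grid points per output sample
sampled = conv[::step]              # the "0.2 cm-1 spectral sampling" output
```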
P9 L223: "factorized random error covariance matrices". I don't understand
what "factorized" means in this context. If I had to guess I would say
some sort of EOF/SVD decomposition of the L2 random error covariance.
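If my guess is right, "factorized" would mean something like the following eigendecomposition sketch (hypothetical 3x3 covariance), where the matrix is represented by scaled eigenvectors and could be truncated to its leading EOFs:

```python
import numpy as np

# Hypothetical 3x3 L2 random-error covariance (symmetric positive definite)
C = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.8],
              [0.5, 0.8, 2.0]])

# Eigendecomposition C = U diag(w) U^T; the factor L = U diag(sqrt(w))
# satisfies C = L @ L.T, and dropping small eigenvalues would give an
# EOF-style reduced-rank representation of the covariance.
w, U = np.linalg.eigh(C)
L = U * np.sqrt(w)    # columns of U scaled by sqrt of eigenvalues
```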
P9 L224: Non-LTE effects can be a problem for modelling stratospheric H2O
lines. Were these included as part of the systematic error?
P9 L241: I didn't understand the statement setting y_a to zero to avoid
any a priori bias. Isn't this going to bias all terms in the simulated
L2 profile towards zero?
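My concern can be made concrete with the standard averaging-kernel relation x_sim = A x_true + (I - A) x_a: setting x_a = 0 leaves x_sim = A x_true, which shrinks every element towards zero wherever A departs from the identity. A toy illustration (kernel values hypothetical):

```python
import numpy as np

# Hypothetical 2-level averaging kernel and true profile
A = np.array([[0.8, 0.1],
              [0.1, 0.8]])
x_true = np.array([5.0, 4.0])
I = np.eye(2)

# Standard relation: x_sim = A @ x_true + (I - A) @ x_a
x_sim_zero_prior = A @ x_true                     # x_a = 0
x_sim_true_prior = A @ x_true + (I - A) @ x_true  # x_a = x_true (no bias)
```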
P9 L245: I didn't understand this. How do clouds influence retrievals beyond
the physical cloud boundary (unless you are just referring to partly
cloud-filled pixels containing a cloud-contaminated component)?
P9 L246: I assume the CAMS cloud fraction is the fraction in the CAMS
model grid cell containing cloud of any sort, ie horizontal fraction
over some indeterminate but presumably large area compared with
CAIRT horizontal FOV extent. It is unclear how this maps into a
limb-view.
P9 L247: If a cloud is assumed to have finite vertical extent below the
cloud top, it is not obvious why 50km extent along the line-of-sight
prevents any retrieval beneath cloudy regions. If the cloud were high
enough (eg PSCs or high-level cirrus) you would presumably still be
retrieving along the line-of-sight through the cloud-free paths
below it?
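A back-of-envelope calculation supports this point: near the tangent point a straight limb ray stays within a small height dh of the tangent height over a horizontal distance of roughly 2*sqrt(2*R*dh), so 50 km is short compared with the relevant path length. A sketch (values illustrative):

```python
import math

R = 6371.0   # mean Earth radius, km
dh = 0.5     # height band above the tangent point, km

# Horizontal distance from the tangent point at which a straight limb ray
# has risen by dh: x = sqrt(2 * R * dh), so the ray spends 2*x inside the band
half_path = math.sqrt(2.0 * R * dh)
full_path = 2.0 * half_path
```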
Fig 2: It looks like this box spans the coast of East Africa. It would have
been nice to add some geographical features and maybe a length scale to
give a better impression of the spatial sampling of CAIRT.
P10 L254: With the reference to interferograms here I am now uncertain whether
the assimilation is of CAIRT L2 profiles, L1C spectra, or L1A
interferograms?
P10 Eqs 2 & 3: This is confusing. You're using bold font for y, as in
Eq 1, suggesting each y_i is a profile. Yet discussion of excluding
cloud contaminated values suggests this averaging is performed
level-by-level, ie y_i are the set of profile values at a single
altitude. If it is level-by-level does all the off-diagonal
information in the L2 random error covariance get ignored in the
super-obbing?
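For concreteness, here is a sketch of what I understand level-by-level super-obbing to be: cloud-contaminated values removed (NaN) before averaging, with the error variance reduced by 1/N per level, which indeed discards all off-diagonal (inter-level) covariance information. Arrays and the assumed common variance are hypothetical:

```python
import numpy as np

# Hypothetical profiles (rows) by levels (columns); NaN marks values
# already screened out as cloud-contaminated
y = np.array([[300.0, 310.0, np.nan],
              [305.0, np.nan, 420.0],
              [295.0, 330.0, 410.0]])

n = np.sum(~np.isnan(y), axis=0)    # cloud-free count per level
superob = np.nanmean(y, axis=0)     # level-by-level mean profile

# Assuming a common diagonal error variance of 25 per value, the superob
# variance shrinks as 1/N per level; any inter-level covariance of the
# L2 errors is simply discarded
superob_var = 25.0 / n
```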
P11 L276: Missing Section reference ("??").
P11 L292: It is stated that the CAIRT data for the tropics is representative
of other latitudes. But cloud heights and frequency in the tropics
are probably higher than other latitudes, and tropopause temperatures
lower, so this seems pessimistic rather than representative.
P12 L296: I don't know how valid it is to compare systematic errors without
a consistent definition of systematic errors for the two instruments.
P12 L303: Isn't Fig 6 the H2O equivalent of Fig 5? (rather than Fig S3)
Figs 5 and 6: Given the superficial similarity between CAMS and the two
instruments, it would have been informative to add a 3rd column showing
the differences Instrument-CAIRT.
P14 L327: I was not aware that systematic errors could be included in data
assimilation, except as a fudge by simply inflating the random error.
Can you explain a bit more how this was achieved?
P16 Section 8: Which of the four CAIRT assimilation runs listed in Table 2
was being used for the results shown in this section?
From the text on P18 L354-360 I would guess AR_CAIRT.
P16 Section 8: It would be useful, at the start of this section, to have
the equations which define the NMB, NSD and correl.coefficient.
For example, I can think of two different ways of normalising the
SD, based on instrument random error or atmospheric variability.
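For example, a sketch of the two normalisations I have in mind, together with one common convention for the normalised mean bias (all values hypothetical):

```python
import numpy as np

# Hypothetical co-located analysis (x) and reference (r) values
x = np.array([4.8, 5.1, 5.6, 4.9, 5.3])
r = np.array([5.0, 5.0, 5.5, 5.0, 5.0])

nmb = (x - r).mean() / r.mean()    # one common normalised-mean-bias convention

# Two plausible normalisations of the SD of the differences:
nsd_by_variability = (x - r).std(ddof=1) / r.std(ddof=1)  # atmospheric spread
nsd_by_error = (x - r).std(ddof=1) / 0.2    # assumed instrument random error
```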
Typography:
Note the distinction (available in LaTeX) between -, --, and --- for
different sorts of punctuation, as well as $-$ for a minus sign.
Here they seem to have been used interchangeably and randomly.
Also: undesirable paragraph indentation immediately following
equations (1) and (4). In LaTeX this can be avoided by not leaving
a blank line after the equation.
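For reference, a minimal LaTeX fragment illustrating both points:

```latex
well-known            % -    hyphen in a compound word
pages 12--15          % --   en-dash for number ranges
punctuation---thus    % ---  em-dash
$-0.5$ ppbv           % $-$  minus sign in math mode

\begin{equation}
  y = Ax + \epsilon
\end{equation}
no blank line here, so this text continues the paragraph unindented.
```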
----