Volume 17, issue 7
https://doi.org/10.5194/amt-17-1891-2024
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
An improved OMI ozone profile research product version 2.0 with collection 4 L1b data and algorithm updates
- Final revised paper (published on 04 Apr 2024)
- Preprint (discussion started on 26 Jul 2023)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on amt-2023-154', Anonymous Referee #1, 30 Aug 2023
  - AC2: 'Reply on RC1', Juseon Bak, 19 Dec 2023
- RC2: 'Comment on amt-2023-154', Anonymous Referee #2, 05 Sep 2023
  - AC1: 'Reply on RC2', Juseon Bak, 19 Dec 2023
- RC3: 'Comment on amt-2023-154', Anonymous Referee #3, 05 Sep 2023
  - AC3: 'Reply on RC3', Juseon Bak, 20 Dec 2023
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Juseon Bak on behalf of the Authors (21 Dec 2023)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (21 Dec 2023) by Sandip Dhomse
RR by Anonymous Referee #2 (26 Dec 2023)
RR by Anonymous Referee #1 (02 Jan 2024)
RR by Anonymous Referee #3 (05 Jan 2024)
ED: Publish as is (06 Jan 2024) by Sandip Dhomse
AR by Juseon Bak on behalf of the Authors (16 Jan 2024)
I find that this is a carefully crafted paper, which I appreciate. I can tell the authors put a lot of effort into error assessment and into describing their algorithm improvements. They describe a v2 product that is a significant improvement over an already good performance by the v1 product.

However, I believe the authors vastly understate the importance of their soft calibration approach in the success of the OMPROFOZ product. Nor do the authors adequately discuss the information content of their product given their soft calibration approach. The derived ozone profiles are effectively normalized to equatorial MLS profiles once per year. Therefore, the OMPROFOZ product is one that describes extra-tropical ozone profile variability and intra-annual ozone variability relative to MLS ozone profiles. There is nothing wrong with this approach, nor is it even new to this version. A product such as OMPROFOZ that better describes UTLS ozone variability is quite valuable. But the authors should clearly state up front (i.e. in the abstract or in the conclusions) what this product is and what it is not. In particular, it is not a product that independently measures long-term changes in ozone profiles.

Their soft calibration approach is central to the definition of this product, yet the authors treat it almost as an afterthought with little mention throughout this paper. One gets the impression they themselves do not fully appreciate the role this normalization plays.
Section 1, Line 51
Remove "been"
Section 1, Line 56
Insert "to" before "evaluate"
Section 1, Line 66
Since this is the first reference to MLS in this paper, please indicate that you refer to the AURA instrument.
Section 2.1, Line 122
In the interests of full disclosure the authors should inform the readers that the row anomaly affects ALL of the UV1 channel. There are no rows that are reliably free of the effects of the anomaly, though longer wavelengths tend to be less affected than shorter ones, and lower rows less than higher ones.
Section heading 2.1 is duplicated. Section 2.2 is missing.
Section 2.2, Lines 171-175
The authors state that the measurement noise reported in the OMI L1B product underestimates the true noise. This is a well-known problem across multiple instruments. It occurs because pseudo-random systematic errors, either in the measurements or in the model, are much larger than detector noise. The authors go on to state that their assumed minimum errors are uncorrelated between wavelengths, but this seems to be a rather poor assumption. Many modeling and measurement errors are spectrally correlated, e.g. cloud modeling errors. Can the authors comment on the effect such an assumption has on their product?
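For illustration, a minimal sketch (hypothetical wavelength grid, noise floor, and correlation length; none of these values come from the manuscript) of how a diagonal noise-floor covariance compares with one that admits spectral correlation:

```python
import numpy as np

# Illustrative only: hypothetical wavelength grid, noise floor, and
# correlation length; none of these values come from the manuscript.
wavelengths = np.arange(270.0, 330.0, 0.5)        # nm
noise_floor = 0.004                               # assumed relative (0.4 %) error floor
corr_length = 1.0                                 # nm, assumed spectral correlation scale

# Uncorrelated assumption: purely diagonal covariance.
S_uncorr = np.diag(np.full(wavelengths.size, noise_floor**2))

# Alternative: exponential correlation between nearby wavelengths, as might
# arise from cloud or slit-function modelling errors.
dlam = np.abs(wavelengths[:, None] - wavelengths[None, :])
S_corr = noise_floor**2 * np.exp(-dlam / corr_length)

# The covariance enters the retrieval cost function through its inverse;
# correlated errors carry less independent information than the diagonal
# assumption implies (compare the log-determinants).
print(np.linalg.slogdet(S_corr)[1], np.linalg.slogdet(S_uncorr)[1])
```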
Section 3.2, Line 298
Begin this sentence with, "In Version 1 these meteorological variables ..."
Section 3.5, Line 358
Unlike other BUV instruments, normalizing by the OMI measured solar irradiance does not reduce optical degradation errors. With OMI there are several optical elements not shared in common between Earth radiance measurements and solar irradiance measurements. In the case of solar irradiance these elements (diffuser and folding mirror) represent the primary sources of degradation. As a consequence, the Coll. 4 degradation corrections for radiance and for irradiance are completely separate. Furthermore, the degradation correction for irradiance measurements was derived by assuming constant solar irradiance over the mission (Figure 5c demonstrates this point). This is a reasonably good calibration approach for wavelengths longer than 300 nm, but not for the UV1 channel. There are clearly benefits to normalizing OMI radiances with a time-dependent solar irradiance, but cancellation of long-term optical degradation is not one of them.
Having said all this, it's not clear why the authors are concerned about optical degradation given their soft calibration approach outlined in Section 3.8. Perhaps the authors should make it clear in Section 3.5 that their interest in improved solar irradiance measurements is solely to address seasonal variations in instrument calibration.
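To make the point in the previous paragraph explicit, schematically (notation is mine, not the paper's):

```latex
% Schematic only; notation is mine, not the paper's.
% d_I: degradation of the Earth-radiance optical path
% d_F: degradation of the irradiance path (diffuser, folding mirror)
N(\lambda,t) \;=\; \frac{I(\lambda,t)}{F(\lambda,t)}
            \;=\; \frac{d_I(\lambda,t)\, I_{\mathrm{true}}(\lambda,t)}
                       {d_F(\lambda,t)\, F_{\mathrm{true}}(\lambda,t)},
\qquad d_I \ne d_F \;\Longrightarrow\; \text{degradation does not cancel.}
```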
Section 3.1, Lines 369-371
What does "implementations are identically applied" mean? If you mean to say that Coll. 3 and Coll. 4 data are treated the same for this experiment, please say so.
Section 3.6
The authors could instill confidence in their pseudo-absorber approach to scene inhomogeneity if they were to demonstrate that their empirical parameters correlate with scene reflectivity changes or the small pixel column results contained in the OMI Level 1B product. The largest slit function errors will occur in the along-track direction at cloud edges, which can be identified via scene reflectivity or the small-pixel data. However, if such a comparison has already been shown in the referenced paper the authors may ignore this comment.
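Such a diagnostic could be as simple as the following sketch (both inputs are random placeholders, so the printed value here is meaningless; only the form of the check is intended):

```python
import numpy as np

# Placeholder sketch of the suggested diagnostic: correlate the retrieved
# pseudo-absorber coefficient with an along-track cloud-edge proxy built
# from scene reflectivity (or the L1B small-pixel data).
rng = np.random.default_rng(0)
pseudo_absorber = rng.standard_normal(500)        # per-pixel fitted coefficient
reflectivity = rng.uniform(0.0, 1.0, 500)         # scene reflectivity (0-1)

edge_proxy = np.abs(np.gradient(reflectivity))    # along-track reflectivity change
r = np.corrcoef(pseudo_absorber, edge_proxy)[0, 1]
print(f"correlation with cloud-edge proxy: {r:+.2f}")
```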
Section 3.8
Per the authors' description in lines 454-463, OMI calibration is adjusted so that the retrieved ozone profiles match MLS + LLM profiles in the tropics. Per the description provided, this normalization occurs during the northern summer every year. While such an approach should help deal with systematic biases caused by the row anomaly, a once-per-year correction is inadequate to deal with the variable nature of the row anomaly. The authors should address the question of what intra-annual TCO errors may remain after the once-per-year soft calibration corrections.
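As a simplified reading of the procedure (not the authors' code; array shapes, names, and the multiplicative form are assumptions), the correction amounts to something like:

```python
import numpy as np

# Simplified reading, not the authors' code: derive a per-wavelength
# multiplicative adjustment from tropical residuals against radiances
# simulated from MLS + LLM profiles, once per year.
def derive_soft_calibration(residuals):
    """Median relative residual per wavelength (residuals: pixels x wavelengths)."""
    return np.nanmedian(residuals, axis=0)

def apply_soft_calibration(radiance, correction):
    """Divide measured radiances by (1 + static correction)."""
    return radiance / (1.0 + correction)

rng = np.random.default_rng(0)
residuals = 0.01 + 0.002 * rng.standard_normal((1000, 60))   # ~1 % static bias
correction = derive_soft_calibration(residuals)

# Applied once per year, such a static factor removes the bias present at
# derivation time but cannot follow intra-annual changes in the row anomaly.
print(apply_soft_calibration(np.ones(60), correction)[:3])
```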
Section 3.9
The authors should attempt to provide a physical explanation for the observed intra-orbital variations in residuals. Without a reasonable explanation how can the authors or the readers be confident a static CMC correction is appropriate and adequate? The most likely explanation for the observed residual variation is additive errors (e.g. stray light) and the row anomaly. Will the CMC as the authors have implemented it address errors introduced by stray light and the row anomaly?
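For reference, a static CMC of the kind described reduces to subtracting a single mean residual spectrum (sketch below; shapes and values are placeholders, not from the manuscript):

```python
import numpy as np

# Sketch of a static common-mode correction: one mean residual spectrum,
# computed from a reference ensemble, is subtracted from every later fit.
def common_mode(residuals):
    """Mean fitting residual per wavelength (residuals: pixels x wavelengths)."""
    return np.nanmean(residuals, axis=0)

rng = np.random.default_rng(0)
residuals = 0.001 + 0.002 * rng.standard_normal((5000, 60))  # placeholder spectra
cmc = common_mode(residuals)
corrected = residuals - cmc                                   # static subtraction

# Because the correction is one fixed spectrum, it removes only errors that
# are stable in time and along the orbit; scene- and row-dependent additive
# errors (stray light, row anomaly) largely survive it.
```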
Figure 13
The MB and Std. Dev. labels appear to be reversed.
Figures 13 & 14
Given that the OMPROFOZ product is tied via soft calibration to MLS+LLM, it would be helpful to show readers similar comparisons of MLS+LLM to the same ozonesonde measurements (or at least measurements from the same stations). This may provide insight into how much of the observed TCO-sonde difference arises from the choice of soft calibration.
Section 4.0, Line 583
The authors should avoid referring to the flags as "TOMS-based" and instead continue to reference the OMUANC product as the source of these flags.
Section 5.0, Lines 628-630
This brief mention of the soft calibration understates the role it plays in the performance of this product. The text suggests that its role is merely to keep the simulations close to the measurements, perhaps to keep them in a more linear regime. Surely, in an iterative retrieval algorithm that typically requires no more than 2-3 iterations (Section 2.2), dependence on the initial guess is not strong. The authors should acknowledge that the primary role played by their soft calibration is to eliminate the long-term drift observed in v1, and to remove some of the static and slowly varying row anomaly errors that have hitherto stymied all other attempts at retrieving ozone profiles from OMI data.
Section 5.0, Lines 643-648
The authors imply that the improved long-term drift is somehow related to switching from Coll. 3 to Coll. 4 and to some unidentified implementation details. Given the soft calibration approach in v2 it is unlikely that the improved calibration in Coll. 4 plays any role in this.