This work is distributed under the Creative Commons Attribution 4.0 License.
Ozone Monitoring Instrument (OMI) collection 4: establishing a 17-year-long series of detrended level-1b data
Quintus Kleipool
Nico Rozemeijer
Mirna van Hoek
Jonatan Leloux
Erwin Loots
Antje Ludewig
Emiel van der Plas
Daley Adrichem
Raoul Harel
Simon Spronk
Mark ter Linden
Glen Jaross
David Haffner
Pepijn Veefkind
Pieternel F. Levelt
Interactive discussion
Status: closed
- RC1: 'Comment on amt-2021-430', Anonymous Referee #1, 26 Jan 2022
General comments:
Congratulations to the entire current and former team of OMI for such an excellent system, still providing very valuable data even long past its foreseen mission lifetime. This paper gives a good overview of the adaptations to the processing, including the degradation correction, to improve the data quality. I recommend publication after some corrections, including descriptions of the achieved improvements for all changed processes.
Specific comments:
Page 1/ line 6: ‘… until the eventual end of the mission.’ Is this to be understood as including a possible extension even beyond 2023? In line 3 you mention ‘for many years more.’ Suggestion to be more specific and possibly state what limits the extension of the mission, which here is most likely the fuel that must remain for deorbiting. You mention this later on page 4/ lines 91, 92.
Line 8, 9: In combination with the title, is the assumption correct that the data of the past 17 years are planned to be reprocessed? If reprocessing is planned or done, it may be worth mentioning this in the abstract already. Later, in the conclusions (page 38/ line 814), it is mentioned: ‘the reprocessing of the entire 17 year mission up until now is in progress’.
Line 17, 18: Is the understanding, in combination with the statement in line 3, correct that TES and HIRDLS are no longer operated? Suggestion to state the current status of TES and HIRDLS more explicitly.
Page 2/ line 39, 40: ‘..instrumental effects that are common’: suggestion to state the main differences between the optical paths of the sun and Earth ports, e.g. the diffuser.
Line 48, 49: ‘For collection 4 the TROPOMI naming convention was adopted, referring to the UV1, UV2 and VIS channels as band 1, band 2 and band 3 respectively.’ Can you add an explanation of why this convention was adopted?
Page 3/ line 54, 59: Suggestion to add references for the collection 1 and collection 2 datasets, e.g. Oord, 2006, SPIE and Oord, 2006, IEEE, vol. 44, no. 5; see also page 6/ line 154, where one of these references is provided, but there for collection 3, which was earlier referenced to Dobber, 2008.
Page 4/ line 114, 115: You mention, completely understandably, that the updates of the KNMI and NASA L2 processors fall outside the scope of this paper, but could you possibly add some references?
Page 5/ line 132: To give a better understanding of what 70 000 orbits mean in time, could you add in the introduction to OMI how many orbits per day OMI performs, e.g. around page 2, in the paragraph starting at line 30?
Page 9/ line 240 / section 3.5: Might it be that a non-optimal angular dependence correction leads to this ‘striping’? Is a seasonal effect observed? Suggestion to also add a figure to illustrate this observed effect.
Page 15, 16/ section 5.2: It may be worthwhile stating that, even if the QVD degraded more than the ALU diffusers, the degradation shown over those 12 years (table 4) and 16 years (page 17, figure 4) is very low compared to other instruments, especially considering its daily use.
Generally, not all described changes to the processor from collection 3 to collection 4 have their improvements described or shown in terms of absolute values, error bar reductions, or end-product improvements. Some examples:
- Section 6.2: you mention ‘Furthermore, this over-fitting can result in unexpected behavior for extreme values of other input variables like the OPB temperature as well.’, but no numbers are provided and the improvement is not clear;
- Section 6.1: what is the advantage of the difference implemented in collection 4?
- Section 6.2.1: the improvement of the changed method on the end-product is not clear;
- Section 6.2.3: improvement not clear;
- Section 6.3: another method is described, but the improvement is not clear;
- Section 6.4: ‘resulted in a large amount of ground pixels that were flagged unnecessarily’ without giving e.g. a percentage improvement;
- Section 6.5: transient signal flagging.
Suggestion to add graphs and/or values of the significant improvements where these are missing.
Page 35/ line 774: ‘bias is expected due to the Earth-Sun distance normalization that is present in collection 4 and not in collection 3.’ If understood correctly, a bias is introduced by the different method in collection 4, and this bias is essentially the improvement implemented by the new correction, but it is not shown in comparison with the former data from collection 3. Previously, on page 8/ line 213, it is only stated that now both radiance and irradiance are corrected for the Earth-Sun distance. Please consider making the text more explicit, and please give the value of the bias, which is understood to be the improvement in collection 4.
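For orientation only, a back-of-the-envelope estimate (not a number from the paper): normalizing a measured (ir)radiance to 1 AU follows the inverse-square law,

$E_{1\,\mathrm{AU}} = E(r)\,\left(\frac{r}{1\,\mathrm{AU}}\right)^{2},$

and since the Earth-Sun distance $r$ varies by about ±1.7% over the year, the normalization changes individual radiance and irradiance values by up to about ±3.4%; in the reflectance (radiance/irradiance ratio) the factor cancels when applied to both.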
Line 776, 777: Is the mentioned ‘aggressive flagging’ linked to page 12/ line 310, section 4.5, Detector pixel quality flags? If yes, suggestion to add a reference to that section here.
Page 39/ line 821: ‘that the observed Earth reflectance is not affected by instrumental artifacts’: might this be too strong a statement, since the text also describes that some effects remain which cannot be identified in flight, e.g. the folding mirror and telescope mirror? Suggestion to change the wording slightly, e.g. to ‘not significantly affected’.
Technical corrections:
Page 3/ line 67: ‘trend and calibration monitoring system (TMCF)’: should the abbreviation be TCMF, or should the text read ‘trend monitoring and calibration system’?
Page 9/ line 237: CKD file: please spell out the abbreviation.
Page 15/ line 394, 395: please define QVD (quasi volume diffuser) and ALU1 and ALU2 (diffusers made from aluminium).
Page 16/ line 403: ‘ratio From’ → ‘ratio. From’
Page 36/ figure 21: suggestion, for visualization, to use the same y-scale for the ratios (from 1.00 to 1.40, as for UV1) for all channels, and to use dots instead of lines for better visibility and comparison.
Figure positioning: the positioning of the figures sometimes interrupts a sentence of the text, or a figure is placed in the next section, e.g. page 34/ figure 19. Consider repositioning the figures closest to their descriptions in the text.
Last but not least: maybe it would be nice to also refer to one of the early OMI papers by its optical designer Huib Visser, e.g.
Smorenburg, C., H. Visser, and K. Moddemeijer, "OMI-EOS: Wide field imaging spectrometer for ozone monitoring", Europto/SPIE conference, Berlin, SPIE volume 3737, 1999
and/or
Piet Stammes, Pieternel F. Levelt, Johan de Vries, Huib Visser, Bob Kruizinga, Kees Smorenburg, Gilbert W. Leppelmeier, and Ernest Hilsenrath "Scientific requirements and optical design of the ozone monitoring instrument on EOS-CHEM", Proc. SPIE 3750, Earth Observing Systems IV, (24 September 1999); https://doi.org/10.1117/12.363517.
Citation: https://doi.org/10.5194/amt-2021-430-RC1
- AC1: 'Reply on RC1', Antje Ludewig, 18 Mar 2022
- RC2: 'Comment on amt-2021-430', Anonymous Referee #2, 27 Jan 2022
This is a good paper presenting a clear overview of the OMI L1b collection 4 data.
I recommend publication after some minor corrections.
Page 4:
"This updated OMI processor has in-orbit calibration functionality in forward mode, making the TMCF system obsolete. The available TMCF calibration data has been analyzed, such that historic trends in the instrument calibration status can be corrected for in the collection 4 L01b (re-)processing."Page 6:
"The instrument operation schedule has been updated such that calculation and calibration needed for background correction and random telegraph signal detection can now been done by the collection 4 L01b processor in forward mode without the need for the TMCF system."Page 6:
"The design of the collection 4 L01b makes it possible to have dependencies between measurements and perform aggregate calculations."Page 6:
"This allows, for example, to initially process background measurements, and use an aggregate of these processed background measurements in the background correction during the processing of the remaining measurements."
We assume that this processing approach is applied to one orbit only, but this is not clear from the text.
Is it possible to also apply this approach to multiple orbits, or to measurements / results from several days / weeks / months?
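As a minimal sketch of how we read this two-pass approach within a single orbit (names and data layout are our own assumptions, not from the paper):

```python
import numpy as np

def process_orbit(measurements):
    """Hypothetical two-pass processing: backgrounds first, then the rest."""
    # Pass 1: process the background measurements and aggregate them.
    backgrounds = [m["signal"] for m in measurements if m["class"] == "background"]
    dark = np.mean(backgrounds, axis=0)  # aggregate background estimate
    # Pass 2: use the aggregate in the background correction of the remaining measurements.
    return [m["signal"] - dark for m in measurements if m["class"] != "background"]
```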
"Another improvement is that the tables allow a more fine-grained processing configuration."
It is unclear from the text if this refers to measurement class (as indicated), or to ICID (Instrument Configuration IDentifier).
Page 9:
"For collection 4 L2 processing an alternative irradiance product is generated that consists of the running average over 100 daily irradiance measurements, yielding an improvement of the signal-to-noise ratio with a factor of 10."
This requires a memory capability in the processing system. How is this implemented?
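One possible implementation, purely as an illustrative sketch (the paper does not describe the mechanism), is a bounded buffer holding the last 100 daily spectra:

```python
from collections import deque
import numpy as np

class RunningIrradiance:
    """Hold the last n_days irradiance spectra and expose their mean.

    Averaging N measurements with uncorrelated noise improves the SNR by
    a factor sqrt(N); sqrt(100) = 10, matching the quoted improvement.
    """
    def __init__(self, n_days=100):
        self.buffer = deque(maxlen=n_days)  # oldest spectrum drops out automatically

    def add_daily_measurement(self, spectrum):
        self.buffer.append(np.asarray(spectrum, dtype=float))

    def average(self):
        return np.mean(np.stack(list(self.buffer)), axis=0)
```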
Section 4.6 RTS:
In collection 3 the RTS map is based on analysing 30 days of dark signal data.
In collection 4 one day of data is used.
It looks like collection 3 is looking for RTS in general, whereas collection 4 is looking more for RTS that is considered relevant for the L1b accuracy.
It would be interesting to know and understand more about the differences between these two methods.
Section 5.1:
"A small change however is that in collection 3 the sensitivity calibration, as used by the L01b data processor, was provided as a function of wavelength in the calibration key data.
For collection 4 the TROPOMI convention was used, and the calibration key data was converted to be a function of detector pixel."
How do you deal with wavelength shifts for collection 4?
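For reference, converting CKD given as a function of wavelength to a function of detector pixel could be as simple as evaluating it on each pixel's assigned wavelength (a hypothetical sketch; the actual conversion is not detailed here):

```python
import numpy as np

def ckd_to_pixel_grid(ckd_wavelength, ckd_value, pixel_wavelengths):
    """Interpolate wavelength-based calibration key data onto the pixel grid."""
    return np.interp(pixel_wavelengths, ckd_wavelength, ckd_value)
```

Once the CKD is pinned to a fixed pixel grid, an in-flight wavelength shift changes the wavelength each pixel actually sees, which is what motivates the question above.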
Figure 4:
- The caption refers to top and bottom panels instead of left and right panels.
- "Clearly there is an overall 4% degradation with no strong wavelength dependence [ALU1]"
This is surprising and seems to point to a non-optical origin, perhaps geometric or electronic effects. Please elaborate a bit more on the origin of this observed 4% wavelength-independent degradation.
Section 5.3 Relative irradiance:
It would be interesting to know more about the final accuracy differences between collections 3 and 4.
Section 5.4, Figure 7:
- The caption refers to upper and lower panels instead of left and right panels.
Section 5.4:
"This suggests that 2% – 3% of the observed change is independent of wavelength and not a result of optical degradation.
Also it is evident that the degradation can be strongly row dependent, especially for the UV1 channel."
What is the expected cause of this 2-3% offset? Does it make sense to include this in the irradiance degradation correction, when the cause is not optical?
What is the expected cause of this row dependency?
Figure 14:
The indicated wavelength shift is 140 pm over 40 K. Please indicate how much this is in spectral pixel size (e.g. 0.13 spectral px).
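The requested conversion is simply the shift divided by the spectral sampling of the channel (the sampling value itself is not quoted here and would need to be taken from the paper):

$\Delta p = \frac{\Delta\lambda}{\mathrm{d}\lambda/\mathrm{d}p}, \qquad \text{here } \Delta\lambda = 140\ \mathrm{pm}.$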
Figure 15:
The indicated wavelength shift is 60 pm over a Q-factor range of 1.2. Please indicate how much this is in spectral pixel size (e.g. 0.06 spectral px).
Citation: https://doi.org/10.5194/amt-2021-430-RC2
- AC2: 'Reply on RC2', Antje Ludewig, 18 Mar 2022
- RC3: 'Comment on amt-2021-430', Ruediger Lang, 17 Feb 2022
The paper by Kleipool et al. on the “OMI Collection 4: establishing a 17-year long series of detrended
L1b data” details improved methodologies (with respect to earlier updates to the processing, in particular collection 3) applied to the processing of OMI level-0 to 1b measurements, including the correction of sensor degradation and calibration key-data deficiencies. In particular, low-level but important aspects like detector pixel response flagging have been improved and streamlined in collection 4, in particular with respect to what is currently done for the Sentinel-5p mission. It is expected that future missions like Sentinel-4, Sentinel-5 and CO2M will benefit from the approaches presented here, also because more harmonised approaches in these aspects will likely improve understanding and encourage the use of the data (and the corresponding error flagging) by the users of level-1b data of these missions.
The paper is well written and clearly structured, and provides the latest version of the OMI Algorithm Theoretical Baseline Document (ATBD) for OMI level-0 to 1b processing as supplementary material. The latter is appreciated but also presents some challenges for non-expert readers of the paper (see below).
Next to the important aspects of detector pixel level performance monitoring and flagging, Kleipool et al. also present their methodology to address and correct the observed long-term degradation of solar irradiance and Earthshine radiance signal levels (as expected for this type of sensor over such long time scales, in particular towards the UV). The solar irradiance degradation is larger than the Earthshine degradation due to the degradation of the involved solar diffusers and the additional mirrors in the optical path of the solar port. The authors separate the multi-dimensional change of the bi-directional scattering distribution function (BSDF) for different illumination conditions, normalised to certain reference angles, from the long-term degradation of the absolute irradiance levels. This approach also corrects expected deficiencies in the on-ground key-data of the BSDF, given the complexity of carrying out such measurements on the ground.
While this approach has also been used for other missions with this type of diffusers, here the BSDF changes are also evaluated as a function of time, which clearly improves the quality of the long-term time series.
For the correction of the Earthshine part the authors apply a “stable ground target” approach, also used by other missions for this purpose, where the target surface reflectance can be expected to be stable over the year and the atmospheric variability is not too large. The authors' choice of target is snow/ice surfaces over Antarctica. While those surfaces should be quite stable (although the snow BRDF can change in a complex way as a function of temperature and solar illumination conditions), I am wondering if this is actually a good choice for a mission where ozone contributes to a significant extent to the spectral variation of the measurements, in particular below 350 nm. The variability of ozone over the year is very large over Antarctica, and arguably much more significant than at mid-latitudes. While in the latter case the variability of line absorbers such as water vapour is larger (and stronger), these usually cover only a small subsection of the spectrum and can therefore be filtered out much more easily. So I would have considered the Libyan desert a better target, with an even more stable surface over the year (and well characterised), and less interference by ozone variability.
In particular, and as a consequence of the strong interference and variability of ozone below 335 nm, a correction of the radiances in this important region (with many level-2 products derived from this part of the spectrum) based on actual measurements has not been carried out. Instead it has been assumed that the degradation is spectrally neutral for the Earthshine port, so that it can be based on the degradation coefficients derived in the region between 335 and 360 nm for band 2 and between 390 and 500 nm for band 3. However, the exact regions considered usable and used for the Earthshine degradation evaluation of the target area measurements (and, I guess, extended across the full spectrum) are not explicitly stated, since other spectral regions suffer from atmospheric absorption features, Fraunhofer lines and the interference of a dichroic.
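Schematically, and only as my reading of the described method, the assumption amounts to a single wavelength-independent degradation factor per time step (and viewing geometry), derived in the clean window and extrapolated downward:

$c(t) = \left\langle \frac{R(t,\lambda)}{R(t_{0},\lambda)} \right\rangle_{\lambda \in [335,\,360]\,\mathrm{nm}}, \qquad \text{applied unchanged for } \lambda < 335\ \mathrm{nm}.$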
I consider the assumption of spectral neutrality a critical one, and I find that it has not been addressed in full by the authors. The results presented for the Earthshine port correction could potentially be significantly biased because of this assumption. On the other hand, the results derived from the ALU1 and ALU2 diffusers, which indicate that the contribution of the spectrometer and the detector assembly to the degradation indeed seems to be spectrally quite neutral (and physical arguments can also be made for such an observation), have not been explicitly applied to support the hypothesis, e.g. by comparing them to the observed degradation in the 335 to 360 nm region and drawing some inference from such a comparison.
But most importantly, I think the first mirror, which seems to be bypassed by the solar port optical path, cannot simply be ignored, in particular given that the region below 335 nm is not addressed directly by Earthshine observations. Obviously, any mirror in the light path could exhibit relative spectral neutrality in its degradation in the visible while exhibiting a strong spectral dependence in its degradation at shorter wavelengths.
Acknowledging the fundamental difficulty of assessing the Earthshine port degradation in this shortwave spectral region, while at the same time also acknowledging the large number of users of collection 4 data using in particular this spectral region, I would strongly recommend including some (at least initial) analysis applying level-2 retrievals, or applying (ozone) cross-section spectral dependency information, to support the assumption of spectral neutrality.
The paper is very well suited for publication in AMT and of high significance, not only to future users of OMI level-1b data but also to users and developers of reprocessed collections of current and future mission level-1b data for instruments of this type. I can therefore highly recommend it for publication, provided the authors can address the issues raised above and in the specific comments section below.
Specific comments:
Section 3.5: “In addition, a static irradiance measurement used over a 17 year mission ignores the subtle changes in the solar output, an effect that could enter the L2 products in the long term.” Can we really assume that the solar variability over a timescale of 17 years is negligible in terms of the observed signal variation (in particular in the UV)?
Section 4.5 on flagging: Can you confirm that a pixel qualification originally using 31 categories has been mapped down to 3, and finally to only 1, with RTS being separated out? Was this mapping unique or were there some ambiguities to overcome?
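Purely as a hypothetical illustration of what such a mapping could look like (category names and grouping invented, not from the paper):

```python
# Hypothetical grouping of detailed pixel-quality categories.
DETAILED_TO_COARSE = {
    "dead": "bad", "saturated": "bad",
    "noisy": "suspect", "drifting": "suspect",
    "nominal": "good",
    # ... remaining detailed categories map similarly
}

def final_flag(detailed_category, is_rts):
    """Collapse to a single usable/not-usable flag, with RTS kept separate."""
    usable = DETAILED_TO_COARSE.get(detailed_category, "suspect") == "good"
    return usable, is_rts
```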
Section 5.3 on relative irradiance: I would consider it clearer for the reader to talk about the diffuser BSDF, after having properly defined it, and its correction (which changes as a function of azimuth angle, elevation and time). So I would consider replacing "relative irradiance" with "diffuser BSDF" variation/correction.
Section 5.4: The choice of the normalisation point is an extremely sensitive and delicate matter for deriving such a degradation correction: first of all because data from the day of launch (as the “start of the mission”) cannot be used, but also because the selected normalisation point (and its inherent biases) can significantly amplify biases in the normalised time series of correction coefficients for later periods.
- So what is characterised here as the start of the mission? Ideally this should be the first irradiance measurement of the instrument that can be solidly and fully calibrated (irrespective of commissioning periods or SIOV).
- On the other hand, various normalisation spectra should be tested to find out how sensitive the degradation correction coefficients are to the choice of the reference spectrum. Has such a sensitivity test been carried out?
- Again, I find the assumption of a completely stable sun over a 17-year period a bit tricky without further qualification, in particular in the UV.
Section 5.5: There seems to be a systematic dependency (at first order) of the degradation on the row, with higher degradation for the middle rows and lower for the edges. Is this potentially a systematic effect?
Section 6.2.2, on the wavelength temperature correction: I would assume that the dependency of the dispersion on the OPB temperature has been measured on-ground. How do the results obtained in this study compare to the on-ground measurements of the temperature dependency of the spectral calibration?
Section 6.5, on the “transient” signal flagging: How often are pixels flagged for these "transient events"? Can some statistics be provided, and are these events unknown in their nature/origin, and therefore not categorised as any of the pixel effects described before? Only in the next section does it become clear that cosmic particle impact is one of those transient effects, so a list of potential causes would set the scene here.
Section 6.6: High latitudes are of course also very significant regions of cosmic particle impact. Here only the (important) SAA area and its evolution is shown and discussed. I would assume that transient effects also accumulate and are accounted for on a global scale (i.e. including high latitudes). Can you confirm?
Editorial comments
Generally, I think it would be very helpful to point to the specific section in the supplement (ATBD) document each time it is referred to throughout the paper. This will help readers (in particular the less expert ones) find their way through the vast amount of supplementary information provided in the ATBD (naturally not all of it relevant to the scope of this paper).
Generally, on figure captions: captions often refer to top/bottom panels where there are only left/right panels.
Section 2.1: OB, or OBP or OPB?
Section 2.1, l. 132: “it was observed that the duty cycle of the PWM of the UV detector heater occasionally dropped to zero, ...”. Can a concrete date be added for this?
Section 3.3: It might be worth mentioning in this context that the porting and restructuring of the product format to netCDF is part of a wider effort to streamline the product formats of instruments of this type (GOME, SCIA, GOME-2, S5p, S5 and the future S7), in line with the AC and CLIM communities in terms of output format (netCDF CF standard).
Section 4.1, 1st paragraph: So what is the correct value then? Only the “erroneous” conversion is reported.
Section 4.2, l. 282: Check sentence.
Caption Figure 19: The terminology difference between "terrain height" and "surface altitude" is nowhere explained.
Citation: https://doi.org/10.5194/amt-2021-430-RC3
- AC4: 'Reply on RC3', Antje Ludewig, 18 Mar 2022