Global evaluation of RTTOV coefficients for early satellite sensors
Bruna Barbosa Silveira
Emma Catherine Turner
Jérôme Vidot
Abstract. RTTOV coefficients are evaluated using a large, independent dataset of 25,000 atmospheric model profiles as a robust test of the diverse 83 training profiles typically used. The study is carried out for nine historic satellite instruments: IRIS-D, SIRS-B, MRIR and HRIR from the infrared part of the spectrum, and MSU, SSM/I, SSM/T2, SMMR and SSMI/S from the microwave. Simulated channel brightness temperatures show similar statistics between the independent and the 83-profile datasets, confirming that it is acceptable to validate the RTTOV coefficients with the same profiles used to generate the coefficients. Differences between RTTOV and the line-by-line models are highest in water vapour channels, where mean values can reach 0.4 ± 0.2 K for the infrared and 0.04 ± 0.13 K for the microwave. Examination of the latitudinal dependence of the bias reveals different patterns of variability between similar channels on different instruments, such as 679 cm−1 on both IRIS-D and SIRS-B, showing the importance of the specification of the ISRF. Maximum differences of up to several kelvin are associated with extremely non-typical profiles, such as those in polar or very hot regions.
Status: final response (author comments only)
RC1: 'Comment on amt-2023-125', Anonymous Referee #1, 10 Aug 2023
RTTOV is a fast radiative transfer (RT) model used, among many other things, for data assimilation in many NWP centers. As such, it needs to be fast yet accurate. To calculate gas absorption, RTTOV relies on linear regression methods that combine pre-trained absorption coefficients for each instrument of interest with a set of predictors. A way to evaluate the error of this simplification is to compare RTTOV simulations with more computationally demanding models that solve absorption on the fly (line-by-line, LBL, models). The NWP Satellite Application Facility (NWP SAF) routinely makes this comparison using a set of 83 profiles - the same profiles that are used to generate the instrument absorption coefficient files. The authors argue that the robustness of such a validation should be tested using a larger set of profiles. Hence, the work presented compares RT simulations for these 83 profiles and for 25,000 globally distributed profiles, and argues that it is accurate enough to use the smaller set of profiles. The submitted publication evaluates nine MW and IR historic satellite instruments: IRIS-D, SIRS-B, MRIR, HRIR, MSU, SSM/I, SSM/T2, SMMR and SSMI/S - still relevant today in terms of their frequencies in the observation system. The evaluation is highly relevant and suitable for publication after the following comments are addressed.
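To make the distinction between the fast and reference approaches concrete, the sketch below illustrates the regression idea described in this summary: per-layer optical depths are built from pre-trained coefficients and profile-dependent predictors rather than computed line by line. This is a minimal sketch only; the function names, predictor construction, array shapes and placeholder values are assumptions for illustration and are not RTTOV's actual predictor formulation or coefficient format.

```python
import numpy as np

# Minimal sketch of a predictor-based fast transmittance model
# (assumed layout, NOT RTTOV's actual predictor set or file format).

def layer_optical_depth(coeffs, predictors):
    """coeffs: (n_layers, n_pred) regression coefficients trained offline against LBL runs.
    predictors: (n_layers, n_pred) quantities derived from the input T/q/O3 profile."""
    return np.clip(np.sum(coeffs * predictors, axis=1), 0.0, None)  # tau per layer, >= 0

def channel_transmittance(coeffs, predictors):
    """Cumulative transmittance from the top of the atmosphere down through each layer."""
    tau = layer_optical_depth(coeffs, predictors)
    return np.exp(-np.cumsum(tau))

# Hypothetical example: 53 layers and 10 predictors, random placeholder values.
rng = np.random.default_rng(42)
coeffs = rng.normal(scale=1e-2, size=(53, 10))
predictors = rng.normal(size=(53, 10))
trans = channel_transmittance(coeffs, predictors)  # (53,) layer-to-space transmittances
```

The evaluation discussed in the comments below then compares brightness temperatures computed with such regressed transmittances against the LBL reference.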
Introduction
L9: ISRF. First time you mention it. Please use “Instrument Spectral Response Functions (ISRF)”
L12: Too many parentheses “ (Radiative Transfer code for TOVs (TIROS Operational Vertical sounder))”
L14: replace by “(i.e., Eyre et al., 2022)”
L16: replace by “(i.e., Chen and Bennartz, 2020)”
L17: replace combines by combine
L17: please specify that the satellite coefficients are satellite gas absorption coefficients
L19: please rephrase. Perhaps it would be more appropriate to cite Saunders et al., 2007 first as work that has analyzed the accuracy of the RTTOV approximations. Please also give an order of magnitude of the “small errors”. The paper submitted also contributes to the evaluation of such accuracy. Please also justify the need for further analysis of such accuracy.
L34: The sentence starting with “Current validation [...]” needs rephrasing. It could be actually discussed together with the Saunders et al., 2007 paper as the aims are similar.
L35: Please rephrase “and statistical plot and data”
L48: Please rephrase “The satellite instruments [...]”
Atmospheric profiles
L42: Too many parentheses here too.
L56: Replace “instruments” with “instrument”.
L57: Were these 80 profiles selected randomly? In terms of location and time of the year? What is the top of the model? The same question for the 25,000 profiles selected from the IFS.
L67: What do you mean by “Each of the 5,000 profile subsets represents maximum diversity of one of five different variables [...]”?
L83: “The small values of ozone in the training profiles could be due to a profile located at the ozone hole”. In connection with the question above about L57: are training profiles all from the same area?
Line-by-line models and RTTOV
L91: “RTTOV setup” title. Perhaps it is more appropriate to use simply “RTTOV radiative transfer simulations” ?
L92: What do you mean by profiles failing the LBL models? Perhaps it would make more sense also to introduce line by line models first?
L94: Here it gets a bit confusing regarding instruments and pressure levels. Some comments:
- L95: you have not defined which instruments are narrowband yet.
- It is not clear why you have to use different profile levels for different instruments.
- Also, there is no earlier reference in the text to different versions of RTTOV predictors.
L97: “As with the LBLRTM simulations”. These have not been discussed yet. Perhaps it makes more sense to introduce these before the RTTOV setup.
L103: Why are 54 levels sufficiently accurate for this analysis?
I have some additional questions regarding this section:
- You are including CO2 in the simulations. In the independent profiles the CO2 profile is always the same (CO2 in profile 83), but in the training profiles 1-82 CO2 varies. Is this correct? Why not exclude CO2 directly?
- What about the other minor gases that are in the training dataset: CH4, N2O, CO? You mention something in the IR LBL RT model, but not here.
L105: It would help the reader to mention in the introduction that different radiative transfer models were used for the IR and the MWs.
L128: Where do you get nitrogen and ozone from? The mean training profiles? Please specify.
Independent profile dataset versus training profile dataset
L145: “A description of each of the sensors used in this study follows”. In this section you describe the sensors. All this information should be in a separate section, rather than in the same section as the results. For example L151, L167, L198-201, etc.
L148: “Note that maximum difference can be positive or negative with respect to the order of the subtraction between datasets.” What does this mean?
L153: It would help the reader to describe the figure (this applies to the other figures too, i.e., Figure 4, etc.). i.e., “Figure 3 shows the differences in the (a) average, (b) standard deviation (STD) and (c) maximum values in the TOA Brightness temperatures (BT) simulations using the training profiles (red lines) and the independent profiles (blue lines) for all IRIS-D channels”.
L155: What are, for example, the instrument channel errors in the assimilation systems, to put these differences in that context?
L56: the CO2 band differences. I’m not sure if there are differences in the CO2 profiles (see questions above)
L158: This contradicts what it says above “The differences between the two datasets are more evident in the CO band (between 600 and 800 cm−1) and in the ozone band (near 1000 cm−1)”
L174: What is happening at the channels at 14.95 μm and 22.91 μm?
L180: I guess the SIRS-A channels are not shown? Yet a figure is described? Which one?
L270: Some of this info was already stated in the methodology.
Spatial variation of bias from the independent dataset
L285: What about the microwave? Otherwise these lines could go in the following paragraph with Figure 8. Perhaps it is illustrative to mark these channels in Figure 3. Why did you choose these channels?
Figure 8: why does IRIS-D have a spatial variation in the bias, while the similar channel onboard SIRS-B doesn't? Is it only the instrument predictors that change? What about the bias? Because Figure 8 is BT-BT.
L295: A bit more guidance would aid the reader. Perhaps name the 899 cm−1 channel before the 679 cm−1 one, since Figure 8 was about the 899 cm−1 channel. Jumping to the latitudinal distribution of the 679 cm−1 channel would be easier if information was given about the spatial distribution of this channel first. Not necessarily a figure like Figure 8, but a description (?).
L300. Please mention this is ‘not shown’
L301. Do you mean the latitudinal mean?
L306: Here you mention channels at 1051 cm−1, but these were not mentioned in line 288.
Figure 10: maybe it helps to have smaller markers?
L314: Is the correlation with respect to the total bias or the latitudinal bias? It is difficult to state that the spatial variability is possibly related to the water vapour content but then show no correlation. Do you have a figure similar to Figure 11b?
L321: It is hard to see, from the way the figure is plotted, which results correspond to the −2 K bias.
L324: Does this mean that the simulations included cloudy profiles but with no cloud particles? Did you evaluate screening those profiles from the statistics? What percentage of cloudy profiles were included?
Profiles associated with maximum bias
The authors show the profile responsible for the maximum bias in each channel. Did you see how these profiles impacted the other channels? Are they also responsible for outlier behaviour?
L342: It is hard to follow when the text uses channel numbers but the figures use channel frequencies.
Why not conduct a similar analysis with the IR frequencies explored in the previous section?
Conclusions
L371: I am not sure I understand what you mean by ‘This confirms that it is acceptable to validate the RTTOV coefficients using the same profiles used to generate the coefficients.’
L390: The potential use of predictors for bias correction procedures is very interesting. Since the predictors are at the heart of the discussion, it would be very valuable to describe them in the introduction. In this analysis, only the zenith angle predictors are analyzed, in a way, for the microwave channels. The water vapor ones are sort of discussed too. However, wouldn't IWC and CLW, etc., impact cloudy simulations? A robust discussion of the predictors would be good to have as a reference.
General comments
- Would it be relevant to conduct a similar analysis for Jacobian calculations?
- Regarding surface emissivity. Are all radiative transfer models treating the surface in the same way? Would it be relevant to run the analysis for lower emissivities in the microwave? Thinking about the oceans, for example, which are most relevant for global assimilation systems.
- Regarding the RTTOV coefficient files. Were these calculated using the same absorption models as the ones used in this study? Because if they were calculated with different models, the spectroscopy would also be something to consider in the differences, right?
Tables and Figures
- Table 1: for completeness include the channels, or otherwise change “Channels” to “Number of channels”
- Figure 1 legend please rephrase.
- Figure 2: legend and plot titles are not self explanatory.
- Figure 3: Please put y-axis labels (mean, std, max) and in the legend training vs. independent profiles.
- In general, all figures should have self-explanatory titles with the same format. For example, in Figure 10 it says ‘7 54’ but in Figure 9 ‘7pred54’. Figure 14, for example, is missing the a, b and c labels, and subplot c doesn't have a title. This applies to many figures. Please also unify the font size in all figures.
Citation: https://doi.org/10.5194/amt-2023-125-RC1
RC2: 'Comment on amt-2023-125', Anonymous Referee #2, 17 Aug 2023
Review of “Global evaluation of RTTOV coefficients for early satellite sensors”
Summary: The authors evaluated RTTOV version 7 coefficients for nine historical sensors using a dataset of 25,000 profiles along with the standard 83-profile training dataset. RTTOV output was compared to output from a line-by-line model for each sensor to perform the evaluation. Average biases as well as the spatial distribution of RTTOV biases are presented for each sensor. For the most part, the paper is well written and logically organized; I have only a few points to raise, mainly to improve the clarity of the information presented. I therefore recommend minor revisions.
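As a concrete illustration of the evaluation statistics summarised here (per-channel mean, standard deviation and maximum of the RTTOV-minus-LBL brightness temperature differences, plus a zonal-mean bias for the spatial analysis), a minimal sketch is given below. The array names, shapes and the 10-degree latitude binning are assumptions for illustration, not the authors' actual code or binning choice.

```python
import numpy as np

def channel_stats(bt_rttov, bt_lbl):
    """bt_*: (n_profiles, n_channels) simulated brightness temperatures [K].
    Returns per-channel mean, standard deviation and signed maximum difference."""
    diff = bt_rttov - bt_lbl
    idx = np.abs(diff).argmax(axis=0)                 # profile with the largest |bias| per channel
    max_signed = diff[idx, np.arange(diff.shape[1])]  # keep the sign of that maximum
    return diff.mean(axis=0), diff.std(axis=0), max_signed

def zonal_mean_bias(diff_channel, lat, edges=np.arange(-90.0, 90.1, 10.0)):
    """Average the per-profile bias of one channel in 10-degree latitude bands."""
    band = np.digitize(lat, edges) - 1
    return np.array([diff_channel[band == b].mean() if np.any(band == b) else np.nan
                     for b in range(len(edges) - 1)])
```

Applied per channel, `channel_stats` corresponds to the mean/STD/maximum comparisons discussed in the comments, and `zonal_mean_bias` to the latitudinal view of the spatial bias.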
Comments:
- Line 56 “every instruments”: instrument
- Figure 1: I’m not colorblind, so I can’t confirm if it is in fact difficult to see, but please consider that having a red line and a green line, especially right next to each other, may not be the most accessible color choice.
- Section 3.1: I would suggest switching the order of sections 3.1 and 3.2 since the profiles that failed the LBL are excluded from the RTTOV simulations, but the LBL model setup (and the failing profiles) hasn’t been discussed yet, which makes it feel a little out of order.
- Line 119 “55 profiles failed”: Were these profiles also not used for the AMSUTRAN simulations?
- Line 190 “Other two”: Replacing with ‘Two other’ would make more sense.
- Line 190: I understand this is only one short paragraph, but it’s a little strange that these sensors are in the SIRS section, rather than their own.
- Line 194 “the other two sensors”: So far you have discussed four other sensors, which two are you referring to here?
- Line 206 “satellite zenith angle”: Please include the SZA acronym definition here.
- Section 4.1.7: Is there a problem with channel 8?
- Line 276 “channels 7-11 around 183.31 GHz”: Line 265 seems to imply that channels 17-18 are the ones around 183 GHz.
- Line 276 “channels 7-11 around 183.31 GHz”: Do you mean channels 9-11?
- Line 285 “2 IRIS-D channels … The three IRIS-D channels”: Please check the description of the number of channels on this line, do you mean three IRIS-D channels and two SIRS-B? If not, why is the corresponding CO2 channel for SIRS-B not shown?
- Line 294 “Figures 9a and 9b … respectively”: This sentence is confusing and makes it sound like only the 679 cm−1 channel from IRIS-D and the 899 cm−1 channel from SIRS-B are shown, especially when paired with the description at the top of the section (see previous comment)
- Figure 9: Figure 8 and Figure 9b show essentially the same information, it seems slightly redundant to show both, especially since the discussion is also repeated (Lines 291-293 and lines 298-299)
- Figures 8-10: Is there a reason these 3 figures are not presented in the same way? Why is Figure 9 not similar to Figures 8 and 10 but for the 679 cm−1 channel? Alternatively, the three figures could be combined into one, similar to Figure 9 but with 3 panels.
- Figures 14-18: The layout of these figures makes them quite small and difficult to read when formatted by the journal. Consider rearranging these slightly, perhaps sharing the y-axis (removing the repeated pressure label and tick labels) or trying an arrangement with 2 rows (it may also help to move the legend outside the plot area as the 4th “panel”)
- Figures 15-18: I don’t think it’s necessary to include the legend on each panel, this may help with the readability issues.
Citation: https://doi.org/10.5194/amt-2023-125-RC2
RC3: 'Comment on amt-2023-125', Anonymous Referee #3, 29 Aug 2023
Review of the manuscript "Global evaluation of RTTOV coefficients for early satellite sensors"
General comments
- This manuscript presents important results to quantify the performance of the radiative transfer model RTTOV, which is used by numerical weather prediction and climate reanalysis centers such as ECMWF and JMA.
- This study considers specifically early satellite sensors. Given this focus, it would be important to verify the ability of the RTTOV model to handle the known variations between environmental conditions at the time of the early satellite sensors and present-day conditions. These variations may have affected temperature, humidity, and ozone, all of which are well discussed in the paper.
- However, further environmental variations have also affected, with significant net fractional changes, several absorbers' concentrations. One would hence expect particular attention to be given in such a study to the ability of the (RTTOV) model to handle (known) changes in CO2 (as well as CH4 and N2O), to name only these species. Carbon dioxide is probably the most important in this respect because its 15 micron absorption band is used for infrared temperature 'profiling' (although in practice all wavenumbers contribute to extracting information in a data assimilation system), and oxygen is similarly interesting if one considers that it has some non-negligible variability (see reference below). This aspect of variability in the absorber amounts remains a bit hidden, in my opinion (recognizing it is discussed adequately for humidity and ozone), and would deserve to be more exposed. Including this variability may significantly increase the magnitudes of the differences between the training and independent sets.
- There are other species that have seen substantial changes since the 1970s, e.g., ozone-depleting substances such as CFCs. For these in particular, one would expect to see an impact on the ability of RTTOV to model several channels of the IRIS instrument in 1970.
- A related point is that, similarly to checking the ability of RTTOV to reproduce past variations given assumed environmental trends, one could imagine that a similar study would be of interest to several operators, such as NWP centers, to predict at what point in the future the present rates of increase in CO2, CH4, and N2O amounts may lead to significant radiative transfer modelling errors, if their variability is not sufficiently represented in the training profiles.
Detailed comments
- L16: could one make a link here with 'machine learning'?
- L 19: "small": this adjective may be removed – or else, quantified.
- L 30: "to be released": did you mean to write "to enter production"?
- L 31: "all satellite observations": add the word "radiance"
- L 35: "a more robust": it is not just about robustness here. A necessary validation is indeed to verify that the application of RTTOV to the training profiles works as intended. The validation that is presented here is "additional", in my opinion (i.e., it does not replace the step of necessary validation with the training profiles).
- What is the IFS version used to produce the 25,000 profiles?
- How relevant or useful may it be to consider (in potential future studies) using atmospheric profiles from a completely different source, other than ECMWF, for performance assessment?
- L 97-98: What is the relevance of deriving coefficients at 400 ppmv CO2 for early sensors, when this concentration is approximately 25% larger than actual concentrations at the time of the early sensors?
- Is it indeed the case that, in this study, there are no differences in the absorber concentrations (except for water vapour and ozone) between the training profile set and the independent profile set? If so, then the assessed performance of RTTOV will necessarily be over-confident, because important sources of variability are neglected (unless proven otherwise).
- L 92: about the profiles that failed LbL calculations: how many profiles does this represent?
- In each subsection 3.2.X, it could be useful to remind the reader which absorber(s) are sensed by the instrument (when this information is not already present).
- On the importance of the ISRF: is there any obvious relationship observed between the channel bandwidth and the model performance found?
- Conclusions: one aspect that stands out in the results is that best agreement is obtained for channels when the absorber amount is (if I understood well) kept identical, between training and independent profiles, i.e., oxygen and CO2 channels in particular. Can you confirm?
- Conclusions: the sentence " This confirms that it is acceptable to validate the RTTOV coefficients using the same profiles used to generate the coefficients " is a bit too far reaching.
Editorial comments
- Introduction: maybe a sentence explaining how a 'fast' radiative transfer model is constructed, before L 16, would help. (e.g. based on LbL calculations, use of predictors…)
- L 42: profile -> profiler
- L 43: Sounde -> Sounder
- L 65-66: could you rephrase the sentence "Each of the 5,000 profile subsets…"
- L 78: use either the dot or the comma as the decimal separator (using both may be confusing – and given the use of the comma as the thousands separator throughout the paper, e.g. "25,000" for the number of profiles, I would recommend using the dot as the decimal separator).
- Figure 19 caption, typo: "are some are" -> "and some are"
- Throughout the paper, unless you refer to a 'historic' event, maybe the adjective to use is more often 'historical' than 'historic'.
References of potential interest
- Shi, P., Chen, Y., Zhang, G., et al.: Factors contributing to spatial–temporal variations of observed oxygen concentration over the Qinghai-Tibetan Plateau, Sci. Rep., 11, 17338, https://doi.org/10.1038/s41598-021-96741-6, 2021.
Citation: https://doi.org/10.5194/amt-2023-125-RC3