I would like to thank the authors for considering and addressing all my comments and for the large effort they have undertaken to improve the manuscript. Many of my suggestions have been adopted, which I acknowledge. Very valuable are, for example, the equilibrium index and the absorption correction.
I noted, however, that my major recommendations on the content of the manuscript have not been taken, e.g., to focus the manuscript only on the description of the dataset and a proper validation of the whole dataset, and not to present partial studies on a limited comparison and on the O3 behaviour but to leave them for further studies.
I made those comments from the perspective of an eventual community user and with the paper's potential for being useful and citable in mind. With that option the message would be "here is a new O3 middle-atmosphere dataset". As it is written, however, the message is just that a dataset will be available in the future.
As for the comparison with other datasets and the discussion of the O3 behaviour, they are partial and incomplete studies (e.g., only some months, or the latitudinal dependence shown only for one month) and therefore the conclusions reached might not be fully valid. That is, as a user, I would not fully trust them. As an example, the authors show, as a typical example of the good agreement of IRI with OS and SMR, one orbit and one profile, out of the many millions(?) available. Again, as a user, I would like to see a statistical study. That single profile in the bottom panel of Fig. 11 would tell me little.
It seems the authors would like to publish two papers on this dataset: one, this, with the message that a dataset will be available, and another where the proper dataset will be presented with its validation. I think that is also a fair view, although I do not share it. The same applies to the comparison and the O3 behaviour: I would leave them out, but if the authors prefer to include them, I would not object. Therefore, I am ready to accept the publication of this work in ATM with its current major contents.
I am giving below a couple of comments (the most important ones) on the revised manuscript and some minor suggestions (mainly editing).
1) Error budget. Overall, I am satisfied with the analysis. However, there seems to be a mix-up between random errors (also usually called instrumental, noise or precision errors) and systematic errors. This is something I understood only near the end of the manuscript, but it needs to be clarified earlier. For example, in the legend of Fig. 1 we read "total error". This clearly does not include the systematic error discussed later on (Table 1). Is this the total "calibration" error? The instrument (noise) error? Or a contribution to the total systematic error?
See also comment below on:
- the comment on l. 245.
- Legend of Fig. 4
- in l. 397 it is clearly written: "... has a "precision" of around 5-20\% based on the retrieval noise estimate". This should be stated from the beginning.
2) Lowermost altitude coverage. In some places it is written that the data start at 40 km, but in others, e.g. lines 392-393 (and mainly in the figures), the lowermost height is 50 km. I would recommend stating 50 km as the lower altitude, particularly in Table 2. The reason is that, according to the figure shown in the response (absorption factor tables), there is already considerable absorption at 40-45 km, between 40 and 60\%. What surprises me is that even with such absorption the AKs still show a high sensitivity, MR near 0.8.
Other comments:
Abstract. "completely new" -> "new"?
Abstract, l. 9. I would use "feasibility" instead of "performance". The latter implies a quantification (e.g. the performance of an instrument), which has been done only partially.
Abstract, l. 13. Sentence: "We find that IRI appears to have a positive bias of up to 25\% below 75 km, and up to 50\% in some regions above." This is an example of my general comment above (to justify my view). This sentence is a typical example of characterisation of an available database. It may well be that you try to understand and correct this bias in the future so the actual database will not have such a bias. In that case this sentence is meaningless. Or would you process the whole dataset with that known bias?
l. 13 " ... data set about the overall atmospheric distribution of ozone." Do you mean the whole atmosphere? Between 50 and 100 km?
l. 42, equilibrium -> equilibrium,
l. 66, change "novel technique" to "novel treatment" or "novel approach", as written later on. "Technique" embraces the whole inversion.
l. 83 "venerable". This is a question of taste, very personal. I would avoid religious terms in scientific papers. You can use other adjectives such as "very productive", "useful", "fruitful", etc. Also, later on you sometimes use "believe", which I would change to "think".
p. 7, legend of Fig. 1 (already mentioned above): "total error". This clearly does not include the systematic error discussed later on (Table 1). Is this the total "calibration" error? The total instrument (noise) error? Or a contribution to the total systematic error? It should be clarified.
l. 186-7. "... and it changes with the atmospheric temperature at the tangent point.". In this case Phi should be inside the integral in Eq. 5.
l. 190. Suggested wording: "... the atmospheric layer at the line-of-sight is optically thick."
l. 194. "... is used for temperature and pressure." and I believe also for "O2 vmr".
P. 8, legend of Fig. 2. I suggest changing "error size" to just "error", "error values", or "error ratio or percentage". One possibility is to multiply it in this figure by 100 and give it in \%.
p. 10, Fig. 3 (already mentioned above). If the absorption is so strong at tangent paths below 50 km (see figure in the reply to the referees), how can the sensitivity be so large at 40 km?
Minor: the label box partially overlaps/hides the data.
l. 245. Here it also needs to be clarified which error this is. I believe it is the random (instrumental or noise) error. Note that these values contrast with the upper limits listed in Table 1, so they need to be clarified.
l. 249. zic-zac -> zig-zag? or zigzag?
Fig. 4. Again, clarify which error this is.
Fig. 7 (p. 16), legend. I would change "naive" to "simple" or similar; "naive" applies to persons or ideas, not to quantifications. Change it also in the text.
ll. 392-393. "... the lowest 10 km grids in the retrieval are filtered out to avoid biases due to the possible edge effect." So, in the end, the lowermost retrieved altitude is 50 km? (Change it in Table 2.)
l. 397. "... has a precision of around 5-20\% based on the retrieval noise estimate" OK. Now it is clear. This should be stated much earlier in the manuscript.
l. 403. OK. Now it is clear the difference between the precision (random error or noise) and the systematic error sources. Make this clear from the beginning, including the abstract.
p. 19, Table 1. What is the precision or bias of the pointing?
Change "estimated error sizes" to "estimated errors".
Footnote a): "... sensitivity of each of the parameter" -> "... sensitivity of each parameter".
p. 20, Table 2.
"Retrieval uncertainty" -> random error, precision, noise error (use one of these terms).
40-100 km -> 50-100 km
ll. 413-414. I do not understand this sentence. The retrieved quantity is always number density. However, if you measure pressure and temperature simultaneously, you can translate it, without loss of accuracy, to VMR, which is a better unit for understanding the chemical and physical processes behind and does not show the large variation of several orders of magnitude with altitude.
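For the authors' convenience, the conversion I have in mind is the standard one via the ideal gas law; a minimal sketch (the function name and units are my own, not from the manuscript) assuming co-located pressure and temperature are available:

```python
# Sketch: converting retrieved O3 number density to VMR, assuming
# co-located pressure (Pa) and temperature (K) from, e.g., a p-T product.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def number_density_to_vmr(n_o3_m3: float, p_pa: float, t_k: float) -> float:
    """O3 VMR = n_O3 / n_air, with n_air = p / (k_B * T) (ideal gas law)."""
    n_air = p_pa / (K_B * t_k)  # total air number density, m^-3
    return n_o3_m3 / n_air
```

The same relation applied in reverse would let a user recover number density from VMR without loss of accuracy, which is why reporting VMR costs nothing if p and T are provided.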
l. 450 KIT-IMK -> KIT-IMK and IAA-CSIC
l. 454. I suggest to add this reference for the MIPAS p-T data used:
Garcia-Comas, M., Funke, B., Gardini, A., López-Puertas, M., Jurado-Navarro, A., von Clarmann, T., Stiller, G. P., Kiefer, M., Boone, C. D., Leblanc, T., Marshall, B. T., Schwartz, M. J. and Sheese, P. E.: MIPAS temperature from the stratosphere to the lower thermosphere: Comparison of vM21 with ACE-FTS, MLS, OSIRIS, SABER, SOFIE and lidar measurements, Atmos. Meas. Tech., 7(11), 3633–3651, 2014.
Fig. 11. Legend, line 2. "(above 40 km)". It seems it is above 50 km (consistent with the whole manuscript).
l. 463. "IRI ozone covers the altitude range from 50 to 100km as ..." This seems to be correct and more likely. Harmonise the rest of the manuscript with this value.
l. 468. "... the background density included in the SMR product". Is the density measured by SMR? Clarify and mention the ultimate source used.
ll. 471-472. As mentioned in the general comment above, this coincidence (lower panel) is fine. However, if one would like to have a solid consistency between the three datasets it should not rely on one single profile from ~millions. That is, as a user I would not trust it unless it is done on a statistically meaningful sample.
l. 474. believe > think
l. 481. hemisphereS
l. 493. "We are going to look at the ..." -> We discuss the ...
l. 495. I think the study would be more solid if based on several months (not just one), but then this is probably "beyond the scope of the paper". This is the reason I argued in my general comment for focusing the paper on one aspect and leaving the others for other studies.
Fig. 12. Legend. Monthly mean DAYTIME ozone number density ...
Fig. 13, legend, line 2. Again, mention 50 km as the uppermost altitude. OK, consistent with everything else. How was the merging between the two datasets at 50-60 km performed?
l. 521. "... where the differences are bigger, up to -70\%." This is correct, but maybe you should warn that the O3 vmr at 80 km is very small, which inflates the relative difference. What is the absolute difference in VMR?
l. 527. ... maybe be THE main reasonS ...
Fig. 14, left upper panel: the legend box hides some of the data.
Legend, line 3. Specify which type of error.
ll. 554-555 "...according to how much the measurement conditions diverge from ..." -> ... according to the divergence from ...
l. 563. "The comparison also demonstrates the advantage of the high sampling rate of IRI." Probably the major advantage would be to improve the precision of the mean values when averaging large samples?
l. 567. believe -> think |