This work is distributed under the Creative Commons Attribution 4.0 License.
Local comparisons of tropospheric ozone: Vertical soundings at two neighbouring stations in Southern Bavaria
Thomas Trickl
Martin Adelwart
Dina Khordakova
Ludwig Ries
Christian Rolf
Michael Sprenger
Wolfgang Steinbrecht
Hannes Vogelmann
Abstract. In this study ozone profiles of the differential-absorption lidar at Garmisch-Partenkirchen are compared with those of ozone sondes of the Forschungszentrum Jülich and of the Meteorological Observatory Hohenpeißenberg (German Weather Service). The lidar measurements are quality assured by the highly accurate in-situ measurements at the nearby Wank (1780 m a.s.l.) and Zugspitze (2962 m a.s.l.) summits and at the Global Atmosphere Watch station Schneefernerhaus (2670 m a.s.l.). The lidar results agree almost perfectly with those of the monitoring stations. Side-by-side soundings with the lidar and the electrochemical concentration cell (ECC) sondes operated by a team of the Forschungszentrum Jülich show just small positive offsets (≤ 3.4 ppb), almost constant within the troposphere. We conclude that the recently published uncertainties of the lidar in its final configuration since 2012 are realistic and rather small for low to moderate ozone. Comparisons with the Hohenpeißenberg routine Brewer-Mast sonde measurements are more demanding because of the distance of 38 km between the two sites. These comparisons cover three periods: September 2000 to August 2001, 2009 and 2018. A slight negative average offset (−3.64 ± 7.5 ppb, full error) of the sondes with respect to the lidar is found. Most sonde measurements could be improved in the troposphere by recalibration with the station data. This would not only remove the average offset but also greatly reduce the variability of the individual offsets. The comparison for 2009 suggests a careful partial re-evaluation of the lidar measurements between 2007 and 2011 for altitudes above 6 km, where an occasional negative bias occurred.
Thomas Trickl et al.
Status: closed
-
RC1: 'Comment on amt-2023-54', Anonymous Referee #1, 27 Mar 2023
The paper needs only some minor rather technical corrections (see annotated manuscript).
-
AC1: 'Reply on RC1', Thomas Trickl, 04 Jun 2023
All three replies will be sent in a single PDF document.
Citation: https://doi.org/10.5194/amt-2023-54-AC1
-
RC2: 'Comment on amt-2023-54', Anonymous Referee #2, 02 Apr 2023
Review of “Local comparisons of tropospheric ozone: Vertical soundings at two neighbouring stations in Southern Bavaria” by Trickl et al.
Summary:
This manuscript compares vertical profiles of ozone observed by the tropospheric DIAL ozone lidar at IMK-IFU Garmisch Partenkirchen, by Brewer-Mast ozonesondes launched at Hohenpeissenberg about 35 km to the northwest, and by ECC ozonesondes launched at the IMK-IFU site. In addition, the manuscript uses continuous surface ozone observations at three high-elevation surface sites about 9 km to the southwest.
The manuscript describes some of the technical developments of the lidar system and uses the high-elevation surface observations as anchor points for the vertical profiles. The manuscript then proceeds to describe the comparisons between the different instruments.
Comparisons of this type are vital to provide confidence in any of the involved instruments and to be able to characterize the unavoidable biases between them. As such, this manuscript is an important contribution to the understanding of long-term data sets such as the Hohenpeissenberg soundings, but also of the IMK-IFU lidar ozone measurements.
I would recommend publication after some major revisions, which I detail below.
Major comments:
Some of the comparisons between the lidar and ozonesondes need to be re-written. In multiple figures and descriptions in the text you first apply a bias correction to the ozonesonde data and then state and show an “almost perfect” agreement with the lidar. This is misleading. There is no justification for applying this bias correction other than making the agreement look good. It would be much more interesting to properly describe the bias and show profiles of the actual differences. You show statistics of the bias (Figures 10, 14, and 20), which are essential for this paper. However, it would be good to understand the justification for assuming a constant bias, i.e. it would be better to show differences of the profiles. In addition, you should discuss over which altitude range the average difference is determined. Figures 11-13 are outright misleading in this context. I provide more suggestions in the detailed comments.
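The difference-profile statistic requested here can be made concrete with a minimal sketch. Note that the function name, the variable names and the averaging range (3–8 km) are illustrative assumptions, not taken from the manuscript; the point is that the full difference profile and the averaging range should be reported alongside any single mean offset.

```python
import numpy as np

def offset_statistics(alt_lidar, o3_lidar, alt_sonde, o3_sonde,
                      z_min=3.0, z_max=8.0):
    """Sonde-minus-lidar difference profile and its mean and spread (ppb)
    over a fixed altitude range [z_min, z_max] in km.

    The averaging range is a hypothetical choice for illustration only.
    """
    # Interpolate the sonde profile onto the lidar altitude grid
    o3_sonde_i = np.interp(alt_lidar, alt_sonde, o3_sonde)
    diff = o3_sonde_i - o3_lidar
    sel = (alt_lidar >= z_min) & (alt_lidar <= z_max)
    return diff, float(np.mean(diff[sel])), float(np.std(diff[sel]))
```

Reporting the full `diff` profile together with the range over which the mean is taken is what makes a single quoted offset interpretable, and the same fixed range should be used for every sounding.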
Throughout the manuscript it would be good if it was clear what is considered as the reference, i.e. the less biased observation. I assume it is the high-elevation surface observations first, then the lidar and the ECC ozone sondes, and last the Brewer-Mast sondes. This is nowhere explicitly described and may not be easy to do. However, it would be useful to point out which instrument is suspected of having a bias. This could be done in the discussion section.
In a few cases there appear to be selective exclusions of profiles without good instrumental justification. I give some examples below.
Detailed comments:
In the abstract it would be useful to give the distance between IMK_IFU and Hohenpeissenberg and Zugspitze.
Line 29: Here a bias and an uncertainty are provided. This should be expanded throughout the manuscript. What does the uncertainty refer to? You use the term “full error”, which is not well defined. Is it a single standard deviation? Is it the result of a larger analysis including additional factors?
Line 24: It would be good to give a relative offset in addition to the absolute value.
Line 30: Here is a rare indication that the sonde data are biased. It is not clear how they could be recalibrated with station data. Which station? IMK-IFU? Wank? Zugspitze? Schneefernerhaus? Some other measurements at Hohenpeissenberg? Recalibration based on other data may remove instrumental bias, but will not remove co-location and timing errors. If the co-location mismatch is significant, then recalibration may not reduce the variability in the comparison and only reduce a mean bias.
Line 63: “at short intervals”: What is the temporal resolution that the lidar observations can achieve? I assume you meant to write “with a temporal resolution of less than xxx minutes”; however, I don’t know what xxx would be. This should be expanded on in the instrument description.
Line 90: What are the criteria for air-mass matching? This should be described. It might be useful to show how sensitive the comparisons are to these criteria.
Lines 94-95: Do you have a reference for this statement? It would be very reasonable to expect a difference in the boundary layer ozone concentrations in particular comparing the mountaintop location of Hohenpeissenberg with the valley location at Garmisch Partenkirchen, which are roughly at the same elevation.
Line 113: The background current is usually given in uA, not hPa. Is the limit a background current, or an ozone partial pressure?
Line 114: The assumption of a constant pump temperature is obviously a weak assumption. Could that explain some of the biases described later?
Lines 115f: If I understand it correctly, a time response correction may be significant at low ozone concentrations such as in the troposphere even without steep gradients. However, I don’t know if anybody has looked into time response issues of Brewer-Mast sondes.
Lines 116ff: Could you give a range for typical correction factors for the Brewer-Mast sondes? Could their tropospheric effect contribute to the biases seen? I assume this is not done for the ECC measurements by the FZJ. This should be explicitly stated in section 2.2.
Line 137ff: The Vaisala RS41SGM has the ability to buffer data. In future campaigns, this could be used to span the altitude region of missing telemetry coverage.
Line 151: Is the data acquisition time of 41 s the minimum temporal resolution for profiling?
Line 157ff: It would be good if you could show a profile of the expected uncertainty of the DIAL system.
Line 162: What is “an internal quality control”? Can you elaborate?
Line 181f: Please move this sentence to line 151 and elaborate.
Line 182f: If this option wasn’t used, then why mention it at all?
Line 189: You refer to the surface observations as in situ measurements. Ozonesonde observations are also in situ observations, which causes some confusion. I would suggest referring to the surface observations as “high elevation surface observations” instead of “in situ observations”.
Throughout: The English guidelines by AMT ask not to italicize “in situ”.
Lines 252f: This is one example of many where you state that there is “outstanding agreement provided that a small constant offset is applied”. Either you apply an offset correction or the agreement is outstanding, but not both. Instead, I would suggest modifying all figures showing profiles such that only the uncorrected profiles are shown, and in the same figure, in a panel to the right, a profile of the difference(s) is shown. This would show honestly how the offset varies with altitude. In addition, this panel could indicate the altitude range over which the constant offset is calculated. The difference plot could also include the profile of the lidar uncertainty, which gives a reference for what bias to expect introduced by the lidar.
Lines 253f: Why are the DIAL profiles smoothed? Can you show the effect of the smoothing? Isn’t the temporal averaging sufficient to smooth out the profile?
Lines 274-280: This is another example, where the offset handling is not appropriate. It would be good to see profiles of the offsets and in addition a statistics of the variability of the offsets across all profiles. I believe this is done in Figure 10, but since it is mentioned here, this would need to be moved up.
Line 302: Another example of the offset handling. If you have to add 5.8 ppbv, the profiles do not match.
Lines 316ff: Throughout the manuscript, I am missing a discussion of the wind direction. If the sonde launched at MOHP flies straight South, the matching should get better. If it flies straight East, then matching may be more challenging.
Lines 333ff: To what altitude does this statement refer? The sonde RH is very low throughout the troposphere, i.e. descending motion is likely. Clearly identifying stratospheric intrusions is a little trickier under these conditions.
Line 339: How could you support that the differences are “mostly not related to differences in air-composition”? This is not obvious.
Lines 348ff: Before discarding this particular profile, it would be essential to show the horizontal distribution of the ozone based on satellite or modeling output. This figure should include the track of the ozone sonde. All profiles will have co-location challenges, and this case may just be a slightly more challenging one.
Line 353: Is a bias correction due to cross section corrections by 1.8% statistically significant?
Lines 356ff: I find the apparent seasonal cycle not convincing. Given the length of the total observational record, there should be many more data that allow comparing the lidar measurements with the high-elevation surface observations. If it can be shown there, then the argument would be much stronger. As is, the number of data points that indicate an apparent seasonal cycle is quite small. In addition, there may have been some selection of profiles that are causing this seasonal cycle.
Line 375ff: This is another example of selective matching. The profiles could be excluded because the co-location is too bad, not because the profiles don’t match. That co-location threshold should then be applied to all profiles.
Line 378ff: Is there any reason to believe that instrumental issues in either instrument should cause such an oscillation in the bias between the instruments? There is obviously a strong reason to believe that spatiotemporal matching causes oscillations of that sort.
Line 381: Figure 13 does not support that statement. This is acknowledged in line 382, i.e. this statement appears to be unsupported and incorrect. Please remove or justify.
Lines 386ff: This is another egregious misstatement. The 2018 analysis does reveal a significant bias, which has been removed!
Lines 403ff: Not just temporal proximity is important, the spatial proximity needs to be considered as well.
Lines 414ff: I believe this is the first description how the offset correction has been derived. This description needs to come before any biases are discussed. Ideally, the same metric is used for all profiles, i.e. the bias should be calculated over the same altitude range for all soundings. If that has been done, please say so.
Lines 426ff: The RH is still quite low and does not support any particular scenario. There is a significant temporal mismatch between the ozone sounding and the lidar observations. The discussion is somewhat vague but may indicate that there is a significant change in the transport history during the time between the two observations. Rather than saying that this is difficult to explain, you could point to the trajectories in Figure 17 to argue that there was a strong change in air masses. However, this depends on how the trajectories were set up, which is not explained.
Lines 435ff: Please show that in a figure.
Line 445: Why was single photon counting abandoned, if it was the superior technique?
Line 448: Again: temporal and spatial proximity
Lines 462ff: This is the first time that quantitative data are provided for a time period under investigation. Quantitative results like this should be given for all time periods. Furthermore, I would suggest to use the IFU DIAL as reference in all calculations, which makes it easier to compare the numbers and the sign of these numbers.
Line 468: What is meant by “derivative formation”?
Line 484: How do you justify the potential bias of the lidar?
Lines 488f: How do you justify the quality of the ozone data, if there is a variable bias to the lidar? I don’t see a problem that there is a bias between the two. The goal of this paper should be to quantify it and possibly to speculate on its cause.
Line 490: Is +/- 2 ppbv the limit of the bias (this is how this sentence reads), or rather its statistical uncertainty? I suspect it is the latter, and the manuscript should justify this uncertainty estimate.
Line 497: This bias estimate should have been justified in the earlier sections. The only quantitative values shown, in line 464, are different. Furthermore, the way this statement reads is that there is a high likelihood that there is no bias between the systems, since the uncertainty is larger than the actual value. I don’t think this is intended. A more detailed description of the bias is needed.
Figures: The manuscript uses a large number of figures, providing a bit too much detail, while leaving out some important points. I would suggest combining Figures 4, 5, 8, and 9 into one figure with 4 panels similar to Figure 2. Likewise, Figures 15, 16, 18, and 20 could be combined.
Each of these panels should show only the uncorrected sonde and lidar profiles and, separated but sharing the same vertical axis, the true difference profile on the right of each panel. If there are reasonable statistics, maybe all difference profiles can be summarized in a new figure showing one single average difference. This may need to be done for each period, or possibly it can be done for the entire multi-decade period.
All Figures showing RH should have the axis label “Ozone [ppbv] / RH [%]”. The RH should not be scaled in any of the Figures. Figures 5, 8, 18 and 19 use a scaling factor of 0.5; Figures 1 and 16 do not.
I do not believe that showing the Raman lidar water vapor mixing ratio in Figure 2 adds much value. The sonde relative humidity might be more useful.
Figure 2: I assume that the ECC ozone sonde profiles are not corrected. The absence of a significant bias could be pointed out in the text.
Figures 10, 14, and 20: It would be better to show “UFS-IFU” and, respectively, the other surface stations minus IFU. This would bring the MOHP-IFU in better context. The average lines are slightly misleading, in particular the smoothed “seasonal” bias in Figure 10. I would suggest removing these average lines from the Figures and discussing the averages quantitatively in the text.
Figures 11-13: These are misleading, since the bias has already been removed. It is important to see the scatter of the bias profiles to bring the vertical variability into the proper context. I suspect that spatiotemporal matching may be responsible for much of the vertical structure, i.e. it is of random nature and not related to either instrument. If you feel that there is an instrumental component to the vertical structure of the bias, then you need to show and justify it.
Technical comments:
Line 23: “Side-by-side soundings”
Line 28: Delete “These”
Line 62: Do you mean “sampling bias”, when you mention “fair weather bias”?
Line 92: extra period
Lines 106f: Move coordinates directly following “MOHP (975 m a.s.l., 47.80 N, 11.00 E)”.
Line 192: Missing comma: “Since the 1990s,”
Line 192: “TEI-49 ozone analyzers”
Line 194: “TEI-49iPS”
Lines 202f: Please rephrase. A TEI49C-PS instrument cannot be applied to something. In addition, it is not clear, whether this primary standard belongs to UFS and is used to recalibrate the TEI49i weekly and monthly, or whether this instrument belongs to the UBA and is used to recalibrate the UFS ozone monitor annually.
Line 206ff: Better write: “The measurements were supported by a second instrument (Horiba APOA-370), which is equivalent to the TEI-49. GAW audits at the station for surface ozone took place in 2001, 2006 and 2011.”
Lines 229f: Better write: “This indicates a spatial inhomogeneity of the air mass.”
Line 231: Better write: “Different air masses must be assumed at different altitudes …”
Line 235: Missing comma: “… were terminated, …”
Line 245: Delete “The first night of the campaign was clearer.”
Lines 259ff: Better write: “In this altitude range the MR do not agree quantitatively with those obtained with the CFH sondes, because of the 1-h data-acquisition time necessary for the stratosphere.”
Line 261f: I would suggest to either delete this statement or elaborate. A discussion of the air-mass matching and a listing of how well the profiles match in time and space would be useful anyway.
Line 295: probably need to change to “… spatiotemporal proximity …”
Line 298: Better write: “… This is explained by less structure in the ozone profile and …”
Line 361: Comma missing after “2018,”
Lines 466f: Better write: “Tropospheric differential-absorption ozone lidar systems are highly sensitive …”
Lines 471f: Better write: “Based on continual improvements, starting with the 1994 system upgrading, the IFU ozone DIAL reached peak performance by 2012, but potential for minor improvements remains. Comparison …”
Line 476: Better write: “Here, we analyse the lidar performance in three periods during its technical development.”
Line 479: Delete “perfect”
Line 483: Better write: “Between 2007 and 2011, we suspect a slight negative …”
Line 486: Delete “just”
Line 488: replace “identify” with “validate”
Citation: https://doi.org/10.5194/amt-2023-54-RC2
-
AC2: 'Reply on RC2', Thomas Trickl, 04 Jun 2023
The reply to all three reviews will be given in a single PDF document
Citation: https://doi.org/10.5194/amt-2023-54-AC2
-
RC3: 'Comment on amt-2023-54', Anonymous Referee #3, 12 Apr 2023
The paper of Trickl et al. is of interest to a broad community of researchers interested in natural variability in free tropospheric ozone as well as the development of reliable tropospheric ozone instrumentation. The analysis is very good. The only concern at initial review is that the manuscript does not conform to FAIR principles (findable, accessible, interoperable, reusable). Thus, publication is recommended when all data are openly archived and meet the FAIR criteria.
Citation: https://doi.org/10.5194/amt-2023-54-RC3
-
AC3: 'Reply on RC3', Thomas Trickl, 04 Jun 2023
The reply to all three reviews will be given in a single PDF document.
Citation: https://doi.org/10.5194/amt-2023-54-AC3
-
AC3: 'Reply on RC3', Thomas Trickl, 04 Jun 2023
Status: closed
-
RC1: 'Comment on amt-2023-54', Anonymous Referee #1, 27 Mar 2023
The paper needs only some minor rather technical corrections (see annotated manuscript).
-
AC1: 'Reply on RC1', Thomas Trickl, 04 Jun 2023
All three replies will be sent in a single PDF document.
Citation: https://doi.org/10.5194/amt-2023-54-AC1
-
AC1: 'Reply on RC1', Thomas Trickl, 04 Jun 2023
-
RC2: 'Comment on amt-2023-54', Anonymous Referee #2, 02 Apr 2023
Review of “Local comparisons of tropospheric ozone: Vertical soundings at two neighbouring stations in Southern Bavaria” by Trickl, et al.
Summary:
This manuscript compares vertical profiles of ozone observed by the tropospheric DIAL ozone lidar at IMK-IFU Garmisch Partenkirchen, by Brewer-Mast ozonesondes launched at Hohenpeissenberg about 35 km to the northwest, and by ECC ozonesondes launched at the IMK-IFU site. In addition, the manuscript uses continuous surface ozone observations at three high-elevation surface sites about 9 km to the southwest.
The manuscript describes some of the technical developments of the lidar system and uses the high-elevation surface observations as anchor points for the vertical profiles. The manuscript then proceeds to describe the comparisons between the different instruments.
Comparisons of this type are vital to provide confidence in any of the involved instruments and to be able to characterize the unavoidable biases between them. As such, this manuscript is an important contribution for the understanding of the long term data sets such as the Hohenpeissenberg soundings, but also the IMK-IFU lidar ozone measurements.
I would recommend publication after some major revisions, which I detail below.
Major comments:
Some of the comparisons between the lidar and ozonesondes need to be re-written. In multiple figures and descriptions in the text you first apply a bias correction to the ozonesonde data and then state and show an “almost perfect” agreement with the lidar. This is misleading. There is no justification for applying this bias correction other than making the agreement look good. It would be much more interesting to properly describe the bias and show profiles of the actual differences. You show a statistics of the bias (Figures 10, 14, and 20), which is essential for this paper. However, it would be good to understand the justification for a constant bias, i.e. it would be better to show differences of the profiles. In addition, you should discuss over which altitude range the average difference is determined. Figures 11-13 are outright misleading in this context. I provide more suggestions in the detailed comments.
Throughout the manuscript it would be good if it was clear what is considered as the reference, i.e. the less biased observation. I assume it is the high-elevation surface observations first, then the lidar and the ECC ozone sondes, and last the Brewer-Mast sondes. This is nowhere explicitly described and may not be easy to do. However, it would be useful to point out which instrument is suspected of having a bias. This could be done in the discussion section.
In a few cases there appear to be selective exclusions of profiles without good instrumental justification. I give some examples below.
Detailed comments:
In the abstract it would be useful to give the distance between IMK_IFU and Hohenpeissenberg and Zugspitze.
Line 29: Here a bias and an uncertainty is provided. This should be expanded throughout the manuscript. What does the uncertainty refer to? You use the term “full error”, which is not well defined. Is it a single standard deviation? Is it the result of a larger analysis including additional factors?
Line 24: It would be good to give a relative offset in addition to the absolute value.
Line 30: Here is a rare indication that the sonde data are biased. It is not clear how they could be recalibrated with station data. Which station? IMK-IFU? Wank? Zugspitze? Schneefernerhaus? Some other measurements at Hohenpeissenberg? Recalibration based on other data may remove instrumental bias, but will not remove co-location and timing errors. If co-location mismatch is the significant, then recalibration may not reduce the variability in the comparison and only reduce a mean bias.
Line 63: “at short intervals”: What is the temporal resolution that the lidar observations can achieve? I assume you meant to write “with a temporal resolution of less then xxx minutes”; however, I don’t know what xxx would be. This should be expanded on in the instrument description.
Line 90: What are the criteria for air-mass matching? This should be described. It might be useful to show how sensitive the comparisons are to these criteria.
Lines 94-95: Do you have a reference for this statement? It would be very reasonable to expect a difference in the boundary layer ozone concentrations in particular comparing the mountaintop location of Hohenpeissenberg with the valley location at Garmisch Partenkirchen, which are roughly at the same elevation.
Line 113: The background current is usually given in uA, not hPa. Is the limit a background current, or an ozone partial pressure?
Line 114: The assumption of a constant pump temperature is obviously a weak assumption. Could that explain some of the biases described later?
Lines 115f: If I understand it correctly, a time response correction may be significant at low ozone concentrations such as in the troposphere even without steep gradients. However, I don’t know if anybody has looked into time response issues of Brewer-Mast sondes.
Lines 116ff: Could you give a range for typical correction factors for the Brewer-Mast sondes. Could their tropospheric effect contribute to the biases seen? I assume this is not done for the ECC measurements by the FZJ. This should be explicitly stated in section 2.2.
Line 137ff: The Vaisala RS41SGM has the ability to buffer data. In future campaigns, this could be used to span the altitude region of missing telemetry coverage.
Line 151: Is the data acquisition time of 41 s the minimum temporal resolution for profiling?
Line 157ff: It would be good if you could show a profile of the expected uncertainty of the DIAL system.
Line 162: What is “an internal quality control”? Can you elaborate?
Line 181f: Please move this sentence to line 151 and elaborate.
Line 182f: If this option wasn’t used, then why mention it at all?
Line 189: You refer to the surface observations as in situ measurements. Ozonesonde observations are also in situ observations, which causes some confusion. I would suggest referring to the surface observations as “high elevation surface observations” instead of “in situ observations”.
Throughout: The English guidelines by AMT ask not to italicize “in situ”.
Lines 252f: This is one example of many where you state that there is “outstanding agreement provided that a small constant offset is applied”. Either you apply an offset correction or the agreement is outstanding, but not both. Instead, I would suggest to modify all figures showing profiles such that only the uncorrected profiles are shown and in the same figure a panel to the right a profile of the difference(s) is shown. This would show honestly how the offset varies with altitude. In addition, this panel could indicate the altitude range over which the constant offset is calculated. The difference plot could also include the profile of the lidar uncertainty, which gives a reference for what bias to expect introduced by the lidar.
Lines 253f: Why are the DIAL profiles smoothed? Can you show the effect of the smoothing? Isn’t the temporal averaging sufficient to smooth out the profile?
Lines 274-280: This is another example, where the offset handling is not appropriate. It would be good to see profiles of the offsets and in addition a statistics of the variability of the offsets across all profiles. I believe this is done in Figure 10, but since it is mentioned here, this would need to be moved up.
Line 302: Another example of the offset handling. If you have to add 5.8 ppbv, the profiles do not match.
Lines 316ff: Throughout the manuscript, I am missing a discussion of the wind direction. If the sonde launched at MOHP flies straight South, the matching should get better. If it flies straight East, then matching may be more challenging.
Lines 333ff: To what altitude does this statement refer? The sonde RH is very low throughout the troposphere, i.e. descending motion is likely. Clearly identifying stratospheric intrusions is a little trickier under these conditions.
Line 339: How could you support that the differences are “mostly not related to differences in air-composition”? This is not obvious.
Lines 348ff: Before discarding this particular profile, it would be essential to show the horizontal distribution of the ozone distribution based on satellite or modeling output. This Figure should include the track of the ozone sonde. All profiles will have co-location challenges and this case may just be a slightly more challenging.
Line 353: Is a bias correction due to cross section corrections by 1.8% statistically significant?
Lines 356ff: I find the apparent seasonal cycle not convincing. Given the length of the total observational record, there should be many more data that allow comparing the lidar measurements with the high-elevation surface observations. If it can be shown there, then the argument would be much stronger. As is, the number of data points that indicate an apparent seasonal cycle is quite small. In addition, there may have been some selection of profiles that are causing this seasonal cycle.
Line 375ff: This is another example of selective matching. The profiles could be excluded because the co-location is too bad, not because the profiles don’t match. That co-location threshold should then be applied to all profiles.
Line 378ff: Is there any reason to believe that instrumental issues in either instrument should cause such an oscillation in the bias between the instruments? There is obviously a strong reason to believe that spatiotemporal matching causes oscillations of that sort.
Line 381: Figure 13 does not support that statement. This is acknowledged in line 382, i.e. this statement appears to be unsupported and incorrect. Please remove or justify.
Lines 386ff: This is another egregious misstatement. The 2018 analysis does reveal a significant bias, which has been removed!
Lines 403ff: Not just temporal proximity is important, the spatial proximity needs to be considered as well.
Lines 414ff: I believe this is the first description how the offset correction has been derived. This description needs to come before any biases are discussed. Ideally, the same metric is used for all profiles, i.e. the bias should be calculated over the same altitude range for all soundings. If that has been done, please say so.
Lines 426ff: The RH is still quite low and does not support any particular scenario. There is a significant temporal mismatch between the ozone sounding and the lidar observations. The discussion is somewhat vague but may indicate that there is a significant change in the transport history during the time between the two observations. Rather than saying that this is difficult to explain, you could point to the trajectories in Figure 17 to argue that there was a strong change in air masses. However, this depends on how the trajectories were set up, which is not explained.
Lines 435ff: Please show that in a figure.
Line 445: Why was single-photon counting abandoned, if it was the superior technique?
Line 448: Again: temporal and spatial proximity
Lines 462ff: This is the first time that quantitative data are provided for a time period under investigation. Quantitative results like this should be given for all time periods. Furthermore, I would suggest using the IFU DIAL as the reference in all calculations, which makes it easier to compare the numbers and the signs of these numbers.
Line 468: What is meant by “derivative formation”?
Line 484: How do you justify the potential bias of the lidar?
Lines 488f: How do you justify the quality of the ozone data, if there is a variable bias to the lidar? I don’t see a problem that there is a bias between the two. The goal of this paper should be to quantify it and possibly to speculate on its cause.
Line 490: Is +/- 2 ppbv the limit of the bias (this is how this sentence reads), or rather its statistical uncertainty? I suspect it is the latter, and the manuscript should justify this uncertainty estimate.
Line 497: This bias estimate should have been justified in the earlier sections. The only quantitative values, shown in line 464, give different numbers. Furthermore, as written, this statement reads as if there were a high likelihood of no bias between the systems, since the uncertainty is larger than the actual value. I don’t think this is intended. A more detailed description of the bias is needed.
Figures: The manuscript uses a large number of figures, providing a bit too much detail, while leaving out some important points. I would suggest combining Figures 4, 5, 8, and 9 into one figure with 4 panels similar to Figure 2. Likewise, Figures 15, 16, 18, and 20 could be combined.
Each of these panels should show only the uncorrected sonde and lidar profiles; separately, but sharing the same vertical axis, the true difference profile should appear on the right of each panel. If the statistics are reasonable, maybe all difference profiles can be summarized in a new figure showing one single average difference. This may need to be done for each period, or possibly for the entire multi-decade period.
All Figures showing RH should have the axis label “Ozone [ppbv] / RH [%]”. The RH should not be scaled in any of the Figures. Figures 5, 8, 18 and 19 use a scaling factor of 0.5; Figures 1 and 16 do not.
I do not believe that showing the Raman lidar water vapor mixing ratio in Figure 2 adds much value. The sonde relative humidity might be more useful.
Figure 2: I assume that the ECC ozone sonde profiles are not corrected. The absence of a significant bias could be pointed out in the text.
Figures 10, 14, and 20: It would be better to show “UFS-IFU” and, respectively, the other surface stations minus IFU. This would bring the MOHP-IFU in better context. The average lines are slightly misleading, in particular the smoothed “seasonal” bias in Figure 10. I would suggest removing these average lines from the Figures and discussing the averages quantitatively in the text.
Figures 11-13: These are misleading, since the bias has already been removed. It is important to see the scatter of the bias profiles to bring the vertical variability into the proper context. I suspect that spatiotemporal matching may be responsible for much of the vertical structure, i.e. it is of random nature and not related to either instrument. If you feel that there is an instrumental component to the vertical structure of the bias, then you need to show and justify it.
Technical comments:
Line 23: “Side-by-side soundings”
Line 28: Delete “These”
Line 62: Do you mean “sampling bias”, when you mention “fair weather bias”?
Line 92: extra period
Lines 106f: Move coordinates directly following “MOHP (975 m a.s.l., 47.80 N, 11.00 E)”.
Line 192: Missing comma: “Since the 1990s,”
Line 192: “TEI-49 ozone analyzers”
Line 194: “TEI-49iPS”
Lines 202f: Please rephrase. A TEI49C-PS instrument cannot be applied to something. In addition, it is not clear whether this primary standard belongs to the UFS and is used to recalibrate the TEI49i weekly and monthly, or whether this instrument belongs to the UBA and is used to recalibrate the UFS ozone monitor annually.
Line 206ff: Better write: “The measurements were supported by a second instrument (Horiba APOA-370), which is equivalent to the TEI-49. GAW audits at the station for surface ozone took place in 2001, 2006 and 2011.”
Lines 229f: Better write: “This indicates a spatial inhomogeneity of the air mass.”
Line 231: Better write: “Different air masses must be assumed at different altitudes …”
Line 235: Missing comma: “… were terminated, …”
Line 245: Delete “The first night of the campaign was clearer.”
Lines 259ff: Better write: “In this altitude range the MR do not agree quantitatively with those obtained with the CFH sondes, because of the 1-h data-acquisition time necessary for the stratosphere.”
Line 261f: I would suggest either deleting this statement or elaborating. A discussion of the air-mass matching and a listing of how well the profiles match in time and space would be useful anyway.
Line 295: probably need to change to “… spatiotemporal proximity …”
Line 298: Better write: “… This is explained by less structure in the ozone profile and …”
Line 361: Comma missing after “2018,”
Lines 466f: Better write: “Tropospheric differential-absorption ozone lidar systems are highly sensitive …”
Lines 471f: Better write: “Based on continual improvements, starting with the 1994 system upgrading, the IFU ozone DIAL reached peak performance by 2012, but potential for minor improvements remains. Comparison …”
Line 476: Better write: “Here, we analyse the lidar performance in three periods during its technical development.”
Line 479: Delete “perfect”
Line 483: Better write: “Between 2007 and 2011, we suspect a slight negative …”
Line 486: Delete “just”
Line 488: replace “identify” with “validate”
Citation: https://doi.org/10.5194/amt-2023-54-RC2
AC2: 'Reply on RC2', Thomas Trickl, 04 Jun 2023
The reply to all three reviews will be given in a single PDF document.
Citation: https://doi.org/10.5194/amt-2023-54-AC2
RC3: 'Comment on amt-2023-54', Anonymous Referee #3, 12 Apr 2023
The paper of Trickl et al. is of interest to a broad community of researchers interested in natural variability in free tropospheric ozone as well as the development of reliable tropospheric ozone instrumentation. The analysis is very good. The only concern at initial review is that the manuscript does not conform to FAIR principles (findable, accessible, interoperable, reusable). Thus, publication is recommended when all data are openly archived and meet the FAIR criteria.
Citation: https://doi.org/10.5194/amt-2023-54-RC3
AC3: 'Reply on RC3', Thomas Trickl, 04 Jun 2023
The reply to all three reviews will be given in a single PDF document.
Citation: https://doi.org/10.5194/amt-2023-54-AC3