Second review of ‘The Airborne Romanian Measurements of Aerosols and Trace gases (AROMAT) campaigns’. One very positive change I noticed is that the flow improved a lot through the removal of sections 3.1.1 and 3.1.2 from the original submission. I still have some concerns, however, that need to be addressed before publication, and they are mostly concerns that were brought up in my original review.
This title still does not reflect the significance of this paper. I strongly suggest the authors expand the title to reflect the attempts at assessing validation and campaign strategies for current/future satellite missions. It is possible to have an all-encompassing title that advertises AROMAT while keeping the focus on the take-home messages of the paper. Possible suggestion:
‘Conceptual satellite validation strategy assessments during the AROMAT campaigns’
I only stress this again because it will help the paper reach a broader audience. This paper’s purpose is not merely to advertise a campaign.
Another issue I have is the strength of the conclusions drawn from the temporal variation quantification of 4e15. It is a novel way to do this calculation, but conclusions drawn from one morning of data over 8 ‘satellite’ pixels should be treated with caution. This one number could be very atypical, but we do not know, since it is just one example. This caveat is somewhat stated in the manuscript but is not carried into the abstract/conclusions.
One dependency I did not see mentioned, beyond meteorology, emissions, and time of day, is that the temporal variation also depends on the time window assessed, which can change (e.g., this example uses roughly an hour-long window, but that window could be smaller in another experiment, which would also be under different meteorology, emissions, etc.). My recommendation is therefore to draw back the focus on that one number of 4e15 in the conclusions.
Edits needed at the very least: if you state the number of 4e15 in the abstract and conclusions, you should also state there that it was quantified from one sample under one set of conditions in a morning timeframe, and that it can vary with time of day and meteorology (I know this is already stated in the body of the manuscript). Also, one example is not enough information to say this is typical for AROMAT, so this added information should replace the current wording of that phrase. An important conclusion this study brings to light is that the satellite air quality community should further investigate the impact of temporal variation on validation results. There are datasets out there where this can be explored more broadly than with this one example, so give readers that pointer to explore further.
Many times in this manuscript it is said that measurements are time and space coincident, but it is never said what the time/space constraints are. Convince me (the audience) they are coincident by listing these constraints. Other times the manuscript says measurements are not time coincident yet still compares them, attributing differences to the time offset, but the reader has no idea how large that offset is. This is really critical for interpreting the data. Here are a few examples (I am not sure this list is exhaustive):
• Lines 247-251: If there are time-coincident measurements in the morning, why aren’t those shown to compare the datasets? Otherwise, the data that are not temporally coincident should not be shown at all. At the very least, the time difference should be quantified.
• Lines 293-295 and Lines 323-327: The aircraft and mobile measurements are said to be simultaneous, but I do not see data or text convincing me of that, as the graph is plotted only by longitude. Plume structures can change very quickly in time, so even a seemingly small time difference (as little as half an hour or so) could explain most of the mismatch. Unless the measurements are coincident to within just a few minutes, I see reason to question temporal effects.
• Overall statement: All flight and ground-based datasets shown in the manuscript should have their data collection time windows noted.
• Line 53, add the year to the Richter reference.
• Line 183: Where is RADO? Should this be in the map in Figure 2? Or does it match up to one of the existing labels in Figure 2?
• Line 218: Specify which nadir-looking spectrometer was on the UGAL ultralight.
• Line 216: ‘Splitted’ should be ‘split’.
• Line 294: Which mobile DOAS instrument is this? Additionally, having read the paper through a few times, I get the feeling there may be other instances where mobile DOAS is mentioned without identifying which one. Please check these types of details.
• Line 565: I missed where it was quantified that the NO2 ground and airborne measurements agree to within 7%. Can you clarify? Is it referring to the slope between MPIC/AirMAP from the supplement? If so, it applies only to those two instruments. The sentence in the conclusion is broad, making it seem as though all NO2 measurements are within that agreement.
• Lines 579-581: The ‘structure’ was not shown for HCHO, so it cannot be concluded whether it would be visible from daily satellite overpasses. Please rephrase.
• Figure 7 and Table 3: The IUP-UB nadir-only compact spectrometer is not listed in Table 3 for NO2, but the caption says it is from the IUP-UB for both NO2 and HCHO. I also saw some inconsistencies in the naming convention, with both IUP-UB and IUP-Bremen in use.
• Figure 9: Do the last two digits in the legend correspond to the hour? Is it in UTC? Please be clear in the caption. Another suggestion that would make this figure infinitely easier to investigate would be to limit the y-axis to about 2000 m; nothing is really changing above 1.5 km.
• Figure 11: The color bar in the top-left panel is different from the rest of the panels. Additionally, you mention Fig. S9 in the caption, but looking at Fig. S9, I do not see 3 mobile DOAS sites.
• Figures S3 and S4: The line colors stated in the caption do not appear to be right. The fit lines I see appear to be yellow (not blue) and are very thin and hard to see.
• Figure S11: Add to the caption where in the domain these in situ measurements were collected. Are they at the Turceni power plant? And are they at the ground?
• Section S2.6 does not list locations for these in situ measurements. That section also does not mention SO2 or NO2, even though Figure S11 shows NO2 and SO2 in situ measurements.