Review of the first revised version of the manuscript „Commercial Microwave Links as a tool for operational rainfall monitoring in Northern Italy“ submitted to AMTD by Roversi et al. in 2020.
The authors did a thorough revision and provided good responses to my comments. They altered and extended their analysis accordingly. I still have some general comments that should be addressed, which will require a minor revision in the sense that I expect mostly changes to the text and not to the underlying analysis or plots. In particular, the quality of the derived CML rainfall fields and their limitations are not communicated and discussed adequately.
I want to note that, in contrast to the initial assessment of reviewer 2, I think that a further extension of the presented analysis regarding an adjustment or extension of RAINLINK is not required, in particular after this first revision. Based on the limitations of RAINLINK that have been found in the presented analysis, it will, however, definitely be an important task for the authors (or for another research group with a large enough CML data set) to further investigate the transferability of RAINLINK and the need for its recalibration. But, in my opinion, this would go beyond the scope of this manuscript, which already provides enough new insights by taking RAINLINK to a new region with a new data set and analysing its performance in detail. It would, however, be important that the limitations of RAINLINK, in the unadjusted version used in this work, are communicated more clearly in the next revision of this manuscript (see my general comment on this below).
# General comments
1. Insufficient discussion of limitations of RAINLINK:
* Since the authors only apply the existing RAINLINK algorithm without any adjustments or extensions, one of the main contributions of this manuscript is to discuss its limitations and state them clearly.
* Fig. 3 for the Vegato case shows a clear limitation of the NLA (at least this is suggested in the text, see also my specific comments below). Fig. 4, May 11 (middle column), also seems to support this.
* POD is quite low. ME shows clear underestimation and seems to be affected by POD (see my specific comment below)
* How are these metrics affected by parameters of RAINLINK? Is there room for improvement with adjustments of RAINLINK or better algorithms? Or is this also limited by the CML data? This should be discussed.
* The authors mention several of these issues at different places in the text, but the information is spread across the manuscript. Since, in my opinion, communicating the limitations is important, I strongly suggest adding a dedicated section for this discussion.
* One sentence about this should also be added to the abstract.
2. Interpretation of results:
* I do not agree with some of the interpretations of the quality of the results.
* As written in my comment above, POD and ME indicate limitations of the derived RAINLINK CML rainfall products.
* Stating that the performance of the derived CML rainfall product is „very similar, when not even better“ (L11) to „adjusted radar-based precipitation gridded products“ is, in my opinion, a bit far-fetched. The interpolated reference data set might be a good choice to compare the interpolated CML rainfall fields with, but I wouldn’t say that the radar is worse just because it differs from interpolated gauge rainfall fields. The radar will detect small-scale rainfall that the interpolated gauge data set does not capture correctly, or not at all.
* Further examples of interpretations that should be more modest can be found, e.g., at L376 and L379 (see my specific comments).
3. Strange PDF diff format. The blue text in the diff has two unnecessary lines of dots underneath the beginning of each word, which makes it hard to read. If possible, a more readable highlighting of added text could be used for the next iteration.
4. The writing could still be improved. I add some examples as specific comments below, but I won’t provide a complete list. Also note that I am not a native speaker, only somebody who typically gets „good grades“ for the writing in papers, so this is a bit subjective.
# Specific comments on the revised manuscript
L7: „The results of the 15 min single-link validation with close-by raingauges show high variability, with the influence of the area physiography and precipitation patterns and the impact of some known issues“. This is a good example of a sentence that, in my opinion, needs improvement. Instead of „raingauges“ write „rain gauges“ (based on online English dictionaries), which should be changed throughout the manuscript. The added „the“ before „influence…“ is, in my opinion, not correct. The formulation „…and the impact of some known issues“ is too vague. I am also not 100% sure if I understand the sentence correctly. Maybe write „…high variability, which can be explained by the area physiography…“. But, as I said, I am not 100% sure what is meant here.
L13: What is a „diffuse underestimation“?
L42: I would write something like „dedicated hardware“ instead of „high-quality links“. CMLs are also high quality, just with different priorities.
L44: I have never heard the expression „On the same token“. Online dictionaries say that the correct version would be „By the same token“, but there might also be a more commonly used expression that fits here.
L48: in my opinion it should be „…of the CML approach…“
L70: Write „on the one hand“ instead of „from one side“ since later in this sentence you write „on the other hand“.
L71: „…prevents us from performing a proper calibration of the algorithm…“. I do not understand how this prevents the calibration. You wrote this also in your response to my comment on L296 and in your response to reviewer 2. Are you saying that the products currently available are not reliable enough? Please explain.
Section 2.1.2: The revised manuscript contains the following formulation: „provider engineers gave us instead some statistics of the functioning of the ATPC devices (specifically, the transmitting levels, thus modulation maxima (in dB) in the time interval), …“. What does that mean? Were you able to get the maximum TSL for each 15 minutes? Or did you get the ATPC offsets that were at maximum applied during each 15 minutes? Or were the „statistics“ you got more general, so that you applied a more general correction? It would be good to improve this section with a clearer, not necessarily longer, explanation.
L239: „from the dataset“ should be moved to the end of the sentence
L244: Instead of „we let“, better „we leave“
L243: „…the spatial pattern of precipitation is expected to be similar in Italian and Dutch sites (Caracciolo et al., 2006).“ This is an important assumption, but I do not see how the reference proves it valid, since it does not study the spatial structure of rainfall. Your analysis, however, indicates that the NLA, which was optimized for Dutch rainfall, might lead to false negatives when detecting small-scale intermittent rainfall (see Fig. 3e or Fig. 4f). This means that either the same problems must also have arisen with the Dutch CML data set, or the rainfall in your region has shorter spatial correlation lengths on the time scales relevant for the NLA. This should be mentioned here and discussed further when the limitations of RAINLINK are discussed.
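To make my reasoning explicit: as far as I understand the NLA with its default settings (the notation below is mine, not taken from the manuscript), a link i is only classified as wet when the attenuation increase is coherent across its neighbourhood, roughly

```latex
\[
\operatorname{median}_{j \in \mathcal{N}(i)} \Delta P_j \le c_1
\quad\text{and}\quad
\operatorname{median}_{j \in \mathcal{N}(i)} \frac{\Delta P_j}{L_j} \le c_2 ,
\]
```

where \(\mathcal{N}(i)\) denotes the links within the search radius (15 km by default, if I am not mistaken), \(\Delta P_j\) the change in received power, \(L_j\) the link length, and \(c_1, c_2\) negative thresholds. An isolated convective cell that affects only one or two links will hardly move these medians and can therefore be missed even if the affected link itself shows a clear attenuation signal. This is why the assumed spatial correlation length of rainfall matters here.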
Section 3.3: Some of the equations of the error metrics are not rendered correctly. They would probably also be more readable if they were presented as separate, displayed equations rather than inline with the text.
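For illustration only (my notation, assuming the usual definitions; the manuscript’s exact formulations may differ), displayed equations along these lines would be easier to read than inline math:

```latex
\[
\mathrm{ME}  = \frac{1}{N}\sum_{i=1}^{N}\left(R_{\mathrm{CML},i}-R_{\mathrm{ref},i}\right),
\qquad
\mathrm{POD} = \frac{\mathrm{hits}}{\mathrm{hits}+\mathrm{misses}}
\]
```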
L295: „The 75% of the…“. Do you mean the 75% quantile here? It does not seem clear from the sentence.
L341: It seems there is only speculation as to whether the „irregular and scattered precipitation patterns could be a factor that affects the correct classification“. Since you are already studying this event in detail, it would be important to understand what went wrong here. Can you, with or without showing the data from the surrounding CMLs that are relevant for the NLA, elaborate on this? Would an adjustment of the NLA parameters have helped here?
L376: In my opinion a change of ME from -0.26 to -0.41 is significant and not „slightly worse“ as stated here.
L377: It should be explained where the changes in ME stem from.
L379: In my opinion, the ME is not in „good agreement“. Could a better POD in the Dutch cases be the cause of the large difference in ME?
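To spell out the mechanism I suspect (a sketch in my own notation, assuming that wet intervals missed by the classification end up with a CML estimate of zero, and neglecting the positive contribution of false alarms):

```latex
\[
\mathrm{ME}
\approx \frac{1}{N}\sum_{i\in\mathrm{detected}}\left(R_{\mathrm{CML},i}-R_{\mathrm{ref},i}\right)
\;-\;\frac{1}{N}\sum_{i\in\mathrm{missed}} R_{\mathrm{ref},i}
\]
```

Every missed wet interval thus contributes its full reference rainfall as negative bias, so a lower POD directly pulls ME down. If this is confirmed, it would fit well into the suggested dedicated discussion of limitations.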
L486: „regardless the type of precipitation“. While the k-R relation is certainly less sensitive to DSD variations than the Z-R relation, its robustness only holds for liquid precipitation. For larger hydrometeors, as in graupel, hail or snow, it will also show much more scatter than for rain. The Z-R relation might be worse, but CMLs can suffer from additional large WAA, e.g. because of ice or wet snow on the antenna covers. Hence, I would not say that CMLs are robust „regardless the type of precipitation“.
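For context (standard notation, not quoted from the manuscript): the rain-induced specific attenuation is usually modelled as a power law

```latex
\[
k = a\,R^{b},
\]
```

with coefficients \(a\) and \(b\) that depend mainly on frequency and polarization. At typical CML frequencies this relation is close to linear and comparatively insensitive to DSD variations, but it is derived for liquid precipitation, which is exactly why I would restrict the robustness claim to rain.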
Figure 1. It is good to now be able to see the grid cells without CML coverage in the target regions, even though, as you explain, their impact is small. But now there are two types of white grid cells in the plot. A simple solution would be to use two different grey tones (a light one for the region, a darker one for LC=0) or to simply adjust the limits of the viridis colorbar.
Figure 3. As already stated in my first review, the meaning of the pink horizontal band, which is explained in the text, should also be explained in the figure caption.