Articles | Volume 17, issue 2
https://doi.org/10.5194/amt-17-515-2024
© Author(s) 2024. This work is distributed under the Creative Commons Attribution 4.0 License.
GPROF V7 and beyond: assessment of current and potential future versions of the GPROF passive microwave precipitation retrievals against ground radar measurements over the continental US and the Pacific Ocean
Download
- Final revised paper (published on 25 Jan 2024)
- Preprint (discussion started on 01 Aug 2023)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2023-1310', Anonymous Referee #1, 27 Sep 2023
  - AC1: 'Reply on RC1', Simon Pfreundschuh, 23 Oct 2023
- RC2: 'Comment on egusphere-2023-1310', Anonymous Referee #2, 02 Oct 2023
  - AC2: 'Reply on RC2', Simon Pfreundschuh, 23 Oct 2023
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Simon Pfreundschuh on behalf of the Authors (24 Oct 2023): Author's response, Manuscript
EF by Sarah Buchmann (25 Oct 2023): Author's tracked changes
ED: Referee Nomination & Report Request started (30 Oct 2023) by Domenico Cimini
RR by Anonymous Referee #1 (02 Nov 2023)
RR by Anonymous Referee #2 (15 Nov 2023)
ED: Publish subject to technical corrections (17 Nov 2023) by Domenico Cimini
AR by Simon Pfreundschuh on behalf of the Authors (04 Dec 2023): Author's response, Manuscript
The manuscript provides an assessment of GPROF V7 capabilities, together with comparisons against GPROF V5 and three new, not yet operationally implemented, machine-learning-based versions of GPROF (GPROF-NN). Based on this validation exercise, GPROF V7 is considered reliable over CONUS for both GMI and the other sensors of the GPM constellation. GPROF-NN introduces substantial improvements in the retrievals, but the viewing geometry reduces accuracy towards the edges of the swath. Overall, the performance of all GPROF versions is strongly influenced by the accuracy of the a priori database used (GPM CMB).
The manuscript is very well written and provides a systematic validation of the precipitation products from many perspectives: CONUS and the Pacific Ocean, as well as regional, seasonal, and diurnal-cycle analyses. The manuscript is definitely very dense and rich in information. I think it is ready for publication after some very minor revisions that might help clarify a couple of points:
Other suggestions:
Line 37: ‘resolution of 10 km’ - given the global nature of IMERG, maybe 0.1° × 0.1° is more appropriate?
Line 171: ‘neighboring pixels’ - is this the distance from the centers of neighboring pixels?
Line 217: ‘conditioned on the validation precipitation’ - do you mean the analysis is made only on pixels where it is precipitating according to the validation (MRMS) dataset?
Line 218: ‘GPROF a priori database’ - since this database serves as both the a priori database (GPROF) and the training data (NN), maybe use ‘a priori/training’.
Line 221: can the spread also be due to the preprocessing clustering?
Line 226: ‘conditioned on the reference precipitation’ - is this the same as line 217, ‘validation precipitation’? As mentioned in comment 1, there is a bit of confusion in the naming of the different datasets used.
Line 229: ‘GPROF V5 is based on a different a priori database’ - I suggest specifying that V5 was based on DPR over land and CMB over ocean.
Line 234: ‘retrieval database’ – which one is the retrieval database? I suppose you are referring to GPM CMB? This should be stated more clearly earlier in the section and be consistent throughout the manuscript.
Line 244: ‘introduced rain gauge correction’ – replace with ‘introduced by the rain gauge correction’.
Figure 2: I see a very interesting behavior in the low-value trend lines. The GPM CMB vs. a priori dataset comparison (where the a priori dataset is GPM CMB 2019) shows an overestimation of GPM CMB 2019 relative to GPM CMB from the other years, while all the other comparisons show the opposite behavior. It also looks like the comparisons with MRMS 2021 and 2022 show a higher bias for low values. The trend for higher values is likewise worth attention. It might be nice to reference this behavior in the section and in the bias description, since it provides more information on the precipitation range that has the most issues.
Line 260: ‘conventional GPROF’ – both V7 and V5?
Line 270: ‘When compared to the database’ – which database?
Line 294: ‘the fraction of confirmed raining pixels among those retrieved as raining’ – would this be ‘the fraction of confirmed raining pixels in the a priori database among those retrieved as raining by MRMS’?
Line 295: ‘the fraction of confirmed raining pixels that are detected by retrieval’ - would this be ‘the fraction of confirmed raining pixels in the a priori database that are detected by the retrieval’? I might have interpreted these last two sentences incorrectly, which itself suggests the importance of clarifying which datasets you are referring to.
Line 294-295: you talk about raining pixels, so is frozen precipitation also excluded from GPROF? It makes sense, but earlier in the manuscript you mention that it is excluded from MRMS and never state what is done for GPROF or CMB. I think this is an important point, since it eliminates many winter observations; together with the winter precipitation mentioned in the regional analysis, this needs to be clarified earlier in the manuscript.
Line 308: ‘For both the database and the MRMS’ – do you mean the a priori database?
Figure 6 caption: Panel (a) shows the detection skill for the database collocations – I would specify a priori database.
Figure 12: for better comparison, I would suggest adding a column with the GMI results to these plots.
Line 464-465: I actually see more bias for GPROF V5 and V7 than from GPROF NN, am I missing something?