I thank the authors for their revisions to the manuscript, which I feel have improved it sufficiently to warrant publication. I am particularly pleased with the additional figures. I have a few comments on the reply for the authors to consider in future:
- In your reply to Reviewer 2 about L251, you say that "It should be noted that the contributiuon of the super large particles is accounted in those bins", referring to the five lognormal bins of GRASP. While you are correct that the existing bin scheme provides very large particles from the tails of those distributions, this supply will be correlated with the loading of smaller particles (i.e. to get lots of large particles also requires having more 2.9 µm particles). Coarse- and fine-mode aerosols have different sources and sinks, so an ideal bin scheme would include a separate coarse mode that can be adjusted independently.
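To make the correlation concrete: the number of particles in the tail of a lognormal mode is a fixed fraction of that mode's total loading, so any "super-large" particles supplied by the tail scale one-to-one with the bin itself and cannot be adjusted independently. A minimal sketch (the geometric standard deviation of 1.6 is an assumed, illustrative value, not GRASP's actual parameter):

```python
import math

def number_above(r_cut, n_total, r_median, sigma_g):
    """Number of particles with radius > r_cut in a lognormal mode with
    total number n_total, median radius r_median (same units as r_cut)
    and geometric standard deviation sigma_g."""
    z = (math.log(r_cut) - math.log(r_median)) / math.log(sigma_g)
    return n_total * 0.5 * math.erfc(z / math.sqrt(2.0))

# Doubling the loading of the 2.9 um bin exactly doubles its >10 um tail,
# so the tail carries no independent coarse-mode degree of freedom.
for n_total in (100.0, 200.0):
    print(n_total, number_above(10.0, n_total, r_median=2.9, sigma_g=1.6))
```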
- I am intrigued by your finding that MISR systematically underestimates AOD, as my experience with the v23 data has been quite good. This may simply be a cultural difference, but if I'd found a problem in someone else's data, that comparison would have been front-and-centre in my paper. Finally, it should be noted that the MODIS products are not perfect and differing from MODIS does not necessarily mean one is wrong. For example, there is an ongoing discussion about the AOD of remote, clean air.
- My sincere apologies for misrepresenting your relationship to Li et al. 2022. I did try to check the affiliations, but it can be difficult from Europe to search for details of researchers at Chinese institutions.
- When I asked for a comparison to another implementation of GRASP, I wasn't thinking of different GRASP versions applied to DPC. I was thinking of comparison to existing products processed with GRASP, such as POLDER (https://www.grasp-open.com/products/polder-data-release/). By using an existing algorithm like GRASP, your team is well positioned to disentangle the errors caused by the algorithm from those caused by the instrument.
- I wish you the best in dealing with the negative drift of DPC. It sounds challenging.
- To be less flippant, I should explain that Levels 1.5 and 2.0 of AERONET are different filterings of the same underlying data, with the latter being more stringent in the removal of possible cloud contamination and applying more nuanced calibration methods. Using both was "foolish" because the dataset would contain duplicate observations. Apologies if this was not clear from AERONET's documentation.
- At the end of section 3.2, you filter out "obvious noise" with DOLP > 1. There may be value in evaluating how this filtering biases your products. While it is true that DOLP > 1 is physically impossible, it can be a valid observation when working with noisy radiances from separate sensors. For example, the co-polar channel could experience an unusually negative random fluctuation, pushing that signal below its dark current, at the same time as the cross-polar one experiences an unusually positive one. An equivalent problem occurs in lidar analysis, where it was found that removing unphysical observations introduced a positive bias into the final products, as the filtering had artificially truncated the otherwise symmetrical distribution of random errors.
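The truncation bias is easy to demonstrate numerically. The sketch below assumes symmetric Gaussian noise on a retrieved DOLP near the physical limit; the true value and noise level are illustrative, not taken from DPC. Discarding the "impossible" samples above 1 removes only the upper half of the error distribution and pulls the mean low:

```python
import random
import statistics

random.seed(0)

TRUE_DOLP = 0.98   # assumed true value, close to the physical limit
NOISE_SD = 0.05    # assumed radiometric noise level (illustrative)
N = 100_000

# Symmetric random errors around the true value: the raw mean is unbiased.
samples = [random.gauss(TRUE_DOLP, NOISE_SD) for _ in range(N)]
mean_all = statistics.mean(samples)

# Filtering out "physically impossible" DOLP > 1 truncates only the upper
# tail of the error distribution, biasing the filtered mean low.
kept = [s for s in samples if s <= 1.0]
mean_filtered = statistics.mean(kept)

print(f"mean of all samples:       {mean_all:.4f}")
print(f"mean after DOLP <= 1 filter: {mean_filtered:.4f}")
```

The same mechanism, with the sign flipped, produces the positive bias seen when negative lidar signals are discarded.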
Some further technical corrections, using line numbers from the document with tracked changes:
Figs. 2-4) There are horizontal dashed lines at seemingly arbitrary levels (e.g. at 0.5 and 0.75 in Fig. 2(c), but at -0.75, -0.5, 0, and 0.75 in Fig. 4(c)). There's nothing wrong with having a grid, but try to be consistent between plots.
L25) coverage is ~2 days
L35) From most AERONET sites
L40) ability of the DPC
L92) with a relatively high spatial resolution
L97) There is a space before the period here.
L102) the surface accounts
L132) Capitalise Earth.
L147) valuable Earth observations
L173) to average satellite data
L191) in the construction of the modelled reflectance
L212) the [I,Q,U]^T represent
L219-20) "Cloudy pixels are the main factor" or "Cloud contamination is the main factor"
L247) to generate a cloud mask
L271) For instance, the GRASP
L280-1) "The GRASP/Models approach assumes an external mixture of several" or "The GRASP/Models approach assumes externally mixed aerosols, which"
L298) using the Lagrange multiplier
L317) probably underestimates the AOD
L346) Delete "While" as this statement doesn't precede a sentence.
L362) You say, "Figure 4b displayed the changes." Usually, one refers to things within a document in the present tense (e.g. "displays the changes"). You aren't strictly wrong here - the figures did show those things in the past - but it sounds weird. This also occurs at L436 and L475.
L379) while the lower values
L385) in most areas
L404) resulted in the underestimation
L411) This phenomenon is caused by unsuitable aerosol models, which further results in a persistent overestimation in the DT algorithm
L418-9) loading is low most of the year
L438-9) regions. A common pattern is seen in all sub-plots
L439) Either put "and" or a comma after "distributed".
L454) period to avoid how global validation statistics shift with the spatial distribution of observations
L489) "The main purpose was to evaluate" to be consistent with the tense of the previous sentence. Also, in the conclusions we're talking about things that have been done rather than things that are happening now so the past tense is appropriate.
L506) respectively at most AERONET sites
L512) overstrict cloud masking
L515-6) The study improves our understanding of DPC and finds a solution
L676) The page number for Lui 2022 is 106121.
L688) The page number for Martins 2002 is MOD4 (GRL had a period of weird numbering.)