Review of MISR NRT AOD paper

This paper describes a near real time (NRT) version of the standard MISR aerosol product. NRT data are particularly useful for assimilation and monitoring purposes, so this development is welcome as the standard aerosol product has latency of several months. There is also a FIRSTLOOK product produced with latency of several days. The main differences between the NRT and standard algorithms are that the former uses different ancillary data (also the case for FIRSTLOOK) and cloud masking because those required for standard product are not available sufficiently fast. Additionally, NRT files are split into partial orbits (“sessions”).

I am posting this review under my own name (Andrew Sayer) in the interests of transparency. I have a current collaboration with one of the co-authors of this manuscript, though that does not focus on the NRT MISR aerosol product discussed here. I have disclosed this to the Editor, and feel I can provide an unbiased review.

General comments and recommendation:
The work is in scope for the journal. The quality of writing and presentation is high.

My main issue with the analysis is that the comparison focuses on the FIRSTLOOK and NRT products, rather than on the "final" standard product and NRT. This is relevant because most climatological and validation analyses use the final product, not FIRSTLOOK. I understand that FIRSTLOOK has been the lower-latency alternative to the standard final product, but I think it is worth extending the analysis to also show comparisons against the final standard product. This is because differences in ancillary data (e.g. wind speed over water) could lead to regional systematic differences in retrievals; even if not, it would be good to quantify the sensitivity of the algorithm as a measure of the retrieval "noise" added by the ancillary data treatment. As it stands, the present comparison mostly examines the effects of the pixel selection criteria and misses the effects from these other sources. The paper is not too long, so I hope the authors would consider adding this to some parts of the study. Note that I am also not aware of any published analysis of the differences between FIRSTLOOK and the final standard-product AOD, so adding the standard product here would also be informative in that respect.
Other than that, I recommend publication following minor revisions.

Specific comments:
Lines 115-117: the authors mention the pixel-level AOD uncertainties. It would be good to make a statement about how these compare between the NRT and standard products. They might be near-identical (in which case only a sentence is probably needed), or they might show some differences. This could be important because NRT applications (e.g. data assimilation) often need an error model.

Line 137: the paper says that the NRT aerosol products "are available" (present tense) at https://asdc.larc.nasa.gov/project/MISR . I checked, and the only level 2 NRT products listed at the time of posting this review (April 2 2021) are two versions of cloud motion vectors. If the NRT aerosol product is not available at the time the paper is accepted, then the language in the paper should be changed.
Line 277: the authors introduce NRT_prot here, but it is not used in the figures. I think it would be clearer to use this label explicitly wherever NRT_prot is what is being shown (which I think is most of the analysis before Section 5?). Or am I misunderstanding this?

Figures 2, 3: the legend says "FIRTLOOK" rather than "FIRSTLOOK". I do not think Figure 2 is necessary anyway, since Figure 3 contains the same information in a more useful way (land vs. dark water split).
Line 301: I am not sure I agree with this statement. Looking at Figure 3b, it looks like the NRT_gained pdf over land is flattened overall (more low and more high values than FIRSTLOOK).

Tables 1, 2, 3: I wonder if it would also be useful to add rows indicating the (arithmetic or geometric) AOD standard deviation for each case. A shift (or lack thereof) in the mean is one thing, but truncation of the variability is another. For example, as one moves to tighter cloud thresholds, Tables 2 and 3 show that the mean AOD does not change much, but from the corresponding figures it looks like there is some loss at both the low and high ends (possibly cloud shadows and clouds?). Adding the standard deviation would quantify how much this narrows the distribution (it might also be negligible; it is hard for me to guess from the figures).
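For concreteness, the summary statistics I have in mind could be computed along these lines (a minimal sketch using only the Python standard library; the function name and inputs are illustrative, and the geometric statistic assumes strictly positive AOD values):

```python
import math
from statistics import stdev

def aod_spread(aod):
    """Arithmetic and geometric standard deviation of an AOD sample.

    aod: iterable of AOD values (all > 0 for the geometric statistic).
    Returns (arithmetic_sd, geometric_sd).
    """
    aod = list(aod)
    arith_sd = stdev(aod)                                # arithmetic (sample) SD
    geo_sd = math.exp(stdev(math.log(a) for a in aod))   # geometric SD (multiplicative)
    return arith_sd, geo_sd
```

The geometric form is often the more natural one for AOD, which tends to be approximately log-normally distributed.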
Line 322: I think "less" should be "fewer", as in principle this is a countable quantity. Using a discrete or different colour table might also make this clearer. Also, at present, the south polar region is shown in white ("zero difference") but I think it should be grey ("no data").

Lines 469, 473-476: here the authors state which ARCI screening threshold is applied in the NRT product, and then suggest that data users do their own experimentation for their particular use case. It is not clear to me from the text whether the NRT product contains an "unscreened" data set plus ARCI values so that users can do this (as the standard product does), or whether it only contains retrievals prescreened with the ARCI 0.18 threshold. Somewhere in the paper it would also be good to discuss the NRT file contents (whether the file specification is the same as for FIRSTLOOK/final or not; if so, only one sentence is needed).