This work is distributed under the Creative Commons Attribution 4.0 License.
Aerosol Optical Depth Retrievals over Thick Smoke Aerosols using GOES-17
Abstract. Severe wildfires generate thick smoke plumes, which degrade particulate matter air quality near the surface. Satellite measurements provide spectacular views of these smoke aerosols and of Aerosol Optical Depth (AOD), a columnar measure of aerosol concentration widely used in assessing air quality near the surface. However, these thick smoke plumes often go undetected in satellite imagery, creating gaps in coverage over these high-pollution areas. In this study, we develop a new algorithm to detect and retrieve AOD from GOES-17 and compare these estimates with the Aerosol Robotic Network (AERONET), the MODIS Multi-Angle Implementation of Atmospheric Correction (MAIAC) product, and the current GOES Operational Aerosol Optical Depth (OAOD) product. Using the clear-sky reflectance composite approach to retrieve surface reflectance, AOD accuracy increases by 2 %–7 % on different days for optically thin aerosols. We also find that adding information from the red channel increases uncertainty for low-AOD retrievals but improves accuracy for high-AOD retrievals. After relaxing the maximum detectable AOD value, the number of valid AOD retrievals increases by 80 %, and accuracy relative to AERONET AOD also increases by about 4 %. Our approach to retrieving AOD yields an increase of 386,091 to 937,210 square kilometers of valid AOD coverage each day.
Withdrawal notice
This preprint has been withdrawn.
Interactive discussion
Status: closed
RC1: 'Comment on amt-2022-303', Anonymous Referee #1, 09 Dec 2022
This paper shows case studies for a retrieval of smoke aerosol optical depth (AOD) from GOES observations during intense wildfire events in the western USA in September 2020. The motivation is that the operational GOES AOD product (here, OAOD) often misses these events (the implication is that they are flagged as cloudy). The motivation is important and the topic is relevant to the journal, because extreme events are important for a variety of purposes and are not well captured by existing satellite products. The results from the proposed technique are compared to OAOD, ground-based AERONET sites (at two locations), and the MODIS MAIAC product.
The method is a simple one which relies on using the minimum reflectance technique to create a “background” surface albedo map at two wavelengths (470 and 650 nm) from days prior to the fires. Then, AOD is retrieved independently at each wavelength using an assumed aerosol optical model and interpolated to 550 nm using one of two methods. These are not scientifically new ideas – it is the same concept as the initial versions of the MODIS/SeaWiFS Deep Blue algorithms (e.g. Hsu et al. 2004, https://doi.org/10.1109/TGRS.2004.824067, and 2006, https://doi.org/10.1109/TGRS.2006.879540), with a shorter compositing period. The authors do not cite these papers but do cite Knapp (2005), which uses basically the same method as well. So I struggle to recommend this paper on grounds of scientific novelty. It is not clear to me what is new here. Minimum reflectance is a simple technique that works reasonably well for extreme events because the neglected “background” optical thickness present in the reference images becomes a fairly small component of the total AOD during the intense event. But there are no sensitivity analyses backing up any of the assumptions, and no justifications for the choices made at all. It seems guided by intuition.
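(For concreteness, the minimum reflectance compositing described above can be sketched in a few lines of Python; the array shape, NaN cloud masking, and variable names are illustrative assumptions, not details taken from the paper.)

```python
import numpy as np

def min_reflectance_composite(refl_stack):
    """Per-pixel minimum TOA reflectance over a pre-fire window,
    used as the 'background' surface reflectance estimate.

    refl_stack: (n_days, ny, nx) array of daily reflectances for one
    band, with cloudy pixels already masked to NaN."""
    return np.nanmin(refl_stack, axis=0)

# One composite per band discussed above (470 and 650 nm), e.g.:
# surface_470 = min_reflectance_composite(refl_470_stack)
# surface_650 = min_reflectance_composite(refl_650_stack)
```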
The authors use only GOES observations from 20 UTC “due to the maximum solar forcing”. That sentence does not make sense to me and I’m not sure that it is relevant. I suggest it is rephrased. Do they mean “minimum solar zenith angle” or something else? Also, why is this geometry preferred? I would have thought that minimum solar angle would mean sampling geometries closer to the surface backscattering hotspot, which normally hinders AOD retrieval. This should be better justified. Also, how much does solar zenith angle vary over the 45 day compositing period? What implications does this have for the surface albedo obtained?
It would be nice to process more than just this 20 UTC time slot. That way the authors could assess performance at times comparable to, e.g., both the Terra and Aqua overpasses, and maybe at another time which polar-orbiting sensors don’t sample (an advantage of geostationary). This would also increase the data volume available for validation, and may make that part of the analysis more robust. An additional option would be keeping one time slot but expanding beyond September 2020 (looking on Worldview one sees smoke beginning mid-August and continuing at fairly large scales until late October). I understand that the relevant computational power may not be available, but if not, that makes strengthening the motivation for choosing 20 UTC more critical.
In comparing their results to OAOD, the authors make much of coverage improvements. However, these discussions seem to me to be mixing up two things. There are two components here: pixel selection and the retrieval algorithm (i.e. pixel-level inversion). The coverage difference likely comes from pixel selection differences between the algorithms, yet the discussion often seems to be framed in terms of the retrieval algorithm itself. I think it would be more useful to first focus on why OAOD is throwing out pixels as invalid, and establish that. From Figure 6 and Table 4, OAOD doesn’t appear to perform any worse than the authors’ algorithm (though the limited data volume and short analysis make it hard to say). That makes it sound like the main benefit here is the pixel selection – so maybe it is better to focus on assessing and improving pixel selection for extreme aerosol events in OAOD. If I am misunderstanding something here, then I think the authors should rewrite the manuscript to make things clearer.
Technical statistical terms are routinely used incorrectly. The word “accuracy” appears in the manuscript 21 times, but I don’t think it is the correct word in most cases. The authors seem to be using it as a synonym for “error”. “Accuracy” should refer to how biased a measurement is with respect to the true value. “Uncertainty” is an estimate of the dispersion of repeated measurements from the true value (combining systematic and random error sources). “Error” is the mismatch between a specific measurement and specific instance of the truth. These terms are not the same. Table 3 also has a column “Accuracy within +/-0.5” measured in percent. There isn’t a charitable interpretation of the use of “accuracy” here – I think the authors might mean “Percent agreeing within +/-0.5”. Finally, the MAIAC comparison is referred to as a “validation” – this is not the case as MAIAC is not a ground truth. Only AERONET here can be reasonably assumed to be a ground truth. Correctness of terminology is important in scientific writing; the fact that a lot of published papers are sloppy is not an excuse.
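(To make the reviewer’s distinction concrete, a toy computation with hypothetical matchup values:)

```python
import numpy as np

retrieved = np.array([0.52, 1.10, 2.30, 0.95])  # hypothetical satellite AODs
truth     = np.array([0.50, 1.30, 2.00, 1.00])  # matched AERONET AODs

errors = retrieved - truth              # "error": each individual mismatch
bias   = errors.mean()                  # "accuracy": the systematic offset
rmse   = np.sqrt((errors ** 2).mean())  # dispersion about truth, one common
                                        # ingredient of an uncertainty estimate
pct_within = 100 * np.mean(np.abs(errors) <= 0.5)  # "percent within +/-0.5"
```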
Two methods are explored to estimate AOD at 550 nm from 470 and 650 nm. One takes the retrieved AODs at those wavelengths and calculates the Ångström exponent (which the paper calls angstrom – this should also be corrected). The other takes the 470 nm AOD and an assumed AE of 2. (A third method, not tested, would be to take the 650 nm AOD and an assumed AE of 2). This felt exploratory and hand-wavy to me, and I don’t think that the AERONET data volume available for this case study is sufficient to judge what is best. Really the authors seem to be trying to see whether using both wavelengths adds value over just using one, and the answer will depend on the magnitude and spectral correlation of error sources. Without a thorough uncertainty analysis (which is missing) and larger-scale validation, I don’t think we can tell, so I didn’t find this aspect useful.
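(Written out, both interpolation methods follow tau(lambda) = tau(lambda0) * (lambda/lambda0)^(-AE). A sketch of the two approaches the reviewer describes, with function names of my own choosing:)

```python
import numpy as np

def aod550_from_two_bands(aod470, aod650):
    """Method 1: Angstrom exponent (AE) derived from the two retrieved
    bands, then interpolation to 550 nm."""
    ae = -np.log(aod470 / aod650) / np.log(470.0 / 650.0)
    return aod470 * (550.0 / 470.0) ** (-ae)

def aod550_fixed_ae(aod470, ae=2.0):
    """Method 2: the 470 nm AOD plus an assumed AE of 2."""
    return aod470 * (550.0 / 470.0) ** (-ae)

# The untested third method would be: aod650 * (550.0 / 650.0) ** (-2.0)
```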
The figures need improvement. In Figures 3, 8, and 9, “no retrieval” and “AOD=0” seem to be the same color. This should be fixed. The color palette is also not color-blind safe, which I think is now required by AMT submission guidelines: https://www.atmospheric-measurement-techniques.net/submission.html#figurestables. The Figure 4 caption does not explain the horizontal/vertical bars. Figures 5 and 7 need substantial font size increases, and the background data might be better presented as a heat map. In Figure 6, there are clearly nonlinearities in the relationships, so a linear regression is inappropriate and should be removed (perhaps also true for Figure 4). It is also not clear that correlation is the right metric to highlight here for the same reason (it measures the degree of linear association between two quantities, so it isn’t so relevant when the underlying relationship isn’t linear).
Table 1 would be improved by listing AERONET site names here. I’m not sure of the relevance of the FRM/FEM column as it is “No” in both cases. It looks like this is listing only the two sites with a PM monitor – why not list all 15 sites used for the validation? That would also be an improvement as currently this information does not seem to be provided.
As the scientific novelty is not clear and there are numerous technical, language, and presentation shortcomings, I recommend rejection. I would encourage a resubmission if the scope were expanded to more directly illustrate and understand the reasons for coverage differences between this and OAOD, and make some actionable recommendation. In my opinion, this would require more work than the traditional scope and time frame of major revisions. I know this seems like a negative review but I do really think the idea is important – extreme events are poorly-retrieved (both in terms of pixel masking and quantitative retrieval) and these new geostationary sensors are great tools to tackle this problem. It just feels to me like this paper was submitted prematurely.
Citation: https://doi.org/10.5194/amt-2022-303-RC1
RC2: 'Comment on amt-2022-303', Anonymous Referee #2, 21 Dec 2022
This article demonstrates a method to retrieve aerosol optical depth (AOD) under high aerosol loading scenarios using GOES satellite data, based on one case study. Missing satellite AOD retrievals close to fire source regions have become an important issue as interest in and need for wildfire observations have grown over the past decade. Many studies have investigated this issue (e.g., Shi et al., 2019, 2021; Mo et al., 2021; Li et al., 2015; Lu et al., 2021; Wang et al., 2007). However, only a few have the potential to be incorporated into operational satellite aerosol algorithms, because by relaxing the filtering criteria, any proposed method can introduce artifacts in other parts of the globe. That is the main reason all operational global satellite aerosol products currently retrieve only partial plumes. If the authors can present an innovative approach that retrieves optically intense smoke plumes while maintaining minimal artifacts outside the plume region, it will be very beneficial to the aerosol and fire community. However, the method presented here still introduces high AOD values outside the plume region (Figure 8). Further, the algorithm itself uses a well-established satellite aerosol retrieval method, a surface database plus lookup table (LUT), with many assumptions. These assumptions work well in some cases but may introduce large errors in others. For example, if surface properties change over a short period due to vegetation cover or weather (e.g., crop harvesting), large biases will be introduced into the AOD retrieval. Similarly, at very high aerosol loading, the smoke model assumption will also lead to large retrieval errors. In this study, a generic smoke model is used, which leads to the biases shown in the lower panel of Figure 6: AOD-Ang underestimates at AOD < 1, and AOD-log is better but still shows non-linear underestimation between AOD of 0.5 and 1.0. In this case some of these assumptions work well, but generalizing the approach may require a lot more work. In this paper, these issues are not considered at all.
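(A toy illustration of the surface-database-plus-LUT inversion the review refers to; the node values below stand in for a radiative-transfer-derived table and are placeholders, not numbers from the paper:)

```python
import numpy as np

# For one fixed geometry and surface reflectance, a radiative-transfer
# model tabulates TOA reflectance as a function of AOD; the retrieval
# then inverts the observed reflectance against this table.
aod_nodes      = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])
toa_refl_nodes = np.array([0.08, 0.11, 0.14, 0.19, 0.26, 0.33])  # hypothetical

def retrieve_aod(observed_toa_refl):
    """1-D LUT inversion (assumes reflectance increases monotonically
    with AOD, which can fail over bright surfaces or for strongly
    absorbing aerosols)."""
    return np.interp(observed_toa_refl, toa_refl_nodes, aod_nodes)

print(retrieve_aod(0.22))  # -> roughly 1.4 with this toy table
```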
Other than this main issue, the quality of the manuscript also needs substantial improvement. The introduction does not flow well, and the importance and innovative aspects of the research are not introduced. The background literature review is far from comprehensive, especially on the potential reasons that satellite algorithms fail to retrieve optically thick smoke. The manuscript as a whole is also very hard to follow, as there are many ambiguous descriptions. For example, line 75 mentions “different surface observations”. This is out of context and confusing, as AERONET is the only surface observation the paper has introduced. Another example is the first sentence of the Methods section, which jumps straight into technical details without first giving a short overview of what the algorithm is going to do and how.
There are also places where the paper is confusing due to its methodology. For example, the paper repeatedly states that the first step is to calculate the missing percentage, which is not very convincing. Calculating the missing percentage by comparing AERONET data to satellite data is also questionable: when the surface condition is not suitable for retrieval (e.g., snow cover), AERONET will still have data but the satellite will not. The authors’ method will miscategorize these situations as cases where the smoke plume is too thick. Other choices, such as using only 20 UTC, also do not make sense, as one advantage of GOES data is its high temporal resolution. Choosing only one time frame limits the observation geometries to a certain range and hides some of the uncertainty that can occur when applying the retrieval algorithm.
Lastly, the visualization of the results needs to be improved. Figures 3, 8, and 9 are hard to compare, and adding a difference plot marking the new retrievals would be much more helpful. More error statistics than correlation alone could be shown on the comparison plots. A good correlation does not guarantee good agreement between two datasets. In this particular case, many other error statistics would be helpful, e.g., RMSE, bias, bias when AOD > 1, the number of data points plotted, and the percentage within the expected error (% within EE). Also, in many plots AOD-Ang is written as AOD-log, which is confusing.
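(The “% within EE” metric mentioned here is usually computed against an expected-error envelope; a sketch using a dark-target-style envelope of +/-(0.05 + 0.15 * AOD) as an assumed example, not a threshold taken from the paper:)

```python
import numpy as np

def pct_within_ee(retrieved, aeronet, a=0.05, b=0.15):
    """Percentage of matchups with |retrieved - aeronet| <= a + b * aeronet.
    The envelope coefficients (a, b) are an illustrative assumption."""
    retrieved = np.asarray(retrieved)
    aeronet = np.asarray(aeronet)
    envelope = a + b * aeronet
    return 100.0 * np.mean(np.abs(retrieved - aeronet) <= envelope)
```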
Citation: https://doi.org/10.5194/amt-2022-303-RC2
EC1: 'Comment on amt-2022-303', Thomas Eck, 06 Jan 2023
The topic of this paper is relevant and important for the aerosol remote sensing community, as severe smoke events at high optical depth are difficult to retrieve from satellite measurements and are also important for radiative forcing and human health reasons, among others. However, due to the lack of novelty and the numerous technical issues outlined by the two reviewers, my decision as associate editor is to reject this version of the manuscript, since it would take too much time and effort (within the normal revision time frame) to revise the paper so that all reviewer concerns could be addressed. The authors are encouraged to continue working on the manuscript in light of these AMT review comments, as it seems a strong paper would result from implementing most or all of the suggestions offered.
Citation: https://doi.org/10.5194/amt-2022-303-EC1