the Creative Commons Attribution 4.0 License.
A cloud screening algorithm for ground-based sun photometry using all-sky images and deep transfer learning
Abstract. Aerosol optical depth (AOD) is used to characterize aerosol loadings within Earth’s atmosphere. Sun photometers measure AOD from the Earth’s surface based on direct-sunlight intensity readings by spectrally narrow light detectors. However, when the solar disk is partially obscured by cloud cover, sun photometer measurements can be biased due to the interaction of sunlight with cloud constituents. We present a novel deep transfer learning model on all-sky images to support more accurate AOD retrievals. We used three independent image datasets for training and testing: the novel Northern Colorado All-Sky Image (NCASI), the Whole Sky Image SEGmentation (WSISEG), and the METCRAX-II datasets from the National Center for Atmospheric Research (NCAR). We visually partitioned all-sky images into three categories: 1) clear sky around the solar disk, 2) thin cirrus obstructing the solar disk, and 3) thick, non-cirrus clouds obstructing the solar disk. Two-thirds of the images were allocated for training and one-third were allocated for testing. We trained models based on all possible combinations of the training sets. The best-performing model successfully classified 95.5 %, 96.9 %, and 89.1 % of testing images from NCASI, METCRAX-II and WSISEG datasets, respectively. Our results demonstrate that all-sky imaging with deep transfer learning can be applied toward cloud screening, which would aid ground-based AOD measurements.
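The two-thirds/one-third allocation described in the abstract can be sketched as a per-class (stratified) split. The sketch below is illustrative only: the class names and image IDs are hypothetical, and this is not the authors' actual pipeline.

```python
import random

# The three visual categories described in the abstract (labels are
# paraphrased; the exact class names used by the authors may differ).
CLASSES = ("clear", "cirrus", "thick_cloud")

def stratified_split(labeled_images, train_frac=2 / 3, seed=0):
    """Split (image_id, class_label) pairs so that roughly train_frac
    of each class lands in the training set and the rest in the test set."""
    rng = random.Random(seed)
    train, test = [], []
    for cls in CLASSES:
        items = [pair for pair in labeled_images if pair[1] == cls]
        rng.shuffle(items)  # randomize order within each class
        cut = round(len(items) * train_frac)
        train.extend(items[:cut])
        test.extend(items[cut:])
    return train, test

# Hypothetical image IDs, 10 per class:
images = [(f"img_{i:03d}", CLASSES[i % 3]) for i in range(30)]
train, test = stratified_split(images)
print(len(train), len(test))  # 21 9
```

Splitting per class rather than over the pooled image list keeps the class proportions comparable between the training and testing sets, which matters when, as the reviewers note, the class fractions differ across the three datasets.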
This preprint has been withdrawn.
Interactive discussion
Status: closed
RC1: 'Comment on amt-2022-217', Anonymous Referee #1, 27 Sep 2022
The manuscript by Wendt et al. uses a deep transfer learning model to develop a cloud screening algorithm for ground-based sun photometry applications. All-sky images from three different sites are used for both training and testing, and the images are classified as clear, cirrus, or thick cloud. The algorithm achieves hit rates of about 90 % on the three datasets. The manuscript is, overall, reasonably presented. However, as with any deep-learning-based model, there are still some major problems with the model development and validation. My detailed comments follow.
1. A total of approximately 1500 images is used for both training and testing, which is very small for deep learning model development. With such a small number of samples, overfitting is a likely problem, as demonstrated by the significant drop in accuracy across datasets in Table 2. There are other problems related to small sample size as well. Thus, I do not think such a small sample size can yield a solid deep-learning-based algorithm.
2. Images from three different sites are used. What are the essential differences among them during training and testing? The fractions of images in the different classes (clear, cirrus, or cloud) are quite different across sites. Again, this further demonstrates that the datasets are insufficient for deep learning.
3. How are the prepared images labeled before training? As described in Section 2.2, the procedure is automated. If this is true, the authors have already developed a physical model for the classification, and a deep learning model is no longer necessary: if the physical model in Section 2.2 is taken as the truth, the deep-learning-based one can never beat it. Thus, the preparation of the image classification has to be discussed more thoroughly.
4. Neither the model development nor the results are discussed in detail. For an AMT article, readers expect thorough descriptions of the techniques and results so that the methods can be fully understood and reproduced. The current manuscript is very concise, which makes the method difficult to evaluate.
5. The manuscript mostly discusses the authors' own algorithm. How does the current algorithm compare with traditional ones? Does the deep learning algorithm show any advantages over conventional approaches? In other words, the current model should be compared against similar ones.
Citation: https://doi.org/10.5194/amt-2022-217-RC1
RC2: 'Comment on amt-2022-217', Anonymous Referee #2, 10 Oct 2022
Review of “A cloud screening algorithm for ground-based sun photometry using all-sky images and deep transfer learning”, by Eric A. Wendt et al.
The manuscript introduces a new machine learning approach for cloud screening in sun photometer measurements using an additional low-priced all-sky camera. It describes the setup of the camera system and the machine learning approach in some detail. A training dataset from three different camera systems is presented, including a validation using the parts of the data not used for training.
Major points of criticism:
- In my opinion, the motivation for introducing this system is lacking. It may be low cost, but it still requires additional instrumentation, whereas cloud screening in sun photometer data is usually done using the sun photometer data itself (by spatial or temporal variation tests, e.g., for the AERONET network).
- The manuscript presents only a limited validation of the method and no comparison to established methods.
- The assumed better instrument independence of your approach, compared to standard methods, is at least questionable unless you show clear evidence. You are using compressed camera images of variable quality, and you yourself state that this has a clear effect.
- At the same time, I doubt that the remaining contribution, namely the setup of a low-cost camera system from standard parts and the adjustment of an existing machine learning technique for general imagery (VGG-16, University of Oxford) to the all-sky image cloud detection task, is sufficient to justify publication in an atmospheric science journal like AMT.
In the present form, I recommend rejection of the manuscript. Resubmission after extending the validation and adding a comparison to other methods could be interesting.
More specific:
- Reading the first part of the introduction, I already ask myself why you do not announce a comparison of your new method to a standard one based on the sun photometer itself. An improvement could justify your approach.
- In line 47 you mention the "instrument-specific nature" of existing techniques, but in what follows you show that your method is itself very much instrument-specific. In contrast to your method's dependence on the camera system, the mentioned existing method is AOD-based and thus should be, by design, instrument independent.
- In lines 56-104 you describe a long list of existing machine learning approaches for analyzing sky images for cloud classification and detection. I am missing reasons why the community would consequently need your new technique. That no other approach has been used in the context of cloud detection for sun photometer purposes does not seem sufficient.
- Lines 124ff: You list the imagery from the three camera systems you will use without stating the image format provided. For WSISEG I could check that it is PNG format. This already casts doubt on the instrument independence of your method, since considerable processing and different types of compression have been applied to the data.
- Lines 156ff: The lengthy description of the partial automation of the preparation of the "truth" training data is confusing. You should state at the beginning of this paragraph what is "manual" and what is "automated".
- Table 1: What is a "sample"? One image, isn't it? This all sounds like a small dataset. More problematic, a small dataset with just 300 samples makes it impossible to compare to other methods validated in other, more or less specific, situations.
- Table 2: Quite a few of the numbers here do not seem "good". The problem is that you never state what "sufficient" or "good" would be, nor how limited other techniques are.
- Line 263: The statement "performed well relative to prior AOD screening algorithms" sounds very soft and is not corroborated by any shown data, neither for your own "prior algorithms" nor for standard methods of the community (AERONET). The word "well" is used several more times further down without any supporting data.
- In your limitations section you honestly state important points, but the manuscript does not provide the necessary remedy or discussion. You depend on image processing steps that are not within your control, i.e., camera configuration (white balance, contrast and color enhancement, compression, …), which makes your method instrument-specific! And the selection of your small validation dataset (e.g., without situations of high aerosol load) makes your scores hard to compare to other methods' results.
Citation: https://doi.org/10.5194/amt-2022-217-RC2
Viewed
- HTML: 475
- PDF: 179
- XML: 46
- Total: 700
- Supplement: 98
- BibTeX: 48
- EndNote: 49
Eric A. Wendt
Bonne Ford
John Volckens