Regional validation of the solar irradiance tool SolaRes in clear-sky conditions, with a focus on the aerosol module
Thierry Elias
Nicolas Ferlay
Gabriel Chesnoiu
Isabelle Chiapello
Mustapha Moulana
Download
- Final revised paper (published on 08 Jul 2024)
- Preprint (discussion started on 17 Jan 2024)
Interactive discussion
Status: closed
- RC1: 'Comment on amt-2023-236', Anonymous Referee #1, 07 Feb 2024
The authors validate SolaRes (Solar Resource estimate) solar irradiance (including multiple components of solar irradiance) in clear-sky conditions, using AERONET spectral aerosol optical depths (AOD) as input to the SMART-G radiative transfer model and comparing against ground-based solar irradiance measurements. The authors use CAMS-NRT to show that SolaRes can be applied more broadly, globally. The authors present statistics indicating good agreement between SolaRes and the measurements when AERONET spectral AOD is used as input. Because this region of the planet tends to have minimal aerosol loading, the statistics presented (even though they are normalized) are heavily skewed towards low aerosol loading days. I recommend publication after revision.
Major feedback:
I think the authors need to spend some time copy-editing for grammar/English peculiarities.
The paper appears to be a simple validation (which is a perfect fit for AMT) of SolaRes results, with AERONET AOD and trace gases as input, against in-situ irradiance observations. After reading the abstract, the methodology was not immediately clear, as the only statement regarding validation was “Measurements for validation are made at two sites in Northern France.” What are you measuring, and which instruments are you using to make these measurements?
Considering that a major point of this research is to show that SolaRes can reproduce irradiance regardless of atmospheric loading, you really should constrain your errors by AERONET AOD or AOD/mu0. E.g., errors are w for 0.0<AOD<0.15, x for 0.15<AOD<0.5, y for 0.5<AOD<1.0, and z for AOD>1.0. Otherwise, you are mostly presenting evidence to suggest that SolaRes works well when aerosol loading is low, because it is low in the mean. Figure 4 is a good example of a plot that shows how both aerosol loading (AOD) and aerosol type (ANG) are affecting your results, but this needs to be done with the errors using AERONET (and CAMS-NRT if you plan on extrapolating these results globally).
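For illustration, a minimal sketch of such AOD-binned scores (hypothetical variable and column names; `df` is assumed to hold collocated records of simulated GHI, observed GHI and AERONET AOD at 440 nm, none of which are named in the paper):

```python
# Minimal sketch: bin model-minus-observation errors by AERONET AOD.
# Assumes a pandas DataFrame `df` with columns 'ghi_sim', 'ghi_obs', 'aod_440'.
import numpy as np
import pandas as pd

bins = [0.0, 0.15, 0.5, 1.0, np.inf]
labels = ["0.00-0.15", "0.15-0.50", "0.50-1.00", ">1.00"]

df["err"] = df["ghi_sim"] - df["ghi_obs"]        # model minus observation
df["aod_bin"] = pd.cut(df["aod_440"], bins=bins, labels=labels)

scores = df.groupby("aod_bin", observed=True)["err"].agg(
    MBD="mean",                                   # mean bias difference per bin
    RMSD=lambda e: np.sqrt(np.mean(e ** 2)),      # root-mean-square difference
    N="size",                                     # number of samples per bin
)
print(scores)
```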
Considering how much high-AOD data is screened out by the L&A cloud screening (Fig. 2, DifHI), maybe just remove this technique from the paper, with an explanation that the screening removed too many high-AOD days? This could significantly shorten the paper.
Although the authors do tell us that this is a regional validation (it is in the title), I can’t help but think that this would be significantly more impactful if other sites were also used for validation.
Minor feedback:
P3, L65: “fastened” implies adhering to another object. “SMART-G run time is hastened…” makes more sense here.
P4, L113: “In the same field, AERONET aims to evaluate the aerosol radiative forcing, partly counteracting the greenhouse warming”.
This statement doesn’t make sense for multiple reasons. As far as I understand it, AERONET can be used to account for aerosol direct radiative forcing only at local scales, which has nothing to do with counteracting greenhouse gas warming. Additionally, as far as I know, AERONET is primarily used to validate different satellite aerosol remote sensing algorithms.
P5, L115: This sentence doesn’t make sense unless the word “are” was added by mistake.
P5, L117: This statement depends on viewpoint. I recommend you modify it such that the statement is unambiguously correct, e.g., “From a radiation perspective, one of the main impacts of aerosols is to attenuate…”. This still doesn’t address indirect effects, but this paper is about clear-sky anyways.
P6, L167: I recommend explaining what you mean by cosine errors at low sun angles. Do you mean plane-parallel radiative transfer errors?
P6, L211: You should provide a reference for aerosol temporal variability, since you assert that it is not highly autocorrelated. I think you will find that AOD tends to have fairly high autocorrelation.
P6, L217: “..with possible inconvenient on solar resource precision.”
I don’t know what you are implying here. Do you mean that the L1.5 AERONET inversion data precision is insufficient? I think you will find that the drivers of error for AERONET inversions will be scattering geometry and optical loading. Good scattering geometry will only be found when the sun is low in the sky (morning or evening).
P6, L220: A 2013 citation of AERONET SSA uncertainty can not be using V3 AERONET data. I believe things are quite a bit different now as compared to V2. I’d also recommend only using AERONET SSA if AOD is >0.2 or 0.3, as SSA inversions can become rather unreliable at lower aerosol loading.
P7, L256: “… by interpolating the aerosol extinction properties at 1 minute” maybe should be “…by first interpolating the aerosol extinction properties to the same 1-minute cadence.”.
P9, L340: Couldn’t this use of plane parallel geometry dominate your errors in real-use scenarios when the sun is low in the sky?
P10, L369: Which two OPAC models, and how are these chosen?
P10, L381: AERONET does not observe anything close to 280 nm or 4000 nm. Wouldn't this significantly affect results if aerosol loading was elevated? I assume that you are just using OPAC model parameters at the other wavelengths (280–4000 nm), but this needs to be explicitly stated.
P10, L384: I think you mean “The vertical profile of AOT decreases exponentially with a scale height of 2 km”.
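For clarity, the presumably intended relation is the standard exponential profile, with $\tau_0$ the total column AOT and $H$ the scale height (a sketch, not necessarily the paper's notation):

$$\tau(z) = \tau_0 \, e^{-z/H}, \qquad H = 2\ \mathrm{km},$$

where $\tau(z)$ is the AOT of the column above altitude $z$.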
P11-P12: Your page numbering here is not working.
P12: The Canary Islands are on the northern edge of the Saharan dust transport pathway, your northern France sites are not. They will likely see much higher aerosol loading than what you will.
P14, L467: Another reason to constrain differences by AERONET AOD.
P14-15: Table 3 and Figure 1 both seem to indicate that the L&A screening is removing significantly more outliers than the Garcia method, and crucially more than AERONET. The Angstrom exponent standard deviation is probably not useful unless AOD is greater than 0.15 (due to propagation of errors), but your cloud screening is clearly removing at least some elevated aerosol cases. If you overlay the histograms of AOD for the three different AOD filters in Table 3 (see the sketch below), you may see this as well. I'd recommend just removing the L&A analysis.
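A minimal sketch of that overlay (hypothetical array names; `aod_aeronet`, `aod_garcia` and `aod_la` are assumed to hold the AOD values retained by each of the three filters in Table 3):

```python
# Minimal sketch: overlay normalized AOD histograms for three screenings.
import numpy as np
import matplotlib.pyplot as plt

bins = np.linspace(0.0, 1.0, 41)
for aod, label in [(aod_aeronet, "AERONET screening"),
                   (aod_garcia, "Garcia screening"),
                   (aod_la, "L&A screening")]:
    plt.hist(aod, bins=bins, histtype="step", density=True, label=label)

plt.xlabel("AOD at 440 nm")
plt.ylabel("Normalized frequency")
plt.legend()
plt.show()
```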
P16, L536: If you are summing from i=1 to N, I would think you should divide by N, not nb.
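For reference, a sketch of the standard normalization the comment refers to, assuming $N$ paired simulated and observed values $x_i^{\mathrm{sim}}$ and $x_i^{\mathrm{obs}}$:

$$\mathrm{MBD} = \frac{1}{N}\sum_{i=1}^{N}\left(x_i^{\mathrm{sim}} - x_i^{\mathrm{obs}}\right), \qquad \mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(x_i^{\mathrm{sim}} - x_i^{\mathrm{obs}}\right)^{2}}.$$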
P22, L716: I don’t know how this statement is supposed to read, but it is not correct as written.
P23, L740: What surface albedo from MODIS? Integrated, SW, LW + SW?
P24, L751: I thought DNI was computed by using ancillary input AOD and modeling the incident radiation field, or through observation?
P24, L754: I would change this to “These parameters can not be provided by observations alone, and the direct-sun measurements only partially describe the necessary input aerosol properties.”
P24, L750: You need a citation or evidence to suggest that there is a high time variability of aerosol properties.
P24, L759: This should read “..with a time resolution coarser than…”
P24 L788: You should identify the two OPAC models used earlier, most readers are not going to want to search down the text to find it.
P25, L821: AERONET provides information on aerosol size distribution (wavelength independent) and aerosol real/complex refractive indices. How are you extrapolating this information to the full spectral range (280–4000 nm)?
P25, L825: I would change this to “…, with possible negative implications for solar resource precision.”
P27, Figure 7: I would plot the errors (model minus observation) as a function of AOD, using the z axis (color) to constrain by Angstrom exponent; a sketch follows.
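A minimal sketch of that figure (hypothetical array names; `err`, `aod` and `ang` are assumed 1-D arrays of model-minus-observation error, AERONET AOD and Angstrom exponent):

```python
# Minimal sketch: error vs AOD, colored by Angstrom exponent.
import matplotlib.pyplot as plt

sc = plt.scatter(aod, err, c=ang, s=8, cmap="viridis")
plt.colorbar(sc, label="Angstrom exponent (440-870 nm)")
plt.axhline(0.0, color="k", linewidth=0.8)   # zero-error reference line
plt.xlabel("AOD at 440 nm")
plt.ylabel("Model - observation (W m$^{-2}$)")
plt.show()
```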
P28-29: I think you could shorten your conclusion section.
P28, L915: I think Figure 4 is the only direct evidence you have shown that aerosol variability is important here. The statistics will wash away your high aerosol events.
P28, L948: You may want to make the following change: ‘…,but not the aerosol absorption nor the angular…’
Citation: https://doi.org/10.5194/amt-2023-236-RC1
- AC2: 'Reply on RC1', Thierry ELIAS, 20 Apr 2024
- RC2: 'Comment on amt-2023-236', Anonymous Referee #2, 19 Feb 2024
General Comments:
This manuscript compares the clear-sky solar irradiance simulated by SolaRes (solar resource estimation tool) at two sites with ground-based solar irradiance observations, evaluating the impact on the simulation results of different cloud-screening procedures and of aerosol optical characteristics incorporated from different sources. The study indicates that SolaRes performs well in the study area when using AERONET AOT. The research content aligns with the publication scope of the Atmospheric Measurement Techniques journal, and publication is recommended after revisions.
Specific Comments:
1) The English grammar and structure of the manuscript need further improvement. Excessive forward and backward references make the article difficult to read. The abstract contains a lot of content but does not clearly explain the purpose of the research.
2) When assessing the impact of aerosol optical properties on the simulated results, the aerosol scattering phase function and single scattering albedo (SSA) are derived from different aerosol models. Because these properties are influenced by various factors such as aerosol size distribution, chemical composition, hygroscopicity, and morphology, and because their values vary with altitude, a thorough analysis of the corresponding errors should be conducted.
3) The two sites used by the authors for evaluation are relatively similar and both belong to regions with low aerosol loading, thus their representativeness is limited. If observational data from other sites with different aerosol concentrations and sources could be included, it would contribute to a better assessment of the application of SolaRes.
4) Line 793: “Small/larger aerosol model” is not a common expression.
Citation: https://doi.org/10.5194/amt-2023-236-RC2
- AC1: 'Reply on RC2', Thierry ELIAS, 20 Apr 2024
- RC3: 'Comment on amt-2023-236', Anonymous Referee #3, 20 Feb 2024
The authors validate the solar irradiance tool SolaRes with global, direct, and diffuse irradiances measured at two platforms, one located in Lille, the other in Palaiseau. An additional parameter study shows the influence of the AOT selected from different data sources on the simulated irradiance.
The manuscript could have the potential to contribute to the understanding of discrepancies between simulated and measured irradiances. However, the wording is imprecise, the reader often has to work out what the authors wanted to say, and many "why" questions (see below) remain unanswered. This makes it difficult to assess the content of the manuscript. I therefore wavered between rejection and major revision but finally agree with the other reviewers: I recommend publication after major revision.
(1) The text is written in a clear format. The grammar and spelling need to be checked. Unfortunately, there is a large amount of missing information per sentence, leaving the reader with more questions after reading than before. Additionally, the text suffers from imprecise wording, which impairs comprehension in many places.
An example from L 575:
“The comparison with the CMP11 shows a better score in MBD and a worst score in RMSD.” → The comparison of which parameters? Better/worse score than what?
The authors will find many other examples below, but I have not listed them all. I recommend that the authors go through the text and check each sentence for comprehensibility or consult an uninvolved person for doing so.
(2) The introduction into the context is incomplete.
The authors provide a great description of the abilities and advantages of the SolaRes tool. However, most advantages relate to 3D radiative interactions, i.e., to the presence of clouds, strong surface albedo contrasts, shadowing effects, etc. Clouds are not considered in the investigations (comparisons are done at clear sky), and the reader does not get any information about the surface albedo and possible albedo contrasts and shadowing effects at the two measuring sites, which could influence the investigation.
Thus, most given advantages are of minor importance for the manuscript and the reader is left with questions, some of them being fundamental:
Why does the model have to be validated? What has already been done to validate the SolaRes tool? What has been done so far with SolaRes? Please provide a brief literature review.
What is the novel idea in comparing the simulated with measured clear-sky irradiances? Why is it important to compare clear-sky irradiance simulations with measurements?
Why do the authors validate the tool only in clear-sky conditions? It would become more interesting if the authors also considered cloud effects. Clear-sky conditions prevail only at very few times of the year (at least in the mid-latitudes). Additionally, many radiative transfer models can simulate clear-sky irradiances well enough. What is the advantage of doing it with SolaRes?
Why is it important to simulate clear-sky irradiances at 1-minute intervals? Downward irradiances in clear skies mainly change with SZA and can easily be interpolated from simulated irradiances at sampling intervals much larger than 1 min.
Why do the authors refer to 3D effects in the motivation part and compare simulated and measured radiation intensities on the basis of 1D simulations?
(3) Limited description of causalities.
The cloud filters influence neither the measurements nor the differences between measured and simulated irradiances; they only filter the data set. Comparison values like averages, differences, deviations, … change in relation to the filtered data set. The question is not whether and how the comparison values change when using a different cloud filter; the question is why the differently filtered data sets show different discrepancies between simulated and measured irradiances. At which AOT conditions, SZAs, seasons, discrepancies between true and assumed surface albedo, ... are the discrepancies greatest/least? Why?
The authors explain greater uncertainties in the GTI in the winter season by an increased surface albedo. But is that the only reason? How large is the effect of the aerosols/atmosphere? The MODIS surface albedo does not have to be representative of the surface of the two measuring fields. Additionally, the MODIS-retrieved surface albedo has an uncertainty (how large is it?). Can the authors provide local surface albedo measurements? How large is the discrepancy between the MODIS-derived and locally measured surface albedo? Can an increase of the surface albedo from 0.13 in summer to 0.35 in winter be real? Falling leaves as a reason are more of an assertion than a scientifically proven fact.
(4) Realization of the comparative study
Why do the authors need a time resolution of 1 minute in clear-sky conditions? Which quantity is significantly changing in this time resolution?
Comparisons should be done at a 15-min sampling interval, because only the irradiances at sampling points at which the input variables were actually measured are valid data points. The irradiances at data points in between are based on interpolation and can be influenced by it; they are not (directly) based on radiative transfer calculations. Thus, they only multiply the differences between measured and simulated irradiances at the 15-min sampling points and add interpolation errors to the finally estimated RMSD and MBD.
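A minimal sketch of such a restricted comparison (hypothetical names; `df` is assumed to be a 1-min pandas DataFrame with a DatetimeIndex and columns 'ghi_sim' and 'ghi_obs', with a 15-min input cadence as in the comment):

```python
# Minimal sketch: score only the non-interpolated 15-min sampling points.
import numpy as np

sub = df[df.index.minute % 15 == 0]          # keep only true sampling times

err = sub["ghi_sim"] - sub["ghi_obs"]
mbd = err.mean()
rmsd = np.sqrt((err ** 2).mean())
print(f"MBD = {mbd:.2f} W/m2, RMSD = {rmsd:.2f} W/m2, N = {len(sub)}")
```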
Why do the authors compare not only global irradiances (as this is the important parameter for solar panels) but also direct and diffuse irradiances?
L 853: “very efficient compensation between DNI and DifHI” → the authors want to validate their model. If the authors compensate errors with other errors, I don't see the point of their validation study.
Why using two different cloud-screening methods? What do comparisons of RMSD/MBD of both cloud-screening methods tell us about the quality of simulations by SolaRes?
Why are only two sites investigated, which are close to each other? Can the authors expand their investigations to other measuring sites with different aerosol situations?
(5) Comparison with literature values
Comparisons of the RMSDs and MBDs with corresponding values in independent literature appear spontaneously in the manuscript (e.g. L 895). I suggest providing an overview of the comparative literature at the beginning of the manuscript, shortly indicating its data sets (when, where, what is measured) and explaining why this literature is used for comparison and what can be learned from that comparison.
Since I did not see any location and time information in the manuscript for the comparison datasets, I wonder if the RMSDs/MBDs in the literature and those in the manuscript can be compared at all. Were they measured at similar times and locations, which would allow a comparison?
(6) Contradictions
Do the authors contradict themselves? On the one hand, it is said that the radiation field changes very quickly and that a 1-min time interval is therefore essential in the simulation of irradiances. On the other hand, aerosol data are averaged and irradiances are interpolated. However, the averaging and interpolation would not be possible without high uncertainties if the radiation field were highly variable. If the authors can average and interpolate the respective data sets without any concerns (and I am sure they can), then the motivation about the rapidly changing radiation field is not appropriate for this work and the authors have to revise their motivation.
(7) Expansion of the study
I suggest investigating the comparison of simulated and measured irradiances as a function of the aerosol amount. How well does the algorithm work with lower/higher aerosol loading? In addition: how high was the aerosol load at the time of the measurements?
Based on Fig. 5: if the surface albedo has such a strong influence on the comparison result, this parameter must also be included in the study, at least for the investigation of the tilted irradiance. Here, both the surface reflection and atmospheric aerosols influence the detected irradiance. If the aerosols describe the prevailing situation perfectly but the irradiance is still simulated too low, something must be wrong with the assumed surface in the simulation. It is not enough to simply increase the surface albedo: can this albedo increase be true at all for the measuring site?
Further comments:
L 5: affiliation is missing
L12: The objective of the paper is to validate SolaRes (Solar Resource estimate) in clear-sky conditions. → What is SolaRes? Why does it have to be validated?
L 22: A mixture of two aerosol models is required to compute aerosol optical properties. → Which aerosol models? Why are they required?
L 27: SolaRes is validated according to comparison scores found in the literature. → What are these comparison scores? Which kind of data sets are used for comparison (where, what, when…)? Are these data sets comparable (i.e., do they measure at the same time and place, and if not, why should they still be comparable)?
L 29: The circumsolar contribution improves MBD → What is meant by the circumsolar contribution? Why does the contribution improve the MBD? Only after reading the whole manuscript does the reader understand this statement. It would be worthwhile to give all necessary information precisely in the abstract already.
L 33: Why is DifTI measured in a vertical tilted plane?
L 38: Complementary information on angular scattering and aerosol absorption. → Which kind of information?
L 40: The uncertainty on the data source has a significant influence. → Which kind of uncertainty? Which data sources? How large is the influence? Why is there an influence?
L 42: RMSD in GHI still remains slightly smaller than state-of-the-art methods. → What are state-of-the-art methods?
L 52: Solar radiation incident on the collecting system is one of the main drivers of.. → solar irradiation is the only driver.
L 56: ... → If there are more important dependencies, please mention them. Otherwise use only one point.
L 58: Solar resource components → which ones? Please be more precise.
L 59: for any solar plant technology → Only the alignment, or are there other parameters to consider? Please specify.
L 59: finest time resolution → Which is the finest, and why is it required?
L 60: SolaRes consequently suits many applications from research to industrial fields. → Which ones? Please be more precise.
L 62: Until now, a physical radiative transfer code was rarely used to respond to industrial needs in solar energy → I am sure there are more than a few studies on the efficiency of solar power plants in the radiation field. Please refine the literature review.
L64: abaci or look-up tables → Which ones? Please provide references.
L 67: can even be simulated in a complex physical environment embedded in a realistic changing atmosphere, even considering 3D interactions between solar radiation and the environment. → Which ones? Please provide references.
L 70: precision on solar resource → I do not understand this. Please revise.
L 73: 15% error in simulated electrical power produced by PV could be avoided by computing spectrally-and-angularly refined irradiances, → I do not understand this sentence. Please revise.
L 75: be ranked in the class A of the solar resource model → What does class A mean? Please specify
L 79: which can be both of importance in other fields such as vegetation processes. → Please be more precise.
L 82: but which is consistent with computations of solar resource parameters in any panel orientation. → I do not understand this part of the sentence. Why "but"?
L 84: example Gueymard and Ruiz-Arias [2015] remind that circumsolar contribution is not considered by the 24 presented models. → Is it not considered in DNI? please be more precise.
L 86: paper presents the validation of SolaRes → Why does it have to be validated? What has been done so far with SolaRes? Please provide a brief literature review.
L 97: However this is not the case for the clouds. → I do not understand what the authors want to say. Please be more precise.
L 99: The clouds. → aerosols affect the clouds, the cloud should not affect the irradiance
L 100: overcast → cloudy
L 100: Many methods are presented in literature → Please mention the most important ones.
L 104: we select a cloud-screening method missing a minimum number of clear-sky moments and representing the full AOT variability, and another cloud-screening method avoiding residual cloud influence but also missing some AOT variability. → Please shortly specify the two methods.
L 107: moments → What is a moment?
L 108: The field of study of solar energy benefits of other research areas such as the climate studies. → What do the authors want to say? Please specify the sentence.
L 114: This paper presents a radiative closure study as two categories of independent measurements are related by a radiative transfer code → If they are measured at the same place/time, then they are related without the radiative transfer code. If they are not, then they are not related even if the authors use a radiative transfer code. Please revise the sentence.
L 116: The validation is performed on data sets acquired during two years at two sites of northern France. → Where? When? How long? Please specify.
L 118: The main impact of aerosols is to attenuate the direct component of the solar radiation incident at surface level → The aerosols also scatters the radiation. Please revise the sentence.
L 119: Input spectral AOT constrains efficiently this impact → This sentence is not understandable in this context. Please revise.
L 120: it depends on aerosol load and nature, aerosol nature driving the AOT spectral dependency → I do not understand this, please revise
L 122: A sensitivity study is performed which shows the impact of aerosol models reproducing the input spectral AOT. The data source is also evaluated by changing to a global product. → Please provide a context and specify.
L 125: describes the observational and modelling data sets → Please specify
L 129: radiative factors → please reword
L 130: and the water wapour content → Why is this parameter also investigated?
L 133: 1) the hypothesis on main aerosol nature, 2) the aerosol data source. → please specify
L 134: the Copernicus Atmospheric Monitoring Service (CAMS), assimilating satellite data sets to describe air quality, is also used here → Please provide a context why CAMS is also studied here.
L 141: either → or
L 158: relative stability → which value? Which parameter is stable?
Sect. 2.1.1: What is the relative/absolute uncertainty of the measured quantity? Time resolution, sampling rate of the instrument? Is there a time offset between all three instruments which can influence the comparison results?
L 162: uncertainty in shadowing device → Please specify the uncertainty.
L 162: winter gaps exist in the data time series → When? How long? How often?
L 163: in Delft (Netherland) for a recalibration (by Kipp and Zonen) or in M’Bour (Senegal) to be used as references for calibration of local instruments. → Are there references available about both calibrations? Can the authors provide the estimated calibration results for both instruments?
L 178: Michalsky et al. [1999] show a possible range of 30 W/m2 (> 5%) in GHIobs between unshaded pyranometers because of cosine errors, and that uncertainty is multiplied by 2 to 3 with unshaded pyranometers. → I do not understand this sentence. Please revise.
L 182: over 47 days → when?
L 186: which shows instrument performance stability, → I do not understand this, please revise
L 195: Uncertainty requirements → These uncertainties are required, but are they also fulfilled?
L 201: at both sites → which ones?
L 205: time resolution → resolution or sampling?
L 206: We perform 15-min averages → Below and above, it is stated that the aerosol situation changes quickly. Why do the authors use averages if only the aerosol situation at the time of the comparison between simulated and measured irradiance is required?
L 212: models → parameter. Which ones?
L 212: of aerosols → which parameter?
L 214: we choose for validation of SolaRes (Sect. 5) to rely on AOT acquired at around 3 minute resolution. → I do not understand this part of the sentence.
L 220: The option “hybrid” radiance products is chosen. → What does that mean?
L 227: a near real time service → what does that mean? Please specify.
L 229: considering the forecast mode between the two 12-hour runs → I do not understand this in the context of the first part of the sentence.
L 231: Angstrom coefficient is not defined.
L 232: infer AOT at both 440 and 870 nm → How large are the uncertainties when inferring AOT instead of measuring directly at the desired wavelengths? How large is the impact of this uncertainty on the simulated irradiances?
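For context, such an inference presumably relies on the Angstrom law (a sketch; $\alpha$ is the Angstrom exponent and $\lambda_0$ a reference wavelength):

$$\tau(\lambda) = \tau(\lambda_0)\left(\frac{\lambda}{\lambda_0}\right)^{-\alpha},$$

so any uncertainty in $\alpha$ propagates directly into the AOT inferred at 440 and 870 nm.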
L 234: of ~50% in AOT (0.10 at 440 nm, and 0.04 at 870 nm), and of 25% → Why are there such large discrepancies between CAMS and AERONET?
L 236: but for Germany and the CAMS reanalysis data set → I do not understand this part. Please revise.
L 238-244 → How were the additional parameters (e.g. WVC, ozone content, albedo) measured/derived?
L 240 input data → input data for the model
L 241: the surface albedo → How representative is a surface albedo with such a large footprint (how large?) of the local surface albedo at the measuring sites? How large are the uncertainties of the CAMS-retrieved surface albedo? How is it derived? Please specify briefly. Which kind of surface albedo? Black-sky albedo?
L 243: Daily averages are computed, varying between 0.12 in November-December and 0.16 in June-July at Lille and Palaiseau. → Are these values representative for both measuring sites?
L 248: AsoRA method → please specify this method briefly.
L 249: What is a common input data set? Please specify.
L 254: time variability → Which variability is changing so fast at clear sky conditions that a 1 min time resolution is required?
L 257: aerosol extinction properties → which parameters? Please specify.
L 285: how is o defined?
L 288: according to the 'strict' definition given by Blanc et al. [2014] → please specify
L 289: DNIpyr as it is observed by a pyrheliometer. → How is it observed (comparing the instrument's field of view (FOV) to the FOV of the sun disk)?
L 291: Underestimation of DNIobs by DNIstrict → DNI_obs and DNIstrict are two irradiance quantities. The one cannot underestimate the other one.
Equ. 6: lambda_? → lambda sup.
L 297: Which extra-terrestrial solar irradiance spectrum is used here?
L 298: which can be decomposed → include “under clear-sky assumption”.
L 311: How is the Earth sphericity taken into account? Please specify
L 318: The comparison scores of what?
L 360: Please specify g-points.
L 361: Surface reflection is modelled by the surface albedo, considered spectrally independent. → Surface reflection and surface albedo are two surface reflection quantities; the one cannot be modelled by the other. Do the authors want to say that they assume a Lambertian surface?
L 365: Kato parameterisation → I am not an expert in this topic. Can you give more information here?
L 369: two OPAC aerosol models AM1 and AM2 → Why do the authors use these models? What are these models? Please provide more information.
Eq. 12 a/b : Summarize both equations in one and use only lambda.
L 379: the other aerosol optical properties → You should specify all "other aerosol optical properties".
L 382: at which four wavelengths?
L 383: inter and extrapolated → from which to which wavelength?
L 383: other wavelengths → which ones?
L 385: height → resolution/sampling interval
L 389: conditions, when aerosols affect the surface solar irradiance but not the clouds. → see above
Sect. 4: The authors should say at the beginning of Sect. 4 why the cloud-screening methods are required. The information is given later but should be moved to this place.
L 392: evalutions → situations
L 394: expected to show contrasted results → why do both methods show different results? Why do the authors use these two methods?
Eq 13: How do the authors estimate TPS, and FPS? Which kind of parameter is used to calculate TPS and FPS? Are these quantities based on irradiance measurements, all-sky images ... ? Which time resolution/sampling interval is used for the parameter used to calculate TPS/FPS?
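One plausible reading of these scores, against a reference clear-sky classification, uses confusion-matrix counts ($TP$, $FP$, $FN$, $TN$); these definitions are an assumption for illustration, not ones stated in the paper:

$$\mathrm{TPS} = \frac{TP}{TP + FN}, \qquad \mathrm{FPS} = \frac{FP}{FP + TN}.$$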
L 409: 24% → 24% seems very low. Why is the value so low? Where are the "problems" of the Garcia/L&A methods?
L 416: It's worth mentioning that the Garcia method relies on collocated AOT information, which enables it to better detect the presence of clouds, particularly for higher aerosol loads. → I do not understand this sentence. Please revise.
L 418: the tests are adjusted and relaxed → How? What do the authors mean by "relaxed"?
L 420: The first two tests remove obvious cloudy moments characterized → What is the input? What is the output? What are moments?
L 421: what is the normalization factor?
L 421: Definition of threshold values → Please provide the value of the threshold.
L 433: assuming constant AOT during the day → Can the authors assume a constant AOT during the day?
L 437: for each iteration → Which quantity is iterated over?
L ???: However, the correlation coefficient is only 0.20, which is lower than the value reported by Garcia et al. [2014]. Additionally, the correlation coefficient for bGHI is only 0.30, which is significantly smaller than the value of Garcia et al. [2014]. → How large are the reference values? Please refer to the table. Why are the authors' values lower?
L 442: 14 to 16% of the moments can be declared clear-sky by Garcia in 2018-2019 at Lille and Palaiseau, and only 8 to 10% by the stricter L&A cloud-screening → How do the moments that are found by Garcia but not by L&A differ from the moments that are found by both methods? Are all moments found by L&A also found by Garcia?
L 442 – 450: What is 100 %, i.e. the reference? How many moments (what is a moment?) are classified as correct/wrong?
L 454: Part of these differences → How much? What is the cause of the other part?
L 460-465: What are these multiplication factors? What do they imply?
L 463: 1.7-2.0 → Please use here a multiplication factor so that the value can be compared with both other factors.
L 465 increases by 420-450 W/m² → what is the reference?
L 468: It is interesting to note → Why is this interesting? The diffuse radiance shows little variation in clear-sky conditions.
L 469-470: Garcia clear-sky, L&A clear-sky → please reword
L 470: 113 % → Why is it larger than 100%? What is the reference?
L 489: aerosol properties → which ones?
L 493: And what is the seasonal variation of the angstrom coefficient?
L 499: atmospheric properties that are most relevant for clear-sky radiative transfer simulations → which ones?
L 575: The worst score in RMSD is explained by the seasonal influence, → I do not understand this as a reason for a large RMSD.
L 578: These values of RMSD are similar to the RMSD of 1.9% between the observations themselves. → ?, please revise
L 580: be explained by the difference between the observations themselves. → I do not understand this as a reason.
L 583: The cosine error of the unshaded CMP11 pyranometer may be responsible for this discrepancy. → How large is the cosine error? What are the SZAs in the data of Table 4? Can the authors verify their statement by examining the discrepancy as a function of the SZA?
L 589: This is certainly caused by clouds in the sky vault but undetected by the Garcia cloud-screening as 1) the L&A screening procedure gets rid of these points, consistently with its lower FPS by Gueymard et al. [2019], and 2) AERONET Level 2.0 provides values of AOT and WVC all day, meaning that no clouds are seen in the solar direction. → Couldn't a large amount of aerosols also be the reason?
L 621-624: Can DNI_obs contain some circumsolar contribution? How much are RMSDs changing in case simulated direct irradiance + circumsolar contribution is compared with DNI_obs?
L 637: 1) DifHI depends on the distinction between scattering and absorption, while DNIstrict depends only on extinction → I do not understand this sentence and it is not clear why this statement is a reason for the large RMSD.
L 638: DifHI depends on surface reflection → DifHI does not only depend on surface reflection. Please revise.
Tab. 4, 5, 6, 7 and all others → Please indicate, e.g. in the caption, what MBD and RMSD were calculated from.
L 656: It must be noted that mean solar resource parameters remain unchanged at Palaiseau (Table 2) when adding the AERONET cloud-screening → ?, please revise.
L 663: atmospheric scattering is not in play. → This is a false conclusion, or it could be. Please revise.
L 664: We could then make the hypothesis that the Gschwind et al. [2019] cloud-screening procedure rejects large values of SZA, and mean SZA would be smaller than in our data set (Table 2), explaining the increase in both DirHIobs and DifHIobs and consequently in GHIobs. → This is just a guess; can the authors prove it?
L 665: mean SZA would be smaller → The SZA can also be smaller because Gschwind et al., could measure at other times and days.
L 669: cloud-screening influence is also ~ 15 W/m² → What does that mean?
L 673: is 8 +/- 6 W/m² → On which data set is this value based?
L 675: Delta DifNI_circ: → How is this simulated? Which SZAs?
L 686: Underestimation should be expected → of which parameter?
L 699: I would suggest giving a reason why the pyranometer is tilted by 90°.
L 702: Fig 5.4 → Fig 5.
L 719: Comparison in GTI: What is compared with what?
L 722: is 8.7% and the RMSD is 12.1%. → Why are bias and RMSD larger in the morning? Additionally, I suggest including the numbers in Table 7.
L 731: Comparison for the Sun facing → what is compared with what?
L 733: is partly caused by variability in the effective surface albedo. → And the other part is caused by what?
Fig 5: How large is the impact on GTI if parameters other than SAL, e.g., aerosol parameters, are changed?
Is there a reason why SAL should be three times larger? If MODIS is correct, there is another reason for the higher global irradiance. If MODIS is not correct, then you cannot use this value but must use a locally measured surface albedo.
If the surface albedo has such a strong influence on the comparison result, then this parameter has to be included in this study (manuscript) as well.
L 739: MBD changes from +3.7% in summer to -6.5% in winter (3rd and 4th lines in Table 7) → Why?
L 740: While the surface albedo derived from satellite changes little → Is the satellite-derived surface albedo representative for the measuring site? Which kind of albedo is it? (Black-sky albedo?) How large is the uncertainty in the retrieved surface albedo?
L 742: surface albedo of 0.35 → Is this value representative at this time for the measuring site?
L 745: could be responsible of such differences between a satellite surface albedo and an effective surface albedo for a vertical instrument. → Can the authors calculate the uncertainties in the surface albedo due to shadowing and 3D effects? Which 3D effects? What about a lack of representativeness of the satellite surface albedo for the local surface albedo, anisotropic reflection of the surface, and the comparison of different kinds of surface albedo (the authors need the blue-sky albedo, but I assume they refer to the black-sky albedo) as reasons?
L 746: The differences between winter and summer could be caused by fallen leaves of surrounding trees, in relation with the sun position in the sky. → The authors should estimate the uncertainties of their assumptions and not just make assumptions.
L 760: Given the high time variability of aerosol properties, → How large/fast is it?
L 765: This Section shows the sensitivity of the computed solar resource parameters to the parameterisation of the aerosol properties and also to the aerosol data source. → Please be more precise, here. I am not sure what is what.
L 783: Two aerosol OPAC models → which ones?
L 784: This mixture defines aerosol microphysical properties (size distribution and refractive index) which are processed according to Mie theory to provide the aerosol optical properties as the phase function and the single scattering albedo at any wavelengths. → Please restructure the manuscript. Here are some pieces of information which are required earlier.
L 799: of solar radiation, the sensitivity is mainly caused by the spectral behaviour of AOT. → ?, please revise
L 803: with the model for larger aerosols (Table 8) having a secondary influence. → which influence?
L 811: The impact of what?
L 812: The efficient compensation → Is this the intention of the authors? I think biases in GHI, DNI, DifHI depend on the SZA. Using a different data set whose averaged SZA differs from the actual one will give completely different biases in GHI.
L 816: No combination could change the sign of MBD in GHI to positive. → What does that mean physically?
L 826: 520 time steps → what does that mean?
Sect. 6.2: How large are time offsets between irradiance measurements and AERONET-retrieved aerosol characteristics? Are data pairs with large time offsets useful since the authors have explained that the aerosol situation can change quickly?
L 829: Table 9 shows the comparison scores → What is compared with what?
L 830: from 1.7 to 1.2% with Garcia, and from 1.2 to 0.8% → What is the reference?
L 830: Ruiz-Arias et al. [2013] also make comparisons between observation and computations exploiting Level 1.5 AERONET inverted products with a radiative transfer code, but for smaller mean AOT. → What is the reason to compare MBD, RMSD, ... with values from the literature? Are these comparisons feasible, i.e., are the data sets comparable (same time, same location, same weather/aerosol conditions, same SZA, ...)?
L 833: are similar with RMSD → similar to what?
L 837: high spatial variability → what does that mean?
L 837: two sites → Which ones? Are they comparable with those studied here?
L 838: presents 0.4% difference in MBD between Lille and Palaiseau → What does that mean?
L 840: showing that the simpler approach based on the AOT data set is appropriate to get high precision in DNIpyr. → Why? What is the physical effect behind it?
L 845: The improvement is not significantive in DifHI_pyr → What is improved by what?
L 847: and our scores for DifHIpyr are similar to what is presented for one site, → ?, please revise
L 849: as the surface albedo → Only that? Can't, e.g., uncertainties in the retrieved AERONET product and uncertainties in model parameterizations, aerosol parameterizations, surface parameterization, ... also be a reason?
Table 9 and respective text: Can the authors provide an error estimation for different SZA bins?
Table 9: Please provide a better caption that is understandable on its own.
L 869: computed solar resource components → Please be more precise.
Caption Fig 7: GHI is missing.
L 876: Comparison is performed → Comparison of what?
L 878: has less spatial and temporal resolution → please write again which one. Resolution or sampling?
L 886: “the impact of aerosols in direct surface irradiance is about three to four times larger than it is in global surface irradiance”, → contradiction: Above, the authors stated that aerosols have no influence on direct irradiance.
L 888: Level 2.0 clear-sun cloud-screening → what is this?
L 892: However their simulated DNI overestimates observation, even if their uncertainty source analysis suggests tendency for DNI underestimation, consistently with SolaRes results. → Why is this happening?
L 895: RMSD in SolaRes GHI → RMSD between simulated and which GHI?
L 905-910: Please be more precise
L 913: The first step … → what are the other steps?
L 930: ~1.6%, but for conditions more representative of the mean aerosol conditions. Results are similar with another instrument operating in a slightly restricted spectrum. → ?, please revise
L 934: Under-estimation of DNIobs by SolaRes decreases by 1% to reach a MBD of -1.0%, and the RMSD slightly decreases to reach ~2%. → what is the reference?
L 964: increasing by 0.6 to 1.0% → ?, please revise
L 967: advantage to cover the entire globe for many years, → that is true, but only for clear-skies, a condition that is rarely met in most areas.
L 977: Furthermore, SolaRes in global mode will be tested in all-sky conditions. → Please give one or more reason(s) for doing that.
The conclusions can be shortened.
General errors:
it should be either … or …
resp. → respectively
underestimation, overestimation
an RMSD, an MBD (not a RMSD, MBD)
Citation: https://doi.org/10.5194/amt-2023-236-RC3
- AC3: 'Reply on RC3', Thierry ELIAS, 20 Apr 2024
Thank you for your comments.
Please find attached the answers to your comments, in a PDF file.
Best regards,
Thierry Elias.
Citation: https://doi.org/10.5194/amt-2023-236-AC3