Evaluation of FY-4A/AGRI visible reflectance using the equivalents derived from the forecasts of CMA-MESO using RTTOV
Abstract. The Advanced Geostationary Radiation Imager (AGRI) onboard the FY-4A geostationary satellite has provided visible reflectance data at high spatiotemporal resolution since 12 March 2018. Data assimilation experiments under the framework of observing system simulation experiments have shown the great potential of these data to improve the forecast skill of numerical weather prediction (NWP) models. To assimilate the AGRI visible reflectance observations, it is important to evaluate their quality and to correct the biases they contain. In this study, the FY-4A/AGRI channel 2 (0.55-0.75 μm) reflectance data were evaluated against the equivalents derived from short-term forecasts of the China Meteorological Administration Mesoscale Model (CMA-MESO) using the Radiative Transfer for TOVS (RTTOV, v12.3). It is shown that observation-minus-background (O-B) statistics can reveal abrupt changes related to the measurement calibration processes. The mean O-B differences are negatively biased; potential causes include NWP model errors, unresolved aerosol processes, and forward-operator errors. The relative O-B biases computed separately for cloud-free and cloudy pixels were used to correct the systematic differences under the two conditions. After applying the bias correction method, the biases and standard deviations of the O-B departures were reduced. The bias correction based on ensemble forecasts is more robust than that based on deterministic forecasts, owing to the advantages of the former in dealing with cloud simulation errors. The findings demonstrate that analyzing the O-B departures is an effective way to monitor the performance of the FY-4A/AGRI visible measurements and to correct the associated biases, which facilitates the assimilation of these data in conventional data assimilation applications.
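For orientation, the following is a minimal sketch of a two-class relative bias correction of the kind the abstract describes, applied to synthetic data. It is not the authors' implementation: the array names, the threshold-based cloud flag, and the multiplicative form of the correction are assumptions made purely for illustration.

```python
import numpy as np

def relative_bias(o, b, mask):
    """Mean relative O-B bias over the pixels selected by mask."""
    return np.mean(o[mask] - b[mask]) / np.mean(b[mask])

def correct_background(o, b, cloudy):
    """Rescale the model equivalents (B) separately for cloudy and
    cloud-free pixels so that the mean relative O-B bias of each
    class vanishes over the training sample."""
    b_corr = b.copy()
    for mask in (cloudy, ~cloudy):
        b_corr[mask] *= 1.0 + relative_bias(o, b, mask)
    return b_corr

# Toy demonstration with a class-dependent systematic error.
rng = np.random.default_rng(0)
b = rng.uniform(0.05, 0.9, 10_000)          # simulated reflectances
cloudy = b > 0.3                            # stand-in for a cloud mask
o = b * np.where(cloudy, 1.05, 0.95)        # "observations" with biases
b_corr = correct_background(o, b, cloudy)
print(np.mean(o - b), np.mean(o - b_corr))  # bias shrinks to ~0
```

Whether the correction is applied to B or to O, and over which training period the class statistics are accumulated, are design choices; the paper considers both deterministic and ensemble forecasts as the source of the class statistics.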
Status: closed
RC1: 'Comment on amt-2024-12', Anonymous Referee #1, 05 May 2024
This paper compares the FY-4A/AGRI 0.65-μm visible reflectance (O) with model simulations generated from CMA-MESO forecasts using RTTOV (B). The potential sources contributing to the differences between O and B, such as unresolved aerosol processes and the ice-scattering models, are analyzed.
The paper is relevant to the cloud remote sensing field, as the growing international fleet of next-generation geostationary imagers can be expected to aid our understanding of the diurnal cycles of clouds and aerosols. Well-understood and well-characterized observation biases will therefore be well received by the community. However, the authors make what I think are several unsubstantiated assertions (see my detailed comments). I recommend major revisions before reconsidering for publication. My general and specific comments are below.
General Comments:
- A comparison with model simulations cannot be called an “evaluation”, especially when the model simulations are not as accurate as expected. Currently, the RTTOV forward operator for clouds and/or within the visible and shortwave infrared spectral ranges is still questionable, and the forecasts from CMA-MESO also lack adequate evaluation.
- As in (1), if the authors persist in characterizing the biases of the AGRI reflectance observations by comparing them with model simulations, the performances of the RTTOV forward operator and of the CMA-MESO forecasts should be evaluated first.
- The bias characteristics are not well analyzed. What about the spatial distributions or seasonal variations of the AGRI biases? Do they differ before and after the FY-4A satellite's U-turn at the vernal and autumnal equinoxes?
Specific Comments:
- Lines 16, 22, 33 and 72: The abbreviations (FY, TOVS, and so on) should be spelled out at first appearance in the abstract and in the text.
- Line 85: The Himawari-8 satellite should be introduced, because not all readers know that it is the first of the Japanese next-generation geostationary satellites.
- Line 82: What is the spatial coverage of CMA-MESO, or the region of interest in this study?
- Lines 96 and 117: Here, the authors give two cloud-mask definitions. Which one is used for Tables 1 and 2?
- Lines 201-203: I can’t understand this sentence. Aren’t the “microphysical properties therein” “cloud variables”?
- Figure 6: Readers can hardly identify the differences between the observed and the model-simulated reflectance. The authors should consider using a different colormap or adding panels that show the differences.
Citation: https://doi.org/10.5194/amt-2024-12-RC1
AC1: 'Reply on RC1', Yongbo Zhou, 30 May 2024
AC2: 'Reply on RC1', Yongbo Zhou, 25 Jul 2024
We received another reviewer’s comments on this manuscript, and additional modifications were made. Therefore, this version of the response letter differs somewhat from the previously uploaded one, but your valuable suggestions were respected and the revisions were made according to your general and specific comments. Please take this version as the final one. We are sorry for the inconvenience. A detailed point-by-point response was uploaded in the supplement.
RC2: 'Comment on amt-2024-12', Anonymous Referee #2, 01 Jul 2024
Review of "Evaluation of FY-4A/AGRI visible reflectance using the equivalents derived from the forecasts of CMA-MESO using RTTOV" by Yongbo Zhou, Yubao Liu, Wei Han, Yuefei Zeng, Haofei Sun and Peilong Yu (https://doi.org/10.5194/amt-2024-12)
Recommendation: Major Revisions
General Comments:
In this manuscript, differences between observed visible satellite images from the AGRI instrument onboard FY-4A and synthetic images computed with the RTTOV forward operator package from deterministic forecasts of the CMA-MESO model are discussed, and a bias correction method is proposed. The authors address a relevant question. Fast forward operators have become available in recent years for the hitherto underused visible satellite channels, several services have started monitoring experiments for these channels, and at the German weather service (DWD) visible reflectances have been assimilated operationally since March 2023. So there is a clear interest in exploiting the cloud and aerosol information contained in visible channels for data assimilation, and understanding systematic errors is an important first step.
While the topic is clearly relevant, the methods employed by the authors are in my opinion not really sufficient to get information on the origin of the reflectance errors. The latter could be caused by the instrument (e.g. calibration problems), the model (e.g. deficiencies in the representation of clouds) or the forward operator (e.g. albedo errors or 3D effects). In particular, I think the authors skipped a first, important step: comparing reflectance histograms for O and B. In contrast to O-B statistics, histograms are not sensitive to the location of the clouds (to correct that is the job of data assimilation) and can provide a lot of information (see e.g. Geiss et al. 2021). I would also suggest distinguishing not only between cloudy and clear pixels, but also between regions for which we would expect different error characteristics. Given that the CMA-MESO domain contains the Himalaya and oceans, it could make sense to distinguish between land, sea (lower albedo error) and extreme orography (potentially larger model and albedo errors). And finally, it would be important to exclude cases from the statistics for which it is clear that one cannot get reasonable results. These would, e.g., be pixels for which the BRDF atlas contains contributions from snow and thus no reliable albedo information is available.
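To make the histogram suggestion concrete: a comparison of this kind needs no pixel-wise matching, only the observed and synthetic reflectance fields. A minimal sketch (variable names assumed, not taken from the manuscript):

```python
import numpy as np

def reflectance_histograms(o, b, n_bins=50):
    """Normalized histograms of observed (O) and synthetic (B)
    reflectance on shared bins. Unlike O-B statistics, histograms
    are insensitive to displaced clouds and expose systematic
    differences in cloud cover and brightness directly."""
    bins = np.linspace(0.0, 1.2, n_bins + 1)  # reflectance may exceed 1
    h_o, _ = np.histogram(np.ravel(o), bins=bins, density=True)
    h_b, _ = np.histogram(np.ravel(b), bins=bins, density=True)
    return bins, h_o, h_b
```

Stratifying by land, sea, or steep orography, as suggested above, then amounts to applying the same function to the correspondingly masked pixels.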
The bias correction proposed in the last part of the manuscript is in principle interesting. However, the results are not discussed in a sufficiently clear way, and I do not understand the motivation for the "ensemble" version.
Specific Comments:
- l. 24: How do you mean "negatively biased"? In Fig. 2 and Tab. 1 O-B is in general positive.
- l. 27: The standard deviations are actually not reduced significantly, and that is also not what you would expect from a bias correction.
- Introduction: There are two papers comparing synthetic and observed visible reflectances that should be cited here: Geiss et al. (2021) and Lopez & Matricardi (2022).
- l. 57: "The key assumption [...] is that the model equivalents do not generate systematic biases" Well, this is just not a reasonable assumption! Systematic errors due to the model and the forward operator cannot be assumed to be smaller than instrument/calibration errors. What you can assume is that the different error contributions have different spatial and temporal characteristics (e.g. the jump in Fig. 2 is very likely related to the calibration changes), which can be used to identify likely error sources.
- l. 61: "NWP model errors could be alleviated [] by temporally averaging several instants over a long period of time". Averaging over a long period can help to detect systematic errors, but it cannot "alleviate" them. No DA system works with time-averaged states...
- 2.1: The parameterization for unresolved sub-grid clouds was found to be very important in Geiss et al. (2021). How does CMA-MESO handle this, and are the subgrid contributions included in the input data for RTTOV?
- l. 95 / Fig. 5: For the Baum ice clouds you also need the effective ice particle radius -- which parameterization did you use? Do you have an idea why the effect visible in Fig. 5 is so much stronger than what Geiss et al. found for modified ice optical properties?
- l. 98 / 117: How well does "cloudy" in the CLM product correspond to CWP > 0.01 kg/m2? This is probably rather unclear (no pun intended). The CLM algorithm may even use infrared channels and could therefore "see" clouds that are actually transparent in the visible range (I have not checked that). A more reliable way to define cloudy pixels would be to use reflectance > threshold value (e.g. r > 0.2 as in Geiss et al. 2021) or, even better, reflectance > clear-sky reflectance + threshold value. Exactly the same criterion could be applied to the observed and the synthetic reflectances (see the sketch below).
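A sketch of the symmetric cloud-mask criterion proposed in the previous comment; the threshold values are the examples given there, and the clear-sky reference would have to come from, e.g., a clear-sky forward-operator run:

```python
import numpy as np

def cloud_mask(refl, clear_refl=None, thresh=0.2):
    """Cloudy where the reflectance exceeds a threshold (r > 0.2, as
    in Geiss et al. 2021) or, if a clear-sky reference is supplied,
    where it exceeds the clear-sky reflectance plus the threshold.
    Applying the identical function to O and B keeps the two masks
    consistent by construction."""
    refl = np.asarray(refl)
    if clear_refl is None:
        return refl > thresh
    return refl > np.asarray(clear_refl) + thresh
```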
- End of 2.1: Maybe here would be a good place to add some more information on RTTOV. In contrast to Anonymous Referee #1, I do not think RTTOV in the visible range "is still questionable". No forward operator is perfect, but there are several studies (Geiss et al. 2021; Lopez & Matricardi 2022) that demonstrate the capabilities and discuss the limitations of RTTOV for visible satellite channels. Moreover, RTTOV is used for the operational assimilation of the visible Meteosat SEVIRI channel at DWD, which is a clear indication that it produces reasonable results. While most of these studies are based on the MFASIS solver (Scheck et al. 2016, 2022), the latter is just an emulator for the DOM solver used in this study, so error estimates derived for MFASIS represent upper bounds for DOM errors. (I guess MFASIS coefficients are not yet available for FY-4A.)
- l. 109: It would be good to provide the wavelength.
- l. 116: If you produce output with CMA-MESO every 15 minutes and FY-4A/AGRI starts scanning from the north of the disk to the south at the same times (I am not sure about the scanning strategy), the time difference is probably smaller than 7.5 minutes. Please check...
- Fig. 2: Is "all" = cloudy + cloud-free? Then I do not understand how the bias for "all" can be larger than for cloudy. Or is the third category, "uncertain", missing in the plot? Then it would be good to add it. Moreover, the number of pixels in cloudy / cloud-free / uncertain as a function of time would be interesting.
Maybe this is helpful for the interpretation: If the jump on 9 September is related to a changed calibration factor (the constant used to convert the number of detected photons to a radiance), then the change in reflectance bias in cloudy / cloud-free should be proportional to the mean reflectance in cloudy / cloud-free. The fact that the cloud-free bias increases indicates that either the calibration has in fact become worse on 9 September, or that before, a calibration-related bias compensated a clear-sky bias (e.g. due to an albedo bias).
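To state the proportionality argument formally (a one-line derivation, assuming the jump is a pure rescaling of the observed reflectances, O' = cO, with overbars denoting class means):

```latex
\overline{O' - B} - \overline{O - B} \;=\; (c - 1)\,\overline{O}
\qquad \text{separately for the cloudy and the cloud-free class,}
```

so the ratio of the bias jump to the mean observed reflectance should take the same value, c - 1, in both classes if a calibration-factor change alone is responsible.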
- l. 145: If I understand this correctly, your "ensemble" is just seven deterministic model states for different lead times. This is not what anybody in data assimilation would call an ensemble, and you are definitely not evaluating an "ensemble forecast" (l. 146). This is really misleading, and I also do not see why this "ensemble" should be interesting. If you compute the average of synthetic images for different lead times, you will of course get a blurred version of the 3 h lead-time image and increase the cloud cover. But no operational data assimilation system I know of uses time-averaged states.
Is the point here that in a real ensemble DA system you would use the "real ensemble" for the bias correction, but as you have none available here, you are using the set of deterministic model states because it has similar properties (the clouds are not at exactly the same locations in all of the members)?
- l. 147 / 211: Using the overbar for both the ensemble mean and the spatial average is confusing.
- Table 1 (deterministic forecast): I am missing a better discussion of the results. The cloudy biases are reduced by 40-60%, but in all cases the bias correction actually *increases* the bias for the clear pixels. For SEP 15 & 17 the bias correction changes the sign of the bias for the clear pixels to negative. As the bias is positive for the cloudy pixels, this leads to compensating biases for the "all" category. Why is the bias correction ineffective for the clear pixels?
- l. 253: What does "cloudy" mean for the ensemble? That in the ensemble-mean state CWP > 0.01 kg/m2? So potentially clear member pixels contribute to \bar{B}_cld?
- l. 304ff: There are two YZs -- maybe use the full names.
References:
- Geiss et al. (2021): "Understanding the model representation of clouds based on visible and infrared satellite observations", Atmos. Chem. Phys., 21(16), https://doi.org/10.5194/acp-21-12273-2021
- Lopez & Matricardi (2022): "Validation of IFS+RTTOV/MFASIS 0.64-μm reflectances against observations from GOES-16, GOES-17, MSG-4 and Himawari-8", ECMWF Technical Memorandum, https://doi.org/10.21957/l4u0f56lm
- Scheck et al. (2016): "A fast radiative transfer method for the simulation of visible satellite imagery", J. Quant. Spectrosc. Radiat. Transfer, 175, 54-67, https://doi.org/10.1016/j.jqsrt.2016.02.008
- Scheck, L., Weissmann, M., and Bernhard, M. (2018): "Efficient Methods to Account for Cloud-Top Inclination and Cloud Overlap in Synthetic Visible Satellite Images", J. Atmos. Ocean. Tech., 35, 665-685, https://doi.org/10.1175/JTECH-D-17-0057.1
- Scheck (2021): "A neural network based forward operator for visible satellite images and its adjoint", J. Quant. Spectrosc. Radiat. Transfer, 274, 107841, https://doi.org/10.1016/j.jqsrt.2021.107841
Citation: https://doi.org/10.5194/amt-2024-12-RC2
AC3: 'Reply on RC2', Yongbo Zhou, 25 Jul 2024