Analysis of the measurement uncertainty for a 3D wind-LiDAR
Abstract. High-resolution three-dimensional (3D) wind velocity measurements are of major importance for the characterization of atmospheric turbulence. The use of a multi-beam wind-LiDAR focusing on a measurement volume from different directions is a promising approach for obtaining such wind data. This paper provides a detailed study on the propagation of measurement uncertainty of a three-beam wind-LiDAR designed for mounting on airborne platforms with geometrical constraints that lead to increased measurement uncertainties of the wind components transverse to the main axis of the system. The uncertainty analysis is based on synthetic wind data generated by an Ornstein-Uhlenbeck process as well as on experimental wind data from airborne and ground-based 3D ultrasonic anemometers. For typical atmospheric conditions, we show that the measurement uncertainty of the transverse components can be reduced by about 30 %–50 % by applying an appropriate post-processing algorithm. Optimized post-processing parameters can be determined in an actual experiment by characterizing measured data in terms of variance and correlation time of wind fluctuations. These results allow an optimized design of a multi-beam wind-LiDAR with strong geometrical limitations.
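As an illustration of the kind of synthetic wind data mentioned in the abstract, the following minimal sketch generates a single wind component with an Ornstein-Uhlenbeck process. The correlation time, fluctuation amplitude, and sampling interval are placeholder values, not the parameters used in the paper.

```python
import numpy as np

def ou_wind(n_samples, dt, tau, sigma_u, u_mean=0.0, seed=0):
    """Generate one wind component as an Ornstein-Uhlenbeck process.

    tau     : correlation time of the wind fluctuations [s] (assumed)
    sigma_u : stationary standard deviation of the fluctuations [m/s] (assumed)
    """
    rng = np.random.default_rng(seed)
    u = np.empty(n_samples)
    u[0] = u_mean
    # Exact discrete-time update of the OU process
    a = np.exp(-dt / tau)
    noise_std = sigma_u * np.sqrt(1.0 - a**2)
    for i in range(1, n_samples):
        u[i] = u_mean + a * (u[i - 1] - u_mean) + noise_std * rng.standard_normal()
    return u

# Example: 10 s of data at 100 Hz, 1 s correlation time, 0.5 m/s fluctuations
u = ou_wind(n_samples=1000, dt=0.01, tau=1.0, sigma_u=0.5)
```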
Status: final response (author comments only)
RC1: 'Comment on amt-2023-184', Anonymous Referee #1, 01 Apr 2024
The authors present an analysis of how Normally distributed errors in line-of-sight velocity measurements and turbulence can potentially impact the error in 3D wind estimates for the CloudKite Turbulence LiDAR (CTL). The concept proposed in this work is very important for building wind sensors, and such an analysis is a necessary part of developing the instrument. It is relevant in scope for AMT, provided the analysis is conducted accurately and robustly. However, I am not convinced that the simulated errors are modeled correctly. As such, the analysis becomes circular: the assumed noise characteristics are artificially imposed on both the simulated and experimental data, and the efficacy of the solutions is reliant on the assumed model being accurate.
The central issue with this manuscript is how the wind uncertainty is modeled. I am skeptical that it represents an accurate description of uncertainty in such a wind measurement system. If the authors are correct in how the noise is modeled, the manuscript needs to include a robust justification. If this noise model is incorrect, the analysis needs to be updated so that it accurately models the instrument and its uncertainties.
What follows are four points regarding the content of the manuscript (with one aside on the instrument). The most important point, and the most significant barrier to my approving this for publication, is the first item. Most of the other topics are tangentially related to my concern with the noise model.
Noise model:
Stated in section 2.1:
“As the detection is shot noise limited, a Gaussian distribution is assumed, for which σ = FWHM/(2√(2 ln 2)). Consequently, a detector uncertainty in terms of standard deviation can conservatively be estimated to be σ_det = 0.04 m s⁻¹. Also, the LiDAR system has an internal reference channel which suggests that the detector noise is even below 0.04 m s⁻¹.”
Detectors don’t output velocities. Detector noise manifests as an analog voltage at the output of the detector. That output is typically digitized, stored as a fixed-length time series, and passed through an FFT, after which a peak-finding algorithm is applied; the resulting peak frequency maps to a line-of-sight velocity through the wavelength of the light. I cannot intuitively see how Poisson noise (shot noise is Poisson, not normally distributed) results in Normally distributed, independent line-of-sight velocities. Furthermore, I would expect the uncertainty in velocity to be highly dependent on the amount of aerosol loading in the scattering volume. Typically, peaks in spectra resulting from air motion are very easy to find when the backscatter is high (significantly above variations in the noise floor and those systematic variations in the baseline we all pretend don’t exist in our instruments), but as the peak height drops toward the noise floor, the velocity error tends to increase rather rapidly (and is probably not normally distributed). To increase the signal-to-noise ratio, one typically does not average velocity estimates (as done in this analysis), but instead averages FFT spectra before applying the peak-finding algorithm. Factors that would tend more toward the uncertainty described in this manuscript have to do with the resolution of the system (which means subsequent samples are not generally independent and therefore not effectively suppressed by averaging): terms such as the laser linewidth, the windowing function, and the FFT sequence length.
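To make the processing chain described above concrete, here is a minimal sketch of a generic coherent-Doppler estimator in which power spectra are averaged before peak finding; the wavelength, digitizer rate, and FFT length are assumed values and not taken from the CTL.

```python
import numpy as np

WAVELENGTH = 1.55e-6   # assumed operating wavelength [m]
FS = 100e6             # assumed digitizer sampling rate [Hz]
N_FFT = 4096           # samples per FFT block (assumed)

def los_velocity(blocks):
    """Estimate line-of-sight velocity from digitized detector blocks.

    Power spectra of the individual blocks are averaged *before* the peak
    search, which is how SNR is usually improved, rather than averaging
    independent velocity estimates afterwards.
    """
    window = np.hanning(N_FFT)
    spectra = [np.abs(np.fft.rfft(window * b, N_FFT))**2 for b in blocks]
    avg_spectrum = np.mean(spectra, axis=0)        # incoherent spectral averaging
    freqs = np.fft.rfftfreq(N_FFT, d=1.0 / FS)
    f_peak = freqs[np.argmax(avg_spectrum)]        # peak-finding step
    return WAVELENGTH * f_peak / 2.0               # Doppler relation v = lambda * f / 2
```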
There is a caveat to my comments here: while I have analyzed 3D wind sensor data before (from an instrument not unlike what is described here), that instrument did not employ FMCW. It is not entirely clear to me whether I am missing how FMCW would alter the way noise manifests in the eventual line-of-sight velocity estimate. If FMCW alters this such that the presented noise model is a valid representation of the effects of shot noise, the authors need to robustly show it. As part of this, it would be very helpful if they provided a more thorough description of the FMCW scheme being employed (e.g. the specific modulation scheme) and of how the raw data are processed to obtain a line-of-sight wind measurement.
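For reference, a generic triangular-chirp FMCW relation is sketched below; this is not necessarily the modulation scheme used in the CTL, only an illustration of how range and Doppler terms would be separated in such a scheme.

```python
# Generic triangular-chirp FMCW beat frequencies (illustrative only):
#   f_up   = 2*B*R/(c*T) - 2*v/lam    (up-ramp)
#   f_down = 2*B*R/(c*T) + 2*v/lam    (down-ramp)
# with bandwidth B, ramp duration T, range R, and line-of-sight velocity v.

C = 3.0e8  # speed of light [m/s]

def range_and_velocity(f_up, f_down, bandwidth, ramp_time, wavelength):
    """Separate range and line-of-sight velocity from up/down beat frequencies."""
    v = wavelength * (f_down - f_up) / 4.0
    r = C * ramp_time * (f_up + f_down) / (4.0 * bandwidth)
    return r, v
```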
Dependence on Sampling Rate
The section investigating a dependence on sampling rate does not strike me as an accurate analysis of the tradeoffs in sampling. Inherently, moving to a higher sampling rate will:
- Change the frequency resolution and Nyquist frequency of the FFT
- Change the noise characteristics
An analysis using the same noise model and assuming the same Normally distributed noise characteristics basically assumes that one gets something for free from higher sampling rates. That is not the case. In general, assuming the detector and anti-aliasing filters can support the increased bandwidth, one would expect the noise amplitude to go up, because increased sampling rates bring shorter integration times and comparatively less noise suppression.
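A rough illustration of that tradeoff, with an assumed digitizer rate and FFT length (not the CTL's actual parameters): the number of spectra available for averaging per velocity estimate shrinks as the output rate grows.

```python
# Sampling-rate tradeoff sketch (assumed numbers): with a fixed digitizer
# rate and FFT length, a higher velocity output rate leaves less time per
# estimate, i.e. fewer spectra to average and less noise-floor suppression.

FS = 100e6       # assumed digitizer sampling rate [Hz]
N_FFT = 4096     # assumed FFT length per spectrum

def averaging_budget(output_rate_hz):
    t_per_estimate = 1.0 / output_rate_hz          # integration time per velocity estimate [s]
    n_spectra = int(t_per_estimate * FS / N_FFT)   # spectra available for incoherent averaging
    delta_f = FS / N_FFT                           # frequency resolution [Hz]
    nyquist = FS / 2.0                             # Nyquist frequency [Hz]
    return n_spectra, delta_f, nyquist

# Doubling the output rate halves the number of averaged spectra, so the
# spectral noise fluctuations only drop as 1/sqrt(n_spectra).
print(averaging_budget(10.0))   # ~2441 spectra per estimate at 10 Hz output
print(averaging_budget(20.0))   # ~1220 spectra per estimate at 20 Hz output
```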
Section 5.1 Uncertainty analysis with experimental wind data
The approach described in this section might make sense if my principal concerns about the analysis had to do with the accuracy of the wind simulations. However, I am concerned about the accuracy of the instrument noise model, which is not encapsulated in this analysis at all. From my perspective, this is just another case of synthetic data, and it does not reassure me that the authors are accurately modeling the instrument.
Appendix A: Geometric tolerances
I was somewhat bothered by this statement:
“We expect this to be a negligible source of error since the precise geometric dimensions of the measurement frame can be measured before mounting of the device to the CloudKite balloon. This includes the distances between the telescopes (sidelength), but also the distance and lateral position of the focus.”
This isn’t reassuring, because it just says the authors assume it’s not an issue. Since we all know there is no such thing as exact dimensions, it would be advisable to include an analysis of the tolerances in those dimensional measurements and of how they propagate to uncertainty in the wind measurement. Some description of how the beam pointing is determined would also be helpful (e.g. see Sect. 2.3.2 in Cooper et al. 2016; full reference below). Without that analysis, this just looks like a convenient assumption.
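A tolerance analysis of the kind suggested here could be sketched, for example, as a Monte Carlo propagation of beam-pointing errors through the three-beam retrieval. The beam geometry and tolerance below are purely illustrative and not the CTL's actual dimensions.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical nominal beam unit vectors (three beams converging on a focus)
G_nominal = np.array([unit([0.2, 0.0, 1.0]),
                      unit([-0.1, 0.17, 1.0]),
                      unit([-0.1, -0.17, 1.0])])

def tolerance_mc(wind, pointing_tol_rad=2e-3, n_trials=5000, seed=0):
    """Per-component wind error caused by small, unknown pointing errors."""
    rng = np.random.default_rng(seed)
    errors = np.empty((n_trials, 3))
    for i in range(n_trials):
        # True geometry = nominal geometry plus small random pointing perturbations
        G_true = np.array([unit(g + pointing_tol_rad * rng.standard_normal(3))
                           for g in G_nominal])
        v_los = G_true @ wind                               # measured line-of-sight velocities
        wind_retrieved = np.linalg.solve(G_nominal, v_los)  # retrieval assumes nominal geometry
        errors[i] = wind_retrieved - wind
    return errors.std(axis=0)                               # 1-sigma error per wind component

print(tolerance_mc(wind=np.array([5.0, 2.0, 0.5])))
```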
Comment on the architecture
While not directly relevant to the review of this paper, I am concerned about the minimalist design of this instrument. Using 3 beams to measure a 3D wind vector leaves one relatively blind to unexpected error sources (this is based on personal experience). I would highly recommend that the CTL design include at least one additional beam. By having four line-of-sight velocity measurements, the estimation problem becomes over constrained and it creates a mechanism for detecting unexpected errors and biases in the instrument. This often becomes another source of irritation in the testing process (when you have to track down the errors you forgot to consider), but if you are committed to making an accurate measurement, it’s essential insurance to catch those errors.
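As an illustration of what a fourth beam buys: with four line-of-sight measurements the retrieval becomes a least-squares fit whose residual can flag an inconsistent beam. The beam directions below are hypothetical.

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

# Hypothetical four-beam geometry (rows are beam unit vectors)
G4 = np.array([unit([0.2, 0.0, 1.0]),
               unit([-0.1, 0.17, 1.0]),
               unit([-0.1, -0.17, 1.0]),
               unit([0.0, 0.0, 1.0])])   # additional fourth beam

def retrieve_wind(v_los):
    """Least-squares 3D wind from four line-of-sight velocities."""
    wind, *_ = np.linalg.lstsq(G4, v_los, rcond=None)
    residual = v_los - G4 @ wind          # a large residual hints at a beam error or bias
    return wind, residual
```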
Reference: Cooper, W. A., Friesen, R. B., Hayman, M., Jensen, J., Lenschow, D. H., Romashkin, P., Schanot, A. J., Spuler, S. M., Stith, J. L., and Wolff, C. (2016). Characterization of Uncertainty in Measurements of Wind from the NSF/NCAR Gulfstream V Research Aircraft (NCAR Technical Note NCAR/TN-528+STR). doi:10.5065/D60G3HJ8
Citation: https://doi.org/10.5194/amt-2023-184-RC1
- AC1: 'Reply on RC1', Philipp von Olshausen, 30 Aug 2024
RC2: 'Comment on amt-2023-184', Anonymous Referee #3, 01 Apr 2024
- AC2: 'Reply on RC2', Philipp von Olshausen, 30 Aug 2024