Articles | Volume 14, issue 11
https://doi.org/10.5194/amt-14-7221-2021
© Author(s) 2021. This work is distributed under the Creative Commons Attribution 4.0 License.
Unravelling a black box: an open-source methodology for the field calibration of small air quality sensors
Download
- Final revised paper (published on 17 Nov 2021)
- Supplement to the final revised paper
- Preprint (discussion started on 22 Feb 2021)
- Supplement to the preprint
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on amt-2020-489', Anonymous Referee #3, 31 May 2021
- AC1: 'Reply on RC1', Seán Schmitz, 07 Sep 2021
- RC2: 'Comment on amt-2020-489', Anonymous Referee #4, 10 Aug 2021
- AC2: 'Reply on RC2', Seán Schmitz, 07 Sep 2021
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Seán Schmitz on behalf of the Authors (14 Sep 2021)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (02 Oct 2021) by Piero Di Carlo
AR by Seán Schmitz on behalf of the Authors (12 Oct 2021)
General comments
This manuscript provides a seven-step methodology for the calibration and quality assurance of low-cost air quality sensors. Thanks to the generalised nature of this method, it can be applied to a wide range of sensors and potentially be used as a standard calibration procedure. The data processing script was made publicly available which maximises the applicability of this method and the impact of this research.
The authors have pointed out current challenges in the use of low-cost sensors, including the lack (or incomparability) of calibration procedures in many low-cost sensor application studies. They stress the need for a reliable and reproducible data calibration and post-processing method. This manuscript is an important step towards this aim and, therefore, a valuable contribution to the literature in this field, as it has the potential to improve the data quality in future applications of low-cost sensors. The manuscript is well structured and clearly written.
My main suggestions to further improve the scientific quality of the manuscript are:
Specific comments
1. Please discuss the limitations of your calibration method in more detail (Point 2.1)
You stressed the importance of calibrating the sensors under conditions that are similar to those under which they will be (or have been) operated during the experimental application. This needs to be considered when defining the application range of the sensors.
Thanks to their silent operation and small size, low-cost sensors are suited for indoor as well as mobile applications (e.g. wearable sensors for personal exposure assessment). However, if the calibration is conducted outdoors, the sensors might not be suited for such applications, as the environmental conditions may differ significantly between these environments. Furthermore, mobile deployments would require further data cleaning and validation steps, as rapidly changing environments may have an impact on the sensor performance (e.g. Alphasense Ltd., 2013).
As you have pointed out, low-cost sensors are often temperature and RH dependent as well as cross-sensitive to other pollutants. Therefore, the presented calibration method should be recommended for sensor systems (with additional sensors for T, RH and cross-sensitive gases) rather than for individual sensors.
In this step, point outliers are removed based on the assumption of a slowly changing airfield in which peak exposures lasting only a few seconds do not occur. However, such short-term (< 10 s) emission events may occur in certain settings (e.g. traffic emissions from nearby passing vehicles, cigarette smoke from passengers, etc.). One advantage of the high spatial and temporal resolution of low-cost sensors is that such peak exposures may be captured. The proposed method, however, excludes such events. Please include this argument when defining the application range of the sensors (Point 2.1).
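To illustrate the trade-off described above: a generic rolling-median (Hampel-type) point-outlier filter, which is one common way to implement such a removal step, will flag a genuine short-term peak exposure just as readily as a spurious spike. This sketch is illustrative only and is not the authors' implementation; the window and threshold values are assumptions.

```python
import numpy as np

def hampel_filter(x, half_window=5, n_sigmas=3):
    """Flag point outliers against a rolling median.

    Assumes a slowly varying signal: any value deviating from the local
    median by more than n_sigmas robust standard deviations is flagged,
    even if it reflects a real short-term exposure event.
    """
    x = np.asarray(x, dtype=float)
    flagged = np.zeros(len(x), dtype=bool)
    k = 1.4826  # scale factor converting MAD to std under normality
    for i in range(len(x)):
        lo, hi = max(0, i - half_window), min(len(x), i + half_window + 1)
        med = np.median(x[lo:hi])
        mad = k * np.median(np.abs(x[lo:hi] - med))
        if mad > 0 and abs(x[i] - med) > n_sigmas * mad:
            flagged[i] = True
    return flagged

# A stable signal with one 1 s "peak exposure" (e.g. a passing vehicle):
signal = np.array([10, 11, 10, 9, 10, 11, 10, 9, 10, 11,
                   80, 9, 10, 11, 10, 9, 10, 11, 10, 9], dtype=float)
flags = hampel_filter(signal)
print(flags[10])  # True: the real peak event would be removed
```

The point is not that such a filter is wrong, but that its assumption (a slowly changing airfield) restricts the application range of the calibrated data.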
2. Line 115: You state that, while demonstrated here with MOS, the proposed calibration method can equally be applied to electrochemical sensors. To strengthen this argument, please add a brief physical explanation, a reference, or experimental proof.
3. Line 221, line 240: Please explain how you determined the splitting ratio between the training and validation periods. How much do your results differ when other ratios are used?
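Such a sensitivity check could be reported compactly, e.g. by refitting the calibration for several chronological split ratios and comparing the validation RMSE. The following is a minimal sketch on synthetic colocation data (the linear model, noise level and ratios are assumptions, not taken from the manuscript):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
ref = rng.uniform(5, 60, n)                # reference concentration, ppb
raw = 0.8 * ref + 4 + rng.normal(0, 2, n)  # synthetic raw sensor signal

def rmse_for_split(train_frac):
    """Fit a linear calibration on the first train_frac of the series
    and report RMSE on the held-out validation period."""
    cut = int(n * train_frac)
    slope, intercept = np.polyfit(raw[:cut], ref[:cut], 1)
    pred = slope * raw[cut:] + intercept
    return np.sqrt(np.mean((pred - ref[cut:]) ** 2))

results = {frac: rmse_for_split(frac) for frac in (0.5, 0.6, 0.7, 0.8)}
for frac, rmse in results.items():
    print(f"train fraction {frac:.0%}: validation RMSE = {rmse:.2f} ppb")
```

If the RMSE is stable across ratios, a single sentence stating so would address the concern.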
4. Table 6: Please explain why you use the medians rather than the means of your statistical parameters (whereas in line 221 you refer to the average RMSE).
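If the motivation is robustness, one sentence with a small numerical illustration would suffice: when one sensor unit performs poorly, the mean of a performance metric is dragged towards the outlier while the median is not. The values below are invented for illustration only:

```python
import numpy as np

# Hypothetical RMSE values across five sensor units; one unit is faulty
rmse_per_sensor = np.array([2.1, 2.3, 1.9, 2.2, 9.8])

mean_rmse = np.mean(rmse_per_sensor)      # 3.66 -- dominated by the bad unit
median_rmse = np.median(rmse_per_sensor)  # 2.2  -- robust summary
print(mean_rmse, median_rmse)
```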
5. While the manuscript nicely discusses the implications of its findings, it sometimes does not offer physical explanations for them:
What causes this instability and how can you ensure that the model stays stable under field conditions?
6. Line 292: Please specify “decent” and “good” agreement (e.g. with mean R2 & RMSE)
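For reference, the two metrics requested here are standard and cheap to report; a minimal sketch of their computation (not the authors' script) is:

```python
import numpy as np

def r2_rmse(obs, pred):
    """Coefficient of determination and root-mean-square error
    between reference observations and calibrated predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot, np.sqrt(np.mean((obs - pred) ** 2))

# Toy values for illustration only:
r2, rmse = r2_rmse([10, 20, 30, 40], [12, 19, 29, 42])
print(r2, rmse)  # 0.98, ~1.58
```

Quoting mean R2 and RMSE would make "decent" and "good" quantitative.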
7. Line 327: You deployed (at least) two low-cost sensors. Have you quantified the agreement between the two sensors? If so, add a small sentence here as it may be a strong argument why it is sufficient to only look at the data of one representative sensor. Perhaps summarise the performance of the second sensor briefly in the main text. How can you explain the non-linear response of sensor s72 (Figure S8)?
8. Figure 8 (optional): Adding histograms showing the overlap between colocation and experiment would make the Figure easier to comprehend and help to understand the flagging procedure.
9. Line 596: Replace “for those who enjoy” with “to achieve”
Technical comments
10. Please use subscripts for NO2 and O3 and superscripts for R2 throughout the document.
11. Lines 93 and 96: What does SVM stand for? Do you mean SVR (support vector regression)?
12. Line 149: Delete “for use in statistical calibration” (the general quality of the final data is likely to be higher)
13. Line 154 (Style, optional): Replace “What follows in this section is a” with “This section provides a”
14. Line 196: How do you define the range of the colocation data? As the range between the minimum and maximum observations? (Or percentiles?)
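The distinction matters in practice: a range defined by the raw min/max of the colocation data is sensitive to single extreme readings, whereas a percentile band is not. A hypothetical sketch of both definitions (function name and percentile choice are assumptions):

```python
import numpy as np

def out_of_range_flags(coloc, deploy, use_percentiles=False, p=(1, 99)):
    """Flag deployment values outside the colocation (calibration) range.

    With use_percentiles=True, the range is the (p_lo, p_hi) percentile
    band of the colocation data, which is less sensitive to a single
    extreme colocation reading than the raw min/max.
    """
    coloc = np.asarray(coloc, float)
    if use_percentiles:
        lo, hi = np.percentile(coloc, p)
    else:
        lo, hi = coloc.min(), coloc.max()
    deploy = np.asarray(deploy, float)
    return (deploy < lo) | (deploy > hi)

# Colocation data mostly between 5 and 50, with one extreme reading:
coloc = np.concatenate([np.linspace(5, 50, 99), [120.0]])
deploy = np.array([3.0, 30.0, 70.0])
flags_minmax = out_of_range_flags(coloc, deploy)
flags_pct = out_of_range_flags(coloc, deploy, use_percentiles=True)
print(flags_minmax)  # only 3.0 is flagged: the single 120 reading widens the range
print(flags_pct)     # 3.0 and 70.0 are flagged
```

Stating which definition was used (and why) would resolve the question.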
15. Line 219: Please provide references for AIC and VI
16. Line 263 (optional): Perhaps add a sentence or reference explaining the term “smearing” as the audience might not be familiar with this practice.
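If "smearing" refers to Duan's smearing estimator for back-transforming a log-scale model, a one-line explanation plus a reference would suffice; the bias it corrects can be illustrated as follows (synthetic data, not from the manuscript; whether the authors use exactly this estimator is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 2000)
# Additive noise on the log scale is multiplicative on the raw scale
log_y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 2000)
y = np.exp(log_y)

b, a = np.polyfit(x, np.log(y), 1)   # fit on the log scale
resid = np.log(y) - (a + b * x)
smear = np.mean(np.exp(resid))       # Duan's smearing factor

naive = np.exp(a + b * x)            # naive back-transform (biased low)
corrected = naive * smear            # smearing-corrected predictions
print(y.mean(), naive.mean(), corrected.mean())
```

The naive back-transform systematically underestimates the mean concentration; multiplying by the smearing factor removes most of this retransformation bias.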
17. Line 295: "more information in section 3.2" – this is section 3.2
18. Table 1: Is it correct that the sensor models for the reducing and the oxidising gases are identical? (SGX Sensortech MICS-4514)
19. Figure 2 (optional): Adding a timeline with (rough) dates would help to comprehend the paragraph above quicker.
20. Figures 4 c, d; 6; 7 a, b; 10 etc: Make sure that all axes have units (even if only arbitrary units).
21. Figures 14 and 15 (optional): Although you have already mentioned them in Tables 8 and 9, add the R2 and RMSE values to the graphs to provide a comprehensive overview.
22. Line 503: “the reference instruments did not impact the predictive accuracy of the models and can therefore [in this case] be ignored as a potential interference” – can this be generalised for all sensors? If not, add “in this case”
23. Line 508: “The uncertainty between RF models and MLR models was fairly similar” - replace “between” with “of”
Reference
Alphasense Ltd. (2013). Alphasense Application Note 110: Environmental Changes: Temperature, Pressure, Humidity. Retrieved from www.alphasense.com, pages 1–6.