Mobile Air Quality Monitoring and Comparison to Fixed Monitoring Sites for Quality Assurance
Abstract. Air pollution monitoring using mobile ground-based measurement platforms can provide high-quality spatiotemporal air pollution information. As mobile air quality monitoring campaigns extend to entire fleets of vehicles and embrace lower-cost air quality sensors, it is important to address the quality assurance needs for validating these measurements. We explore collocation-based evaluation of air quality instruments in a mobile platform against fixed regulatory sites, both when the mobile platform is parked at the fixed regulatory site and when moving at distances of meters to kilometers from the site. We demonstrate agreement within 4 ppbv (for NO2 and O3 + NO2) to 10 ppbv (for NO) when using a running median of 40 hourly differences between the moving mobile platform measurements and stationary site measurements. The comparability is strong when only measurements from residential roads are used and is only slightly diminished when all roads except highways are included in the analysis. We present a method for assessing mobile measurements of ozone (O3), nitrogen dioxide (NO2), and odd oxygen (OX = O3 + NO2) on an ongoing basis through comparisons with fixed regulatory sites.
Andrew R. Whitehill et al.
Status: open (until 07 Jun 2023)
- RC1: 'Comment on amt-2023-82', Anonymous Referee #1, 25 May 2023
- RC2: 'Comment on amt-2023-82', Anonymous Referee #2, 26 May 2023
- RC3: 'Comment on amt-2023-82', Anonymous Referee #3, 31 May 2023
In their manuscript “Mobile Air Quality Monitoring and Comparison to Fixed Monitoring Sites for Quality Assurance”, Whitehill and coauthors compare measurements of ozone, NO, NO2, and Ox (O3 + NO2) made on board mobile platforms with those measured at stationary air quality network stations. The mobile platform measurements were performed both while parked near the network stations and while driving in the vicinity of the network stations.
Linear regression analysis of the data from the mobile and stationary platforms showed good agreement, albeit with small systematic negative biases for O3 and NO2, and large coefficients of determination for ozone, NO2, and Ox for both data sets. The NO data, however, showed much poorer agreement and correlation than the other variables.
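For concreteness, the kind of slope/intercept/r2 comparison summarized above can be sketched with a simple ordinary-least-squares fit (a minimal illustration, not the authors' actual analysis code):

```python
import numpy as np

def compare_platforms(mobile, fixed):
    """OLS comparison of collocated mobile vs. fixed-site measurements.

    Returns (slope, intercept, r_squared); a systematic bias appears as
    a non-zero intercept or a slope different from one.
    """
    slope, intercept = np.polyfit(fixed, mobile, 1)
    r = np.corrcoef(fixed, mobile)[0, 1]
    return slope, intercept, r ** 2
```

A slope near one with a small negative intercept would correspond to the small systematic negative biases mentioned above.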
Generally, agreement between the mobile platform data and the network station data was better when the mobile platform data were collected closer to the network station. Nevertheless, reasonable agreement was found for the above-mentioned variables at distances of up to 10 km from the network station.
These results were used to develop a method for quality-assuring mobile measurement data from long-term field measurements, using data acquired within a certain distance of an air quality network station and taking median values over 40 hours of data.
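The 40-hour running-median comparison described in the manuscript can be sketched as follows (a minimal illustration assuming hourly pandas Series on a shared index; not the authors' implementation):

```python
import numpy as np
import pandas as pd

def running_median_bias(mobile_hourly, fixed_hourly, window=40):
    """Running median of the hourly (mobile - fixed) differences.

    Inputs are pandas Series of hourly concentrations on a shared index;
    window=40 follows the 40-hourly-difference window described in the
    manuscript. Hours without a valid pair are dropped before the median.
    """
    diff = (mobile_hourly - fixed_hourly).dropna()
    return diff.rolling(window, min_periods=window).median()
```

A persistent shift in this running median would flag a calibration drift or malfunction of the mobile instrument.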
The manuscript is well structured and clearly written. The results show that for some pollutants, comparison of mobile data with data from a nearby air quality network station might be used to detect calibration drifts or malfunctions of mobile instruments under certain conditions. For this purpose, the manuscript will be useful for applications where mobile measurements are conducted over extended times within an area that also contains a fixed monitoring site. However, the poor comparison results for NO, a pollutant dominated by local sources, show that such a quality assurance approach can only be applied to pollutants with a very homogeneous spatial distribution over an area several kilometers in extent. Strong spatial variability of pollutant concentrations will make such an approach impossible. This limitation will likely apply not only to NO (which was measured in the study) but also to particle-related variables like BC or particle number concentration, which also show large spatial inhomogeneity.

It would be desirable for the authors to assess their approach more critically with respect to such limitations, also in the general sections of the manuscript such as the abstract and the conclusions. This critical assessment should also address the fact that a regionally homogeneous pollutant distribution, which is necessary for the suggested in-field quality assurance method, also makes it less necessary to map out the pollutant distribution with a narrow-gridded driving pattern, which takes considerable effort to perform. The balance between mapping out pollutants with sufficient spatial resolution and being able to use “remote” stationary measurements for quality assurance, potentially by applying sufficient temporal averaging, should be discussed.
I recommend publication in AMT after these and several other minor issues have been addressed, as detailed below.
Section 2 – Overview of Methods:
According to previous publications and the Aclima website, black carbon and particle number concentration were also measured on Aclima vehicles. It is not clear whether this was also the case during the measurements used for this study. Since these particle-related variables in particular probably have a very large spatial (and temporal) variability, it would be very relevant to include such data in this assessment of the comparability of mobile and stationary measurements.
Section 3.1 – Mobile platforms parked at reference site - methods:
How large was the distance between the inlets of the mobile measurement platform and the stationary measurement setup? What were the inlet heights above ground level at both locations? Further below (caption of Figure 2), it is stated that the distance was between 80 and 145 m for one site and between 10 and 85 m for the other. Why did the distances vary so much? For both sites, the largest distances of the parking locations to the measurement sites are larger than the distance to the closest roads. Why was the vehicle not parked directly at the site for comparison? How large is the influence of this distance on the comparability of the results?
Section 3.2 – Mobile platforms parked at reference site - Results and Discussion; Line 188-193:
Can the typical differences between the car measurements and the fixed-site measurements be explained by the sampling situation? Are these differences arbitrary, or can they be explained by the sampling environment (e.g., traffic), meteorological conditions (e.g., wind direction), or other external influences? Do these differences and their variability (i.e., the r2 values) reflect the spatial inhomogeneity of the respective pollutant concentrations? How are they related? How do they depend on the distance between the two sampling locations? Consequently, can (mobile) measurements of the spatial inhomogeneity of pollutants be used to estimate how well such a measurement comparison (or the quality assurance approach presented further below) will work for a certain pollutant?
Section 4.1 – Mobile platforms driving around a fixed reference site in Denver – Methods:
Line 198: Impact from the emission plumes of mobile sources is a problem not only for parked vehicles but even more so for driving vehicles (i.e., mobile measurements), which are often surrounded by other vehicles driving on the same road.
Line 212: Can “lower traffic” be quantified somehow? An “empty” highway probably affects the measurements less than congestion in front of a traffic light or at an intersection on a residential road.
Table 2: This is rather detailed information that could be moved to the supplement.
Figure 3: This figure shows correlations between 1-min averaged ozone concentrations measured at a fixed site and during mobile measurements at distances of up to 10 km. For distances larger than perhaps 100 m, it is clear that different air parcels are being compared in every data point. These correlation plots are therefore not a comparison of the performance of the measurement instruments in the car and at the fixed site, but an investigation of the spatial homogeneity of the ozone concentrations. Since ozone is rather homogeneously distributed, such a comparison can be used for quality assurance of the instrument (at the same time, it is questionable why small-meshed mobile measurements should be made in such a case). For other pollutants like NO (but likely also BC or PNC), the correlations are (or would likely be) much weaker, and consequently the quality assurance approach would not work. All of this should be treated and critically discussed in a paper that focuses on the possibility of using such measurements as a quality assurance approach, not only in the short sentence explaining why NO shows much weaker correlations (line 262/263).
In addition, I think all of these correlation plots could go into the supplement, and instead a plot showing r2 (and, in another panel, slope) versus distance for the various road types should be shown here. The same applies to Figure 4.
Line 268-270: I agree that it is important to exclude local emission plumes from the averages. I wonder why the highly time-resolved data were not used to exclude such plumes before averaging, as was shown earlier in this journal for mobile measurements (e.g., Drewnick et al., AMT 2012).
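Such a plume screen on high-rate data could be as simple as masking points that exceed a rolling-baseline estimate (a minimal sketch, not the Drewnick et al. algorithm itself; the window length and the 10 ppbv threshold are illustrative assumptions):

```python
import pandas as pd

def exclude_plumes(series, baseline_window=61, threshold=10.0):
    """Mask short-lived local plumes before averaging.

    A point is treated as a local plume when it exceeds a centered
    rolling-median baseline by more than `threshold` (in the same
    units as the data, e.g. ppbv). Masked points become NaN.
    """
    baseline = series.rolling(baseline_window, center=True, min_periods=1).median()
    return series.mask(series - baseline > threshold)
```

Averaging the masked series would then exclude transient spikes from passing vehicles while keeping the regional background.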
Line 372-374: To me, it seems rather doubtful that a few seconds of data can reasonably represent a one-hour average. How would the comparison (mobile/stationary) change or improve if a minimum data coverage requirement were introduced?
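A minimum-coverage requirement of this kind could be enforced as follows (a sketch; the 1 Hz sampling and the 10 % threshold are assumptions for illustration, not values from the manuscript):

```python
import numpy as np
import pandas as pd

def hourly_mean_with_coverage(series_1hz, min_coverage=0.1):
    """Hourly means, keeping only hours with enough valid 1 Hz samples.

    Hours with fewer than min_coverage * 3600 non-NaN samples are
    dropped instead of being represented by a few seconds of data.
    """
    grouped = series_1hz.groupby(pd.Grouper(freq="h"))
    means = grouped.mean()
    counts = grouped.count()
    return means[counts >= min_coverage * 3600]
```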
Section 5.3 – Using driving data for ongoing performance evaluation:
If I understand this performance evaluation approach correctly, it assumes a constant drift in the calibration of the mobile instruments over longer time intervals, or a malfunction that changes the calibration or response of these instruments, in order for the bias to be detectable. What about temperature-related drifts in instrument calibrations, which could recur over the course of a day? Would such biases be detected by this approach?
Figure 8 – caption: shouldn’t it be “running median” instead of “running mean”?
Line 450-455: How do you know that these DeltaX values actually reflect changes in the calibrations of the mobile instruments and could consequently be used to correct for them? Couldn't there be other reasons for these differences, such as local sources or an issue with the stationary instruments?