Site and Season Specific Calibrations Improve Low-cost Sensor Performance: Long-term Field Evaluation of PurpleAir Sensors in Urban and Rural India
Mark Joseph Campmier
Jonathan Gingrich
Saumya Singh
Nisar Baig
Shahzad Gani
Adithi Upadhya
Pratyush Agrawal
Meenakshi Kushwaha
Harsh Raj Mishra
Ajay Pillarisetti
Sreekanth Vakacherla
Ravi Kant Pathak
Joshua S. Apte
Abstract. We report on the long-term performance of a popular low-cost PM2.5 sensor, the PurpleAir PA-II, at multiple sites in India, with the aim of identifying robust calibration protocols. We established 3 distinct sites in India (North India: Delhi, Hamirpur; South India: Bangalore), where we collocated PA-II with reference beta-attenuation monitors to characterize sensor performance and to model calibration relationships between PA-IIs and reference monitors for hourly data. Our sites remained in operation across all major seasons of India. Without calibration, the PA-IIs had high precision (Normalized Root Mean Square Error [NRMSE] among replicate sensors ≤ 15 %) and tracked the overall seasonal and diurnal signals from the reference instruments well (Pearson’s r ≥ 0.9) but were inaccurate (NRMSE ≥ 40 %). We used a comprehensive feature selection process to create optimized site-specific calibrations. Relative to the uncalibrated data, parsimonious least-squares long-term calibration models improved PA-II performance at all sites (cross-validated NRMSE: 20–30 %, R2: 0.82–0.95), particularly by reducing seasonal and diurnal biases. Because aerosol properties and meteorology vary regionally, the form of these long-term models differed by site. Likewise, using a moving-window calibration, we find a calibration scheme using seasonally specific information somewhat improves performance relative to a static long-term calibration model. In contrast, we demonstrate that a successful short-term calibration exercise for one season may not transfer reliably to other seasons. Overall, we demonstrate how the PA-II, when paired with a careful calibration scheme, can provide actionable information on PM2.5 in India with only modest irreducible uncertainty.
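As a rough illustration of the workflow the abstract describes (collocating sensors with a reference monitor, fitting a parsimonious least-squares calibration, and scoring it with NRMSE and R2), here is a minimal Python sketch. It uses synthetic stand-in data and hypothetical variable names; it is not the authors' code or their exact model form.

```python
# Minimal sketch (not the authors' code): fit a site-specific least-squares
# calibration of a collocated low-cost sensor against a reference monitor,
# then evaluate with NRMSE and R2 as described in the abstract.
# Data below are synthetic; real inputs would be hourly PA-II and BAM values.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

def nrmse(obs, pred):
    """RMSE normalized by the mean of the reference observations."""
    return np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)

rng = np.random.default_rng(0)
n = 2000
bam = rng.gamma(2.0, 40.0, n)                  # synthetic reference PM2.5 (ug/m3)
rh = rng.uniform(20.0, 95.0, n)                # relative humidity (%)
pa_raw = bam * (1.0 + 0.006 * rh) + rng.normal(0.0, 8.0, n)  # biased sensor signal

X = np.column_stack([pa_raw, rh])              # parsimonious 2-term calibration
model = LinearRegression().fit(X, bam)
pred = model.predict(X)

print(f"raw        NRMSE = {100 * nrmse(bam, pa_raw):.0f} %")
print(f"calibrated NRMSE = {100 * nrmse(bam, pred):.0f} %, R2 = {r2_score(bam, pred):.2f}")
```

In the study itself the covariates and model complexity differ by site; this sketch only shows the generic collocate-fit-evaluate pattern.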
Status: closed
RC1: 'Comment on amt-2023-35', R Subramanian, 12 Apr 2023
Disclosure: I will be joining CSTEP in July, where two coauthors were working during this study (and one continues to be part of CSTEP and will be reporting to me). CSTEP is also one of the three sites in this study.
Overall, this is a thorough study on LCS performance and correction at three different sites across India, nicely presented. My comments/questions are mostly minor. However, I am not sure whether the title is entirely correct - monthly corrections aren't a great improvement and don't even always carry over to other months of the same season. The final recommendation is to collocate at least one sensor with a reference for the entire study duration. As for site-specific, the Hamirpur and Delhi corrections appear reasonably interchangeable (see comment 20).
Comments:
- Abstract uses both Pearson r and R2; please use one or the other for consistency. The R2 values for "raw" (I call it uncorrected) data (0.55-0.74, Fig S16 - which show the final corrections in much better light!) show sensor performance that is not as impressive as r >= 0.9. Incidentally, the Pearson r result only appears in the abstract and not in the main text.
- Showing a table of fit statistics for the uncorrected and final corrected data (and maybe the spatial transferability results) in the main text would improve clarity. Currently, these key results are discussed in the text but only presented graphically in Figure S16.
- Line 43: The Plantower sensors do sense particles above 0.8 micron, even up to 2 micron - just not very efficiently. Kuula et al. say 0.8 micron but based on "Valid detection ranges...defined as the upper half of the detection efficiency curve" - which seems different from "failing to characterize". Maybe something like "do not adequately characterize".
- From Wallace et al. (2021): "The ALT method is based on the number of particles per deciliter reported by the PMS 5003 sensors in the PurpleAir instrument for the three size categories less than 2.5 μm in diameter." It is unclear that these are independent measures; Kuula, He/Dhaniyala, Ouimette, Andy May, and others have shown the size distribution is not real. Since the ALT method isn't finally used, maybe move these results to SI and improve clarity by focusing on the two metrics (CF1 and ATM) that most people use anyway.
- Line 83: "while can the BAMs provide" should be "while the BAMs can provide"
- Line 150: Instead of "block averaged", recommend using "hourly averages of" - because I think that's what is being done. "Block averaging" is not otherwise clarified in the manuscript and "hourly averaging" is easily understood.
- Line 160: "the quotient of the mean and standard deviation" seems the inverse of the CV - might relative standard deviation be easier to understand?
- Eq. 1 is an unusual formulation, so perhaps the original study that used this formulation (as far back as I can track it!) should be cited: Zhang et al. (1994), https://doi.org/10.1080/1073161X.1994.10467244 (their Figure 4 was used in a workshop report, Laulainen et al. 1993, that was then cited by Chakrabarti et al. 2004).
- Line 216 says the rolling OLS performance was compared against other two-week periods, but the results suggest monthly evaluations. Please clarify.
- Lines 263-264: 15% and 14% seem not that different to warrant an explanation.
- Lines 267-268: Why is Delhi so unusually lossy? Both IGP sites are significantly lossier (data recovery <40%) than the CSTEP site (75%), which is surprising and not well explained, given that (see my comment #11) even a 1% difference elsewhere in the results was explained even though it did not seem necessary.
- Lines 280-282: This is unclear from the figure, which I interpreted as "the PA line is mostly above or close to the BAM line for the pre-monsoon period at Hamirpur; it underestimates about half the time for Delhi, maybe."
- Line 282: coarse aerosol are particles larger than 2.5 micron. PM2.5 is called fine PM, e.g. https://www.epa.gov/pm-pollution/particulate-matter-pm-basics
- Lines 297-301: I appreciate this decision. Good call.
- Line 306: I don't think T & dewpoint coefficients were reported anywhere. Are these statistically different from zero?
- Lines 326-327: Text is unclear ("CFs"??) and not presenting these key results in the main text (ideally as the table requested earlier) is not helping.
- Fig S16 uses "RHC" to indicate the theory-driven fRH-esque approach, which is odd. What does RHC stand for? Maybe just say "theory".
- The discussion of Fig S16 doesn't align with the results shown. The Bengaluru theory-driven performance has the same R2 as the 1-parameter model, lower than the 2-parameter model. NRMSE is lower for theory model than any of the empirical models. (One might quibble about differences of 0.02 and argue that is "comparable", but the surrounding text touts differences of 2-3% so...)
- Eq 2-4 - is RH used as a fraction in these equations? Please specify. RH is usually reported as % and can be used directly in such equations, so maybe use that convention instead.
- Line 408: Unclear if this parenthetical is really the case. Applying Hamirpur correction to Delhi or vice-versa produces relatively similar R2 and NRMSE 0.82/39% or 0.78/35% with no clear winner.
- Lines 422-423: Dust storms were not identified nor discussed elsewhere in the text. Dust is only hypothesized as a potential explanation for a result (lines 280-284).
- Lines 436-437 - check the sentence.
- Data availability: insert data repository link.
- Vos et al. is almost three pages of authors for one reference that doesn't actually contribute much to this specific manuscript. Can you just say "GBD 2019 Diseases and Injuries Collaborators" as the group is known on the paper?
- Fig 4 caption is really long, but has a simpler explanation of the results than what is in the text...
Citation: https://doi.org/10.5194/amt-2023-35-RC1
AC1: 'Reply on RC1', Joshua S. Apte, 02 Jun 2023
RC2: 'Comment on amt-2023-35', Anonymous Referee #2, 15 Apr 2023
Site and Season Specific Calibrations Improve Low-cost Sensor Performance: Long-term Field Evaluation of PurpleAir Sensors in Urban and Rural India
Overall, this paper is well-written and was completed in a systematic manner. The main drawback is that it fails to identify and highlight the innovation in the work. The stated aim of the paper is “identifying robust calibration protocols”, which feels like a step in completing the QAQC process. As a whole, the Plantower/PurpleAir PM sensor's accuracy and precision are well studied. I recommend that the authors revisit the abstract and introduction in order to highlight the contribution to the scientific body of knowledge that this work provides.
Abstract:
I suggest strengthening the aims in your opening line since there are now a lot of calibration papers for the Plantower/purple air. I.e., what are you adding to this body of literature?
Clarify why these three are distinct and what they add to the study (e.g., urban, suburban, background, forested, etc.)
Can you clarify what a “major season” is?
It would be useful to the reader if you briefly state how the aerosol and meteorology vary by the site since I assume they capture unique environments
“We used a comprehensive feature selection process to create optimized site-specific calibrations.” – This sounds like a more innovative part of the work; expand?
Since the form varies by site, do you make a recommendation to other users on what to include in their model?
Can you clarify how it's “successful” if it does not work overall?
Introduction:
Ln 39: Is “mischaracterize” the correct word here? I am confused by the goal of the statement.
Line 41-45 – This fails to cite enough work to support this statement. I suggest including more work from the past 2-3 years. This will also help you identify the innovative part of this work. Several LCS PM networks have thorough publications on calibration methods based on long-term field data.
The end of the introduction feels more like concluding remarks.
Methods:
150: Please define “block-averaged” in the text.
Ln 151: How do you determine what constitutes “imprecise points”?
CV using the 6 nodes?
Ln 160: “the quotient of the mean and standard deviation of the sensors” – How were these values used?
Ln 185: I think this is partially correct. Testing has shown that it exponentially overestimates at high RH, but these conditions are less likely to be sustained in real world environments.
Can you also provide local regulatory values in addition to WHO?
Please provide a reason for “We removed all raw PM2.5 data points outside of the range 5 – 500 µg/m3”
Strongly contradicts “with peak daily (hourly) in excess of 500 µg/m3” line 103
How many minutes were required for the data to be averaged up to 1 hr? Is that the 80%?
You should cite the sources that influenced you to choose these covariates (e.g., ln 176, 188, etc.)
Ln 215: A similar method was employed in “Identifying optimal co-location calibration periods for low-cost sensors." Compare results?
Ln 230. Please clarify “US EPA’s data reduction process”
Ln 249: missing word?
A general comment on the results: it is very acronym heavy, and I think it sometimes takes longer to mentally decrypt than it would be if it was just written out.
Ln 363 – extra comma
Ln 369 – Can you add the reasons for this trend here?
How do you think the notable differences in data likely influenced some of the transferability?
In my experience, LCS struggle more at high concentrations. Can you discuss concentration ranges more?
How do you think the difference in complexities affected the transferability? Since they are quite different, they may be overfitting and that reduced the transferability too. It would be interesting to see something like Figure 5 all with the same model structures.
Figure 1: To clarify, is 779 the number of data points total (could be 10 points for 1 am and 100 points for 9 am) or the number for each hour? If it’s the first, did you check that there are a comparable number of points per hour?
Figure 1 would also be nice with the range highlights like Figure 2.
Figure 3 – Check this figure for visual accessibility.
Figure 5 – Is there a training/testing split in time for each site? I am confused as to why the site applied to itself changes.
Conclusions:
Ln 416 - extra comma
Ln 423 – Dust storms were not discussed with these numbers in the text. Please add a discussion so it is appropriate as a conclusion.
Ln 430 – Some of the thoughts on seasonal and location transferability have been described elsewhere, but the discussion on the difference in the PM composition of the sites is interesting. Do you have thoughts on how the community can account for these differences if co-location is not feasible?
Citation: https://doi.org/10.5194/amt-2023-35-RC2
AC2: 'Reply on RC2', Joshua S. Apte, 02 Jun 2023
RC3: 'Comment on amt-2023-35', Anonymous Referee #3, 25 Apr 2023
Review of “Site and Season Specific Calibrations…” by Campmier et al
In this manuscript, the authors analyze standard BAM and PurpleAir data from three sites in India to identify the optimum calibration procedures. The analysis may include some important and useful results on the use of low-cost sensor data, but at present, I had a hard time following the authors' procedures and conclusions. There are some extraneous results in the manuscript that are distracting (like the theory-driven calibrations that are not used) and too little clarity on what the authors did and how their calibrations performed compared to some of the standard published calibration equations.
It is certainly reasonable to expect that site- and season-specific calibrations will do better than generic calibration equations. This is stated as a conclusion in the abstract, but it's not clearly shown in the manuscript (except in Figure 5). The authors need to provide a table showing R2, NRMSE, and bias for the SFS-generated values along with those from a more generic calibration, such as that used by the US EPA, either Barkjohn 2021 or the updated EPA equation given here: https://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=353088&Lab=CEMM
This would be key to showing that site specific calibrations actually matter. The magnitude matters here. One could argue that improvements of a percent or so in NRMSE or 0.01 in the R2 are rather trivial.
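To make the requested comparison concrete, the sketch below contrasts a site-specific OLS fit with a fixed generic correction on held-out collocation data. The generic form follows the Barkjohn et al. (2021)-style US-wide equation; the coefficients are quoted approximately and should be checked against the source, and the data are synthetic stand-ins rather than the study's measurements.

```python
# Illustrative sketch of the requested comparison: a site-specific OLS fit vs.
# a fixed generic correction, evaluated on held-out collocation data.
# Coefficients in generic_correction are approximate (Barkjohn et al., 2021);
# verify against the source before use. Data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

def nrmse(obs, pred):
    return np.sqrt(np.mean((pred - obs) ** 2)) / np.mean(obs)

def generic_correction(pa_cf1, rh):
    # Approximate US-wide PurpleAir correction (Barkjohn et al., 2021).
    return 0.524 * pa_cf1 - 0.0862 * rh + 5.75

rng = np.random.default_rng(1)
n = 2000
bam = rng.gamma(2.0, 40.0, n)                  # synthetic reference PM2.5 (ug/m3)
rh = rng.uniform(20.0, 95.0, n)                # relative humidity (%)
pa_cf1 = bam * (1.0 + 0.006 * rh) + rng.normal(0.0, 8.0, n)

train, test = slice(0, n // 2), slice(n // 2, None)
site_fit = LinearRegression().fit(
    np.column_stack([pa_cf1[train], rh[train]]), bam[train]
)
site_pred = site_fit.predict(np.column_stack([pa_cf1[test], rh[test]]))
generic_pred = generic_correction(pa_cf1[test], rh[test])

for name, pred in [("site-specific", site_pred), ("generic", generic_pred)]:
    print(f"{name:>13}: NRMSE = {100 * nrmse(bam[test], pred):.0f} %")
```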
In addition, I don’t understand how the authors went from the more complex calibration relationships shown in Table 2, to the much simpler relationships shown by equations 2, 3 and 4.
Other comments:
Line 76: need to provide equations for “Alt” corrections in SI.
77: what does three refer to?
83, 90: grammar issues.
150: What are “unreasonably large”? What is “block average”? Same as hourly average?
155: Not sure what you mean by statistically paired.
164: I don’t think BME 280 is defined anywhere.
177: Not just RH, many other factors.
211: Grammar.
227: Suggest a citation for defs of these std relationships. (MBE, NMBE, etc)
235: This is not a bias.
284: Suggest this ref for quantitative analysis of dust with PAs: https://doi.org/10.5194/amt-16-1311-2023
292: Table 2 does not summarize the procedure, but rather results.
294: This does not seem to be true for Bangalore, PM x temp is the most relevant, right?
295: I assume this refers to RH SQUARED, right?
323-325: Not sure what this refers to.
350 and equations 2,3 and 4:
After describing the details of a multi-linear SFS model, and showing in Table 2 how various permutations of the parameters give the best fits, I don’t understand how you arrive at these much simpler relationships. These look like very standard PA calibration equations that have been developed by others.
Citation: https://doi.org/10.5194/amt-2023-35-RC3
AC3: 'Reply on RC3', Joshua S. Apte, 02 Jun 2023