the Creative Commons Attribution 4.0 License.
Characterization of inexpensive metal oxide sensor performance for trace methane detection
Daniel Furuta
Tofigh Sayahi
Jinsheng Li
Bruce Wilson
Albert A. Presto
- Final revised paper (published on 08 Sep 2022)
- Preprint (discussion started on 23 May 2022)
Interactive discussion
Status: closed
RC1: 'Comment on amt-2022-110', Anonymous Referee #1, 24 May 2022
It is an interesting manuscript, especially to researchers who are building distributed or airborne monitoring systems with MOx gas sensors. Several similar papers have been published in AMT, so the manuscript is a good fit for the journal. As a former user of TGS26XX sensors, I don’t see major technical problems. My specific comments are as follows:
- Why did the authors select a 10-kΩ voltage divider (resistor)? As shown in Figure 2, most MOx sensors had a resistance (Rs) far greater than 10 kΩ during the experiment. From a circuit standpoint, a voltage divider with resistance close to typical Rs values would make the measurement more sensitive or accurate. A relevant question is – why 10-kΩ for all sensors?
- Figure 1: What was the inlet airflow rate? To my understanding, the LI-7810 analyzer has a sampling airflow rate of 0.25 LPM. Did this go back to the testing chamber?
- (Again) Figure 1: What was the response time of the LI-7810 analyzer, considering the length of tubing, averaging time, and so on? Did the authors synchronize the readings from MOx sensors and the LI-7810 by considering response time differences?
- (Again) Figure 1: I saw quite a few capacitors on the PCB. Please briefly explain their purpose (to avoid unnecessary confusion regarding the measurement circuit).
- Please provide the ADC’s bit info for LabJack T7. As per the company, a LabJack T7 may use an ADC from 12 to 24-bit. The number of bits can have a large influence on the resolution of acquired data, especially for high Rs.
- Line 149: Vs is a bit misleading. It is the voltage drop across the 10-kΩ voltage divider instead of the sensor (Rs). I suggest the authors use a different subscript.
- Equation 2: Even though the sensors are heated, they still suffer from temperature variation, which in turn would influence the sensors’ resistance. That being said, why was temperature not included in the calibration equation?
- Figure 2: A side note – The highest Rs we observed for TGS2600 at ~2 ppm was close to 800 kΩ. That was achieved by filtering out all VOCs and water vapors.
- Line 187: As per Figure 2, the relative humidity decreased with an increase in temperature. To me, this is related to the temperature dependency of vapor pressure. I would suggest the authors remove “likely as the result of a condensation and evaporation cycle.”
- Line 195-196: I suggest the use of r instead of R for Pearson’s correlation coefficients, to avoid unnecessary confusion (R versus R2).
- Figure 4: Were all the three sensors of the same model coming from the same batch of products? Different batches of sensors could differ in response factors.
- I would suggest the authors offer a discussion about the potential application scenarios of those CH4 sensors for ambient air and source measurement towards the end of the manuscript.
- A final thought: As per the Figaro Company, the TGS26XX sensors’ response is nearly linear (Rs/R0 versus gas concentration) on a log-log graph. I just feel there could be a calibration equation better fitting the experimental data than equation 2. Here comes a question – why was general linear regression used to build the calibration curve?
Citation: https://doi.org/10.5194/amt-2022-110-RC1
AC1: 'Reply on RC1', Daniel Furuta, 06 Jun 2022
Thank you for your perceptive and thorough comments. For the most part, we agree with your notes and requests for clarification. We respond point-by-point.
- Why did the authors select a 10-kΩ voltage divider (resistor)? As shown in Figure 2, most MOx sensors had a resistance (Rs) far greater than 10 kΩ during the experiment. From a circuit standpoint, a voltage divider with resistance close to typical Rs values would make the measurement more sensitive or accurate. A relevant question is – why 10-kΩ for all sensors?
We agree that 10 kΩ may not be an optimal resistance. As Fig. 4 illustrates, we found Rs for TGS2600 and TGS2602 in the 10 to 30 kΩ range; TGS2611-C00 in the 20 to 40 kΩ range; and TGS2611-E00 and MQ4 in the 40 to 125 kΩ range. We selected the voltage divider values as a best guess for a reasonable resistance from manufacturer datasheets and a quick initial test. Interestingly and, we believe, coincidentally, the measurements with the worst sensitivity (MQ4 and TGS2611-E00) showed the best performance.
Figures 2 and 4 provide enough information to select better resistance values for future work, as they show the expected resistance ranges for the sensors under these conditions; we will add a note to this effect in the discussion.
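The reviewer's point about matching the divider resistor to Rs can be checked with a short calculation. This sketch (not from the manuscript; a 5 V circuit voltage and a 60 kΩ mid-range Rs are assumed for illustration) shows that the divider's sensitivity to a change in Rs peaks when the load resistor equals Rs:

```python
def vout(vc, rl, rs):
    """Voltage across the load resistor RL in a divider with sensor Rs above it."""
    return vc * rl / (rl + rs)

def sensitivity(vc, rl, rs):
    """|dVout/dRs| in volts per ohm -- larger means finer resistance resolution."""
    return vc * rl / (rl + rs) ** 2

VC = 5.0    # assumed circuit voltage
RS = 60e3   # illustrative mid-range Rs for TGS2611-E00 (see ranges above)

# Compare the 10 kΩ divider used in the study with one matched to Rs:
s_10k = sensitivity(VC, 10e3, RS)
s_matched = sensitivity(VC, RS, RS)
print(s_matched / s_10k)  # matched load is roughly 2x more sensitive here
```

Differentiating RL/(RL+Rs)^2 with respect to RL confirms the maximum falls at RL = Rs, which is why a resistor closer to typical Rs values would sharpen the measurement, as the reviewer suggests.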
- Figure 1: What was the inlet airflow rate? To my understanding, the LI-7810 analyzer has a sampling airflow rate of 0.25 LPM. Did this go back to the testing chamber?
As the reviewer notes, the reference instrument has a flow rate of 0.25 LPM. We had no other pump in the system, and so airflow into the test chamber was the same 0.25 LPM. The outlet of the LI-7810 was vented to the outside laboratory, causing the gradual decrease in methane concentration in the test chamber for each pulse.
- (Again) Figure 1: What was the response time of the LI-7810 analyzer, considering the length of tubing, averaging time, and so on? Did the authors synchronize the readings from MOx sensors and the LI-7810 by considering response time differences?
The reference analyzer was connected to the test chamber by a short length of small-diameter tubing. We found that the LI-7810 responded to the methane pulse injections into the test chamber within seconds. As we averaged all measurements to the minute and the pulse decay to background took place over a relatively long time scale, we considered this error negligible and did not synchronize the readings beyond aligning system clocks for the various devices.
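The minute-averaging step we describe is straightforward; as a hedged illustration (timestamps and values are synthetic, not our data), resampling both streams onto a shared one-minute grid absorbs lags of a few seconds:

```python
import numpy as np
import pandas as pd

# Hypothetical raw sensor reads: one sample every ~5 s for an hour
t = pd.date_range("2021-11-01", periods=720, freq="5s")
sensor = pd.Series(np.random.default_rng(1).normal(0.3, 0.01, 720), index=t)

# Averaging to the minute scale makes a few seconds of analyzer lag negligible
per_minute = sensor.resample("1min").mean()
print(len(per_minute))  # 720 samples x 5 s = 60 minutes -> 60 averages
```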
- (Again) Figure 1: I saw quite a few capacitors on the PCB. Please briefly explain their purpose (to avoid unnecessary confusion regarding the measurement circuit).
The capacitors in Fig. 1 (the small red and brown components seen on the PCB) were for power supply bypassing, to reduce the effect of power supply noise and to isolate the effects of any transients. As we were using a bench power supply for this experiment these components were likely unnecessary. As the reviewer suggests, the capacitors were not in the direct path of the measurement circuit.
- Please provide the ADC’s bit info for LabJack T7. As per the company, a LabJack T7 may use an ADC from 12 to 24-bit. The number of bits can have a large influence on the resolution of acquired data, especially for high Rs.
Our setup used the default T7 settings, which correspond to an effective bit depth of 19.1 bits, and an effective resolution of 37 μV as per the company (https://labjack.com/support/datasheets/t-series/appendix-a-3-2-2-t7-noise-and-resolution). The largest Rs we observed was less than 150 kΩ; at a 150 kΩ resistance, a 37 μV change corresponds to a resistance change of around 0.013%. We believe that errors due to ADC resolution are likely insignificant compared to the uncertainty of the sensor itself, and likely even compared to overall electrical system noise.
As our measurements were also averaged to a longer time scale for analysis (from approximately one reading every five seconds to the minute scale), the actual accuracy may be somewhat better than this simple calculation.
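The 0.013% figure above can be reproduced by inverting the divider equation; this sketch assumes a 5 V circuit voltage (not stated in this reply) and the 10 kΩ load resistor from the paper:

```python
RL, VC, LSB = 10e3, 5.0, 37e-6  # divider resistor, assumed circuit voltage, effective resolution

def rs_from_vout(v):
    """Invert the divider equation: Rs = RL * (Vc - V) / V."""
    return RL * (VC - v) / v

rs = 150e3                            # largest Rs observed
v = VC * RL / (RL + rs)               # ~0.31 V across the divider
delta = rs_from_vout(v - LSB) - rs    # resistance change for one 37 uV step
print(delta / rs * 100)               # ~0.013 %, matching the estimate above
```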
- Line 149: Vs is a bit misleading. It is the voltage drop across the 10-kΩ voltage divider instead of the sensor (Rs). I suggest the authors use a different subscript.
We agree; VOUT would be more suitable.
- Equation 2: Even though the sensors are heated, they still suffer from temperature variation, which in turn would influence the sensors’ resistance. That being said, why was temperature not included in the calibration equation?
Due to limitations of our experimental setup, temperature and absolute humidity were highly correlated (Pearson’s r = 0.95, see Fig. 3). Accordingly, to avoid a non-generalizable improvement in R2 we chose to only include one of the two in our calibration equation, selecting absolute humidity for the stated reasons. As you correctly note, ambient temperature will also have an effect on sensor response. As we mentioned in our manuscript, these effects will need to be untangled in future work, and this should be a priority for further calibration work using these sensors.
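The hazard of including both correlated predictors can be shown numerically. This is a synthetic illustration only (the coupling constants are invented, not our experimental data), using the variance inflation factor as a standard collinearity diagnostic:

```python
import numpy as np

rng = np.random.default_rng(0)
temp = rng.uniform(15, 35, 500)                    # synthetic temperature, degC
abs_hum = 0.3 * temp + rng.normal(0, 0.5, 500)     # strongly coupled humidity (illustrative)

r = np.corrcoef(temp, abs_hum)[0, 1]
vif = 1.0 / (1.0 - r**2)  # variance inflation factor for a two-predictor model
print(r, vif)             # a high r sharply inflates coefficient variance
```

With r near 0.95, the inflated coefficient variance means the fitted temperature and humidity coefficients would be unstable and non-generalizable, which is why we retained only one predictor.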
- Figure 2: A side note – The highest Rs we observed for TGS2600 at ~2 ppm was close to 800 kΩ. That was achieved by filtering out all VOCs and water vapors.
Thank you for the interesting note. This very high resistance further emphasizes the importance of environmental conditions in the sensor response (compared to the 10 to 30 kΩ we found at normal humidity levels).
- Line 187: As per Figure 2, the relative humidity decreased with an increase in temperature. To me, this is related to the temperature dependency of vapor pressure. I would suggest the authors remove “likely as the result of a condensation and evaporation cycle.”
We agree that this effect is responsible for some of the change in relative humidity. However, we also saw a cycle in absolute humidity levels along with temperature. We believe this was the result of some moisture condensing on and evaporating from the walls of the test chamber as the temperature cycled.
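For reference, the distinction between relative and absolute humidity in this argument is easy to make concrete. A minimal sketch using the Magnus approximation for saturation vapor pressure (a standard formula, adequate for typical lab conditions; not code from the study):

```python
import math

def absolute_humidity(rh_percent, temp_c):
    """Water vapor density in g/m^3 from RH (%) and temperature (degC),
    via the Magnus approximation for saturation vapor pressure."""
    svp = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))  # Pa
    vp = svp * rh_percent / 100.0
    return vp * 18.015 / (8.314 * (temp_c + 273.15))  # g/m^3

# The same RH corresponds to very different water contents at two temperatures,
# so RH alone cannot distinguish vapor-pressure effects from real moisture cycling:
print(absolute_humidity(50, 20), absolute_humidity(50, 30))
```

A temperature swing with constant water content changes RH but not absolute humidity; the cycling we saw in absolute humidity is what points to condensation and evaporation on the chamber walls.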
- Line 195-196: I suggest the use of r instead of R for Pearson’s correlation coefficients, to avoid unnecessary confusion (R versus R2).
We agree.
- Figure 4: Were all the three sensors of the same model coming from the same batch of products? Different batches of sensors could differ in response factors.
The Figaro sensors of each type were taken from the same batch. The MQ4 does not appear to have a batch code printed on it; the MQ4 sensors were ordered from the distributor at the same time, but they were individually packaged and so we are unsure whether they were from the same batch. This could be a possible influence on the greater variability for the MQ4 than for the Figaro sensors. We agree that evaluating the sensors for batch-to-batch consistency can be an important addition to future work.
- I would suggest the authors offer a discussion about the potential application scenarios of those CH4 sensors for ambient air and source measurement towards the end of the manuscript.
Thank you for this suggestion. Other reviewers mentioned the same issue. Prior to the preprint we revised the discussion section to emphasize this important question. In particular, we believe some of these sensors may be usable as-is for natural gas leak detection in urban areas or around fossil fuel infrastructure; it is also possible that more sophisticated algorithms or system design that reduces environmental influences may allow ambient air measurement around atmospheric levels.
- A final thought: As per the Figaro Company, the TGS26XX sensors’ response is nearly linear (Rs/R0 versus gas concentration) on a log-log graph. I just feel there could be a calibration equation better fitting the experimental data than equation 2. Here comes a question – why was general linear regression used to build the calibration curve?
We did try fitting log-transformed data, and did not find meaningful performance improvement. The manufacturer curves for most of the sensors (excepting TGS2600) lack data in the methane concentration range we examined; possibly the sensor response in this very low range (relative to design parameters) is not linear.
We agree that Equation 2 is a crude calibration equation, and we do not propose it as necessarily the best option for a sensing system using these sensors. As our goal was to compare these devices and their sensitivity to environmental conditions, we thought a simple linear model was suited to roughly evaluating sensor response to methane and humidity/temperature, while being relatively robust with a low risk of overfitting. For real-world use, a more sophisticated model (whether non-linear regression, machine learning, or something else) might provide better performance, but we think it unlikely that a better algorithm will upend the relative performance of these sensors.
As we note in Section 4.1, our RMSE and R2 values are similar to those found in some previous studies. Two previous studies found better error values for TGS2600. Riddick et al. (2020a) had uncertainty of 0.01 ppm, but only an R2 of 0.23 for their best model, as compared to our R2 of 0.16. Collier-Oxandale et al. (2018) also found good performance for TGS2600 in a field study with more sophisticated algorithms, but found different algorithms to perform better at their two sites; due to the complexities of real-world deployments, there may be more influence from site-specific effects or overfitting than in our relatively controlled laboratory environment. Although we agree with the reviewer that more sophisticated algorithms could perform better, we feel that the general “ballpark” agreement of our results with previously published work supports our overall conclusions.
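The linear versus log-log comparison we describe can be sketched on synthetic data. Everything here is invented for illustration (the power-law exponent, noise level, and response scale are assumptions, not our measurements); the point is only the mechanics of fitting the two model forms:

```python
import numpy as np

rng = np.random.default_rng(2)
ch4 = rng.uniform(2, 10, 300)                           # ppm, the range studied
rs_r0 = 0.9 * ch4 ** -0.35 + rng.normal(0, 0.02, 300)   # hypothetical power-law response

# Fit 1: plain linear regression on raw values (as in Equation 2's spirit)
lin = np.polyfit(ch4, rs_r0, 1)
pred_lin = np.polyval(lin, ch4)

# Fit 2: linear regression in log-log space (the datasheet's power-law form)
log = np.polyfit(np.log(ch4), np.log(rs_r0), 1)
pred_log = np.exp(np.polyval(log, np.log(ch4)))

rmse = lambda p: float(np.sqrt(np.mean((rs_r0 - p) ** 2)))
print(rmse(pred_lin), rmse(pred_log))
```

Over a narrow concentration range with realistic noise, the curvature a log-log fit captures is modest, which is consistent with our finding of no meaningful improvement from log-transformed fits.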
Citation: https://doi.org/10.5194/amt-2022-110-AC1
RC2: 'Reply on AC1', Anonymous Referee #1, 14 Jun 2022
The authors have properly addressed most of my comments – great work! (Not intending to be picky) I have a few further comments on their response.
Original comment and response #1:
- (Again) Figure 1: What was the response time of the LI-7810 analyzer, considering the length of tubing, averaging time, and so on? Did the authors synchronize the readings from MOx sensors and the LI-7810 by considering response time differences?
The reference analyzer was connected to the test chamber by a short length of small-diameter tubing. We found that the LI-7810 responded to the methane pulse injections into the test chamber within seconds. As we averaged all measurements to the minute and the pulse decay to background took place over a relatively long time scale, we considered this error negligible and did not synchronize the readings beyond aligning system clocks for the various devices.
Comments: I would suggest the authors list the response time of the LI-7810 since it is an optical gas meter. As per the instrument supplier, the LI-7810 has a response time of 2 sec for CH4 between 0-2 ppm (without considering the transfer time in the tubing). This is in fact very fast for an optical gas meter. Providing such information won’t hurt (but rather help) the manuscript.
Original comment and response #2:
- Please provide the ADC’s bit info for LabJack T7. As per the company, a LabJack T7 may use an ADC from 12 to 24-bit. The number of bits can have a large influence on the resolution of acquired data, especially for high Rs.
Our setup used the default T7 settings, which correspond to an effective bit depth of 19.1 bits, and an effective resolution of 37 μV as per the company (https://labjack.com/support/datasheets/t-series/appendix-a-3-2-2-t7-noise-and-resolution). The largest Rs we observed was less than 150 kΩ; at a 150 kΩ resistance, a 37 μV change corresponds to a resistance change of around 0.013%. We believe that errors due to ADC resolution are likely insignificant compared to the uncertainty of the sensor itself, and likely even compared to overall electrical system noise.
As our measurements were also averaged to a longer time scale for analysis (from approximately one reading every five seconds to the minute scale), the actual accuracy may be somewhat better than this simple calculation.
Comments: The authors talked about noise-free bits. Such information is useful but not usually required, as the bits are related to many factors such as sampling rates. Please simply provide the ADC bit information. Based on the authors’ response, I guess it could be 24-bit, which is adequate for the sensor comparison experiments.
Citation: https://doi.org/10.5194/amt-2022-110-RC2
AC2: 'Reply on RC2', Daniel Furuta, 17 Jun 2022
Thanks again for your detailed comments and suggestions. We agree with both points and will incorporate them into the final revision.
- I would suggest the authors list the response time of the LI-7810 since it is an optical gas meter. As per the instrument supplier, the LI-7810 has a response time of 2 sec for CH4 between 0-2 ppm (without considering the transfer time in the tubing). This is in fact very fast for an optical gas meter. Providing such information won’t hurt (but rather help) the manuscript.
We agree. The very fast response time for the instrument is another useful detail with regards to concerns about synchronization. We will add this information to the revision.
- The authors talked about noise-free bits. Such information is useful but not usually required as the bits are related to many factors such as sampling rates. Please simply provide the ADC bit information. Based on the authors’ response, I guess it could be 24-bits, which is adequate for the sensor comparison experiments.
We agree about concisely specifying bit information. The hardware ADC operates at 16 bits; the noise floor is lowered further by averaging, giving the effective bit depth we mentioned previously. The manufacturer discusses the noise and resolution at length (https://labjack.com/support/datasheets/t-series/appendix-a-3-2-2-t7-noise-and-resolution); we will simply state the 16-bit hardware depth with averaging for 37 µV nominal effective resolution in the revision, and cite the datasheet for the details.
Citation: https://doi.org/10.5194/amt-2022-110-AC2
RC3: 'Comment on amt-2022-110', Anonymous Referee #3, 13 Jul 2022
The manuscript discusses the performance of five off-the-shelf MOx sensors for methane. The manuscript fits the scope of the journal and is overall well written. Some aspects could benefit from clarifications and context.
It is not very clear why test conditions with a concentration range between 2 and 10 ppm were chosen, as this is well above ambient but below serious leaks. Also, at the end you discuss applicability, so one wonders why this range was chosen up front?
The manuscript should have a clear conclusions section for AMT, different from the discussion section.
In the discussion to other studies it is not always clear how the calibration/regression in the current study differs from other studies. This could be useful to specify.
Some figures are unclear, especially Figure 7. What humidity is shown as <1% and >2%? What is shown there? Other comments below.
Details
Table headers are normally on top of tables, not under them
Table 1: can you use similar performance metrics? And if similar performance metrics do not exist in the papers, maybe you could use this as an additional justification for your paper?
Table 2: can you please add a date for when the price data was gathered? Please also specify what is meant by “fast response” and “filtered” (not really explained in the text).
In the discussion of the poor performance of TGS 2602 (and to an extent TGS 2600), it should be made clearer that these are NOT methane sensors. It is mentioned later, but at times they just look like poorly performing sensors, which is a little unfair to them.
Figure 1: could you improve it so that one can see where the reference analyzer samples from relative to the PCB? Also, do you have some basic data such as residence time in your chamber? Not critical, but it would make for a better description. Also, is the inlet air lab air or zero air? I assume lab air, because of the RH issue?
The RH variability is bothersome to me. The changes in RH are substantial over short periods of time. The explanation of lab air variability seems very odd, as 10% RH variability three times over one hour is just odd (Fig. 2). It would be useful to strengthen this discussion. The data looks more like a flow or sensor issue, especially given how regular it is, than any real room RH variability.
Figure 1: A, B, C, D, and E on the PCB are hard to see.
Table 3: when discussing RMSE, can you please remind readers of the range of values it is based upon?
Figure 3 legend: please format the formulas with subscript.
Figure 4 has poorer resolution than the other figures and seems to have some stray lines to the left and between the first and second columns of panels.
Figures 5 and 7: please format chemical formulas as formulas (subscript 4 in CH4).
Figure 7 and the humidity: I am confused about what is shown here, <1% or >2%; what “humidity” is that?
Citation: https://doi.org/10.5194/amt-2022-110-RC3
AC3: 'Reply on RC3', Daniel Furuta, 28 Jul 2022
Thank you for your insightful and detailed review of our manuscript. We appreciate your emphasis on clarity, and for the most part we agree with your suggestions. We respond point-by-point below. We have changed the order of your comments to collect related revisions. We have added revised figures as a pdf file.
- It is not very clear why the test conditions with a concentration range between 2 and 10 ppm were chosen as this is way higher than ambient but lower than serious leaks. Also at the end you discuss applicability. So one wonders why this was chosen up front?
2 ppm is the ambient concentration at our location, which is slightly higher than, but close to, the global average background concentration. As we were primarily interested in characterizing the sensors close to ambient levels, rather than at higher concentrations where we would expect better performance, we did not want too high an upper bound. However, we also did not want too narrow a range for evaluating sensor applications, as we did not expect particularly low RMSE values and as we are interested in using the sensors for leak detection and similar uses.
The previous low-concentration studies in Table 1 have upper bounds of 2, 5.85, 7, and 9 ppm. We felt that 10 ppm was in keeping with this general range of previous work, allowing us to make some comparisons with these studies, and was not too high to evaluate performance in this more difficult concentration range for the sensors.
To emphasize this in the paper, we will add this text before Table 1:
Background methane levels at our location are approximately 2 ppm, consistent with the low-concentration studies in Table 1. Eugster and Kling (2012) have a small concentration range of 1.85-2 ppm, while the other low-concentration studies have upper concentrations of 5.85 ppm (Riddick et al., 2020a), 7 ppm (Collier-Oxandale et al., 2018), and 9 ppm (Van den Bossche et al., 2017). To allow comparisons with these previous studies, we chose a similar concentration range, from background to 10 ppm. We chose a slightly higher upper concentration with the additional goal of evaluating suitability for detecting minor leaks and similar enhancements, while still remaining in this apparently difficult-to-sense low-concentration range.
- The manuscript should have a clear conclusions section for AMT, different from the discussion section.
We will explicitly mark the final paragraph as the paper conclusion and expand it as follows:
5 Conclusion
Environmental conditions have a large effect on MOx sensors, dominating the sensor response at low methane levels. Applications of these sensors for monitoring trace methane will require careful sensor calibration and algorithms to address humidity, or system designs that reduce environmental variation. We believe that addressing environmental sensitivity is the main challenge to real-world applications with the studied sensors, but that their potential to enable inexpensive sensor networks merits further development.
We believe that monitoring background methane concentrations with the studied sensors will be difficult. It is likely that more advanced algorithms would allow improved performance over our relatively simple model, but it remains to be seen if implementation improvements will enable these sensors to perform at adequate levels. However, we believe that the better-performing sensors have potential immediate application in leak detection networks and similar settings, and that the performance we found should be sufficient for use around fossil fuel production infrastructure, in urban leak detection, and in other similar applications.
- In the discussion to other studies it is not always clear how the calibration/regression in the current study differs from other studies. This could be useful to specify.
We agree; we will add a brief discussion to the introduction about the algorithms used in other studies at line 70:
The low-concentration studies we discuss used several algorithms to fit sensor response to methane levels. Eugster and Kling (2012) used a linear regression to fit a scaled sensor resistance, which was first corrected for relative humidity and temperature. Van den Bossche et al. (2017) followed the same method. As mentioned previously, Riddick et al. (2020a) did not find this algorithm to produce optimal results, and instead developed a more complicated non-linear regression to fit their data. Finally, Collier-Oxandale et al. (2018) evaluated 11 different regressions of varying complexity with different variables, including time, time of day, temperature, and humidity, finding that different models worked better at their different sites. These studies selected their models for performance rather than clarity or ease of interpretation; although it may give worse performance, we believe a simpler model may better illustrate the effects of different conditions on sensor response in our comparative study.
- Some figures are unclear especially figure 7. What humidity is shown as <1% and >2%? What is shown there. Other comments below.
We will clarify that this figure shows trends for subsets of the data with different temperatures and water vapor concentrations, and is intended as an illustrative tool to show the large differences in sensor response under different environmental conditions. Primarily, we intended to show that environmental conditions can have a large impact on sensor performance. As major confounding factors, these environmental parameters need to be considered when determining the limit of detection and implementation of these sensors. We will revise the preceding paragraph as follows:
Figure 7 emphasizes the importance of temperature and humidity by showing the responses of three sensors for subsets of the data, selected to show two disjoint ranges each for temperature and water vapor. At higher temperatures the sensors have lower resistance values than at lower temperatures; the same is true for water vapor levels. In the methane concentration range we examined, the change in sensor response due to these environmental conditions can exceed the change in sensor response due to methane levels. Accordingly, field deployments of these sensors will require system designs that control temperature and humidity, co-located measurements and algorithms to account for these factors, or both.
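The subsetting behind this figure is simple to reproduce in principle. A synthetic sketch (the sensor response model and all coefficients are invented for illustration, not fitted to our data) shows how an environmental shift between bins can rival the methane signal itself:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "ch4_ppm": rng.uniform(2, 10, 1000),
    "temp_c": rng.uniform(10, 35, 1000),
    "h2o_pct": rng.uniform(0.5, 2.5, 1000),
})
# Hypothetical linear response: Rs falls with methane, temperature, and water vapor
df["rs_kohm"] = 100 - 3 * df["ch4_ppm"] - 0.8 * df["temp_c"] - 10 * df["h2o_pct"]

# Disjoint water vapor bins, as in the figure: dry vs humid subsets
dry = df[df["h2o_pct"] < 1.0]
humid = df[df["h2o_pct"] > 2.0]
shift = dry["rs_kohm"].mean() - humid["rs_kohm"].mean()
print(shift)  # environmental shift comparable to the full 2-10 ppm methane swing
```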
- Details
- Table headers are normally on top of the tables not under
We will correct the formatting.
- Table 1 can you use similar performance metrics? And if similar performance metrics do not exist in the papers, may be you could use this as an additional justification of your paper?
As you note, the previous studies do not use easily comparable metrics. We will further emphasize that this was a goal of our paper by adding the following paragraph immediately before Table 1:
The previous studies we cite use different performance metrics, with some reporting differing results for the same sensors. Eugster and Kling (2012) found an unimpressive R2 of 0.19 for TGS2600, for example, while Riddick et al. (2020a) found an impressive accuracy of 0.01 ppm for the same sensor. As these varying results are difficult to interpret and compare, an evaluation of these inexpensive sensors with a consistent methodology will be helpful for future work in the field.
- Table 2, can you please add a date on when the price data was gathered…. Please also specify what is meant with “fast response” and “filtered” (not really explained in the text).
We will specify in the caption that the prices were current in November 2021. The TGS2611-E00 has a built-in filter to reduce interference from alcohols and other gases, while the “fast response” TGS2611-C00 does not, according to the manufacturer. As far as we have found, the manufacturer does not specify the filter material. We will clarify this in the revision.
- In the discussion of the poor performance of TGS 2602 (and to an extent TGS 2600) it should be made clearer that these are NOT methane sensors.. it is mentioned later but at times they just look like poor performing sensors which is a little unfair to them.
We agree. The TGS2602 is not intended to sense methane, and the TGS2600 is advertised as a general gas sensor, but has been used in several previous methane studies. We will emphasize in section 4.1 that TGS2602 is not designed for methane, and that TGS2600 is marketed as general purpose.
- Figure 1: could you improve it so that you see where the reference analyzer samples form relative to the PCB? Also do you have some basic data such as residence time in your chamber. Not critical but would make for a better description. Also inlet air is lab air? Or zero air? I assume lab air because of the RH issue?
We agree that more detail here would be helpful. Referee #1 raised related issues; we reproduce the discussion here (referee #1 comments in bold):
Figure 1: What was the inlet airflow rate? To my understanding, the LI-7810 analyzer has a sampling airflow rate of 0.25 LPM. Did this go back to the testing chamber?
As the reviewer notes, the reference instrument has a flow rate of 0.25 LPM. We had no other pump in the system, and so airflow into the test chamber was the same 0.25 LPM. The outlet of the LI-7810 was vented to the outside laboratory, causing the gradual decrease in methane concentration in the test chamber for each pulse.
(Again) Figure 1: What was the response time of the LI-7810 analyzer, considering the length of tubing, averaging time, and so on? Did the authors synchronize the readings from MOx sensors and the LI-7810 by considering response time differences?
The reference analyzer was connected to the test chamber by a short length of small-diameter tubing. We found that the LI-7810 responded to the methane pulse injections into the test chamber within seconds. As we averaged all measurements to the minute and the pulse decay to background took place over a relatively long time scale, we considered this error negligible and did not synchronize the readings beyond aligning system clocks for the various devices.
In addition to the revisions discussed above, we will state that the analyzer draws from the top of the chamber, while the PCB sits in the bottom, and that the inlet takes in lab air. We will also emphasize that we had a small fan in the test chamber running continuously to ensure that the test chamber air was well mixed.
We will revise and expand the paragraph at line 124 as follows:
We built our test chamber inside a commercially available chest freezer, shown as a diagram in Fig. 1. We placed a heater (Vivosun 10”×20.75” seedling heat mat) at the bottom of the freezer, with both the freezer and heater controlled by a temperature controller (Inkbird ITC-308). As our test chamber, we placed a plastic tub inside the freezer with the sensor PCB, data acquisition unit, and temperature and humidity logger enclosed. The host computer, reference instrument, and power supply were outside of the chamber, with connections made through the lids of the freezer and tub. The reference instrument continuously drew air from the test chamber by its internal sampling pump at a flow rate of 0.25 lpm, sampling from the top of the test chamber as seen in Fig. 1. This sampling airflow was the only source of air movement into or out of the chamber. Makeup air was provided from ambient laboratory air via an inlet tube, which also connected to the top of the test chamber. The sensor PCB was placed at the bottom of the test chamber, and a small fan inside the test chamber ensured mixing.
The reference instrument was connected to the test chamber by approximately four feet of tubing. We found that the instrument responded to methane pulses injected into the test chamber within seconds; as we later averaged all data to a one-minute time scale, we considered this lag negligible and did not synchronize the systems beyond ensuring consistent system clocks.
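The minute-averaging described above can be illustrated with a short sketch. This is not the study's actual processing code; the data, timestamps, and the `minute_average` helper are hypothetical, but it shows why a few seconds of instrument lag becomes negligible once both streams are binned to the minute.

```python
import numpy as np
import pandas as pd

def minute_average(series: pd.Series) -> pd.Series:
    """Average a time-indexed series into one-minute bins."""
    return series.resample("1min").mean()

# Synthetic example: a reference stream at 1 Hz and a sensor stream at 0.5 Hz,
# with the sensor offset by a 5-second lag (standing in for instrument response).
t0 = pd.Timestamp("2022-01-01 00:00:00")
ref_idx = pd.date_range(t0, periods=600, freq="1s")
sens_idx = pd.date_range(t0 + pd.Timedelta(seconds=5), periods=300, freq="2s")

ref = pd.Series(np.linspace(2.0, 10.0, len(ref_idx)), index=ref_idx)
sens = pd.Series(np.linspace(2.0, 10.0, len(sens_idx)), index=sens_idx)

# After minute-averaging, the two streams agree closely despite the lag.
merged = pd.concat(
    {"reference": minute_average(ref), "sensor": minute_average(sens)}, axis=1
).dropna()
print(merged)
```

With both streams averaged to the minute, the residual disagreement from the lag is far smaller than typical sensor noise, which is the basis for treating the lag as negligible.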
- The RH variability is bothersome to me. The changes in RH are substantial over short periods of time. The explanation of lab air variability seems odd, as a 10% RH swing three times over one hour is strange (fig 2). It would be useful to strengthen this discussion. Given how regular it is, that data looks more like a flow or sensor issue than any real room RH variability.
We agree that further clarification on this point would be useful. Referee #1 touched on the same concern; we reproduce our response here:
Line 187: As per Figure 2, the relative humidity decreased with an increase in temperature. To me, this is related to the temperature dependency of vapor pressure. I would suggest the authors remove “likely as the result of a condensation and evaporation cycle.”
We agree that this effect is responsible for some of the change in relative humidity. However, we also saw a cycle in absolute humidity levels along with temperature. We believe this was the result of some moisture condensing on and evaporating from the walls of the test chamber as the temperature cycled.
The changes in RH occur at the same frequency as the temperature control cycling, as can be seen in Figure 2. We attribute the cycling in RH to a combination of changing vapor pressure due to temperature fluctuations and, as noted in our response to referee #1, a condensation/evaporation cycle driven by the temperature control. As our environmental chamber was a low-cost, ad-hoc design, this difficulty in controlling humidity was one of the main challenges in our study. As we discuss, more precise control over environmental conditions in future work would help identify the limits of performance for these sensors.
We will further emphasize these points in the revision by expanding the paragraph at line 184:
Due to high humidity variation in the ambient laboratory environment, the experimental setup was unable to maintain stable relative humidity levels. Although this behavior was not desired, our experimental design still provided a range of humidity levels for our analysis.
As can be seen in Fig. 2, relative humidity, water vapor levels, and temperature all show a regular cycle, with levels fluctuating several times per hour. This cycle is synchronized with the on/off period of our temperature control system. From visual observation of the system, we believe that the cycle in water vapor levels is due to condensation on and evaporation from the walls of the test chamber, driven by the cooling and heating cycle.
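The vapor-pressure contribution to the RH cycling can be made concrete with a short calculation. This is an illustrative sketch, not from the manuscript: it uses the Magnus approximation for saturation vapor pressure and hypothetical temperatures to show that warming alone, with no moisture exchange, produces RH swings of roughly the magnitude seen in Fig. 2.

```python
import math

def saturation_vapor_pressure_hpa(t_celsius: float) -> float:
    """Magnus approximation for saturation vapor pressure over water (hPa)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

# Hold absolute humidity fixed (vapor pressure of air at 50% RH and 20 C)
# and warm the chamber by 3 C: RH drops even with no moisture exchange.
e_actual = 0.5 * saturation_vapor_pressure_hpa(20.0)
rh_warm = 100.0 * e_actual / saturation_vapor_pressure_hpa(23.0)
print(f"RH after warming 20 -> 23 C at constant vapor pressure: {rh_warm:.1f}%")
```

A roughly 8-point RH drop from a 3 °C warming is comparable to the cycling observed, which is why we attribute the remainder to condensation and evaporation on the chamber walls.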
- For Table 3, and when discussing RMSE, can you please remind readers of the range of values this is based upon?
We will add a note in the table caption that the values are for the 2 to 10 ppm range we examined.
- Figure 1: A, B, C, D, and E on the PCB are hard to see.
- Figure 4 has poorer resolution than the other figures and seems to have some stray lines to the left and between the first and second columns of panels.
- Figure 3 legend: please format the formulas with subscript.
- Figure 5 and 7. please format chemical formulas as formulas (subscript 4)
We will correct these formatting and display issues as seen in the attached pdf.
-
AC3: 'Reply on RC3', Daniel Furuta, 28 Jul 2022