Pre-launch calibration and validation of the Airborne Hyper-Angular Rainbow Polarimeter (AirHARP) instrument
Brent McBride
J. Vanderlei Martins
J. Dominik Cieslak
Roberto Fernandez-Borda
Anin Puthukkudy
Xiaoguang Xu
Noah Sienkiewicz
Brian Cairns
Henrique M. J. Barbosa
- Final revised paper (published on 30 Sep 2024)
- Preprint (discussion started on 23 May 2023)
Interactive discussion
Status: closed
RC1: 'Referee report on egusphere-2023-865, entitled "Pre-launch calibration and validation of the Airborne Hyper-Angular Rainbow Polarimeter (AirHARP) instrument"', Anonymous Referee #1, 17 Jul 2023
I enjoyed very much reading the manuscript entitled "Pre-launch calibration and validation of the Airborne Hyper-Angular Rainbow Polarimeter (AirHARP) instrument". The contribution is well written, interesting and significant, and is scientifically sound. The subject is both timely and highly relevant for the community. I have only two significant remarks for the authors to consider:
- On page 7, it is mentioned: "In sensitivity studies on AirHARP dark image data, the dark counts do not depend on integration time, but are sensitive to operating temperature." This confuses me. The longer the integration time, the more dark counts should be registered, or what am I missing here? Or is it meant that the dark count rate (counts per second) does not depend on the integration time? (A compact model sketch follows these two remarks.)
- In section 3.5.1, an Avantes spectrometer is used to correct the AirHARP measurements for any variation in Ekspla laser power over the course of the testing period. Is it only used to monitor temporal variations of the laser power at a fixed wavelength, or also to check (and correct for) the difference in power between different wavelengths when scanning the Ekspla over one of the spectral bands of AirHARP? In the latter case, the spectral response of the Avantes itself should be taken into account. This could be explained in more detail.
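For what it's worth, the standard CCD dark-signal model behind my first remark is (my notation, not the manuscript's):

$DN_{\mathrm{dark}}(t_{\mathrm{int}}, T) = D(T)\,t_{\mathrm{int}} + B(T)$,

where $D(T)$ is the temperature-dependent dark-current rate (counts per second) and $B(T)$ is a temperature-sensitive electronic offset. If the measured dark counts really are insensitive to $t_{\mathrm{int}}$, that would suggest the offset term $B(T)$ dominates over true dark current, which could be stated explicitly.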
Apart from that, I only have minor, mostly textual remarks:
- Some acronyms, while probably standard in the community, are not explicitly explained in the text, such as FPA, SRF, and VZA. For clarity, they should be written out at least once when they are first used.
- At the end of page 3 it is mentioned "...NASA PACE mission in 2023.", whereas earlier it is mentioned that the PACE mission will be launched in 2024.
- Section 2 on page 4: AirHARP has three detectors, not one. This is a bit unclear at the beginning of this section, where it is mentioned that the four channels are selected passively using a custom stripe filter on top of a charge-coupled device (CCD) detector FPA. It could be made clear from the beginning that AirHARP has three detectors, each capturing a different angle of polarized light.
- At the top of page 20, a reference is made to Eq. 17, but I think Eq. 16 is meant: "It is convention to sometimes include an extra term in the denominator of Eq. (17) to account for the view zenith angle."
- Labels (a) and (b) are missing in Fig. 3
- Caption of Fig. 4: it is not explicitly mentioned what is shown in panel (c).
- Figure 10: Unity is the black dashed line, but a black solid line is indicated in the legend.
- Minor typos: double space in section 1 on page 2 "improving our knowledge of microphysical properties"; typo in the Fig. 1 caption: "... which carries the same the 1.5U instrument"; section 3.2 on page 8: "...will vignette photons toward the edge the FPA."; page 9: "...where f is the valye of the flatfield correction"; section 6 (Appendix): "in the above text defined as R and and DOLP is described below"; strange page break in the middle of a sentence on page 32: "we found that the intercomparison with AirHARP did not..."; page 35: "Martjin Smit".
Citation: https://doi.org/10.5194/egusphere-2023-865-RC1
- AC1: 'Reply on RC1', Brent McBride, 08 Oct 2023
RC2: 'Comment on egusphere-2023-865', Anonymous Referee #2, 21 Aug 2023
This is a good manuscript that is within scope and of sufficient merit for publication after minor revisions. I appreciate the level of detail to which the authors describe the calibration and uncertainty assessment process for this class of instrument. Most of my comments address vagueness or sloppiness in the description with the intent of making the work more easily ‘repeatable’. These should be easy to address.
I have two main complaints:
- Figure 10b is used to demonstrate that AirHARP and the RSP instrument agree within 1% in DoLP. Because of the log scaling of the plot, I don't agree that one could conclude that from the figure for the majority of cases, for which DoLP > 0.1. My comments below expand on this and offer suggestions on how to resolve it.
- I think the simplified AirHARP uncertainty model in the appendix is too simplified. Furthermore, the paper doesn’t show how this model was derived based on previous equations. It also does not depend on scene reflectance, which it should. More details on this below.
I do think this is an important paper and commend the authors for clearly describing an approach that has been developed over several years. It will be an important tool as the PACE/HARP2 instrument and other similar polarimeters come online.
Abstract:
Line 18: clarify whether RMS is defined in units of DoLP or is relative. Also note that the RMS acronym should be spelled out. My personal preference is to use the terminology in Povey et al.:
Povey, A. C. and Grainger, R. G.: Known and unknown unknowns: uncertainty estimation in satellite remote sensing, Atmos. Meas. Tech., 8(11), 4699–4718, https://doi.org/10.5194/amt-8-4699-2015, 2015.
… in which case this statement would be more like “One sigma uncertainty of 0.25”
Line 19: Spell out FOV
Rest of the paper
Page 2, line 44: again here you might want to point out that 0.5% is in units of DoLP, not relative, as is the case for the radiometric calibration.
Page 2, lines 46-50, and page 3, lines 55-60:
Other SPEX references you might want to consider adding:
Rietjens, J., Campo, J., Chanumolu, A., Smit, M., Nalla, R., Fernandez, C., Dingjan, J., van Amerongen, A., and Hasekamp, O.: Expected performance and error analysis for SPEXone, a multi-angle channeled spectropolarimeter for the NASA PACE mission, in: Polarization Science and Remote Sensing IX, pp. 34–47, SPIE, 2019.
van Amerongen, A., Rietjens, J., Campo, J., Dogan, E., Dingjan, J., Nalla, R., Caron, J., and Hasekamp, O.: SPEXone: A compact multi-angle polarimeter, in: Proc. SPIE 11180, International Conference on Space Optics (ICSO 2018), 2018.
Also note that the SPEX spectral resolution is different for intensity and polarization: polarimetric samples are taken every 10-40 nm, depending on the position in the spectrum.
Page 3, line 65: here you refer to the instrument as MSPI; above it was described as AirMSPI.
Page 3, lines 71-72: AirMSPI needs to aggregate pixels to reach <0.005 DoLP. This is further explored in:
van Harten, G., Diner, D. J., Daugherty, B. J. S., Rheingans, B. E., Bull, M. A., Seidel, F. C., Chipman, R. A., Cairns, B., Wasilewski, A. P., and Knobelspiesse, K. D.: Calibration and validation of Airborne Multiangle SpectroPolarimetric Imager (AirMSPI) polarization measurements, Appl. Optics, 57(16), 4499–4513, https://doi.org/10.1364/AO.57.004499, 2018.
Knobelspiesse, K., Tan, Q., Bruegge, C., Cairns, B., Chowdhary, J., van Diedenhoven, B., Diner, D., Ferrare, R., van Harten, G., Jovanovic, V., Ottaviani, M., Redemann, J., Seidel, F., and Sinclair, K.: Intercomparison of airborne multi-angle polarimeter observations from the Polarimeter Definition Experiment, Appl. Optics, 58(3), 650–669, https://doi.org/10.1364/AO.58.000650, 2019.
Note the difference in the conclusions of these two papers. It's not particularly relevant to this paper, though.
Page 4, line 95: I'm not familiar with the terminology "Alternative Vision". Can you provide a reference?
Section 3.1: I like the differentiation between AirHARP, HARP2, and HARP CubeSat in the language, and the use of 'HARP' as a general description, but based on this I would have expected this section to use the language of HARP rather than AirHARP. Does that mean the detector specification and background correction differ between the various instruments? Perhaps so, but I think it would have been nicer to include details on all three instruments in Table 1.
Figure 3: needs labels for (a) and (b) in the figure. There also appears to be an irregular border surrounding the plots.
Page 7, line 174: AirHARP and HARP CubeSat have an internal shutter; does this mean PACE HARP doesn't?
Page 7, last paragraph: considering that the CCD dark current differs in the cross-track direction, would it have made more sense to orient the CCD 90° from this, so that the vignetted areas used for the synthetic dark calculation are more uniform?
Page 8, line 198-199: “with a port fraction <5% and sphere multiplier >10 (PerkinElmer)” uses terminology that may not be familiar to all readers – please define port fraction and sphere multiplier.
Page 8, line 206: I’m confused by what appears to be a link: andor.oxinist.com, which is actually a reference.
Page 10, line 260: Wouldn't it have been simpler to represent this as integration time instead of 'integration lines'? It is confusing at first glance because larger values mean opposite things: for example, the statement "Sensor B saturates earlier than Sensors A or C" is confusing because the lowest 'integration line' corresponds to the longest integration time, if I understand correctly.
Section 3.3: If possible, add a sentence or two explaining the likelihood that doubling the integration time is identical to doubling the sphere intensity. This test, which varies exposure time but not intensity, relies on this.
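Compactly, the reciprocity assumption this test relies on is (my notation):

$DN(t, \Phi) = g(t\,\Phi) \;\Rightarrow\; DN(2t, \Phi) = DN(t, 2\Phi)$,

i.e., the response depends only on the product of integration time $t$ and photon flux $\Phi$, so doubling the exposure is interchangeable with doubling the sphere intensity.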
Page 11, line 264: how is the 'linear region' defined at 3000 ADU? I think I understand what you are doing in Fig. 5b, which expresses this, but I feel you need another sentence or two to explain how you get from Fig. 5a to Fig. 5b.
Equation 4: I was confused by this equation until I realized that the DNflat on the left-hand side of the equation is different from that on the right-hand side: the left-hand side is the value of DNflat from the linear fit to the data, not the actual data. If my understanding is correct, I suggest using some notation to indicate that they are different quantities.
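For example (purely illustrative, since I am only paraphrasing Eq. 4), a hat could mark the fitted quantity:

$\widehat{DN}_{\mathrm{flat}} = a\,t_{\mathrm{int}} + b$,

keeping plain $DN_{\mathrm{flat}}$ for the measured data on the right-hand side.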
Page 16, line 396: Can you clarify if the polarimetric coefficients are determined for all pixel bins, or just the nadir bin?
Page 16, line 398: do you mean ‘deviate’ rather than ‘derivate’?
Page 17, line 412: you’re referencing equation 5, but do you mean equation 10 instead?
Page 17 line 413: is sigma_c_ij really from Table 2a rather than 2b?
Page 18, line 440: I suspect that you’re referring to a different equation here than eq 5, suggest you verify all references to equations are correct
Page 18, line 442: I thought you were referring to “Detector A, B, C” rather than “Sensor 1, 2, 3”. Is there a difference? If so, explain, if not, please make the terminology consistent
Page 20, line 484: “the illumination of the nine lamps is linear” – does this mean that each lamp adds linearly to the total illumination?
Page 21 line 501 and equation 17: If gamma doesn’t mean the same thing as in equation 11, I suggest using a different parameter. If it does, please note this.
Page 21 line 504 “compatible with zero within 3 sigma” what does this mean? Does it mean ‘equivalent to zero’? I presume so, but I don’t know what within 3 sigma means in this context.
Page 23, line 560: this references characterization in "late 2022". Is this a typo? If this has already happened, please rephrase (and note where the error model is documented).
Page 24, lines 571-9: I think you reference them earlier, but this would be a good place to put references for the LMOS and ACEPOL field campaigns as well
Page 24, line 591: does this require a priori information on wind speed / surface roughness?
Page 25, line 600: perhaps some citations for the RSP instrument here, specifically about calibration?
Page 25, line 610: is the RSP footprint a circle or an oval? There is the IFOV, but also an integration time that smears it into an oval. The Knobelspiesse 2019 paper you cite has a model for this, as does McCorkel et al. 2016. It would potentially make your fits slightly better.
McCorkel, J., Cairns, B., and Wasilewski, A.: Imager-to-radiometer in-flight cross calibration: RSP radiometric comparison with airborne and satellite sensors, Atmos. Meas. Tech., 9(3), 955–962, https://doi.org/10.5194/amt-9-955-2016, 2016.
Page 27, line 663: only Fig 10b appears to be log-scaled
Figure 10 refers to "Figure 32", which isn't a part of the paper. I presume you mean Fig. 9.
Figure 10 (right): This figure doesn't convince me that RSP and AirHARP agree within 1% for the log-scaled DoLP. Because of overplotting, it is impossible to tell whether the difference exceeds 1% for DoLP > 0.1 (possibly the majority of observations, based on Fig. 9). It would have been better to plot the difference between RSP and AirHARP, and better still to consider how that difference compares to the paired RSP and AirHARP uncertainty estimates. Validation of the uncertainty model is what matters most for algorithms that will use that model in retrieval. The models are not fixed: for RSP, the uncertainty varies with DoLP and reflectance; for AirHARP, the simplified model shows it varies with DoLP (see my later comment about the AirHARP simplified model). Furthermore, you cite papers (Knobelspiesse et al. 2019, Smit et al. 2019, van Harten et al. 2018) that I'm not sure support the assertion that the difference is 'reasonable', since they relate to instruments other than AirHARP. However, they do provide examples of more in-depth analysis of this.
So, my recommendation is to redo Figure 10b to show the difference between RSP and AirHARP scaled by the root-sum-square of the two uncertainty estimates at the given DoLP and reflectance values. For one-sigma uncertainties, 67% should agree within those bounds. You could also check with 2 sigma / 95%.
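A minimal sketch of that check in Python, assuming matched samples and per-sample one-sigma uncertainty estimates for both instruments (all names here are hypothetical, not from the paper):

    import numpy as np

    def dolp_coverage(dolp_rsp, dolp_harp, sig_rsp, sig_harp):
        # Difference normalized by the root-sum-square of the paired
        # one-sigma uncertainty estimates at each DoLP/reflectance value.
        z = (dolp_rsp - dolp_harp) / np.sqrt(sig_rsp**2 + sig_harp**2)
        # For well-calibrated uncertainties, ~67% of |z| should fall
        # below 1 and ~95% below 2.
        return np.mean(np.abs(z) < 1.0), np.mean(np.abs(z) < 2.0)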
Appendix A, line 760
You show a 'simplified' AirHARP error model, which is what an algorithm developer would use. Presumably this comes from Eqs. 19 and 14, but those are not expressed in terms of reflectance or DoLP. This appendix should show the full model for reflectance and DoLP, and then the simplified model as you show it.
Additionally, I am concerned that the simplified model you show is too simplified. The DoLP uncertainty does not depend on shot noise or reflectance, and I find that this drives performance for scenes over the ocean. The papers you cite (Knobelspiesse et al. 2019, Smit et al. 2019, van Harten et al. 2018) all focus to some extent on this. Of course, the binning of AirHARP compared to RSP will drive down shot noise. Random uncertainty is reduced by sqrt(n), so I am also confused why you have the 1/B term in Eqs. (26) and (27) rather than 1/sqrt(B) or similar.
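For reference, the scaling I have in mind: averaging $B$ independent samples reduces a random standard uncertainty as

$\sigma_{\mathrm{bin}} = \sigma_{\mathrm{pix}}/\sqrt{B}$,

while the $1/B$ factor applies to the variance, $\sigma^2_{\mathrm{bin}} = \sigma^2_{\mathrm{pix}}/B$. If Eqs. (26) and (27) are written for standard uncertainties, the $1/B$ term looks like a conflation of the two.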
Finally, I would have expected some decrease in performance from the center pixels, where the calibration is performed, to the other pixels to which it is extended. Is this expressed in the calibration coefficients? Knowledge of this characteristic, if it is indeed different, would also be useful for algorithm developers, who could account for it in retrieval algorithms.
Citation: https://doi.org/10.5194/egusphere-2023-865-RC2
- AC2: 'Reply on RC2', Brent McBride, 08 Oct 2023
CC1: 'Comment on egusphere-2023-865', D. J. Diner, 27 Aug 2023
This is a generally well-written paper explaining the AirHARP calibration process. Several detailed comments/questions are below.
Lines 65-72: For consistency with earlier text, suggest changing “MSPI” to “AirMSPI”.
Line 67: Change “gimble” to “gimbal”.
Line 68: Change “FPA” to “focal plane array (FPA)” (spell out acronyms when first used).
Line 68: The AirMSPI FPA is two-dimensional but not used in the sense of a framing area array. Perhaps change “two-dimensional” to “pushbroom”.
Line 81: I believe PACE launch is now in 2024.
Lines 95 and 832-833: Check the Alternative Vision link. I couldn’t get it to work.
Line 150: The Semiconductor Components Industries 2015 reference is missing.
Lines 172-173: If the dark counts are sensitive to operating temperature, then they are either related to dark current, or there is some offset in the video signal that is temperature sensitive. Counts attributed to dark current would presumably be sensitive to integration time. Do you have an explanation of why the dark counts are not integration-time dependent?
Equation (2) and surrounding text: It might be helpful to put a subscript on the delta parameter to show that it is pixel dependent. Does DN_raw[0-200, 1848-2048] represent a mean (or perhaps median) of dark values over pixels 0-200 and 1848-2048? And are 0-200 and 1848-2048 the full range of masked pixels? In line 209 you state that the active science area is pixels 200-1800.
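For concreteness, here is how I read the synthetic dark construction, assuming a per-row median over the masked columns (the function name and the choice of median are my guesses, not the paper's):

    import numpy as np

    def synthetic_dark(dn_raw):
        # dn_raw: 2-D raw frame; columns 0-200 and 1848-2048 are taken as
        # the masked (optically dark) regions flanking the active science
        # area, as I read Eq. (2).
        masked = np.concatenate([dn_raw[:, 0:200], dn_raw[:, 1848:2048]], axis=1)
        # One dark estimate per detector row, robust to hot pixels.
        return np.median(masked, axis=1)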
Lines 198-199: Define the terms port fraction and sphere multiplier. The Perkin Elmer reference is missing.
Line 204: The smoothing process could cause individual pixels with greater or less responsivity to have a residual calibration artifact. Has this been considered?
Line 206: The andor.oxinist.com link does not work.
Line 209 and Figure 4b: If the active area is pixels 200-1800, why don’t the plots cover this full range? Is this because of the sliding smoothing window?
Line 218 et seq.: Perhaps put subscripts on the flatfield correction factor f to indicate that it is pixel (and row) dependent.
Line 234: Are the optics perfectly telecentric, so that the AOI on the detector is exactly 0° for all pixels? In practice there is generally some slight non-telecentricity, on the order of a fraction of a degree to a few degrees.
Section 3.3: The linearization curves are derived by changing integration time, but not illumination level. Have you done any tests to verify that reciprocity holds? Why is linearization done after the flat-fielding correction? Shouldn't linearization occur first?
Line 418: Define what a superpixel is, as this is the first time this term is introduced.
Line 424: Spell out SRF when first used (applies to all acronyms).
Lines 444-446: Text indicates that the narrowing of the leading edge of the 870 nm channel was discussed earlier; however, I could not find such a discussion.
Lines 446-545: The rolloff at the short wavelengths of the 440 nm band changes the effective band center and bandwidth of this channel. I don't follow the logic of the "Rayleigh-like" SRF adjustment. Why not simply represent this channel by its effective spectral parameters, which roughly reproduce the same radiometric output as the actual SRF?
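By "effective spectral parameters" I mean the usual SRF-weighted moments, e.g. an effective band center

$\lambda_{\mathrm{eff}} = \int \lambda\,S(\lambda)\,\mathrm{d}\lambda \big/ \int S(\lambda)\,\mathrm{d}\lambda$,

with an analogous SRF-weighted definition for the effective bandwidth; these would absorb the short-wavelength rolloff directly.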
Line 461: Leading and trailing edges suggest something to do with flight direction. Blue edge or short-wavelength edge, and red edge or long-wavelength edge would be better terminology in my opinion.
Line 469. I think you mean Eq. (15).
Line 475: Do you mean solar zenith angle rather than view zenith angle?
Line 560: Since this refers to testing in late 2022, were the referenced measurements acquired?
Line 611: Change “RSB” to “RSP”.
Lines 686-688: I would have expected the sunglint pattern to have a steeper variation in signal as a function of view angle than the desert scene, so I don’t follow the argument that the ocean signal is more robust against angular misregistration.
Figure 4 caption: Perhaps change "only the SNR remains" to "only pixel-to-pixel variations due to noise remain".
General comment: Directional measurements of radiance, multiplied by pi and normalized by illumination irradiance, are referred to as “reflectance” throughout the paper. Per Schaepman-Strub et al. (2006), Rem. Sens. Environ. 103, 27-42 and Nicodemus et al. (1977), NBS publication, reflectance is a ratio of fluxes (exitance/irradiance), hence a measure of albedo. In the nomenclature defined in these publications, the quantity referred to as reflectance in McBride et al. is a “reflectance factor” for illumination at normal incidence.
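Concretely, the quantity reported in the paper is, for normal-incidence illumination,

$R = \pi L / E_0$,

a reflectance factor formed from the directional radiance $L$ and the illumination irradiance $E_0$, whereas reflectance proper is the ratio of fluxes, $\rho = M/E$ (exitance over irradiance), i.e., a measure of albedo.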
Citation: https://doi.org/10.5194/egusphere-2023-865-CC1
- AC3: 'Reply on CC1', Brent McBride, 28 Oct 2023