the Creative Commons Attribution 4.0 License.
Multi-star calibration in star photometry
Liviu Ivănescu
Norman T. O'Neill
We explored the improvement in star photometry accuracy using a multi-star Langley calibration in lieu of the more traditional one-star Langley approach. Our goal was a 0.01 calibration-constant repeatability accuracy, at an operational sea-level facility such as our Arctic site at Eureka. Multi-star calibration errors were systematically smaller than single-star errors and, in the mid-spectrum, approached the 0.01 target for an observing period of 2.5 h. Filtering out coarse-mode (super-micrometre) contributions appears mandatory for improvements. Spectral vignetting, likely linked to significant UV/blue spectrum errors at large air mass, may be due to a limiting field of view and/or suboptimal telescope collimation. Star photometer measurements acquired by instruments that have been designed to overcome such effects may improve future star magnitude catalogues and consequently star photometry accuracy.
Star photometry involves the measurement of attenuated starlight in semi-transparent atmospheres as a means of extracting the spectral optical depth and thereby estimating columnar properties of absorbing and scattering constituents such as aerosols, trace gases and optically thin clouds. Dedicated instrument development began in the late 1950s (Dachs, 1960, 1966; Dachs et al., 1966), with increased activity after 2000 (Théorêt, 2003; Gröschke et al., 2009; Pérez Ramírez, 2010; Oh, 2015). One of the earliest comprehensive investigations of star photometry errors and their influence on calibration was reported in the astronomical literature by Young (1974). Calibration strategies for retrieving accurate photometric observations in variable optical depth conditions were proposed by Rufener (1964, 1986). Those studies were recently updated and complemented using measurements from our High Arctic, sea-level observatory at Eureka, NU, Canada (Ivănescu, 2015; Baibakov et al., 2015; Ivănescu et al., 2021), using a commercial-spectrometer-based star photometer^{1} attached to a Celestron C11 telescope. This more recent work underscored certain challenges in performing calibration at such a high-latitude/low-altitude site. The remoteness of the Eureka site and the significant infrastructure requirements of the star photometer render calibration campaigns at a dedicated mountain site onerous. The alternative to a calibration campaign (particularly at an Arctic site like Eureka) is to improve on-site calibration methods by overcoming the relatively large optical depth variability typical of operational sites. Much can be learned by exploring this option at an Arctic location like Eureka (see O'Neill et al., 2016, for a discussion of optical depth variability).
Star-dependent (one-star) Langley calibration that depends on large air mass variations is the current standard in star photometry (see Pérez-Ramírez et al., 2008, 2011). This is mainly due to the limited accuracy of available extraterrestrial star magnitudes (Ivănescu et al., 2021). A good number of High Arctic stars cannot, however, be calibrated in such a way since they do not go through large elevation (i.e. air mass) changes (in the extreme case of a site at the pole, there are no elevation changes). Our goal is to demonstrate that a sub-0.01 optical depth error (partly linked to calibration errors) can be achieved by performing the type of instrument-dependent, star-independent calibration referred to in Ivănescu et al. (2021).
2.1 Langley calibration
The star photometer retrieval algorithm is based on extraterrestrial and atmospherically attenuated magnitudes of non-variable bright stars, denoted by M_{0} (provided by the Pulkovo catalogue of Alekseeva et al., 1996) and M, respectively (see Ivănescu et al., 2021, for a more comprehensive elaboration of this section). Their corresponding instrument signals, expressed in terms of magnitude, are $S_0 = 2.5\log F_0$ and $S = 2.5\log F$, respectively, with F_{0} and F being the actual measurements in counts s^{−1}. The star-independent conversion factor between the catalogue and instrument magnitudes is (Ivănescu et al., 2021)
The C factor accounts for the optical and electronic throughput of the star photometer, as well as the photometric system transformation between the instrument signal magnitude and the extraterrestrial catalogue magnitude. In terms of magnitude, the Beer–Bouguer–Lambert atmospheric attenuation law is
where m is the observed air mass, and τ is the total optical depth. Inserting Eq. (1) yields
where $x = m/0.921$. This expression can be used to retrieve C from a linear regression of M_{0}−S versus x, if τ is assumed constant. Such a procedure is referred to as the Langley calibration technique or Langley plot. In the absence of an accurate M_{0} spectrum, Eq. (2) can be used to transform Eq. (4) into
for which a catalogue is no longer required. This linear regression enables the retrieval of S_{0} instead of C and thus represents a star-dependent calibration.
The right side of Eq. (4) notably indicates that M_{0}−S is star independent: it thus represents a linear regression that any star can contribute to and, accordingly, a framework for multi-star Langley calibration.
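Restated in code, the pooled regression of M_{0}−S against x can be sketched as follows — an illustrative Python sketch with synthetic numbers (the star magnitudes, τ and noise level here are invented for the example; the authors' actual processing is their published MATLAB code):

```python
import numpy as np

def langley_multistar(M0, S, x):
    """Pooled Langley fit of Eq. (4): M0 - S = tau*x + C.
    Samples may come from one star or from several stars, since the
    left-hand side is star independent.  Returns (tau_hat, C_hat)."""
    y = np.asarray(M0, float) - np.asarray(S, float)
    tau_hat, C_hat = np.polyfit(np.asarray(x, float), y, 1)
    return tau_hat, C_hat

# Synthetic two-star example: a high star with a narrow x range and a
# low star with a wide one, observed under a constant tau = 0.12.
rng = np.random.default_rng(0)
x = np.concatenate([np.linspace(1.2, 2.0, 10),    # high star
                    np.linspace(2.5, 5.0, 10)])   # low star
M0 = np.concatenate([np.full(10, 0.4), np.full(10, 1.7)])
tau_true, C_true = 0.12, -3.0
S = M0 - (tau_true * x + C_true) + rng.normal(0.0, 0.005, x.size)
tau_hat, C_hat = langley_multistar(M0, S, x)
```

Because both stars contribute to the same y = M_{0}−S line, the fit recovers a single star-independent intercept C even though neither star alone covers the full x range.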
3.1 Measurement accuracy
The differential of (rearranged) Eq. (4) yields the calibration accuracy error:
The (δ_{x}τ+xδ_{τ}) component underscores the rationale for performing calibrations at a high-altitude site (where τ, δ_{τ} and δ_{x}τ are typically smaller) and the advantage of maintaining small x in order to minimize the xδ_{τ} contribution to δ_{C}. The sky stability during the retrieval of C may be monitored by computing τ for each sample with Eq. (4). The δ_{S} error component accounts for any systematic signal changes: optical transmission degradation, misalignment error, star spot vignetting, etc. The $\delta_{M_0}$ component accounts for any magnitude bias in the bright-star catalogue (i.e. the average of accuracy-error spectra for all catalogue stars: see Ivănescu et al., 2021, for a detailed discussion of error bias in the Pulkovo and other catalogues). Because it is a catalogue-specific constant, the optical depth retrieval accuracy will not be affected by its consistent use^{2}.
3.2 Regression precision
A linear regression applied to a plot of $y = M_0 - S$ versus x yields the slope ($\widehat{\tau}$) and intercept ($\widehat{C}$) of the Langley Eq. (4). The regression equation is then $\widehat{y} = \widehat{\tau}x + \widehat{C}$, and the linear-fit residuals are represented by $r = y - \widehat{y}$. The standard error of the regression slope and intercept for a large number of measurements^{3} can be expressed as (see, for example, Montgomery and Runger, 2011)
It should be noted that $\overline{r} = 0$ (the vanishing of the mean residual) is a corollary of the linear regression constraints.
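For large n, the slope and intercept standard errors referenced above reduce to the standard textbook expressions; a minimal sketch, assuming the usual formulas from Montgomery and Runger (2011):

```python
import numpy as np

def langley_std_errors(x, y):
    """Large-n standard errors of the Langley slope (tau_hat) and
    intercept (C_hat), computed from the linear-fit residuals
    r = y - y_hat, using the standard textbook expressions:
        s_tau = s_r / (sqrt(n) * s_x)
        s_C   = s_tau * sqrt(mean(x**2))
    where s_r is the residual standard error and s_x the spread of
    the air mass variable x."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = x.size
    tau_hat, C_hat = np.polyfit(x, y, 1)
    r = y - (tau_hat * x + C_hat)          # residuals (mean ~ 0)
    s_r = np.sqrt(np.sum(r**2) / (n - 2))  # residual standard error
    s_x = np.std(x)
    s_tau = s_r / (np.sqrt(n) * s_x)
    s_C = s_tau * np.sqrt(np.mean(x**2))
    return s_tau, s_C
```

Note that the intercept error inherits a $\sqrt{\overline{x^2}}$ factor, which is why observing at large air mass inflates the calibration error faster than the slope error.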
The Langley calibration y axis embodies two independent sets of measurements: N “measurements” of M_{0} and n measurements of S. From a pure-noise standpoint, the residuals can be represented by an ensemble of individual measurements ($r = (M_0 - S) - (\tau x + C)$) where each parameter (except C) is subject to noisy variation. Excluding the typically negligible random errors in x yields^{4}
where the standard error expression for a linear combination of random variables was employed (Barford, 1985). The subscript ϵ represents a single instance of a random (noise) measurement in S, τ or M_{0}, and σ_{ϵ} is its zero-mean standard deviation. $\sigma_{\epsilon_\tau}$ was replaced by σ_{τ} because no systematic variation was assumed in τ during the calibration period. $\epsilon_{M_0}$ represents the difference between an individual star's M_{0} accuracy errors and the averaged M_{0} catalogue bias. The $\sigma_{\epsilon_{M_0}}$ term is specific to the use of multiple stars during the calibration.
The assumption of constant τ in time (t) and observational direction (expressed in terms of m) may be problematic over long observation periods and large air mass changes. It is a useful exercise to assess the average time period and air mass range over which a degree of τ constancy (sky stability) is maintained.
Variations of a sky instability parameter (σ_{δτ}) were analyzed using δτ differences for τ measurements acquired during the 2019–2020 season in Eureka. δτ values were placed into (a) fixed Δt bins to generate δτ histograms for high stars (where $\delta\tau = \tau_\mathrm{f} - \tau_\mathrm{i}$ is computed from a later time (f) relative to an earlier time (i)) and (b) fixed Δm bins from high- to low-star m pairs. Since the δτ values of each bin generally come from distinct periods, τ_{f} and τ_{i} are expected to be uncorrelated: the τ_{i} versus τ_{f} correlation coefficient was determined to be <0.25 when τ_{p}<0.1 and Δt<1 h, and ∼0.1 otherwise (see the legend of Fig. 1 for the definition of τ_{p}). This is negligible for the purposes of our analysis, and, accordingly, they can be considered as independent variables. The approximation $\sigma_\tau \cong \sigma_{\delta\tau}/\sqrt{2}$ (Soch et al., 2021) can accordingly be employed for each Δm or Δt bin.
Those histograms often included anisotropic outliers typical of log-normal τ statistics (Sayer and Knobelspiesse, 2019). A median approach was chosen to render the statistics approximately independent of the outliers: the MAD (median absolute deviation) parameter was employed as a robust measure of histogram width (see Eq. 1.3 in Rousseeuw and Croux, 1993, for MAD details). In order to eventually convert the statistics to those of a normal distribution, an outlier cutoff of 4.5⋅MAD was defined^{5}. This particular cutoff is equivalent to the classical normal distribution outlier cutoff of 3σ since $\sigma = 1.5\cdot\mathrm{MAD}$.
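The robust-width procedure can be sketched as follows — an illustrative reconstruction of the steps described above (not the authors' MATLAB code; the use of a plain standard deviation after the cutoff is an assumption):

```python
import numpy as np

def robust_sigma_tau(delta_tau, k_cut=4.5):
    """Robust width of a delta-tau histogram:
    1. measure the histogram width with the MAD;
    2. discard outliers beyond 4.5*MAD (~3 sigma, since
       sigma ~= 1.5*MAD for a normal distribution);
    3. take the standard deviation of the surviving values;
    4. convert to sigma_tau via sigma_tau ~= sigma_dtau / sqrt(2)."""
    d = np.asarray(delta_tau, float)
    med = np.median(d)
    mad = np.median(np.abs(d - med))
    keep = np.abs(d - med) <= k_cut * mad   # outlier cutoff
    sigma_dtau = np.std(d[keep])
    return sigma_dtau / np.sqrt(2.0)
```

The MAD-based cutoff is what makes the estimate insensitive to the log-normal tail: a small fraction of gross outliers moves the median and MAD only marginally, so they are excised before the width is computed.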
Figure 1 shows σ_{τ} (computed after the outlier cutoff and using the σ_{τ} approximation given above) as a function of (a) Δt and (b) Δm. It can be shown^{6} that a calibration period of 2 h, for which n≃46 at the standard star photometer sampling rate, yields $\sigma_\tau \simeq 1.4\,\sigma_{\widehat{C}}$. This means that the calibration error ($\sigma_{\widehat{C}}$) is limited to <0.01 only if σ_{τ}<0.014. An 8 h observing period enables a more generous limit of σ_{τ}<0.028 to achieve the same calibration precision. Contour curves of σ_{τ}=0.014 and 0.028 are superimposed on Fig. 1.
Figure 2 shows the σ_{τ} variability estimation for the 2 h “fast” and the 8 h “long” calibration periods, as well as a third scenario with Δm=1 to 5. The three curves represent the standard deviation (after cutoff) of the corresponding range-aggregated data. They tend to converge with decreasing τ_{p}: the 2 h and 8 h σ_{τ} values of 0.014 and 0.028 correspond to τ_{p} values of 0.13 and 0.15, respectively (vertical dashed blue and red lines defined by the intersection with the corresponding horizontal 0.014 and 0.028 lines). The cases τ_{p}≤0.13 and 0.15 were labelled as “clear-sky” conditions because of their tendency to promote calibration stability. Their corresponding clear-sky statistics are presented in Appendix A.
Many High Arctic stars are circumpolar (i.e. they never set), and thus their air mass range is limited. Figure 3 shows air mass variation as a function of time past the transit for our dataset of the 13 brightest (and stable) stars at Eureka.
A well-defined separation is notable between high stars (m(12 h)<3.1) and low stars (m(0 h)>2.2). A large air mass range is clearly only available for the low stars (i.e. about two-thirds of our Eureka bright-star dataset). However, star vignetting, due to turbulence-induced star-spot expansion beyond the boundaries of the field of view (FOV), may affect the optical throughput of the Eureka system at m>5 (Ivănescu et al., 2021). This type of air mass constraint, combined with the low-star constraints of Fig. 3, results in only moderate Δm excursions (at the expense of substantial Δt) if only a single star is employed in a Langley-type calibration. A multi-star calibration can be exploited to mitigate such Δm and Δt limitations.
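The circumpolar geometry behind these air mass limits can be sketched from the standard elevation formula — a simplified plane-parallel illustration (m ≈ 1/sin(elev), adequate above roughly 15° elevation; operational star photometry uses refraction-corrected air mass formulas, and Eureka's ~80° N latitude is assumed):

```python
import numpy as np

def airmass(lat_deg, dec_deg, hour_angle_deg):
    """Plane-parallel air mass m ~= 1/sin(elev) of a star, from site
    latitude, star declination and hour angle (0 deg = transit):
    sin(elev) = sin(lat)sin(dec) + cos(lat)cos(dec)cos(H)."""
    lat, dec, ha = np.radians([lat_deg, dec_deg, hour_angle_deg])
    sin_elev = (np.sin(lat) * np.sin(dec)
                + np.cos(lat) * np.cos(dec) * np.cos(ha))
    return 1.0 / sin_elev

# At ~80 deg N, a circumpolar star at dec = 60 deg spans only a
# narrow air mass range between transit and anti-transit.
m_transit = airmass(80.0, 60.0, 0.0)   # upper culmination
m_anti = airmass(80.0, 60.0, 180.0)    # 12 h later
```

A high-declination star barely changes its air mass over 12 h, whereas a low star already sits at large m at transit — the dichotomy visible in Fig. 3.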
This type of calibration exploits a singular advantage of star photometry over moon and sun photometry: the capability of employing multiple extraterrestrial light sources in a relatively short period of time. In comparison with a C-determining Langley calibration using one star, the multi-star approach enables a synergistic Langley calibration that employs several stars exhibiting a wide range of air mass values over a significantly shorter period of time.
One- and multi-star Langley calibrations acquired with the Eureka star photometer on 7 December 2019 and 10 January 2020, respectively, are shown in Fig. 4. The observations for x>5 were carried out to highlight any vignetting effect due to the aforementioned star-spot expansion. The one-star case (small black dots and their associated “1-lin” regression line) shows the results for the low Procyon star (HR 2943, spectral type F5V). Its colder temperature ensures a near-infrared (NIR) brightness larger than that of all the other bright stars of Fig. 3^{7}. That reason aside, it is also arguably the optimal one-star Langley-regression choice since no other Fig. 3 bright star can duplicate its large and rapid air mass change (see the lowest black curve).
5.1 Calibration precision
The resulting one- and multi-star $\widehat{\tau}$ spectra (each spectral point representing a linear-regression Langley slope) are shown in Fig. 5a. Their associated precision errors ($\sigma_{\widehat{\tau}}$) of Eq. (7) are shown in Fig. 5b. One should note that the estimated multi-star error is substantially and consistently smaller than that of the one-star calibration. The $\widehat{C}$ and $\sigma_{\widehat{C}}$ spectra from the Langley regressions are shown in Fig. 6a and b, respectively. The $\sigma_{\widehat{C}}$ values are, in the multi-star case, significantly smaller and closer to the 0.01 target.
The generally smaller $\sigma_{\widehat{\tau}}$ values of the multi-star case are partly attributable to the one-star case being limited to a relatively smaller x range (i.e. smaller σ_{x} in Eq. 7), while the smaller $\sigma_{\widehat{C}}$ values are partly attributable to the smaller $\sigma_{\widehat{\tau}}$ values and the lower values of $\overline{x^2}$ (see Eq. 7). The $\sigma_{\widehat{C}}$ increases in the ultraviolet (UV) and NIR are discussed in Sect. 6.2. The peak around 940 nm is likely associated with a faint and noisy star signal induced by strong attenuation in the water vapour absorption band, coupled with the nonlinear nature of the optical depth in that spectral region (Pérez-Ramírez et al., 2012).
5.2 Repeatability
The robustness of the $\sigma_{\widehat{C}}$ spectra of Fig. 6b and the impact of potential systematic errors can be investigated with repeatability experiments. The $\widehat{C}$ spectra employed to produce the standard deviations^{8} shown in Fig. 7 were derived from three one-star and three multi-star Langley calibrations that were well separated in time (i.e. they were optically independent in terms of any significant correlations between the τ_{p} variations of each period) and nearly satisfied the clear-sky calibration constraints of Sect. 4. The Fig. 7 error spectra are, with the exception of larger differences in certain spectral regions, roughly coherent with the Fig. 6b spectra (including the fact that the one-star errors are significantly larger than the multi-star errors).
6.1 Data processing
Figure 8 shows dual-wavelength (400 and 1000 nm) regression tests for two of the three one-star calibrations of the previous section (two of the three dates given in the legend of Fig. 7 for the HR 2943 star) plus a third, hotter star (HR 3982, spectral type B7) that was specifically chosen to better understand the influence of temperature-driven spectral differences in the target star. The smaller regression slope and point dispersion about the HR 3982 regression line, compared with the two HR 2943 cases, are noticeable at both wavelengths (notably at 1000 nm) and are an indicator of generally clearer sky conditions.
The C values retrieved from linear regressions over an increasing x range in Fig. 8 (from the smallest x value to an artificial maximum of x_{max}) are plotted in Fig. 9. The damping out of regression noise and the asymptotic approach to the horizontal pan-x regression value as x_{max} increases can be readily observed in all three plots.
The corresponding slope-derived τ_{p} spectra are shown in Fig. 10 for three x_{max} cases (the three coloured spectra were derived for x_{max} values corresponding to the matching colours of the three vertical lines in Figs. 8 and 9). The x-dependent regression error dynamics are investigated in Appendix C. The next subsection describes potential competing causes of C variations and makes a link to τ_{p} errors^{9}.
6.2 Regression error interpretation
The sky instability plots of Fig. 1 show that the standard deviation of the optical depth increases with time and air mass separation between any two stars (this applies equally well to the variation between two positions of the same star). A systematic optical depth drift during the calibration leads to a common-signed bias (positive or negative) of the regression slope and the calibration value, relative to drift-free conditions. Figure 10a and b show spectrum-wide τ_{p} reduction as x_{max} and calibration time increase. This suggests spatial and/or temporal sky transparency instability during calibration. Such rapid and spectrally neutral variation is consistent with the domination of coarse-mode (super-µm) particles: a (post cloud-screened) mode that is mostly dominated by spatially homogeneous cloud particles at Eureka (O'Neill et al., 2016). The near-superposition of all τ_{p} spectra above 500 nm in Fig. 10c indicates stable transparency that is characteristic of a cloud-free atmosphere dominated by fine-mode (sub-micrometre) particles. A number-density-induced drift of similar fine-mode aerosol particles will generate spectrally independent variations in $\Delta\tau_\mathrm{p}/\tau_\mathrm{p}$: the larger τ_{p} value (corresponding to the larger absolute difference in the blue/UV part of the spectrum) could explain the increasingly larger UV deviations (such as between the magenta and black/green curves in Fig. 10c).
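The fine-mode argument can be illustrated with an Ångström-like spectrum τ_{p}(λ) = βλ^{−α} (the α and β values below are invented for the illustration): a pure number-density drift rescales β, leaving Δτ_{p}/τ_{p} flat in wavelength while the absolute change peaks in the UV/blue.

```python
import numpy as np

# Angstrom-like fine-mode spectrum tau_p(lambda) = beta * lam**(-alpha).
lam = np.array([0.40, 0.55, 0.70, 1.00])   # wavelength, micrometres
alpha, beta = 1.5, 0.05                    # illustrative values
tau1 = beta * lam ** (-alpha)
tau2 = 1.10 * beta * lam ** (-alpha)       # 10 % more particles
rel_change = (tau2 - tau1) / tau1          # spectrally flat (0.10)
abs_change = tau2 - tau1                   # largest at 0.40 um
```

The constant relative change combined with the largest absolute change at 400 nm is exactly the mechanism invoked for the growing UV deviations in Fig. 10c.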
The two bullet lists below summarize the specific processes that can lead to variations of calibration slope (τ_{p}) and intercept (C), traceable to real or apparent optical depth variations.
Instances of τ_{p} and C overestimation:

- A systematic coarse-mode τ_{p} increase (as described above) can have a dramatic spectrum-wide effect: flagging and discarding such measurements is, accordingly, essential. A fine-mode τ_{p} increase will predominantly affect the UV/blue part of the spectrum.

- Recent tests indicate that the optical collimation of the Eureka Celestron C11 telescope requires correction. Miscollimation is responsible for a significant part of the star spot size reported in Ivănescu et al. (2021). Correcting the attendant vignetting problem (whose consequence is a decreased star flux and an apparent increase in τ_{p}) may enable reliable measurements at x values well above the limit of x≃5 reported by Ivănescu et al. (2021).

- The angular star spot size (ω), being proportional to $\lambda^{-1/5}x^{3/5}$ (Eqs. 4.24, 4.25 and 7.70 of Roddier, 1981), effectively leads to spectrally dependent vignetting (i.e. an apparent τ_{p} and C increase) as a function of x: an increase in x from 7 to 9.5 would produce a 20 % increase in ω, equivalent to that induced by a spectral change from 1000 to 400 nm. This coupled spectral and air mass vignetting influence is consistent with Fig. C1, with the blue (0.4 µm) curve increasing at x≃7, while the increase of the red (1.0 µm) curve occurs only at x>9. This dynamic potentially dominates the large UV/blue errors seen in Figs. 7 and 10.

- Noisier star spots, attributable to increased turbulence and scintillation at large x, may induce larger centring errors and exacerbate apparent increases in τ_{p} and C due to vignetting.

Instances of τ_{p} and C underestimation:

- There is a systematic τ_{p} decrease during the calibration period (notably when the calibration starts at large τ_{p}).

- Weak signals, usually at large x and notably for hot stars in the NIR, may lead to sensitivity loss due to ADC (analog-to-digital conversion) limitations and attendant slope and intercept (τ_{p} and C) reductions.
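The spot-size scaling invoked in the overestimation list can be verified with a two-line computation (assuming the ω ∝ λ^{−1/5}x^{3/5} dependence cited from Roddier, 1981):

```python
# Star-spot scaling omega ~ lambda**(-1/5) * x**(3/5): check that
# raising x from 7 to 9.5 enlarges the spot by the same ~20 % that
# separates the 400 nm spot from the 1000 nm spot.
ratio_x = (9.5 / 7.0) ** (3.0 / 5.0)            # air-mass effect
ratio_lam = (400.0 / 1000.0) ** (-1.0 / 5.0)    # wavelength effect
```

Both ratios come out near 1.20, which is why the blue-curve vignetting onset in Fig. C1 (x≃7) precedes the red-curve onset (x>9) by roughly the quoted air mass offset.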
These factors contribute to the Fig. 10 τ_{p} dynamics and likely relate to the one-star $\sigma_{\widehat{C}}$ spectra shown in Fig. 7. A very similar spectrum is indeed observed in the case of one faint star at large air mass (Fig. 11). Such spectral dynamics, possibly dominated by the aforementioned spectral influence of vignetting, are also likely related to the similar M_{0} bias spectra shown in Figs. 4 and 11 of Ivănescu et al. (2021). The identification of the M_{0} bias source is of paramount importance, as it may guide strategic observation choices made to improve the accuracy of future star catalogues. The error envelopes about the M_{0} bias (quantified in Fig. 12) add an additional, roughly flat spectral component (in spectral regions other than those that are dominated by H-absorption bands).
The smoother NIR errors in the one-star case (comparing the black one-star curve with the red multi-star curve of Fig. 5a for λ>1050 nm) are likely due to the strong NIR signal of the much colder Procyon star. One can take advantage of this effect and develop an observing strategy that avoids using faint stars at large air mass in Eureka and still employs 12 catalogue stars at x<8 in a multi-star calibration lasting 2.5 h (see Fig. A2). The star selection operation for a given multi-star calibration should also include a random air mass selection to mitigate accuracy errors attributable to systematic optical depth variations (as an alternative to the Rufener, 1986, method). Mitigation of both starlight reduction impacts at large air mass and systematic optical depth variations is a singular advantage of the multi-star vs. one-star calibration.
It was determined that no Eureka star movement satisfied an optimal sky-transit scenario of maximum possible air mass range within the constraint of x<5. The solution to this intrinsic shortcoming of a High Arctic site is to perform multi-star calibrations: this approach incorporates the fundamental advantage of reducing the calibration period and thus minimizing optical depth variability. It is, by its very nature, a calibration that enables the retrieval of a star-independent calibration parameter.
Multi-star calibration repeatability errors ($\sigma_{\widehat{C}}$) were systematically smaller than the single-star errors and, in the central part of the spectrum, approached the target value of 0.01 for an observing period of 2.5 h. Those errors were partly affected by less than optimal clear-sky conditions (notably in the presence of cloud), with τ_{p} slightly larger than the recommended “clear-sky” value of 0.13 (see Sect. 4 and Appendix A). Coarse-mode filtering algorithms that ideally eliminate all influences of coarse-mode optical depth, specifically in a calibration scenario, are necessary to ensure the best calibration^{10}. Large UV and NIR errors can be reduced by avoiding faint stars at large x and by improving the current telescope collimation. The mitigation of miscollimation problems can, in the short term, be effected by a constraint of x<7. This can be achieved at Eureka by employing 12 constrained-magnitude stars over a 3 h calibration period (see Appendix A). A constraint of τ_{p}<0.13 may bring the calibration errors in the blue-to-red spectral range closer to the 0.01 target, with the remaining UV and NIR spectral regions being subject to the influence of M_{0} errors.
In summary, the advantages of multi-star versus one-star calibration are star-independent calibration, faster coverage of larger air mass ranges, more calibration opportunities, and a star selection capability for mitigating both the impact of starlight reduction with increasing air mass and systematic optical depth variations. These singular benefits were shown to override the drawbacks of specific star catalogue errors (i.e. the multi-star calibration performs better than the one-star case, even if the former is uniquely affected by M_{0} errors). Further improvement will only be achieved by developing a more accurate extraterrestrial star-magnitude catalogue: the UV/blue errors, likely linked to large-x spectral vignetting or fine-mode aerosol variations, are endemic to current ground-based star catalogues. This improvement may be effected from a spaceborne platform or at a high-elevation observatory (the primary goal being to reduce turbulence-induced star-spot size and optical depth variability). The use of a large-aperture telescope (limiting scintillation and low-starlight measurement errors) and a larger-FOV instrument (less prone to vignetting) will, in general, provide better results.
Figure A1a shows the τ_{p} histogram for data acquired during the 2019–2020 observing season at Eureka^{11}. The blue and red vertical lines respectively indicate the clear-sky cutoff values of 0.13 and 0.15 determined in Sect. 4 for the 2 h and 8 h calibrations. Operational conditions occurred 37 % of the time (i.e. those periods of time when measurements were not impeded by persistent thick clouds or the performance of maintenance tasks). A frequency curve of clear-sky periods (a period for which all τ_{p} values are less than the cutoff value) is presented in Fig. A1b. Measurements acquired during 2 h and 8 h clear-sky periods represented, respectively, 35.5 % and 39 % of all measurements. These numbers, transformed into an estimation of the clear-sky fraction of the total measurement time, yield values of 13 % and 14 % of the total contiguous seasonal time (0.37⋅0.355 and 0.37⋅0.39, respectively). Since the measurement season is ∼160 d (or ∼5.3 months^{12}) and given that there were 246 clear-sky periods of 2 h with τ_{p}<0.13, one may expect 46 such calibration periods per month. There were, on the other hand, 29 clear-sky periods of 8 h with τ_{p}<0.15 (or ∼5.5 per month). If a calibration can be successfully completed in ∼2 h, then there is a significantly larger probability-of-occurrence incentive for doing so.
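The availability arithmetic above can be reproduced directly from the quoted fractions:

```python
# Clear-sky availability arithmetic for the 2019-2020 Eureka season,
# using the fractions quoted in the text.
operational = 0.37              # fraction of season with usable conditions
frac_2h = operational * 0.355   # clear-sky 2 h periods: ~13 % of season
frac_8h = operational * 0.39    # clear-sky 8 h periods: ~14 % of season
months = 5.3                    # ~160 d measurement season
per_month_2h = 246 / months     # ~46 clear-sky 2 h periods per month
per_month_8h = 29 / months      # ~5.5 clear-sky 8 h periods per month
```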
The weakening of star signals with increasing air mass will progressively impact calibration quality. Figure A2 shows the availability of catalogue stars for a multistar calibration over Eureka as a function of calibration period and maximum air mass. A calibration can, for example, be carried out in 2 h with only 11 stars of our 13star dataset (Fig. 3). A 12star calibration can be carried out only if x<9.5 or if the calibration period is >2.5 h.
From Eqs. (7) and (8) one gets the error propagation into the τ Langley retrieval:
Error propagation into the calibration constant (C) retrieval is, in a similar fashion, expressed as
The coefficients k_{1}, k_{2} and k_{3} are displayed in Fig. B1b–d, respectively, for the x protocols of Fig. B1a. The blue curve shows uniformly distributed values of x, while the red curve shows a more realistic observing configuration of constant time intervals^{13}. In order to investigate more practical (smaller) ranges, the working range is incrementally truncated from both the right and left (the solid and dashed curves, respectively). A particular focus is placed on two x ranges: the solid red circles for which x<5, where k_{1}≃1.2, k_{2}≃5.3 and k_{3}≃23 (the k_{3} red curve flattens out for X>5 in Fig. B1d), and the open red circles for which x>5, where k_{1}≃0.5, k_{2}≃25 and k_{3}≃1250 (k_{3} being 50 times the “x<5” value). This strong large-x weighting drives the standard error in C. $\sigma_{\epsilon_{S_z}}$ (the zenith value of $\sigma_{\epsilon_S}$) is typically ∼ σ_{τ}, and thus ∼ $\sigma_{\epsilon_{M_0}}$ (Ivănescu et al., 2021). Since ϵ_{S} depends on x^{14}, $k_2\sigma_{\overline{\epsilon}_S}^2 \simeq k_3\sigma_{\overline{\epsilon}_{S_z}}^2$^{15} ∼ $k_3\sigma_{\overline{\tau}}^2$, both first terms of Eq. (B4) being driven by k_{3}, which flattens out for $x \in [1.086, X]$, with X>5. They will then generally tend to dominate $k_2\sigma_{\overline{\epsilon}_{M_0}}^2$ (unless N≪n), and thus the $\sigma_{\widehat{C}}$ calibration error, particularly for large x ranges.
The C values, derived from tangents applied to the Fig. 8 solid curves (the means of a Δx=1.5 sliding window), are plotted in Fig. C1. The objective of this plot is to highlight the more robust (lower-frequency) C variations (and thus C errors) as a function of x. The 400 nm C values are relatively stable up to x≃7 to 7.5, where they are subject to a large increase. The 1000 nm C pattern is similar, with an increase beginning at x≃9 (observations that are roughly consistent with the vignetting arguments of Sect. 6.2).
Final MATLAB code and data employed in the generation of the figures are freely available (see https://doi.org/10.5281/zenodo.7975245, Ivănescu, 2023).
LI: conceptualization, methodology, data curation, software, formal analysis, investigation, writing – original draft. NTO'N: validation, writing – review and editing, supervision, funding acquisition.
The contact author has declared that neither of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.
This work was supported by CANDAC (the Canadian Network for the Detection of Atmospheric Change) via the NSERC PAHA (Probing the Atmosphere of the High Arctic) project, by the NSERC CREATE Training programme in Arctic Atmospheric Science and by the Canadian Space Agency (CSA). Finally we also gratefully acknowledge the support of the CANDAC operations staff at Eureka for their numerous troubleshooting interventions.
This research has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC; grant nos. RGPCC4338422012 and CREATE 38499610) and by the Canadian Space Agency (CSA; grant nos. 21SUASACOA and 00353/15FASTA12).
This paper was edited by Daniel PerezRamirez and reviewed by two anonymous referees.
Alekseeva, G. A., Arkharov, A. A., Galkin, V. D., HagenThorn, E., Nikanorova, I., Novikov, V. V., Novopashenny, V., Pakhomov, V., Ruban, E., and Shchegolev, D.: The Pulkovo Spectrophotometric Catalog of Bright Stars in the Range from 320 TO 1080 NM, Balt. Astron., 5, 603–838, https://doi.org/10.1515/astro19970311, 1996. a
Baibakov, K., O'Neill, N. T., Ivanescu, L., Duck, T. J., Perro, C., Herber, A., Schulz, K.H., and Schrems, O.: Synchronous polar winter starphotometry and lidar measurements at a High Arctic station, Atmos. Meas. Tech., 8, 3789–3809, https://doi.org/10.5194/amt837892015, 2015. a
Barford, N.: Experimental measurements: precision, error and truth, 2nd edn., John Wiley and Sons Ltd., Chichester, https://www.worldcat.org/oclc/11622295 (last access: 4 December 2023), 1985. a
Dachs, J.: Lichtelektrische Sternphotometrie und Farbenindexmessungen durch Zählung von Photoelektronen, PhD thesis, Tübingen University, https://www.worldcat.org/title/249623335 (last access: 4 December 2023), 1960. a
Dachs, J.: Ein Photoelektronen zählendes Sternphotometer, Astron. Nachr., 289, 129–138, https://doi.org/10.1002/asna.19662890305, 1966. a
Dachs, J., Haug, U., and Pfleiderer, J.: Atmospheric extinction measurements by photoelectric star photometry, J. Atmos. Terr. Phys., 28, 637–649, https://doi.org/10.1016/0021-9169(66)90077-8, 1966. a
Goodman, L. A.: On the Exact Variance of Products, J. Am. Stat. Assoc., 55, 708–713, https://doi.org/10.1080/01621459.1960.10483369, 1960. a
Gröschke, A., Herber, A. B., Schrems, O., Schulz, K.-H., and Gundermann, J.: Development and application of a new star photometer for measuring aerosol optical depth at harsh environments, in: EGU General Assembly 2009, 19–24 April 2009, Vienna, Austria, 10485, https://meetingorganizer.copernicus.org/EGU2009/EGU2009-10485.pdf (last access: 4 December 2023), 2009. a
Ivănescu, L.: Une application de la photométrie stellaire à l'observation de nuages optiquement minces à Eureka, NU, MSc thesis, UQAM, http://www.archipel.uqam.ca/id/eprint/8417 (last access: 4 December 2023), 2015. a
Ivănescu, L.: Multistar calibration in starphotometry, code and data, Zenodo [data set], https://doi.org/10.5281/zenodo.7975245, 2023. a
Ivănescu, L., Baibakov, K., O'Neill, N. T., Blanchet, J.-P., and Schulz, K.-H.: Accuracy in starphotometry, Atmos. Meas. Tech., 14, 6561–6599, https://doi.org/10.5194/amt-14-6561-2021, 2021. a, b, c, d, e, f, g, h, i, j, k, l, m, n
Montgomery, D. C. and Runger, G. C.: Applied Statistics and Probability for Engineers, 5th edn., John Wiley and Sons, Inc., http://www.worldcat.org/oclc/28632932 (last access: 4 December 2023), 2011. a
Oh, Y.-L.: Retrieval of Nighttime Aerosol Optical Thickness from Star Photometry, Atmosphere, 25, 521–528, https://doi.org/10.14191/Atmos.2015.25.3.521, 2015. a
O'Neill, N. T., Baibakov, K., Hesaraki, S., Ivanescu, L., Martin, R. V., Perro, C., Chaubey, J. P., Herber, A., and Duck, T. J.: Temporal and spectral cloud screening of polar winter aerosol optical depth (AOD): impact of homogeneous and inhomogeneous clouds and crystal layers on climatological-scale AODs, Atmos. Chem. Phys., 16, 12753–12765, https://doi.org/10.5194/acp-16-12753-2016, 2016. a, b, c
Pérez Ramírez, D.: Caracterización del aerosol atmosférico en la ciudad de Granada mediante fotometría solar y estelar, PhD thesis, Granada, http://hdl.handle.net/10481/5628 (last access: 4 December 2023), 2010. a
Pérez-Ramírez, D., Aceituno, J., Ruiz, B., Olmo, F. J., and Alados-Arboledas, L.: Development and calibration of a star photometer to measure the aerosol optical depth: Smoke observations at a high mountain site, Atmos. Environ., 42, 2733–2738, https://doi.org/10.1016/j.atmosenv.2007.06.009, 2008. a
Pérez-Ramírez, D., Lyamani, H., Olmo, F. J., and Alados-Arboledas, L.: Improvements in star photometry for aerosol characterizations, J. Aerosol Sci., 42, 737–745, https://doi.org/10.1016/j.jaerosci.2011.06.010, 2011. a
Pérez-Ramírez, D., Navas-Guzmán, F., Lyamani, H., Fernández-Gálvez, J., Olmo, F. J., and Alados-Arboledas, L.: Retrievals of precipitable water vapor using star photometry: Assessment with Raman lidar and link to sun photometry, J. Geophys. Res.-Atmos., 117, D05202, https://doi.org/10.1029/2011JD016450, 2012. a
Roddier, F.: The Effects of Atmospheric Turbulence in Optical Astronomy, in: Progress in Optics, chap. V, Elsevier, 281–376, https://doi.org/10.1016/S0079-6638(08)70204-X, 1981. a
Rousseeuw, P. J. and Croux, C.: Alternatives to the Median Absolute Deviation, J. Am. Stat. Assoc., 88, 1273–1283, https://doi.org/10.1080/01621459.1993.10476408, 1993. a
Rufener, F.: Technique et réduction des mesures dans un nouveau système de photométrie stellaire, Publications de l'Observatoire de Genève, Série A: Astronomie, chronométrie, géophysique, 66, 413–464, http://www.worldcat.org/oclc/491819702 (last access: 4 December 2023), 1964. a
Rufener, F.: The evolution of atmospheric extinction at La Silla, Astron. Astrophys., 165, 275–286, http://adsabs.harvard.edu/abs/1986A&A...165..275R (last access: 4 December 2023), 1986. a, b
Sayer, A. M. and Knobelspiesse, K. D.: How should we aggregate data? Methods accounting for the numerical distributions, with an assessment of aerosol optical depth, Atmos. Chem. Phys., 19, 15023–15048, https://doi.org/10.5194/acp-19-15023-2019, 2019. a
Soch, J., Faulkenberry, T. J., Petrykowski, K., Allefeld, C., and McInerney, C. D.: The Book of Statistical Proofs, Open, Zenodo, https://doi.org/10.5281/zenodo.5820411, 2021. a
Théorêt, X.: AEROSTAR – Conception d'un spectroradiomètre stellaire pour l'étude des aérosols nocturnes, PhD thesis, Sherbrooke, http://savoirs.usherbrooke.ca/handle/11143/2335 (last access: 4 December 2023), 2003. a
Young, A. T.: Observational Technique and Data Reduction, in: Methods in Experimental Physics, vol. 12, chap. 3, Elsevier, 123–192, https://doi.org/10.1016/S0076-695X(08)60495-0, 1974. a
Made by Dr. Schulz & Partner GmbH (currently closed).
Such an error becomes part of the C value extracted from the Langley calibration of Eq. (4) and becomes part of the operational retrieval process when Eq. (4) is inverted to yield individual values of τ.
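The Langley extraction of C referred to above can be sketched numerically. The example below is a minimal illustration, assuming a generic log-signal Langley form ln F = C − τ·x with hypothetical values of C and τ (the exact sign conventions of the paper's Eq. 4 may differ): simulated observations are regressed against air mass, and C is recovered as the intercept at x = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" values, for illustration only
C_true, tau_true = 3.2, 0.12

# One simulated Langley run over a typical air-mass range
x = np.linspace(1.1, 5.0, 60)                        # air mass
y = C_true - tau_true * x + rng.normal(0, 0.01, 60)  # noisy ln(signal)

# Langley regression: the intercept at x = 0 is the calibration constant C
slope, intercept = np.polyfit(x, y, 1)
C_hat, tau_hat = intercept, -slope

print(f"C_hat = {C_hat:.3f}, tau_hat = {tau_hat:.3f}")
```

Once C is known, inverting the same relation for a single observation yields an individual optical depth, τ = (C − ln F)/x, which is the operational retrieval step mentioned above.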
$n>10$, where $n=\sum_{i} n_{i}$ ($n_{i}$ being the number of observations associated with star $i$).
Where $\epsilon_{\tau x}=\epsilon_{\tau}x+\tau\epsilon_{x}=\epsilon_{\tau}x$ (since $\epsilon_{x}=0$) and the variance of the $\epsilon_{\tau}x$ product (Goodman, 1960) is $\sigma_{\epsilon_{\tau x}}^{2}=\sigma_{\epsilon_{\tau}x}^{2}=\sigma_{x}^{2}\overline{\epsilon}_{\tau}^{2}+(\sigma_{\epsilon_{\tau}}^{2}\overline{x}^{2}+\sigma_{\epsilon_{\tau}}^{2}\sigma_{x}^{2})=\sigma_{x}^{2}\overline{\epsilon}_{\tau}^{2}+\sigma_{\epsilon_{\tau}}^{2}\overline{x^{2}}=\sigma_{\tau}^{2}\overline{x^{2}}$, since $\overline{\epsilon}_{\tau}=0$, $\sigma_{\epsilon_{\tau}}=\sigma_{\tau}$ and $\sigma_{x}^{2}=\overline{x^{2}}-\overline{x}^{2}$.
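This product-variance identity can be checked by Monte Carlo simulation. The sketch below uses hypothetical distributions for the air mass x and a zero-mean optical-depth error ε_τ, and compares the sample variance of ε_τ·x against the closed form σ_τ²·mean(x²):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Hypothetical distributions, for illustration only
x = rng.uniform(1.0, 5.0, n)              # air-mass samples
sigma_tau = 0.02
eps_tau = rng.normal(0.0, sigma_tau, n)   # zero-mean error with sigma = sigma_tau

# Variance of the product vs. the closed-form expression
var_product = np.var(eps_tau * x)
closed_form = sigma_tau**2 * np.mean(x**2)

print(var_product, closed_form)  # the two estimates should agree closely
```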
A cut-off we were at liberty to apply, since one is free to choose the duration of the calibration period and/or to perform outlier filtering prior to the Langley regressions.
Using $\sigma_{\tau}^{2}\simeq\sigma_{\widehat{C}}^{2}\,n/k_{3}$ (obtained from Eq. B3, with the terms in $S$ and $M_{0}$ neglected), inserting $\sigma_{\tau}/\sqrt{n}$ (i.e. $\sigma_{\overline{\tau}}$) into Eq. (7), and noting that a typical range of $x\in[1.086,5]$ yields $k_{3}\simeq 23$ (see Fig. B1).
The other bright stars, being of similar A–B type (Ivănescu et al., 2021), exhibit lower signal-to-noise ratios (SNRs) in the NIR.
Standard deviations that, we would argue, are also standard errors (each of the three $\widehat{C}$ spectra being averaged is itself more akin to a mean).
The strong positive correlation between C and τ_{p}, and between their errors, results from variations in the regression lines being effectively driven by rotations about a cluster of pivot points whose x position changes little.
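This pivot-driven correlation can be illustrated with a small Monte Carlo experiment (a sketch with hypothetical noise and air-mass values, assuming the log-signal form ln F = C − τ·x): repeated Langley fits on noisy data show that the errors in the retrieved intercept C and slope τ are strongly, positively correlated.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(1.1, 5.0, 30)             # air-mass grid of one Langley run
C_true, tau_true, sigma = 3.2, 0.12, 0.02  # hypothetical values

C_hat, tau_hat = [], []
for _ in range(5000):                     # repeat the Langley fit on noisy data
    y = C_true - tau_true * x + rng.normal(0, sigma, x.size)
    slope, intercept = np.polyfit(x, y, 1)
    C_hat.append(intercept)
    tau_hat.append(-slope)

# Noisy regression lines pivot about the data cluster, trading intercept
# against slope, so the C and tau errors are strongly, positively correlated.
r = np.corrcoef(C_hat, tau_hat)[0, 1]
print(f"corr(C_hat, tau_hat) = {r:.2f}")
```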
Clouds are usually the dominant coarse-mode component, but coarse-mode aerosols can have diverse effects, which are typically, but not always, minor.
We could speculate that the two histogram peaks near τ_{p} values of 0.1 and 0.16 are associated with the background fine-mode optical depth and the enhanced fine-mode optical depth incited by the presence of wind-blown sea salt (O'Neill et al., 2016).
Which we pragmatically define as the number of nights for which reliable measurements can be carried out for ≥30 min.
Both conditions apply to a star crossing the meridian at zenith.
If we assume $\epsilon_{S}\simeq\epsilon_{S_{z}}x$, then for a multistar calibration with $n=n_{i}N$ (see details in Sect. 3.2, footnote 3), $\sigma_{\epsilon_{S}}^{2}=\left(\sum_{i=1}^{n}\epsilon_{S_{i}}^{2}\right)/n\simeq\left(\sum_{j=1}^{n_{i}}\epsilon_{S_{z,j}}^{2}\sum_{k=1}^{N}x_{k}^{2}\right)/(n_{i}N)=\sigma_{\epsilon_{S_{z}}}^{2}\,\overline{x^{2}}$.
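The factorization step in this footnote can be verified numerically (a sketch with hypothetical counts n_i and N): when every zenith error ε_{S_z,j} is paired with every air mass x_k, the mean of the squared products equals the product of the two mean squares.

```python
import numpy as np

rng = np.random.default_rng(3)
n_i, N = 40, 25                        # hypothetical counts, for illustration

eps_Sz = rng.normal(0, 0.015, n_i)     # zenith signal errors
x = rng.uniform(1.0, 5.0, N)           # air masses

# Every error paired with every air mass: eps_S = eps_Sz * x
eps_S = np.outer(eps_Sz, x)            # shape (n_i, N), i.e. n = n_i * N values

lhs = np.mean(eps_S**2)                       # (1/n) * sum of eps_S^2
rhs = np.mean(eps_Sz**2) * np.mean(x**2)      # sigma_{eps_Sz}^2 * mean(x^2)

print(lhs, rhs)  # identical up to floating-point rounding
```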
 Abstract
 Introduction
 Calibration methodology
 Calibration errors
 Observing conditions
 Multistar calibration
 Regression error discussion
 Conclusions
 Appendix A: Calibration opportunities
 Appendix B: Relative importance of component errors
 Appendix C: Error discussion supplement
 Code and data availability
 Author contributions
 Competing interests
 Disclaimer
 Acknowledgements
 Financial support
 Review statement
 References