A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras are presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production, where cameras point skyward and use 180° fisheye lenses to capture the entire sky hemisphere.
The power output variability of renewable energy sources poses challenges to their integration into the electricity grid. Forecasting of renewable power generation (e.g., Monteiro et al., 2009; Perez et al., 2010; Kleissl, 2013) enables more economical and reliable scheduling and dispatch of all generation resources, including renewables, which in turn accommodates a larger amount of variable supply on the electricity grid. Specifically for solar power forecasting, a number of technologies are being applied: numerical weather prediction (e.g., Lorenz et al., 2009; Mathiesen and Kleissl, 2011; Perez et al., 2013); satellite image-based forecasting (e.g., Hammer et al., 1999; Perez and Hoff, 2013); and stochastic learning methods (e.g., Bacher et al., 2009; Marquez and Coimbra, 2011; Pedro and Coimbra, 2012). For very short-term (15 min ahead) solar power forecasting on the kilometer scale, sky imaging from ground stations has demonstrated utility (Chow et al., 2011; Urquhart et al., 2013; Marquez and Coimbra, 2013; Yang et al., 2014).
Some of these sky imaging methods require the camera to be geometrically calibrated; i.e., each pixel must be associated with a corresponding view direction. Together with cloud height estimates, the view direction allows geolocation of clouds and their shadow projections such that their position is known relative to solar power plants. Geometric calibration is a common task in photogrammetry and computer vision, and calibration methods have been developed for a variety of applications. Some methods for calibrating a stationary camera require the use of calibration equipment or setups (Tsai, 1987; Weng et al., 1992; Heikkilä and Silvén, 1996; Shah and Aggarwal, 1996) or planar targets (Wei and Ma, 1993; Sturm and Maybank, 1999; Zhang, 2000). Geometric scene information can be used to calibrate the camera's internal parameters (Liebowitz and Zisserman, 1998) or estimate lens distortion (Brown, 1971; Devernay and Faugeras, 2001; Tardif et al., 2006). Scenes with parallel or perpendicular lines or primitive shapes are not generally available for skyward-pointing cameras, and thus there are no structures from the built environment around which to base a generic and automated calibration procedure.
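The pixel-to-view-direction mapping is what ultimately enables cloud geolocation. As an illustration only (the function names, the local east-north-up frame, and the flat-ground assumption are ours, not from the paper), a calibrated view ray can be intersected with a horizontal plane at the cloud base height, and the cloud can then be projected along the sun direction to locate its ground shadow:

```python
import math

def cloud_ground_position(view_dir, cloud_height):
    """Intersect a camera view ray with a horizontal plane at cloud_height.

    view_dir: unit vector (x, y, z) in a local east-north-up frame, z > 0.
    Returns the (east, north) position of the cloud relative to the camera.
    """
    x, y, z = view_dir
    if z <= 0:
        raise ValueError("view direction must point above the horizon")
    s = cloud_height / z          # scale factor along the ray
    return (s * x, s * y)

def shadow_position(cloud_en, cloud_height, sun_zenith, sun_azimuth):
    """Project the cloud along the sun direction down to the ground plane.

    Angles in radians; azimuth measured clockwise from north.
    """
    d = cloud_height * math.tan(sun_zenith)   # horizontal shadow offset
    east = cloud_en[0] - d * math.sin(sun_azimuth)
    north = cloud_en[1] - d * math.cos(sun_azimuth)
    return (east, north)
```

For example, a cloud seen along the ray (0.6, 0, 0.8) with an 800 m cloud base lies 600 m east of the camera; with the sun at 45° zenith in the south, its shadow falls 800 m north of the cloud position.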
Cameras used for solar power forecasting often employ fisheye lenses, which require appropriate camera modeling and associated model parameter estimation methods due to the large distortion required to achieve the approximately 180° field of view.
Providing detailed and quantitative yet widely applicable specifications for geometric camera calibration in solar energy forecasting applications is difficult. The impact of geometric calibration errors depends on cloud size, cloud speed, forecast averaging interval, geometry of cloud and camera relative to the plant, etc. For an illustration of geometrical relationships and sensitivity to cloud height errors see Nguyen and Kleissl (2014).
The calibration approach taken here is sometimes referred to as stellar calibration, where the 3-D position of an object or set of objects is treated as known. In particular, the sun position in the sky is treated as a known input which is used along with the corresponding measured sun position in an image to calibrate a stationary camera of fixed focal length. Sun position has been used previously for camera calibration. Lalonde et al. (2010) have used manual image annotation to select the sun position in a few images, and with this estimated the focal length, principal point, and two of the three rotational degrees of freedom (the camera horizontal axis was assumed parallel to the ground). The work presented here builds on this idea and extends it using a more generalized camera model and automated sun detection. The camera model here allows any pose, non-square pixels, and both radially symmetric and decentering distortion components.
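The "known input" side of stellar calibration is the sun's angular position. As a simplified sketch only (an operational system would use a high-accuracy solar ephemeris; the low-order declination and hour-angle formulas below are accurate to roughly a degree), the sun's zenith and azimuth can be computed from date, local solar time, and latitude:

```python
import math

def solar_position(day_of_year, solar_time_h, lat_deg):
    """Approximate solar zenith and azimuth angles (degrees).

    Uses a low-order declination formula and the hour angle; accuracy is
    only about 1 degree, sufficient for illustration but not calibration.
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_time_h - 12.0)      # deg; 0 at solar noon
    phi, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    sin_el = (math.sin(phi) * math.sin(d)
              + math.cos(phi) * math.cos(d) * math.cos(h))
    elevation = math.asin(sin_el)
    # azimuth measured clockwise from north (atan2 form, from south + 180)
    az = math.atan2(math.sin(h),
                    math.cos(h) * math.sin(phi) - math.tan(d) * math.cos(phi))
    zenith = 90.0 - math.degrees(elevation)
    azimuth = (math.degrees(az) + 180.0) % 360.0
    return zenith, azimuth
```

At solar noon on the summer solstice at 36.6° N latitude, this yields a zenith angle near 13° with the sun due south (azimuth 180°).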
The layout of this paper is as follows. Section 2 discusses the forward and backward camera model. Section 3 discusses the imaging equipment and solar position input used for the calibration process. Section 4 provides details of the calibration procedure: initialization, linear estimation, and nonlinear estimation. Section 5 provides results for both measured solar position input and synthetic data. Synthetic data are used to assess the uncertainty in calibration performance and parameter estimation as a function of measurement uncertainty.
The forward camera model projects points from a 3-D scene onto the image
plane. The backward camera model described in Sect.
The standard model for a camera without distortion is a 3-D to 2-D projective
transformation, mapping points
In summary, the model of a camera given by Eq. (1) contains six extrinsic (external) and five intrinsic (internal) camera parameters and thus has 11 degrees of freedom. While the perspective projection camera model has been widely used, it does not account for lens distortion and assumes that the camera is a central projection camera. Since we seek to develop a model for use with a fisheye lens exhibiting a significant amount of distortion, the above model must be modified appropriately.
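The distortion-free perspective projection x = K[R|t]X can be sketched as follows (an illustrative pure-Python version; K holds the five intrinsic parameters and R, t the six extrinsic degrees of freedom):

```python
def project_point(X, K, R, t):
    """Pinhole projection x = K [R|t] X for a world point X = (x, y, z).

    K: 3x3 intrinsics [[fx, s, cx], [0, fy, cy], [0, 0, 1]] (5 intrinsic DOF),
    R: 3x3 rotation, t: length-3 translation (6 extrinsic DOF) -> 11 DOF.
    Returns inhomogeneous pixel coordinates (u, v).
    """
    # world -> camera frame
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # apply intrinsics in homogeneous coordinates
    u = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (u[0] / u[2], u[1] / u[2])
```

For a camera at the origin with identity rotation, a point on the optical axis projects exactly to the principal point (cx, cy).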
An equivalence class
The most common form of distortion in dioptric imaging systems with a photo
objective lens is radially symmetric distortion. Several adjustments to the
perspective projection model to account for radially symmetric distortion
have been proposed for small field of view lenses exhibiting moderate amounts
of pincushion or barrel distortion (e.g., Slama et al., 1980). In order to
generate a one-to-one mapping of hemispherical radiance (180° field of view) onto a finite image area, several standard fisheye projections have been defined (Eqs. 8b–e).
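The standard fisheye projections are conventionally written as the radial image height r as a function of the angle θ from the optical axis. A sketch of the four usual projections (we assume these are the ones in Eqs. 8b–e; the names used below are the conventional ones):

```python
import math

def fisheye_radius(theta, f, projection):
    """Radial image height r(theta) for the standard fisheye projections.

    theta: angle from the optical axis (radians); f: focal length.
    """
    if projection == "equidistant":      # r = f * theta
        return f * theta
    if projection == "equisolid":        # r = 2 f sin(theta / 2)
        return 2.0 * f * math.sin(theta / 2.0)
    if projection == "stereographic":    # r = 2 f tan(theta / 2)
        return 2.0 * f * math.tan(theta / 2.0)
    if projection == "orthographic":     # r = f sin(theta)
        return f * math.sin(theta)
    raise ValueError("unknown projection: %s" % projection)
```

All four agree to first order near the optical axis (r ≈ fθ) and differ most at the horizon (θ = 90°), where, for example, the orthographic projection reaches only r = f while the equidistant projection reaches r = fπ/2.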
Fisheye lens designers generally strive to meet one of the above projections,
but due to manufacturing and assembly tolerances, the standard projections
(Eqs. 8b–e) only approximate a particular lens-camera system. In order to
model wide-angle and fisheye lenses more accurately, a number of models have
been proposed (e.g., Kannala et al., 2006; Shah and Aggarwal, 1996). Here,
instead of modeling the radially symmetric distortion using a polynomial in
In addition to radially symmetric distortion, lenses exhibit tangential
distortion. This deviation from the radial alignment constraint (Tsai, 1987)
causes the measured azimuth of a point
The Brown–Conrady distortion model (Brown, 1971) formulates the radial
Expanding the aberrations due to decentering as developed by Conrady (1919) as in Eq. (10),
one finds that the second and third order terms in
The use of the Brown–Conrady decentering distortion model for a fisheye lens
should only be considered as an expedient for model fitting, and not as
a physical description of optical distortion. Conrady derived the decentering
formulae following a paraxial method he devised to analytically obtain the
five classical Seidel aberrations (Conrady, 1918). Equations 10a and 10b are therefore only valid under the small angle approximation
The radial and tangential decentering distortion can be converted to the corresponding Cartesian components as
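In Cartesian form, the Brown–Conrady model combines a radially symmetric polynomial with the two decentering terms. The sketch below uses a common textbook convention (the sign and ordering of p1/p2 vary between references, so this should not be read as the paper's exact formulation):

```python
def brown_conrady(x, y, k, p):
    """Apply Brown–Conrady distortion to normalized image coordinates (x, y).

    k: radial coefficients [k1, k2, ...] applied as 1 + k1*r^2 + k2*r^4 + ...
    p: decentering coefficients [p1, p2].
    Conventions for p1/p2 vary between references.
    """
    r2 = x * x + y * y
    radial = 1.0 + sum(ki * r2 ** (i + 1) for i, ki in enumerate(k))
    xd = x * radial + p[0] * (r2 + 2.0 * x * x) + 2.0 * p[1] * x * y
    yd = y * radial + p[1] * (r2 + 2.0 * y * y) + 2.0 * p[0] * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity, which is a useful sanity check when fitting.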
Summarizing the results of this section, the forward projection of a 3-D
space point to 2-D pixel coordinates consists of the following four steps.
Euclidean transformation Cartesian to spherical coordinates Lens-camera projection with distortion Affine transformation
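The four steps above can be chained into a single forward function. This sketch assumes an equidistant nominal projection with a residual radial polynomial and omits decentering distortion and the skew term for brevity; the parameter names are illustrative, not the paper's notation:

```python
import math

def forward_project(X, R, t, f, k_radial, fx, fy, cx, cy):
    """Sketch of the four-step forward model.

    1. Euclidean transformation: world -> camera frame.
    2. Cartesian -> spherical coordinates (zenith theta, azimuth phi).
    3. Equidistant lens projection r = f*theta with a residual radial
       polynomial in theta (decentering distortion omitted here).
    4. Affine transformation to pixel coordinates.
    """
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    x, y, z = Xc
    theta = math.atan2(math.hypot(x, y), z)     # angle from optical axis
    phi = math.atan2(y, x)                      # azimuth about the axis
    r = f * theta * (1.0 + sum(k * theta ** (2 * (i + 1))
                               for i, k in enumerate(k_radial)))
    u = fx * r * math.cos(phi) + cx
    v = fy * r * math.sin(phi) + cy
    return (u, v)
```

A point on the optical axis maps to the principal point, and a point 45° off-axis lands fπ/4 focal-length units away, scaled by the pixel pitch.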
The full camera model is represented by a nonlinear vector-valued function
In many cases, one is given points
The inversion of mapping
After converting to calibrated inhomogeneous image coordinates using
The residual radially symmetric distortion polynomial (Eq. 9) is reformulated
as a function of
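Once residual polynomial terms are included, r(θ) has no closed-form inverse, so the backward model must invert it numerically. A standard approach (ours for illustration, not necessarily the paper's exact reformulation) is Newton iteration seeded with the equidistant solution:

```python
def theta_from_radius(r, f, k_radial, tol=1e-12, max_iter=50):
    """Invert r(theta) = f*theta*(1 + k1*theta^2 + k2*theta^4 + ...)
    by Newton's method, as needed for the backward (pixel -> ray) model."""
    def g(t):
        return f * t * (1.0 + sum(k * t ** (2 * (i + 1))
                                  for i, k in enumerate(k_radial)))
    def dg(t):
        # d/dt of f*(t + k1*t^3 + k2*t^5 + ...) = f*(1 + 3*k1*t^2 + 5*k2*t^4 + ...)
        return f * (1.0 + sum((2 * (i + 1) + 1) * k * t ** (2 * (i + 1))
                              for i, k in enumerate(k_radial)))
    theta = r / f                  # equidistant initial guess
    for _ in range(max_iter):
        step = (g(theta) - r) / dg(theta)
        theta -= step
        if abs(step) < tol:
            break
    return theta
```

Convergence is fast because the residual polynomial is a small perturbation of the nominal projection, so the initial guess is already close.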
USI 1.8 in the instrument field at the Department of Energy, Atmospheric Radiation Measurement Program, Southern Great Plains Climate Research Facility.
The University of California, San Diego (UCSD) sky imager (USI) camera system
was developed for the purpose of solar power forecasting (Urquhart
et al., 2013). The camera is an Allied Vision GE-2040C camera which contains
a
The USI used in this work was deployed at the Department of Energy,
Atmospheric Radiation Measurement (ARM) Program, Southern Great Plains (SGP)
Climate Research Facility from 11 March to 4 November 2013 at a longitude,
latitude, altitude of 97.484856
The inputs used to calibrate the camera model (i.e., fitting the camera model
parameters) are the angular position of the sun
Solar position measurements on
Measurement data consist of automated detection of the sun's position
The detection process for a single image
The sun position is detected in a series of images collected from sunrise to sunset, yielding over 1400 calibration points per day. The set of points collected throughout a single (clear) day nominally forms a smooth arc. To evaluate the camera model and calibration performance under different solar arc input possibilities, 5 input cases were tested: (1) a single solar arc, (2) 2 solar arcs on consecutive days, (3) 4 solar arcs, (4) 10 solar arcs with measurement noise due to occasional clouds, (5) a single solar arc with noise due to clouds (Table 1). The solar arcs for cases 1, 4 and 5 are shown in Fig. 2. Case 1 would be preferred in practice as it requires only one – admittedly perfectly clear – day of data. However, limitations in sun position availability during one day may not provide sufficient data to accurately fit the camera model. The improvement associated with adding more days is evaluated in cases 2 and 3. Cases 4 and 5 were designed to provide more realistic and noisy data that would be found in climates without completely clear days.
Calibration test cases. The days included in each test case are
given along with the number of sun position points and an estimate of
measurement standard deviation
The sequence of sun position detections forms an arc that should be a smooth
curve. The detection process, however, is associated with errors, especially
when clouds are present. The deviation of the measured data from a smooth arc
can be used to quantify the calibration input error. Separately for each day,
a 9th order polynomial is fit to the
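The smooth-arc error estimate can be sketched as a polynomial fit followed by the standard deviation of the residuals. This illustrative version solves the normal equations directly, which is adequate for a sketch but becomes ill-conditioned at 9th order unless the abscissa is normalized first:

```python
import math

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations.

    Fine for a sketch; production code would use an orthogonal
    decomposition and normalize xs for high degrees.
    """
    n = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs          # c0 + c1*x + c2*x^2 + ...

def residual_sd(xs, ys, degree):
    """SD of residuals about the fit: a proxy for measurement error."""
    c = polyfit(xs, ys, degree)
    res = [y - sum(ci * x ** i for i, ci in enumerate(c))
           for x, y in zip(xs, ys)]
    mean = sum(res) / len(res)
    return math.sqrt(sum((r - mean) ** 2 for r in res) / (len(res) - 1))
```

Applied to noiseless synthetic data the residual SD is numerically zero, while on real detections it quantifies the scatter of the measured sun positions about the arc.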
The calibration procedure is a three-step process: (1) generate a rough estimate of the intrinsic parameters; (2) estimate the camera pose (rotation and translation), assuming one of the projections in Eq. (8); and (3) perform a three-stage nonlinear parameter estimation using the Levenberg–Marquardt (LM) algorithm to obtain the final intrinsic and extrinsic parameters. Steps 1 and 2 will be described in Sect. 4.1 and step 3 will be discussed in Sect. 4.2. Calibration results are given in Sect. 5.
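Step 3 relies on the Levenberg–Marquardt algorithm. A minimal generic version with a numeric Jacobian is sketched below; this is not the paper's staged, penalty-constrained implementation, only the core damped Gauss–Newton update it is built on:

```python
def solve(A, b):
    """Solve the small linear system A x = b by Gaussian elimination."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def levenberg_marquardt(residual_fn, p0, n_iter=50, lam=1e-3):
    """Minimal LM: damped normal equations with a numeric Jacobian."""
    p = list(p0)
    def cost(q):
        return sum(r * r for r in residual_fn(q))
    for _ in range(n_iter):
        r = residual_fn(p)
        m, n = len(r), len(p)
        # numeric Jacobian J[i][j] = d r_i / d p_j (forward differences)
        J = [[0.0] * n for _ in range(m)]
        for j in range(n):
            dp = 1e-6 * (abs(p[j]) + 1.0)
            q = list(p)
            q[j] += dp
            rq = residual_fn(q)
            for i in range(m):
                J[i][j] = (rq[i] - r[i]) / dp
        # damped normal equations (J^T J + lam diag) delta = -J^T r
        A = [[sum(J[i][a] * J[i][b] for i in range(m)) for b in range(n)]
             for a in range(n)]
        g = [sum(J[i][a] * r[i] for i in range(m)) for a in range(n)]
        for a in range(n):
            A[a][a] *= 1.0 + lam
        delta = solve(A, [-gi for gi in g])
        trial = [p[a] + delta[a] for a in range(n)]
        if cost(trial) < cost(p):
            p, lam = trial, lam * 0.5    # accept step, reduce damping
        else:
            lam *= 10.0                  # reject step, increase damping
    return p
```

On a simple linear least-squares problem this recovers the exact solution, which is a convenient correctness check before applying it to the full camera model.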
In order to apply the LM algorithm to estimate the model parameters, the
parameter vector
In whole sky imagery, the entire sky hemisphere is visible and forms an
ellipse on the image plane with eccentricity near unity (e.g., Figs. 2 or 3).
This enables a simple automated estimation approach. A Hough circle transform
is used to obtain the approximate center
The camera pose is estimated by computing the linear transformation between
the inhomogeneous camera coordinates
Synthetic data set point distribution. The 1673 points are generated from taking the solar position every 30 s from sunrise to sunset on 13 May 2013, and projecting onto the image plane using a set of ground truth camera model parameters. The points shown are ground truth with no noise added. Background image (for visual reference only) is from 3 May 2013.
The calibrated perspective projection matrix
Due to imperfect data, the left
Using Eq. (13) we define an error function
The calibration is performed in three successive stages: (1) take
It was found necessary to enforce additional constraints in the model fitting
process to ensure consistent and physically significant results. The LM
algorithm is a type of unconstrained optimization, so enforcement of
constraints is implemented using a penalty vector
Camera model parameters (excluding distortion terms) determined from
the five test cases of solar input data. The mean and standard deviation are
also given. The units denoted [pixels
To ensure that the residual and nominal radially symmetric distortion are
orthogonal functions over the field of view, the following constraint was
used:
The root mean square error (RMSE), mean absolute error (MAE) and standard
deviation (SD) are computed as
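These are the standard definitions, sketched below over the per-point reprojection error magnitudes (we assume the usual normalizations, with n points and the sample SD using n − 1):

```python
import math

def error_metrics(errors):
    """RMSE, MAE and sample SD of per-point reprojection errors."""
    n = len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mean = sum(errors) / n
    sd = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    return rmse, mae, sd
```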
The results of calibrating the USI for the five different solar arc cases are shown in Table 2. In the cases using more than one solar arc (cases 2–4), the principal point is consistent to within 0.60 pixels.
Calibration error metrics for each case: root mean square error
(RMSE); mean absolute error (MAE); standard deviation (SD); and measurement
root mean square difference (
The performance of camera calibration using solar position is given in Table 3 along with the estimated measurement error of the sun position. Calibration error for 1 or more clear days was around 1 pixel (0.94–1.24 pixels). Including cloudy days increased the error to 2.9 and 6.3 pixels for the two cloudy cases tested. Including measurement data with more dispersion, as is the case with cloudy days in this study, will always increase the calibration error. This is because for a given set of model parameters, the projection of sun position will form a smooth arc, while the measured sun position will have some dispersion around this arc. Larger dispersion will yield larger calibration error values, which is why it is important to consider calibration error in the context of measurement error.
While not a true lower bound on calibration accuracy, the measurement errors
given here can be used to assess the calibration accuracy relative to the
estimated accuracy of the input data. The polynomial fit to the measurement
data (Eq. 20) does not have the same constraints as fitting the camera model
parameters to the measurement data, thus the measurement standard deviation
(
Root mean square calibration error distribution (Eq. 28) as
a function of simulated measurement error standard deviation
Distributions of
As with any image detection algorithm, there are errors in the position of
the sun obtained from the detection algorithm (Tables 1 and 3). Depending on
the content of each image, such as the possibility of thin clouds veiling
a still visible sun, or more opaque clouds passing near or occluding the sun,
the magnitude of the detection error will vary. A Monte Carlo method was used
to assess the uncertainty in model performance and parameter estimation as
a function of measurement error. A ground truth synthetic calibration data set
was constructed with
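The Monte Carlo procedure can be sketched on a toy one-parameter model (the actual study perturbs the full synthetic solar arc and refits all camera model parameters; here we perturb radii r = fθ, refit only f in closed form, and record the resulting true calibration RMSE per trial):

```python
import math
import random

def monte_carlo_rmse(thetas, f_true, sigma, n_trials=200, seed=0):
    """Distribution of true calibration RMSE vs. measurement noise SD.

    Perturbs synthetic radii r = f*theta with Gaussian noise of SD sigma,
    refits f each trial (closed-form least squares for this toy model),
    and returns the per-trial RMSE against the noise-free ground truth.
    """
    rng = random.Random(seed)
    rmses = []
    for _ in range(n_trials):
        r_meas = [f_true * t + rng.gauss(0.0, sigma) for t in thetas]
        # least-squares estimate of f for the model r = f * theta
        f_hat = (sum(t * r for t, r in zip(thetas, r_meas))
                 / sum(t * t for t in thetas))
        errs = [(f_hat - f_true) * t for t in thetas]  # true reprojection error
        rmses.append(math.sqrt(sum(e * e for e in errs) / len(errs)))
    return rmses
```

As expected, zero measurement noise yields zero calibration error, and the mean RMSE grows with sigma but stays well below it, since the fit averages the noise over many points.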
The distribution of true root mean square calibration error (RMSE) for
Distributions of parameter estimation for four of the intrinsic parameters
are shown in Fig. 5. For both
The increasing use of stationary daytime sky imagery instruments for solar
forecasting applications has motivated the need to develop automatic
geometric camera calibration methods and an associated general camera model.
The camera model presented is not specific to fisheye lenses, and is
generally applicable to most fixed focal length dioptric camera systems with
a photo objective lens. We have proposed a method to automatically detect and
use the sun position over a sequence of images to calibrate the proposed
camera model. Calibration performance on clear days ranged from 0.94 to
1.24 pixels.
The sky imagery data used in this work were collected as part of the UCSD Sky Imager
Cloud Position Study (
The sun is only detected for images with solar zenith angles
As the sun approaches the horizon, both refraction and the lens projection
cause a distortion of the solar disk. In both cases, the image of the sun is
compressed in the radial direction. When the sun is near the horizon and the
sun's pixels are not saturated, refraction causes an estimated 0.008 pixel
radial shift of the sun's centroid toward the image center, and thus
refraction effects on sun shape distortion can safely be ignored. When the
sun is not near the horizon, the saturated sun region is enlarged due to
scattering. The algorithm proposed here assumes this region is circular,
which due to lens distortion introduces a small radial shift of the measured
position of the sun's centroid toward the image center. The larger the
saturated region and the closer it is to
Both the red–green–blue (RGB) and hue–saturation–value (HSV) color spaces
were used for detection, and each color image matrix will be referred to as
an X-image, e.g., the R-image (the red image). The approximate diameter of the
sun Ø is detected by constructing a binary image by thresholding the
V-image at the 99.99th percentile (applicable for our 3.1
The columns containing the vertical smear (Fig. 2) are detected by extracting
the first row of the V-image and the sum of the first row for the R-, G- and
B-images (i.e., 3 times the first row of the equal weight grayscale
image). A measure of the local mean is subtracted from each row separately
using a 100
The detection process described yields four row-column pairs (three from the circular kernel convolutions and a fourth from the Förstner operator), and the detection of maximum smear gives a fifth column estimate, for a total of 4 detected rows and 5 detected columns.
It should be noted that the sun detection method is purely empirical and was not designed to have the fastest performance. In practice, any reasonable algorithm can be used for the sun position detection. If the position errors are zero mean and normally distributed, then the uncertainty analysis in Sect. 5.3 can be used as a guide for expectations of calibration accuracy. The detection method described here is one of many that can be used, and the authors expect that other superior algorithms could be constructed. Since small calibration errors were obtained, the present algorithm is sufficient to demonstrate the calibration methodology.
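As one concrete illustration of the thresholding step alone, the sketch below thresholds a V-image at the 99.99th percentile and returns the centroid of the saturated region. This is a far cruder estimator than the multi-cue detection described above and is shown only to make the idea concrete:

```python
def detect_sun_centroid(v_image, percentile=99.99):
    """Rough sun position: centroid of pixels above a high percentile.

    v_image: 2-D list of brightness values (the V-image).
    Returns (row, col) of the centroid of above-threshold pixels.
    """
    flat = sorted(p for row in v_image for p in row)
    idx = min(len(flat) - 1, int(len(flat) * percentile / 100.0))
    thresh = flat[idx]
    pts = [(i, j) for i, row in enumerate(v_image)
           for j, p in enumerate(row) if p >= thresh]
    n = len(pts)
    return (sum(i for i, _ in pts) / n, sum(j for _, j in pts) / n)
```

Such a centroid estimator is biased by lens distortion of the saturated region, which is exactly the effect the appendix discusses; a practical detector combines several cues as described above.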
The calibration of backwards projection model parameters was performed with
a single stage. The parameter vector used in the LM algorithm consisted only
of
We would like to acknowledge the hardware development team for the USI which consisted of over 25 student volunteers (see Urquhart et al., 2015, for names). In particular we would like to thank Elliot Dahlin for organizing the development team and Mohamed Ghonima for assisting in the deployment of the USI at the Atmospheric Radiation Measurement (ARM) Program site. We appreciate the funding provided by the US Department of Energy ARM Program to the Southern Great Plains (SGP) site to support this experiment. We are very grateful to the SGP site staff for providing consistent and prompt support throughout our campaign: Rod Soper, John Schatz, Ken Teske, Chris Martin, Tim Grove, David Swank, Pat Dowell. We thank the UCSD von Liebig Center, the California Energy Commission, and the California Public Utilities Commission California Solar Initiative for providing the authors funding during the preparation of this article. Edited by: J. Joiner