Aircraft-based stereographic reconstruction of 3-D cloud geometry

This work describes a method to retrieve the location and geometry of clouds using RGB images from a video camera on an aircraft and data from the aircraft's navigation system. In contrast to ordinary stereo methods, for which two cameras with a fixed relative position at a certain distance are used to match images taken at the exact same moment, this method uses only a single camera and the aircraft's movement to provide the needed parallax. Advantages of this approach include a relatively simple installation on a (research) aircraft and the possibility to use different image offsets that are even larger than the size of the aircraft. Detrimental effects are the evolution of the observed clouds during the time offset between two images as well as the background wind. However, we will show that some wind information can also be recovered and subsequently used for the physics-based filtering of outliers. Our method allows the derivation of cloud top geometry, which can be used, e.g., to provide location and distance information for other passive cloud remote sensing products. In addition, it can also improve retrieval methods by providing cloud geometry information useful for the correction of 3-D illumination effects. We show that this method works as intended through comparison to data from a simultaneously operated lidar system. The stereo method provides lower heights than the lidar method; the median difference is 126 m. This behavior is expected, as the lidar method has a lower detection limit (leading to greater cloud top heights for the downward view), while the stereo method also retrieves data points on cloud sides and lower cloud layers (leading to lower cloud heights). Systematic errors across the measurement swath are less than 50 m.


Introduction
As implied by the name of remote sensing, the observer is located at a position different from the observed objects. Accordingly, the location of a cloud is not trivially known in cloud remote sensing applications. Thus, cloud detection, cloud location and cloud geometry are parameters of high importance for all subsequent retrieval products. These parameters themselves govern characteristics like cloud mass or temperature and subsequently the thermal radiation budget and thermodynamic phase. Typically, passive remote sensing using spectral information is used to retrieve cloud properties including cloud optical thickness, effective droplet radius, thermodynamic phase or liquid water content. However, these methods cannot directly measure the cloud's location. To put the results of such retrieval methods into context, the location must be obtained from another source.
In addition to the missing spatial context, unknown cloud location and geometry are the central reason for uncertainties in microphysical retrievals because of the complex impact of 3-D structures on radiative transport (e.g., Várnai and Marshak, 2003; Zinner and Mayer, 2006). The classic method of handling complex, inhomogeneous parts of the atmosphere (e.g., typical Moderate Resolution Imaging Spectroradiometer (MODIS) retrievals) is to exclude these parts from further processing. This, of course, can severely limit the applicability of such a method. As shown by Ewald (2016) and Ewald et al. (2018), the local cloud surface orientation affects retrieval results. In particular, Ewald (2016) and Ewald et al. (2018) have shown that changes in surface orientation and changes in droplet effective radius produce a very similar spectral response. Thus an independent measurement of cloud surface orientation would very likely improve retrieval results on droplet effective radius.

Cloud geometry reconstruction
As location and geometry information is of such great importance, several different approaches to obtain this information can be found. Among these are active methods using lidar or radar. Fielding et al. (2014) and Ewald et al. (2015) show how 3-D distributions of droplet sizes and liquid water content of clouds can be obtained through the use of a scanning radar. Ewald et al. (2015) even visually demonstrate the quality of their results by providing simulated images using the retrieved 3-D distributions as input and comparing them to actual photographs. A major downside of this approach is the limited scanning speed. Consequently, these methods are especially difficult to employ on fast-moving platforms. For this reason, the typical implementations of cloud radar and lidar on aircraft only provide data directly below the aircraft.
Passive methods are often less accurate but can cover much larger observation areas in shorter measurement times. They typically either use spectral features of the signal or use observations from multiple directions. The MODIS cloud top height product, for example, uses thermal infrared images to derive cloud top brightness temperatures (Strabala et al., 1994). Using assumed cloud emissivity and atmospheric temperature profiles, cloud top heights can be calculated. Várnai and Marshak (2002) used gradients in the MODIS brightness temperature to further classify observed clouds into "illuminated" and "shadowy" clouds. Another spectral approach has been demonstrated, amongst others, by Fischer et al. (1991) and Zinner et al. (2018), using oxygen absorption features to estimate the traveled distance of the observed light through the atmosphere. Assuming most of the light is reflected at or around the cloud surface, this information can be used to calculate the location of the cloud's surface.
Other experiments (e.g., Beekmans et al., 2016; Crispel and Roberts, 2018; Romps and Öktem, 2018) use multiple ground-based all-sky cameras and apply stereophotogrammetry techniques to georeference cloud fields. Due to the use of multiple cameras, it is possible to capture all images at the same time; therefore cloud evolution and motion do not affect the 3-D reconstruction.
Spaceborne stereographic methods have been employed, e.g., for the Multi-angle Imaging SpectroRadiometer (MISR) (Moroney et al., 2002) and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) (Seiz et al., 2006). MISR features nine different viewing angles which are captured during 7 min of flight time. During the long time period of about 1 min between two subsequent images, the scene can change substantially. Clouds in particular are transported and deformed by wind, which adds extra complexity to stereographic retrievals. The method by Moroney et al. (2002) addresses this problem by tracking clouds from all perspectives and by deriving a coarse wind field at a resolution of about 70 km. ASTER comes with only two viewing angles but still takes about 64 s to complete one image pair. Consequently, the method by Seiz et al. (2006) uses other sources of wind data (e.g., MISR or geostationary satellite data) to correct for cloud motion during the capturing period.
Parts of the Introduction refer to the cloud surface, a term which comes with some amount of intuition but is hard to define in precise terms. This difficulty arises because a cloud has no universally defined boundaries but rather changes gradually between lower and higher concentrations of hydrometeors. Yet, there are many uses for a defined cloud boundary. Horizontal cloud boundary surfaces are commonly denoted as cloud base height and cloud top height, which, through their correspondence to the atmospheric temperature profile and subsequent thermal radiation, largely affect the energy balance of clouds. Another such quantity, namely the cloud fraction, is often used, for example, in atmospheric models to improve the parametrization of cloud-radiation interaction. Still, defining a cloud fraction requires discrimination between areas of clouds and no clouds, introducing vertical cloud boundary surfaces. Stevens et al. (2019) illustrate what Slingo and Slingo (1988) already said: cloud amount is "a notoriously difficult quantity to determine accurately from observations". Despite the difficulty of defining it precisely, the cloud surface is a very useful tool to describe how clouds interact with radiation. This in turn allows us to do a little trick: we define the cloud's surface as the visible boundary of a cloud in 3-D space. This may or may not correspond with gradients of microphysical properties but clearly captures a boundary of interaction between clouds and radiation. This ensures that the chosen surface is relevant, both to improve microphysical retrievals which are based on radiation from a similar spectral region and to use it in investigating cloud-radiation interaction. Additionally, by definition, the cloud surface is located where an image discriminates between cloud and no cloud, which is a perfect fit for observation with a camera.
In this work, we present a stereographic method which uses 2-D images taken from a moving aircraft at different times to find the georeferenced location of points located on the cloud surface facing the observer. This method neither depends on estimates of the atmospheric state, nor does it depend on assumptions on the cloud shape. In contrast to spaceborne methods, our method only takes 1 s for one image pair. Due to the relatively low operating altitude of an aircraft compared to a satellite, the observation angle changes rapidly enough to use two successive images without the application of a wind correction method. As we employ a 2-D imager with a wide field of view, each cloud is captured from many different perspectives (up to about 100 different angles, depending on the distance between aircraft and cloud). Due to the high number of viewing angles, it is possible to derive geometry information of partly occluded clouds. Furthermore, this allows us to simultaneously derive an estimate of the 3-D wind field and use it to improve the retrieval result.
We demonstrate the application of our method to data obtained in the NARVAL-II and NAWDEX field campaigns (Stevens et al., 2019; Schäfler et al., 2018). In these field campaigns, the hyperspectral imaging system specMACS was flown on the HALO aircraft (Ewald et al., 2016; Krautstrunk and Giez, 2012). The deployment of specMACS, together with other active and passive instrumentation, aimed at a better understanding of cloud physics including water content, droplet growth, cloud distribution and cloud geometry. The main components of the specMACS system are two hyperspectral line cameras. Depending on the particular measurement purpose, additional imagers are added. The hyperspectral imagers operate in the wavelength ranges of 400-1000 nm and 1000-2500 nm at a spectral resolution of a few nanometers. Further details are described by Ewald et al. (2016). During the measurement campaigns discussed in this work, the two sensors were looking in the nadir perspective and were accompanied by a 2-D RGB imager with about twice the spatial resolution and field of view. In this work, we focus on data from the 2-D imager because it allows the same cloud to be observed from different angles.
In Sect. 2 we briefly explain the measurement setup. Section 3 introduces the 3-D reconstruction method, and Sect. 4 presents a verification of our method. For geometric calibration of the camera, we use a common approach of analyzing multiple images of a known chessboard pattern to resolve unknown parameters of an analytic distortion model. Nonetheless, as the geometry reconstruction method is very sensitive to calibration errors, we provide a short summary of our calibration process in Appendix A. We used the OpenCV library (Bradski, 2000) for important parts of this work. Details are listed in Appendix B.

Measurement setup
During the NARVAL-II and NAWDEX measurement campaigns, specMACS was deployed on board the HALO aircraft. As opposed to Ewald et al. (2016), the cameras were installed in a nadir-looking perspective. The additional 2-D imager (a Basler acA2040-180kc camera with a Kowa LM8HC objective) was set up to provide a full field of view of approximately 70°, with 2000 by 2000 pixels and a data acquisition frequency of 1 Hz. To cope with the varying brightness during and between flights, the camera's internal exposure control system was used.
Additionally, the WALES lidar system (Wirth et al., 2009), the HALO Microwave Package HAMP (Mech et al., 2014), the Spectral Modular Airborne Radiation measurement sysTem SMART (Wendisch et al., 2001) and an AVAPS dropsonde system (Hock and Franklin, 1999) were part of the campaign-specific aircraft instrumentation. The WALES instrument is able to provide an accurate cloud top height and allows us to directly validate our stereo method as described in Sect. 4.

3-D reconstruction
The goal of our 3-D reconstruction method is to find, in an automated manner, georeferenced points which are part of a cloud surface at a specific time. Input data are geometrically calibrated images from a 2-D camera fixed to the aircraft. As the aircraft flies, pictures taken at successive points in time show the same clouds from different perspectives. A schematic of this geometry is shown in Fig. 1. The geometric calibration of the camera and the rigid mounting on the aircraft allow us to associate each sensor pixel with a viewing direction in the aircraft's frame of reference. The orientation of the camera with respect to the aircraft's frame of reference was determined by aligning images taken on multiple flights to landmarks also visible in satellite images. Using the aircraft's navigation system, all relevant distances and directions can be transformed into a geocentric reference frame in which most of the following calculations are performed. The reconstruction method contains several constants which are tuned to optimize its performance. Their values are listed in Table 1.
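As an illustration of the pixel-to-direction mapping described above, the following sketch uses a simple pinhole model. All parameter values (focal lengths, principal point) and the rotation matrix are hypothetical placeholders; the real camera additionally requires the distortion model summarized in Appendix A.

```python
import numpy as np

def pixel_to_direction(u, v, fx, fy, cx, cy, R=np.eye(3)):
    """Map a pixel (u, v) to a unit viewing direction.

    Pinhole model with hypothetical intrinsics (fx, fy, cx, cy);
    R rotates from the camera frame into the aircraft (or geocentric)
    frame, e.g. built from the navigation system's attitude angles.
    """
    d = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    d = R @ d
    return d / np.linalg.norm(d)

# The principal point maps to the camera's boresight direction.
d_center = pixel_to_direction(1000.0, 1000.0, 1400.0, 1400.0, 1000.0, 1000.0)
d_offset = pixel_to_direction(2000.0, 1000.0, 1400.0, 1400.0, 1000.0, 1000.0)
```

For the 2000 by 2000 px sensor with a roughly 70° field of view, a focal length around 1400 px is a plausible order of magnitude, which is why it is used as the placeholder here.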
In order to perform stereo positioning, a location on a cloud must be identified in multiple successive images. A location outside of a cloud is invisible to the camera, as it contains clear air, which barely interacts with radiation in the observed spectral range. Locations enclosed by the cloud surface do not produce strong contrasts in the image, as the observed radiation is likely scattered again before reaching the sensor. Thus, a visible contrast on a cloud very likely originates from a location on or close to the cloud surface, as defined in the Introduction. This method starts by identifying such contrasts. If such a contrast is only present in one direction of the image (basically, we observe a line), this pattern is not suitable for tracking to the next image due to the aperture problem (Wallach, 1935). We thus search one image for pixels whose surroundings show a strong contrast in two independent directions. This corresponds to two large eigenvalues (λ_1 and λ_2) of the structure tensor (second-moment matrix) of the image intensity. This approach has already been formulated by Shi and Tomasi (1994): interesting points are defined as points with min(λ_1, λ_2) > λ, with λ being some threshold. We use a slightly different variant and interpret min(λ_1, λ_2) as a quality measure for each pixel. In order to obtain a more homogeneous distribution of tracking points over the image, candidate points are sorted by quality. Points which have better candidates at a distance of less than r_min are removed from the list, and the remaining best N points are taken. For these initial points, matches in the following image are sought using the optical flow algorithm described by Lucas and Kanade (1981). In particular, we use a pyramidal implementation of this algorithm as introduced by Bouguet (2000). If no match can be found, the point is rejected.
The locations of the two matching pixels define the viewing directions v_1 and v_2 in Fig. 1. The distance traveled by the aircraft between two images is indicated by d. Under the assumption that the aircraft travels much faster than the observed clouds, an equation system for the position of the point on the cloud's surface, P_CS, can be found. In principle, P_CS is located at the intersection of the two viewing rays along v_1 and v_2, but unlike in 2-D space, two rays in 3-D space do not necessarily intersect, especially in the presence of inevitable alignment errors. We relax this condition by searching for the shortest distance between the viewing rays. The shortest distance between two lines can be found by introducing a line segment which is perpendicular to both lines. This is the mis-pointing vector m. The point on the cloud's surface P_CS is now defined at the center of this segment. If a single point for the observer location is needed for further processing, the point P_ref at the center between both aircraft locations is used.
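The midpoint construction can be written as a small least-squares problem. The following sketch (a hypothetical helper, not the paper's implementation) returns P_CS and the mis-pointing distance |m| for two rays o_i + t_i v_i:

```python
import numpy as np

def triangulate_midpoint(o1, v1, o2, v2):
    """Closest approach of two viewing rays o_i + t_i * v_i.

    Returns the midpoint of the connecting segment (the estimated
    cloud surface point P_CS) and the mis-pointing distance |m|.
    """
    o1, v1, o2, v2 = (np.asarray(x, float) for x in (o1, v1, o2, v2))
    # Solve for t1, t2 minimizing |(o1 + t1 v1) - (o2 + t2 v2)|;
    # the residual of this least-squares problem is the vector m.
    A = np.column_stack([v1, -v2])
    t1, t2 = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    p1 = o1 + t1 * v1
    p2 = o2 + t2 * v2
    return 0.5 * (p1 + p2), np.linalg.norm(p1 - p2)

# Two skew rays: closest points are (0, 0, 0) and (1, 0, 0),
# so P_CS is their midpoint and |m| = 1.
p_cs, m_len = triangulate_midpoint([0.0, 0.0, 0.0], [0.0, 0.0, 1.0],
                                   [1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

The same |m| would then feed the relative mis-pointing filter described below.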
This way, many points potentially located on a cloud's surface are found. Still, these points contain a number of false correspondences between two images. During turbulent parts of the flight, errors in synchronization between the aircraft navigation system and the camera will lead to errors in the calculated viewing directions. To reject these errors, a set of filtering criteria is applied (the threshold values can be found in Table 1). Based on features of a single P_CS, the following points are removed:

- P_CS position is behind the camera or below ground.
- Relative mis-pointing |m|/|d_AC| > m_rel.

Figure 2 shows long tracks corresponding to a location on the cloud surface. These tracks follow the relative cloud position through up to 30 captured images. The tracks are generated from image pairs by repeated tracking steps originating at the t_2 pixel position of the previous image pair. Using these tracks, additional physics-based filtering criteria can be defined.
Each of these tracks contains many P_CS points which should all describe the same part of the cloud. As clouds move with the wind, the P_CS points do not necessarily have to refer to the same geocentric location but should be transported with the local cloud motion. For successfully tracked points, it can indeed be observed that the displacement of the P_CS points in a 3-D geocentric coordinate system roughly follows a preferred direction instead of jumping around randomly, which would be expected if the apparent movement were just caused by measurement errors. The arrows in Fig. 2 show the average movement of the P_CS of each track, reprojected into camera coordinates.
For the observation period (up to 30 s) it is assumed that the wind moves parts of a cloud on almost straight lines at a relatively constant velocity (which may differ between different parts of the cloud). Then, sets of P_CS can be filtered for unphysical movements. The filtering criteria are as follows:

- Velocity jumps. The ratio of maximum to median velocity of a track must be less than v_jump.
- Count. The number of calculated P_CS in a track must be above a given minimum N_min.
- Distance uncertainty. The distance d_AC between aircraft and cloud may not vary by more than d_abs, or the relative distance variation with respect to the average distance of a track must be less than d_rel.
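A sketch of how these track filters might be implemented. The function and all threshold values here are illustrative placeholders; the actual thresholds are those listed in Table 1.

```python
import numpy as np

def track_passes_filters(points, distances, n_min=5, v_jump=3.0,
                         d_abs=300.0, d_rel=0.1):
    """Physics-based track filter (threshold values are placeholders).

    points:    (N, 3) P_CS positions along one track (m), 1 s apart,
               so step lengths are per-second velocities.
    distances: (N,) aircraft-to-cloud distances d_AC (m).
    """
    points = np.asarray(points, float)
    distances = np.asarray(distances, float)
    # Count: enough successfully tracked P_CS.
    if len(points) < n_min:
        return False
    # Velocity jumps: max/median step length must stay below v_jump.
    steps = np.linalg.norm(np.diff(points, axis=0), axis=1)
    if np.max(steps) > v_jump * np.median(steps):
        return False
    # Distance uncertainty: d_AC variation bounded absolutely or relatively.
    spread = distances.max() - distances.min()
    if spread > d_abs and spread / distances.mean() > d_rel:
        return False
    return True

# Straight track with constant 10 m steps vs. one with a 110 m jump.
straight = np.column_stack([np.arange(10) * 10.0, np.zeros(10), np.zeros(10)])
ok = track_passes_filters(straight, np.full(10, 5000.0))
jumpy = straight.copy()
jumpy[5:, 0] += 100.0
bad = track_passes_filters(jumpy, np.full(10, 5000.0))
```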
During measurements close to the Equator, as was typical during the NARVAL-II campaign, the sun is frequently located close to the zenith. In this case, specular reflection of the sunlight at the sea surface produces bright spots, known as sunglint and illustrated in Fig. 3. Due to waves on the ocean surface, these regions of the image also produce strong contrasts. It turns out that such contrasts are preferred by the Shi and Tomasi feature selection but are useless for estimating the cloud surface geometry. To prevent the algorithm from tracking these points, the image area in which bright sunglint is to be expected is estimated using the bidirectional reflectance distribution function (BRDF) by Cox and Munk (1954) included in the libRadtran package (Mayer and Kylling, 2005; Emde et al., 2016). The resulting area (indicated by a red line in Fig. 3) is masked out of all images before any tracking is performed. Masking out such a large area from the camera image may seem wasteful. In fact, it is acceptable: due to the large viewing angle of the camera, all masked-out clouds are almost certainly visible at a different time in another part of the image. Therefore, these clouds can still be tracked using parts of the sensor which are not affected by sunglint, even if a large part of the sensor is obstructed by sunglint.
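The full Cox and Munk (1954) BRDF is provided by libRadtran; as a much simplified illustration of the idea, the following sketch flags only viewing directions close to the ideal specular reflection of the sun at a flat sea surface. The angular threshold is a hypothetical placeholder standing in for the wave-slope statistics of the real BRDF.

```python
import numpy as np

def sunglint_mask(view_dirs, sun_dir, max_angle_deg=15.0):
    """Approximate sunglint mask (simplified stand-in for Cox-Munk).

    view_dirs: (..., 3) unit vectors pointing from camera to surface.
    sun_dir:   unit vector pointing from surface toward the sun.
    Returns True where a pixel is expected to see glint.
    """
    n = np.array([0.0, 0.0, 1.0])               # upward surface normal
    # Direction of the specularly reflected sun ray (surface -> sky).
    spec = 2.0 * np.dot(sun_dir, n) * n - sun_dir
    # A pixel sees glint if its reversed view ray aligns with spec.
    to_cam = -np.asarray(view_dirs, float)
    cosang = np.clip(to_cam @ spec, -1.0, 1.0)
    return np.degrees(np.arccos(cosang)) < max_angle_deg

# Sun at zenith: the nadir view is flagged, a 45° off-nadir view is not.
views = np.array([[0.0, 0.0, -1.0],
                  [0.70710678, 0.0, -0.70710678]])
mask = sunglint_mask(views, np.array([0.0, 0.0, 1.0]))
```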
After filtering, a final mean cloud surface point P_CS is derived from each track as the centroid of all contributing cloud surface points. The collection of all P_CS forms a point cloud in a Cartesian 3-D reference coordinate frame which is defined relative to a point on the earth's surface (Fig. 4). This point cloud can be used on its own, serve as a reference for other distance measurement techniques (e.g., oxygen absorption methods as in Zinner et al., 2018, or distance derivation following Barker et al., 2011) or allow for a 3-D surface reconstruction.
A precise camera calibration (relative viewing angles on the order of 0.01°) is crucial to this method, which can be achieved through the calibration process as described in Appendix A. A permanent time synchronization between the aircraft position sensors and the cameras, accurate on the order of tens of milliseconds, is indispensable as well. It should be noted that this involves time stamping each individual image to cope with inter-frame jitter as well as disabling any image stabilization inside the camera. As this involves generating data which are only available during the measurement, this must be considered prior to the system deployment. For the system described in this work, we used the network time protocol (Mills et al., 2010) with an update interval of 5 min.

Across-track stability and signal spread
Errors in the sensor calibration could lead to systematic errors in the retrieved cloud height as a function of across-track position (the lateral horizontal distance perpendicular to the flight track). In order to assess these errors, data from a stratiform cloud deck observed between 09:01:25 and 09:09:00 UTC during NAWDEX flight RF11 on 14 October 2016 were sorted by average across-track pixel position. While the cloud deck features a lot of small-scale variation, it is expected to be almost horizontal on average. Note that as the orientation of the camera with respect to the aircraft has been determined independently using landmarks, deviations from the assumption of a horizontal cloud deck should be visible in the corresponding data and are counted as additional retrieval uncertainty in this analysis. During the investigated time frame, 260 360 data points were collected using the stereo method. The vertical standard deviation of all points is 47.3 m, which includes small-scale cloud height variation and measurement error. Figure 5 shows a 2-D histogram of all collected data points. From visual inspection of the histogram, apart from about 50 px at the sensor's borders, no significant trend can be observed. To further investigate the errors, a second-order polynomial has been fitted to the retrieved heights. This polynomial is chosen to cover the most likely effects of sensor misalignment, which should contribute a linear term, and distortions in the optical path, which should contribute a quadratic term. The difference between the left and the right side of the sensor of 21 m corresponds to less than 0.1° of absolute camera misalignment, and the curvature of the fit is also small compared to the overall dimensions of the observed clouds.
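The across-track consistency check is, in essence, a plain polynomial regression. This sketch reproduces the idea on synthetic data (a flat deck plus noise plus an artificial quadratic distortion); all numbers except the 47 m spread quoted above are made up for illustration.

```python
import numpy as np

# Synthetic across-track check: a flat cloud deck at 2000 m, 47 m of
# small-scale height noise, and an artificial quadratic miscalibration.
rng = np.random.default_rng(0)
px = rng.uniform(0, 2000, 5000)                  # across-track pixel position
h = (2000.0 + 5e-5 * (px - 1000.0) ** 2          # deck + quadratic distortion
     + rng.normal(0.0, 47.0, px.size))           # small-scale variation

# Second-order fit: the linear term captures sensor misalignment,
# the quadratic term distortions in the optical path.
c2, c1, c0 = np.polyfit(px, h, deg=2)
left_right_diff = (np.polyval([c2, c1, c0], 2000.0)
                   - np.polyval([c2, c1, c0], 0.0))
residual_std = np.std(h - np.polyval([c2, c1, c0], px))
```

With enough points, the fit recovers the injected quadratic coefficient well below the noise level, which is exactly why the small 21 m left-right difference in the real data is resolvable despite the 47.3 m point spread.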

Lidar comparison
Cloud top height information derived from the WALES lidar (Wirth et al., 2009) is used to verify the bias of the described method. While the stereo method provides P_CS at arbitrary positions in space, the lidar data are defined on a fixed grid ("curtain") beneath the aircraft. To match lidar measurements to related stereo data points, we collect all stereo points which are horizontally close to a lidar measurement. This can be accomplished by defining a vertical cylinder around the lidar beam with a 150 m radius. Every stereo-derived point which falls into this cylinder with a time difference of less than 10 s is considered a stereo point related to the lidar measurement. As the (almost) nadir-pointing lidar observes cloud top heights only, we use the highest stereo point inside the collection cylinder. The size of the cylinder is somewhat arbitrary, but the particular choice has reasons: the aircraft moves at a speed of approximately 200 m s⁻¹, and the data of the lidar system are available at 1 Hz and are averaged over this period. Any comparison between both systems should therefore be on the order of 200 m horizontal resolution. Furthermore, data derived from the stereo method are only available where the method is confident that it worked. Thus not every lidar data point has a corresponding stereo data point.
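The cylinder-based matching can be sketched as follows. This is a brute-force illustration with made-up coordinates; a real implementation would use georeferenced positions and, for large datasets, a spatial index.

```python
import numpy as np

def match_lidar(lidar_xyt, stereo_xyzt, radius=150.0, dt_max=10.0):
    """For each lidar sample (x, y, t), return the highest stereo point
    inside a vertical cylinder of the given radius whose time offset is
    below dt_max; NaN where no stereo point qualifies."""
    sx, sy, sz, st = (np.asarray(c, float) for c in zip(*stereo_xyzt))
    out = []
    for x, y, t in lidar_xyt:
        sel = (np.hypot(sx - x, sy - y) < radius) & (np.abs(st - t) < dt_max)
        out.append(sz[sel].max() if sel.any() else np.nan)
    return np.array(out)

# Two nearby stereo points (800 m and 1200 m) and one far away (3000 m):
# the highest point inside the cylinder is used, mirroring the lidar's
# cloud-top sensitivity; distant or stale points yield no match.
stereo = [(0.0, 0.0, 800.0, 0.0), (50.0, 0.0, 1200.0, 2.0),
          (1000.0, 0.0, 3000.0, 0.0)]
cth = match_lidar([(0.0, 0.0, 0.0)], stereo)
```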
Increasing the size of the cylinder increases the count of data pairs but also increases false correspondences. The general picture, however, remains unchanged. Figure 6 compares the measured cloud top height from the WALES lidar and the stereo method, visually showing a good agreement. However, its quantification in an automated manner and without manual (potentially biased) filtering proves to be difficult. Part of this difficulty is due to the cloud fraction problem, which is explained by Stevens et al. (2019), basically stating that different measurement methods or resolutions will always detect different clouds. This is also indicated in Fig. 6 on the right: the stereo method detects the lower cumulus cloud layer due to larger contrasts, while the lidar observes the higher cirrus layer, leading to wrong cloud height correspondences even though both methods are supposedly correct. Filtering the data for high lidar cloud top height and low stereo height reveals that the lower right part of the comparison can be attributed almost exclusively to similar scenes. Further comparison difficulties arise from collecting corresponding stereo points out of a volume which might in fact include multiple (small) clouds. Considering all these sources of inconsistency, only a very conservative estimate of the deviation of lidar and stereo values can be derived from this unfiltered comparison. The median bias between the lidar and the stereo method is approximately 126 m for all compared flights, indicating lower heights for the stereo method. As the lidar detects cloud top heights with high sensitivity and the stereo method relies on image contrast, which is predominantly present at cloud sides, this direction of the bias is expected.
Further manual filtering indicates that the real median offset is likely on the order of 50 to 80 m; however, this cannot be shown reliably. Quantifying the spread between the lidar and stereo method yields no meaningful results for the same reasons.

Wind data comparison
An important criterion that we use to identify reliable tracking points is based on the assumption that the observed movement of the points can be explained by smooth transport due to a background wind field. The thresholds for this test are very tolerant, so the requirements for the accuracy of the retrieved wind field are rather low. However, a clear positive correlation between the observed point motion and the actual background wind would underpin this assumption substantially. To check this, we compare the stereo wind against a reanalysis of the European Centre for Medium-Range Weather Forecasts (ECMWF) in a layer in which many stereo points have been found.
In the following, the displacement vectors of every track have been binned in time intervals of 1 min along the flight track and 200 m bins in the vertical. To reduce the number of outliers, bins with fewer than 100 entries were discarded. Inside the bins, the upper and lower 20 % of the wind vectors (counted by magnitude) were dropped. All remaining data were averaged to produce one mean wind vector per bin. In Fig. 7 the horizontal component of these vectors is compared to ECMWF reanalysis data at about 2000 m above ground with a horizontal sampling of 0.5°. The comparison shows overall good agreement according to our goal to consider a quantity in the stereo matching process which roughly behaves like the wind. The general features of wind direction and magnitude are captured. Deviations may originate from multiple sources including the time difference between reanalysis and measurement, representativity errors and uncertainties of the measurement principle. These results corroborate the assumption that the observed point motion is related to the background wind, and filtering criteria based on this assumption can be applied.
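The per-bin trimming and averaging might look like this. The bin-size and count thresholds follow the text; the data in the usage example are synthetic.

```python
import numpy as np

def bin_mean_wind(vectors, trim=0.2, min_count=100):
    """Robust mean of the horizontal wind vectors within one bin.

    vectors: (N, 2) displacement vectors (m/s) of all tracks falling
    into one time/height bin. Bins with fewer than min_count entries
    are discarded; the upper and lower 20 % by magnitude are dropped.
    """
    vectors = np.asarray(vectors, float)
    if len(vectors) < min_count:
        return None                      # bin discarded as too sparse
    mag = np.linalg.norm(vectors, axis=1)
    lo, hi = np.quantile(mag, [trim, 1.0 - trim])
    keep = (mag >= lo) & (mag <= hi)
    return vectors[keep].mean(axis=0)

# 180 consistent tracks around (10, 0) m/s plus 20 strong outliers:
# the trimmed mean recovers the consistent wind.
vectors = np.vstack([np.tile([10.0, 0.0], (180, 1)),
                     np.tile([100.0, 100.0], (20, 1))])
mean_wind = bin_mean_wind(vectors)
```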

Conclusions
The 3-D cloud geometry reconstruction method described in this work is able to produce an accurate set of reference points on the observed surface of clouds. This has been verified through comparison to nadir-pointing active remote sensing. Using data from the observation of a stratiform cloud field, we could verify that no significant systematic errors are introduced by looking in off-nadir directions. Even for sunglint conditions, cloud top heights can be derived: as clouds move through the image while the sunglint stays relatively stable, we can choose to observe clouds when they are in unobstructed parts of the sensor. Because of the wide field of view of the sensor, there are always viewing directions to each cloud which are not affected by the sunglint.
As a visible contrast suited for point matching is a central requirement of the method, it is able to provide positional information at many but not all points of a cloud. Especially flat cloud tops can show very little contrast and are hard to analyze using our method. In the future, we will integrate other position datasets like the distance measurement technique using O2 A-band absorption as described by Zinner et al. (2018), which is expected to work best in these situations. In combining multiple datasets, the low bias and angular variability of the stereo method can even help to improve uncertainties of other methods.
While the wind information derived as part of the stereo method was a byproduct of this work, the results look promising. After further investigation of its quality and possibly additional filtering, this product might add valuable information to the campaign dataset.
During the development of this method, it became clear that a precise camera calibration (relative viewing angles on the order of 0.01°) is crucial. A permanent time synchronization between the aircraft position sensors and the cameras, accurate on the order of tens of milliseconds, is indispensable as well. It should be noted that this involves time stamping each individual image to cope with inter-frame jitter as well as disabling any image stabilization inside the camera. For upcoming measurement campaigns, improvements may be achieved by optimizing the automatic exposure of the camera for bright cloud surfaces instead of relying on the built-in exposure system. Furthermore, it would be useful to reinvestigate the proposed method with a camera system operating in the near-infrared, which would most likely profit from higher image contrasts due to lower Rayleigh scattering in this spectral region.

Figure 1 .
Figure 1. Schematic drawing of the stereographic geometry. Images of clouds are taken at two different times from a fast-moving aircraft. Using the aircraft location and viewing geometry, a point P_CS on the cloud's surface can be calculated. Note that the drawing is not to scale: d is typically around 200 m, d_AC is on the order of 5 km, and m denotes the mis-pointing vector and is on the order of only a few meters.

Figure 2 .
Figure 2. Image point tracking. Every line in this image represents a cloud feature which has been tracked along up to 30 images. The images used were taken on NAWDEX flight RF07 (6 October 2016, 09:32:15 UTC; location indicated in Fig. 7) in an interval of 1 s. The transparency of the tracks indicates the time difference to the image. Color indicates retrieved height above WGS84, revealing that the larger clouds on the left belong to a lower layer than the thin clouds on the right. The arrows indicate estimated cloud movement. Due to the wind speed at the aircraft location, its course differs significantly from the heading, and the tracks are tilted accordingly. The number of points shown has been reduced to include at maximum one point per 20 px radius in the image. Tracks are only shown for every fifth point.

Figure 3 .
Figure 3. At low latitudes, close to local noon as on the NARVAL-II flight RF07 (19 August 2016, 15:06:13 UTC), the specular reflection of the sun on the ocean surface (sunglint) produces bright spots and high contrasts on the waves' tails. While the bright spots can visually hide clouds, the contrasts create useless initial tracking points. The latter are mitigated by calculating the region of potential sunglint (shown as a red contour) and masking that region before the images are processed.

Figure 4 .
Figure 4. The collection of all P_CS forms a point cloud. Here, a scene from the second half of the NARVAL-II flight RF07 is shown. The colors indicate each point's height above the WGS84 reference ellipsoid (indicated as a blue surface). Below, a part of the scene is shown magnified, displaying two main cloud layers: one at about 800 m in yellow and the other at about 3200 m in orange. On the right, a small patch of even higher clouds is visible at 5200 m. The gray dots are a projection of the points onto the surface to improve visual perception.

Figure 5 .
Figure 5. In the time from 09:01:25 to 09:09:00 UTC during NAWDEX flight RF11 on 14 October 2016, a stratiform cloud deck was observed. The parabolic fit shows that a small systematic variation can be found beneath the noise (which is due to small-scale cloud height variations and measurement uncertainties). Compared to the overall dimensions of the observed cloud (≈ 14 km) and the uncertainty of the method, these variations are small. It may still be noted that data from the edges of the sensor (≈ 50 px on each side) should be taken with care.

Figure 6 .
Figure 6. Comparison of cloud top height (CTH) measured with the WALES lidar and the stereo method. The most prominent outliers, present in the region of high lidar cloud top height and low stereo height, can be attributed to thin, mostly transparent cirrus layers and cumulus clouds below, illustrated by a scene from NAWDEX RF10 (13 October 2016, 10:32:10 UTC). While the lidar detects the ice clouds, the stereo method retrieves the height of the cumulus layer below.

Figure 7 .
Figure 7. Horizontal wind at about 2000 m above ground. Comparison between ECMWF reanalysis (blue) and stereo-derived wind (orange). Comparing grid points with co-located stereo data, the mean horizontal wind magnitude is 15.1 m s⁻¹ in ECMWF and 13.4 m s⁻¹ in the stereo dataset. This amounts to a difference of 1.7 ± 4.5 m s⁻¹ in magnitude and 6.0 ± 33° in direction. The shown deviations are standard deviations over all grid points with co-located data. The gray dot and arrow mark the location and flight course corresponding to Fig. 2.