Fast reconstruction of hyperspectral radiative transfer simulations by using small spectral subsets: application to the oxygen A band

Hyperspectral radiative transfer simulations are a versatile tool in remote sensing but can pose a major computational burden. We describe a simple method to construct hyperspectral simulation results by using only a small spectral subsample of the simulated wavelength range, thus leading to major speedups in such simulations. This is achieved by computing principal components for a small number of representative hyperspectral spectra and then deriving a reconstruction matrix for a specific spectral subset of channels to compute the hyperspectral data. The method is applied and discussed in detail using the example of top-of-atmosphere radiances in the oxygen A band, leading to speedups in the range of one to two orders of magnitude when compared to radiative transfer simulations at full spectral resolution.


Introduction
Radiative transfer simulations are a key tool for the development of remote sensing algorithms in the field of earth observation. Depending on the spectral domain, a variety of techniques are used to solve the radiative transfer equation (RTE). Such simulated radiances can be compared to measurements and thus used for the inversion of the actual physical state of the atmosphere-surface or atmosphere-ocean system (e.g., Thomas and Stamnes, 2008, and references therein).
Most radiative transfer models for the ultraviolet (UV) to short-wavelength infrared (SWIR) region that we are aware of solve a monochromatic version of the RTE and are used for channel-based radiative transfer simulations. Even if models are capable of treating problems with inelastic scattering, monochromatic RTE solvers with additional radiance sinks and sources are generally used (e.g., Landgraf et al., 2004, and their treatment of atmospheric Raman scattering).
Therefore, for problems involving hyperspectrally resolved radiance measurements, the obvious approach is to perform large numbers of independent simulations. Various techniques for increasing the computational efficiency of these radiative transfer simulations have been developed (e.g., Kokhanovsky, 2013, chapter 10 by Vijay Natraj). Two main approaches can be distinguished. Firstly, by relaxing the constraints on the accuracy of the solution of the radiative transfer equation, the computational time needed for each individual radiative transfer simulation is reduced. Examples are the use of two-stream methods (e.g., Meador and Weaver, 1980), reduced orders of scattering (e.g., Natraj and Spurr, 2007), replacing line-by-line absorption calculations by exponential sum fitting techniques (ESFTs) (e.g., Wiscombe and Evans, 1977) and their advancements (e.g., Lacis and Oinas, 1991; Bennartz and Fischer, 2000; Doppler et al., 2013), or relying on pre-computed databases (e.g., Wang et al., 2013). The error with respect to more rigorous solutions of the RTE in general depends on the scene and spectral band and can be estimated by carrying out rigorous simulations. Secondly, by making use of the inherent redundancy of line-by-line calculations through data reduction techniques such as principal component analysis (e.g., Efremenko et al., 2013; Jolliffe, 2002), the number of individual radiative transfer simulations is reduced. Approaches have been published for the IR spectral region by Liu et al. (2006), and for the VIS to SWIR region by Natraj et al. (2005, 2010) and Lindstrot and Preusker (2012).
In this paper, we propose a simple method in which only a small subset of the spectra is simulated and used to generate a reconstruction matrix, based on principal component analysis, the expansion coefficients of the data, and the pseudo-inverse of a spectral subsample of the data. Then, for the bulk of the spectra to be simulated, only a relatively small spectral subset of the simulations is carried out, and hyperspectral results are constructed by multiplication with the reconstruction matrix. The method is explained and tested using the oxygen A band as a test case. We provide numbers on the accuracy of the method vs. the achievable speedup. The method is independent of the underlying radiative transfer model and simple enough to be easily applied to existing radiative transfer schemes. It is entirely based on post-processing techniques, so that the RT scheme itself does not have to be modified.

Published by Copernicus Publications on behalf of the European Geosciences Union.
The oxygen A band was chosen since it allows for a broad range of applications that could benefit from faster and more accurate radiative transfer simulations. Among others, possible applications are the retrieval of cloud-top pressure (e.g., Koelemeijer et al., 2001; Fischer and Grassl, 1991; Preusker and Lindstrot, 2009) and of the aerosol vertical distribution (e.g., Dubuisson et al., 2009; Sanghavi et al., 2012). Also, more and more hyperspectral radiance measurements will become available with current and future sensors such as TANSO-FTS on GOSAT (Suto et al., 2010), GOSAT-2 (Nakajima et al., 2012), TROPOMI on S-5P (Veefkind et al., 2012), or OCO-2 (e.g., Boesch et al., 2011), to name a few.

Notation
Throughout this paper a convenient matrix notation is employed, which allows denoting matrix elements and subsets of matrix elements. A real a × b matrix is defined as M ∈ R^{a×b}, and M_{i,j} ∈ R, 0 < i ≤ a, 0 < j ≤ b, identifies a single element of M. Row and columnar subsets are accessed using a "−" sign: M_{i,−} = (M_{i,1}, . . ., M_{i,b}) ∈ R^b, as well as M_{−,j} = (M_{1,j}, . . ., M_{a,j}) ∈ R^a. To access subsets of elements in rows and columns, a notation of index sets is introduced: s_{n,l} = (s_1, . . ., s_l) with s_1 > 0, s_i < s_{i+1}, s_l ≤ n, s_i ∈ N, and n, l ∈ N. This notation denotes an ordered list of l unique elements between 1 and n. A specific index set is defined when its elements (s_1, . . ., s_l) are set. With the introduced matrix notation, such an index set can be used to access subsets of rows and columns:

M_{s_{a,l},j} = (M_{s_1,j}, . . ., M_{s_l,j}) ∈ R^l,  and  M_{i,s_{b,l}} = (M_{i,s_1}, . . ., M_{i,s_l}) ∈ R^l.
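The index-set notation maps directly onto NumPy fancy indexing. The following sketch, using an arbitrary 4 × 5 example matrix and 0-based indices instead of the paper's 1-based ones, illustrates the correspondence:

```python
import numpy as np

# Arbitrary 4 x 5 example matrix M (values for illustration only).
M = np.arange(20, dtype=float).reshape(4, 5)

# Row and columnar subsets M_{i,-} and M_{-,j}; note that NumPy indices are
# 0-based, while the paper's notation is 1-based.
row_2 = M[1, :]   # M_{2,-}
col_3 = M[:, 2]   # M_{-,3}

# An index set s_{n,l}: an ordered list of l unique indices between 1 and n.
s = np.array([1, 3, 4]) - 1   # the 1-based set (1, 3, 4), shifted to 0-based

# Accessing subsets of rows and columns with the index set.
rows_s = M[s, :]               # M_{s,-}: rows 1, 3, and 4 of M
block = M[np.ix_(s, s[:2])]    # rows (1, 3, 4) and columns (1, 3) at once
```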

Principal components as data reduction technique
The method was developed keeping in mind a radiative transfer forward operator based on a lookup table, although an application to other usage scenarios (e.g., forward radiative transfer simulations) is straightforward. Here we assume a forward radiative transfer operator (RT), which relates the state vector x ∈ R^{n_x} to the top-of-atmosphere radiance spectrum y ∈ R^{n_λ}, using a spectral calibration vector λ = (λ_1, . . ., λ_{n_λ}) ∈ R^{n_λ} with ∀i, j, i ≠ j: λ_i ≠ λ_j:

y = RT(x, λ). (1)

For simplicity, the radiative transfer simulations are stored in a two-dimensional lookup table matrix L ∈ R^{n_λ×n_L}, with n_L = ∏_{j=1}^{n_x} |x̃_j|, where |·| denotes the number of elements of a vector and the vectors x̃_j = (x̃_{j,1}, . . ., x̃_{j,|x̃_j|}) contain the grid points for each state vector dimension. The corresponding parameter states are stored in the parameter matrix X ∈ R^{n_x×n_L} such that for each column i:

L_{−,i} = RT(X_{−,i}, λ). (2)

The matrices X and L could be used to construct a fast forward operator based on multivariate interpolation. A main problem of this approach is that even for small parameter spaces the matrix L can become large, especially if a very high spectral resolution is needed. This naturally leads to the employment of data reduction techniques like principal component analysis. The main idea is to replace the large matrix L with two much smaller matrices P ∈ R^{n_λ×n_P} and C ∈ R^{n_P×n_L} with n_P ≪ n_λ, such that each spectrum L_{−,i} can be expressed as

L_{−,i} = P · C_{−,i} + O[n_P]. (3)

The matrix P contains the principal components up to the order n_P, and the matrix C contains the expansion coefficients, which express the original spectra in the n_P-dimensional space spanned by the principal components:

C_{−,i} = P^T · L_{−,i}. (4)

The principal components are orthogonal by construction, and thus P is a semi-orthogonal matrix with P^T · P = 1, which simplifies the computation of the expansion coefficients C_{−,i}.
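The replacement of L by the two much smaller matrices P and C (Eqs. 3 and 4) can be sketched in a few lines of NumPy. This is an illustration on synthetic low-rank data, not the lookup table of Table 1, and the SVD is applied without the mean-centering that a full PCA implementation (such as the Scikit-learn one used later in this paper) would perform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the lookup table L (n_lambda x n_L): spectra built from
# a few underlying modes plus weak noise, mimicking the redundancy of
# line-by-line data. Shapes and values are illustrative only.
n_lambda, n_L = 400, 1000
L = (rng.normal(size=(n_lambda, 6)) @ rng.normal(size=(6, n_L))
     + 1e-4 * rng.normal(size=(n_lambda, n_L)))

# Principal components via SVD (mean-centering omitted here for brevity).
U, sigma, Vt = np.linalg.svd(L, full_matrices=False)
n_P = 15
P = U[:, :n_P]     # principal components, n_lambda x n_P
C = P.T @ L        # expansion coefficients as in Eq. (4), n_P x n_L

# Each spectrum is recovered as L_rec[:, i] = P @ C[:, i] (Eq. 3).
L_rec = P @ C
rel_err = np.linalg.norm(L - L_rec) / np.linalg.norm(L)
```

Since the columns of P are orthonormal, the coefficients follow from a single matrix product, which is the simplification mentioned above.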
The reconstruction of a spectrum L_{−,i} is associated with an error O[n_P], which in general decreases with increasing n_P. For a given problem, n_P has to be chosen such that the residual O[n_P] is below the user requirement. The memory requirement for the matrix C is much smaller than for L, and interpolation can be implemented much faster. There are two main downsides to this approach. Firstly, the singular value decomposition of L becomes increasingly expensive with respect to computational time with increasing n_L. Secondly, to compute the matrix C, the complete matrix L must be computed and stored in advance, which in general is time-consuming and costly in terms of storage and backup.
Both of these downsides are discussed in detail in this paper using the oxygen A band as an example.Strictly speaking, the presented analysis is thus only valid for this part of the spectrum, but we do not see any substantial obstacles to applying the technique to other parts of the spectrum.
4 Constructing the principal components from small parameter state space subsets of L

The computation of P becomes numerically more expensive with increasing size of the lookup table L and increasing spectral resolution. For actual computations we used the principal component analysis algorithm provided by the Python package Scikit-learn (Pedregosa et al., 2011). Throughout this paper a moderately small lookup table is discussed, with the parameter state space sampling given in Table 1. The variability of the database is illustrated in Fig. 1. The lookup table was computed using the Matrix Operator Model (MOMO; see Fell and Fischer, 2001; Hollstein and Fischer, 2012), a radiative transfer model widely used at Freie Universität Berlin.
The gaseous absorption was computed using line parameters from the HITRAN spectral database (Rothman et al., 2009) and a modified scheme to compute the K distribution (Bennartz and Fischer, 2000). As shown in Table 1, the variation of the parameter space includes surface pressure, aerosol optical thickness, aerosol mean height, aerosol type, surface reflectance, and the viewing geometry. Aerosol optical models were implemented according to Levy et al. (2007). These models are also used by the MODIS aerosol retrieval and were specifically designed to fit observations for different locations on the globe. From the published optical properties, the urban, neutral, dust, continental, and absorbing types were implemented, and Mie calculations using the implementation provided by Wiscombe (1980) were used to compute phase functions, extinction, and single-scattering albedo. This state vector variability was set up to reproduce the variability of a clear-sky scene over land. The temperature profile was explicitly excluded from the state vector to keep it simple but realistic. As shown by Lindstrot and Preusker (2012), the spectral variability due to the temperature profile of the atmosphere can be accounted for by performing radiative transfer simulations only for a set of principal temperature components. The radiance spectrum can then be constructed as a linear superposition of the simulated spectra for the principal temperature profile components, by using the expansion coefficients of the actual temperature profile in the space spanned by the principal temperature profile components.
The oxygen A band was simulated with a spectral sampling of 0.005 nm, which led to 4500 spectral channels.The parameter sampling as stated in Table 1 led to 121 500 different parameter states.
Numerical experiments show that, for this particular data set, the principal component matrix P can be computed without using the complete matrix L, but with a much smaller subset of the state space L_{−,r_{n_L,n_r}}, where r_{n_L,n_r} is an index set with n_r randomly chosen members from the n_L = 121 500 available states. The reconstruction signal-to-noise ratio (SNR) is used as a simple scalar figure to test the reconstruction quality of each spectrum:

SNR(t, r) = mean(t) / std(t − r). (5)

It is computed as the ratio of the mean value over all channels of a spectrum labeled as truth t and the standard deviation of the differences between t and a reconstructed spectrum r.
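A minimal sketch of Eq. (5) and of the random state subset experiment follows, using synthetic data with illustrative shapes; the offset and noise level are arbitrary choices that keep the SNR finite and positive:

```python
import numpy as np

def reconstruction_snr(truth, rec):
    """Reconstruction SNR (Eq. 5): mean of the true spectrum over the
    standard deviation of the residual between truth and reconstruction."""
    return np.mean(truth) / np.std(truth - rec)

# Synthetic stand-in for the lookup table (illustrative shapes and values).
rng = np.random.default_rng(1)
n_lambda, n_L = 300, 2000
modes = rng.normal(size=(n_lambda, 5))
L = (1.0 + 0.1 * (modes @ rng.normal(size=(5, n_L)))
     + 0.001 * rng.normal(size=(n_lambda, n_L)))

# Compute P from a small random state subset only.
n_r, n_P = 500, 15
subset = rng.choice(n_L, size=n_r, replace=False)   # index set r_{n_L, n_r}
U = np.linalg.svd(L[:, subset], full_matrices=False)[0]
P = U[:, :n_P]

# Reconstruct every spectrum in the table with the subset-based P.
L_rec = P @ (P.T @ L)
snrs = np.array([reconstruction_snr(L[:, i], L_rec[:, i]) for i in range(n_L)])
```

The distribution of `snrs` over all states corresponds to the quantities reduced to mean and standard deviation in Fig. 2.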
Figure 2 shows the mean and standard deviation of this value for all spectra in the lookup table L with respect to the size of the random sample that was used to compute the principal component matrix P. Randomly selected parameter states are used to compute n_P = 15 principal components, with n_P kept constant throughout this paper. The number of principal components used in general controls the reconstruction quality, where more components lead to a better reconstruction. Then, the reconstruction SNR is computed for all spectra in the lookup table, and these values are reduced to their mean value and standard deviation. This procedure was repeated 50 times to remove the effects of a particular realization of a random state sample, and mean values are shown.
The data clearly show an SNR convergence to ≈ 2500 for randomly chosen sets larger than a few hundred. The standard deviation of the reconstruction SNR also converges to a finite value. This can be understood by analyzing the underlying histograms of the reconstruction SNR for various sample sizes, which are shown in Fig. 3. The analysis shows that the histograms for sample sizes of 500 and 5000 are very similar. For these, the wide range of the reconstruction SNR is entirely controlled by the number of principal components and not by the sample size that was used to construct the matrix P. The case with a sample size of only 5 is clearly below that range and shows much smaller SNR values, which is caused by the small sample size.
Which of the state vectors in the lookup table are associated with the largest reconstruction errors is probably of interest for users. Figure 4 shows a histogram of state vector element values of those 10 % of the state vectors exhibiting the smallest reconstruction SNR, that is, the largest reconstruction errors. For this figure, 500 randomly selected states were used to compute the principal components. The states are clearly not equally distributed within this sample, but the over- and under-representation of some states over others is approximately within a factor of two. This fraction is dominated by cases with lower reflectivity, where the reconstruction SNR is naturally smaller. This is clearly shown by the fact that surface reflectivity values of 0.7 are not represented within this sample and that the 0.1 case is largely over-represented. In a similar manner, this behavior is shown by the frequency of occurrence of the different viewing angles. The different aerosol types are almost equally distributed within this sample, since their effect on the absolute reflectance is much smaller than the effect from the viewing geometry or surface reflectance.

Table 1. State vector parameters of the discussed lookup table database. The state space contains physical parameters as well as the viewing geometry of a hypothetical instrument. The nadir viewing geometry is constrained to 0°-40°, while the solar zenith angle and the relative viewing azimuth are less constrained. The aerosol optical models were implemented according to Levy et al. (2007). Here, a rather coarse resolution for all parameter states was chosen to simplify the presented analysis.

Fig. 2. Reconstruction SNR for a lookup table of top-of-atmosphere radiances in the oxygen A band (see Table 1 and Fig. 5 for reference). The reconstruction SNR was defined as the ratio of the mean value of the truth and the standard deviation of the residual, where the residual was defined as the difference between truth and reconstruction. The considered index set was randomly chosen. For each step, computations were repeated 50 times, and the mean value is shown. Shown is the mean reconstruction SNR, which is the mean value over the reconstruction SNR for all cases in the lookup table. The grey shaded area shows the corresponding standard deviation. A histogram for the whole data set is shown in Fig. 3.
Although not explicitly shown here, the states that have the smallest reconstruction errors are likely among the set that was used to compute the principal components in the first place. This emphasizes that these states should be representative of the total data set in order to achieve an almost uniform distribution of the reconstruction error.
This analysis shows that a fairly small subset of states within L is sufficient to construct the principal components for the whole data set. This fact immediately raises the question of whether the complete lookup table L is needed to compute the coefficient matrix C as defined in Eq. (4), which is discussed in the next section.

5 Computing the expansion coefficients from small spectral subsets of L

In this section, it is assumed that the principal component matrix P is already known. In the previous section, it was shown that only a relatively small subset of the states within L is needed to compute the principal component matrix P. In this section, it will be discussed how the expansion coefficients can be computed without requiring the knowledge of the total spectrum, such that far fewer radiative transfer simulations have to be carried out. Our approach is to assume that spectral subsamples λ_n = λ_{s_{n_λ,n}} with n ≪ n_λ exist which are sufficient to compute the expansion coefficients up to an error O[λ_n]:

C_{−,i} = P̃(λ_n) · L_{λ_n,i} + O[λ_n], (6)

where P̃(λ_n) ∈ R^{n_P×n} acts as an effective principal component matrix for a spectral subset, which, after multiplication with the simulations at the selected spectral subset points, produces the original expansion coefficients C_{−,i}. In that respect, P̃ is a function of a particular spectral subset, and the channel subset λ_n ideally identifies the channels carrying the information sufficient to reconstruct the complete spectrum. Such an idea of identifying the channels that carry the most information has been used in the past, for example, to specify the channel layout in the oxygen A band of the spaceborne remote sensing instrument MERIS on board the Envisat platform (e.g., Kollewe and Fischer, 1994). This led to the definition of three channels with moderate spectral width between 3.75 nm and 15 nm. Here, we focus on the reconstruction of the complete hyperspectral data set using the channels that carry sufficient information.

By assuming that O[λ_n] vanishes, P̃ can be computed as the matrix product of the coefficient matrix and the pseudo-inverse of the lookup table matrix for the chosen subsets of states and spectral channels:

P̃(λ_n) = C_{−,s_{n_s}} · (L_{λ_n,s_{n_s}})^+. (7)

Here, s_{n_s} = s_{n_L,n_s} is used as a shorthand notation for the state subset that was used to construct P. Necessary for P̃ to exist is the existence of the pseudo-inverse of the lookup table with respect to both the spectral and the state subset.
Equations (6) and (7) are the two main equations within this paper, since they define the effective principal component matrix P̃ and show how it can be used to compute the hyperspectral expansion coefficients.
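These two equations translate into a short NumPy sketch. The data are again synthetic and of low rank, the spectral subset is a simple equally spaced selection, and the cutoff passed to the pseudo-inverse is an arbitrary regularization choice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic lookup table of low effective rank (illustrative shapes only).
n_lambda, n_L = 450, 1500
L = rng.normal(size=(n_lambda, 5)) @ rng.normal(size=(5, n_L))

# Hyperspectral principal components P and coefficients C_s from a state subset s.
n_s, n_P = 200, 15
s = rng.choice(n_L, size=n_s, replace=False)
P = np.linalg.svd(L[:, s], full_matrices=False)[0][:, :n_P]
C_s = P.T @ L[:, s]

# Spectral subset lambda_n: here simply every 15th channel (equal sampling).
lam = np.arange(0, n_lambda, 15)   # n = 30 channels

# Effective principal component matrix, Eq. (7); rcond truncates the tiny
# singular values of the rank-deficient subset matrix.
P_tilde = C_s @ np.linalg.pinv(L[np.ix_(lam, s)], rcond=1e-10)

# Reconstruct ALL spectra from the n selected channels alone (Eqs. 6 and 3).
C_hat = P_tilde @ L[lam, :]   # expansion coefficients from the subset
L_rec = P @ C_hat             # full hyperspectral reconstruction
rel_err = np.linalg.norm(L - L_rec) / np.linalg.norm(L)
```

For this exactly low-rank toy table the reconstruction from 30 channels is essentially exact; for real line-by-line data the residual is the error O[λ_n] discussed below.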
As a consequence, two points must be discussed. Firstly, how can spectral subsets be chosen optimally? And secondly, how many channels according to a selection scheme are required to reduce the reconstruction error O[λ_n] to a level well below the user requirements?
Three different methods for the selection of spectral subsets are discussed and compared within the next subsections.Other, potentially more efficient methods might exist, as the following discussion is based entirely on heuristic approaches.

Equal sampling
Probably the simplest, while still reasonable, method to construct spectral subsamples is to define the size of the sample and spread the channels evenly throughout the spectral band. The reasoning behind this approach is to cover as much variability in the spectral band as possible while also exploiting the high correlation of channels within a spectral interval.
Figure 5 shows this approach for three test cases that correspond to the selection of every 500th, 200th, and 100th channel from the original simulations. The resulting first principal component and the first three effective principal components are shown in Fig. 6.
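For the spectral sampling of 4500 channels used here, these three test cases can be generated as follows (assuming, for illustration, that each subset starts at the first channel):

```python
import numpy as np

n_lambda = 4500  # simulated channels in the oxygen A band (0.005 nm sampling)
sizes = {}
for step in (500, 200, 100):
    subset = np.arange(0, n_lambda, step)  # every step-th channel, 0-based
    sizes[step] = subset.size
# yields 9, 23, and 45 selected channels, respectively
```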

Optimization based on random walks
While an equal sampling selection is simple and intuitive, better results might be achieved using optimization techniques. These can be distinguished by the minimized or maximized cost function and by the technique employed for the optimization. While for this case it is straightforward to define various cost functions, choosing the optimization technique is more difficult. Commonly used techniques such as the Newton or Levenberg-Marquardt algorithms are based on computing first- and second-order partial derivatives of the optimized cost function. Here, the positions of the selected channels are to be optimized, and it does not seem straightforward to apply the standard techniques to this problem. The computation of derivatives is not necessary for random walk techniques, which we apply to this problem using two different cost functions.
The random walk starts from a selection of evenly distributed sampling points, and a random step is achieved by adding a randomly chosen offset to the position of each selected channel. The maximum range of a random step is chosen to be half the spacing of the initial distribution. If a random step leads to an improvement in the cost function, it is chosen as the basis for the next step; otherwise the previous state is used again for the next step. This procedure is repeated several times, and a maximum number of possible step attempts is prescribed.
Two cost functions appear naturally in this framework:

1. maximization of the l2 condition number of (L_{λ_n,s_{n_s}})^+ · P̃ with respect to λ_n: max_{λ_n} κ((L_{λ_n,s_{n_s}})^+ · P̃), where the l2 condition number κ of a matrix A is defined as κ(A) = ||A^{−1}||_2 · ||A||_2, with ||A||_2 being the l2 norm of A, which is defined as the square root of the largest eigenvalue of the matrix product of the conjugate transpose of A and A itself;

2. minimization of the sum of squares of the differences between the original coefficients C_{−,s_{n_s}} and the reconstructed ones: min_{λ_n} Σ_{i,j} (C_{−,s_{n_s}} − P̃(λ_n) · L_{λ_n,s_{n_s}})²_{i,j}.

Resulting spectral subsets using the minimization of C matrix residuals are shown in Fig. 7 and the resulting principal components in Fig. 8, while results for all three techniques are shown in Fig. 9.
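A sketch of such a random walk, using the C matrix residual cost (cost function 2) on synthetic data; the number of step attempts, the noise level, and the shapes are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic state subset of the lookup table (illustrative shapes and noise).
n_lambda, n_s, n_P = 450, 200, 15
L_s = (rng.normal(size=(n_lambda, 6)) @ rng.normal(size=(6, n_s))
       + 0.01 * rng.normal(size=(n_lambda, n_s)))
P = np.linalg.svd(L_s, full_matrices=False)[0][:, :n_P]
C_s = P.T @ L_s

def cost(lam):
    """Cost function 2: sum of squared residuals between the original
    coefficients C_s and those reconstructed from the spectral subset lam."""
    P_tilde = C_s @ np.linalg.pinv(L_s[lam, :])
    return np.sum((C_s - P_tilde @ L_s[lam, :]) ** 2)

# Start from evenly distributed channels; the maximum random step is half
# the initial spacing, and a maximum number of attempts is prescribed.
n = 9
lam0 = np.linspace(0, n_lambda - 1, n).astype(int)
max_step = (n_lambda // n) // 2

best, best_cost = lam0.copy(), cost(lam0)
for _ in range(200):
    trial = np.clip(best + rng.integers(-max_step, max_step + 1, size=n),
                    0, n_lambda - 1)
    if np.unique(trial).size < n:   # keep the selected channels unique
        continue
    trial_cost = cost(trial)
    if trial_cost < best_cost:      # accept only improving steps
        best, best_cost = np.sort(trial), trial_cost
```

Since only improving steps are accepted, the final cost can never exceed that of the initial equally spaced selection.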

Discussion of the reconstruction SNR
Fig. 10. Residuals between original and reconstructed spectra. The black line with the grey shaded area shows the residual between the original spectra and the ones reconstructed using the full spectral information and 15 principal components. The red line shows the residual using only 9 spectral bands, which were selected using the equal sampling technique. The green line shows similar results, but the optimization of C matrix residuals was used to select the spectral sample.
Figure 9 shows a comparison of the mean reconstruction SNR for the whole lookup table for the three discussed selection techniques. From the selection of 30 spectral bands on, the techniques show only minor differences and are hence equally well suited to solve the problem. Furthermore, the achieved mean reconstruction SNR is equal to the results when using all available spectral bands (compare with Fig. 2). This shows that the highly correlated spectra, when represented with n_P = 15 principal components, can be equally well represented using only 30 selected channels with already known hyperspectral principal components. This behavior is similar to that shown in Fig. 3. There, the number of randomly selected state vector samples approaches a number above which no increase of the reconstruction SNR could be gained by increasing the sample size.
In that respect, the state vector sample and the spectral sample show quite similar behavior with respect to their sample size.
For sample sizes below 30, the techniques show differences, with the least-squares minimization of the C matrix residuals showing the best results. If one is willing to accept a larger error by choosing a smaller spectral subsample, the optimization methods are preferred. Using these techniques, similar ranges of the reconstruction SNR can be achieved with only half of the spectral sample sizes.
One should note that the result for the equal sampling technique could be improved by using better-suited borders of the spectral interval; however, the results of the optimization techniques are quite similar, so this may be of minor importance for the solution of the overall problem.
The effect of choosing different selection techniques is demonstrated in Figs. 10 and 11, which show spectral residuals for the used reconstruction techniques. If 23 spectral bands are used, the results are similar, which corresponds to the results shown in Fig. 9. However, if fewer spectral channels are used, significant differences are apparent. The equal sampling technique shows much larger residuals in the 761 nm region and slightly smaller errors elsewhere than the C matrix residual minimization technique. This can be nicely explained with the chosen spectral sampling (compare Figs. 5 and 7). The optimization technique has an additional channel moved to that spectral area to compensate for such errors. Reconstruction SNR histograms for the discussed techniques are shown in Fig. 12. The optimum histogram is reached for the case using 45 channels for both techniques. A good representation of the optimal histogram is achieved by using only 23 channels. For both cases the selection techniques show similar results. Only for the case of using only 9 spectral bands does the optimized selection technique show much better results.
The main goal of this technique is to reduce the number of radiative transfer calculations needed for a certain problem, which is clearly fulfilled. The achievable speedup, however, strongly depends on the problem. For a lookup-table-like problem as discussed here, the speedup depends on the number of states in the lookup table n_L, the number of states used for the computation of the principal components n_s, the number of original wavelengths n_λ, and the size of the spectral subsample n. A speedup (sp) can be computed as

sp = (n_L · n_λ) / (n_s · n_λ + (n_L − n_s) · n). (8)

Fig. 13. Shown are lines of constant speedup using the presented technique for the lookup-table-like problem as defined in Table 1. The speedup is presented as a function of the size of the chosen spectral subsample and the size of the state space subsample used for the computation of the principal components. Equation (8) was used to create the figure, and the speedup for n = 30 and n_s = 200 is highlighted.
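The speedup estimate can be sketched as a small helper, assuming the cost model of n_s spectra at full resolution plus n channels for each of the remaining n_L − n_s states (consistent with the highlighted case in Fig. 13):

```python
def speedup(n_L, n_lambda, n_s, n):
    """Ratio of the full cost (n_L * n_lambda monochromatic simulations) to
    the reduced cost: n_s spectra at full resolution plus n channels for
    each of the remaining n_L - n_s states."""
    return (n_L * n_lambda) / (n_s * n_lambda + (n_L - n_s) * n)

# Highlighted case of Fig. 13: n = 30 channels, n_s = 200 states.
sp = speedup(n_L=121_500, n_lambda=4500, n_s=200, n=30)  # about 120
```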
The speedup for the discussed lookup table is shown in Fig. 13. It becomes evident that speedups of two orders of magnitude are easily possible using this technique. A special point in this figure emerges if the additional error introduced by this method is negligible. Figure 2 shows that, from a selection of several hundred spectra on, the mean reconstruction error reaches the limit controlled by the number of principal components. Furthermore, Fig. 9 shows that, from 30 channels on, the reconstruction error reaches its optimum. For the discussed application of a lookup table, a speedup of 120 thus comes at almost no cost in additional error. Further speedup can easily be achieved but introduces errors, which have to be checked against the user requirements.
If the radiative transfer is used as a forward model in an inverse problem and the time spent on the computation of the hyperspectral principal components is neglected, the speedup with almost no additional error is given by the ratio of the original spectral resolution and the number of channels needed to reconstruct it; here n_λ/n = 4500/30 = 150.

Conclusions
The presented analysis shows that the proposed spectral subsampling technique can be employed to achieve major speedups for hyperspectral radiative transfer simulations. Its application to the oxygen A band showed that about 30 spectral channels are sufficient to reproduce the full hyperspectral data in a space spanned by 15 principal components. This number will in general depend on the spectral domain to which this method is applied, and for other domains the validity of the method has to be verified again. However, this is not a major drawback of the method. As was shown, a number of hyperspectral simulations have to be performed anyway to produce the principal components of the data set. From there, one can establish the validity of the method and benefit from major speedups. If the validity cannot be established, one has to proceed using different techniques.
We want to highlight two of the main impacts of this technique for problems involving hyperspectral radiative transfer. Firstly, techniques formerly used to gain speedups can be reconsidered. Such techniques include reducing the vertical resolution of the model atmosphere, neglecting polarization, or including only certain orders of scattering. The presented method could be used to avoid many of these simplifications with respect to rigorous solutions of the radiative transfer equation and thus to increase the accuracy of the radiative transfer calculations.
Secondly, this method opens a path to facilitate computationally costly radiative transfer simulations such as full 3-D simulations including polarization. These models have even more complexity in their state vector, including the description of a horizontally and vertically heterogeneous scene, and they are much more demanding in terms of computation time compared to traditional 1-D methods. Hence, this method could be used to employ 3-D radiative transfer for applications that depend on fast hyperspectral radiative transfer.
As a last point, we want to highlight that this method is entirely based on post-processing techniques and requires no changes to the radiative transfer code used. Such codes are often complex in their implementation and thus practically inaccessible for many users. The method is also simple enough to be implemented with ease using modern interpreted languages and high-level functions for the computation of pseudo-inverses and principal component analysis.
Edited by: S. Schmidt

Fig. 1. The left panel illustrates the variability of the used spectral database by showing some randomly selected spectra. The right panel shares the reflectance axis with the left panel and shows logarithmic histograms for four selected wavelengths.

Fig. 3. Histograms of the reconstruction SNR for various sample sizes.

Fig. 4. Normalized histograms for parameter state occurrence in the lowest 10 % fraction of a reconstruction SNR data set based on 500 samples. The occurring states are scaled to a range of [0, 1] such that they can be shown on the same axes. The corresponding values are shown in the legend. The occurrences are normalized by the sample size and multiplied by the number of unique parameter values (e.g., three for surface pressure and five for the aerosol type). A value of one indicates equal representation of this particular parameter value within the data set, while larger and smaller values represent over- and under-representation.

Fig. 6. The grey line shows the first principal component for the complete spectral resolution, while the colored lines show the corresponding rows of P̃ for three sample sizes. The equal sampling technique was used.

Fig. 7. Similar to Fig. 5, but the positions of the spectral channels were chosen with the optimization technique based on the minimization of C matrix residuals.

Fig. 9. Comparison of the mean reconstruction SNR for the whole lookup table for the three discussed selection techniques.

Fig. 12. The grey area shows the reconstruction SNR histograms for the original (i.e., using the full spectral information) reconstruction and two of the discussed reconstruction techniques using small spectral subsamples. The dashed lines show results for the equal sampling technique, while the solid lines show results for the selection based on the minimization of C matrix residuals. The colors indicate the different sizes of the spectral subsamples.
The total number of 121 500 physical cases is considered. The spectral sampling is 0.005 nm, and 4500 channels within the O2 A band were simulated. Altogether this table represents 121 500 · 4500 ≈ 5.5 × 10^8 radiative transfer simulations.
Fig. 5. The grey line shows an example spectrum from the lookup table. The red, green, and blue lines are the result if only every 500th, 200th, or 100th channel of the original spectrum is considered.