07 May 2021
A minimum curvature algorithm for tomographic reconstruction of atmospheric chemicals based on optical remote sensing
 Department of Mechanical and Manufacturing Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
Abstract. Optical remote sensing (ORS) combined with the computerized tomography (CT) technique is a powerful tool for retrieving a two-dimensional concentration map over the area under investigation. Unlike medical CT, however, ORS-CT typically uses only dozens of beams, compared with up to hundreds of thousands in medical CT, which severely limits the spatial resolution and the quality of the reconstructed map. This situation makes "smoothness" a priori information especially necessary for ORS-CT. Algorithms that produce smooth reconstructions include smooth basis function minimization (SBFM), grid translation and multiple grid (GTMG), and low third derivative (LTD); among these, the LTD algorithm is promising because of its fast speed and simple implementation, but its characteristics and theoretical basis are not clear. Moreover, the computational efficiency and the reconstruction quality need to be improved for practical applications. This paper employs two theories, i.e., Tikhonov regularization and spatial interpolation, to produce a smooth reconstruction by ORS-CT. Within these two frameworks, new algorithms can be explored in order to improve performance. For example, we propose a new minimum curvature (MC) algorithm based on the variational approach in the theory of spatial interpolation, which halves the number of linear equations relative to the LTD algorithm by using the biharmonic equation instead of the smoothness seminorm. We compared our MC algorithm with the non-negative least squares (NNLS), GTMG, and LTD algorithms using multiple test maps. The MC and LTD algorithms have similar reconstruction quality, but the MC algorithm needs only about 65 % of the computation time of the LTD algorithm. It is much simpler to implement than the GTMG algorithm because it uses high-resolution grids directly during the reconstruction process, generating a high-resolution map immediately after a single reconstruction.
Compared with the traditional NNLS algorithm, it shows better performance in three aspects: (1) the nearness of the reconstructed maps is improved by more than 50 %; (2) the peak location accuracy is improved by 1–2 m; and (3) the exposure error is improved by more than a factor of 10. The testing results show the effectiveness of the new algorithm based on spatial interpolation theory. Similarly, other algorithms may be formulated to address problems such as over-smoothing in order to further improve the reconstruction quality. These studies will promote the practical application of ORS-CT mapping of atmospheric chemicals.
Sheng Li and Ke Du
Status: open (until 04 Jul 2021)

RC1: 'Comment on amt-2021-122', Anonymous Referee #1, 27 May 2021
General comments
The paper addresses the problem of introducing "smoothness" a priori information into the tomographic reconstruction of atmospheric chemicals based on optical remote sensing. In particular, a new minimum curvature (MC) algorithm is proposed and applied to multiple test maps. The performance of the new algorithm is compared with that of other existing algorithms. The MC algorithm shows almost the same performance as the low third derivative (LTD) algorithm but with significantly less computation time.
I think that the subject is correctly presented in the introduction and sufficiently put in the context of the existing literature on the argument; however, I find that the description of the method is not given in all the needed detail. I suggest improving the description of the method, and below I give some suggestions.
I think that the paper deserves publication in AMT after the following issues are considered.
Specific comments:
- In the Tikhonov approach, an important issue is the choice of the value that is given to the regularization parameter, because this value determines how much a priori information goes into the results. In the paper, it is specified only "the regularization parameter is set to be inversely proportional to the grid length". I suggest describing the criterion that has been followed for the choice of this parameter.
- It would be interesting to know if the algorithm is able also to produce a diagnostics of the results. Generally, a procedure that solves an inverse problem provides also an estimation of the errors (more in general of the covariance matrix) of the products. Furthermore, it would be useful to have quantities (such as the averaging kernel matrices obtained in the case of retrieval of atmospheric vertical profiles) that provide the sensitivity of the result to the true state, which are useful also to estimate the spatial resolution of the result.
- Line 43: I suggest to put a reference for the Radon transformation.
- Line 141–149: In the description of the LTD algorithm it is not clear which are the equations of the system that has to be solved. I understand that for each cell we have two equations obtained setting to zero the third derivatives (in both directions x and y, I suppose, but it is not specified). Then, which are the other equations? Those obtained to look for the minimum of Eq. (3)? Please explain in detail which are the equations of the system that has to be solved.
- Line 159–160: From Eq. (7) I understand that the seminorm is a number relative to the whole field, therefore, I do not understand the meaning that "the seminorm can be calculated at each pixel". Then, which is the summation mentioned in the text? I think that a more clear explanation is needed.
Technical corrections:
The authors introduce many acronyms, but not all of them are then used. I suggest introducing only the acronyms that are used several times in the paper.
Line 26: equality → quality
Line 85: necessary → need
Line 136: what is the superscript 21 after "problem"?
Line 174: well-posted → well-posed
Line 212: It → it
Line 242: increase → increases
Line 286: equality → quality

AC1: 'Reply on RC1', Sheng LI, 18 Jun 2021
Response to comments of reviewer 1
We thank the reviewer for the helpful comments and suggestions regarding our manuscript. Listed below are our itemized responses, with the original comment/question displayed in italics. Please also find the revised manuscript with track changes.
General comments
The paper addresses the problem of introducing "smoothness" a priori information into the tomographic reconstruction of atmospheric chemicals based on optical remote sensing. In particular, a new minimum curvature (MC) algorithm is proposed and applied to multiple test maps. The performance of the new algorithm is compared with that of other existing algorithms. The MC algorithm shows almost the same performance as the low third derivative (LTD) algorithm but with significantly less computation time.
I think that the subject is correctly presented in the introduction and sufficiently put in the context of the existing literature on the argument; however, I find that the description of the method is not given in all the needed detail. I suggest improving the description of the method, and below I give some suggestions.
I think that the paper deserves publication in AMT after the following issues are considered.
Thank you for this comment. We have improved the description according to the suggestions below.

Specific comments:
(1) In the Tikhonov approach, an important issue is the choice of the value that is given to the regularization parameter, because this value determines how much a priori information goes into the results. In the paper, it is specified only “the regularization parameter is set to be inversely proportional to the grid length”. I suggest describing the criterion that it has been followed for the choice of this parameter.
This is a very good suggestion. We have rewritten Sec. 2.2 to give a detailed description of the determination of the weights and the regularization parameter. To summarize, equations are first assigned weights that are inversely proportional to the path/grid length, in order to ensure that equations with different lengths are weighted equally. In practice, the weights for the laser paths are set to one, and the weights for the derivative equations (the regularization parameter) are set to another value, which is to be determined. The regularization parameter is then determined based on the commonly used discrepancy principle. For computational efficiency, the regularization parameter can be selected from several widely varying candidate values, because the reconstruction varies only slowly with the regularization parameter. Finally, the one that produces the smallest discrepancy is used.
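To make the selection procedure concrete, here is a minimal sketch of the discrepancy-principle search over a few widely spaced candidate values. The names (`A` for the beam-path matrix, `R` for the stacked derivative rows, `mu` for the regularization weight) and the helper functions are illustrative assumptions, not the code used for the manuscript:

```python
import numpy as np

def solve_regularized(A, b, R, mu):
    """Solve the stacked least-squares system [A; mu*R] c = [b; 0],
    i.e. beam-path equations plus weighted smoothness equations."""
    A_aug = np.vstack([A, mu * R])
    b_aug = np.concatenate([b, np.zeros(R.shape[0])])
    c, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return c

def pick_mu_discrepancy(A, b, R, candidates, noise_level):
    """Discrepancy principle: among a few widely spaced candidate
    values, keep the mu whose data residual ||A c - b|| is closest
    to the expected measurement noise level."""
    best_mu, best_gap = None, np.inf
    for mu in candidates:
        c = solve_regularized(A, b, R, mu)
        gap = abs(np.linalg.norm(A @ c - b) - noise_level)
        if gap < best_gap:
            best_mu, best_gap = mu, gap
    return best_mu
```

The candidate with the data residual closest to the expected noise level is kept; because the reconstruction varies only slowly with mu, a coarse candidate set is usually sufficient.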
(2) It would be interesting to know if the algorithm is able also to produce a diagnostics of the results. Generally, a procedure that solves an inverse problem provides also an estimation of the errors (more in general of the covariance matrix) of the products. Furthermore, it would be useful to have quantities (such as the averaging kernel matrices obtained in the case of retrieval of atmospheric vertical profiles) that provide the sensitivity of the result to the true state, which are useful also to estimate the spatial resolution of the result.
This is a good idea. The inverse problem in this manuscript is an overdetermined system of linear equations, which can be analyzed using the conventional measures. However, the original inverse problem based on coarse grids is transformed into a regularized problem with fine grids. In this case, only a very limited number of grid pixels are crossed by the laser paths; the others are constrained by the smoothness information from their local neighbors. As a result, the residuals for each laser path can easily be minimized to very small values (about 1e-4), even though the results might be unrealistic. Therefore, traditional measures like goodness of fit or the covariance matrix are not as helpful as in the usual situation. Meanwhile, the averaging kernel matrix (parameter resolution matrix) is a perfect identity matrix (we have checked this from the reconstruction results).
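For reference, the averaging-kernel (model resolution) matrix mentioned above can be computed explicitly for a Tikhonov-type stacked system; the sketch below uses the standard definition AK = G A with the gain matrix G, and the symbols are illustrative rather than the manuscript's notation:

```python
import numpy as np

def averaging_kernel(A, R, mu):
    """Averaging-kernel matrix AK = G A for the regularized
    least-squares solution c_hat = G b, where
    G = (A^T A + mu^2 R^T R)^{-1} A^T.
    A: beam-path matrix, R: stacked smoothness rows, mu: weight.
    A diagonal element of one means the corresponding pixel is,
    formally, perfectly resolved; values below one indicate smoothing."""
    G = np.linalg.solve(A.T @ A + mu**2 * (R.T @ R), A.T)
    return G @ A
```

With R = 0 (or mu → 0) and a full-rank A this reduces to the identity; a nonzero smoothness term pulls diagonal elements below one for pixels that are constrained mainly by their neighbors.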
We think that this kind of inverse problem differs in some ways from the retrieval of the vertical distribution of an atmospheric parameter from remote sensing measurements. In the latter application, a priori information on the state vector (an a priori profile) is usually used. The averaging kernel matrix is then an important measure to characterize the solution of the retrieval, including the retrieval resolution effects and the a priori effects. But in the application of two-dimensional gas mapping, no a priori profile of the concentration distribution is used, and the reconstruction is based on the high-resolution discrete grid pixels instead of weighting a set of profiles. Therefore, measures like nearness and exposure error are generally used to estimate the quality of the reconstruction in the field of computerized tomography or imaging.

(3) Line 43: I suggest to put a reference for the Radon transformation.
The reference has been added.

(4) Line 141–149: In the description of the LTD algorithm it is not clear which are the equations of the system that has to be solved. I understand that for each cell we have two equations obtained setting to zero the third derivatives (in both directions x and y, I suppose, but it is not specified). Then, which are the other equations? Those obtained to look for the minimum of Eq. (3)? Please explain in detail which are the equations of the system that has to be solved.
The reviewer is correct. At each grid pixel, two additional linear equations are appended. These generated equations for all the grid pixels are then combined with the original linear equations (Eq. 2) for the laser paths, resulting in a new, large, overdetermined system of linear equations. This new system is solved to find the concentrations. We have rewritten the description of the LTD algorithm in Sec. 2.2.

(5) Line 159–160: From Eq. (7) I understand that the seminorm is a number relative to the whole field, therefore, I do not understand the meaning that "the seminorm can be calculated at each pixel". Then, which is the summation mentioned in the text? I think that a more clear explanation is needed.
Thank you for pointing this out. The reviewer is correct that the seminorm, which is the total squared curvature, is defined over the whole field. The total squared curvature is the summation of the squared curvature at each grid pixel. To minimize the seminorm, we require its partial derivative with respect to the concentration at each grid pixel to be zero. Thus, we get a difference equation at each grid pixel, which is appended to the original linear equations to form a new system of linear equations. A clearer description, including the derivation, has been added to Sec. 2.3.

Technical corrections:
(1) The authors introduce many acronyms, but not all of them are then used. I suggest introducing only the acronyms that are used several times in the paper.
Thanks for this good suggestion. We have checked the acronyms and removed those used only once.

(2) Line 26: equality → quality

Corrected.

(3) Line 85: necessary → need

Corrected.

(4) Line 136: what is the superscript 21 after "problem"?

The number has been replaced with a reference.

(5) Line 174: well-posted → well-posed

Corrected.

(6) Line 212: It → it

Corrected.

(7) Line 242: increase → increases

Corrected.

(8) Line 286: equality → quality
Corrected.
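To make the two stacked systems discussed in responses (4) and (5) concrete, the sketch below builds both kinds of smoothness rows for a flattened n-by-n grid: third-order forward differences (two families per pixel, along x and y, LTD-style) versus one 13-point discrete biharmonic stencil per pixel (MC-style). This is an illustrative toy that skips boundary stencils; it is not the authors' implementation:

```python
import numpy as np

def ltd_rows(n):
    """LTD-style constraints: third-order forward-difference rows
    (coefficients -1, 3, -3, 1), one family along x and one along y,
    for an n-by-n grid flattened row-major. Pixels without three
    forward neighbours are skipped."""
    rows = []
    for i in range(n):
        for j in range(n - 3):          # along x
            r = np.zeros(n * n)
            r[[i * n + j + k for k in range(4)]] = [-1, 3, -3, 1]
            rows.append(r)
    for i in range(n - 3):
        for j in range(n):              # along y
            r = np.zeros(n * n)
            r[[(i + k) * n + j for k in range(4)]] = [-1, 3, -3, 1]
            rows.append(r)
    return np.array(rows)

def mc_rows(n):
    """MC-style constraints: one 13-point discrete biharmonic stencil
    per interior pixel, obtained by setting the derivative of the
    total squared curvature with respect to each pixel to zero."""
    stencil = {(0, 0): 20.0,
               (1, 0): -8.0, (-1, 0): -8.0, (0, 1): -8.0, (0, -1): -8.0,
               (1, 1): 2.0, (1, -1): 2.0, (-1, 1): 2.0, (-1, -1): 2.0,
               (2, 0): 1.0, (-2, 0): 1.0, (0, 2): 1.0, (0, -2): 1.0}
    rows = []
    for i in range(2, n - 2):
        for j in range(2, n - 2):
            r = np.zeros(n * n)
            for (di, dj), w in stencil.items():
                r[(i + di) * n + (j + dj)] = w
            rows.append(r)
    return np.array(rows)
```

For a 10 x 10 grid this yields 140 LTD rows versus 36 interior MC rows; with boundary stencils included, the MC system has about one equation per pixel, roughly half of the LTD count, consistent with the reduction reported in the abstract.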

RC2: 'Comment on amt-2021-122', Anonymous Referee #2, 22 Jun 2021
GENERAL COMMENTS
================

The paper describes a minimum curvature based regularization scheme for deriving 2D trace gas concentrations from optical remote sensing and tomography. The chosen regularization scheme is sensible and the method seems to be an improvement over the state of the art in the field.

The topic fits the journal.

The textual description is severely lacking and a rewrite to better guide the reader through the numerous methods is necessary. The description of the compared methods is severely lacking mathematical rigour and precise definitions, causing the research to be not replicable in its current state.

I believe that the paper can only be published after a major revision and restructuring. See below for some general guidelines and a number of specific issues.
MAJOR COMMENTS
==============

Precise mathematical description

The problem requires a much more precise mathematical introduction with clear definitions of the employed terms. The paper gives a wide overview over several methods from the literature and introduces many of these and related terms without clear definitions. At least those discussed later should be introduced well enough to follow the paper without further referencing.

The continuous and discrete views of the problem need to be separated and the relationship clarified (see specific comments). Very often it is useful to specify the formulas for the continuous case and then "simply" discretize the resulting integrals and derivatives. In this case one achieves results that are less dependent on the chosen discretization/gridding. This is particularly true as the sampling distance is (wrongly) used in the regularization strength instead of in the integrals themselves.

Motivation for minimum curvature

The chosen regularization is criticized for producing oversmoothened results. The major question is what kind of regularization term would best describe the a priori information. The Laplacian is an obvious choice due to its relationship with the Poisson equation. If diffusion is the major process, then a norm related to an exponential covariance would be very useful (see "Inverse Problem Theory" by Tarantola); also here, the Laplacian pops up, at least for the 3D exponential covariance. It would be interesting to motivate the choice of regularization from the underlying physics.

There is also a host of literature with respect to regularization for optical remote sensing methods from nadir and limb sounding satellites. It would be very interesting to put this method into this context and/or discuss the statistical angle.

Diagnosis

Due to the choice of a grid with more unknowns than measurements, diagnostics become more important. This can be done in a simple fashion with "resolution" measures. Rodgers' "Inverse Methods for Atmospheric Sounding" shows in great detail what kind of diagnostic quantities are relevant (resolution, measurement contribution, smoothing error, uncertainties, ...).
SPECIFIC COMMENTS
=================
line 35
"which can detect a large area in situ and provide near real-time information" Maybe cover? Also "in situ" may be an unconventional use for a remote sensing instrument, depending on the community.
line 43
To be precise, both an infinite number of beams and beams of infinite length are required. One typically assumes a zero signal outside the reconstruction domain, which, for your problem, is a very reasonable assumption and at least alleviates the latter condition. How does the finite number of beams affect the solution?
line 44
"Series-expansion-based methods". Unclear what is meant here. Cite needed. The explanation sounds like a simple discretization, transforming the "continuous problem" of finding a function over L2 to a discrete problem of identifying a number of samples.
line 48
Also medical reconstruction techniques employ discrete samples and basis functions. With many more samples, obviously.
line 49
I do not understand the difference between a pixel based approach and a basis function based one. A pixel is a basis function with rectangular, non-overlapping support.

line 51
"best" in what sense?
line 52
Typically a basis is a basis of a (vector) space. Which space is spanned here? Is the full space spanned or only a subspace thereof?
line 53
What parameters? Typically one derives prefactors of normed basis functions.
line 54
What equations? Why are those equations nonlinear? Typically this problem would be linear even for nontrivial basis functions.
line 55
Best in what sense?
line 56
Too general. Fits to nearly any problem. What error function based on which criteria?
line 58
Define ill-posed. Define very large (millions?).
line 61
Many classes of nonlinear problems can be solved efficiently by deterministic methods. Particularly convex optimization problems such as this.
line 65
Exploiting previous (a priori) knowledge of a problem is almost always key in inverse problems. Doesn't dispersion/diffusion suggest a Laplacian as the regularizing term? Is there a physical relationship between the dispersion processes and the minimum curvature?
line 69
This is the first time NNLS is mentioned in the main text and a cite should be placed here with more detail. The EPA cite does not detail the NNLS algorithm.
line 74
If the number of unknowns is smaller than the number of measurements, such a problem may still be solved by using pseudo-inverses and/or regularization techniques, which are computationally cheap.
line 88
"But the theory basis of the LTD algorithm was not clearly given". By whom? This paper is so far not helping in this regard.
line 92
What is regularization?
line 95
Interpolation theory typically deals with the interpolation of (mostly discrete) data. How does that relate to the problem at hand?
line 98
"The solution to this problem is a set of spline functions." Please be precise about this. The algorithm derives, necessarily, a vector. This vector can be interpreted in a variety of ways. Of particular import is how it is interpreted by the "forward model", because that determines what is fit. This interpretation may differ from the interpretation for the regularisation term, but this introduces necessarily an error. One should be clear about that. Typically one sees the regularization term as an approximation: the computation of derivatives by finite differences is inherently approximate. In the case that the gridding is very fine, the approximation error becomes small, and the point is moot, but this discussion is missing here. The discretization error has not been discussed and thus cannot be neglected. It is sensible to represent the 2D field to be reconstructed here as a 2D spline both in the forward model and the regularization term. This would remove approximation errors at the cost of more complicated algorithms. Either way, the distinction and the used assumptions must be made explicit and errors discussed.
line 102
Maybe a bit more of the theory should be described to make this more obvious to the reader.
line 104
Please properly and mathematically introduce the corresponding biharmonic equation and the smoothness seminorm.
line 106:
Why does it halve the number of equations?
line 107:
If the number of *grid points* increases, what does that mean for the amount of information contained in the results, and to what degree are the resulting "pixels" correlated? I.e. how well is the result resolved?
line 137
A very important interpretation of this approach is the statistical one (optimal estimation), where R^T R can be interpreted as a precision matrix codifying a priori information about the given distribution (i.e. smoothness). Please discuss.
line 139
In what sense is this an approximation? The given formula is discrete already; as such, R_i is not a derivative, but a finite difference operator.
line 142
The nomenclature is highly unusual. Typically derivatives are defined for continuous functions. c was so far a vector. This is an *approximation* of the third-order derivative of a function in y-direction by finite differences. And even then, the division by the grid distance is missing. In this form, the regularisation is grid-dependent and would change in strength for different grid sizes, which requires a retuning of the regularisation strength for every change of grid size. Please take the grid size into account.
line 148
Why is the regularization parameter set to 1 over the grid length? To compensate for the missing factor in R_3, the power of three is missing. There is a host of literature discussing the optimal choice of this parameter (L-curve, optimal estimation, etc.). Practically, it is a tuning parameter which often requires manual adjustments unless both measurements and a priori are very well understood.
line 148
What is meant by "setting the derivative to zero"? Formula (3) minimizes the expression and thus allows for nonzero derivatives unless the factor \mu is chosen to be very large.
line 152
||c|| is typically the absolute value of c. To describe more complicated regularization terms, one often uses a more general function Phi(c) mapping R^n to R^+, or a norm with a subscript like ||c||_\phi^2. Are you referring to Sobolev norms?
line 155
The solution to the problem is a discrete vector c, whereby each element of c defines the concentration in one pixel (see (1)). A spline is something very different, as it is continuous. Your problem is set up to be non-continuous by definition. Please specify your model precisely and be careful with the distinction between the continuous and discrete view.
line 158
c was defined as a vector, not as a continuous function.
line 159
What items in which summation? There is an integral in (7).
line 160f:
"This is how the LTD algorithm does to add additional (sic!) equations?" What does this mean?
line 161:
Is such a complicated equation really more efficient computationally than two much simpler ones? What are the involved algorithmic complexities?
line 165:
It is unclear why this biharmonic equation is necessary. You are already minimizing a cost function in (6). Equation (7) gives you an immediate way to calculate c required by (6). Computing the discretized ddc/ddx + ddc/ddy and computing the Euclidean norm should give you (6) without the need for higher derivatives in the definition of the problem. To efficiently solve (6) one might need higher derivatives depending on the chosen algorithm (e.g. Gauss-Newton), but your paper does not detail this part very well.

Please describe in detail by which algorithm (6) is solved and how (9) plays a role in that.
line 169:
Again, the regularization weight typically depends on diffusion coefficients and measurement errors and is often a tuning parameter. The grid size should directly be implemented in the finite difference equations.
line 180:
What interpolation applied after the reconstruction process? The pixel-based algorithm assumes constant values over constant pixels. There is no smoothing interpolation which would not deteriorate the fit to the measurements, i.e. deteriorate the results.
line 186
Here c is defined, for the first time, as a continuous function! Please properly distinguish the "different" c's.
line 189
What source number? (10) defines only a single source. If you use multiple sources, please accommodate this in (10).
line 189
You state that the peak width was set randomly. Was it chosen randomly from the listed peak widths of line 187f?
line 195
You were using c_i,j above for the 2D fields, why now only c_i?
line 201
How was the location of the highest peak determined?
line 210
Why did you not apply the other algorithm on your fields for better comparability?
line 215
While it seems to work, the pixel based algorithm derives pixels, not a continuous field. It is straightforward to derive a spline interpolated field directly, if desired for the higher accuracy. One simply needs to compute the integrals over the spline interpolated field for the coefficients when computing the error to the measurements. This can be accomplished by a linear matrix multiplication. I expect this to deliver similar results as the other methods at maybe even faster speed due to the smaller number of involved equations. Please discuss the choice of your simpler forward model.
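The linear construction sketched in this comment can be illustrated as follows: each row of the forward matrix accumulates interpolation weights along a beam (midpoint-rule sampling with bilinear interpolation as a simple stand-in for a full spline), so that fitting an interpolated field remains a single matrix-vector product. The function name and sampling scheme are assumptions for illustration only:

```python
import numpy as np

def beam_matrix_bilinear(beams, n, samples=200):
    """Forward matrix whose rows integrate a bilinearly interpolated
    n-by-n field along straight beams, so that path integrals are a
    single matrix-vector product. `beams` is a list of endpoint pairs
    ((x0, y0), (x1, y1)) in grid coordinates [0, n-1]."""
    A = np.zeros((len(beams), n * n))
    for k, ((x0, y0), (x1, y1)) in enumerate(beams):
        length = np.hypot(x1 - x0, y1 - y0)
        ds = length / samples
        for t in (np.arange(samples) + 0.5) / samples:
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            i, j = min(int(y), n - 2), min(int(x), n - 2)
            fy, fx = y - i, x - j
            # bilinear weights onto the four surrounding grid nodes
            A[k, i * n + j]           += (1 - fx) * (1 - fy) * ds
            A[k, i * n + j + 1]       += fx * (1 - fy) * ds
            A[k, (i + 1) * n + j]     += (1 - fx) * fy * ds
            A[k, (i + 1) * n + j + 1] += fx * fy * ds
    return A
```

For a constant unit field the row sums reproduce the beam lengths, which is a quick sanity check of the quadrature.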
line 243
Why does the necessary computation time scale with the number of sources? Shouldn't it be proportional to the problem size?
line 283
"oversmooth issue" - necessarily, the amount of information cannot increase between the measurements and the solution. Due to the chosen regularization, the result will necessarily be smooth. Whether it is "oversmooth" depends on whether the a priori assumption of smooth fields is correct or not.

In case this assumption does not hold, "better" (less smooth) results can be achieved by total-variation minimization (isotropic or anisotropic) and primal-dual methods, e.g. Split-Bregman. I doubt this would fit better to your problem, though.
MINOR REMARKS
=============
line 55
"question". This is called an "inverse problem".
line 74
grids → grid points?
line 85
necessary → necessity
line 144
→ "third-order forward difference operator"
line 160
Which "multiple items"?
line 174:
"For pixel-based" → "For conventional pixel-based"; posted → posed
RC3: 'Comment on amt-2021-122', Anonymous Referee #3, 22 Jun 2021
Anonymous referee report
The paper by Sheng Li and Ke Du proposes a new minimum curvature (MC) algorithm to apply smoothness constraints in the tomographic inversion of optical remote sensing measurements, to reconstruct the spatial distribution of atmospheric chemicals in a given domain (a 40 x 40 m square area in the example given). The authors compare the performance of their newly proposed method to that of other existing methods, such as the non-negative least squares (NNLS) and the low third derivative (LTD). The performance is assessed on the basis of a few test maps containing one or several (up to five) bivariate Gaussian sources. Apparently, the MC algorithm performs significantly better than the NNLS method, and shows almost the same performance as the LTD algorithm in terms of reconstruction accuracy. Compared to the latter, however, the MC algorithm allows saving from 27 to 35 % of the computation time, depending on the number of sources in the domain.
The subject of the paper is interesting, comprehensively presented in the introduction and put in the context of the existing literature on the topic. The method used for the assessment, however, is not sufficiently general and could be improved. The presentation of the assessed algorithms is not sufficient; the actual equations used should be included in the paper. Regarding the language of the text, I am not a native English speaker, thus I cannot provide reliable feedback. However, the language sounds a bit "strange" to me at several instances. Therefore I recommend a review by a language editor. Also, I would suggest avoiding flooding the text with acronyms. Several of them are not really necessary and make reading the paper uncomfortable.
In conclusion, I am very sorry, but I can recommend this paper for publication in AMT only after major improvements, as outlined in the comments below.
General comments
My main concern is that the authors have compared the field reconstruction errors of the NNLS, LTD, MC (and GTMG) algorithms on the basis of a set of only five test distribution maps. I would say that they have verified some necessary conditions which, however, are not sufficient to assess the relative efficiency of the considered methods. Each solution depends both on the measurements and on the constraints applied. Here it is not clear whether the LTD and MC solutions perform better than the NNLS because of the smarter applied constraints or because of the specific experimental distributions (bivariate Gaussians) considered in the examples given.
Error covariance matrices and averaging kernels (see e.g. Rodgers, 2000) are broadly used tools in the atmospheric remote sensing community to characterize the recovered spatial distribution (yes, also 2D distributions ...) from the point of view of the retrieval error and of the spatial resolution (width of the point spread function) of the measurement chain (measuring plus inversion systems). Applying these tools to the inversion methods considered in the paper is possible, thus the authors should use them. For example, from the analysis reported in the paper, it is not self-evident that the spatial resolution of their measuring system changes strongly depending on the x, y position within the square field considered: there are grid elements which are crossed by 2 or 3 beams, and others (near the sides of the field) which are not sounded at all. Thus, the spatial resolution must be very poor near the sides of the square field and much better near the center. I believe this feature would be self-evident from maps of the diagonal elements of the 2-dimensional averaging kernels of the different solutions considered (see e.g. von Clarmann et al., 2009).
Specific comments
Lines 42–44: please include references for the mentioned techniques. They are not standard for the whole atmospheric remote sensing community.
Lines 73–74: not only, I guess. The chosen pixels should be crossed by at least one beam, otherwise the NNLSF is ill-posed.
Line 122: I would name the small squares as “pixels” instead of “grids”.
Line 127: if the PIC is measured at the retroreflectors, then you have only 16 measurements, or 20 measurements if retroreflectors are installed also at the corners of the square. Instead, I guess that for the NNLSF you need at least 36 measurements. Please make an effort to describe more thoroughly the experimental setup.
Section 2.1: it would be interesting if the authors could explain the details of the experimental setup, I could not understand which is exactly the measured quantity. This would be useful also to understand to which degree the linear formulas (1) and (2) are accurate.
Equation (3) assumes that all the measurements have the same precision, which may not be the case if the signals observed are very different in intensity (e.g. because of the different absorption paths). Could you please add a comment?
Line 139: With equation (3) you require a solution with a "small" ||L_i c|| norm, whereas usually one requires a small ||L_i (c – c_a)|| norm, where c_a is some prior estimate of c. Please explain the rationale behind your choice of c_a = 0.
Line 148: If the regularization parameter μ of eq. (3) is grid-dependent, then I would expect it to appear in some vector form in equation (3) rather than as a scalar. How do you establish the actual value of μ? Which is the solution of the LTD algorithm? Please specify the equation.
Line 149: In principle, the NNLS algorithm does not use constraints, correct? Here you are explaining the LTD algorithm, so it cannot be solved with the NNLS approach. Maybe you refer to the Newton method? Please explain.
Lines 158–165: Up to eq. (6), and later also in eq. (9), c is a vector. In eqs. (7) and (8) "c" seems to be a function. Please improve the notation; it would be difficult to implement your MC algorithm based on your description.
Line 167: I have understood that you are finally using eq.(9), that is the discretized form of eq.(8). Is eq.(9) more or less equivalent to eq.(7)? This description is very confusing.
Line 169: the same comment I made for μ (line 148) here applies to ω. Which constant for the inverse proportionality did you use?
Line 187: If c(x,y) is a concentration, then Q cannot be measured in ppm (that is a mixing ratio). Eq.(10) does not contain σ, it contains σ_{x} and σ_{y}…
Lines 212–213: this sentence is not clear to me.
Line 220: “The smaller the number of sources, the better the reconstruction quality”. I think this is intrinsic to the definition of the nearness quantifier. Please comment.
Lines 231–233: do you mean that the "peak-location" quantifier could mistake peaks with similar amplitudes? In this case it would be advisable to refine eq. (12) or to apply it with some caveats.
Line 233: I think that, as it is, eq. (12) is reliable only if the peaks to be reconstructed have amplitudes that differ from each other by much more than the error with which they are retrieved. Why does the "source number" count so much?
Lines 239–240: I am skeptical about your general statement regarding the NNLS performance. I would suggest adding some details regarding how you actually computed the NNLS solution.
Technical corrections
Line 50: summarising ??
Line 85: necessary ?
Line 136: what is the superscript “21” ?
Line 174: “wellposed”
Line 212: it (?)
Line 224: maybe “complexity” ?
Line 228: are slightly better ...
Line 241: derivation ??
References
Rodgers, C. D.: Inverse Methods for Atmospheric Sounding: Theory and Practice, in: Series on Atmospheric, Oceanic and Planetary Physics, Vol. 2, edited by: Taylor, F. W., World Scientific, 2000.
von Clarmann, T., De Clercq, C., Ridolfi, M., Hoepfner, M., and Lambert, J.-C.: The horizontal resolution of MIPAS, Atmos. Meas. Tech., 2, 47–54, https://doi.org/10.5194/amt-2-47-2009, 2009.