Truth and Uncertainty. A critical discussion of the error concept versus the uncertainty concept

Contrary to the statements put forward in "Evaluation of measurement data – Guide to the expression of uncertainty in measurement", edition 2008 (GUM-2008), issued by the Joint Committee for Guides in Metrology, the error concept and the uncertainty concept are the same. Arguments in favour of the contrary have been analyzed and were found not compelling. Neither was any evidence presented in GUM-2008 that "errors" and "uncertainties" define a different relation between the measured and the true value of the variable of interest, nor does this document refer to a Bayesian account of uncertainty beyond the mere endorsement of a degree-of-belief-type conception of probability.

2 Error and Uncertainty

GUM-2008 endorses a new terminology compared to that of traditional error analysis. In the case of indirect measurements, e.g., remote sensing, the measurand, i.e., the quantity of interest, x, and the measured signal y are linked via a function f as

y = f(x; b) + ε,

where b represents the parameters of f representing physical side conditions and ε is the actual measurement error in the y-domain (Rodgers, 2000). In the case of remote sensing of the atmosphere, f is the radiative transfer function. We use vector notation because in remote sensing x and y are typically vectors. The retrieval of x from y involves a conclusion from the effect to the cause. More often than not, this inverse problem is ill-posed in the sense of Hadamard (1902); the direct inversion is impossible, and some kind of workaround is employed. Candidate workarounds are least-squares solutions, regularized solutions and so forth. von Clarmann et al. (2020) summarize the most common methods to solve this kind of problem, along with related error estimation schemes. Here we call this substitute for the genuine inversion F̂⁻¹, where F is a function representing the true radiative transfer function to the best of our knowledge, i.e., the descriptive radiative transfer law as opposed to the governing but unknown law.
Some complication arises because the true atmospheric state can be represented only by spatially continuous functions while we work with vectors of finite dimension. Here we avoid related difficulties by assuming that the measurand x represents a discretized atmosphere, i.e., it does not represent a point value but some kind of spatio-temporal average.
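One of the workaround families mentioned above, a regularized least-squares solution, can be sketched for a linear toy forward model. All matrices and numbers below are invented for illustration; this is not the radiative transfer problem itself, merely a minimal demonstration of how an ill-conditioned inversion is stabilized.

```python
import numpy as np

# Sketch of a regularized least-squares "workaround" F^-1 for an ill-posed
# linear forward model y = K x + noise. K, the regularization strength and
# all numbers are invented toy values.

K = np.array([[1.0, 1.000],
              [1.0, 1.001],
              [1.0, 0.999]])          # nearly collinear columns: ill-conditioned
x_true = np.array([2.0, 3.0])
rng = np.random.default_rng(0)
y = K @ x_true + rng.normal(0.0, 1e-3, size=3)

lam = 1e-4                            # Tikhonov regularization parameter (assumed)
x_hat = np.linalg.solve(K.T @ K + lam * np.eye(2), K.T @ y)

print("x_hat =", x_hat)               # well-determined combination x1 + x2 is recovered
```

The regularization trades exactness for stability: the well-constrained combination of the state elements is recovered accurately, while the poorly constrained combination is damped toward zero rather than amplified by the noise.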

One of the first major documents where the term 'error' has been used with this statistical connotation is, to the best of our knowledge, "Theoria Motus Corporum Coelestium" by C. F. Gauss (1809). Since then, the term 'error' has commonly been used to signify a statistical estimate of the size of the difference between the measured and the true value of the measurand. Seminal books such as "Statistical Methods For Research Workers" by R. A. Fisher (1925) or "Inverse Methods For Atmospheric Sounding – Theory and Practice" by C. D. Rodgers (2000) use the term in this sense. The estimated error is understood as a measure of the width³ of a distribution around the measured (or estimated) value which tells the data user the probability density, or the likelihood density, depending on the statistical framework used, of a certain value to be measured or estimated if the value actually measured or estimated was the true value. Counterintuitively, it does not in general provide the probability density that the measured value is the true value.

³ Other estimates are also used, e.g., robust ones like the interquartile range.

GUM-2008 does not only present traditional error analysis in a revised language but suggests that there is more to it. That is to say, the entire concept is claimed to be replaced (see, e.g., GUM-2008, Sect. 3.2.2, Note 2). We understand that GUM-2008 grants that the classical concept of error analysis deals with statistical quantities, but these are statistical estimates of the difference between the measured or estimated value and the true value. We take GUM-2008 to be saying that the reference of even this statistical quantity to the true value poses certain problems, because the true value is unknown and unknowable. As a solution of this problem, the uncertainty concept is introduced, which allegedly makes no reference to the true value of the measurand and is thus hoped to avoid related problems. GUM-2008 (particularly Section 2.2.4) unfortunately leaves room for multiple interpretations, but our reading is that an error distribution is understood by GUM-2008 as a distribution whose dispersion is the estimated statistical error and whose expectation value is the true value, while an uncertainty distribution is understood as a distribution whose dispersion is the estimated uncertainty and whose expectation value is the measured or estimated value. GUM-2008 (p. 5) characterizes error as "an idealized concept" and states that "errors cannot be known exactly". This is certainly true, but it has never been claimed that errors can be known exactly. Since not all relevant error sources are necessarily known, any error estimate remains fallible, but still it is and has always been the goal of error analysis to provide error estimates as realistic as possible.
To use the statistical conception of 'error' and to concede the fallibility of its estimated value, it is not necessary to know the true value. It is only necessary to know the chief mechanisms which can make the measured value deviate from the true value and to have estimates available on the uncertainties of the input values to these mechanisms. Some GUM-2008 endorsers (e.g., Kacker et al., 2007) try to draw a borderline between error analysis and uncertainty assessment in a way that they associate error analysis with frequentist statistics while uncertainty is placed in the context of Bayesian statistics. Frequentist statistics, we understand, is a concept where the term 'probability' is defined via the limit of frequencies for a sample size approaching infinity. This definition is challenged because it involves a circularity: it is based on the law of large numbers (strong version), according to which a frequency distribution will almost certainly converge towards its limit. This limit is then associated with the probability. 'Almost certainly' means 'with probability 1'. The circularity is given by the fact that the definiendum appears in the definiens (see, e.g., Stegmüller, 1973, pp. 27 ff.). Also the weak version of the law of large numbers involves the concept of probability and thus poses a similar problem to the definition of the term 'probability'. We concede that many estimators in error estimation rely on frequency distributions. It is, however, a serious misconception to conclude from this that error analysis is based on a frequentist definition of 'probability'. This is simply a non sequitur.

Frequency-based estimators are consistent with any of the established definitions of probability, and their use does not allow any conclusion about the definition of 'probability' in use.

One of the major purposes of making scientific observations, besides triggering ideas on possible relations between quantities, is to test predictions based on theories about the real world (Popper, 1935). To decide if an observation corroborates or refutes a hypothesis, it is necessary to have an estimate of how well the observation represents the true state, because it must be decided how well any discrepancy between the prediction and the observation can be explained by the observational error (e.g., Mayo, 1996). Any concept of uncertainty that is not related to the true state cannot serve this purpose.
Inserting this definition in the GUM-2008 definition of uncertainty yields that, through the back door, uncertainty still refers to the true value. Thus it is not clear what the difference between the traditional concept of error analysis and the uncertainty concept is. Further, it is stated that systematic effects can contribute to the uncertainty. GUM-2008 falls short of clarifying how a systematic effect can be understood other than as a systematic deviation between the measurement and the true value, that is, the very concept GUM-2008 apparently tries to avoid. In order to justify the attribution of an uncertainty distribution to the systematic effects without relying on frequentist statistics, they invoke the concept of subjective probability. With this it becomes possible to assign an uncertainty distribution to the combined random and systematic uncertainty, but still it is not clear how the systematic effect is defined without reference to the unknown truth.
It appears inconsistent to rely on the concept of the true value when it comes in handy and to deny it when it would solve the conflict between the error and the uncertainty concepts.
Our skepticism about the possibility of dispensing with the concept of the true value is shared by, e.g., Ehrlich (2014), Grégis (2015), and Mari and Giordani (2014). The use of the term 'uncertainty' in GUM-2008 seems inconsistent: the general GUM-2008 concept seems to be that the 'error' has to include all error sources and thus cannot be known, while 'uncertainty' is weaker; it is only an estimate of quantifiable errors, excluding the unknown components. The only other possible reading is that they want to say that, due to unknown (unrecognized and/or recognized but not quantified⁶) error sources, error estimation will always be incomplete. Note that the International Vocabulary of Metrology states: "Although these two traditional concepts are valid as ideals, they focus on unknowable quantities: the 'error' of the result of a measurement and the 'true value' of the measurand (in contrast to the estimated value), respectively. Nevertheless, whichever concept of uncertainty is adopted, an uncertainty component is always evaluated using the same data and related information..." (emphases in the original). It remains unclear how the concepts can, on the one hand, be consistent, while, on the other hand, it is claimed that the error approach and the uncertainty approach are actually conceptually different and not only with respect to terminology. In GUM-2008, p. 5, it reads: "In this Guide, great care is taken to distinguish between the terms 'error' and 'uncertainty'. They are not synonyms, but represent completely different concepts; they should not be confused with one another or misused." Since both concepts, however, are consistent, it is not clear in what the difference of the concepts consists.

⁶ Rigorously speaking, within the concept of subjective probability, recognized but unquantified uncertainties should not exist. It is not clear how this can be achieved without explicit consideration of the Bayes theorem.
Again, we come back to Kacker et al. (2007), who claim that error estimation and uncertainty analysis are best distinguished in the sense that the former relies on frequentist statistics while the latter is founded on Bayesian statistics. Here the following remarks are in order: (1) The methodology proposed in GUM-2008 is uncertainty propagation. This is a mere forward (or direct) problem: given that x_true is the true value, and a measurement procedure with some error distribution, it returns a probability distribution for values x_measured that might be measured. However, GUM-2008's definition of uncertainty, "parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand" (emphasis added by us), seems associated with another meaning: given a measured value x_measured ("result of a measurement") and a measurement procedure with some error distribution, what is the probability distribution of x_true ("values that could reasonably be attributed to the measurand")? This is an inverse problem, for which the Bayes theorem is applicable rather than uncertainty propagation.

Interestingly enough, early documents of the history of GUM (Kaarls, 1980; Bureau International des Poids et Mesures) provide evidence that the terminological turn from 'error' to 'uncertainty' was triggered only by linguistic arguments, based upon the fact that in common language the term 'uncertainty' is often associated with "doubt, vagueness, indeterminacy, ignorance, imperfect knowledge". These early documents provide no evidence that 'error' and 'uncertainty' were conceived as two different technical terms connoting different concepts. Any re-interpretation of the terms 'error' and 'uncertainty' as frequentist versus Bayesian terms or operational versus idealistic concepts came later.
In summary, it appears that the uncertainty concept is not essentially different from the error concept. We do, however, not claim that the terms 'error' and 'uncertainty' are fully equivalent; even in pre-GUM language there might be some subtle linguistic differences. We perfectly know our measurement (even if it is erroneous) and have imperfect knowledge only about the true value. This suggests that the uncertainty is an attribute of the true value while the error is associated with a measurement or an estimate. Because of the measurement error there is an uncertainty as to what the true value is. The uncertainty thus describes the degree of ignorance about the true value, while the estimated error describes to which degree the measurement is thought to deviate from the true value. In this use of language both terms still relate to the same concept. This notion seems, as far as we can judge, to be consistent with the language widely used in the pre-GUM literature, but this issue deserves a more thorough linguistic assessment that is beyond the scope of this paper. The introduction of the term 'uncertainty of measurement' seems to us a mere linguistic revision of an established terminology which does not connect to any further insights.
In summary, we have to distinguish between two questions; first, whether the terms 'error' and 'uncertainty' have the same connotation, and second, whether the underlying concepts are indeed different. The answer to the terminological question was found to be contingent upon the underlying stipulation: any statement about their equivalence or difference without reference to a definition is a futile pseudo-statement. The answer to the question of conceptual differences is less trivial and deserves some deeper scientific discussion. The main question still seems to be how the true value, the error or uncertainty, and the measured value are related to each other. This question will be addressed in the following section.
3 The unknown true value of the measurand

The alleged key problem of the error concept is, in our reading of GUM-2008, that the true value of the measurand is not known, and that this true value must appear neither in the definition of any term nor in the recipes to estimate it. To better understand this key problem, we decompose it into four sub-problems.
1. Quantities whose value cannot be determined must not appear in definitions.
2. The error distribution must not be conceived as a probability density distribution of a value to be the true value.

Intuitively, we conceive the definition of a quantity and the assignment of the value to a quantity as quite different things.
In GUM-2008 it is claimed that the definition of 'uncertainty' is an operational one (p. 2). An operational definition defines a quantity by stipulating a procedure by which a value is assigned to this quantity. The concept of operational definitions was suggested by Bridgman (1927) in order to give terms in science a clear-cut meaning. This operationalism, at least in a narrow conception of it, has its own problems, has received considerable criticism, and has led to deep philosophical discussions (see, e.g., Chang, 2019). To summarize these is beyond the scope of this paper, and here it must suffice to mention that there are alternatives, such as theoretical definitions or the reduction of the definiendum to previously defined terms. GUM-2008's claim that the uncertainty concept is based on an operational definition leads to two further inconsistencies. First, no unambiguous operation is stipulated on which the definition can be based, but multiple operations are proposed, which might give different uncertainty estimates.

The other problem with the operational definition is the following: in GUM-2008, pp. 2-3, it is claimed that the uncertainty concept is not inconsistent with the error concept, and a few lines later it reads "an uncertainty component is always evaluated using the same data and related information" (emphasis in the original). The latter suggests that within the error concept the same operations are used as within the uncertainty concept. Since the operations define the term and the related concept, the uncertainty concept and the error concept must be the same.⁷

⁷ We owe this illustrative example to Possolo (2021).
In summary, the fact that the true value of the measurand is unknowable is a problem for the definition of the term 'error' and its statistical estimates only if we commit ourselves to the doctrine that only operational definitions must be used. If we abandon this dogma, there is nothing wrong with conceiving the estimated error as a statistical estimate of the difference between the measured or estimated value and the true value, and the problem is restricted to the assignment of a value to this quantity. Related issues are investigated in the following.

The non-consideration of the Bayes theorem goes under the name of 'base rate fallacy'. 50% of people suffering from Covid-19 have fever (Robert Koch Institut, 2020), but this does not imply that the probability is 50% that a person with fever has Covid-19. To estimate the latter probability requires knowledge of the percentage of people being infected with the Corona virus and the probability that a person suffers fever for any reason. In metrology the situation is quite analogous. If we define the true value to be x, the ideally measured value f(x) = y_ideal, and the estimated measurement error in terms of the standard deviation σ_y, then the probability density of a certain value y to be measured is given by a pdf with mean y_ideal and dispersion σ_y. This, however, does not imply that, if we measure y with an uncertainty of σ_y and propagate σ_y through the inversion procedure to get the uncertainty of x̂, namely σ_x, the probability of some x being the true value of the measurand is given by the pdf with mean x̂ and dispersion σ_x. Again, it is the a priori probability distribution⁸ which is missing. There are three possible solutions to cope with this problem. For now we will defer the problem of a possibly incomplete error budget to Section 3.4 and assume that the error budget is complete.
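The base-rate computation above can be made explicit with the Bayes theorem. Only the 50% fever rate among the infected is taken from the text; the prevalence and the overall fever rate below are invented illustrative numbers.

```python
# Minimal sketch of the base-rate fallacy via Bayes' theorem.
# Only P(fever | covid) = 0.5 comes from the text; the other two
# numbers are assumed purely for illustration.

p_fever_given_covid = 0.5   # from the text (Robert Koch Institut, 2020)
p_covid = 0.01              # assumed base rate (prior): 1% of people infected
p_fever = 0.02              # assumed rate of fever from any cause

# Bayes' theorem: P(covid | fever) = P(fever | covid) * P(covid) / P(fever)
p_covid_given_fever = p_fever_given_covid * p_covid / p_fever

print(f"P(covid | fever) = {p_covid_given_fever:.2f}")  # 0.25, not 0.5
```

With these assumed numbers the posterior probability is 0.25, far from the 50% one might naively read off the conditional in the opposite direction; the prior (the base rate) is what makes the difference.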
The first solution is to apply a retrieval scheme that is based on a Bayesian estimator. Examples are found, e.g., in Rodgers (2000) or von Clarmann et al. (2020). On the supposition that the error budget is complete, the interpretation of the error bar as the dispersion of a distribution representing the probability density that a certain value is the true value is correct.
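For a linear toy problem, such a Bayesian estimator can be sketched with the standard linear maximum a posteriori formulas as given, e.g., by Rodgers (2000). All matrices and numbers below are invented for illustration.

```python
import numpy as np

# Sketch of a linear Bayesian (maximum a posteriori) retrieval in the spirit
# of Rodgers (2000). Toy setup: 2-element state x, 3-element measurement y,
# forward model y = K x + noise. All values invented.

K = np.array([[1.0, 0.5],
              [0.3, 1.0],
              [0.8, 0.2]])          # Jacobian of the forward model
S_y = 0.1**2 * np.eye(3)            # measurement noise covariance
x_a = np.array([1.0, 2.0])          # a priori state
S_a = 1.0**2 * np.eye(2)            # a priori covariance

x_true = np.array([1.2, 1.8])       # in reality unknown; used only to simulate y
rng = np.random.default_rng(0)
y = K @ x_true + rng.normal(0.0, 0.1, size=3)

# Posterior covariance and MAP estimate:
#   S_hat = (K^T S_y^-1 K + S_a^-1)^-1
#   x_hat = x_a + S_hat K^T S_y^-1 (y - K x_a)
S_hat = np.linalg.inv(K.T @ np.linalg.inv(S_y) @ K + np.linalg.inv(S_a))
x_hat = x_a + S_hat @ K.T @ np.linalg.inv(S_y) @ (y - K @ x_a)

print("x_hat =", x_hat)             # estimate of the state
print("posterior std =", np.sqrt(np.diag(S_hat)))
```

Here the diagonal of S_hat may legitimately be read as the dispersion of a probability density over true values, because the prior has been made explicit.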

The second solution is the application of the principle of indifference, as applied, e.g., by Gauss (1809). That is, the same a priori probability is assigned to all possible values of the measurand. With this, e.g., in the application to a linear inverse problem with normally distributed errors, the resulting distribution can be read as a probability density function of the true value of the measurand. This concept of a 'non-informative a priori', however, has its own problems. Even if we ignore some more trivial problems for the moment, e.g., that some quantities cannot, by definition, take negative values, this concept can lead to absurdities: if we assume that we have no knowledge on, say, the volume density of small-particle aerosols in the atmosphere, and describe this missing knowledge by an equidistribution of probabilities, this would correspond to a non-equidistribution of the surface densities, due to the non-linear relationship between surface and volume. It strikes us as absurd that information can be generated just by such a simple transformation from one domain into another. The principle of indifference, upon which the concept of non-informative priors is built, is critically but still favorably discussed, e.g., by Keynes (1921, Chapter IV). The concept of non-informative priors is criticized even in the Bayesian community (e.g., D'Agostini, 2003).
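The transformation problem can be demonstrated numerically with the sphere relation S = (36π)^(1/3) V^(2/3): a flat distribution of volumes maps onto a visibly non-flat distribution of surface areas. The bounds below are invented.

```python
import numpy as np

# Sketch of the prior-transformation problem: a flat ("non-informative")
# distribution of sphere volumes V implies a NON-flat distribution of the
# corresponding surface areas S = (36*pi)**(1/3) * V**(2/3). Bounds invented.

rng = np.random.default_rng(1)
V = rng.uniform(1.0, 2.0, size=100_000)                  # equidistributed volumes
S = (36.0 * np.pi) ** (1.0 / 3.0) * V ** (2.0 / 3.0)     # implied surface areas

counts, _ = np.histogram(S, bins=10, range=(S.min(), S.max()))
print(counts)  # increasing counts: larger surfaces are over-represented
```

The histogram of S rises monotonically (its density goes as S^(1/2)), so declaring ignorance in the volume domain amounts to a definite, informative statement in the surface domain.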
The third solution is the likelihood interpretation, which was introduced by Fisher (1922).

Nonlinearity issues
The uncertainty concept relies on the possibility of evaluating uncertainties caused by measurement errors and "systematic effects" without knowledge of the true value. This is certainly granted for linear problems. The resulting uncertainty in x̂, namely σ_x, generated by a measurement error statistically characterized by its standard deviation σ_y, or by a systematic effect, e.g., a not exactly known value of a constant b, will be the same for each x̂. Here the uncertainty estimates do not depend on the value of the measurand. This is because in the linear case Gaussian error propagation holds, and

S_x,noise = G S_y Gᵀ and S_x,b = G_b S_b G_bᵀ,

where S_x,noise, S_x,b and S_b are the covariance matrices of the noise-induced retrieval error, the parameter-induced retrieval error, and the parameters b, respectively, S_y is the measurement noise covariance matrix, and G and G_b are the matrices of partial derivatives ∂x̂_n/∂y_m and ∂x̂_n/∂b_k, respectively⁸.
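The noise propagation above can be written out directly; the Jacobian and noise levels below are invented toy values.

```python
import numpy as np

# Sketch of linear Gaussian error propagation, S_x,noise = G S_y G^T,
# with invented toy matrices: G maps 3 measurement channels onto
# 2 state elements.

G = np.array([[0.6, 0.2, 0.1],
              [0.1, 0.5, 0.4]])            # partial derivatives d x_n / d y_m
S_y = np.diag([0.1**2, 0.2**2, 0.1**2])    # measurement noise covariance

S_x_noise = G @ S_y @ G.T                  # noise-induced covariance of x-hat
sigma_x = np.sqrt(np.diag(S_x_noise))      # 1-sigma error bars on x-hat

print(S_x_noise)
print("sigma_x =", sigma_x)
```

Note that nothing in this computation refers to the value of the measurand: in the linear case G is a constant matrix, which is exactly why the uncertainty estimate is the same for each x̂.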

Incompleteness of the error budget
The arguments put forward above are based on the supposition that the error budget is complete.
The precision of a measurement is a well-behaved quantity in the sense that it is testable in a straightforward way: from at least three sets of collocated measurements of the same quantity, where each set is homogeneous with respect to the expected precision of its measurements, the variances of the differences provide unambiguous precision estimates.

A falsificationist (Popper, 1935) approach is more promising here. It follows the rationale that it will never be possible to prove that our assumptions on the bias of a measurement system are correct.
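The three-data-set precision test described above can be sketched as follows. The scheme assumes independent, purely random errors; the noise levels and sample size are invented.

```python
import numpy as np

# Sketch of the three-data-set precision test: for collocated measurement
# sets A, B, C of the same quantity with independent random errors,
#   var(A-B) = sA^2 + sB^2,  var(A-C) = sA^2 + sC^2,  var(B-C) = sB^2 + sC^2,
# which can be solved for the individual precisions. All numbers invented.

rng = np.random.default_rng(2)
truth = rng.normal(250.0, 10.0, size=50_000)          # e.g., a temperature field
A = truth + rng.normal(0.0, 1.0, size=truth.size)     # assumed precision 1.0
B = truth + rng.normal(0.0, 2.0, size=truth.size)     # assumed precision 2.0
C = truth + rng.normal(0.0, 0.5, size=truth.size)     # assumed precision 0.5

v_ab, v_ac, v_bc = np.var(A - B), np.var(A - C), np.var(B - C)
s_a = np.sqrt((v_ab + v_ac - v_bc) / 2.0)
s_b = np.sqrt((v_ab + v_bc - v_ac) / 2.0)
s_c = np.sqrt((v_ac + v_bc - v_ab) / 2.0)

print(f"estimated precisions: {s_a:.2f}, {s_b:.2f}, {s_c:.2f}")  # ~1.0, 2.0, 0.5
```

Differencing removes the common (unknown) truth, so the test never requires knowledge of the true values, only collocation of the three data sets.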
Instead, we estimate the bias as well as we can and use it as a best estimate of the bias until some test provides evidence that the estimate is incorrect. Such a test typically consists of the intercomparison of data sets from different measurement systems. If the bias between these data sets is larger than the combined systematic error estimates, at least one of the systematic error estimates is too low and has to be refuted. Further work is then needed to find out which of the measurement systems is most likely to underestimate its systematic error. Conversely, as long as the mean difference of the measurements of the same measurand can be explained by the combined estimate of the systematic errors of both measurement systems, the systematic error estimates can be maintained, although this is, admittedly, no proof of the correctness of the error estimates. But as long as severe tests as described above are executed and the error estimates cannot be refuted, it is rational to believe that they are sufficiently complete.
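One simple reading of this falsificationist test combines the two systematic error estimates in quadrature and compares them with the observed mean difference. The root-sum-square combination and all numbers below are assumptions made for illustration; a real intercomparison would also allow for the sampling uncertainty of the mean difference.

```python
import numpy as np

# Sketch of the falsificationist bias test described above: the mean difference
# between two collocated data sets is compared with the combined (root-sum-
# square) estimate of their systematic errors. All numbers invented.

def bias_test(a, b, sys_err_a, sys_err_b):
    """Return True if the systematic error estimates survive the test."""
    mean_diff = abs(np.mean(a - b))
    combined = np.hypot(sys_err_a, sys_err_b)  # sqrt(sa^2 + sb^2)
    return mean_diff <= combined

rng = np.random.default_rng(3)
truth = rng.normal(0.0, 5.0, size=10_000)
a = truth + 0.3 + rng.normal(0.0, 1.0, size=truth.size)  # true bias +0.3
b = truth - 0.2 + rng.normal(0.0, 1.0, size=truth.size)  # true bias -0.2

print(bias_test(a, b, sys_err_a=0.4, sys_err_b=0.4))  # True: estimates maintained
print(bias_test(a, b, sys_err_a=0.1, sys_err_b=0.1))  # False: at least one too low
```

A failed test refutes at least one of the two systematic error estimates; a passed test, as stated above, maintains them without proving them correct.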
We have mentioned above that the uncertainty concept depends on the acceptance of subjective probability in the sense of degree of rational belief. Without that, an error budget including systematic effects would make no sense, because systematic effects cannot easily be conceived as probabilistic in a frequentist sense; that is to say, the resulting error cannot be conceived as a random variable in a frequentist sense. Being forced to adopt the concept of probability as a degree of rational belief, it makes perfect sense to conceive, after consideration of the Bayes theorem (see Section ??), the distribution with expectation x̂ and covariance σ_x,total as the probability distribution which tells the rational agent the probability of any value to be the true value.

In this section we identify issues where GUM-2008 clashes with the needs of error or uncertainty estimation in the field of remote sensing of atmospheric constituents and temperature. These issues are: (1) since the atmospheric state varies quasi-continuously in space and time, the measurand is not well defined; (2) there are applications of atmospheric data where the total uncertainty estimate alone does not help; (3) Eq. 11 in GUM-2008 is in conflict with the causal arrow; and (4) some GUM-2008 interpretations commit one to Bayesianism, but some assumptions Bayesianism is based on can neither be logically inferred from generally accepted axioms nor traced back to observations. Since this type of measurement (see Rodgers, 2000, for details) is apparently out of the scope of GUM-2008, the latter is quite silent with respect to solutions to the problem of the characterization of measurements of quantities that are not well defined.
Broadening the scope and applicability of the GUM-2008 framework to include less than ideally defined measurands and measurements that demand inverse methods would significantly increase the value and utility of the GUM-2008 approach. Relevant recommendations on data characterization developed within the TUNER activity (von Clarmann et al., 2020) aim at helping to reach this goal.

The combined error
One of the positive aspects of GUM-2008 is that it breaks with the misguided concept of characterizing systematic errors with 'safe bounds' (Kaarls, 1980; Kacker et al., 2007; Bich, 2012). This concept was sometimes endorsed by error statisticians subscribing to frequentism. Within a frequentist concept of probability, a probabilistic treatment of systematic errors was not easily possible because, due to its systematic nature, a systematic error cannot easily⁹ be characterized by a frequency or probability distribution. The concept of subjective probability solves this problem. With the subjectivist's toolbox, it is no longer a problem to assign probability density functions, standard deviations and so forth when characterizing systematic errors. This possibility is a precondition for aggregating systematic and random errors to give the total error. GUM-2008, however, goes a step further and even denies the necessity to report random and systematic errors independently.

Here we have to raise severe objections.
Thus it cannot be interpreted as meaning that measurement errors are a purely frequentist concept.
In the community of remote sensing, both maximum likelihood and Bayesian retrieval schemes are in use. Depending on the measurement type and the anticipated use of the data, both have their pros and cons. In order to avoid making the rift between the Bayesian and non-Bayesian¹¹ parts of the community even worse, the TUNER consortium has decided not to make a recommendation as to which of these retrieval schemes is thought to be superior. It was considered more important to provide an adequate scheme for error or uncertainty estimation for any of these retrieval approaches.

As a consequence, it is not considered adequate to custom-tailor uncertainty reporting to the Bayesian philosophy.
E.g., a flat, thus apparently non-informative, velocity distribution goes along with a non-flat, thus informative, distribution of kinetic energy. Similarly, an equidistribution of droplet diameters goes along with a non-flat, thus informative, distribution of droplet volumes, etc. This is considered by some as an absurdity brought about by the concept of non-informative priors.
More generally speaking, the Bayesian philosophy relies on a couple of unwarranted assumptions, e.g., the likelihood principle and the indifference principle. The proof of the former has been challenged (Evans, 2013; Mayo, 2013), and the latter has been criticized as not deducible from any accepted axioms (quoted after White, 2007). Thus a pro or contra Bayesian decision is a purely philosophical decision, and it does not seem adequate to make such a decision generally binding.

¹¹ We challenge the dichotomy 'Bayesian vs. frequentist'. Not every non-frequentist is a whole-hearted Bayesian; not all objective probabilities are frequentist (see, e.g., Popper, 1959). Not everybody who endorses a subjective concept of probability accepts Bayesian tenets on confirmation theory and test theory. Further, subjectivist and objectivist probability concepts are not necessarily in contradiction but can be bridged (Lewis, 1980).
While it is fully agreeable that the concept of error reporting has long relied and still relies on a subjective, i.e., information-dependent concept of probability 11 , this does not commit one to accept Bayesianism in full.

We concede that Bayesians and frequentists may use the error or uncertainty estimates in a different way. In situations where a hypothesis is to be tested on the basis of measurement data, the frequentist would rely on Fisherian p-values or Pearsonian rejection limits or a mixture of these approaches, while the Bayesian would assign a total probability to the hypothesis. The underlying error or uncertainty estimates, however, are required to support both approaches. We think that a quantity for characterizing the error or uncertainty of a direct or indirect measurement which commits the user to either a frequentist or a Bayesian use of the measurements is of little use. Reference to Bayesianism alone cannot explain the claimed difference between 'error' and 'uncertainty'.
The denial that a valid connotation of the term 'error' is a statistical characterization of the difference between a measured or estimated value and the true value of the measurand would be an attempt to brush away centuries of scientific literature. This is, however, a matter of stipulation or convention and thus beyond the reach of a scientific argument. We thus take GUM-2008 to be conceding that both concepts, error analysis and uncertainty assessment, aim at providing a statistical characteristic of the imperfectness of a measurement or an estimate. We understand GUM-2008 in the sense that the problem of the error concept is that it conceives the estimated error as a statistical measure of the difference between the measured or estimated value and the true value. Since the true value is unknowable, according to GUM-2008 the term 'error' can neither be defined nor can its value be known.
It has been shown that the problem of the unknown true value of the measurand is a problem for the definition of terms like 'error' or 'uncertainty' only if the concept of an operational definition is pursued. This concept, however, has its own problems and is by no means without alternative.
As soon as the concept of an operational definition is given up, problems associated with defining the estimated error as a statistical estimate of the difference between the measurement or estimate and the true value of the measurand disappear, and the remaining problem is only one of assigning a reasonable value to this now well-defined quantity.
Since GUM-2008 did not provide many reasons why, in the context of indirect measurements, the error allegedly cannot be estimated without knowledge of the true value, or why an uncertainty distribution does not tell us anything about the true value, we list the most obvious ones one could put forward to bolster this claim. These are the problem of the base rate fallacy, the problem of non-linearity, and the problem that one can never know that the error budget is complete. The problem of the base rate fallacy can be solved either by performing a Bayesian inversion or by conceiving the resulting distribution as a likelihood distribution. Astonishingly enough, GUM-2008's "dispersion or range of values that could be reasonably attributed to the measurand" is determined without explicit consideration of prior probabilities and thus cannot be interpreted in terms of posterior probability. The problem of nonlinearity can be solved by the error scientist either by assuming that the estimate is close enough to the true value and linearizing around this point, or by Monte-Carlo-like studies. A GUM-oriented scientist, who has to avoid referring to the true value, is at a loss in the case of nonlinearity, because any estimate of the uncertainty of the estimate will be correct only when evaluated at the true value or an approximation of it. The problem of the unknown completeness of the error budget can be tackled by performing comparisons between measurement systems. While this will never provide a positive proof of the completeness of the error budget, it still justifies rational belief in its completeness, and if error or uncertainty distributions are conceived as subjective probabilities in the sense of degrees of rational belief, this is good enough.
In summary, if (a) our reading of GUM-2008 is correct in the sense that traditional error analysis can deal with a statistical quantity and that the key difference between the 'error' and 'uncertainty' concepts is their relation to the true value of the target quantity, (b) our list of arguments against the error concept is complete, and finally (c) our refutation of these arguments is conclusive, then the claim that the 'error' concept and the 'uncertainty' concept are fundamentally different is untenable.