Articles | Volume 7, issue 7
Research article
29 Jul 2014

Towards a consistent eddy-covariance processing: an intercomparison of EddyPro and TK3

G. Fratini and M. Mauder

Abstract. A comparison of two popular eddy-covariance software packages is presented, namely, EddyPro and TK3. Two approximately month-long test data sets were processed, representing typical instrumental setups (i.e., CSAT3/LI-7500 above grassland and Solent R3/LI-6262 above a forest), and the resulting fluxes and quality flags were compared. Achieving satisfactory agreement and understanding the residual discrepancies required several iterations and interventions of different natures, ranging from simple software reconfiguration to actual code modifications. In this paper, we document our comparison exercise and show that the two software packages can provide excellent agreement when properly configured. Our main aim, however, is to stress the complexity of performing a rigorous comparison of eddy-covariance software. We show that distinguishing actual discrepancies in the results from inconsistencies in the software configuration requires deep knowledge of both software packages and of the eddy-covariance method. In some instances, it may even be beyond the reach of an investigator who does not have access to, and full knowledge of, the source code. As the developers of EddyPro and TK3, we could discuss the comparison at all levels of detail, and this proved necessary to achieve a full understanding. As a result, we suggest that researchers are more likely to obtain comparable results when using EddyPro (v5.1.1) and TK3 (v3.11) – at least with the settings presented in this paper – than when using any other pair of eddy-covariance software packages that have not undergone a similar cross-validation.

As a further consequence, we also suggest that, with the aim of ensuring the consistency and comparability of centralized flux databases, and for the confident use of eddy fluxes in synthesis studies at regional, continental and global scales, researchers rely only on software that has been extensively validated in documented intercomparisons.