Hydrology and Earth System Sciences, an interactive open-access journal of the European Geosciences Union

Volume 14, issue 12
Hydrol. Earth Syst. Sci., 14, 2545–2558, 2010
https://doi.org/10.5194/hess-14-2545-2010
© Author(s) 2010. This work is distributed under
the Creative Commons Attribution 3.0 License.

Special issue: Advances in statistical hydrology

Research article | 13 Dec 2010

Why hydrological predictions should be evaluated using information theory

S. V. Weijs, G. Schoups, and N. van de Giesen
  • Section Water Resources, Delft University of Technology, Delft, The Netherlands

Abstract. Probabilistic predictions are becoming increasingly popular in hydrology. Given the topical debate on uncertainty analysis in hydrology, methods to test such predictions are equally important, and in the special case of hydrological forecasting there is still discussion about which scores to use for evaluation. In this paper, we propose information theory as the central framework for evaluating predictions. From this perspective, we hope to shed some light on what verification scores measure and should measure. We start from the "divergence score", a relative entropy measure that was recently found to be an appropriate measure of forecast quality. An interpretation of a decomposition of this measure provides insight into the additive relations between climatological uncertainty, correct information, wrong information and remaining uncertainty. When the score is applied to deterministic forecasts, it follows that these increase uncertainty to infinity. In practice, however, deterministic forecasts tend to be judged far more mildly and are widely used. We resolve this paradox by proposing that deterministic forecasts either are implicitly probabilistic or are implicitly evaluated with an underlying decision problem or utility in mind. We further propose that calibration of models representing a hydrological system should be based on information-theoretical scores, because this allows all information to be extracted from the observations and avoids learning from information that is not there. Calibration based on maximizing utility for society trains an implicit decision model rather than the forecasting system itself. This inevitably results in a loss or distortion of information in the data and a higher risk of overfitting, possibly leading to less valuable and informative forecasts. We demonstrate this with an example.
The final conclusion is that models should preferably be explicitly probabilistic and calibrated to maximize the information they provide.
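The abstract's central claim, that the divergence score assigns an infinite penalty to a wrong deterministic forecast, can be illustrated with a minimal sketch. This is not the authors' code; it assumes only that, for a single categorical observation, the relative entropy from the forecast distribution to the (degenerate) observation distribution reduces to the negative log of the probability the forecast assigned to the outcome that occurred.

```python
import math

def divergence_score(forecast, observed_idx):
    """Relative entropy from a categorical forecast distribution to the
    degenerate distribution putting all mass on the observed outcome.
    This reduces to -log2 of the probability assigned to what happened,
    measured in bits of remaining uncertainty."""
    p = forecast[observed_idx]
    if p == 0.0:
        return math.inf  # a confident miss is penalized infinitely
    return -math.log2(p)

# A probabilistic forecast hedges and pays a finite price when the
# second category (index 1) is observed:
prob_forecast = [0.6, 0.3, 0.1]  # e.g. dry / light rain / heavy rain
print(divergence_score(prob_forecast, 1))  # -log2(0.3), about 1.74 bits

# A deterministic forecast puts all mass on one category; if it is
# wrong, the score diverges to infinity:
det_forecast = [1.0, 0.0, 0.0]
print(divergence_score(det_forecast, 1))  # inf
```

This makes the paradox concrete: under a strictly information-theoretical score, any deterministic forecast that is ever wrong scores infinitely badly, so the milder judgments it receives in practice must rest on an implicit probabilistic reading or an implicit utility.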
