the Creative Commons Attribution 4.0 License.
A data-centric perspective on the information needed for hydrological uncertainty predictions
Abstract. Uncertainty estimates are fundamental for assessing the reliability of predictive models in hydrology. We use the framework of Conformal Prediction to investigate the impact of temporal and spatial information on uncertainty estimates in hydrological predictions. Integrating recent information significantly improves overall uncertainty predictions, even with substantial gaps between updates. While local information yields good results on average, it proves insufficient for peak flow predictions. Incorporating global information improves the accuracy of peak flow bounds, corroborating findings from related studies. Overall, the study underscores the importance of continuous data updates and the integration of global information for robust and efficient uncertainty estimation.
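The core idea of (split) Conformal Prediction referenced in the abstract can be sketched in a few lines: hold out a calibration set, compute nonconformity scores (here, absolute residuals of a point predictor), and take a finite-sample-corrected quantile as the interval half-width. The sketch below is illustrative only and does not reproduce the authors' exact setup; the stand-in model `predict` and the synthetic calibration data are hypothetical.

```python
import numpy as np

def predict(x):
    # Stand-in point predictor (hypothetical); in the paper's setting
    # this would be the trained LSTM streamflow model.
    return 2.0 * x

# Synthetic calibration data (illustrative assumption, not real streamflow).
rng = np.random.default_rng(0)
x_cal = rng.uniform(0, 10, size=500)
y_cal = 2.0 * x_cal + rng.normal(0, 1, size=500)

# Nonconformity scores: absolute residuals on the held-out calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample-corrected quantile for miscoverage alpha = 0.1
# (i.e. 90% marginal coverage under exchangeability).
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Conformal prediction interval for a new input.
x_new = 5.0
lower, upper = predict(x_new) - q, predict(x_new) + q
```

Under exchangeability of calibration and test points, intervals built this way cover the true value with probability at least 1 − alpha, which is what makes the framework attractive for hydrological uncertainty bounds.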
Status: open (until 09 May 2024)
RC1: 'Comment on hess-2024-64', Carlo Albert, 11 Apr 2024
This is a very interesting contribution, which has the potential of setting a new standard for error modelling in hydrology. I only have a few suggestions on how to improve the presentation (see below). Apart from that, I have a conceptual question: the authors suggest splitting the data into training data for the LSTM model and calibration data for the error model. Apart from the fact that I find this terminology a bit strange and potentially confusing in this context, I wonder whether it would be possible to train the parameters of both the model and the error model jointly. In traditional, likelihood-based probabilistic modelling this is common practice and simply called "calibration".
Another interesting question could be: How do the error bars respond to changes in input variables? E.g. what happens when I increase the rain while keeping all the other variables fixed? This would not only be a sanity check of the data-driven model (do the errors respond in a reasonable way?) but it could also lead to a better understanding of what drives the errors.
Regarding the presentation, I think that a bit more detail and context would make the manuscript more accessible, especially for readers who have never encountered these techniques before. For instance, in eq. (1), one is left to wonder what the dimensions of the objects are. Furthermore, it is not clear how the training works until one reads on to eq. (4). It would also be interesting to relate the MHN approach to the very popular attention mechanism of transformers and to provide a bit more context as to why this is a reasonable approach. When it comes to the presentation of the results, some typical hydrographs with error bands would offer visual support for the summary tables.
Citation: https://doi.org/10.5194/hess-2024-64-RC1