Articles | Volume 29, issue 23
https://doi.org/10.5194/hess-29-6811-2025
Research article | Highlight paper | 01 Dec 2025

From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction

Jiangtao Liu, Chaopeng Shen, Fearghal O'Donncha, Yalan Song, Wei Zhi, Hylke E. Beck, Tadd Bindas, Nicholas Kraabel, and Kathryn Lawson

Viewed

Total article views: 2,552 (including HTML, PDF, and XML)
  • HTML: 2,079
  • PDF: 440
  • XML: 33
  • Total: 2,552
  • Supplement: 118
  • BibTeX: 41
  • EndNote: 51
Cumulative views and downloads (calculated since 25 Apr 2025)

Viewed (geographical distribution)

Total article views: 2,552 (including HTML, PDF, and XML). Of these, 2,552 have a defined geographic origin and 0 are of unknown origin.
Latest update: 19 Dec 2025
Executive editor
Machine learning is now widely used in hydrological research, but benchmarking of models across applications has been lacking. This paper addresses the question of which machine learning model should be used for which application, and why.
Short summary
Using global and regional datasets, we compared attention-based models and Long Short-Term Memory (LSTM) models for predicting hydrologic variables. Our results show that LSTM models perform better on simpler tasks, whereas attention-based models perform better in complex scenarios, offering insights for improved water resource management.