Articles | Volume 29, issue 23
https://doi.org/10.5194/hess-29-6811-2025
Research article | Highlight paper | 01 Dec 2025

From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction

Jiangtao Liu, Chaopeng Shen, Fearghal O'Donncha, Yalan Song, Wei Zhi, Hylke E. Beck, Tadd Bindas, Nicholas Kraabel, and Kathryn Lawson

Model code and software

From RNNs to Transformers: benchmarking deep learning architectures for hydrologic prediction, J. Liu and C. Shen, https://doi.org/10.5281/zenodo.15852145

Executive editor
Machine learning is now widely used in hydrological research, but benchmarking of models across applications has been lacking. This paper addresses which machine learning model should be used for which application, and why.
Short summary
Using global and regional datasets, we compared attention-based models and Long Short-Term Memory (LSTM) models for predicting hydrologic variables. Our results show that LSTM models perform better on simpler tasks, whereas attention-based models perform better in complex scenarios, offering insights for improved water resource management.