Volume 29, issue 19
https://doi.org/10.5194/hess-29-4761-2025
Research article | 30 Sep 2025

Can causal discovery lead to a more robust prediction model for runoff signatures?

Hossein Abbasizadeh, Petr Maca, Martin Hanel, Mads Troldborg, and Amir AghaKouchak


Interactive discussion

Status: closed

Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
  • RC1: 'Comment on hess-2024-297', Anonymous Referee #1, 05 Dec 2024
  • RC2: 'Comment on hess-2024-297', Anonymous Referee #2, 03 Jan 2025

Peer review completion

AR: Author's response | RR: Referee report | ED: Editor decision | EF: Editorial file upload
ED: Reconsider after major revisions (further review by editor and referees) (18 Feb 2025) by Manuela Irene Brunner
AR by Hossein Abbasizadeh on behalf of the Authors (28 Mar 2025)
ED: Referee Nomination & Report Request started (31 Mar 2025) by Manuela Irene Brunner
RR by Anonymous Referee #1 (21 Apr 2025)
RR by Anonymous Referee #3 (28 Apr 2025)
ED: Reconsider after major revisions (further review by editor and referees) (05 May 2025) by Manuela Irene Brunner
AR by Hossein Abbasizadeh on behalf of the Authors (10 Jun 2025)
ED: Referee Nomination & Report Request started (17 Jun 2025) by Manuela Irene Brunner
RR by Anonymous Referee #3 (16 Jul 2025)
ED: Publish as is (18 Jul 2025) by Manuela Irene Brunner
AR by Hossein Abbasizadeh on behalf of the Authors (21 Jul 2025)
Short summary
Here, we represented catchments as networks of variables connected by cause-and-effect relationships. By comparing the performance of statistical and machine learning methods with and without causal information when predicting runoff signatures, we showed that causal information can enhance model robustness by reducing the accuracy drop between the training and testing phases, improving model interpretability, and mitigating overfitting, especially with small training samples.
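The comparison described in the summary can be illustrated with a small, self-contained sketch. The Python snippet below is not the authors' pipeline or data: it uses an assumed toy data-generating process (precipitation driving soil moisture, which drives a runoff signature, plus a vegetation index that is only spuriously associated with runoff in the training catchments) and scikit-learn's RandomForestRegressor to compare a model given all predictors against one restricted to causally upstream predictors. The variable names, coefficients, and the distribution shift between training and test catchments are all assumptions; the numbers are illustrative only.

```python
# Minimal sketch (assumed toy setup, not the paper's method or data):
# compare train/test accuracy of a regressor using all predictors versus
# only the causally upstream ones, under a shift that breaks a spurious link.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

def simulate(n, rng, veg_coupled=True):
    """Toy catchment data: precipitation -> soil moisture -> runoff signature.
    The vegetation index is an effect of runoff in the training catchments,
    but is decoupled from it in the test catchments (a distribution shift)."""
    precip = rng.gamma(2.0, 1.5, n)
    soil = 0.6 * precip + rng.normal(0.0, 0.3, n)
    runoff = 0.8 * soil + rng.normal(0.0, 0.3, n)
    if veg_coupled:
        veg = 0.7 * runoff + rng.normal(0.0, 0.2, n)
    else:
        veg = rng.normal(0.0, 0.5, n)
    return np.column_stack([precip, soil, veg]), runoff

# Small training sample (where overfitting matters most) and a shifted test set.
X_train, y_train = simulate(80, rng, veg_coupled=True)
X_test, y_test = simulate(1000, rng, veg_coupled=False)

predictor_sets = {
    "all predictors (incl. spurious)": [0, 1, 2],  # precip, soil, veg
    "causally upstream only": [0, 1],              # precip, soil
}

for name, cols in predictor_sets.items():
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X_train[:, cols], y_train)
    r2_tr = r2_score(y_train, model.predict(X_train[:, cols]))
    r2_te = r2_score(y_test, model.predict(X_test[:, cols]))
    print(f"{name}: train R2 = {r2_tr:.2f}, test R2 = {r2_te:.2f}, "
          f"gap = {r2_tr - r2_te:.2f}")
```

In this toy setup the causally restricted model typically shows a smaller drop between training and testing scores. In the study itself the causal structure is learned from data with causal discovery methods rather than assumed, but the train/test comparison follows the same logic.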