Articles | Volume 26, issue 1
© Author(s) 2022. This work is distributed under the Creative Commons Attribution 4.0 License.
Parsimonious statistical learning models for low-flow estimation
- Final revised paper (published on 12 Jan 2022)
- Preprint (discussion started on 22 Sep 2021)
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
RC1: 'Comment on hess-2021-481', Anonymous Referee #1, 15 Oct 2021
- AC1: 'Reply on RC1', Johannes Laimighofer, 08 Nov 2021
RC2: 'Comment on hess-2021-481', Kolbjørn Engeland, 29 Oct 2021
- AC2: 'Reply on RC2', Johannes Laimighofer, 08 Nov 2021
Peer review completion
AR: Author's response | RR: Referee report | ED: Editor decision
ED: Publish subject to minor revisions (further review by editor) (19 Nov 2021) by Rohini Kumar
AR by Johannes Laimighofer on behalf of the Authors (26 Nov 2021)
ED: Publish as is (27 Nov 2021) by Rohini Kumar
I have read the manuscript with interest. It is comprehensive, clearly written, and straightforward to implement, and the technical part is correct. I have some minor comments:
- Throughout the manuscript, the term "parsimonious" characterizes the statistical learning models because they reduce the set of predictor variables to the important ones. In statistical theory, however, parsimony mainly refers to the number of model parameters, which can still be large, e.g. in a random-forest or boosting-based model fitted with few predictor variables.
- It is self-contradictory to state that non-linear relationships can be captured by linear learning models (lines 14, 15); in fact, the term "non-linear relationship" is not well defined. A more accurate classification of models considers their flexibility and interpretability: more flexible models can capture relationships better, albeit at the cost of interpretability.
- The classification of "statistical" versus "machine" learning models is not clear either. For instance, Support Vector Regression or random forests can also be considered statistical learning models.
- The claim that studies mostly find non-linear models outperforming linear ones could be relaxed. The literature includes studies reporting the opposite, some of which appear in the manuscript's reference list.
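The referee's first point, that reducing predictor variables does not make a tree-ensemble model parsimonious in the parameter-count sense, can be made concrete with a rough tally: a linear model with p predictors has p + 1 parameters, whereas every tree in an ensemble stores a split variable and threshold at each internal node and a value at each leaf, regardless of how few predictors it draws on. A minimal sketch (the counting convention below is an illustrative assumption, not taken from the manuscript under review):

```python
# Illustrative parameter counts: linear model vs. tree ensemble.
# Counting convention (an assumption for illustration): each internal
# node of a binary tree stores 2 values (split variable + threshold),
# each leaf stores 1 value (the prediction).

def linear_param_count(n_predictors: int) -> int:
    """One coefficient per predictor plus an intercept."""
    return n_predictors + 1

def tree_ensemble_param_count(n_trees: int, leaves_per_tree: int) -> int:
    """A binary tree with L leaves has L - 1 internal nodes."""
    internal = leaves_per_tree - 1
    per_tree = 2 * internal + leaves_per_tree
    return n_trees * per_tree

# Linear model on 5 predictors vs. a 500-tree forest on the same
# 5 predictors with 32 leaves per tree.
print(linear_param_count(5))               # 6
print(tree_ensemble_param_count(500, 32))  # 47000
```

Even with only five predictor variables, the ensemble carries tens of thousands of fitted values, which is the sense in which the referee distinguishes variable selection from statistical parsimony.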