the Creative Commons Attribution 4.0 License.
How can observational data be used to improve the modeling of human-managed reservoirs in large-scale hydrological models?
Abstract. Human-managed reservoirs alter water flows and storage, impacting the hydrological cycle. However, modeling reservoir outflow and storage is challenging because it depends on human decisions, and there is often limited access to data on inflows, outflows, storage, or operational rules. Consequently, large-scale hydrological models either exclude reservoir operations or use calibration-free algorithms for modeling reservoir dynamics. Nowadays, remotely-sensed information on reservoir storage anomalies is a potential source for calibrating reservoir operation algorithms. However, it is not yet clear what impact calibration against storage anomalies has on simulated reservoir outflow and absolute storage. In this study, we introduce two reservoir operation algorithms that require calibration: the Scaling Algorithm (SA) and the Weighting Algorithm (WA). These algorithms were implemented in the global hydrological model WaterGAP and compared with the widely used Hanasaki algorithm with both default (DH) and calibrated (CH) parameter values. We calibrated all three algorithms against outflow, storage, storage anomalies, and estimated storage (based on storage changes and reservoir capacity) observed for 100 reservoirs in the USA to understand the information content of the observation variables. As expected, calibration against all three types of storage-related variables improved the storage simulation. Storage simulation using DH resulted in only 16 (15) skillful simulations (where the Kling–Gupta Efficiency with a trend component > -0.73) out of 100 reservoirs. In contrast, calibration against storage anomalies resulted in 64 (39), 68 (45), and 66 (45) skillful storage simulations for CH, SA, and WA, respectively, during the calibration (validation) period. However, calibration against storage-related variables barely improved the performance of the outflow simulation, which strongly depends on the accuracy of the simulated inflow. 
In fact, using observed inflow instead of simulated inflow has a more significant effect on improving outflow simulation than calibration, whereas the opposite is true for storage simulation. We found that the default parameters of the Hanasaki algorithm rarely matched the calibrated parameters, highlighting the benefit of calibration. Moreover, taking into account downstream water demand in the reservoir operation algorithm does not necessarily improve modeling performance, due to the high uncertainty in demand estimation. Overall, the SA algorithm outperforms the other algorithms. Therefore, to improve the modeling of reservoir storage and outflow, we recommend calibrating the SA reservoir operation algorithm against remote sensing-based storage anomalies and improving reservoir inflow simulation.
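The abstract's skill threshold refers to a Kling–Gupta Efficiency with an added trend component. For orientation, here is a minimal sketch of the classic three-term KGE (Gupta et al., 2009); the study's trend-extended variant is not reproduced here, and the threshold of -0.73 applies to that variant, not to this form.

```python
import numpy as np

def kge(sim, obs):
    """Classic Kling-Gupta Efficiency (Gupta et al., 2009).

    Note: the paper uses a KGE variant with an additional trend
    component; this sketch shows only the standard three terms.
    """
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
    beta = sim.mean() / obs.mean()         # bias ratio
    alpha = sim.std() / obs.std()          # variability ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (beta - 1.0) ** 2 + (alpha - 1.0) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(kge(obs, obs))  # a perfect simulation scores 1.0
```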
Status: final response (author comments only)
RC1: 'Comment on hess-2024-291', Anonymous Referee #1, 02 Dec 2024
This paper investigates whether storage anomalies, derived from remotely sensed data, can be used to enable the calibration of reservoir modules within large-scale hydrological models. This would improve the representation of reservoirs within hydrological models for places where storage time series are not readily available (note this is never said explicitly in the abstract or the introduction).
It uses 4 simple release rules (3 of which can be calibrated) from the well-known global hydrological model WaterGAP, and 100 reservoirs from a comprehensive database of United States reservoirs for which observed storage is also available. It compares storage anomalies, as the basis for calibration, with three other candidate time series: observed inflows, observed outflows, and estimated storage.
The idea is worthwhile and the paper meets its main objective (TLDR: yes, one can use storage anomalies). Yet, the messaging and presentation make it difficult to follow from end to end, and in particular, the focus on this main contribution is uneven throughout. The beginnings of Sections 4 and 5 suggest instead that the focus was really on comparing reservoir release rules… but this in itself is a weaker contribution: there are many other release rules out there, so why should we focus on these? To better reach the hydrological community beyond WaterGAP users, it would be best to instead show how the setup (different rules, observed vs. simulated inflows) demonstrates that storage anomalies are a good choice of data. The ability to obtain these for most reservoirs worldwide means the rules presented in the paper can indeed be calibrated and are of value (in other words, the rules are basic and simple to calibrate, and the existence of storage anomalies to carry out that calibration gives them a value, relative to the rest of the literature, that they would not have otherwise). This will warrant rewriting quite a few bits.
Key result: the key result really is whether storage anomalies fare well vs. the other data the reservoir model can be calibrated / validated against, under different conditions (e.g., observed or simulated inflows). However, this simple key statement is absent from the abstract, and this means the other statements on results seem disconnected from the purported aim.
Description of methods. Several points there.
First, there is no clear and concise explanation of how storage anomalies datasets are constructed. Similarly, it is never clear what the point of estimated storage is: it is constructed from monthly storage observations, is this something as readily available as storage anomalies? This should be added to 2.3 and 2.4, along with examples (maybe from the same 3 reservoirs from the results?) of how observed storage, storage anomalies and estimated storage compare for the U.S.
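On the quantity at issue in this point: a storage anomaly is commonly computed as the deviation of storage from a long-term mean. The sketch below is purely illustrative (hypothetical helper name; actual remote-sensing products may use a different reference period or datum):

```python
import numpy as np

def storage_anomaly(storage):
    """Storage anomaly as deviation from the record-mean storage.

    Illustrative only: operational Earth-observation products may
    define the reference period or datum differently.
    """
    s = np.asarray(storage, dtype=float)
    return s - np.nanmean(s)  # anomalies sum to ~0 over the record

print(storage_anomaly([10.0, 12.0, 14.0]))  # -> [-2.  0.  2.]
```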
Second, the rationale for selecting 100 reservoirs is not entirely clear: why use geographical spacing on a 0.5 x 0.5 degree grid? That does not prevent selecting reservoirs that lie upstream or downstream of one another.
Third, the explanation of the release rules would warrant separate paragraphs / sub-sections for each. As written, things are abstract and quite difficult to follow. SA and WA should reference the original paper(s) that introduced them. To clarify: do these rules use demand estimates to adjust releases the way H06 does? I would also urge the authors to better explain the practical differences between the rules, e.g., with diagrams showing release as a function of storage.
Fourth, a little more detail on calibration would be welcome. In practice, do you simulate N parameter sets and select the one with the highest KGE? If so, how many parameter sets do you try? If not, what do you do?
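The kind of procedure the reviewer asks about could, for instance, be a simple random search: sample parameter sets within bounds and keep the best-scoring one. A minimal sketch under that assumption (placeholder names and a simple 1 - nRMSE objective, not the paper's actual calibration scheme or objective):

```python
import numpy as np

def random_search_calibration(simulate, obs, bounds, n_sets=2000, seed=0):
    """Hypothetical calibration sketch: draw n_sets parameter vectors
    uniformly within bounds, keep the one maximising the objective.

    The objective here is 1 - nRMSE for brevity; the paper calibrates
    against a KGE variant instead.
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, dtype=float)

    def score(sim):
        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        return 1.0 - rmse / (obs.max() - obs.min())

    best_score, best_params = -np.inf, None
    for _ in range(n_sets):
        params = np.array([rng.uniform(lo, hi) for lo, hi in bounds])
        s = score(simulate(params))
        if s > best_score:
            best_score, best_params = s, params
    return best_params, best_score

# Toy usage: recover the slope of a linear "reservoir model".
x = np.linspace(0.0, 1.0, 20)
obs = 2.0 * x
params, s = random_search_calibration(lambda p: p[0] * x, obs, [(0.0, 4.0)])
```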
Results. Several points to consider.
SA vs. WA: the evidence for SA being better does not seem very robust, as (unless I have missed it) there is little evidence that the small nRMSE difference in favor of SA is statistically significant. I would instead present SA and WA as equivalent throughout the paper (starting with the abstract).
Figures 3 and 4: whilst it is great to see examples of reservoirs, the size of the figures and the choice of color / line style make this very difficult to read and understand. Please (1) use full page width (and even better, landscape page orientation) for each panel, and (2) adjust line width and line style, rather than using color codes that are not inclusive.
Citation: https://doi.org/10.5194/hess-2024-291-RC1
- AC1: 'Reply on RC1', Seyed-Mohammad Hosseini-Moghari, 06 Feb 2025
RC2: 'Comment on hess-2024-291', Anonymous Referee #2, 04 Dec 2024
The paper's goal is to improve reservoir representation in global hydrological models, specifically the WaterGAP model. It presents a comparison of different reservoir operation algorithms calibrated with different variants. One reservoir algorithm (DH, CH) uses the “Hanasaki” approach, and the other two use a new approach (SA, WA). Calibration is done using four different objective functions with storage anomaly from remote sensing data.
I find the paper interesting. It does not show big improvements using the different reservoir algorithms and/or different objective functions, but it is solid work testing the hypothesis with different objective functions. I appreciate the summary of “findings” in the conclusion part.
A few points here:
The title “How can observational data be used to improve the modeling of human-managed reservoirs in large-scale hydrological models?” is a bit long and not to the point. First, I would remove the human-managed part. Maybe something like: Using observed data and reservoir operation algorithms to improve the representation of reservoirs in large-scale hydrological models.
2.3 Data. The data for storage, outflow, and estimated outflow from ResOpsUS and GRanD is well described. But the description of the storage anomaly data from EO is lacking some information, e.g., some description of the SWOT mission, where exactly the data comes from, and what the post-processing steps from SWOT to storage anomalies are.
Calibration: The paper discusses outflow, storage, storage anomaly, and estimated storage as “observations” to calibrate against using a KGE that includes a trend component. But which parameters are calibrated in each algorithm? How many parameters are calibrated? Is the number the same for all algorithms?
Line 187, Eq. 6: You mentioned C in an earlier equation, but please state again that C is the storage capacity (as in line 151).
Line 332, Fig. 1: The part between 0 and 1 (or -0.73 and 1) is the interesting part. Maybe you can skip the values < -0.73, sum them up, or display them differently.
Fig. 3/4: The calibration against outflow (panels b, d, f, h) is hard to distinguish. Maybe move that part to the appendix, or show the differences from the observed series.
Citation: https://doi.org/10.5194/hess-2024-291-RC2
- AC2: 'Reply on RC2', Seyed-Mohammad Hosseini-Moghari, 06 Feb 2025
Viewed
- HTML: 274
- PDF: 58
- XML: 56
- Total: 388
- Supplement: 75
- BibTeX: 12
- EndNote: 17