Using global reanalysis and rainfall-runoff model to study multi-decadal variability in catchment hydrology at the European scale
Abstract. This study explores the ability of global reanalyses to simulate catchment hydrology at the European scale using a conceptual rainfall–runoff model. We used two reanalyses, NOAA 20CR and ERA-20C, to simulate daily streamflows for over 2000 catchments since the 1840s. Our findings show that both reanalyses perform well, particularly for mean flows, with simulation performance improving as catchment size increases, though challenges remain for Mediterranean and snow-dominated regions. Additionally, the study highlights significant multi-decadal variations in streamflow, revealing alternating wet and dry periods across Europe. These findings provide valuable insights into long-term hydrological trends and offer a useful framework for understanding future changes in both water resources and hydrological extremes, such as floods, under climate variability.
Status: final response (author comments only)
RC1: 'Comment on hess-2024-336', Anonymous Referee #1, 02 Jan 2025
This paper presents an analysis of the century-long ERA-20C and NOAA-20CR reanalysis products to simulate flows and long-term trends that could otherwise be missed when looking at shorter time periods. I think the study is of interest and proposes an alternative and convincing argument that the streamflows alternate between wet and dry periods over long (i.e. decadal or longer) time periods. While I found the paper to be well written and interesting, I also have some issues that I think would need to be solved before recommending acceptance for publication.
1. The title should reflect the fact that the reanalyses are not just global reanalyses. They are century-long reanalyses, which is the real kicker and novelty.
2. Line 63: Define what the 20CR and 20C mean (20th century reanalysis, please add)
3. Line 64: "eight-times daily" --> Better to say 3-hourly.
4. Line 81: "such modeling" --> "such a modelling"
5. I see the resolution is a key issue in this paper. For starters, the MSWEP+ERA5 combination is mismatched, and the authors should have considered ERA5-Land to match the MSWEP precipitation resolution (0.1° each). Furthermore, to preserve coherence, ERA5-Land precipitation could have been used instead of MSWEP. I think a justification of this should be provided as well as an impact analysis (not redoing runs, but perhaps contextualizing with respect to the size of the catchments?).
6. In a similar vein, catchment sizes were restricted to above 100 km², whereas the ERA and NOAA datasets cover swaths of land between 10,000 and 17,000 km², which is a huge mismatch, especially given the strong elevation gradients across large parts of Europe. Was downscaling not an option? It seems that, at least with the altitude/elevation and some background information, it would be possible to attempt a rudimentary approach. I think the authors should consider this in their next version, or at least discuss it in more detail, because it is a key element of the paper.
7. KGE version used is the 2009 version, whereas the community has moved on to the 2012 modified KGE. While I have no problem with this (it is still a good metric), it would be good to explain that this was an editorial choice.
8. Line 198: "manipulated" has a negative connotation. I would suggest: "was implemented".
9. It seems the first boxplot in Figure 2a-c is not colored in red as it is supposed to be? Or is it some other variable, given there are 9 boxes and only 8 legend entries? The same seems true for the two other boxplots as well. Please clarify.
10. Lines 248 and 256: p-value can be set to 0 when it is basically machine precision (2.2e-16).
11. Figure 5: One problem here is that the top rows will be a significant subset of the bottom rows, so the results are not necessarily comparable. For example, imagine that 50% of the catchments only have validation data on the exact 1982-1995 period. That would mean that those basins' scores would be exactly the same in the bottom plots even though they should cover a longer period, but it cannot be interpreted that way. I would suggest identifying a series of catchments that have at least 50 years of validation data and keeping those independent for the long-duration tests.
12. Lines 275-276: I think this is useful information that should be shown as it would show another mode/dimension to the problem that filters out random perturbations and focuses on the longer-term patterns.
13. General comment: It would be good to have a series of simulations for which only the ones that obtained a KGE in calibration/validation above a certain quality threshold are preserved, e.g. KGE > 0.7 for NOAA/NOAA and ERA/ERA (not really useful to do ERA5+MSWEP/NOAA(ERA)). This would ensure results are not negatively affected by poorly modelled basins/models, as we have some that have quite low KGE values that contribute to the detailed variability results and are perhaps not as trustworthy.
14. Figure 6: Not clear why the number of catchments changes for each metric. I would assume they would be the same from one metric to the next since they are only excluded if they don't have 30 years of observations?
15. Figure 6: I would show the boxplots in their entirety here. Not much use limiting to 0.2. At least to 0.0.
16. Lines 310-311: There have been numerous studies on this previously, I think it would be good to reference a few to show that your results are in line with the current literature.
17. Line 321: "of this work" --> "for this work"
18. Lines 334-335: But human intervention, forestry, agriculture, and urbanization over the past ~120 years have also definitely had an impact on hydrological response.
19. Figure 11: I think there is a problem here, all three figures are exactly the same.
20. Lines 407-408: But at the same time, the MSWEP + ERA5 dataset would still outperform the others if it had been used for calibration and evaluation (if it were available on the same periods), so I am not sure this point holds. It is true that consistency is important, but perhaps the gain would be much more if the resolution was also highly increased.
21. General comment: How are NOAA and ERA related? i.e. I imagine they must share a lot of the same historical data for the period prior to 1970-ish. It might be good to give more details on these in the data section.
22. In the Author contribution section, there is PB and OD, but I imagine OD = LO?
Overall a good work that could be much more impactful with a few adjustments, so I recommend minor revisions at this stage since I do not think it will require redoing simulations (although perhaps sampling high-quality datasets as subsets from the existing ones could be done with not too much work).
Citation: https://doi.org/10.5194/hess-2024-336-RC1
AC1: 'Reply on RC1', Pierre Brigode, 05 Mar 2025
This paper presents an analysis of the century-long ERA-20C and NOAA-20CR reanalysis products to simulate flows and long-term trends that could otherwise be missed when looking at shorter time periods. I think the study is of interest and proposes an alternative and convincing argument that the streamflows alternate between wet and dry periods over long (i.e. decadal or longer) time periods. While I found the paper to be well written and interesting, I also have some issues that I think would need to be solved before recommending acceptance for publication.
Thank you for this positive feedback on our work.
1. The title should reflect the fact that the reanalyses are not just global reanalyses. They are century-long reanalyses, which is the real kicker and novelty.
Thank you for this suggestion. We will propose a new title for the article to emphasize the length of the reanalyses used and to incorporate the word "explore" (as suggested by Reviewer 2).
The previous title of the article was: "Using global reanalysis and rainfall-runoff model to study multi-decadal variability in catchment hydrology at the European scale."
We will propose the following revised title: "Using century-long reanalysis and a rainfall-runoff model to explore multi-decadal variability in catchment hydrology at the European scale."
2. Line 63: Define what the 20CR and 20C mean (20th century reanalysis, please add)
We will clarify the meaning of 20CR in the revised version of the article.
3. Line 64: "eight-times daily" --> Better to say 3-hourly.
Thank you, we will change this term in the revised version of the article.
4. Line 81: "such modeling" --> "such a modelling"
Thank you, we will correct this in the revised version of the article.
5. I see the resolution is a key issue in this paper. For starters, the MSWEP+ERA5 combination is mismatched, and the authors should have considered ERA5-Land to match the MSWEP precipitation resolution (0.1° each). Furthermore, to preserve coherence, ERA5-Land precipitation could have been used instead of MSWEP. I think a justification of this should be provided as well as an impact analysis (not redoing runs, but perhaps contextualizing with respect to the size of the catchments?).
Thank you for this comment. We agree with the reviewer: the spatial resolution of the input data for the rainfall-runoff model is a key aspect of our article. We will further address these points in the revised version of the article.
For the first version of our article, in addition to the historical reanalyses ERA-20C and NOAA 20CR, we considered two reference datasets for the recent period: (i) the combination of MSWEP precipitation and ERA5 air temperature and (ii) the combination of ERA5-Land precipitation and ERA5 air temperature. The calibration performances obtained with these four datasets are presented in the following figure.
The results show that the performances obtained using the combination of ERA5-Land precipitation and ERA5 air temperature (in black, not shown in the first version of the paper) are lower than those obtained with the combination of MSWEP precipitation and ERA5 air temperature. For this reason, we chose not to present the results obtained with the ERA5-Land precipitation and ERA5 air temperature combination in our article. However, we did not test the combination of ERA5-Land precipitation and ERA5-Land air temperature. We will mention these elements in the revised version of the article.
Figure 1: Calibration performance (KGE) for four different climate forcings (MSWEP & ERA5, ERA5-Land & ERA5, NOAA, and ERA, from left to right) according to region (note that Spanish catchments are not present in this subset). Boxplots are constructed with the 0.10, 0.25, 0.50, 0.75, and 0.90 quantiles.
6. In a similar vein, catchment sizes were restricted to above 100 km², whereas the ERA and NOAA datasets cover swaths of land between 10,000 and 17,000 km², which is a huge mismatch, especially given the strong elevation gradients across large parts of Europe. Was downscaling not an option? It seems that, at least with the altitude/elevation and some background information, it would be possible to attempt a rudimentary approach. I think the authors should consider this in their next version, or at least discuss it in more detail, because it is a key element of the paper.
We thank the reviewer for this important comment. Yes, we conducted tests to extrapolate meteorological forcings before model parameter calibration and rainfall-runoff simulations, particularly for high-altitude catchments. These tests aimed to extrapolate precipitation and air temperature based on the difference between the median elevation of each catchment and the median elevation of the meteorological forcing grid cells in each dataset. The results showed an improvement in calibration performance for small mountainous catchments but also led to a decrease in performance for other catchments. Since the performance improvement was not consistent across all catchments, we did not pursue this option further in the article and used the meteorological forcings as they were, without downscaling. By doing so, we accept potentially lower performance for hydrological simulations at the daily time step, but we preserve the long-term trends in air temperature and precipitation from the reanalyses considered.
However, it is important to note that the use of the snow accumulation and melt model CemaNeige inherently involves a downscaling of meteorological forcings by distributing precipitation and air temperature over five elevation bands (in our case), which are constructed based on the hypsometric curve of the catchment. Thus, the forcings input into the model are considered representative of those observed at the median elevation of the catchment and are then distributed across the five zones according to the gradients described by Valéry et al. (2014).
The development of a downscaling method at the scale of all the European catchments used in our study was beyond the scope of our article but represents a natural perspective for future work. These aspects will be clarified in the revised version of the article.
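For illustration, the sketch below (in R, the language of the airGR modelling chain) shows the kind of elevation-band redistribution that CemaNeige performs internally; the function name, the band elevations and the gradient values are assumptions chosen for illustration and are not the exact values of Valéry et al. (2014).

```r
# Illustrative sketch (not the exact CemaNeige code): redistribute catchment-average
# forcings over five elevation bands derived from the hypsometric curve.
# Gradient values below are assumed for illustration only.
distribute_forcings <- function(precip, temp, z_ref, z_bands,
                                grad_p = 4e-4,      # multiplicative precip gradient [1/m] (assumed)
                                grad_t = -0.0065) { # temperature lapse rate [degC/m] (assumed)
  lapply(z_bands, function(z) {
    dz <- z - z_ref
    data.frame(z_band = z,
               precip = precip * exp(grad_p * dz),  # exponential altitudinal correction
               temp   = temp + grad_t * dz)         # linear lapse-rate correction
  })
}

# Example: daily forcings given at the catchment median elevation (800 m a.s.l.)
# redistributed to the median elevations of five hypsometric bands (assumed values)
do.call(rbind, distribute_forcings(precip = 10, temp = 2, z_ref = 800,
                                   z_bands = c(400, 650, 800, 1000, 1400)))
```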
7. KGE version used is the 2009 version, whereas the community has moved on to the 2012 modified KGE. While I have no problem with this (it is still a good metric), it would be good to explain that this was an editorial choice.
Thank you for this comment. We used the 2009 version of the KGE to allow comparison of the performance and parameter values with those obtained in other similar studies on several catchment subsets (comparisons not shown in the article). We will clarify this point in the revised version of the article.
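For context, the two formulations differ only in the variability term; their standard definitions are recalled below, where r is the linear correlation between simulated and observed flows, and σ and μ denote the standard deviation and the mean of the simulated (sim) and observed (obs) series.

```latex
% KGE (Gupta et al., 2009): variability measured by the ratio of standard deviations
\mathrm{KGE} = 1 - \sqrt{(r - 1)^2 + (\alpha - 1)^2 + (\beta - 1)^2},
\qquad \alpha = \frac{\sigma_{\mathrm{sim}}}{\sigma_{\mathrm{obs}}},
\qquad \beta  = \frac{\mu_{\mathrm{sim}}}{\mu_{\mathrm{obs}}}

% Modified KGE (Kling et al., 2012): variability measured by the ratio of coefficients of variation
\mathrm{KGE}' = 1 - \sqrt{(r - 1)^2 + (\gamma - 1)^2 + (\beta - 1)^2},
\qquad \gamma = \frac{\sigma_{\mathrm{sim}} / \mu_{\mathrm{sim}}}{\sigma_{\mathrm{obs}} / \mu_{\mathrm{obs}}}
```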
8. Line 198: "manipulated" has a negative connotation. I would suggest: "was implemented".
Thank you, we will change this sentence in the revised version of the article.
9. It seems the first boxplot in Figure 2a-c is not colored in red as it is supposed to be? Or is it some other variable, given there are 9 boxes and only 8 legend entries? The same seems true for the two other boxplots as well. Please clarify.
Thank you for this comment: the first boxplot in each figure represents all the studied catchments. We will clarify this point in the revised version of the figure.
10. Lines 248 and 256: p-value can be set to 0 when it is basically machine precision (2.2e-16).
Thank you, we will set the p-value to 0 in the revised version of the article.
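For context, 2.2e-16 is the double-precision machine epsilon, which is also the floor below which R's test functions print "p-value < 2.2e-16"; it can be checked directly:

```r
.Machine$double.eps
#> [1] 2.220446e-16
```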
11. Figure 5: One problem here is that the top rows will be a significant subset of the bottom rows, so the results are not necessarily comparable. For example, imagine that 50% of the catchments only have validation data on the exact 1982-1995 period. That would mean that those basins' scores would be exactly the same in the bottom plots even though they should cover a longer period, but it cannot be interpreted that way. I would suggest identifying a series of catchments that have at least 50 years of validation data and keeping those independent for the long-duration tests.
Thank you for this suggestion: we will follow the reviewer's advice and present, in the revised version of the article, the results of a subsample of catchments with at least 50 years of data over the evaluation period.
12. Lines 275-276: I think this is useful information that should be shown as it would show another mode/dimension to the problem that filters out random perturbations and focuses on the longer-term patterns.
Thank you for this suggestion: we will include these additional results in the appendix of the revised version of the article to avoid making the article too lengthy.
13. General comment: It would be good to have a series of simulations for which only the ones that obtained a KGE in calibration/validation above a certain quality threshold are preserved, e.g. KGE > 0.7 for NOAA/NOAA and ERA/ERA (not really useful to do ERA5+MSWEP/NOAA(ERA)). This would ensure results are not negatively affected by poorly modelled basins/models, as we have some that have quite low KGE values that contribute to the detailed variability results and are perhaps not as trustworthy.
Thank you for this suggestion. In the revised version of the article, we will add the simulations obtained for a subsample of catchments where the rainfall-runoff model exceeds a performance threshold considered good.
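As an illustration of this subsampling, a minimal R sketch is given below; the table and column names are hypothetical and the KGE values are made up, the only element taken from the comment being the 0.7 threshold.

```r
# Hypothetical per-catchment performance table (names and values are illustrative)
perf <- data.frame(
  id             = c("A", "B", "C"),
  KGE_calib_NOAA = c(0.82, 0.55, 0.74),
  KGE_eval_NOAA  = c(0.78, 0.40, 0.71),
  KGE_calib_ERA  = c(0.80, 0.60, 0.69),
  KGE_eval_ERA   = c(0.75, 0.45, 0.72)
)

# Keep only catchments where both century-long reanalyses reach the suggested threshold
kge_threshold <- 0.7
keep <- with(perf,
             KGE_calib_NOAA >= kge_threshold & KGE_eval_NOAA >= kge_threshold &
             KGE_calib_ERA  >= kge_threshold & KGE_eval_ERA  >= kge_threshold)
perf[keep, ]  # high-quality subset used for the variability analyses
```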
14. Figure 6: Not clear why the number of catchments changes for each metric. I would assume they would be the same from one metric to the next since they are only excluded if they don't have 30 years of observations?
Thank you for this comment. In addition to the missing-value threshold used at the daily time step to select the catchment subset, each flow metric has its own gap threshold. These differences will be clarified in the revised version of the article.
15. Figure 6: I would show the boxplots in their entirety here. Not much use limiting to 0.2. At least to 0.0.
Thank you for this comment: we will change the y-axis limit of Figure 6 in the revised version of the article.
16. Lines 310-311: There have been numerous studies on this previously, I think it would be good to reference a few to show that your results are in line with the current literature.
Thank you for this comment. We agree with the reviewer and will add several references on this topic in the revised version of the article: hydrological models have the ability to compensate for errors in forcing via parameter calibration, whether these errors are systematic (bias) or random (see e.g. Dawdy and Bergmann, 1969; Oudin et al., 2006).
17. Line 321: "of this work" --> "for this work"
Thank you, we will correct this in the revised version of the article.
References
Coron, L., Thirel, G., Delaigue, O., Perrin, C. & Andréassian, V. (2017). The suite of lumped GR hydrological models in an R package. Environmental Modelling & Software 94: 166–171. https://doi.org/10.1016/j.envsoft.2017.05.002.
Dawdy, D. R. & Bergmann, J. M. (1969). Effect of Rainfall Variability on Streamflow Simulation. Water Resources Research 5(5): 958–966. https://doi.org/10.1029/WR005i005p00958.
Michel, C. (1991). Hydrologie Appliquée aux Petits Bassins Ruraux (Applied Hydrology for Small Catchments). Internal Report, Cemagref, Antony, France.
Oudin, L., Perrin, C., Mathevet, T., Andréassian, V. & Michel, C. (2006). Impact of biased and randomly corrupted inputs on the efficiency and the parameters of watershed models. Journal of Hydrology 320(1–2): 62–83. https://doi.org/10.1016/j.jhydrol.2005.07.016.
Valéry, A., Andréassian, V. & Perrin, C. (2014). 'As simple as possible but not simpler': what is useful in a temperature-based snow-accounting routine? Part 2 – Sensitivity analysis of the Cemaneige snow accounting routine on 380 catchments. Journal of Hydrology 517: 1176–1187. https://doi.org/10.1016/j.jhydrol.2014.04.058.
Citation: https://doi.org/10.5194/hess-2024-336-AC1
RC2: 'Comment on hess-2024-336', Anonymous Referee #2, 20 Jan 2025
The paper seeks to explore the applicability of two long-term reanalysis datasets for reconstructing river flows (low, mean and high) across a large sample of European catchments. This is an important topic for understanding variability and change and contextualising emerging trends, and will thus be of interest. I enjoyed reading the paper. While I am supportive of the paper and ultimately recommend only minor corrections, there are some adjustments to structure and a couple of points of clarity that, in my mind, would make the paper stronger.
- The title might be reframed to explicitly include the words ‘exploring’ or ‘evaluating’ the utility of these datasets across the flow regime. This is ultimately what the paper does. For a full reconstruction additional uncertainties including hydrological model would need to be included and many of the limitations noted in the discussion integrated into the analysis. However, if the aim is to evaluate the utility of these products then the current study design stands.
- The introduction and literature review provide a nice summary and collection of useful references.
- Rather than NOAA and ERA please use the full reanalysis product name throughout for clarity.
- Line 81 suggest evaluate rather than document
- In your aims on line 86, what does 'efficient' mean? Use of the word here is a little vague.
- Given the scale mismatches can you offer a sentence or two on why downscaling or a combination of downscaling and bias correction was not included?
- The section on the four criteria used for catchment selection (line 121) could be shortened, with the actual criteria introduced as you list them. I found myself wondering what you mean by relatively long series, adequate area, etc. Why the threshold of 100 km²?
- No need for bullets in differentiating catchment set.
- The model calibration process is generally well described, however it might be worth noting how parameter sets were identified – what search was used – the default in GR4J package or another approach.
- A single model structure is used across a very diverse catchment set. Some reflection on why and the strengths/weaknesses of this approach in the context of the aim of the paper would be useful in this section.
- Maybe introduce the Wilcoxon rank test in the methods and why it is used.
- Fig. 7 and elsewhere – have you any suggestions as to why the reanalysis datasets diverge at particular points – e.g. western France prior to 1940, while they show comparable performance for more recent periods? Might there be differences in the sea level pressure data assimilated in each?
- In the discussion the limitation might be framed better in the context of the aims of the study, i.e. Full exploration of these aspects was not the purpose of the study but rather to evaluate the input datasets.
- Results are presented throughout the discussion section. It would be better for the reader and the clarity of the paper if these were in the results section.
- The conclusion is rather like a discussion to me. It would be better if the core research questions were discussed in the discussion section and then more concise conclusions drawn. This would help the sharpness and clarity of the paper.
Citation: https://doi.org/10.5194/hess-2024-336-RC2
AC2: 'Reply on RC2', Pierre Brigode, 05 Mar 2025
The paper seeks to explore the applicability of two long-term reanalysis datasets for reconstructing river flows (low, mean and high) across a large sample of European catchments. This is an important topic for understanding variability and change and contextualising emerging trends, and will thus be of interest. I enjoyed reading the paper. While I am supportive of the paper and ultimately recommend only minor corrections, there are some adjustments to structure and a couple of points of clarity that, in my mind, would make the paper stronger.
Thank you for these positive comments.
1. The title might be reframed to explicitly include the words ‘exploring’ or ‘evaluating’ the utility of these datasets across the flow regime. This is ultimately what the paper does. For a full reconstruction additional uncertainties including hydrological model would need to be included and many of the limitations noted in the discussion integrated into the analysis. However, if the aim is to evaluate the utility of these products then the current study design stands.
Thank you for this suggestion. We will propose a new title for the article, incorporating the word "explore" and also integrating Reviewer 1's suggestion regarding the length of the reanalyses used in the study.
The previous title of the article was: "Using global reanalysis and rainfall-runoff model to study multi-decadal variability in catchment hydrology at the European scale."
We will propose the following revised title: "Using century-long reanalysis and a rainfall-runoff model to explore multi-decadal variability in catchment hydrology at the European scale".
2. The introduction and literature review provide a nice summary and collection of useful references.
Thank you for these positive comments.
3. Rather than NOAA and ERA please use the full reanalysis product name throughout for clarity.
In the revised version of the article, we will use the full name of each reanalysis.
4. Line 81 suggest evaluate rather than document
Thank you for this suggestion: we will change this term in the revised version of the article.
5. In your aims on line 86, what does 'efficient' mean? Use of the word here is a little vague.
Thank you for this comment: we will clarify what we mean by the term "efficient" in the revised version of the article.
6. Given the scale mismatches can you offer a sentence or two on why downscaling or a combination of downscaling and bias correction was not included?
We thank the reviewer for this important comment. We conducted tests to extrapolate meteorological forcings before model parameter calibration and rainfall-runoff simulations, particularly for high-altitude catchments. These tests aimed to extrapolate precipitation and air temperature based on the difference between the median elevation of each catchment and the median elevation of the meteorological forcing grid cells in each dataset. The results showed an improvement in calibration performance for small mountainous catchments but also led to a decrease in performance for other catchments. Since the performance improvement was not consistent across all catchments, we did not pursue this option further in the article and used the meteorological forcings as they were, without downscaling. By doing so, we accept potentially lower performance for hydrological simulations at the daily time step, but we preserve the long-term trends in air temperature and precipitation from the reanalyses considered.
However, it is important to note that the use of the snow accumulation and melt model CemaNeige inherently involves a downscaling of meteorological forcings by distributing precipitation and air temperature over five elevation bands (in our case), which are constructed based on the hypsometric curve of the catchment. Thus, the forcings input into the model are considered representative of those observed at the median elevation of the catchment and are then distributed across the five zones according to the gradients described by Valéry et al. (2014).
The development of a downscaling method at the scale of all the European catchments used in our study was beyond the scope of our article but represents a natural perspective for future work. These aspects will be clarified in the revised version of the article.
7. The section on the four criteria used for catchment selection (line 121) could be shortened, with the actual criteria introduced as you list them. I found myself wondering what you mean by relatively long series, adequate area, etc. Why the threshold of 100 km²?
We will shorten this paragraph by directly stating what we consider to be a sufficiently long streamflow series, a sufficiently large catchment, etc. The catchment area threshold of 100 km² is not based on an objective criterion, and we will clarify this in the revised version of the article.
8. No need for bullets in differentiating catchment set.
We will remove this list in the revised version of the article.
9. The model calibration process is generally well described, however it might be worth noting how parameter sets were identified – what search was used – the default in GR4J package or another approach.
We used the default optimization algorithm included in the airGR R package. This algorithm was specifically designed for GR models (Michel, 1991; Coron et al., 2017). We will clarify this in the revised version of the article.
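For illustration, a minimal sketch of that default workflow is given below, based on the airGR documentation and the example dataset shipped with the package; argument names may differ slightly between airGR versions, so this should be read as an illustrative outline rather than the exact script used in the study.

```r
library(airGR)
data(L0123001)  # example catchment shipped with airGR (provides BasinObs)

## Model inputs: GR4J forced with daily precipitation and potential evapotranspiration
InputsModel <- CreateInputsModel(FUN_MOD = RunModel_GR4J, DatesR = BasinObs$DatesR,
                                 Precip = BasinObs$P, PotEvap = BasinObs$E)

## Calibration period (here 1990-1999 for the example data)
Ind_Run <- which(format(BasinObs$DatesR, "%Y") %in% as.character(1990:1999))
RunOptions <- CreateRunOptions(FUN_MOD = RunModel_GR4J, InputsModel = InputsModel,
                               IndPeriod_Run = Ind_Run)

## Objective function: KGE (2009) on daily flows
InputsCrit <- CreateInputsCrit(FUN_CRIT = ErrorCrit_KGE, InputsModel = InputsModel,
                               RunOptions = RunOptions, Obs = BasinObs$Qmm[Ind_Run])

## Default airGR search: coarse pre-screening followed by the local search of Michel (1991)
CalibOptions <- CreateCalibOptions(FUN_MOD = RunModel_GR4J, FUN_CALIB = Calibration_Michel)
OutputsCalib <- Calibration_Michel(InputsModel = InputsModel, RunOptions = RunOptions,
                                   InputsCrit = InputsCrit, CalibOptions = CalibOptions,
                                   FUN_MOD = RunModel_GR4J)
OutputsCalib$ParamFinalR  # calibrated GR4J parameter set
```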
10. A single module structure is used across a very diverse catchment set. Some reflection on why and the strengths/weaknesses of this approach in the context of the aim of the paper would be useful in this section.
We agree with the reviewer: verifying whether the results obtained in our study using a single rainfall-runoff model are robust by reproducing the methodology with different rainfall-runoff models is an interesting perspective for future work.
The results we obtained with a single model suggest a predominant control of meteorological forcings on the long-term evolution of streamflow. This implies that modeling choices (e.g., parameter optimization method, model structure) may have only a minor impact on the trends of flow indicators. However, this hypothesis still needs to be more directly demonstrated by considering other calibration strategies or different models.
We will further discuss this point in the revised version of the article.
11. Maybe introduce the Wilcoxon rank test in the methods and why it is used.
We will explain the reasons for using this test in the revised version of the article.
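For instance, a paired Wilcoxon signed-rank test comparing per-catchment scores obtained with the two reanalyses can be run directly in base R; the vectors below are made-up illustrations, not results from the study.

```r
# Hypothetical per-catchment KGE scores for the same catchments under the two forcings
kge_noaa <- c(0.71, 0.64, 0.80, 0.55, 0.68)
kge_era  <- c(0.74, 0.69, 0.78, 0.60, 0.73)

# Paired, non-parametric test of a shift in median performance between the two datasets
wilcox.test(kge_noaa, kge_era, paired = TRUE, alternative = "two.sided")
```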
12. Fig. 7 and elsewhere – have you any suggestions as to why the reanalysis datasets diverge at particular points – e.g. western France prior to 1940, while they show comparable performance for more recent periods? Might there be differences in the sea level pressure data assimilated in each?
We will discuss possible explanations for the divergence/convergence of the reanalyses depending on the period considered in the revised version of the article, in particular by examining the data assimilated in each reanalysis.
13. In the discussion the limitation might be framed better in the context of the aims of the study, i.e. Full exploration of these aspects was not the purpose of the study but rather to evaluate the input datasets.
Thank you for this comment: we will propose a new organization for the discussion and conclusion sections to clarify the key messages conveyed by our article.
14. Results are presented throughout the discussion section. It would be better for the reader and the clarity of the paper if these were in the results section.
Thank you for this comment: we will propose a new organization for the discussion and conclusion sections to clarify the key messages conveyed by our article.
15. The conclusion is rather like a discussion to me. It would be better if the core research questions were discussed in the discussion section and then more concise conclusions drawn. This would help the sharpness and clarity of the paper.
Thank you for this comment: we will propose a new organization for the discussion and conclusion sections to clarify the key messages conveyed by our article.
References
Coron, L., Thirel, G., Delaigue, O., Perrin, C. & Andréassian, V. (2017). The suite of lumped GR hydrological models in an R package. Environmental Modelling & Software 94: 166–171. https://doi.org/10.1016/j.envsoft.2017.05.002.
Dawdy, D. R. & Bergmann, J. M. (1969). Effect of Rainfall Variability on Streamflow Simulation. Water Resources Research 5(5): 958–966. https://doi.org/10.1029/WR005i005p00958.
Michel, C. (1991). Hydrologie Appliquée aux Petits Bassins Ruraux (Applied Hydrology for Small Catchments). Internal Report, Cemagref, Antony, France.
Oudin, L., Perrin, C., Mathevet, T., Andréassian, V. & Michel, C. (2006). Impact of biased and randomly corrupted inputs on the efficiency and the parameters of watershed models. Journal of Hydrology 320(1–2): 62–83. https://doi.org/10.1016/j.jhydrol.2005.07.016.
Valéry, A., Andréassian, V. & Perrin, C. (2014). 'As simple as possible but not simpler': what is useful in a temperature-based snow-accounting routine? Part 2 – Sensitivity analysis of the Cemaneige snow accounting routine on 380 catchments. Journal of Hydrology 517: 1176–1187. https://doi.org/10.1016/j.jhydrol.2014.04.058.
Citation: https://doi.org/10.5194/hess-2024-336-AC2