This work is distributed under the Creative Commons Attribution 4.0 License.
Towards robust seasonal streamflow forecasts in mountainous catchments: impact of calibration metric selection in hydrological modeling
Diego Araya
Eduardo Muñoz-Castro
James McPhee
Abstract. Dynamical (i.e., model-based) methods are widely used by forecasting centers to generate seasonal streamflow forecasts, building upon process-based hydrological models that require parameter specification (i.e., calibration). Here, we investigate the extent to which the choice of calibration objective function affects the quality of seasonal (spring-summer) streamflow forecasts produced with the traditional ensemble streamflow prediction (ESP) method and explore connections between forecast skill and hydrological consistency – measured in terms of biases in hydrological signatures – obtained from the model parameter sets. To this end, we calibrate three popular conceptual rainfall-runoff models (GR4J, TUW, and Sacramento) using 12 different objective functions, including seasonal metrics that emphasize errors during the snowmelt period, and produce hindcasts for five initialization times over a 33-year period (April/1987–March/2020) in 22 mountain catchments that span diverse hydroclimatic conditions along the semiarid Andes Cordillera (28°–37° S). The results show that the choice of calibration metric becomes relevant as the winter (snow accumulation) season begins (i.e., July 1), enhancing inter-basin differences in forecast skill as initializations approach the beginning of the snowmelt season (i.e., September 1). The comparison of seasonal forecasts obtained from different calibration metrics shows that hydrological consistency does not ensure satisfactory seasonal ESP forecasts (e.g., Split KGE), and that satisfactory ESP forecasts are not necessarily associated with a hydrologically consistent parameter set (e.g., VE-Sep). Among the options explored here, an objective function that combines the Kling-Gupta Efficiency (KGE) and the Nash-Sutcliffe Efficiency (NSE) with flows in log space provides the best compromise between hydrologically consistent model simulations and good forecast performance. Finally, the choice of calibration metric generally affects the magnitude of correlations between forecast quality attributes and catchment descriptors, rather than their sign, with the baseflow index and interannual runoff variability being the best predictors of forecast skill. Overall, this study highlights the need for careful parameter estimation strategies in the forecasting production chain to generate skillful forecasts for the right reasons and to draw robust conclusions on hydrologic predictability.
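For reference, the combined objective named above can be written out directly. The following is a minimal Python sketch, assuming an equally weighted sum of the two terms and a small offset to guard log(0); neither of these details is specified in the abstract:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency (Gupta et al., 2009)."""
    r = np.corrcoef(sim, obs)[0, 1]      # linear correlation
    alpha = np.std(sim) / np.std(obs)    # variability ratio
    beta = np.mean(sim) / np.mean(obs)   # bias ratio
    return 1 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

def nse(sim, obs):
    """Nash-Sutcliffe Efficiency."""
    return 1 - np.sum((sim - obs)**2) / np.sum((obs - np.mean(obs))**2)

def objective(sim, obs, eps=1e-6):
    """KGE(Q) + NSE(log(Q)): KGE weights high flows, log-space NSE weights
    low flows. The equal weighting and the offset eps are assumptions made
    here for illustration, not taken from the paper."""
    return kge(sim, obs) + nse(np.log(sim + eps), np.log(obs + eps))
```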
Status: closed
- RC1: 'Comment on hess-2023-116', Anonymous Referee #1, 16 Jun 2023
The manuscript aims to evaluate the role of calibration metrics (the objective function for calibration and the performance evaluation metrics) in seasonal streamflow forecasts for 22 mountainous river basins in Chile, based on the CAMELS-CL dataset. The amount of work done by the authors deserves appreciation, as does the framing of the scientific questions. The manuscript has enough scientific content to be published in HESS after revision. The problems framed in the manuscript are tested using a scientifically sound methodology.
However, I feel the manuscript is a complicated read due to the multiple parameters and metrics and a lack of clarity, especially in the methods section. I believe the manuscript can benefit from reorganizing the content. The main result must be better highlighted, and the others could be moved to the supplementary section to improve readability. The results sections do not highlight the overall conclusion or takeaway of each section. Therefore, I was quite confused, even after multiple reads, about what the authors were trying to communicate.
A detailed flow chart could be used to convey the method. Parts of the methodology are distributed across different sections, including the introduction. The concept of ensemble streamflow prediction used in the study is defined in the introduction section; I would appreciate it being elaborated in the methodology section instead. The introduction section should focus more on existing gaps in the literature and highlight the need for the present study.
There is a lack of consistency in the terms used, which makes the manuscript more confusing. For instance, although hindcasts are performed in the paper, the term 'forecasts' is used in certain places. Similarly, I did not understand what the authors meant by the first and second validation periods in the Figure 8 caption. Did you mean the calibration period, where the model is calibrated using different parameters, and the hindcast period, where the ensemble streamflow prediction method is employed?
I think the manuscript will also benefit from redesigning the figures. The multiple boxplot figures add considerable complexity to the analysis. For instance, I would suggest focusing on the median result of all model combinations while showing the effect of initialization time and performance metrics for each calibration OF (Figure 7).
Reducing the amount of noise while focusing on a particular science question could considerably improve the manuscript's readability and merit.
Figure S2 is not cited, and supplementary figure S3 is wrongly numbered.
Regarding the results, '(not shown)' is used multiple times in the manuscript. If these results are an important part of the argument, I would suggest including them in the supplementary section.
I reiterate that the scientific question the manuscript addresses, improving seasonal ensemble streamflow prediction by assessing its sensitivity to calibration metrics, is an important one. However, the organization and presentation of the results need to be improved to better understand the manuscript's outcomes.
Citation: https://doi.org/10.5194/hess-2023-116-RC1
- AC1: 'Reply on RC1', Pablo Mendoza, 16 Aug 2023
- RC2: 'Comment on hess-2023-116', Paul C. Astagneau, 30 Jun 2023
This study evaluates the effect of calibration, initial conditions, and choice of model structure on forecasting seasonal volumes in mountainous areas. To that end, the parameters of three conceptual models were calibrated for 22 river basins in Chile using 12 objective functions. The calibrated models were then used to produce ESP forecasts considering five initial conditions. The authors evaluated the quality of the spring-summer forecasts using 33 years of data given the different combinations of models, parameters and initial conditions. An evaluation of the links between forecast quality, model performance in simulating different streamflow signatures, and catchment characteristics was conducted. The authors found that the choice of objective function has an impact on forecast quality and that a high performance in simulating hydrological signatures does not ensure good forecast quality. The authors also found that these results depend to some extent on the hydrological model.
As mentioned by the first reviewer, this study represents a significant amount of work. The subject is very relevant and has not been covered much in the literature. The differences in forecast quality and hydrological consistency for the different forecast combinations demonstrate that this study covers an important topic that needs to be considered for operational use.
Given the number of model/calibration/HIC combinations and the authors' choice of presentation, the article is not very easy to understand. I also have several questions and comments on some of the methodological choices that were made. Because the questions, comments and corrections I make below might require a lot of work, I think they should be considered in light of the choices that will be made to reorganize the paper.
Major comments:
- Twenty-two catchments with seasonal snowmelt contributions to total runoff were selected from the CAMELS-CL dataset for this study. It is stated (line 105) that “the selected basins are included in the CAMELS-CL dataset … and meet the following criteria…”. Were all the catchments meeting these criteria selected (resulting in a total of 22 catchments) or were there other mountainous catchments of CAMELS-CL meeting the same criteria? Although these catchments encompass a large variety of hydroclimatic conditions, a larger dataset would enable more general conclusions to be drawn from this study. If other catchments of CAMELS-CL were to meet the same criteria, I would suggest including them in this study. Since they come from the same open-source database, it would mean running the same calculations but for more catchments. If the choice of selecting 22 catchments was driven by computation time restrictions, please consider mentioning it in the manuscript.
- As pointed out by the first reviewer, the manuscript can benefit from reorganizing the content. One way to reorganize the content would be to keep only one figure presenting the results for all models/objective functions/HICs (e.g. fig. 5, extended in height to enhance visualisation) and change the other figures so that they better highlight the conclusions of the paper. A few related suggestions:
- Fig. 4 and section 4.1: I would have put this section at the end of the results section to illustrate the results. This figure could be reduced in terms of results by keeping only one model and two OFs (a popular OF and KGE(Q)+NSE(log(Q))).
- Fig. 5: to extend in height to enhance visualisation.
- Fig. 6: to move to the supplementary materials.
- Fig. 7: show only three OFs (pick the most relevant to highlight your conclusions), May 1 and Sep 1 for initialization times and two forecast criteria.
- Fig. 8: keep only two OFs and present the daily hydrographs for two years in one of the validation periods.
- Fig. 9: keep only two or three OFs and extend the figure in height.
- Fig. 10: remove the alpha and bias lines. Show only KGE or NSE for popular OFs.
- I did not understand why the ESP forecasts were evaluated on both the calibration and evaluation periods without separation in the result analyses. Even if there are 32 meteorological inputs for each forecasted year, and they differ from the meteorological input of the forecasted year, the streamflow data used to calculate seasonal performance have already been "seen" by the model during calibration. Since one of the goals of this study is to evaluate the impacts of calibration on seasonal forecasts, I think the authors should consider evaluating forecast quality only on the evaluation periods (or better explain why it was done this way); this leave-one-out logic is illustrated in the ESP sketch after this list. The remaining 32 (33 - 1) years of meteorological data can still be used for the ensemble forecasts of each forecasted year. To improve the analysis of the hydrological consistency of model simulations, the authors could perform a split-sample test.
- Lines 21 and 435, the term "hydrologically consistent parameter set" is used. No analyses of parameter sets were made in this study; only the ability of the models to reproduce streamflow signatures was evaluated. High performance for specific streamflow signatures may imply more consistency in simulating streamflow than using a "popular" metric. However, I would argue that it does not necessarily imply that the parameter sets of the models are more hydrologically consistent, as a model can be wrongly parameterized and its underlying hypotheses may not fit the studied catchments. Even when a specific parameter set leads to better signature performance, equifinality of parameters can remain high, especially if the parameter sets yielding good performance vary between periods. As the manuscript already includes many results, I suggest considering a small analysis of the parameter sets of one of the models (e.g., TUWmodel, which seems to give the highest forecast quality); a possible form for this is given in the parameter-spread sketch after this list. This analysis would only be conducted for three objective functions (the one associated with the lowest hydrological consistency, the one associated with the highest hydrological consistency, and KGE(Q)+NSE(log(Q)), which is the best compromise between forecast quality and hydrological consistency). In the TUW model, not all the parameters would need to be assessed; for instance, only the ones related to baseflow (in the TUWmodel package, this would be the parameter "k2") and/or snow, to relate the results to catchment attributes that have a strong correlation with forecast quality. The variations of parameters between periods could then be evaluated (if you were to follow the previous comment about calibration and evaluation periods). Of course, this would need to be considered after a reorganisation of the manuscript (to better highlight the results already presented); it should not come in the same format as the other results. The content of the manuscript (in terms of results) needs to be reduced first.
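ESP sketch. A minimal Python illustration of the leave-one-out resampling discussed in the comment on evaluation periods; the run_model interface and all names are hypothetical placeholders, not the authors' implementation. Only the resampling logic (exclude the target year, force the model with the remaining 32 historical years) reflects the method under discussion:

```python
import numpy as np

def esp_seasonal_volumes(states_at_init, met_by_year, run_model, target_year):
    """Leave-one-out ESP: starting from the model states at the initialization
    date, force the model with each historical year's meteorology except the
    target year's, and accumulate simulated flows over the forecast season.
    met_by_year: dict mapping year -> meteorological forcing;
    run_model(states, met) -> array of daily flows (hypothetical interface)."""
    traces = [run_model(states_at_init, met_by_year[year])
              for year in met_by_year if year != target_year]  # 32 of 33 years
    return np.array([np.sum(trace) for trace in traces])       # ensemble of volumes

# Per the comment above, forecast quality would then be scored only for target
# years lying outside the calibration period (i.e., the evaluation period).
```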
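Parameter-spread sketch. A minimal version of the suggested parameter-set analysis, assuming the calibrated values of the TUWmodel baseflow storage coefficient k2 have been collected per catchment, objective function, and calibration period; the table contents below are invented for illustration:

```python
import pandas as pd

# Hypothetical table of calibrated values: one row per
# (catchment, objective function, calibration period) combination.
calib = pd.DataFrame({
    "catchment": ["A", "A", "A", "A"],
    "objective": ["SplitKGE", "SplitKGE", "KGE+NSElog", "KGE+NSElog"],
    "period":    ["P1", "P2", "P1", "P2"],
    "k2":        [120.0, 45.0, 80.0, 85.0],  # baseflow storage coefficient
})

# Spread of k2 across calibration periods for each objective function:
# a large spread hints at equifinality / unstable parameterization,
# a small spread at a more consistent parameter set.
spread = calib.groupby(["catchment", "objective"])["k2"].agg(["mean", "std"])
spread["cv"] = spread["std"] / spread["mean"]
print(spread)
```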
Minor comments:
- The link to the Zenodo repository does not seem to be working anymore (last checked: 30/06).
- Lines 27 and 86: the term "for the right reasons" is a bit strong, as no data other than streamflow were used for model evaluation. I suggest using "more hydrologically consistent simulations", as you did in the remainder of the paper.
- Line 108: how were the seasonal snowmelt contributions calculated?
- Line 155: the CemaNeige model also partitions total precipitation into liquid and solid precipitation. Liquid precipitation and snowmelt are fed to the soil moisture store.
- Line 158: the GR4J model also includes a non-conservative function for water exchanges between topographical catchments.
- Line 160: what do you mean by response area?
- Sect 3.1.1: were different elevation bands considered in TUWmodel and SNOW17?
- Sect 3.1.1: the three models used in this study were implemented within the R environment. These models and their exact implementation were described in (Astagneau et al., 2021; https://doi.org/10.5194/hess-25-3937-2021). In addition, the structures of the models are compared using a unified representation of the different storages and fluxes (Fig. 1). If, and only if, you used this paper to choose, implement or understand these models, you should cite it. Otherwise, please ignore this comment.
- Fig. 3 and sect. 3.1.2: I am not sure there is any validation of the models made in this study. For me, validation means that you are choosing one model over the other (or rejecting one) and evaluation means that you are evaluating the models outside the calibration period. Please consider replacing “validation” by “evaluation”.
- Sect 3.2: it could be useful to add a more detailed presentation of the ESP method (for instance by adding a reference to Fig. 3 of Crochemore et al., 2020; https://doi.org/10.1029/2019WR025700) and extend Fig. 3 that really helps to understand your framework.
Citation: https://doi.org/10.5194/hess-2023-116-RC2
- AC2: 'Reply on RC2', Pablo Mendoza, 16 Aug 2023
- RC3: 'Comment on hess-2023-116', Anonymous Referee #3, 03 Jul 2023
Thank you for the opportunity to review this manuscript. The authors have clearly put substantial effort into this work, analyzing the impact of calibration metrics and providing potentially valuable insights for mountainous areas. While the overall presentation of the content is satisfactory, some of the arguments need stronger scientific support and more detailed information. As the other referees have already pointed out, the structure needs to be improved, and the content also needs to be filtered by relevance to reduce redundancy.
In terms of specific comments:
- Lines 20-25, please expand or explain abbreviations when they are first used to ensure clarity.
- The cited work "Troin et al., 2021" doesn't appear in the reference list.
- Line 112: there is a typo; it should be "km2".
- Line 123: could the authors expand on the underestimation they mention here? Is the underestimation from the CR2MET or the CAMELS-CL dataset?
- Lines 123 and 126, similar to the previous comment: bias in runoff is mentioned in line 123, but it is unclear how this ties into the observed runoff mentioned in line 126. Further clarification would be useful.
- Could the authors clarify which dataset is used to conduct Figures 1 and 2?
- On line 128, the manuscript mentions both basin-averaged precipitation from CR2MET and precipitation from CAMELS. Could the authors elaborate on how they are used? It would be helpful if they clarified the specific roles these data sources play in their analysis.
- Line 240: I assume WY stands for water years? This is not stated when the abbreviation is first used.
- There are a large number of citations scattered throughout the paper, which makes it challenging to follow. Please consider revisiting these citations and remove any that may not be strictly necessary.
Again, thank you for the opportunity to review this work. With clarifications and improvements, I believe this paper has potential to make a valuable contribution to the field.
Citation: https://doi.org/10.5194/hess-2023-116-RC3
- AC3: 'Reply on RC3', Pablo Mendoza, 16 Aug 2023