the Creative Commons Attribution 4.0 License.
Technical Note: Flood frequency study using partial duration series coupled with entropy principle
Abstract. Quality discharge measurements and frequency analysis are two major prerequisites for defining a design flood. Flood frequency analysis (FFA) utilizes a comprehensive understanding of the probabilistic behavior of extreme events but has certain limitations regarding the sampling method and choice of distribution models. Entropy, as a modern-day tool, has found several applications in FFA, mainly in the derivation of probability distributions and their parameter estimation as per the principle of maximum entropy (POME) theory. The present study explores a new dimension to this area of research, where POME theory is applied in the partial duration series (PDS) modeling of FFA to locate the optimum threshold and the respective distribution models. The proposed methodology is applied to the Waimakariri River at the Old Highway Bridge site in New Zealand, as it has one of the best-quality discharge records. The catchment also has a history of significant flood events in the last few decades. The degree of fitness of models to the exceedances is compared with the standardized statistical approach followed in the literature. The threshold estimated from this study is also compared with some previous findings. Various return period quantiles are calculated, and their predictive ability is tested by bootstrap sampling. An overall analysis of the results shows that entropy can also be used as an effective tool for threshold identification in PDS modeling of flood frequency studies.
Status: closed

RC1: 'Comment on hess-2021-570', Anonymous Referee #1, 24 Dec 2021
The Technical Note "Flood frequency study using partial duration series coupled with entropy principle" by Swetapadma and Ojha discusses methods to use partial duration series type of data to carry out flood frequency estimation. The topic is interesting and appropriate for the journal, but I somehow fail to see what the main contributions of the note are; I think it does not provide a clear overview of the new developments, significant advances, and novel aspects of experimental and theoretical methods and techniques which are relevant for scientific investigations within the journal scope (this is a quote from the description of HESS technical notes).
The paper is fairly well organised and the references mostly suitable, giving an overview of what is the current understanding of the question. It presents the modelling framework using a case study in New Zealand.
My understanding is that the novel contribution proposed by the authors is to use entropy as a way to choose the PDS threshold, but I am not entirely sure this innovation is presented in a clear and convincing way. In particular, there are a few points that I find quite unclear or that I believe undermine the strength of the authors' argument; I'll try to outline them below.
* I feel the note is somewhat lacking a discussion of the consequences connected to the many choices made in the modelling pipeline: the most obvious one to me is the choice of estimating the distribution parameters with L-moments rather than with other methods. Would the threshold/distribution choice be different if we used standard moments or maximum likelihood to estimate the parameters?
* the choice of distributions used to model the number and magnitude of exceedances could be better motivated. The "traditional" framework uses the Poisson and the Generalised Pareto distribution respectively: these are motivated by some well known theoretical results. The Negative binomial extends the Poisson distribution, allowing for overdispersion. I do not quite understand how the Binomial distribution is instead fitted here, as we would need to have a k value of exceedances over N "trials" but the N value should be different from year to year since we only focus on independent peaks. Is this what the authors do? Further the use of the GEV, P3 and LP3 surprised me here as these are typically employed to describe annual maxima and have little theoretical or practical justification in the context of threshold exceedances: they can of course be used, but I'd mention the fact that the GP has a somewhat stronger theoretical grounding.
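To make the overdispersion point concrete, here is a minimal sketch of fitting both count models to hypothetical annual exceedance counts (the data and the method-of-moments negative binomial fit are illustrative assumptions, not the study's):

```python
import numpy as np
from scipy import stats

# Hypothetical annual counts of independent exceedances above a threshold
counts = np.array([2, 5, 1, 4, 3, 6, 2, 0, 3, 4, 7, 1, 2, 3, 5])

lam = counts.mean()            # Poisson rate (also its MLE)
var = counts.var(ddof=1)       # sample variance; var > mean signals overdispersion

ll_pois = stats.poisson.logpmf(counts, lam).sum()

# Negative binomial by method of moments (only defined when var > mean);
# scipy's nbinom(n, p) has mean n(1-p)/p and variance n(1-p)/p**2
ll_nb = None
if var > lam:
    p = lam / var
    n = lam * p / (1 - p)
    ll_nb = stats.nbinom.logpmf(counts, n, p).sum()
```

Comparing the two log-likelihoods (penalized for the extra parameter) is one transparent way to decide whether the negative binomial is worth its added flexibility.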
* The definition of AIC and BIC is not correct in Table 3: the definition is, for AIC, -2*loglik(model) + 2k. For the Gaussian case it can be shown that the loglik of the model reduces to the RSS, but that is a special case of a more general definition. In the caption of the table $o_i$ and $p_i$ should be written using capital letters for consistency with the table content.
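The relationship the reviewer describes can be checked numerically. The sketch below uses the general definition AIC = 2k - 2*loglik and shows that, for a Gaussian error model, it reduces to the RSS form up to a model-independent constant (the numbers are arbitrary):

```python
import numpy as np

def aic(loglik, k):
    """General definition: AIC = 2k - 2 * maximized log-likelihood."""
    return 2 * k - 2 * loglik

def aic_gaussian(rss, n, k):
    # For least squares with MLE variance sigma^2 = RSS/n, the maximized
    # log-likelihood is -n/2 * (log(2*pi) + log(RSS/n) + 1)
    loglik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
    return aic(loglik, k)

n, k, rss = 50, 3, 12.5
general = aic_gaussian(rss, n, k)
rss_form = n * np.log(rss / n) + 2 * k        # the Table-3-style RSS form
constant = n * (np.log(2 * np.pi) + 1)        # same for every model on this data
assert np.isclose(general, rss_form + constant)
```

Because the constant is identical across models fitted to the same data, model rankings by the RSS form agree with the general definition only under the Gaussian assumption.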
* Although the case study is quite interesting, I find it fairly hard to generalise anything from this. How do we know that this approach to PDS modelling is any more suitable than the other currently employed approaches? How could we evaluate that? How does this work in other places? How does this perform under different scenarios of true underlying processes? Overall I think the study does not give enough details about how generalisable the findings are (and actually it is not very clear what the main findings are). The note presents a modelling framework and applies it, but I feel it fails to convince the reader that this modelling approach is somehow better or worth adding to the currently available modelling tools. In particular I feel the modelling approach as presented still very much needs the analyst to make some a priori choices: something that is one of the main issues which make the widespread use of PDS harder to implement.
Some other small minor points in the presentation:
Line 22: interference > inference
Line 29: the average number of events can hardly be larger than the total number of annual maxima. It is often the case that the total number of PDS observations is larger than the number of AMS observations, but this depends on the threshold: a very high threshold might result in a PDS which has fewer observations than the AMS.
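The threshold dependence is easy to demonstrate on synthetic data (independence of peaks is ignored here for brevity; the series is not the Waimakariri record):

```python
import numpy as np

rng = np.random.default_rng(42)
years, days = 30, 365
flow = rng.gamma(2.0, 50.0, size=(years, days))   # synthetic daily discharge

ams_size = years                                   # one annual maximum per year
pds_size = {}
for q in (0.95, 0.999):                            # moderate vs very high threshold
    u = np.quantile(flow, q)
    pds_size[q] = int((flow > u).sum())            # number of exceedances of u
```

With the moderate threshold the PDS here is far larger than the AMS; with the very high one it is smaller, which is exactly the reviewer's caveat.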
Line 52: "gave the best results": in what sense? using what metrics? (This is a fundamental question which might also be addressed in the note: how do we evaluate what methods work well?)
Line 174: "To justify" sounds like an odd wording, maybe "to verify"? Further I would provide some more description of the test (very briefly) specifying the null and alternative hypothesis being tested and how to interpret the result (since these are not really commented on in the text around Figure 2)
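For reference, the test the reviewer asks to be described can be sketched as follows (hypothetical peaks; H0: successive peaks are independent, tau = 0; H1: they are serially correlated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
peaks = rng.gumbel(300.0, 80.0, size=80)      # hypothetical independent PDS peaks

# Rank correlation between each peak and its successor
tau, p_value = stats.kendalltau(peaks[:-1], peaks[1:])

# Fail to reject independence at the 5 % level when p_value > 0.05
independent = p_value > 0.05
```

Stating this null/alternative pair and the significance level in the text would make Figure 2 self-explanatory.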
Section 5: I would expect somewhere a plot showing the data series
Line 379: the threshold is much higher than 2.47 or 3.22: it is the threshold which is exceeded on average between 2.47 and 3.22 times per year.
Citation: https://doi.org/10.5194/hess-2021-570-RC1
AC1: 'Reply on RC1', SONALI SWETAPADMA, 18 Feb 2022
The authors would like to thank the reviewer for all their constructive and insightful comments about this work. We have attached a PDF file that addresses these comments. The required changes have been made in the revised manuscript, which will be uploaded later. For ease of viewing, the replies are shown in blue color.
We believe that these comments help make our manuscript more precise and beneficial to the reader.

RC2: 'Comment on hess-2021-570', Anonymous Referee #2, 08 Jan 2022
General comments:
The technical note “Flood frequency study using partial duration series coupled with entropy principle” is interesting. The subject is practical in flood frequency analysis. However, I could not determine the tangible advantage of the applied method. From my point of view, this paper is more like a research paper than a technical note. In general, it follows the scopes of the HESS journal. The advantage and novelty of the paper have to be highlighted in the manuscript. Therefore, the manuscript has more room for improvement.
Specific comments:
L7 & L14: In the text, you mentioned “quality discharge” several times; what does this term mean?
L17-19: In the abstract, more focus on results is needed. Why is "POME" an effective tool?
In the manuscript and especially in the introduction, you used 34 references that are old (before 2000). It is better to employ recent research. However, it is not a critical point.
L31-33: Could you please elaborate more? How is it possible to thoroughly evaluate the flood-generating processes?!
L37: What does “better performance of PDS” mean?
L39: Please explain “Poisson arrival of peaks” before mentioning it in the text.
L 41: What do you mean by “Poisson process”? Readers demand to have clear fundamental literature in the introduction.
Please reflect and indicate your method's advantage in the introduction. Given a wide variety of "λ", what is the entropy-based models' preference?
You mentioned "probability dist." and "fitting dist." several times; what is your purpose in pointing to them in the introduction part? I understand what you did, but it is not vivid in your manuscript.
L66-72: This paragraph must be rewritten to address the proposed method's necessity and novelty. As it stands, I did not get any points.
L75: In this sentence, what do you mean by “dual”?
Table 1: What is “Γ”?, I did not find its definition in the text.
L100: What is the benefit of the “negative binomial dist.” in your context?
L119: Why, in this paper, does "e" have to be the logarithm base?
L129-130: "therefore, while … (Lee et al., 2011)": I do not understand this sentence.
L131-136: irrelevant to the previous sentences; it has to be somewhere else.
L163-165: Rewrite the sentences. They are not understandable.
L169-171: Could you please graphically explain this condition? Then "intermediate" discharge can be intelligible in L173.
L171: Do you mean mathematically and logically using OR in Eq. 13? Because it has to be in this form.
Table 3: ADC has to be mentioned after AD, not in the end.
L200201: Does not have vivid meaning.
L206: What is conventional statistics in this research? And what do you want to point by comparing these models?
Page 9: How does this method work if λ=2, i.e., two independent events per year?
Is there any way to calculate the threshold by assuming two events per year?
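In principle yes: once the independence criteria have been applied, the threshold for a target rate is just an order statistic of the independent peaks. A sketch under that assumption (hypothetical peaks, not the study's series):

```python
import numpy as np

def threshold_for_rate(peaks, years, lam=2.0):
    """Threshold exceeded on average `lam` times per year:
    the (lam * years)-th largest independent peak."""
    k = int(round(lam * years))
    if not 0 < k <= len(peaks):
        raise ValueError("target rate not attainable with this sample")
    return np.sort(peaks)[-k]

rng = np.random.default_rng(0)
peaks = rng.gumbel(300.0, 80.0, size=120)   # hypothetical peaks from 30 years
u = threshold_for_rate(peaks, years=30, lam=2.0)
n_exc = int((peaks >= u).sum())             # exactly lam * years for distinct peaks
```

This inverts the usual workflow: instead of scanning thresholds and reading off λ, one fixes λ and reads off the threshold.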
Figure 1: It is good instruction; however, you need to elaborate more on the second box "Extract PDS at …", explain the third box in the text before this figure, and answer the question of "what if for the nonlinear approach" for the fourth box.
L 225226: remove it. It is not relevant. “Flood management …”
L228: What does excellence mean in FFA? Does it mean long-term?
Table 4: The mean and std of maximum daily flow are the same. I do not have your data. Do you think, is it correct?!
L238: What are the applied thresholds? Please write them in this part.
Figure 2: in a, did you omit the values above the critical dashed line? What is the interpretation of the negative values in Kendall tau (y-axis)?
I am curious to know the reason for the higher correlation at the 9-step time lag in b.
L244: How many peaks did you select in the designated threshold?
Figure 3 is an excellent figure, but a question arises: why did you consider values below λ=1? What is the benefit of showing, for example, eight peaks per year in FFA? Because they are not "flood" anymore.
L278-279: How do you recognize the linear behavior in the figure? This is an entirely ocular and non-mathematical diagnosis. Where does the plot start to shift?! How do you assess the linearity of the threshold in Figure 4b? By changing the y-axis, it is not linear anymore!
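The "ocular" linearity check the reviewer criticizes is usually based on the mean residual life plot; computing it numerically at least makes the quantity explicit (synthetic sample, illustrative only):

```python
import numpy as np

def mean_excess(flow, thresholds):
    """Mean residual life: average excess above each candidate threshold u.
    Approximate linearity in u above some level suggests GP behaviour there."""
    return np.array([(flow[flow > u] - u).mean() for u in thresholds])

rng = np.random.default_rng(3)
flow = rng.gamma(2.0, 150.0, size=5000)              # synthetic discharge sample
us = np.quantile(flow, np.linspace(0.80, 0.99, 20))  # candidate thresholds
me = mean_excess(flow, us)
```

A formal complement would be to fit a straight line over a moving window of `us` and report where the fit residuals stabilize, instead of judging linearity by eye.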
L283: explain more.
Page 14: What is the range of dispersion ind.? What does 1 in your study mean?
What is/are the reasons for having high DI at low dispersion? and reflect it in the manuscript.
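For context, the dispersion index is simply the variance-to-mean ratio of the annual exceedance counts; DI ≈ 1 is consistent with Poisson arrivals, DI > 1 with overdispersion, DI < 1 with underdispersion. A minimal sketch (made-up counts):

```python
import numpy as np

def dispersion_index(annual_counts):
    """DI = sample variance / mean of annual exceedance counts."""
    c = np.asarray(annual_counts, dtype=float)
    return c.var(ddof=1) / c.mean()

di = dispersion_index([3, 2, 4, 3, 5, 2, 3, 4, 3, 1])   # underdispersed: DI < 1
```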
L319: "The average number of peaks per … 2.5 to 3.2": it is a wide range for a long-term high-resolution time series. What do you think about that? Could you suggest λ=3 as an average value for your case study area? Or is it still sensitive to this range?
Table 5: It is not needed to write the λ column here. It is not in the continuation of the entropy section.
L320-324: Do not need to mention here.
L324: Rewrite the sentence “A similar analysis …”
L339-340: Did you have any other expectation for having a higher design flood for the considerable return period in GEV?!
Figure 9: What is the reason for the abrupt jump around 950? I know what LL & UL are, but please spell out these abbreviations.
L359: Why and how do you select 730?
Page 17: Still, I do not understand the advantages of your method! Is it faster? Is it hydrologically more reasonable? Does it avoid some additional steps?
Conclusion question: Is it possible to have no peaks per year? I mean, the average peak per year is 3.2, and theoretically, it is possible to have several independent peaks in a year and no peak in another year. Did you have such a drought year or period?
Technical corrections:
L36: You already mentioned “λ” in the text.
L46: Please define EDF abbr.
L93-94: Please write this part in equation format.
Table 1: Please crosscheck the L moment expressions. I believe it has a mistake in the typing.
L98: GPA is wrong. It is GP all over the manuscript.
L 109: Can be merged to the above equation.
Page 5: "y" and others are not in the same format as other parts of the paper, i.e., "y" → y
L 114: by (Shannon, 1948) is the wrong citation form.
L 141: Eq. (4), while in line 124, it is written Eqn. So it has to be the same in all parts of the text.
L 149: the “Generalized extreme value” should be “Generalized Extreme Value” or “generalized extreme value”.
L154: Why eq. 11 is bold?
L177: The repetition of (PDS) is not needed anymore.
L185: When you mention the “Schwarz bayesian criterion” instead of the “Bayesian information criterion” term, you should write SBC, SIC, or SBIC, not BIC.
L203: “The Dispersion ind.” Should be “The dispersion ind.”.
L228: Extra parenthesis
L239: in Sect. 2.4 → in Sect. 2.4.
Figure 3: Please fix the place of the arrow for domain 3.
Figure 4: It is better to have the same x-axis (200-400). Also, the y-axis in b is not appropriate.
L284-285: "by Cunnane" is mentioned several times.
L286: DI, did you mention this abbr in the text before?
Please take care of using abbr in the text. Sometimes, it seems that they are written too much!!
Figure 6: Different y-axes make it difficult to compare total entropies. You can at least use the same minor grid with two decimals.
Using colors may be better to show the result. Sometimes it is not easy to recognize the exact points.
L325: KS and AD statistics
Figure 7&8: Surely use colors. Legends are not readable for me.
L373: Different abbreviations: at OH, sometimes it is OBH.
Citation: https://doi.org/10.5194/hess-2021-570-RC2
AC2: 'Reply on RC2', SONALI SWETAPADMA, 18 Feb 2022
The authors would like to thank the reviewer for all their constructive and insightful comments about this work. We have attached a PDF file that addresses these comments. The required changes have been made in the revised manuscript, which will be uploaded later. For ease of viewing, the replies are shown in blue color.
We believe that these comments help make our manuscript more precise and beneficial to the reader.

RC3: 'Comment on hess-2021-570', Anonymous Referee #3, 13 Jan 2022
The manuscript presents an approach based on entropy for choosing the most suitable statistical models to represent partial duration series of streamflow. In particular, it proposes to evaluate the combined entropy of the statistical models describing the arrival of peaks above a certain threshold and the magnitudes above this threshold.
The idea is interesting, especially because it advocates using additional criteria (i.e., the capability to represent occurrences of events exceeding a threshold) to the goodness of fit of theoretical distribution calibrated to magnitude exceedances above the threshold. However, the study has some issues which prevent reaching substantial conclusions. They are described below.
 Calculating entropy constitutes an additional step to the usually applied procedure in this field. The value of performing this additional step should be made clear. As I stated above, I see value in the evaluation of an additional criterion to the goodness of fit of theoretical distribution calibrated to magnitude exceedances above the threshold. However, what advantage does it actually bring with it? Does this method for choosing the most suitable statistical model improve its predictive power? The authors claim it does, but the support of this claim is not clear to me (see the next comment).
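As I read it, the additional step amounts to summing the entropy of the fitted arrival model and that of the fitted magnitude model at each candidate threshold. A sketch of that combined criterion for one Poisson/GP pair (parameter values are invented, and this is my reading of the approach, not the authors' code):

```python
import numpy as np
from scipy import stats

lam = 3.0                  # fitted arrival rate (events per year) at this threshold
shape, scale = 0.1, 120.0  # fitted GP parameters for the excesses

h_arrival = float(stats.poisson.entropy(lam))                     # Shannon entropy
h_magnitude = float(stats.genpareto.entropy(shape, scale=scale))  # differential entropy

h_total = h_arrival + h_magnitude   # combined criterion, natural-log units
```

Making this computation explicit in the note, and stating how `h_total` is compared across thresholds, would clarify what the extra step buys.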
 The authors claim to discuss the predictive ability of the statistical model selected by means of the proposed entropybased approach in Figure 9. The figure shows the flow value associated to 50 and 100 years return period, calculated by means of a generalized Pareto distribution calibrated to exceedances above a set of different thresholds. Confidence intervals of the estimates are also displayed. I do not understand what this figure tells about predictive ability. I would be happy to hear about it, in case I am missing something obvious. First of all, Log Pearson 3 is the most suitable statistical distribution according to the values of entropy, whereas results for generalized Pareto are shown here. Then, where do we see in Figure 9 a better predictive performance of the distribution suggested by the entropy metric? Also, its predictive performance is better compared to what? I guess it should be better compared to the performance of the statistical model that would have been chosen based on goodnessoffit metrics displayed in Figure 7 (see the next comment about the interpretation of those results). I also do not understand how the bootstrapping was performed: could you provide a number for the length of data used for each resampling (line 344)? In addition, the authors state at line 365 that the proposed method “gives more accurate optimum threshold values”. Based on what facts do they claim the threshold identified from the entropy metric to be more accurate? What is their reference value? Lines 358363 simply discuss thresholds identified by means of alternative methods. If the value from the operational guidelines of Lang et al. (1999) (i.e., 730 m^{3}/s, line 359) is used as reference (although this is also just another method) then Langbein (1949) would still provide a more accurate threshold (716 m^{3}/s) than the proposed method (710 m^{3}/s). Please clarify.
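For what it is worth, a nonparametric bootstrap of a PDS return level would look roughly like this (GP fitted to resampled excesses; all numbers are placeholders, not the study's values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
excesses = stats.genpareto.rvs(0.1, scale=120.0, size=90, random_state=rng)
lam, u, T = 3.0, 710.0, 100   # arrival rate (1/yr), threshold (m^3/s), return period (yr)

def return_level(x, lam, u, T):
    c, loc, scale = stats.genpareto.fit(x, floc=0)  # GP fit to excesses above u
    p = 1.0 - 1.0 / (lam * T)                       # per-event non-exceedance prob.
    return u + stats.genpareto.ppf(p, c, loc=0, scale=scale)

B = 100
boot = [return_level(rng.choice(excesses, size=excesses.size, replace=True),
                     lam, u, T) for _ in range(B)]
lo, hi = np.percentile(boot, [2.5, 97.5])           # 95 % bootstrap interval
```

Stating the resample length (here, the full set of excesses drawn with replacement) and the number of replicates B in the note would answer the reviewer's question directly.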
Additional points
 The proposed approach involves several steps which rely on visual observations and graphical analyses. These usually imply a high degree of subjectivity and difficulties to apply them to large datasets. It occurred to me that the approach described in section 2.4 to identify independent peaks is the same adopted by recent papers which leverage the Metastatistical Extreme Value framework to estimate flood magnitude and frequency from the whole series of ordinary peaks (i.e., with no need to define a threshold). Given that this novel statistical approach is gaining momentum, and that differently from the approach proposed here it can be completely automatized, it may be good to spend some words to justify the importance of identifying partial duration series by means of the classical peak over threshold methods.
 Some suggestions concerning the structure of the paper:
  More precise explanations of what is shown in the figures and how it enables reaching the stated results are needed. Just to give two examples: line 240: how did Kendall’s Tau verify the independence of the series? How do we see it?; line 286: why is finally the Poisson and not the Binomial distribution chosen for the arrival of events above a threshold?
  Figures 1 to 5 display results of standard procedures which could be easily summarized with a few words in the text. Although this is a Technical Note where technical details shall be provided, Figure 2b-d only shows examples of results for arbitrarily chosen thresholds and Figure 4b is simply a zoom of Figure 4a. These figures could be deleted, which would help highlight the actual results of the approach proposed in the paper (Figure 6).
  Figures with several panels could be condensed. For example, Figure 6 could be condensed to Figure 6f only; Figure 7 can be condensed in one single panel displaying total rank only.
 Several minor issues exist in the paper, especially related to correct and precise use of language (e.g., lines 22, 68, 163, 167, 174, 128-129, 130, 212, 216, 255), definition of symbols and units (symbols shall be introduced the first time a variable is named, e.g., t is only defined at line 274 although appearing in Figure 1), differences between statements on the same subject (e.g., lines 88 and 314), and motivations for showing these specific plots, given that many are examples for, e.g., different thresholds (e.g., Figures 2 and 7). I do not detail them all here given the prior need to address the major issues described above. A careful revision of the manuscript is however recommended.
Citation: https://doi.org/10.5194/hess-2021-570-RC3
AC3: 'Reply on RC3', SONALI SWETAPADMA, 18 Feb 2022
The authors would like to thank the reviewer for all their constructive and insightful comments about this work. We have attached a PDF file that addresses these comments. The required changes have been made in the revised manuscript, which will be uploaded later. For ease of viewing, the replies are shown in blue color.
We believe that these comments help make our manuscript more precise and beneficial to the reader.
Status: closed

RC1: 'Comment on hess2021570', Anonymous Referee #1, 24 Dec 2021
The Technical Note "Flood frequency study using partial duration series coupled with entropy principle" by Swetapadma and Ojha discusses methods to use partial duration series type of data to carry out flood frequency estimation. The topic is interesting and appropriate for the journal, but I somehow fail to see what the main contributions of the note, which I think does not provide a clear overview of the new developments, significant advances, and novel aspects of experimental and theoretical methods and techniques which are relevant for scientific investigations within the journal scope (this is a quote from the description of HESS technical notes).
The paper is fairly well organised and the references mostly suitable, giving an overview of what is the current understanding of the question. It presents the modelling framework using a case study in New Zealand.
My understanding is that the novel contribution proposed by the authors is to use entropy as a way to choice the PDS threshold, but I am not entirely sure this innovation is presented in a clear and convincing way. In particular there are a few points that I find quite unclear or that I believe undermine the strength of the authors' argument: I'll try to outline them below.
* I feel there note is somewhat lacking a discussion of the consequences connected to the many choices which are done in the modelling pipeline: the more obvious one to me is the choice of estimating the distribution parameters with Lmoments rather than with other methods. Would the threshold/distribution choice be different if we used standard moments or maximum likelihood to estimate the parameters?
* the choice of distributions used to model the number and magnitude of exceedances could be better motivated. The "traditional" framework uses the Poisson and the Generalised Pareto distribution respectively: these are motivated by some well known theoretical results. The Negative binomial extends the Poisson distribution, allowing for overdispersion. I do not quite understand how the Binomial distribution is instead fitted here, as we would need to have a k value of exceedances over N "trials" but the N value should be different from year to year since we only focus on independent peaks. Is this what the authors do? Further the use of the GEV, P3 and LP3 surprised me here as these are typically employed to describe annual maxima and have little theoretical or practical justification in the context of threshold exceedances: they can of course be used, but I'd mention the fact that the GP has a somewhat stronger theoretical grounding.
* The definition of AIC and BIC is not correct in Table 3: the definition is, for AIC, n*loglik(model) + 2k . For the Gaussian case it can be shown that the loglik of the model reduces to the RSS, but that is a special case of a more general definition. In the caption of the table $o_i$ and $p_i$ should be written using capital letters for consistency with the table content
* Although he case study is quite interesting I find it is fairly hard to generalise anything from this. How do we know that this approach to PDS modelling is any more suitable than the other currently employed approaches? How could we evaluate that? How does this work in other places? How does this perform under different scenarios of true underlying processes? Overall I think the study does not give enough details about how generalisable the findings are (and actually it is not very clear what the main findings are). The note presents a modelling framework and applies it, but I feel it fails to convince the reader that this modelling approach is somehow better or worth adding to the currently available modelling tools. In particular I feel the modelling approach as presented is still very much needing the analyst to make some apriori choices: something that is one of the main issues which make the widespread use of PDS harder to implement.
Some other small minor points in the presentation:
Line 22: interference > inference
Line 29: the average number of events can be hardly be larger than the total number of annual maxima. It is often the case that the total number of PDS observations are more than the AMS observations, but this depends on the threshold: a very high threshold might result in PDS which have less observations than AMS.
Line 52: "gave the best results": in what sense? using what metrics? (This is a fundamental question which might also be addressed in the note: how do we evaluate what methods work well?)
Line 174: "To justify" sounds like an odd wording, maybe "to verify"? Further I would provide some more description of the test (very briefly) specifying the null and alternative hypothesis being tested and how to interpret the result (since these are not really commented on in the text around Figure 2)
Section 5: I would expect somewhere a plot showing the data series
Line 379: the threshold is much higher than 2.47 or 3.22: the threshold which is exceeded on average between 2.47 and 3.22 times per year.Citation: https://doi.org/10.5194/hess2021570RC1 
AC1: 'Reply on RC1', SONALI SWETAPADMA, 18 Feb 2022
The authors would like to thank the reviewer for all their constructive and insightful comments about this work. We have attached a PDF file that complies with these comments. And the required changes are made in the revised manuscript, which will be uploaded later. For ease of viewing, the replies are shown in blue color.
We believe that these comments help make our manuscript more precise and beneficial to the reader.

AC1: 'Reply on RC1', SONALI SWETAPADMA, 18 Feb 2022

RC2: 'Comment on hess2021570', Anonymous Referee #2, 08 Jan 2022
General comments:
The technical note “Flood frequency study using partial duration series coupled with entropy principle” is interesting. The subject is practical in flood frequency analysis. However, I could not determine the tangible advantage of the applied method. From my point of view, this paper is more like a research paper than a technical note. In general, it follows the scopes of the HESS journal. The advantage and novelty of the paper have to be highlighted in the manuscript. Therefore, the manuscript has more room for improvement.
Specific comments:
L7 & L14: In the text, you mentioned “quality discharge” several times; what does this term mean?
L1719: In the abstract, more focus on results is needed. Why “POME” is an effective tool?
In the manuscript and especially in the introduction, you used 34 references that are old (before 2000). It is better to employ recent research. However, it is not a critical point.
L3133: Could you please elaborate more? How is it possible to thoroughly evaluate the flood generating processes?!
L37: What does “better performance of PDS” mean?
L39: Please explain “Poisson arrival of peaks” before mentioning it in the text.
L 41: What do you mean by “Poisson process”? Readers demand to have clear fundamental literature in the introduction.
Please reflect and indicate your method advantage in the introduction. By having a wide variety of “λ”, what is entropybased models' preference?
You mentioned several times “probability dist.” And “fitting dist”, what are your purposes to point them in the introduction part? I understand what you did, but is it not vivid in your manuscript.
L6672: This paragraph must be rewritten to address the purposed method's necessity and novelty. Now, I did not get any points.
L75: In this sentence, what do you mean by “dual”?
Table 1: What is “Γ”?, I did not find its definition in the text.
L100: What is the benefit of the “negative binomial dist.” in your context?
L119: Why, in this paper, “e” has to be in a logarithm base?
L129130: “therefore, while … (Lee et al., 2011). I do not understand this sentence.
L131136: irrelevant to previous sentences; it has to be somewhere else.
L163165: Rewrite the sentences. It is not understandable.
L169171: Could you please graphically explain this condition?, then “intermediate” discharge can be intelligible in L173.
L171: Do you mean mathematically and logically using OR in Eq. 13? Because it has to be in this form.
Table 3: ADC has to be mentioned after AD, not in the end.
L200201: Does not have vivid meaning.
L206: What is conventional statistics in this research? And what do you want to point by comparing these models?
Page 9: What does this method work if λ=2? Two independent events per year.
Is there any way to calculate the threshold by assuming two events per year?
Figure 1: It is good instruction; however, you need to elaborate more on the second box “Extract PDS at …”, explain the third bo in the text, before this figure, and answer the question of “what if for nonlinear approach” for the fourth box.
L 225226: remove it. It is not relevant. “Flood management …”
L228: What does excellence mean in FFA? Does it mean longterm?
Table 4: The mean and std of maximum daily flow are the same. I do not have your data. Do you think, is it correct?!
L238: What are the applied thresholds? Please write them in this part.
Figure 2: in a, did you omit the values upper the critical dash line? What is the interpretation of the negative values in Kendal tau (yaxis)
I am curious to know the reason for the higher correlation in 9 step time lag in b.
L244: How many peaks did you select in the designated threshold?
Figure3: is an excellent figure, but a question rase up, why did you consider values below λ=1? What is the benefit of showing, for example, eight peaks per year in FFA? Because they are not “flood” anymore.
L278279: How do you recognize the linear behavior in the figure? This is an entirely ocular and nonmathematical diagnosis. Where does the plot start to shift?! How do you consider linearity if the threshold in Figure 4.b? By changing the yaxis, it is not linear anymore!
L283: explain more.
Page 14: What is the range of dispersion ind.? What does 1 in your study mean?
What is/are the reasons for having high DI at low dispersion? and reflect it in the manuscript.
L319: “The average number of peaks per … 2.5 to 3.2”, It is a wide range for longterm highresolution time series. What do you think about that? Could you suggest λ=3 as an average value for your case study area? Or, it is still sensitive to this range.
Table 5: The λ column is not needed here; it does not follow on from the entropy section.
L320–324: This does not need to be mentioned here.
L324: Rewrite the sentence “A similar analysis …”
L339–340: Did you expect anything other than a higher design flood at large return periods for GEV?
Figure 9: What is the reason for the abrupt jump around 950? I know what LL & UL are, but please spell out these abbreviations.
L359: Why and how do you select 730?
Page 17: I still do not understand the advantages of your method. Is it faster? Is it hydrologically more reasonable? Does it avoid some additional steps?
Conclusion question: Is it possible to have no peaks in a year? The average number of peaks per year is 3.2, and theoretically it is possible to have several independent peaks in one year and none in another. Did you have such a drought year or period?
Technical corrections:
L36: You already mentioned “λ” in the text.
L46: Please define EDF abbr.
L93–94: Please write this part in equation format.
Table 1: Please cross-check the L-moment expressions; I believe there is a typing mistake.
L98: “GPA” is wrong; it is “GP” throughout the rest of the manuscript.
L109: This can be merged with the above equation.
Page 5: “y” and other symbols are not in the same format as in other parts of the paper, i.e., “y” → y.
L114: “by (Shannon, 1948)” is the wrong citation form; it should be “by Shannon (1948)”.
L141: “Eq. (4)”, while in line 124 it is written “Eqn.”; the usage has to be consistent in all parts of the text.
L 149: the “Generalized extreme value” should be “Generalized Extreme Value” or “generalized extreme value”.
L154: Why is Eq. 11 in bold?
L177: Repeating the definition of (PDS) is not needed anymore.
L185: When you use the term “Schwarz Bayesian criterion” instead of “Bayesian information criterion”, you should write SBC, SIC, or SBIC, not BIC.
L203: “The Dispersion ind.” Should be “The dispersion ind.”.
L228: Extra parenthesis
L239: “in Sect. 2.4” → “in Sect. 2.4.”
Figure 3: Please fix the placement of the arrow for domain 3.
Figure 4: It is better to use the same x-axis range (200–400). Also, the y-axis in panel (b) is not appropriate.
L284–285: “by Cunnane” is mentioned several times.
L286: DI: did you define this abbreviation earlier in the text?
Please take care with the use of abbreviations in the text; sometimes they seem overused.
Figure 6: The different y-axis scales make it difficult to compare total entropies. You could at least use the same minor grid with two decimals.
Using colors may show the results better; sometimes it is not easy to identify the exact points.
L325: KS and AD statistics
Figures 7 & 8: Definitely use colors; the legends are not readable for me.
L373: Inconsistent abbreviation: “OH” here, but sometimes “OBH” elsewhere.
Citation: https://doi.org/10.5194/hess-2021-570-RC2
AC2: 'Reply on RC2', SONALI SWETAPADMA, 18 Feb 2022
The authors would like to thank the reviewer for all their constructive and insightful comments on this work. We have attached a PDF file containing our replies to these comments. The required changes have been made in the revised manuscript, which will be uploaded later. For ease of viewing, the replies are shown in blue.
We believe these comments help make our manuscript more precise and beneficial to the reader.


RC3: 'Comment on hess-2021-570', Anonymous Referee #3, 13 Jan 2022
The manuscript presents an approach based on entropy for choosing the most suitable statistical models to represent partial duration series of streamflow. In particular, it proposes to evaluate the combined entropy of the statistical models describing the arrival of peaks above a certain threshold and the magnitudes above this threshold.
The idea is interesting, especially because it advocates using criteria additional to the goodness of fit of the theoretical distribution calibrated to magnitude exceedances above the threshold (i.e., the capability to represent occurrences of events exceeding that threshold). However, the study has some issues which prevent it from reaching substantial conclusions; they are described below.
 Calculating entropy constitutes an additional step in the procedure usually applied in this field, and the value of performing this additional step should be made clear. As I stated above, I see value in evaluating an additional criterion alongside the goodness of fit of the theoretical distribution calibrated to magnitude exceedances above the threshold. However, what advantage does it actually bring? Does this method for choosing the most suitable statistical model improve its predictive power? The authors claim it does, but the support for this claim is not clear to me (see the next comment).
 The authors claim to discuss, in Figure 9, the predictive ability of the statistical model selected by means of the proposed entropy-based approach. The figure shows the flow values associated with the 50- and 100-year return periods, calculated by means of a generalized Pareto distribution calibrated to exceedances above a set of different thresholds. Confidence intervals of the estimates are also displayed. I do not understand what this figure tells us about predictive ability; I would be happy to hear about it, in case I am missing something obvious. First of all, log-Pearson 3 is the most suitable statistical distribution according to the values of entropy, whereas results for the generalized Pareto are shown here. Then, where do we see in Figure 9 a better predictive performance of the distribution suggested by the entropy metric? Also, its predictive performance is better compared to what? I guess it should be better than the performance of the statistical model that would have been chosen based on the goodness-of-fit metrics displayed in Figure 7 (see the next comment about the interpretation of those results). I also do not understand how the bootstrapping was performed: could you provide a number for the length of data used for each resampling (line 344)? In addition, the authors state at line 365 that the proposed method “gives more accurate optimum threshold values”. Based on what facts do they claim the threshold identified from the entropy metric to be more accurate? What is their reference value? Lines 358–363 simply discuss thresholds identified by means of alternative methods. If the value from the operational guidelines of Lang et al. (1999) (i.e., 730 m³/s, line 359) is used as the reference (although this is also just another method), then Langbein (1949) would still provide a more accurate threshold (716 m³/s) than the proposed method (710 m³/s). Please clarify.
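To make the bootstrapping question concrete: a standard nonparametric bootstrap for PDS quantiles resamples the exceedances with replacement (same length as the original set), refits, and recomputes the T-year quantile. A minimal sketch, with a method-of-moments GP fit standing in for whatever estimator the authors actually used (all names and the 95 % level are illustrative assumptions):

```python
import numpy as np

def gpd_mom(excesses):
    """Method-of-moments GP fit (Hosking's k convention) -- a simple
    stand-in for the paper's (unstated here) estimation method."""
    m, v = excesses.mean(), excesses.var(ddof=1)
    k = 0.5 * (m * m / v - 1.0)
    sigma = 0.5 * m * (m * m / v + 1.0)
    return k, sigma

def quantile_T(u, k, sigma, lam, T):
    """T-year quantile of a Poisson(lam)-GP model above threshold u."""
    return u + sigma / k * (1.0 - (lam * T) ** (-k))

def bootstrap_ci(excesses, u, lam, T, n_boot=2000, seed=1):
    """Resample the exceedances with replacement (same length as the
    original set), refit, and return a 95 % interval for the T-year flood."""
    rng = np.random.default_rng(seed)
    qs = []
    for _ in range(n_boot):
        sample = rng.choice(excesses, size=len(excesses), replace=True)
        k, sigma = gpd_mom(sample)
        qs.append(quantile_T(u, k, sigma, lam, T))
    return np.percentile(qs, [2.5, 97.5])
```

Stating the resample length (here, the number of exceedances at the chosen threshold) would answer the question about line 344 directly.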
Additional points
 The proposed approach involves several steps which rely on visual observations and graphical analyses. These usually imply a high degree of subjectivity and are difficult to apply to large datasets. It occurred to me that the approach described in Sect. 2.4 to identify independent peaks is the same as that adopted by recent papers which leverage the Metastatistical Extreme Value framework to estimate flood magnitude and frequency from the whole series of ordinary peaks (i.e., with no need to define a threshold). Given that this novel statistical approach is gaining momentum, and that, differently from the approach proposed here, it can be completely automated, it may be good to spend some words justifying the importance of identifying partial duration series by means of the classical peak-over-threshold method.
 Some suggestions concerning the structure of the paper:
  More precise explanations of what is shown in the figures, and of how it enables the stated results to be reached, are needed. Just to give two examples: line 240, how did Kendall’s tau verify the independence of the series, and how do we see it? Line 286, why is the Poisson and not the binomial distribution finally chosen for the arrival of events above a threshold?
  Figures 1 to 5 display results of standard procedures which could easily be summarized in a few words in the text. Although this is a Technical Note where technical details shall be provided, Figure 2b–d only shows examples of results for arbitrarily chosen thresholds, and Figure 4b is simply a zoom of Figure 4a. These figures could be deleted, which would help highlight the actual results of the approach proposed in the paper (Figure 6).
  Figures with several panels could be condensed. For example, Figure 6 could be condensed to Figure 6f only; Figure 7 could be condensed into one single panel displaying the total rank only.
 Several minor issues exist in the paper, especially related to correct and precise use of language (e.g., lines 22, 68, 163, 167, 174, 128–129, 130, 212, 216, 255), definition of symbols and units (symbols shall be introduced the first time a variable is named; e.g., t is only defined at line 274 although it appears in Figure 1), differences between statements on the same subject (e.g., lines 88 and 314), and motivations for showing these specific plots, given that many are examples for, e.g., different thresholds (e.g., Figures 2 and 7). I do not detail them all here given the prior need to address the major issues described above. A careful revision of the manuscript is however recommended.
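For completeness, the peak-identification step mentioned above (Sect. 2.4 of the manuscript) is usually a declustering rule of the following kind. A minimal sketch under assumed criteria (largest value per exceedance cluster, a hypothetical 7-step minimum separation; the manuscript's own independence rule may differ):

```python
import numpy as np

def extract_independent_peaks(flows, threshold, min_separation=7):
    """Keep the largest value of each exceedance cluster, and require kept
    peaks to be at least `min_separation` steps apart; when two cluster
    peaks are closer, only the larger one survives. Both the 7-step default
    and the rule itself are illustrative assumptions."""
    peaks = []
    i, n = 0, len(flows)
    while i < n:
        if flows[i] > threshold:
            j = i
            while j < n and flows[j] > threshold:
                j += 1                      # walk to the end of the cluster
            p = i + int(np.argmax(flows[i:j]))
            if not peaks or p - peaks[-1] >= min_separation:
                peaks.append(p)
            elif flows[p] > flows[peaks[-1]]:
                peaks[-1] = p               # larger of two dependent peaks
            i = j
        else:
            i += 1
    return peaks
```

A rule of this form is fully automatic, which is the property that distinguishes it from the visual steps criticized in the first bullet above.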
Citation: https://doi.org/10.5194/hess-2021-570-RC3
AC3: 'Reply on RC3', SONALI SWETAPADMA, 18 Feb 2022
The authors would like to thank the reviewer for all their constructive and insightful comments on this work. We have attached a PDF file containing our replies to these comments. The required changes have been made in the revised manuscript, which will be uploaded later. For ease of viewing, the replies are shown in blue.
We believe these comments help make our manuscript more precise and beneficial to the reader.