the Creative Commons Attribution 4.0 License.
If a Rainfall-Runoff Model was a Hydrologist
Abstract. Personification can be annoying, but also instructive. If a Rainfall-Runoff (RR) model was a hydrologist it could be called the Modelled Hydrologist, MH. Ideally, an MH used when tackling real-world problems such as flooding and climate change would be acquainted with hydrologic laws at the catchment scale and with a diverse panel of desk and field hydrologists who have between them thousands of years of experience. In practice, though, the MHs for RR models are largely ignorant of hydrology. Some of this ignorance is real (e.g. the hydrologic laws are unknown). The rest is selective ignorance, as is practised throughout science whenever there is a need for complex system analysis, parsimony, or similar. It is a form of designed neglect. In lumped RR modelling, the classic MH is that used in Jakeman and Hornberger’s experiment on their question “How much complexity is warranted in a rainfall-runoff model?”. Based on what that MH “knows”, it is a statistician dilettante-hydrologist. When studying difficult and confusing problems (conundrums) like RR modelling it is helpful to have simple concrete examples to use as benchmarks and when generating hypotheses. Here, an MH for lumped modelling is built which is a layman with an interest in the weather and river flows (e.g. a river fisherman). The MH is created in a novel experiment in which statements of knowledge in everyday English are transformed systematically into a novel parameterless RR model. For a set of 38 UK catchments, the relative importance is measured as 1 and 6, respectively, for the layman’s knowledge about seasonality and wetness, and 2 for the knowledge that runoff records are always unpredictable to some degree. Hydrologic laws are discussed and hydrologic similarity in time and place is explored.
Status: closed
- RC1: 'Review of hess-2021-170', Keith Beven, 23 Apr 2021
Review of Ewen and O’Donnell, If a rainfall-runoff model was a hydrologist, by Keith Beven
It is always interesting to read one of John’s papers, and this one is no exception. Given the current penchant for throwing data into the black boxes of machine/deep learning algorithms and declaring success without producing much in the way of understanding, the more thoughtful approach presented here, in a highly original way, is of value. I did find the paper not easy to read in places, with some sweeping generalisations and somewhat limited reference to previous work (see below). And the end result is limited to a really simple non-parametric modelling approach that appears to reflect only the basic minimum of hydrological knowledge: that flows reflect today’s rainfall and some index of antecedent conditions (here represented only by matching the pattern of average rainfalls over the last n days, in a least-squares sense, without any allowance for autocorrelation in those values).
It sort of works (even better than a calibrated rainfall-runoff model for some of the catchments and mostly not much worse in terms of NSE). It sort of works for transfer from a proxy catchment (but see comment below on this). But there are perhaps some issues of hydrological knowledge that are somewhat glossed over.
- There will be uncertainties in the data. These are mentioned but then ignored. But can be significant – e.g. where event runoff coefficients are highly variable and go greater than 1 (see Beven and Smith, 2015; Beven, 2019).
- This implies that it might be useful to allow for uncertainty in the mean prediction – you can after all pattern match to get an ensemble of possible values which could be treated as a first estimate of a pdf reflecting uncertainty.
- The use of daily data is convenient in terms of getting hold of the data but will be subject to discretisation issues in small catchments (where the peak occurs in the day affects the volume for that day) and autocorrelation issues in larger catchments (every day is here treated as independent, even on recessions).
- Snow is mentioned, but then neglected. It may be relatively unimportant in most UK catchments perhaps, but one of the outcomes from the Iorgulescu and Beven (2004) attempt at a similar non-parametric data-based predictor based on CART, also with different rainfall period inputs, was that the classification identified anomalous periods associated with delayed snowmelt. That this might happen is hydrological knowledge easily stated in English!
- For the transferability in space, a brute force approach to finding the best KERR model is taken by checking all the catchments in the data set and picking the best as the donor site. That cannot be used if a catchment was treated as ungauged (when the proxy basin transfer is actually required) and really does not seem to be making too much use of hydrological knowledge (though the difficulty of transferring response characteristics using catchment characteristics or model parameters is, of course, well known). But our model hydrologist could perhaps be expected to know that there is an expectation that catchments of different scales might involve different processes, or that catchments in hard rock wet areas of the west might be expected to be different to chalk catchments in the south east. So, in respect of the title of the paper, the words “very naïve” might need to be added before hydrologist (and indeed the MH is referred to elsewhere as a layman or angler rather than someone with better hydrological knowledge).
- The paper does not mention that we are often wanting to simulate the potential effects of future change. If that change is only to the inputs then the proposed strategy might work, perhaps with some degradation if the processes change. If, however, the change is due to reforestation or NFM measures or other changes, then it could be used as a baseline to compare with future observations, but not as a simulator of a changed future (and indeed the changes might be within the uncertainty of the predictions if that was assessed in some way).
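The predictor as described in this review (a least-squares match of trailing average-rainfall patterns against the historical record, with the matched flows usable as an ensemble, per the second bullet above) can be sketched roughly as follows. This is a hypothetical minimal reading, not the authors' actual KERR code: the function name, the window lengths, and the choice of k nearest matches are all assumptions made for illustration.

```python
import numpy as np

def pattern_match_predict(rain, flow, t, windows=(1, 5, 30), k=10):
    """Predict flow on day t by matching antecedent-rainfall patterns.

    For each candidate historical day, build a feature vector of mean
    rainfall over several trailing windows, find the k days whose
    vectors are closest (in a least-squares sense) to day t's, and
    return the mean and the full ensemble of their observed flows.
    """
    def features(i):
        # Trailing mean rainfall over each window length ending at day i.
        return np.array([rain[i - w:i].mean() for w in windows])

    target = features(t)
    max_w = max(windows)
    # Candidate days must have a full antecedent record; exclude day t itself.
    candidates = [i for i in range(max_w, len(flow)) if i != t]
    dists = np.array([np.sum((features(i) - target) ** 2) for i in candidates])
    nearest = np.argsort(dists)[:k]
    ensemble = np.array([flow[candidates[i]] for i in nearest])
    return ensemble.mean(), ensemble
```

Returning the k nearest flows, rather than only the single best match, gives the ensemble suggested above as a first estimate of a pdf reflecting predictive uncertainty; matching on several window lengths lets both short- and long-memory wetness influence the match.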
Which then, of course, raises the question of what might happen if the MH had access to that committee of experienced hydrologists (or even inexperienced hydrologists – see the tale of the hydrological monkeys in the Prophecy paper cited). That experience might lead them to think more in terms of model parameters than direct use of data (Norman Crawford, Sten Bergstrom, and Dave Dawdy are examples from quite different modelling strategies but all were known for their skill in estimating parameters for models, including for ungauged sites …. though there may have been some potential for positive bias in tweaking and reporting results there). And there are instances of committees of experienced hydrologists not doing that well in setting up models and even getting worse results as more data were made available (see the Rae Mackay et al. groundwater example from NIREX days).
I would suggest that the authors could make more of the difficulties of going further with more experience and knowledge about catchment characteristics. It is an argument for their KERR approach – but I would also suggest that the KERR approach also be extended to reflect the uncertainty to be expected as a result of that simplicity.
Some specific comments
L37 Best available theory – but there is also the issue of whether that theory is good enough when it differs from the perceptual model of the processes.
L56. There seems to be a lot of overlap between what is referred to here as hydrological knowledge and the concept of a qualitative perceptual model. Both need to be simplified to make quantitative predictions (and often do so in ways that conflict with the perceptual model because of what is called here selective ignorance).
P121. There were earlier suggestions of this approach, e.g. Buytaert and Beven, 2009, or even the donor catchment approach of the FSR/FEH.
L128. This is analogous to the Condition Tree concept in Beven et al, CIRIA Report C721 (also Beven and Alcock, 2012) that results in an audit trail to be evaluated by others
L132. Performance really ought to take account of uncertainty in the data (see earlier comment and papers cited in Beven, 2019)
L137. But catchments that look very similar can also respond quite differently – even if mapped as the same soils/geology. We have an example from monitoring two small catchments on the Howgills. So the issue of requisite knowledge here is when such small-scale variability might integrate out (or not) – this was discussed in the 80s as the representative elementary area concept (e.g. Wood et al., 1988; see also papers on when variability in stream chemistry starts to integrate out).
L149. I think the “peasant’s model” was suggested by Eamon Nash in modelling the Nile before this.
L166. But again that upper limit will also definitely depend on the uncertainty and inconsistencies in the observations.
L182 nowhere to hide – exactly the point made for the Condition Tree / audit trail
L336. Why not use an ensemble here to add in some uncertainty to the process?
L394. But it is only a match to rainfall pattern – is there no additional knowledge that could be used? In the case of expected greater autocorrelation in large catchment for example (or extension to shorter time steps in small catchments) perhaps the last few predictions of flow might be useful (avoiding just doing 1 step ahead forecasting, though such a model with updating does also represent the forecasting model to beat – e.g the Lambert ISO model, see RRM book)
P425 relative importance is 2???? Not clear
P426. Always unpredictable – see the inexact science paper again
P456. Does not require scaling – There is an expectation that processes change with increasing scale, and that specific discharge becomes less variable with increasing scale, and generally less in moving from headwaters to large scales, so does this imply that this is compensated by a decline in mean catchment rainfalls so that the power on the area scaling is low, or simply that the variability is within the uncertainty of the predictions so does not have too great an effect on NSE? There are past studies on scaling with area that might be treated as hydrological knowledge here.
Wood, E.F., Sivapalan, M., Beven, K.J. and Band, L. (1988), Effects of spatial variability and scale with implications to hydrologic modelling. J. Hydrology, 102, 29-47.
Buytaert, W and Beven, K J, 2009, Regionalisation as a learning process, Water Resour. Res., 45, W11419, doi:10.1029/2008WR007359.
Beven, K. J. and Alcock, R., 2012, Modelling everything everywhere: a new approach to decision making for water management under uncertainty, Freshwater Biology, 56, 124-132, doi:10.1111/j.1365-2427.2011.02592.x
Beven, K. J., and Smith, P. J., 2015, Concepts of Information Content and Likelihood in Parameter Calibration for Hydrological Simulation Models, ASCE J. Hydrol. Eng., 20(1), p.A4014010, doi: 10.1061/(ASCE)HE.1943-5584.0000991.
Beven, K. J., 2019, Towards a methodology for testing models as hypotheses in the inexact sciences, Proceedings Royal Society A, 475 (2224), doi: 10.1098/rspa.2018.0862
Iorgulescu, I and Beven, K J, 2004, Non-parametric direct mapping of rainfall-runoff relationships: an alternative approach to data analysis and modelling, Water Resources Research, 40 (8), W08403, 10.1029/2004WR003094
Citation: https://doi.org/10.5194/hess-2021-170-RC1
- AC3: 'Reply on RC1', Greg O'Donnell, 07 Jun 2021
- CC1: 'Comment on hess-2021-170', Grey Nearing, 27 Apr 2021
Review by Grey Nearing
Summary of Review:
It’s always exciting to see people thinking about hydrology in new ways, and I was initially excited to receive this paper to review. However, after reading it carefully my opinion is that there is little new in this paper, either technically or philosophically. While I appreciate the effort to reframe some of the historical arguments into a new synthesis, I found the synthesis unnecessarily convoluted and the paper generally difficult to read. The idea of “Knowledge Documents” or “Knowledge Tables” is interesting at first glance (it is always a good idea to document all assumptions), however I cannot see what this adds over standard practices for documenting models, model development, and scientific experiments. Additionally, the model presented in this paper performed poorly relative to the benchmark model.
It seems unnecessarily difficult and cumbersome to document everything in “everyday English” as the authors suggest (line 130 and elsewhere). There is a reason why we use equations, plots, technical language and jargon, and diagrams to convey information in scientific documents, and I neither want to read nor write papers where everything that is “assumed known” (table 2) must be expressed in everyday English and/or in table format. I do not see in this paper any suggestion about how the current style of technical writing fails - good papers and technical reports are already expected to document all assumptions.
To be blunt, it’s a big ask to change the way we do technical writing, and this paper does not approach the problem with any theory or evidence to support that suggestion. There is a science (and history) of technical writing (e.g., [1, 2]), and this paper makes suggestions that belong in that area of study without acknowledging the existence of that discipline, adopting any of the practices of analysis from that discipline, or even citing any papers from that discipline. I am also not an expert in the theory of technical writing, but given that this is the main focus of the paper I guess it should play a central role.
I have several major concerns with the philosophy of modeling outlined in this paper, and could go through the article paragraph by paragraph to highlight why I think that many of the ideas developed here are either redundant or, in certain cases, counter-productive. But I hesitate to spend too much time going through these concerns in detail because I don’t see a path to publication given the reasons summarized above. If there is some disagreement between reviewers, I will create a more detailed account of my concerns throughout the paper.
General Concerns:
I will try to respond briefly to the main points of the paper (without deconstructing the philosophical discussion in the paper). The paper is written in a way that makes it difficult to understand what the authors intend to be their primary contributions, so the points below reflect my best attempt at extracting the main points.
- The paper introduces a concept of a “Modeled Hydrologist” to help contextualize certain difficulties in developing complex systems models.
The “Modeled Hydrologist” concept appears to be similar to what has previously been called the “perceptual” and/or “conceptual” parts of hydrological modeling [3, 4]. I’m not sure what anthropomorphizing these concepts, or the model itself, adds to the discussion. It is already well known and widely discussed that models are collections of approximations and assumptions, and there are numerous papers in the hydrology literature that explicitly recognize and discuss this [e.g., 5]. I am having trouble seeing any new ideas here, or any explanation about what is missing from previous “philosophy of modeling” work.
- The paper argues to use “Knowledge Documentation” to “document comprehensively what is assumed known, so that “the ignorance can be deduced when necessary.”
This comment reiterates my main concern from the review summary above. I do not think that the argument to use “knowledge documents” is well thought out. Documenting all assumptions and methods is the purpose of standard technical writing in science. The format and customs around this ~300-year-old culture and practice of constructing scientific documents, documenting assumptions, and allowing reproducibility are mature and, in my opinion, very effective when done correctly. What is missing from this? Why focus on plain English statements? Why not use the standard set of tools that scientists use to perform exactly this job (e.g., equations, figures, plots, tables, etc.)? All of these things contain knowledge or information in any meaningful sense. Equations especially are used to formally document assumptions and are often much more efficient and precise than written text.
My opinion is that by encouraging framing assumptions in everyday English, we will encourage sloppy and imprecise thinking. To be perfectly frank, it would be difficult for me to think of a suggestion about how to change scientific writing that I would disagree with more strongly than this one. Our goal should be to move *away from* plain language descriptions toward more formal, mathematical descriptions of all assumptions.
Further, the knowledge tables in this paper (e.g., Table 6) do not contain enough information to recreate the model and experiment. Not only are they difficult to read and to synthesize, they are incomplete. If the model were described in the traditional way, it would be much easier to understand and reproduce. Maybe I’m just not used to these tables, but it would be much easier both to see and understand your assumptions and also to recreate your model structure if you just used equations like normal papers do.
A counter argument for the position I’ve outlined is that it might be easier for artificial intelligence or natural language processing systems to extract conceptual or semantic information from knowledge tables like the ones used here, rather than from the narrative style text that is more common in scientific writing. This is an interesting thought; however, that is not the goal here. If we were to formally adapt scientific writing practices to be more accessible to NLP and automated text readers, I would like to see the problem addressed (theoretically and empirically) from that perspective.
- The paper outlines a new model that is used as an example of applying the Modeled Hydrologist concept to guide and/or document model development.
The new model performs poorly against the benchmark. What is the advantage of this new model vs. existing models? What are the use cases for a model like this? Is the only reason for creating this new model to provide an example for the “knowledge table” procedure? If so, it seems artificial. In general, the model itself is not something that I can see any hydrologist using or being interested in for any practical or intellectual reason, and publishing a new model to illustrate a new philosophy would be interesting if it worked and if it demonstrated some particular value to the new philosophy, but here it didn’t work and the authors did not make a convincing argument about how or why the new philosophy gave us something that standard scientific writing would not.
[1] O Hara, F. M. "A brief history of technical communication." ANNUAL CONFERENCE-SOCIETY FOR TECHNICAL COMMUNICATION. Vol. 48. UNKNOWN, 2001.
[2] Todd, Jeff. "Teaching the history of technical communication: A lesson with Franklin and Hoover." Journal of technical writing and communication 33.1 (2003): 65-81.
[3] Beven, Keith. "Spatially distributed modeling: Conceptual approach to runoff prediction." Recent Advances in the Modeling of Hydrologic Systems. Springer, Dordrecht, 1991. 373-387.
[4] Gupta, Hoshin V., et al. "Towards a comprehensive assessment of model structural adequacy." Water Resources Research 48.8 (2012).
[5] Gupta, Hoshin V., and Grey S. Nearing. "Debates—The future of hydrological sciences: A (common) path forward? Using models and data to learn: A systems theoretic perspective on the future of hydrological science." Water Resources Research 50.6 (2014): 5351-5359.
Citation: https://doi.org/10.5194/hess-2021-170-CC1
- AC1: 'Reply on CC1', Greg O'Donnell, 07 May 2021
Grey
Thank you for taking the time to review this paper. Your review approaches the paper as if it is a collection of separate technical notes: one for a model, another for a method, and a third for a set of concepts and definitions. You then robustly criticize each note on the basis that its contents are not fit for purpose in general RR modelling (along the lines that the model does not have advantages relative to existing models, the method is poorer than existing methods, and the definitions and concepts are redundant).
The reality of the paper is quite different. It is about an experiment which is a scientific exploration of the very difficult problem of properly linking performance to hydrologic knowledge. Specifically, it is a report on the huge effort expended by a pair of researchers to develop numerous arguments and definitions, reframe some historical arguments, develop a full-blown method and a completely new RR model, and thus make it possible to create a simple concrete example for the link between performance and hydrologic knowledge (that concrete example is a benchmark for the link and gives a basis for generating hypotheses). Somewhere, therefore, there must be a problem with communication or expectation. Note, for example, the paper does not even hint that the model is proposed for general use in the way you assume (and are very robust about) in your review. Perhaps that is a problem of expectation. If there are problems of communication, then surely they are quite easy to resolve.
We note that in the future there may be a more detailed account of your concerns. In the meantime, here are responses to three points you have made so far.
1. You commented that "The new model performs poorly against the benchmark". A benchmark is not a target; it is a reference point against which things can be measured. The performance benchmark used shows what can be achieved when employing the substantial advantages of allowing calibration and using evaporation data. KERR is simple, parameterless, and does not use evaporation data, so if its performance consistently matched or exceeded the benchmark the resulting shock would have marked the start of a revolution in RR modelling.
2. You commented that "In general, the model itself is not something that I can see any hydrologist using or being interested in for any practical or intellectual reason". Let us look closely at KERR: (i) it is probably the best performing simple parameterless model available for temperate catchments; (ii) it gives information on similarity across time and place; (iii) it can use proxy catchment data which is not scaled or modified in any way; (iv) it is linked directly and comprehensively to a set of hydrologic knowledge; and (v) it preserves that knowledge and allows it to be quantified. It is easy to imagine a research hydrologist spending a decade or more working with KERR and its descendants.
3. Our response to your claim that the concept of the Modelled Hydrologist (MH) is redundant is as follows. A perceptual or conceptual model does not exist in a vacuum, it entails understanding and actions by human beings, and places restrictions on what that understanding and actions can or must be. There is, therefore, an entity (the MH) which is larger than the perceptual or conceptual model. This raises the question of what that larger entity is and how it relates to the search for hydrologic laws at the catchment scale. There is, though, another side to this. One of the points made in the blind validation work is that models and modellers must be seen as a package (Ewen and Parkin, 1996). Our experience is that hydrologists running an RR model sometimes forget the nature of the model. Sometimes it is treated as a statistical black box. The worst case is when the model is treated as if it is reality, and it is implicitly assumed that there are no constraints on what can be concluded from the resulting simulations. Putting all fine scientific arguments to one side, it is quite legitimate in an international research journal to encourage thinking, or a change in thinking, about the nature of RR modelling and RR models.
There are obvious dangers in creating a lot of things at one time. It gives multiple targets for criticism (fair and unfair criticism), and runs the risk that the baby will be thrown out with the bathwater.
John and Greg
Citation: https://doi.org/10.5194/hess-2021-170-AC1
- CC3: 'Reply on AC1', Grey Nearing, 07 May 2021
HI John and Greg,
Please notice that your reply does not address the majority of my comments. You have chosen to focus on one (small) aspect of my comment and ignore the rest. I acknowledged explicitly in my comment that your model was not intended to be a new model for practical use, yet you treated my comment as if this was the point and ignored the point I was making (which was stated very clearly). I would appreciate a more sincere attempt to respond to what I wrote.
Thank you,
Grey
Citation: https://doi.org/10.5194/hess-2021-170-CC3
- CC4: 'Reply on AC1', Grey Nearing, 07 May 2021
Just to add a little clarity to my previous response, I want to make it clear that my opinion is that this paper is not anywhere near the ballpark of being a serious contribution to the literature. It is (yet another) philosophically naive paper by hydrologists who are working from intuition rather than from any epistemological formality.
I do not agree with the authors that there needs to be a reconceptualization of how we think of rainfall-runoff models. This tired cliche of people treating models as "black boxes" is an excuse to do bad philosophy, not an actual phenomenon that happens among model developers. Notice that the authors did not actually point to any concrete problem (bad inferences, bad results, bad predictions) that occurs *because of* this supposed problem (thus my criticism of their model); they imagine an audience of people who are apparently less sophisticated than themselves and lack a basic understanding of what models are. This imaginary bogeyman does not exist, and if it did exist, it is not justification for doing bad philosophy or for changing how we write scientific papers.
What I mean when I say that this is bad philosophy is simple. This paper touches primarily on two subjects: (1) scientific epistemology, especially as it relates to the role of models in science, and (2) scientific writing. These authors make zero effort to review either of these (large) bodies of literature. They cite only other hydrologists who have made the same mistake in the past. We have a tradition in hydrology of letting hydrologists pretend to be philosophers without any actual attempt to synthesize the state of the philosophy of science as it relates to the problem of model realism (or uncertainty, or hypothesis testing, or underdetermination, etc.). This paper does not make even an attempt to build on the state of the science (here, the state of the philosophy) that is at the core of the discussion. The authors make no attempt to synthesize scientific epistemology or the philosophy of technical writing (which are both deep fields in their own right), yet they presume to make suggestions about both. It's embarrassing to read this paper.
Imagine if the situation were reversed -- suppose a philosopher with no technical knowledge published a paper on climate change attacking a strawman problem and coming to a ridiculous conclusion about how climate scientists should change their models - perhaps their suggestion is to account for sun cycles without making an attempt to understand the current state of climate models. The analogy is that this current paper under review comes to an equally ridiculous conclusion about how to write scientific articles with "knowledge tables" in "plain English" instead of using the standard conventions of technical writing, and it comes to this conclusion without making even an attempt to understand either (1) the state of the philosophy of science on modeling or (2) the state of the science on the philosophy of technical writing. The hypothetical paper by this hypothetical philosopher would not pass the laugh test, let alone be sent out for review in any journal (hopefully), yet here we are reviewing an equally ridiculous philosophy paper in a hydrology journal. Frankly, it is disgraceful that we would even consider publishing this nonsense - this paper makes a mockery of not one, but two academic disciplines.
I strongly advise that this paper should be rejected.
Citation: https://doi.org/10.5194/hess-2021-170-CC4
- AC2: 'Reply on CC4', Greg O'Donnell, 17 May 2021
Grey
This is a response to CC3 and CC4.
CC3 says: "You have chosen to focus on one (small) aspect of my comment and ignore the rest." In fact, we focussed on a claim that our model has absolutely no lasting value whatsoever to anyone, and a claim to the effect that the subject of the title of the paper is a redundant conception. These claims are central to two of the three general concerns you highlighted in your review. The claims are not well founded (they lack insight) and some readers may read the review but not the paper. We therefore responded to the claims as soon as we could.
A sense of proportion and fairness is needed in discussing the third highlighted concern (philosophy). In CC4, in the name of philosophy, you try and shout down (cancel) hydrologists who use intuition creatively in hydrology. Can intuition, and the insight it brings, not simply be appreciated and adapted for use for the general good? It never crossed our minds that a reader or reviewer would persist in the notion that we are somehow trying to reinvent technical writing or are engaged with what you describe as "changing how we write scientific papers". Neither did it cross our minds that a reader or reviewer would persist in assuming we propose the use of everyday English other than in scientific exploration, and then only when it is useful and practical (our background is physically-based, distributed RR modelling, where the documentation runs to hundreds of pages of text, equations and diagrams).
The paper gives a science-based solution to a real-world problem: benchmark links between hydrologic knowledge and performance are needed as a basis for measurements related to engineering decisions. There is irony in that a serious attempt to be clear about what is assumed known in reaching the solution is attacked on philosophical grounds, especially given L128-131. Also, any discussion of the solution, or how it was arrived at, must take into account that in L180-181 we explicitly allow for permanent review.
We have been thinking about what might be covered if a discussion section is to be added to the paper. The predictions are for the numbers in runoff records, so in the context of the paper the records are reality. Say there are three regions in a space: physical reality (i.e. the river catchments), hydrologic knowledge and performance. The paper is about a single mapping from hydrologic knowledge to performance. Other mappings are not discussed, such as mappings to or from physical reality, or back from performance. One-to-many, many-to-one and many-to-many mappings are not discussed. To the extent that it can be helpful, such mappings could be described in a discussion section in terms of common philosophical concepts which interest RR modellers.
The 2nd paragraph in CC4 is grossly unfair. It seems to be a reaction to this text from AC1: "One of the points made in the blind validation work is that models and modellers must be seen as a package (Ewen and Parkin, 1996). Our experience is that hydrologists running an RR model sometimes forget the nature of the model. Sometimes it is treated as a statistical black box. The worst case is when the model is treated as if it is reality, and it is implicitly assumed that there are no constraints on what can be concluded from the resulting simulations." The term "black box" seems to have been lifted from this text and its meaning adjusted to fit your case. The reality is that RR models are often run as a general resource, well outside the control of model developers (you seemed to have assumed that the text is about model developers running their own models). Some models run as a general resource have considerable complexity, and this can lead to belief in simulated detail (including spatial variations in response) or in all the available energy being spent on the sheer effort of parameter calibration against one or a few statistics (i.e. treating the model as if it is a black box).
John and Greg
Citation: https://doi.org/10.5194/hess-2021-170-AC2
CC5: 'Reply on AC2', Grey Nearing, 17 May 2021
CC6: 'Reply on AC2', Grey Nearing, 17 May 2021
As a note to the editor regarding my previous comments, there is only one question that needs to be answered:
Is it appropriate for hydrologists to publish on subjects that have well-established homes in different academic departments, without making an effort to collaborate with, or incorporate the literature from, those departments?
If we believe that this is appropriate, then the message we are sending is that while we value the importance of the questions that these other disciplines address (here philosophy of science and technical communication), we do not value the contributions of practitioners in those fields. My opinion is that this degrades the structure of academic pursuits in toto. It is, in my estimation, pure arrogance to imagine that we can speak meaningfully about deep questions that are at the center of other disciplines' areas of study without collaborating directly with those other disciplines, or at least making an attempt to read and synthesize the literature from those other areas of study.
Citation: https://doi.org/10.5194/hess-2021-170-CC6
AC5: 'Reply on CC6', Greg O'Donnell, 07 Jun 2021
The reviewer recommends that the paper should not be published because it inappropriately addresses “deep questions that are at the center of other discipline areas of study”. The disciplines in question are philosophy and technical communication. In the paper, the RR records are treated as a set of numbers that lie at an outer boundary to the work (see Fig. 1), so all deep philosophical questions about the reality of river catchments lie outside the scope of the work. The usual rules of logic, English, and written communication are not breached in the development of KERR, nor is the work unusual in paying close attention to describing the basis and assumptions for modelling. All deep general questions in technical communication therefore lie outside the scope of the work, such as questions about how scientific papers should be written.
Here is a note on the science in the paper. It addresses the following statement, in CC5, from the reviewer:
There is no science in this paper. No hypothesis was tested (in my original review, I attempted to - generously - treat the new model development as a hypothesis test of the new philosophy, but the reviewers did not even recognize that this is what I was doing in their replies).
The title of the paper is “If a rainfall-runoff model was a hydrologist”. An appropriate hypothesis was tested: an RR model can be a model of a hydrologist. KERR is a model of a layman. KERR was tested by comparing its median NSE against that achieved using GR4J, with success being equated with achieving a value approaching that for GR4J (showing that KERR is a reasonable RR model, taking into account that GR4J has the advantage of using evaporation data and calibration). The layman is a benchmark for a hydrologist (i.e. a reference against which hydrologists can be compared and measured).
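For reference, the Nash-Sutcliffe efficiency (NSE) used in this comparison can be sketched as below. This is illustrative code, not taken from the paper; the variable names in the comment are hypothetical.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 minus the ratio of the model's squared
    error to the variance of the observations around their mean.
    1.0 is a perfect fit; 0.0 means the model is no better than always
    predicting the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Per-catchment NSEs would then be summarised by their median for each model,
# e.g. (hypothetical names):
# median_nse_kerr = np.median([nse(q_obs[c], q_kerr[c]) for c in catchments])
```

The median across catchments is used rather than the mean because NSE is unbounded below and a few poorly simulated catchments would otherwise dominate the summary statistic.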
Citation: https://doi.org/10.5194/hess-2021-170-AC5
AC2: 'Reply on CC4', Greg O'Donnell, 17 May 2021
CC3: 'Reply on AC1', Grey Nearing, 07 May 2021
CC2: 'Comment on hess-2021-170', Grey Nearing, 27 Apr 2021
Review by Grey Nearing
Summary of Review:
It’s always exciting to see people thinking about hydrology in new ways, and I was initially excited to receive this paper to review. However, after reading it carefully my opinion is that there is little new in this paper, either technically or philosophically. While I appreciate the effort to reframe some of the historical arguments into a new synthesis, I found the synthesis unnecessarily convoluted and the paper generally difficult to read. The idea of “Knowledge Documents” or “Knowledge Tables” is interesting at first glance (it is always a good idea to document all assumptions), however I cannot see what this adds over standard practices for documenting models, model development, and scientific experiments. Additionally, the model presented in this paper performed poorly relative to the benchmark model.
It seems unnecessarily difficult and cumbersome to document everything in “everyday English” as the authors suggest (line 130 and elsewhere). There is a reason why we use equations, plots, technical language and jargon, and diagrams to convey information in scientific documents, and I neither want to read nor write papers where everything that is “assumed known” (table 2) must be expressed in everyday English and/or in table format. I do not see in this paper any suggestion about how the current style of technical writing fails; good papers and technical reports are already expected to document all assumptions.
To be blunt, it’s a big ask to change the way we do technical writing, and this paper does not approach the problem with any theory or evidence to support that suggestion. There is a science (and history) of technical writing (e.g., [1, 2]), and this paper makes suggestions that belong in that area of study without acknowledging the existence of that discipline, adopting any of the practices of analysis from that discipline, or even citing any papers from that discipline. I am also not an expert in the theory of technical writing, but given that this is the main focus of the paper I guess it should play a central role.
I have several major concerns with the philosophy of modeling outlined in this paper, and could go through the article paragraph by paragraph to highlight why I think that many of the ideas developed here are either redundant or, in certain cases, counter-productive. But I hesitate to spend too much time going through these concerns in detail because I don’t see a path to publication given the reasons summarized above. If there is some disagreement between reviewers, I will create a more detailed account of my concerns throughout the paper.
General Concerns:
I will try to respond briefly to the main points of the paper (without deconstructing the philosophical discussion in the paper). The paper is written in a way that makes it difficult to understand what the authors intend to be their primary contributions, so the points below reflect my best attempt at extracting the main points.
- The paper introduces a concept of a “Modeled Hydrologist” to help contextualize certain difficulties in developing complex systems models.
The “Modeled Hydrologist” concept appears to be similar to what has previously been called the “perceptual” and/or “conceptual” parts of hydrological modeling [3, 4]. I’m not sure what anthropomorphizing these concepts, or the model itself, adds to the discussion. It is already well known and widely discussed that models are collections of approximations and assumptions, and there are numerous papers in the hydrology literature that explicitly recognize and discuss this [e.g., 5]. I am having trouble seeing any new ideas here, or any explanation about what is missing from previous “philosophy of modeling” work.
- The paper argues to use “Knowledge Documentation” to “document comprehensively what is assumed known, so that “the ignorance can be deduced when necessary.”
This comment reiterates my main concern from the review summary above. I do not think that the argument to use “knowledge documents” is well thought out. Documenting all assumptions and methods is the purpose of standard technical writing in science. The format and customs of this roughly 300-year-old culture and practice of constructing scientific documents, documenting assumptions, and allowing reproducibility are mature and, in my opinion, very effective when done correctly. What is missing from this? Why focus on plain English statements? Why not use the standard set of tools that scientists use to perform exactly this job (e.g., equations, figures, plots, tables, etc.)? All of these things contain knowledge or information in every meaningful sense. Equations especially are used to formally document assumptions and are often much more efficient and precise than written text.
My opinion is that by encouraging framing assumptions in everyday English, we will encourage sloppy and imprecise thinking. To be perfectly frank, it would be difficult for me to think of a suggestion about how to change scientific writing that I would disagree with more strongly than this one. Our goal should be to move *away from* plain language descriptions toward more formal, mathematical descriptions of all assumptions.
Further, the knowledge tables in this paper (e.g., Table 6) do not contain enough information to recreate the model and experiment. Not only are they difficult to read and to synthesize, they are incomplete. If the model were described in the traditional way, it would be much easier to understand and reproduce. Maybe I’m just not used to these tables, but it would be much easier both to see and understand your assumptions and also to recreate your model structure if you just used equations like normal papers do.
A counter argument to the position I’ve outlined is that it might be easier for artificial intelligence or natural language processing systems to extract conceptual or semantic information from knowledge tables like the ones used here, rather than from the narrative style text that is more common in scientific writing. This is an interesting thought; however, that is not the goal here. If we were to formally adapt scientific writing practices to be more accessible to NLP and automated text readers, I would like to see the problem addressed (theoretically and empirically) from that perspective.
- The paper outlines a new model that is used as an example of applying the Modeled Hydrologist concept to guide and/or document model development.
The new model performs poorly against the benchmark. What is the advantage of this new model vs. existing models? What are the use cases for a model like this? Is the only reason for creating this new model to provide an example for the “knowledge table” procedure? If so, it seems artificial. In general, the model itself is not something that I can see any hydrologist using or being interested in for any practical or intellectual reason. Publishing a new model to illustrate a new philosophy would be interesting if it worked and if it demonstrated some particular value of the new philosophy, but here it did not work, and the authors did not make a convincing argument about how or why the new philosophy gives us something that standard scientific writing would not.
[1] O'Hara, F. M. "A brief history of technical communication." Annual Conference, Society for Technical Communication, Vol. 48, 2001.
[2] Todd, Jeff. "Teaching the history of technical communication: A lesson with Franklin and Hoover." Journal of technical writing and communication 33.1 (2003): 65-81.
[3] Beven, Keith. "Spatially distributed modeling: Conceptual approach to runoff prediction." Recent Advances in the Modeling of Hydrologic Systems. Springer, Dordrecht, 1991. 373-387.
[4] Gupta, Hoshin V., et al. "Towards a comprehensive assessment of model structural adequacy." Water Resources Research 48.8 (2012).
[5] Gupta, Hoshin V., and Grey S. Nearing. "Debates—The future of hydrological sciences: A (common) path forward? Using models and data to learn: A systems theoretic perspective on the future of hydrological science." Water Resources Research 50.6 (2014): 5351-5359.
Citation: https://doi.org/10.5194/hess-2021-170-CC2
RC2: 'Comment on hess-2021-170', Anonymous Referee #2, 24 May 2021
Review of the manuscript “If a Rainfall-Runoff Model was a Hydrologist” of Ewen and O’Donnell
Summary
In the manuscript “If a Rainfall-Runoff Model was a Hydrologist” by Ewen and O’Donnell, a set of parameterless rainfall-runoff models is developed in an experiment which aims to quantify the importance of the knowledge contained by the model itself. To make this knowledge explicit, the rainfall-runoff model is personified as a layman with an interest in the weather and river flows. The model is parameterless and relies on time-matching based on the similarity of the simulation day with a set of other days from the historical data record. Overall, the performance of the developed KERR model is only slightly lower than that of GR4J for a set of UK catchments. A main finding of the paper is the strong relative importance of the temporal pattern of antecedent rainfall. This concrete model development example supports a broader and more philosophical discussion on hydrologic knowledge and laws within rainfall-runoff models.
The manuscript addresses relevant scientific questions on the role of hydrologic knowledge contained in rainfall-runoff models on model performance. However, at first read, I found the manuscript to be confusing in how it is structured and in its balance between the broader philosophical discussion and the very concrete, simple and specific modeling experiment. Which aspects of the specific example should or can we apply in more complex traditional rainfall-runoff modeling? Is it the knowledge documentation through personification of the rainfall-runoff model?
I hope that the comments below will help improve the manuscript.
General comments:
1) The aim of the paper is not clearly stated in the abstract; I would suggest adding it explicitly. The aims are described in L70 and later in L110 and L174; however, throughout the manuscript it remains unclear what exactly is meant by ‘corruption’ of hydrologic knowledge flows within RR modelling. Could you clarify this further?
2) The manuscript does not contain a dedicated Conclusion section. Concluding remarks are provided in the Summary of Section 8. However, I think it would help the reader to include a dedicated conclusion section which specifically links back to the aims of the study. This would help to clarify the main message/focus of the manuscript.
3) The experiment was performed for a set of UK catchments. Could you discuss the application of the developed parameterless models and the conclusions drawn on the importance of wetness in the light of different climatic zones?
4) How could the proposed methodology of quantifying the importance of hydrologic knowledge held by the MH on model performance be applied in more traditional rainfall-runoff modeling?
5) Hydrological modeling is often used in practice to quantify the effect of change in a catchment (e.g. land use). In science, hydrological modeling is often used to increase our understanding of catchment functioning. Both would be difficult using the proposed approach of the parameterless model, could you please elaborate on this?
6) The manuscript includes several references to the study of Jakeman and Hornberger (1993). However, a short summary of the main aspects of this paper in relation to the current paper seems to be lacking.
7) The way the work is presented is sometimes confusing. For example, in Section 4.2, the Trivial and Seasonal RR models are presented. Later in Section 6, an additional Wetness model is mentioned. In Table 6, also the KERR model is presented. Perhaps, it would be good to clarify this in Section 4.2 and in the Method section 3 so that the reader has a better understanding of the main approach.
Specific comments
L5: Could you add here why personification can also be instructive?
L10: Simplification of complex systems is inherent to modeling, but I guess you want to quantify how and which of the knowledge contained in the model mostly affects model performance?
L11: What do you mean by classic MH?
L17: I found the sentence with “the relative importance is measured as 1 and 6” rather confusing. Do you mean: antecedent wetness is 6 times more important than seasons in simulating runoff in a time-matching modeling approach? I would suggest rephrasing this sentence (also in the Summary section).
Figure 1: Although mentioned in the caption, it was at first read not entirely clear to me that the numbers refer to the knowledge statements of the different Tables, I would suggest rephrasing the caption to clarify.
L66-67: Could you elaborate further on this?
L70: As mentioned before, what do you exactly mean by ‘lost or corrupted’ hydrologic knowledge flows within RR modeling?
L72: What do you exactly mean by science is an “activity and attitude”?
L73: Could you elaborate further what you mean by the significant deficits, difficulties or dangers?
L85: It is not clear to me what you mean by “not well behaved”, could you clarify?
L111: aim (3) is not entirely clear, could you elaborate on “the need to draw valid hydrologic conclusions”?
L134: In ‘traditional’ hydrological modeling, would you also recommend describing equations in the form of statements in everyday English? How does this relate with the more commonly provided model descriptions and equations?
L138: with “Here,” do you mean: in the models being developed in this study?
L141: could you specify conclusion 3 in Sect. 1, it is unclear to me to which point you are specifically referring to.
L174: when you mention “one of the aims”, I would find it helpful to also have a recap of the other aims of the research.
L198: I am not sure to fully understand what you mean by “the MH should know why”, could you clarify this part?
L201: “from such an experiment”, do you mean an experiment without data fitting?
L259: in contrast to the Trivial and Seasonal models, the Wetness model was not introduced earlier.
Fig2: Rain is a flux and should therefore have unit [L/T]; I assume here it is mm/d. It is somewhat confusing to show negative values on the y-axis of the top panel. It would be clearer for the reader if rainfall pattern difference were also explained in the text describing Fig 2.
In the paragraph 313-319, the horizontal alignments around 1996 are explained twice.
Table 6: is the KERR model a general name for the Trivial, Seasonal and Wetness models and a fourth model? Could you please clarify?
L355: could you elaborate: “when drawing conclusions” on what?
Table 9: conclusions based on the third statement “Unpredictability” are not explained in the text of Section 7.1. This statement only comes back in the Summary of Section 8. Could you please elaborate on this finding already in Section 7.1?
L429: Here, I would suggest to explicitly mention “wetness, seasonality and unpredictability” to clarify: “the three pieces of hydrologic knowledge given in 8 and 9”.
L449: Could you elaborate further on what you mean by the wetness kernel and discuss more in detail the related hydrologic law?
Discussion: how important is personification of RR models? This seemed to be an important focus point at the start of the manuscript.
Typos:
L14: a MH instead of an MH
L228: meteorological instead of metrological
Citation: https://doi.org/10.5194/hess-2021-170-RC2
AC4: 'Reply on RC2', Greg O'Donnell, 07 Jun 2021
EC1: 'Editor comment on hess-2021-170', Jan Seibert, 26 Jun 2021
We have received three valuable reviews for this contribution. While the reviews vary in their criticism, all three highlight that the manuscript is difficult to read. The reviews also point out that important previous studies have not been considered. Both points make it difficult to assess the novelty of the approach as it is hard to see how the presented work extends previous work if this work is not mentioned.
After reading the manuscript and discussion several times, I am still rather confused about why personification (modelled hydrologist) is needed. Basically, what is nicely shown in this study is how easy it is to reproduce discharge in these UK catchments with some hydrological common sense. By considering seasonality and antecedent precipitation, one can reproduce observed discharge quite well. But this should not come as too much of a surprise; isn’t this the reason why our simple bucket-type models perform well? Still, I see value in this analysis of precipitation and discharge time series, providing insights into catchment functioning. However, I do not see why the personification framework is needed to present this analysis. Honestly, I agree with the reviewers that this makes the text rather confusing. Framing the study in this way also implies touching on important aspects of the philosophy of modelling. I don’t think this is needed, but if framed in this way, this needs to be clearly motivated, and previous work in the field needs to be acknowledged.
Another issue is the applicability of the presented approach in more challenging situations. The KERR approach will be more challenging in regions where discharge is less directly related to precipitation than in the UK (e.g., snow, dry seasons, …). An important question is also how the KERR approach could be used for predicting discharge for conditions outside observed conditions, which is one of the most important tasks of a model after all (especially also for those applications mentioned at the beginning of the manuscript).
I need to add an important note: One reviewer used inappropriate language in some of his comments. The authors did not complain about these comments and declined an offer to have them rewritten. Therefore, in the interest of the scientific debate, we decided to leave the comment-reply chain as it is. However, we want to make it very clear that reviews need to be written to avoid any offensive formulations. It is accepted, and even welcome, to express strong opinions, but the use of abusive language does more harm than good in communicating the content of the critique. While this applies to the formulation of any reviews, in HESS, with its open review process, the use of appropriate language is especially crucial.
Best regards,
Jan Seibert
Citation: https://doi.org/10.5194/hess-2021-170-EC1
RC1: 'Review of hess-2021-170', Keith Beven, 23 Apr 2021
Review of Ewen and O’Donnell, "If a rainfall-runoff model was a hydrologist", by Keith Beven
It is always interesting to read one of John’s papers, and this one is no exception. Given the current penchant for throwing data into the black boxes of machine/deep learning algorithms and declaring success without producing much in the way of understanding, the more thoughtful approach presented here, in a highly original way, is of value. I did find the paper not easy to read in places, with some sweeping generalisations and somewhat limited reference to previous work (see below). And the end result is limited to a really simple non-parametric modelling approach that appears to reflect only the basic minimum of hydrological knowledge: that flows reflect today’s rainfall and some index of antecedent conditions (here represented only by matching the pattern of average rainfalls over the last n days, in a least squares sense, without any allowance for autocorrelation in those values).
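The least-squares pattern matching described here can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: the paper's actual matching rule (window lengths, weighting, seasonality screening) differs, and the function name and fixed window `n` are illustrative.

```python
import numpy as np

def match_day(rain, flow, t, n=30):
    """Predict flow on day t by finding the historical day whose rainfall
    pattern (today's rain plus the previous n days) is closest to day t's
    pattern in a least-squares sense, then reading off that day's flow.
    rain, flow: 1-D daily arrays; t: index of the day to predict (t >= n)."""
    target = rain[t - n:t + 1]            # today's rain + previous n days
    best, best_err = None, np.inf
    for s in range(n, len(rain)):
        if s == t:
            continue                      # exclude the day being predicted
        err = np.sum((rain[s - n:s + 1] - target) ** 2)
        if err < best_err:
            best, best_err = s, err
    return flow[best]
```

Note that keeping the k best-matching days instead of only the single best would directly give the ensemble of possible values (a crude predictive pdf) suggested later in this review.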
It sort of works (even better than a calibrated rainfall-runoff model for some of the catchments and mostly not much worse in terms of NSE). It sort of works for transfer from a proxy catchment (but see comment below on this). But there are perhaps some issues of hydrological knowledge that are somewhat glossed over.
- There will be uncertainties in the data. These are mentioned but then ignored. But they can be significant – e.g. where event runoff coefficients are highly variable and go greater than 1 (see Beven and Smith, 2015; Beven, 2019).
- This implies that it might be useful to allow for uncertainty in the mean prediction – you can after all pattern match to get an ensemble of possible values which could be treated as a first estimate of a pdf reflecting uncertainty.
- The use of daily data is convenient in terms of getting hold of the data but will be subject to discretisation issues in small catchments (where the peak occurs in the day affects the volume for that day) and autocorrelation issues in larger catchments (every day is here treated as independent, even on recessions).
- Snow is mentioned, but then neglected. It may be relatively unimportant in most UK catchments perhaps, but one of the outcomes from the Iorgulescu and Beven (2004) attempt at a similar non-parametric data-based predictor based on CART, also with different rainfall period inputs, was that the classification identified anomalous periods associated with delayed snowmelt. That this might happen is hydrological knowledge that is easily stated in English!
- For the transferability in space, a brute force approach to finding the best KERR model is taken by checking all the catchments in the data set and picking the best as the donor site. That cannot be used if a catchment is treated as ungauged (when the proxy basin transfer is actually required) and really does not seem to be making too much use of hydrological knowledge (though the difficulty of transferring response characteristics using catchment characteristics or model parameters is, of course, well known). But our model hydrologist could perhaps be expected to know that catchments of different scales might involve different processes, or that catchments in hard rock wet areas of the west might be expected to be different to chalk catchments in the south east. So, in respect of the title of the paper, the words very naïve might need to be added before hydrologist (and indeed the MH is referred to elsewhere as a layman or angler rather than someone with better hydrological knowledge).
- The paper does not mention that we often want to simulate the potential effects of future change. If that change is only to the inputs then the proposed strategy might work, perhaps with some degradation if the processes change. If, however, it is change due to reforestation or NFM measures or other changes, then it could be used as a baseline to compare with future observations, but not as a simulator of a changed future (and indeed the changes might be within the uncertainty of the predictions if that was assessed in some way).
Which then, of course, raises the question of what might happen if the MH had access to that committee of experienced hydrologists (or even inexperienced hydrologists – see the tale of the hydrological monkeys in the Prophecy paper cited). That experience might lead them to think more in terms of model parameters than direct use of data (Norman Crawford, Sten Bergstrom, and Dave Dawdy are examples from quite different modelling strategies but all were known for their skill in estimating parameters for models, including for ungauged sites …. though there may have been some potential for positive bias in tweaking and reporting results there). And there are instances of committees of experienced hydrologists not doing that well in setting up models and even getting worse results as more data were made available (see the Rae Mackay et al. groundwater example from NIREX days).
I would suggest that the authors could make more of the difficulties of going further with more experience and knowledge about catchment characteristics. It is an argument for their KERR approach – but I would also suggest that the KERR approach also be extended to reflect the uncertainty to be expected as a result of that simplicity.
Some specific comments
L37 Best available theory – but there is also the issue of whether that theory is good enough when it differs from the perceptual model of the processes.
L56. There seems to be a lot of overlap between what is referred to here as hydrological knowledge and the concept of a qualitative perceptual model. Both need to be simplified to make quantitative predictions (and often do so in ways that conflict with the perceptual model because of what is called here selective ignorance).
P121. There were earlier suggestions of this approach, e.g. Buytaert and Beven, 2009, or even the donor catchment approach of the FSR/FEH.
L128. This is analogous to the Condition Tree concept in Beven et al, CIRIA Report C721 (also Beven and Alcock, 2012) that results in an audit trail to be evaluated by others
L132. Performance really ought to take account of uncertainty in the data (see earlier comment and papers cited in Beven, 2019)
L137. But catchments that look very similar can also respond quite differently, even if mapped as the same soils/geology. We have an example from monitoring two small catchments on the Howgills. So the issue of requisite knowledge here is when such small-scale variability might integrate out (or not); this was discussed in the 1980s as the representative elementary area concept (e.g. Wood et al., 1988; see also papers on when variability in stream chemistry starts to integrate out).
L149. I think the “peasant’s model” was suggested by Eamon Nash in modelling the Nile before this.
L166. But again that upper limit will also definitely depend on the uncertainty and inconsistencies in the observations.
L182 nowhere to hide – exactly the point made for the Condition Tree / audit trail
L336. Why not use an ensemble here to add in some uncertainty to the process?
L394. But it is only a match to rainfall pattern – is there no additional knowledge that could be used? In the case of expected greater autocorrelation in large catchments, for example (or extension to shorter time steps in small catchments), perhaps the last few predictions of flow might be useful (avoiding just doing 1-step-ahead forecasting, though such a model with updating does also represent the forecasting model to beat – e.g. the Lambert ISO model, see RRM book)
P425 relative importance is 2???? Not clear
P426. Always unpredictable – see the inexact science paper again
P456. Does not require scaling – There is an expectation that processes change with increasing scale, and that specific discharge becomes less variable with increasing scale, and generally less in moving from headwaters to large scales, so does this imply that this is compensated by a decline in mean catchment rainfalls so that the power on the area scaling is low, or simply that the variability is within the uncertainty of the predictions so does not have too great an effect on NSE? There are past studies on scaling with area that might be treated as hydrological knowledge here.
Wood, E.F., Sivapalan, M., Beven, K.J. and Band, L. (1988), Effects of spatial variability and scale with implications to hydrologic modelling. J. Hydrology, 102, 29-47.
Buytaert, W and Beven, K J, 2009, Regionalisation as a learning process, Water Resour. Res., 45, W11419, doi:10.1029/2008WR007359.
Beven, K. J. and Alcock, R., 2012, Modelling everything everywhere: a new approach to decision making for water management under uncertainty, Freshwater Biology, 56, 124-132, doi:10.1111/j.1365-2427.2011.02592.x
Beven, K. J., and Smith, P. J., 2015, Concepts of Information Content and Likelihood in Parameter Calibration for Hydrological Simulation Models, ASCE J. Hydrol. Eng., 20(1), p.A4014010, doi: 10.1061/(ASCE)HE.1943-5584.0000991.
Beven, K. J., 2019, Towards a methodology for testing models as hypotheses in the inexact sciences, Proceedings Royal Society A, 475 (2224), doi: 10.1098/rspa.2018.0862
Iorgulescu, I and Beven, K J, 2004, Non-parametric direct mapping of rainfall-runoff relationships: an alternative approach to data analysis and modelling, Water Resources Research, 40 (8), W08403, 10.1029/2004WR003094
Citation: https://doi.org/10.5194/hess-2021-170-RC1
AC3: 'Reply on RC1', Greg O'Donnell, 07 Jun 2021
CC1: 'Comment on hess-2021-170', Grey Nearing, 27 Apr 2021
Review by Grey Nearing
Summary of Review:
It’s always exciting to see people thinking about hydrology in new ways, and I was initially excited to receive this paper to review. However, after reading it carefully my opinion is that there is little new in this paper, either technically or philosophically. While I appreciate the effort to reframe some of the historical arguments into a new synthesis, I found the synthesis unnecessarily convoluted and the paper generally difficult to read. The idea of “Knowledge Documents” or “Knowledge Tables” is interesting at first glance (it is always a good idea to document all assumptions), however I cannot see what this adds over standard practices for documenting models, model development, and scientific experiments. Additionally, the model presented in this paper performed poorly relative to the benchmark model.
It seems unnecessarily difficult and cumbersome to document everything in “everyday English” as the authors suggest (line 130 and elsewhere). There is a reason why we use equations, plots, technical language and jargon, and diagrams to convey information in scientific documents, and I neither want to read nor write papers where everything that is “assumed known” (Table 2) must be expressed in everyday English and/or in table format. I do not see in this paper any suggestion about how the current style of technical writing fails: good papers and technical reports are already expected to document all assumptions.
To be blunt, it’s a big ask to change the way we do technical writing, and this paper does not approach the problem with any theory or evidence to support that suggestion. There is a science (and history) of technical writing (e.g., [1, 2]), and this paper makes suggestions that belong in that area of study without acknowledging the existence of that discipline, adopting any of the practices of analysis from that discipline, or even citing any papers from that discipline. I am also not an expert in the theory of technical writing, but given that this is the main focus of the paper I guess it should play a central role.
I have several major concerns with the philosophy of modeling outlined in this paper, and could go through the article paragraph by paragraph to highlight why I think that many of the ideas developed here are either redundant or, in certain cases, counter-productive. But I hesitate to spend too much time going through these concerns in detail because I don’t see a path to publication given the reasons summarized above. If there is some disagreement between reviewers, I will create a more detailed account of my concerns throughout the paper.
General Concerns:
I will try to respond briefly to the main points of the paper (without deconstructing the philosophical discussion in the paper). The paper is written in a way that makes it difficult to understand what the authors intend to be their primary contributions, so the points below reflect my best attempt at extracting the main points.
- The paper introduces a concept of a “Modeled Hydrologist” to help contextualize certain difficulties in developing complex systems models.
The “Modeled Hydrologist” concept appears to be similar to what has previously been called the “perceptual” and/or “conceptual” parts of hydrological modeling [3, 4]. I’m not sure what anthropomorphizing these concepts, or the model itself, adds to the discussion. It is already well known and widely discussed that models are collections of approximations and assumptions, and there are numerous papers in the hydrology literature that explicitly recognize and discuss this [e.g., 5]. I am having trouble seeing any new ideas here, or any explanation about what is missing from previous “philosophy of modeling” work.
- The paper argues to use “Knowledge Documentation” to “document comprehensively what is assumed known”, so that “the ignorance can be deduced when necessary”.
This comment reiterates my main concern from the review summary above. I do not think that the argument to use “knowledge documents” is well thought out. Documenting all assumptions and methods is the purpose of standard technical writing in science. The format and customs around this ~300 year old culture and practice about how to construct scientific documents, document assumptions, and allow reproducibility is mature and, in my opinion, very effective when done correctly. What is missing from this? Why focus on plain English statements? Why not use the standard set of tools that scientists use to perform exactly this job (e.g., equations, figures, plots, tables, etc.)? All of these things contain knowledge or information in any meaningful sense. Equations especially are used to formally document assumptions and are often much more efficient and precise than written text.
My opinion is that by encouraging the framing of assumptions in everyday English, we will encourage sloppy and imprecise thinking. To be perfectly frank, it would be difficult for me to think of a suggestion about how to change scientific writing that I would disagree with more strongly than this one. Our goal should be to move *away from* plain language descriptions toward more formal, mathematical descriptions of all assumptions.
Further, the knowledge tables in this paper (e.g., Table 6) do not contain enough information to recreate the model and experiment. Not only are they difficult to read and to synthesize, they are incomplete. If the model were described in the traditional way, it would be much easier to understand and reproduce. Maybe I’m just not used to these tables, but it would be much easier both to see and understand your assumptions and also to recreate your model structure if you just used equations like normal papers do.
A counter argument for the position I’ve outlined is that it might be easier for artificial intelligence or natural language processing systems to extract conceptual or semantic information from knowledge tables like the ones used here, rather than from the narrative style text that is more common in scientific writing. This is an interesting thought; however, that is not the goal here. If we were to formally adapt scientific writing practices to be more accessible to NLP and automated text readers, I would like to see the problem addressed (theoretically and empirically) from that perspective.
- The paper outlines a new model that is used as an example of applying the Modeled Hydrologist concept to guide and/or document model development.
The new model performs poorly against the benchmark. What is the advantage of this new model vs. existing models? What are the use cases for a model like this? Is the only reason for creating this new model to provide an example for the “knowledge table” procedure? If so, it seems artificial. In general, the model itself is not something that I can see any hydrologist using or being interested in for any practical or intellectual reason. Publishing a new model to illustrate a new philosophy would be interesting if it worked and if it demonstrated some particular value of the new philosophy, but here it didn’t work, and the authors did not make a convincing argument about how or why the new philosophy gave us something that standard scientific writing would not.
[1] O'Hara, F. M. "A brief history of technical communication." ANNUAL CONFERENCE-SOCIETY FOR TECHNICAL COMMUNICATION. Vol. 48. UNKNOWN, 2001.
[2] Todd, Jeff. "Teaching the history of technical communication: A lesson with Franklin and Hoover." Journal of technical writing and communication 33.1 (2003): 65-81.
[3] Beven, Keith. "Spatially distributed modeling: Conceptual approach to runoff prediction." Recent Advances in the Modeling of Hydrologic Systems. Springer, Dordrecht, 1991. 373-387.
[4] Gupta, Hoshin V., et al. "Towards a comprehensive assessment of model structural adequacy." Water Resources Research 48.8 (2012).
[5] Gupta, Hoshin V., and Grey S. Nearing. "Debates—The future of hydrological sciences: A (common) path forward? Using models and data to learn: A systems theoretic perspective on the future of hydrological science." Water Resources Research 50.6 (2014): 5351-5359.
Citation: https://doi.org/10.5194/hess-2021-170-CC1
AC1: 'Reply on CC1', Greg O'Donnell, 07 May 2021
Grey
Thank you for taking the time to review this paper. Your review approaches the paper as if it is a collection of separate technical notes: one for a model, another for a method, and a third for a set of concepts and definitions. You then robustly criticize each note on the basis that its contents are not fit for purpose in general RR modelling (along the lines that the model does not have advantages relative to existing models, the method is poorer than existing methods, and the definitions and concepts are redundant).
The reality of the paper is quite different. It is about an experiment which is a scientific exploration of the very difficult problem of properly linking performance to hydrologic knowledge. Specifically, it is a report on the huge effort expended by a pair of researchers to develop numerous arguments and definitions, reframe some historical arguments, develop a full blown method and a completely new RR model, and thus make it possible to create a simple concrete example for the link between performance and hydrologic knowledge (that concrete example is a benchmark for the link and gives a basis for generating hypotheses). Somewhere, therefore, there must be a problem with communication or expectation. Note, for example, the paper does not even hint that the model is proposed for general use in the way you assume (and are very robust about) in your review. Perhaps that is a problem of expectation. If there are problems of communication, then surely they are quite easy to resolve.
We note that in the future there may be a more detailed account of your concerns. In the meantime, here are responses to three points you have made so far.
1. You commented that "The new model performs poorly against the benchmark". A benchmark is not a target; it is a reference point against which things can be measured. The performance benchmark used shows what can be achieved when employing the substantial advantages of allowing calibration and using evaporation data. KERR is simple, parameterless, and does not use evaporation data, so had its performance consistently matched or exceeded the benchmark, the resulting shock would have marked the start of a revolution in RR modelling.
2. You commented that "In general, the model itself is not something that I can see any hydrologist using or being interested in for any practical or intellectual reason". Let us look closely at KERR: (i) it is probably the best performing simple parameterless model available for temperate catchments; (ii) it gives information on similarity across time and place; (iii) it can use proxy catchment data which is not scaled or modified in any way; (iv) it is linked directly and comprehensively to a set of hydrologic knowledge; and (v) it preserves that knowledge and allows it to be quantified. It is easy to imagine a research hydrologist spending a decade or more working with KERR and its descendants.
3. Our response to your claim that the concept of the Modelled Hydrologist (MH) is redundant is as follows. A perceptual or conceptual model does not exist in a vacuum; it entails understanding and actions by human beings, and places restrictions on what that understanding and those actions can or must be. There is, therefore, an entity (the MH) which is larger than the perceptual or conceptual model. This raises the question of what that larger entity is and how it relates to the search for hydrologic laws at the catchment scale. There is, though, another side to this. One of the points made in the blind validation work is that models and modellers must be seen as a package (Ewen and Parkin, 1996). Our experience is that hydrologists running an RR model sometimes forget the nature of the model. Sometimes it is treated as a statistical black box. The worst case is when the model is treated as if it is reality, and it is implicitly assumed that there are no constraints on what can be concluded from the resulting simulations. Putting all fine scientific arguments to one side, it is quite legitimate in an international research journal to encourage thinking, or a change in thinking, about the nature of RR modelling and RR models.
There are obvious dangers in creating a lot of things at one time. It gives multiple targets for criticism (fair and unfair criticism), and runs the risk that the baby will be thrown out with the bathwater.
John and Greg
Citation: https://doi.org/10.5194/hess-2021-170-AC1
CC3: 'Reply on AC1', Grey Nearing, 07 May 2021
Hi John and Greg,
Please notice that your reply does not address the majority of my comments. You have chosen to focus on one (small) aspect of my comment and ignore the rest. I acknowledged explicitly in my comment that your model was not intended to be a new model for practical use, yet you treated my comment as if this was the point and ignored the point I was making (which was stated very clearly). I would appreciate a more sincere attempt to respond to what I wrote.
Thank you,
Grey
Citation: https://doi.org/10.5194/hess-2021-170-CC3
CC4: 'Reply on AC1', Grey Nearing, 07 May 2021
Just to add a little clarity to my previous response, I want to make it clear that my opinion is that this paper is not anywhere near the ballpark of being a serious contribution to the literature. It is (yet another) philosophically naive paper by hydrologists who are working from intuition rather than from any epistemological formality.
I do not agree with the authors that there needs to be a reconceptualization of how we think of rainfall-runoff models. This tired cliche of people treating models as "black boxes" is an excuse to do bad philosophy, not an actual phenomenon that happens among model developers. Notice that the authors did not actually point to any concrete problem (bad inferences, bad results, bad predictions) that occurs *because of* this supposed problem (thus my criticism of their model); they imagine an audience of people who are apparently less sophisticated than themselves and lack a basic understanding of what models are. This imaginary bogeyman does not exist, and if it did exist, it would not be justification for doing bad philosophy or for changing how we write scientific papers.
What I mean when I say that this is bad philosophy is simple. This paper touches primarily on two subjects: (1) scientific epistemology, especially as it relates to the role of models in science, and (2) scientific writing. These authors make zero effort to review either of these (large) bodies of literature. They cite only other hydrologists who have made the same mistake in the past. We have a tradition in hydrology of letting hydrologists pretend to be philosophers without any actual attempt to synthesise the state of the philosophy of science as it relates to the problem of model realism (or uncertainty, or hypothesis testing, or underdetermination, etc.). This paper does not make even an attempt to build on the state of the science (here, the state of the philosophy) that is at the core of the discussion. The authors make no attempt to synthesize scientific epistemology or the philosophy of technical writing (which are both deep fields in their own right), yet they presume to make suggestions about both. It's embarrassing to read this paper.
Imagine if the situation were reversed: suppose a philosopher with no technical knowledge published a paper on climate change attacking a strawman problem and coming to a ridiculous conclusion about how climate scientists should change their models (perhaps their suggestion is to account for sun cycles), without making an attempt to understand the current state of climate models. The analogy is that this current paper under review comes to an equally ridiculous conclusion about how to write scientific articles with "knowledge tables" in "plain English" instead of using the standard conventions of technical writing, and it comes to this conclusion without making even an attempt to understand either (1) the state of the philosophy of science on modeling or (2) the state of the science on the philosophy of technical writing. The hypothetical paper by this hypothetical philosopher would not pass the laugh test, let alone be sent out for review in any journal (hopefully), yet here we are reviewing an equally ridiculous philosophy paper in a hydrology journal. Frankly, it is disgraceful that we would even consider publishing this nonsense; this paper makes a mockery of not one, but two academic disciplines.
I strongly advise that this paper should be rejected.
Citation: https://doi.org/10.5194/hess-2021-170-CC4
AC2: 'Reply on CC4', Greg O'Donnell, 17 May 2021
Grey
This is a response to CC3 and CC4.
CC3 says: "You have chosen to focus on one (small) aspect of my comment and ignore the rest." In fact, we focussed on a claim that our model has absolutely no lasting value whatsoever to anyone, and a claim to the effect that the subject of the title of the paper is a redundant conception. These claims are central to two of the three general concerns you highlighted in your review. The claims are not well founded (they lack insight) and some readers may read the review but not the paper. We therefore responded to the claims as soon as we could.
A sense of proportion and fairness is needed in discussing the third highlighted concern (philosophy). In CC4, in the name of philosophy, you try to shout down (cancel) hydrologists who use intuition creatively in hydrology. Can intuition, and the insight it brings, not simply be appreciated and adapted for use for the general good? It never crossed our minds that a reader or reviewer would persist in the notion that we are somehow trying to reinvent technical writing or are engaged with what you describe as "changing how we write scientific papers". Neither did it cross our minds that a reader or reviewer would persist in assuming we propose the use of everyday English other than in scientific exploration, and then only when it is useful and practical (our background is physically-based, distributed RR modelling, where the documentation runs to hundreds of pages of text, equations and diagrams).
The paper gives a science-based solution to a real-world problem: benchmark links between hydrologic knowledge and performance are needed as a basis for measurements related to engineering decisions. There is irony in that a serious attempt to be clear about what is assumed known in reaching the solution is attacked on philosophical grounds, especially given L128-131. Also, any discussion of the solution, or how it was arrived at, must take into account that in L180-181 we explicitly allow for permanent review.
We have been thinking about what might be covered if a discussion section is to be added to the paper. The predictions are for the numbers in runoff records, so in the context of the paper the records are reality. Say there are three regions in a space: physical reality (i.e. the river catchments), hydrologic knowledge and performance. The paper is about a single mapping from hydrologic knowledge to performance. Other mappings are not discussed, such as mappings to or from physical reality, or back from performance. One-to-many, many-to-one and many-to-many mappings are not discussed. To the extent that it can be helpful, such mappings could be described in a discussion section in terms of common philosophical concepts which interest RR modellers.
The 2nd paragraph in CC4 is grossly unfair. It seems to be a reaction to this text from AC1: "One of the points made in the blind validation work is that models and modellers must be seen as a package (Ewen and Parkin, 1996). Our experience is that hydrologists running an RR model sometimes forget the nature of the model. Sometimes it is treated as a statistical black box. The worst case is when the model is treated as if it is reality, and it is implicitly assumed that there are no constraints on what can be concluded from the resulting simulations." The term "black box" seems to have been lifted from this text and its meaning adjusted to fit your case. The reality is that RR models are often run as a general resource, well outside the control of model developers (you seem to have assumed that the text is about model developers running their own models). Some models run as a general resource have considerable complexity, and this can lead to belief in simulated detail (including spatial variations in response) or to all the available energy being spent on the sheer effort of parameter calibration against one or a few statistics (i.e. treating the model as if it is a black box).
John and Greg
Citation: https://doi.org/10.5194/hess-2021-170-AC2
CC5: 'Reply on AC2', Grey Nearing, 17 May 2021
CC6: 'Reply on AC2', Grey Nearing, 17 May 2021
As a note to the editor regarding my previous comments, there is only one question that needs to be answered:
Is it appropriate for hydrologists to publish on subjects that have well-established homes in different academic departments, without making an effort to collaborate with or incorporate the literature from those departments?
If we believe that this is appropriate, then the message we are sending is that while we value the importance of the questions that these other disciplines address (here, philosophy of science and technical communication), we do not value the contributions of practitioners in those fields. My opinion is that this degrades the structure of academic pursuits in toto. It is, in my estimation, pure arrogance to imagine that we can speak meaningfully about deep questions that are at the center of other disciplines' areas of study without collaborating directly with those other disciplines, or at least making an attempt to read and synthesize the literature from those other areas of study.
Citation: https://doi.org/10.5194/hess-2021-170-CC6
AC5: 'Reply on CC6', Greg O'Donnell, 07 Jun 2021
The reviewer recommends that the paper should not be published because it inappropriately addresses “deep questions that are at the center of other disciplines' areas of study”. The disciplines in question are philosophy and technical communication. In the paper, the RR records are treated as a set of numbers that lie at an outer boundary to the work (see Fig. 1), so all deep philosophical questions about the reality of river catchments lie outside the scope of the work. The usual rules of logic, English, and written communication are not breached in the development of KERR, nor is the work unusual in paying close attention to describing the basis and assumptions for modelling. All deep general questions in technical communication therefore lie outside the scope of the work, such as questions about how scientific papers should be written.
Here is a note on the science in the paper. It addresses the following statement, in CC5, from the reviewer:
There is no science in this paper. No hypothesis was tested (in my original review, I attempted to - generously - treat the new model development as a hypothesis test of the new philosophy, but the reviewers did not even recognize that this is what I was doing in their replies).
The title of the paper is “If a rainfall-runoff model was a hydrologist”. An appropriate hypothesis was tested, i.e., an RR model can be a model of a hydrologist. KERR is a model of a layman. KERR was tested by comparing its median NSE against that achieved using GR4J, with success being equated with achieving a value approaching that for GR4J (showing that KERR is a reasonable RR model, taking into account that GR4J has the advantage of using evaporation data and calibration). The layman is a benchmark for a hydrologist (i.e. a reference against which hydrologists can be compared and measured).
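[Editor's note: the performance measure behind this test, the Nash-Sutcliffe Efficiency (NSE), can be sketched in a few lines. This is an illustrative sketch only; the toy catchment data below are invented for the example and do not come from the paper.]

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of the model error
    variance to the variance of the observations. NSE = 1 is a perfect
    fit; NSE = 0 means the model is no better than the observed mean."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Median NSE over a set of catchments (invented toy flows; the paper's
# test compares KERR's median NSE over 38 UK catchments with GR4J's).
catchments = {
    "A": ([1.0, 2.0, 3.0, 2.0], [1.1, 1.9, 2.8, 2.2]),  # (obs, sim)
    "B": ([0.5, 0.8, 1.5, 1.0], [0.6, 0.7, 1.4, 1.1]),
}
median_nse = np.median([nse(o, s) for o, s in catchments.values()])
```

Under this measure, comparing two models on the same set of catchments reduces to comparing their two median NSE values.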
Citation: https://doi.org/10.5194/hess-2021-170-AC5
CC2: 'Comment on hess-2021-170', Grey Nearing, 27 Apr 2021
Citation: https://doi.org/10.5194/hess-2021-170-CC2
RC2: 'Comment on hess-2021-170', Anonymous Referee #2, 24 May 2021
Review of the manuscript “If a Rainfall-Runoff Model was a Hydrologist” of Ewen and O’Donnell
Summary
In the manuscript “If a Rainfall-Runoff Model was a Hydrologist” by Ewen and O’Donnell, a set of parameterless rainfall-runoff models is developed in an experiment which aims to quantify the importance of the knowledge contained by the model itself. To make this knowledge explicit, the rainfall-runoff model is personified as a layman with an interest in the weather and river flows. The model is parameterless and relies on time-matching based on the similarity of the simulation day with a set of other days from the historical data record. The model performance of the developed KERR model is overall just slightly lower than the GR4J model performance for a set of UK catchments. A main finding of the paper is the strong relative importance of the temporal pattern of antecedent rainfall. This concrete model development example supports a broader and more philosophical discussion on hydrologic knowledge and laws within rainfall-runoff models.
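[Editor's note: the time-matching idea summarised above can be sketched as a simple analogue scheme. The function below is a hypothetical illustration under our own assumptions (the feature choice, Euclidean similarity, and k-day averaging are invented for the example); it is not the actual KERR algorithm.]

```python
import numpy as np

def analog_runoff(day_features, history_features, history_runoff, k=5):
    """Predict runoff for a simulation day as the mean observed runoff
    on the k most similar historical days. Similarity here is plain
    Euclidean distance in feature space (e.g. antecedent rainfall
    totals and a seasonality index); this matching rule is an
    illustrative assumption, not the one used by KERR."""
    hist = np.asarray(history_features, dtype=float)
    target = np.asarray(day_features, dtype=float)
    distances = np.linalg.norm(hist - target, axis=1)
    nearest = np.argsort(distances)[:k]  # indices of the k closest days
    return float(np.mean(np.asarray(history_runoff, dtype=float)[nearest]))
```

Each simulation day is mapped to a runoff value without any calibrated parameters, which is the sense in which such a scheme can be called "parameterless" (k and the features are fixed design choices, not values fitted to the runoff record).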
The manuscript addresses relevant scientific questions on the role that hydrologic knowledge contained in rainfall-runoff models plays in model performance. However, at first read, I found the manuscript to be confusing in its structure and in its balance between the broader philosophical discussion and the very concrete, simple and specific modeling experiment. Which aspects of the specific example should or can we apply in more complex traditional rainfall-runoff modeling? Is it the documentation of knowledge through personification of the rainfall-runoff model?
I hope that the comments below will help improve the manuscript.
General comments:
1) The aim of the paper is not clearly stated in the abstract; I would suggest adding it explicitly. The aims are described in L70 and later in L110 and L174; however, throughout the manuscript it remains unclear what exactly is meant by ‘corruption’ of hydrologic knowledge flows within RR modelling. Could you clarify this further?
2) The manuscript does not contain a dedicated Conclusion section. Concluding remarks are provided in the Summary of Section 8. However, I think it would help the reader to include a dedicated conclusion section which specifically links back to the aims of the study. This would help to clarify the main message/focus of the manuscript.
3) The experiment was performed for a set of UK catchments. Could you discuss the application of the developed parameterless models and the conclusions drawn on the importance of wetness in the light of different climatic zones?
4) How could the proposed methodology of quantifying the importance of hydrologic knowledge held by the MH on model performance be applied in more traditional rainfall-runoff modeling?
5) Hydrological modeling is often used in practice to quantify the effect of change in a catchment (e.g. land use). In science, hydrological modeling is often used to increase our understanding of catchment functioning. Both would be difficult using the proposed parameterless-model approach; could you please elaborate on this?
6) The manuscript includes several references to the study of Jakeman and Hornberger (1993). However, a short summary of the main aspects of this paper in relation to the current paper seems to be lacking.
7) The way the work is presented is sometimes confusing. For example, in Section 4.2, the Trivial and Seasonal RR models are presented. Later, in Section 6, an additional Wetness model is mentioned. In Table 6, the KERR model is also presented. Perhaps it would be good to clarify this in Section 4.2 and in the Methods section (Section 3) so that the reader has a better understanding of the main approach.
Specific comments
L5: Could you add here why personification can also be instructive?
L10: Simplification of complex systems is inherent to modeling, but I guess you want to quantify how, and which parts of, the knowledge contained in the model most affect model performance?
L11: What do you mean by classic MH?
L17: I found the sentence with “the relative importance is measured as 1 and 6” rather confusing. Do you mean: antecedent wetness is 6 times more important than seasons in simulating runoff in a time-matching modeling approach? I would suggest rephrasing this sentence (also in the Summary section).
Figure 1: Although mentioned in the caption, it was at first read not entirely clear to me that the numbers refer to the knowledge statements of the different Tables; I would suggest rephrasing the caption to clarify this.
L66-67: Could you elaborate further on this?
L70: As mentioned before, what do you exactly mean by ‘lost or corrupted’ hydrologic knowledge flows within RR modeling?
L72: What do you exactly mean by science is an “activity and attitude”?
L73: Could you elaborate further what you mean by the significant deficits, difficulties or dangers?
L85: It is not clear to me what you mean by “not well behaved”, could you clarify?
L111: aim (3) is not entirely clear, could you elaborate on “the need to draw valid hydrologic conclusions”?
L134: In ‘traditional’ hydrological modeling, would you also recommend describing equations in the form of statements in everyday English? How does this relate with the more commonly provided model descriptions and equations?
L138: with “Here,” do you mean: in the models being developed in this study?
L141: could you specify conclusion 3 in Sect. 1; it is unclear to me to which point you are specifically referring.
L174: when you mention “one of the aims”, I would find it helpful to also have a recap of the other aims of the research.
L198: I am not sure to fully understand what you mean by “the MH should know why”, could you clarify this part?
L201: “from such an experiment”, do you mean an experiment without data fitting?
L259: in contrast to the Trivial and Seasonal models, the Wetness model was not introduced earlier.
Fig. 2: Rain is a flux and should therefore have unit [L/T]; I assume here it is mm/d. It is somewhat confusing to show negative values on the y-axis of the top panel. It would be clearer for the reader if the rainfall pattern difference were also explained in the text describing Fig. 2.
In the paragraph at L313-319, the horizontal alignments around 1996 are explained twice.
Table 6: is the KERR model a general name for the Trivial, Seasonal and Wetness models and a fourth model? Could you please clarify?
L355: could you elaborate: “when drawing conclusions” on what?
Table 9: the conclusions based on the third statement, “Unpredictability”, are not explained in the text of Section 7.1. This statement only comes back in the Summary of Section 8. Could you please elaborate on this finding already in Section 7.1?
L429: Here, I would suggest explicitly mentioning “wetness, seasonality and unpredictability” to clarify “the three pieces of hydrologic knowledge given in 8 and 9”.
L449: Could you elaborate further on what you mean by the wetness kernel and discuss more in detail the related hydrologic law?
Discussion: how important is personification of RR models? This seemed to be an important focus point at the start of the manuscript.
Typos:
L14: a MH instead of an MH
L228: meteorological instead of metrological
Citation: https://doi.org/10.5194/hess-2021-170-RC2
AC4: 'Reply on RC2', Greg O'Donnell, 07 Jun 2021
EC1: 'Editor comment on hess-2021-170', Jan Seibert, 26 Jun 2021
We have received three valuable reviews for this contribution. While the reviews vary in their criticism, all three highlight that the manuscript is difficult to read. The reviews also point out that important previous studies have not been considered. Both points make it difficult to assess the novelty of the approach as it is hard to see how the presented work extends previous work if this work is not mentioned.
After reading the manuscript and discussion several times, I am still rather confused about why personification (the modelled hydrologist) is needed. Basically, what is nicely shown in this study is how easy it is to reproduce discharge in these UK catchments with some hydrological common sense. By considering seasonality and antecedent precipitation, one can reproduce observed discharge quite well. But this should not come as too much of a surprise; isn’t this the reason why our simple bucket-type models perform well? Still, I see value in this analysis of precipitation and discharge time series, providing insights into catchment functioning. However, I do not see why the personification framework is needed to present this analysis. Honestly, I agree with the reviewers that this makes the text rather confusing. Framing the study in this way also implies touching on important aspects of the philosophy of modelling. I don’t think this is needed, but if the study is framed in this way, it needs to be clearly motivated, and previous work in the field needs to be acknowledged.
Another issue is the applicability of the presented approach in more challenging situations. The KERR approach will be more challenging in regions where discharge is less directly related to precipitation than in the UK (e.g., snow, dry seasons, …). An important question is also how the KERR approach could be used for predicting discharge for conditions outside observed conditions, which is one of the most important tasks of a model after all (especially also for those applications mentioned at the beginning of the manuscript).
I need to add an important note: One reviewer used inappropriate language in some of his comments. The authors did not complain about these comments and declined an offer to have them rewritten. Therefore, in the interest of the scientific debate, we decided to leave the comment-reply chain as it is. However, we want to make it very clear that reviews need to be written so as to avoid any offensive formulations. It is accepted, and even welcome, to express strong opinions, but the use of abusive language does more harm than good in communicating the content of the critique. While this applies to the formulation of any review, in HESS, with its open review process, the use of appropriate language is especially crucial.
Best regards,
Jan Seibert
Citation: https://doi.org/10.5194/hess-2021-170-EC1