Use of expert elicitation to assign weights to climate and hydrological models in climate impact studies
Eva Sebok
Ernesto Pastén-Zapata
Peter Berg
Guillaume Thirel
Anthony Lemoine
Andrea Lira-Loarca
Christiana Photiadou
Rafael Pimentel
Paul Royer-Gaspard
Erik Kjellström
Jens Hesselbjerg Christensen
Jean-Philippe Vidal
Philippe Lucas-Picher
Markus G. Donat
Giovanni Besio
María José Polo
Simon Stisen
Yvan Caballero
Ilias G. Pechlivanidis
Lars Troldborg
Jens Christian Refsgaard
Download
- Final revised paper (published on 09 Nov 2022)
- Supplement to the final revised paper
- Preprint (discussion started on 23 Dec 2021)
- Supplement to the preprint
Interactive discussion
Status: closed
RC1: 'Comment on hess-2021-597', Anonymous Referee #1, 06 Jan 2022
Unfortunately, despite an admirable effort by the authors to produce a robust paper, the approach taken is fatally flawed for assessing impacts.
Here are several papers that discuss this issue.
Burgess et al., 2020: IPCC baseline scenarios have over-projected CO2 emissions and economic growth. Environmental Research Letters, 16(1), 014016.
Pielke Jr., R. and Ritchie, J., 2021: Distorting the view of our climate future: The misuse and abuse of climate pathways and scenarios. Energy Research & Social Science, 72, 101890.
Pielke Jr., R. and Ritchie, J., 2021: How Climate Scenarios Lost Touch With Reality. Issues in Science and Technology, 74-83.
Pielke Jr., R., Burgess, M. G., and Ritchie, J., 2021: Most plausible 2005-2040 emissions scenarios project less than 2.5 degrees C of warming by 2100. SocArXiv.
The more robust way to assess risk is the contextual approach proposed by
Füssel, H.-M. (2007), Vulnerability: A generally applicable conceptual framework for climate change research, Global Environ. Change, 17, 155–167.
O’Brien, K. L., S. Eriksen, L. Nygaard, and A. Schjolden (2007), Why different interpretations of vulnerability matter in climate change discourses, Clim. Policy, 7(1), 73–88.
Applications of this approach can be found in
Hossain, F., J. Arnold, E. Beighley, C. Brown, S. Burian, J. Chen, S. Madadgar, A. Mitra, D. Niyogi, R.A. Pielke Sr., V. Tidwell, and D. Wegner, 2015: Local-to-regional landscape drivers of extreme weather and climate: Implications for water infrastructure resilience. J. Hydrol. Eng., 02515002, doi:10.1061/(ASCE)HE.1943-5584.0001210.
Pielke, Sr. R.A., J. Adegoke, F. Hossain, and D. Niyogi, 2021: Environmental and social risks to biodiversity and ecosystem health – A bottom-up, resource-focused assessment framework. Earth, 2, 440–456. https://doi.org/10.3390/earth2030026
These uses of scenarios have become a cottage industry, but are poor science in my view.
If the authors still disagree, they need to show quantitatively, in hindcast runs, that the models can skillfully predict changes in the regional climate statistics that matter to the hydrological impacts they are assessing. Reanalyses (of changes in regional climate statistics) are the baseline against which the models should be compared, rather than comparing models with each other.
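As an illustration, such a hindcast check could look like the following minimal Python sketch, which compares each model's simulated trend in a regional climate statistic against a reanalysis baseline. All series, slopes and model names below are hypothetical placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1979, 2015)

# Hypothetical regional annual precipitation: a reanalysis baseline and a
# small ensemble of model hindcasts with differing imposed trends (mm/yr).
reanalysis = 800 + 0.5 * (years - years[0]) + rng.normal(0, 30, years.size)
hindcasts = {f"GCM-RCM_{i}": 800 + s * (years - years[0]) + rng.normal(0, 30, years.size)
             for i, s in enumerate([0.1, 0.4, 0.6, 1.2], start=1)}

def linear_trend(series):
    # Least-squares slope, i.e. the simulated change per year
    return np.polyfit(years, series, 1)[0]

reference = linear_trend(reanalysis)
for name, sim in hindcasts.items():
    print(f"{name}: trend error vs reanalysis = {linear_trend(sim) - reference:+.2f} mm/yr")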
This statement from their paper summarizes the inadequacy of the study:
“The experiment resulted in a group consensus among the climate modellers that all models should have an equal probability (similar weight) as it was not possible to discriminate between single climate models, while also maintaining the importance of using as many climate models as possible in order to cover the full uncertainty space in climate model projection”
The uncertainty of the model results does NOT bracket the real world uncertainty. These types of studies are misleading policymakers.
I checked "reconsider after major revisions" rather than recommending "rejection", since the authors' methodology of accepting the climate model results as having demonstrated skill at multidecadal regional climate change statistics is applied throughout the impacts communities. The authors need to respond objectively to the view that the approach they are using is not scientifically robust.
Citation: https://doi.org/10.5194/hess-2021-597-RC1
AC1: 'Reply on EC2', Eva Sebok, 18 Aug 2022
The comment was uploaded in the form of a supplement: https://hess.copernicus.org/preprints/hess-2021-597/hess-2021-597-AC1-supplement.pdf
EC1: 'Comment on hess-2021-597', Lelys Bravo de Guenni, 03 Jun 2022
Initial Comments on HESS-2021-597
Use of expert elicitation to assign weights to climate and hydrological models in climate impact studies by Eva Sebok et al.
This work addresses a very important issue in climate impact studies, namely the uncertainty associated with the use of different climate and hydrological models. However, the methods presented are rather unusual for the kind of analyses one is used to reading in journals like this one. I praise your efforts in bringing the hydrology and climate experts together, and your work demonstrates that there is a real difference between these two groups in how they approach the modelling of physical systems and the assessment of model predictability. However, I disagree with the use of the term "model democracy", especially if one is willing to accept the fact that "All models are wrong, but some are useful" (G.E.P. Box). Can you please properly define this term and/or use a different terminology?
Expert elicitation needs to be based on some prior information about model performance, and this prior information can be updated when new data become available (a Bayesian mindset). I am very concerned about your statement at line 375 and the fact that the climate experts were not comfortable with the EE methodology as a potential way of assigning weights to individual climate models. Can you please elaborate on this?
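To make this Bayesian mindset concrete, here is a minimal sketch (all numbers hypothetical): elicited prior weights are multiplied by the likelihood of a newly observed datum under each model, then renormalised.
import numpy as np
from scipy.stats import norm

prior = np.array([0.25, 0.25, 0.25, 0.25])                   # elicited prior weights
model_predictions = np.array([790.0, 810.0, 850.0, 700.0])   # hypothetical predictions
observation, sigma = 805.0, 40.0                             # new datum, assumed error

likelihood = norm.pdf(observation, loc=model_predictions, scale=sigma)
posterior = prior * likelihood
posterior /= posterior.sum()                                 # updated model weights
print(np.round(posterior, 3))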
I would also ask how the experts' qualitative evaluation can be articulated with the most recent trends in the use of Artificial Intelligence and Machine Learning methods. How would you demonstrate that these experts' opinions are superior to a purely data-driven approach? Addressing these important questions might enhance the contributions of your paper.
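For instance, one simple, purely data-driven baseline against which the elicited weights could be benchmarked is weighting inversely to each model's hindcast error (a sketch only; the RMSE values are hypothetical).
import numpy as np

# Hypothetical hindcast RMSEs for four models; lower error earns higher weight
rmse = np.array([25.0, 32.0, 28.0, 55.0])
data_driven_weights = (1.0 / rmse) / (1.0 / rmse).sum()
print(np.round(data_driven_weights, 3))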
Reference error: Please note that there is a problem with the reference to Table 2 in line 195
Thanks again for your submission to HESS.
Citation: https://doi.org/10.5194/hess-2021-597-EC1
AC1: 'Reply on EC2', Eva Sebok, 18 Aug 2022
The comment was uploaded in the form of a supplement: https://hess.copernicus.org/preprints/hess-2021-597/hess-2021-597-AC1-supplement.pdf
RC2: 'Comment on hess-2021-597', Anonymous Referee #2, 19 Jul 2022
Review for Hydrology and Earth System Sciences manuscript ID: hess-2021-597
Title: Use of expert elicitation to assign weights to climate and hydrological models in climate impact studies
Summary:
The paper uses five case study locations in Europe to test an expert elicitation approach to weighting climate and hydrological models. The study uses a structured expert elicitation approach, involving three stages of individual elicitations and consensus building, supported by initial training material. The study finds that hydrologists are more willing than climate modellers to articulate model weights, with climate modellers preferring model democracy in the absence of further investigation. The shift from an in-person to an online format, due to the COVID-19 pandemic, moved the process away from typical elicitation practice.
Overall, I found the paper interesting, well written and clearly structured. The previous review comments seemed to have missed the point that this paper is focused on testing a methodology – i.e., expert elicitation to support model weighting – rather than providing robust scenarios of future climate impacts for the case study locations. As such, I do not share their concerns. Moreover, precisely because “all models are wrong, some are useful”, I think the community should welcome efforts to more rigorously include expert judgement in providing actionable information, as relying purely on outputs from models (which we know to be flawed) risks over-confidence in uncertainty estimates.
That being said, the study isn’t as conclusive as I’d hoped. The finding that climate scientists continued to support model democracy – whilst interesting in the context explored – is not surprising. I suspect the finding would have been different if those involved had seen more model validation results – i.e. ability of the simulations to capture observed atmospheric circulation and trends, relevant to precipitation in the locations studied. It is also a shame that the climate modellers and hydrological modellers weren’t part of the same expert group as originally envisaged, as this may have yielded some more nuanced views and outcomes for model weighting.
Overall, I think the paper is worthy of publication. It will help advance the use of expert elicitation methods in the climate and hydrology community. I hope my comments and suggestions below help in improving the paper prior to publication.
Specific comments:
Line 94: “with a few exceptions (Mearns et al., 2017)” – add to this the recent study by Grainger et al. 2022 – see references. It would be interesting for the authors to comment on this study and how the methods followed compare, noting that there are very few studies in this space.
Line 102: Also worth citing McSweeney et al. 2015 – see references. This highly cited paper demonstrates a method for first excluding implausible models following model evaluation, and then spanning the uncertainty range of the remaining plausible models.
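As a minimal sketch of that two-step idea (exclude implausible models, then span the remaining range), with hypothetical skill scores and projected changes:
import numpy as np

names = np.array(["M1", "M2", "M3", "M4", "M5", "M6"])
skill = np.array([0.8, 0.3, 0.7, 0.9, 0.2, 0.6])        # hypothetical evaluation scores
change = np.array([5.0, -2.0, 12.0, -8.0, 20.0, 1.0])   # hypothetical projected changes (%)

plausible = skill >= 0.5                 # step 1: exclude implausible models
kept_names, kept_change = names[plausible], change[plausible]
order = np.argsort(kept_change)          # step 2: span the remaining range
subset = kept_names[order[[0, len(order) // 2, -1]]]
print(subset)                            # low, middle and high end of the range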
Line 150: Table 2 shows 7 GCM-RCM model combinations, but the final combination includes two realisations meaning there are 8 simulations considered. Why is this not discussed in the paper, and why were two realisations selected for this model combination?
Line 153: Typo “info3rmation”
Line 162: Typo / technical error in referencing Table 3 in the pdf – check.
Line 177: It is mentioned that the move to virtual elicitation gave an opportunity to explore how this worked. However, the paper doesn’t provide much analysis here. I would encourage a short paragraph in the discussion section to reflect more on the pros and cons of this change in approach.
Line 187: How were the 18 experts determined by the “partner institutes of the research project”? Were there any explicit or implicit considerations – e.g., PhD in a relevant topic, papers published, involvement in CORDEX?
Line 192: It would help demystify things for the reader if you briefly explain why one expert decided to leave the study.
Line 432: I agree obtaining results from models requires time, but the elicitation approach followed is also very time consuming and incurs a cost. I’m not sure saving time is a strong justification for following an elicitation approach.
Line 476: "Climate models often stem from short-term forecast models". Taken over many decades of model development this is true. However, it is a bit misleading, as CMIP5 climate models are quite different from operational numerical weather prediction models. Suggest clarifying what is meant here – yes, climate change is more of a boundary value problem, but scientists don't simply add elements onto an NWP model to simulate future climate; there is quite a lot more involved.
Line 522: Sentence ending “…are without doubt inappropriate”. This phrasing is too strong given the evidence. Had the climate modellers been provided with compelling evaluation information, I’m sure they would have been open to excluding models. Suggest deleting “without doubt” and rephrasing.
Further thoughts for the discussion section:
- It would be useful to comment further on the uncertainty cascade, referenced in the introduction section. In particular, does having expert opinion included in the articulation of uncertainties add yet another layer of cascading uncertainties? Or does it rather try to address and reduce the cascading uncertainties? It isn’t obvious to me.
- In general, climate models (GCMs and RCMs) are more complicated and have higher dimensionality than hydrological models. Could this be a reason why climate modellers prefer model democracy, especially if they aren’t entirely familiar with all aspects of the models?
- (with particular reference to lines 444 to 448) Another reason why experts won’t assign different weights to the climate models may be because they are all from the same generation – i.e., all RCMs downscaled with CMIP5 models. Might the result be different if comparing CMIP3 vs CMIP6 models for example?
- Sample size is an issue for this study. With only 6 experts in each group, any result cannot be considered robust – i.e., the finding that 6 hydrological modellers were more willing to assign weights than 6 climate modellers is not a robust finding. It would be good to comment on sample size limitations (see the illustrative sketch after this list).
- At the end of the conclusions section, you comment on the impact of COVID-19 in moving to virtual engagement. I suggest moving some of this to the discussion section and elaborating more on the methodological implications and insights that may be relevant to other studies in the future.
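As a purely illustrative sketch of the sample-size point above (the 5-of-6 versus 1-of-6 split is hypothetical): even such a stark difference in willingness to assign weights would not reach the conventional 5% level in a Fisher exact test.
from scipy.stats import fisher_exact

table = [[5, 1],   # hydrologists: willing / not willing (hypothetical)
         [1, 5]]   # climate modellers: willing / not willing (hypothetical)
_, p_value = fisher_exact(table, alternative="two-sided")
print(f"p = {p_value:.3f}")   # about 0.08 with only 6 experts per group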
All the best in revising the paper and I look forward to seeing the published article.
References
Grainger, S., Dessai, S., Daron, J., Taylor, A., & Siu, Y. L. (2022). Using expert elicitation to strengthen future regional climate information for climate services. Climate Services, 26, 100278.
McSweeney, C. F., Jones, R. G., Lee, R. W., & Rowell, D. P. (2015). Selecting CMIP5 GCMs for downscaling over multiple regions. Climate Dynamics, 44, 3237–3260.
Citation: https://doi.org/10.5194/hess-2021-597-RC2
AC1: 'Reply on EC2', Eva Sebok, 18 Aug 2022
The comment was uploaded in the form of a supplement: https://hess.copernicus.org/preprints/hess-2021-597/hess-2021-597-AC1-supplement.pdf
EC2: 'Comment on hess-2021-597', Lelys Bravo de Guenni, 21 Jul 2022
We now have two thoughtful reviews for your paper. Thank you to the two anonymous referees for your useful inputs and your time devoted to this important task; and thanks to the authors for putting together this work.
I invite the authors to incorporate both reviewers' comments and to include the arguments you think support your answers to the constructive ideas given by the reviewers.
I think the referees' comments are somewhat complementary to each other and provide a strong set of suggestions for producing a robust paper. Both reviewers agree that your work deserves publication after some considerations are addressed.
We look forward to a new version of your paper and future publication at HESS.
Citation: https://doi.org/10.5194/hess-2021-597-EC2
AC1: 'Reply on EC2', Eva Sebok, 18 Aug 2022
The comment was uploaded in the form of a supplement: https://hess.copernicus.org/preprints/hess-2021-597/hess-2021-597-AC1-supplement.pdf