This work is distributed under the Creative Commons Attribution 4.0 License.
Predicting extreme sub-hourly precipitation intensification based on temperature shifts
Marika Koukoula
Antonio Canale
Download
- Final revised paper (published on 31 Jan 2024)
- Supplement to the final revised paper
- Preprint (discussion started on 20 Sep 2023)
- Supplement to the preprint
Interactive discussion
Status: closed
- RC1: 'Comment on hess-2023-226', Anonymous Referee #1, 20 Oct 2023
This study presents a model for sub-hourly precipitation extremes that integrates the distribution of the temperature before the precipitation events. This model has the great advantage of relating extreme precipitation to temperature in a way that can represent decreasing return levels for very high temperatures (above 25°C), which could have important implications for climate change impact studies. One important motivation for the TENAX model is that temperature is adequately represented by climate models, compared to precipitation (Fig. TS.2, IPCC, 2021). The fact that the TENAX model strongly relies on the distribution of temperatures before precipitation events is a great advantage in my opinion, and brings a lot of confidence in the resulting changes obtained from the model.
Overall, the concept and the methodology are clearly presented and motivated. Based on my reading, this study is of interest to the research community working on extreme precipitation. However, there are several points that should be carefully addressed before publication; they are listed below.
Validation of the approach
Throughout the manuscript, many conclusions are a bit oversold in my opinion. The terms “validation” (l.200, l.201, l.213, l.267, l.268, l.330) and “demonstrated” (l.289-209, l.356) seem to indicate that TENAX is the “true” model that should be used for any estimation of return levels of sub-hourly precipitation. However, any model has its assumptions and limitations, and the term “validation” has been criticized in this regard. Klemes (1986) proposed speaking about the operational adequacy of a model rather than about its validity. Oreskes (1998) provides a detailed discussion on this subject and motivates the use of the term “evaluation” instead of “validation”. I strongly advise the authors to replace “validation” with “evaluation” in the manuscript because, in this study, the concept and the model are mostly illustrated rather than “validated”. In particular, the title of the subsection “3.2 Validation of the projections” is particularly unfortunate because climate projections cannot be validated: they represent future scenarios based on a socio-economic pathway that will never occur. I recommend using “Hindcast model evaluation” instead.
There are no quantitative metrics that demonstrate the better performance of the TENAX model over a recognized benchmark (if one exists). Different experiments are performed and are useful examples of how the TENAX model behaves (Figs. 4, 5, 7). However, several times in the paper, other models considered as “best references” (MeteoSwiss estimates, the SMEV model in Fig. 4, “with reduced uncertainties with respect to official estimates” at l. 393-394) are used to show the superiority of the TENAX model, whereas (1) these comparisons are mostly qualitative, and (2) these “best references” also have their own limitations and uncertainties.
Convection-permitting simulations
In the manuscript, the use of simulations from convection-permitting models is discouraged at l. 49-52 and l. 239-240 on the basis that they are available for only a few socio-economic pathways, and with some delay with respect to GCM simulations. These arguments are not fair in my opinion. First, concerning the emission scenario, the current manuscript only exploits simulations with RCP8.5 and does not illustrate the advantage of having a multi-scenario ensemble. Second, regional climate simulations are also obtained with a lot of delay compared to GCM simulations: concerning the CMIP5 experiment, for example, GCM simulations were made available in 2011 (Taylor et al., 2011), whereas some corresponding EURO-CORDEX simulations have only been made available in the past two years (e.g. with the RCM HadREM3-GA7). Furthermore, it is true that there are currently a limited number of convection-permitting simulations, but I would add that they are becoming increasingly available and that the next climate scenarios produced for Switzerland will include them (CH2025, https://www.meteoswiss.admin.ch/about-us/research-and-cooperation/projects/2023/climate-ch2025.html).
Besides these arguments, I find that the manuscript overlooks the limitations of regional climate simulations in terms of the reproduction of intense precipitation events. RCM simulations not only fail to simulate convective events; they are also very limited concerning moderate intensities (e.g. greater than 10 mm), the number of precipitation events, and their extent (Caillaud et al., 2021). These limitations have an important impact on the illustration of the proposed method, since it exploits the changes in the number of precipitation events from the climate simulations (Fig. 6c). Even if the climate simulations have been corrected using quantile mapping methods, I am not confident that changes in the number of precipitation events can be properly obtained from regional climate simulations using parameterized convection.
Censoring threshold
A minor comment concerns the censoring threshold. I understood from Marra et al. (2019) that the motivation for this threshold was the estimation of the parameters of the distribution: a censoring threshold avoids the influence of very small and small intensities on the fitting. This approach is also proposed by Naveau et al. (2016) for the Extended GPD distribution. However, at l. 120-121, it seems that model 3 is only defined for values above the censoring threshold. I am also confused by the fact that this censoring threshold is defined in terms of a possible range for the intensity x, whereas at l.140 it seems to be a probability. I guess that it should be the quantile corresponding to the probability 0.9, but this needs to be clarified.
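To make the "0.9 as a probability" reading concrete, a left-censored fit in which the threshold is the empirical quantile at probability 0.9 could be sketched as follows. The data, threshold probability, and starting values here are synthetic stand-ins for illustration only, not a reconstruction of the manuscript's actual implementation:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(42)
# Synthetic stand-in for ordinary-event intensities (mm/h); purely illustrative.
events = rng.weibull(0.8, size=2000) * 5.0

# Reading "0.9 as a probability": the censoring threshold V* is the
# corresponding empirical quantile, so the lowest 90% of intensities
# are left-censored during fitting.
p_cens = 0.9
v_star = np.quantile(events, p_cens)

def neg_loglik(params):
    """Left-censored Weibull likelihood: intensities below V* contribute only
    through the probability mass F(V*); intensities above contribute their
    full log-density."""
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    dist = stats.weibull_min(shape, scale=scale)
    n_below = int((events <= v_star).sum())
    above = events[events > v_star]
    return -(n_below * np.log(dist.cdf(v_star)) + dist.logpdf(above).sum())

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead")
shape_hat, scale_hat = res.x
```

Under this reading, the full sample still informs the fit (through the censored mass), which differs from defining the model only for x above the threshold; that distinction is what the comment asks the authors to clarify.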
Additional comments
l.96-98: Here, the authors refer to MEV distributions, is that correct? It could be indicated.
l.101: “Magnitude of the ordinary events”. At this point of the text, it is not clear what these ordinary events refer to since they are defined later in section 2.1, and it is also unclear what the corresponding magnitude refers to (mean intensity over an event, maximum intensity).
l.112: The part “ordinary events of duration d are defined” is very confusing. I understand here that d is the duration of a storm whereas it seems to be a duration of interest as explained in Marra et al. (2020). I suggest providing an example or a schematic at this step, because the reader could easily get lost at the beginning of these technical explanations. This schematic could also illustrate the corresponding temperature T used in model 3 (see comment for l.160-161).
l.123: Model 3 is defined for x>V*, does it mean that W(x;T)=0 for x<V*?
l.150: Please introduce the contents of Fig. 2 and provide an interpretation of the results.
l.160-161: I am a bit confused by this comment. At l. 154-155, I understand that T is the average temperature observed during the D=24 h preceding the peak intensity of an event, which can occur at any time of the day. Daily temperatures from climate change projections are usually available for fixed daily intervals (e.g. from 12:00 on day D to 12:00 on day D+1). What is the temperature T taken from the climate change projections in that case?
l.167-172: Similar results are shown for Swiss stations in Evin et al. (2019) which applies a skew exponential power (SEP) distribution. The SEP distribution encompasses the Generalized Gaussian distribution and can take the skewness into account. Their Fig. 6 shows that depending on the season and the climate, the skewness and/or the kurtosis can be important and disqualify the application of Gaussian distributions, for some cases because of the skewness (for the station Jungfraujoch located at the elevation of 3580 m) and for other cases because of the flatness (for all stations during the months from March until August), as the authors obtained in the submitted manuscript.
l.204: “using an established non-asymptotic method”: I understand that the term “established” intends to indicate that the return levels obtained with this method can be considered as a benchmark method. I would suggest indicating the reference Marra et al. (2020) and a description of the method given at l. 207 and avoid the term “established” which tends to oversell the SMEV model in my opinion.
l.236: “can be approximated by changes in the daily temperatures during precipitation events”. I guess that these daily temperatures correspond to the days that include the peaks of the precipitation events. What happens if a precipitation event occurs on several consecutive days? Is it possible to be more explicit on the choice of the daily temperatures? Moreover, I understand that the term “changes” refers to absolute changes in mean temperatures between a future period and a reference period, as explained later in section 3.2. I do not understand why the daily temperature preceding the peak intensity can be approximated by the temperature during the precipitation event and these projected changes. At l. 236, it is indicated that “the advantage of using D=24 hours […] becomes now clear”, but it does not become clearer at this point; it was already indicated at l.160-161. I completely understand the motivation for using daily temperatures, because this is what is available in the majority of the regional climate simulations, but the manuscript does not illustrate that it is also a reasonable choice compared to other durations.
l.274-276: I did not understand what has been tested here. Likelihood ratio tests are usually applied to compare competing models fitted to the same data. Here, model 3 seems to have been fitted on two periods, and I do not understand how two models fitted with different data and different parameters can be used to test the similarity of the model. If a Gaussian distribution is applied to temperature data for a past and a future period, assuming just a shift in the mean of the distribution, you would obtain a similar likelihood but two different distributions. Even if the test is valid, in my opinion, the invariance of the magnitude model cannot be demonstrated by comparing its properties on two periods of 20 years: most stationarity tests for precipitation need very long time series to be significant, given the large inter-annual variability of precipitation (see Section 3.2 in Slater et al., 2021). However, these two lines are not necessary if this assumption is made explicit; the paragraph at l.244-252 was sufficient from my point of view.
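For comparison, the standard nested form of a likelihood ratio test fits both models to the same pooled data, e.g. one common Gaussian against period-specific Gaussians, and refers the statistic to a chi-squared distribution. A minimal sketch on synthetic temperatures, illustrating only the textbook procedure and not reconstructing what the authors actually did:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic event temperatures for a "past" and a "future" 20-year period.
t_past = rng.normal(12.0, 4.0, size=800)
t_fut = rng.normal(13.5, 4.0, size=800)
pooled = np.concatenate([t_past, t_fut])

def gauss_ll(x):
    """Maximized Gaussian log-likelihood (MLE mean and std, ddof=0)."""
    return stats.norm(x.mean(), x.std()).logpdf(x).sum()

# Null model: one common Gaussian for both periods (2 parameters).
ll_null = gauss_ll(pooled)
# Alternative model: a separate Gaussian per period (4 parameters).
ll_alt = gauss_ll(t_past) + gauss_ll(t_fut)

# The LR statistic is compared to chi-squared with df = difference
# in the number of free parameters (here 2).
lr = 2.0 * (ll_alt - ll_null)
p_value = stats.chi2.sf(lr, df=2)
```

In this nested setting a mean shift between the two periods is detected by construction, which is exactly why two independently fitted models with "similar likelihoods" do not establish invariance.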
l.316-319: This part lacks important information and should be improved. First, it is indicated that the projections are obtained from 10 regional climate models, whereas they are obtained from 4 different RCMs (SMHI-RCA4, MPI-CSC-REMO2009, CLMcom-CCLM4-8-17, DMI-HIRHAM5), which have been used to downscale 4 GCMs (ICHEC-EC-EARTH, IPSL-IPSL-CM5A-MR, MOHC-HadGEM2-ES, MPI-M-MPI-ESM-LR). Table S2 presents these 10 GCM/RCM combinations as different climate models, but the RCMs and the GCMs have their own properties and limitations (some GCMs warm more than others due to their climate sensitivity). I understand that space is limited in Table S2, but I suggest providing a separate table with the complete information on these simulations (GCM, RCM, GCM member). For example, the second line simply indicates “IPSL”, which is the name of a French institution that produces many different climate simulations (from GCMs and RCMs) and not the name of a climate model. What is also missing is the spatial resolution of the climate simulations. The CMIP5-EUROCORDEX simulations have been produced at a 12 km resolution. In https://www.nccs.admin.ch/dam/nccs/de/dokumente/website/klima/CH2018_Technical_Report-compressed.pdf.download.pdf/CH2018_Technical_Report-compressed.pdf, it is indicated that quantile mapping is applied to station observations as well as gridded observations at 2 km to derive localized climate projections. Have station observations been used to produce the corrected climate simulations used in this manuscript?
l.326-327: I suggest replacing “occurrence of annual precipitation events” with “average number of precipitation events at an annual scale”. In table S2, where does the decrease of 4-7% come from? The minimum n. values for the different stations are 0.84, 0.82, 0.78, 0.81, 0.83, 0.74, 0.82.
Table 1, Figure 9: I do not recommend using median/mean statistics obtained from a multi-model ensemble. The climate models have different biases and limitations and are not independent (Brunner et al., 2020). The mean or median of this ensemble cannot be considered a “best estimate”. While the use of multi-model ensembles is now the norm in climate change impact studies, the “one model / one vote” approach has been criticized in many studies (Tebaldi and Knutti, 2007; Abramowitz et al., 2019), as has considering a multi-model ensemble as a “sample” (von Storch and Zwiers, 2013). I strongly recommend providing intervals of the changes in Table 1 and removing the red line in Figure 9.
l.360-361: “this model emerges from the superposition of two seasonal Gaussian models”: this might be the case in your example but that is not necessarily true, i.e. generalized Gaussian models with heavy tails can also be obtained at a monthly scale (Evin et al., 2019).
l.477: I did not find a reference “Marra et al. 2021b” and only “Marra et al. 2021” is cited at l.60. Conversely, there are two references Marra et al. (2023).
References
Abramowitz, Gab, Nadja Herger, Ethan Gutmann, Dorit Hammerling, Reto Knutti, Martin Leduc, Ruth Lorenz, Robert Pincus, and Gavin A. Schmidt. 2019. “ESD Reviews: Model Dependence in Multi-Model Climate Ensembles: Weighting, Sub-Selection and out-of-Sample Testing.” Earth System Dynamics 10 (1): 91–105. https://doi.org/10.5194/esd-10-91-2019.
Andréassian, V., C. Perrin, L. Berthet, N. Le Moine, J. Lerat, C. Loumagne, L. Oudin, T. Mathevet, M.-H. Ramos, and A. Valéry. 2009. “HESS Opinions ‘Crash Tests for a Standardized Evaluation of Hydrological Models.’” Hydrology and Earth System Sciences 13 (10): 1757–64. https://doi.org/10.5194/hess-13-1757-2009.
Ban, Nikolina, Cécile Caillaud, Erika Coppola, Emanuela Pichelli, Stefan Sobolowski, Marianna Adinolfi, Bodo Ahrens, et al. 2021. “The First Multi-Model Ensemble of Regional Climate Simulations at Kilometer-Scale Resolution, Part I: Evaluation of Precipitation.” Climate Dynamics 57 (1): 275–302. https://doi.org/10.1007/s00382-021-05708-w.
Brunner, Lukas, Angeline G. Pendergrass, Flavio Lehner, Anna L. Merrifield, Ruth Lorenz, and Reto Knutti. 2020. “Reduced Global Warming from CMIP6 Projections When Weighting Models by Performance and Independence.” Earth System Dynamics 11 (4): 995–1012. https://doi.org/10.5194/esd-11-995-2020.
Caillaud, Cécile, Samuel Somot, Antoinette Alias, Isabelle Bernard-Bouissières, Quentin Fumière, Olivier Laurantin, Yann Seity, and Véronique Ducrocq. 2021. “Modelling Mediterranean Heavy Precipitation Events at Climate Scale: An Object-Oriented Evaluation of the CNRM-AROME Convection-Permitting Regional Climate Model.” Climate Dynamics 56 (5): 1717–52. https://doi.org/10.1007/s00382-020-05558-y.
Evin, Guillaume, Anne-Catherine Favre, and Benoit Hingray. 2019. “Stochastic Generators of Multi-Site Daily Temperature: Comparison of Performances in Various Applications.” Theoretical and Applied Climatology 135 (3): 811–24. https://doi.org/10.1007/s00704-018-2404-x.
IPCC, 2021: Climate Change 2021: The Physical Science Basis. Contribution of Working Group I to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change [Masson-Delmotte, V., P. Zhai, A. Pirani, S.L. Connors, C. Péan, S. Berger, N. Caud, Y. Chen, L. Goldfarb, M.I. Gomis, M. Huang, K. Leitzell, E. Lonnoy, J.B.R. Matthews, T.K. Maycock, T. Waterfield, O. Yelekçi, R. Yu, and B. Zhou (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 2391 pp. https://doi.org/10.1017/9781009157896.
Klemes, V. 1986. “Operational Testing of Hydrological Simulation Models.” Hydrological Sciences Journal 31 (1): 13–24. https://doi.org/10.1080/02626668609491024.
Naveau, Philippe, Raphael Huser, Pierre Ribereau, and Alexis Hannart. 2016. “Modeling Jointly Low, Moderate, and Heavy Rainfall Intensities without a Threshold Selection.” Water Resources Research 52 (4): 2753–69. https://doi.org/10.1002/2015WR018552.
Oreskes, N. 1998. “Evaluation (Not Validation) of Quantitative Models.” Environmental Health Perspectives 106 (Suppl 6): 1453–60.
Slater, Louise J., Bailey Anderson, Marcus Buechel, Simon Dadson, Shasha Han, Shaun Harrigan, Timo Kelder, et al. 2021. “Nonstationary Weather and Water Extremes: A Review of Methods for Their Detection, Attribution, and Management.” Hydrology and Earth System Sciences 25 (7): 3897–3935. https://doi.org/10.5194/hess-25-3897-2021.
Taylor, K. E., R. J. Stouffer, and G. A. Meehl. 2011. “An Overview of CMIP5 and the Experiment Design.” Bulletin of the American Meteorological Society 93 (4): 485–98. https://doi.org/10.1175/BAMS-D-11-00094.1.
Tebaldi, Claudia, and Reto Knutti. 2007. “The Use of the Multi-Model Ensemble in Probabilistic Climate Projections.” Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 365 (1857): 2053–75. https://doi.org/10.1098/rsta.2007.2076.
von Storch, Hans, and Francis Zwiers. 2013. “Testing Ensembles of Climate Change Scenarios for ‘Statistical Significance.’” Climatic Change 117 (1): 1–9. https://doi.org/10.1007/s10584-012-0551-0.
Citation: https://doi.org/10.5194/hess-2023-226-RC1
- AC2: 'Reply on RC1', Nadav Peleg, 26 Oct 2023
We would like to thank the reviewer for their time and efforts in reviewing our manuscript and for their positive comments. The following are our responses to the main comments the reviewer made regarding the methodology and application. Moreover, we appreciate and agree with the specific text suggestions made in the review, which we will incorporate into the revised text, although we do not address them specifically in this document.
Main issues
- Validation of the approach. In agreement with the reviewer, the term "validation" will be changed to "evaluation" wherever appropriate. Additionally, we will include a table comparing the TENAX and MeteoSwiss/SMEV products, as well as their uncertainties (as available), to strengthen the model evaluation.
- Convection-permitting simulations. We did not intend to discourage the use of these models (in fact, some of the authors frequently use them in their studies) but rather to suggest an alternative for situations in which CPMs (or multiple CPM climate scenarios) are not available. The text and tone will be revised to clarify this point. Likewise, we agree with the reviewer's comment regarding the limitations and use of RCMs and will add a new paragraph in the discussion section to address the points raised.
- Censoring threshold. Thank you for highlighting this lack of clarity. In the revised text, we will clarify the two issues related to censoring threshold determination.
Specific issues
- “Magnitude of the ordinary events”. The text will be revised to clarify the term.
- “Ordinary events of duration d are defined”. As suggested, we will provide a schematic example in the supplementary material.
- In the revised manuscript, we will provide a better explanation and illustration of how the changes in temperature T are derived from climate models. In brief: we extract the change in temperature during wet days without matching individual rainfall events at a sub-hourly scale with daily temperatures.
- The text will be revised to simplify and clarify how we evaluated the TENAX model over two different periods.
- The information regarding the RCM-GCM members and the model family relationships will be added. Additionally, we will provide more information on how the quantile mapping was performed at the climate stations.
- In general, we agree with the reviewer's recommendation not to use the median/mean obtained from the climate models as a guide. The purpose of using it here is to demonstrate the capability of using the TENAX model in climate change studies. We will add text to elaborate on this approach and its limitations in the discussion section, indicating that future work with the model may benefit from using the interval approach.
Citation: https://doi.org/10.5194/hess-2023-226-AC2
- RC2: 'Comment on hess-2023-226', Anonymous Referee #2, 22 Oct 2023
General comments:
The authors provide a physically based statistical approach to estimate future sub-hourly extreme rainfall. The main idea is to use an event-based non-stationary Metastatistical Extreme Value (MEV) distribution for rainfall and a Generalized Gaussian distribution for the conditioning temperature of the events, while accounting separately for the frequency of events. The changes in temperature and in the occurrence of events are provided by climate models. They validate the approach in a hindcast experiment, assess uncertainties, and finally apply the framework in a case study to project 10-min extreme rainfall for 8 climate stations in Switzerland.
The idea is novel and significant, clearly explained, and objectively validated. The manuscript is short and well written, getting to the point without unnecessary text. The developed software is freely provided. From my point of view, this is an excellent paper, and I have only minor suggestions for improvement (see below).
Minor specific comments:
- Line 80: “Using current methods it is thus impossible …” I would not be so strong and recommend to replace “impossible” by “highly uncertain” or a similar term.
- Section 2.1/4: It is known that the performance of the MEV approach depends strongly on the correct selection of the underlying probability distribution for the ordinary events. Here the Weibull distribution is selected without much discussion. I would propose to at least provide goodness-of-fit test results for the case study.
- Section 2.2/4: Similarly, I would also propose to show goodness-of-fit test results for the applied Generalized Gaussian distribution for the temperature data.
- All text: The authors use “Montecarlo” in the text. This is quite uncommon. I would propose to write this term as “Monte-Carlo (MC)”.
- Line 282: … “the fitted SMEV models to both periods (dashed lines in Fig. 7b)”. As I understand it, you fitted the SMEV to both periods; accordingly, there should be two dashed lines in this figure, but I can see only one dashed blue line. Is the line for the future period beneath the red line, or did you forget to put it in the figure?
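The goodness-of-fit checks suggested in the two section comments above could, for instance, take the following form. The samples are synthetic placeholders, and the one-sample Kolmogorov-Smirnov p-values are only indicative here, since they are anticonservative when the parameters are estimated from the same data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic placeholders for the two samples discussed above.
intensities = rng.weibull(0.7, size=1000) * 3.0   # ordinary-event intensities
temperatures = rng.normal(14.0, 5.0, size=1000)   # event temperatures

# Weibull check for the magnitude model: fit (location fixed at 0), then KS test.
shape, loc, scale = stats.weibull_min.fit(intensities, floc=0)
ks_weibull = stats.kstest(intensities, "weibull_min", args=(shape, loc, scale))

# Generalized Gaussian check for the temperature model ("gennorm" in SciPy).
beta, loc_t, scale_t = stats.gennorm.fit(temperatures)
ks_gengauss = stats.kstest(temperatures, "gennorm", args=(beta, loc_t, scale_t))
```

A bootstrap or Lilliefors-type correction would be needed for formal p-values, but even the raw KS statistics would document how well the chosen families describe the case-study data.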
Citation: https://doi.org/10.5194/hess-2023-226-RC2
- AC1: 'Reply on RC2', Nadav Peleg, 26 Oct 2023
We would like to thank the reviewer for providing a favorable assessment of the manuscript and our new model. Below we respond briefly to five minor specific comments raised by the reviewer, which will be addressed in the revised version of the manuscript.
- The text will be revised as suggested.
- We will better rationalize the choice of the Weibull distribution. In general, our models are derived from physical reasoning rather than empirical fitting (e.g., the emergence of the Generalized Gaussian from the seasonal distributions). In our view, the fact that TENAX can reproduce the distribution of extremes based on these physics-backed models for a range of cases is stronger evidence than the information a goodness-of-fit test could provide, in light of the limited availability of empirical observations.
- We kindly refer to the point above.
- The text will be revised as suggested.
- Thank you for pointing this out. Originally, we plotted both SMEV lines but then removed the one from the second period since it overlapped with the TENAX line and made the figure less readable. The text will be revised accordingly.
Citation: https://doi.org/10.5194/hess-2023-226-AC1