Diel streamflow cycles suggest more sensitive snowmelt-driven streamflow to climate change than land surface modeling does
Sebastian A. Krogh
Lucia Scaff
James W. Kirchner
Beatrice Gordon
Gary Sterle
Adrian Harpold
- Final revised paper (published on 05 Jul 2022)
- Preprint (discussion started on 20 Aug 2021)
Interactive discussion
Status: closed
RC1: 'Comment on hess-2021-437', Jessica Lundquist, 05 Oct 2021
Krogh et al. present an interesting analysis comparing climate change sensitivity impacts on streamflow in the western United States between space for time substitution (which they term STS) and more traditional modeling techniques, where they focus on NoahMP-WRF pseudo-global-warming simulations (termed PGW). They introduce a new metric based on diurnal fluctuations in streamflow that are lag-correlated with solar radiation, and then calculate the day of year when 20% of all days with well-correlated diurnal fluctuations have passed. I like the idea and the premise of the paper, but I feel that major revisions are necessary to disentangle all the possible ways that errors in the analysis could lead to misconceptions in the results. I also feel that the number of acronyms and metrics in the paper (STS, PGW, DOS_20, etc.) make the written text hard to follow, and I strongly recommend that the authors minimize their use of acronyms, perhaps provide a table of acronyms and metrics, and overall work to increase clarity. I have organized my comments into requests for Major and Minor revisions below. The authors are welcome to contact me directly if they have questions: Jessica Lundquist, jdlund@uw.edu
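(For orientation, a minimal sketch of how a diel-cycle metric of this kind could be computed is given below. The 6-18 hour lag range and the 20% quantile follow the description above; the 0.8 correlation threshold, the use of pandas, and all function and variable names are illustrative assumptions, not the authors' implementation.)

```python
import numpy as np
import pandas as pd

def snowmelt_days(q, sw, lags=range(6, 19), r_min=0.8):
    """Flag days whose diel streamflow cycle is lag-correlated with solar radiation.

    q, sw : hourly streamflow and solar radiation (pandas Series, DatetimeIndex).
    lags  : candidate lags in hours (streamflow assumed to lag radiation by 6-18 h).
    r_min : illustrative correlation threshold for calling a day snowmelt-driven.
    """
    melt_days = []
    for day, q_day in q.groupby(q.index.normalize()):
        # best lagged correlation between this day's streamflow and earlier radiation
        r = [q_day.corr(sw.shift(lag).reindex(q_day.index)) for lag in lags]
        if len(r) and np.nanmax(r) >= r_min:
            melt_days.append(day)
    return pd.DatetimeIndex(melt_days)

def dos_20(melt_days):
    """DOS_20: day of year by which 20% of identified snowmelt days have passed."""
    return float(np.quantile(melt_days.dayofyear, 0.20))
```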
Major:
1) You need a clear analysis of how well your diurnal-cycle-correlation metric works across a range of streams.
1a. line 199-200 “more variable mean annual autocorrelation that ranges between roughly 0.1 and 0.6, with a mean value around 0.4” —need to explain what different mean annual autocorrelations refer to. These numbers are really new to most people. It would be helpful to tie this metric to the examples in Figure 1, as well as a discussion of rain vs. snow — a lot of the “snowmelt days” marked with purple circles in Figure 1 look like rain storms to me. The South Fork of the Tolt mostly gets rain, but also rain on snow. How do diurnal cycles that are identified but aren't really snow melt impact your results?
1b. As an alternate approach to when snowmelt is significant, you could look at the power spectra of your time series. See Figure 6 in Lundquist and Cayan 2002. The days with a sharp increase in power at the once per day cycle indicate snowmelt, whereas rain exhibits a much more red spectra. I know that power spectra are commonly used by oceanographers and not hydrologists, so your method is likely easier to understand, but it would be nice to have an independent method to check.
1c. In particular, I recommend clearer discussion about the strengths and weaknesses of this approach. It will miss rain-on-snow (signal dominated by rain), as well as early melt into dry soil (no streamflow response). It may also misclassify rain with a diurnal structure to it as snowmelt. Therefore (and you allude to this multiple times in the manuscript but should make it clearer), the method is best at detecting melt in non-rainy locations with fairly-saturated soils. With that in mind, in which of your basins do you trust the signal the most?
1d. Section 3.1 explains how well the DOS_20 is related to simpler magnitude metrics (DOQ_25 and DOQ_50) but doesn’t really justify why the DOS_20 is helpful beyond those metrics — can you better explain what we gain by doing this extra analysis. This section also identifies some rain-dominated rivers wherein these metrics appear less correlated. Is this because the method breaks down? Or can we learn important information from this change in relationship?
2) You need to more explicitly discuss the difference between a stream's climate sensitivity of snowfall changing to rainfall vs. a climate sensitivity of earlier snowmelt.
2a. Many of the earlier papers on streamflow sensitivity to climate change highlighted basins in the transitional rain-snow zone as being most sensitive because snowfall shifts to rainfall. From my own experience, the diurnal cycle in streamflow is particularly hard to detect in these basins because rain-induced runoff is such a larger signal than snow-induced runoff, especially when both happen more or less at the same time. Therefore, I imagine that your snowmelt index uniquely does not work well in these basins (e.g., the Tolt example in your paper, or the NF American River example in Lundquist and Cayan 2002 Fig. 6). I could imagine that for these basins, you could even get DOS_20 moving later in the season with warming if early season events are all rain and only a later, non-rainy period exhibits snowmelt.
2b. I imagine that including rain-on-snow or rain-dominated basins would bias your correlations with humidity because these tend to be more humid basins but also may have spurious results.
2c. I encourage the authors to think about rainfall vs snowfall and snowmelt sensitivities separately and to decide if they want to address both in this paper or only focus on the latter. Then, be very clear about this decision in the paper discussion.
3) You need to more clearly evaluate how well your NoahMP-WRF model set up is simulating streamflow timing in the current climate before examining the results of its climate sensitivity.
3a. It appears that you have a biased simulation of NoahMP-WRF — if the historic runoff date is off by 50 days (see line 260), the model is either simulating too much rain and too little snow or melting snow way too early. It’s hard to draw conclusions on sensitivity when using a biased model. Of course, if the model has less snow than the real world, it will be less sensitive to that snow disappearing. The paper would be much more meaningful if you included some evaluation of your NoahMP-WRF simulations — how do they compare to baseline observations and to other models run over the domain (similar western US climate-change papers).
3b. Also, if the NoahMP-WRF simulations perform better in certain regions (if I’m correct, these were only carefully vetted for Colorado), you may also want to focus your analysis on those regions separately. Do you get closer agreement in areas where the model represents snow processes more accurately? Might a check for space-for-time sensitivity against model sensitivity be a good check for model fidelity?
4. Discussion should be better streamlined and organized. This may be a good place to address major comments 1-3 above.
Minor:
Abstract: 1st sentence, “may cause” — I think the literature is pretty conclusive that warming does cause snow to melt earlier. Abstract should define what you mean by the 20th percentile of snowmelt days — this is meaningless to someone only reading the abstract. What do you mean by colder places are more sensitive than warmer places? In what way? Earlier snowmelt? If there’s no snow, of course it wouldn’t be sensitive to that.
Line 120: “DAYMET dataset (daymet.ornl.gov), which in turn is based on ground observations” — it’s interpolated from existing ground observations — worth specifying as sometimes this is far from truth.
lines 202-205 The percent of streamflow volume by a certain date vs temperature has been well established in the early literature (Stewart et al. 2005). Also see Lundquist et al. 2004 for a review of different ways to define the “spring onset” from snow pillows and from a hydrograph: https://doi.org/10.1175/1525-7541(2004)005<0327:SOITSN>2.0.CO;2
line 215: Yes, these sites are low elevation, receiving primarily rain, and I think your methodology is identifying rain events as having a diurnal cycle.
line 259: “greatly underestimated” — I think you mean than it’s modeled as earlier than observed, right? Underestimated makes me think that the magnitude of the streamflow is too low.
Citation: https://doi.org/10.5194/hess-2021-437-RC1
AC2: 'Reply on RC1', Sebastian Krogh, 13 Nov 2021
Answers provided in Bold
Krogh et al. present an interesting analysis comparing climate change sensitivity impacts on streamflow in the western United States between space for time substitution (which they term STS) and more traditional modeling techniques, where they focus on NoahMP-WRF pseudo-global-warming simulations (termed PGW). They introduce a new metric based on diurnal fluctuations in streamflow that are lag-correlated with solar radiation, and then calculate the day of year when 20% of all days with well-correlated diurnal fluctuations have passed. I like the idea and the premise of the paper, but I feel that major revisions are necessary to disentangle all the possible ways that errors in the analysis could lead to misconceptions in the results. I also feel that the number of acronyms and metrics in the paper (STS, PGW, DOS_20, etc.) make the written text hard to follow, and I strongly recommend that the authors minimize their use of acronyms, perhaps provide a table of acronyms and metrics, and overall work to increase clarity. I have organized my comments into requests for Major and Minor revisions below. The authors are welcome to contact me directly if they have questions: Jessica Lundquist, jdlund@uw.edu
Dear Prof. Jessica Lundquist, we greatly appreciate your critical feedback, and we will address your concerns as much as possible (see details below). Regarding the use of acronyms, we will reconsider and reduce how many we use. In particular, we will stop using STS and PGW; however, we believe that DOS20, DOQ25, and DOQ50 are necessary to shorten sentences, figure captions, figure labels, etc.
Major:
1) You need a clear analysis of how well your diurnal-cycle-correlation metric works across a range of streams.
1a. line 199-200 “more variable mean annual autocorrelation that ranges between roughly 0.1 and 0.6, with a mean value around 0.4” —need to explain what different mean annual autocorrelations refer to. These numbers are really new to most people. It would be helpful to tie this metric to the examples in Figure 1, as well as a discussion of rain vs. snow — a lot of the “snowmelt days” marked with purple circles in Figure 1 look like rain storms to me. The South Fork of the Tolt mostly gets rain, but also rain on snow. How do diurnal cycles that are identified but aren't really snow melt impact your results?
We agree that the auto-correlation metric can be confusing and can be better explained. We will add clarifications about its meaning in the context of Figure 1. Regarding the effect of rainstorms, we have set up several checks in our method to limit false-positive snowmelt days. First, we apply a more restricted monthly and site-specific window of lagged correlations based on clear-sky snowmelt-driven diel cycles only (see lines 140-145). This limits the chance that rainfall arriving at a time different from typical snowmelt (or ET) causes a false-positive melt day. Second, the rainstorm needs to produce a specific diel cycle that is highly correlated with solar radiation. On a completely cloudy day, solar radiation will still have a diurnal cycle like that of a clear-sky day, so a rainstorm that produces a snowmelt-like response (depending on the watershed’s surface and subsurface connectivity and the within-day rainfall distribution) may potentially produce a false positive. On a partly cloudy day, where rainfall occurred but clear-sky conditions prevailed before or after the event, the chances that a rainfall-induced diel cycle will be highly correlated with solar radiation are likely minimal, as the shape of the solar radiation diel cycle can have several discrete changes.
However, our method cannot guarantee that rainfall-driven diel cycles will not be picked up (though we argue that the chances are small). To address the reviewer’s concern, we propose screening the days that our method classifies as snowmelt-driven by whether precipitation occurred on that day, using daily NLDAS precipitation. This will allow us to detect whether a snowmelt day may have been wrongly identified, and we think this screen will show whether rainfall biases the streamflow diel cycles detected by our method. The Tolt River example is an important one because it has a lower number of snowmelt days. We will better highlight in Figure 1E the two examples of diel patterns that are not recognized as snowmelt and are screened out by our method. Figure 1F may be misleading because the diel cycles are not observable in the line graph. We will better discuss this figure and highlight the strengths and weaknesses of the method.
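(A bare-bones sketch of what such a precipitation screen could look like; the 1 mm threshold and all names are hypothetical, and the daily precipitation series is assumed to be a basin-mean value already extracted from NLDAS.)

```python
import pandas as pd

def screen_rain_days(melt_days, daily_precip, p_max_mm=1.0):
    """Keep only identified snowmelt days with (near-)zero daily precipitation.

    melt_days    : DatetimeIndex of days classified as snowmelt-driven.
    daily_precip : basin-mean daily precipitation (mm), indexed by date.
    """
    p = daily_precip.reindex(melt_days).fillna(0.0)
    return melt_days[p.values <= p_max_mm]  # days above the threshold are flagged as possible false positives
```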
1b. As an alternate approach to when snowmelt is significant, you could look at the power spectra of your time series. See Figure 6 in Lundquist and Cayan 2002. The days with a sharp increase in power at the once per day cycle indicate snowmelt, whereas rain exhibits a much more red spectra. I know that power spectra are commonly used by oceanographers and not hydrologists, so your method is likely easier to understand, but it would be nice to have an independent method to check.
We appreciate the reviewer’s recommendation about the power spectra, but we do not feel that it would be an improvement over our custom method, which adjusts for seasonal and basin-specific lags. We think a power-spectrum method would also struggle to distinguish between rainfall- and snowmelt-driven diel cycles. Please see the previous comment about Figure 1E and how we propose to check our method for rainy days.
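(For readers unfamiliar with the power-spectrum check suggested above, a rough illustration is sketched here; the 0.5-2 cycles/day background band, the ratio interpretation, and the function name are assumptions, and this is not part of the manuscript’s method.)

```python
import numpy as np
from scipy.signal import periodogram

def diurnal_power_ratio(q_hourly):
    """Power at ~1 cycle/day relative to the median power at nearby frequencies."""
    freq, power = periodogram(np.asarray(q_hourly), fs=24.0)  # fs in samples/day -> freq in cycles/day
    peak = power[np.argmin(np.abs(freq - 1.0))]               # once-per-day band
    background = np.median(power[(freq > 0.5) & (freq < 2.0)])
    return peak / background

# Applied to a multi-day window, a large ratio would suggest a snowmelt-like diel
# signal, whereas rain-driven runoff tends toward a "red" spectrum with no sharp
# once-per-day peak.
```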
1c. In particular, I recommend clearer discussion about the strengths and weaknesses of this approach. It will miss rain-on-snow (signal dominated by rain), as well as early melt into dry soil (no streamflow response). It may also misclassify rain with a diurnal structure to it as snowmelt. Therefore (and you allude to this multiple times in the manuscript but should make it clearer), the method is best at detecting melt in non-rainy locations with fairly-saturated soils. With that in mind, in which of your basins do you trust the signal the most?
The reviewer makes good points about what the method can and cannot do. As detailed in previous answers, we will add an independent method to check for rainstorms. One way that we will strengthen our argument is to subset the basins where we are most confident that rain is not occurring and streamflow is tightly coupled to snowmelt. This additional analysis will be discussed in the text and shown in a supplemental figure or table.
1d. Section 3.1 explains how well the DOS_20 is related to simpler magnitude metrics (DOQ_25 and DOQ_50) but doesn’t really justify why the DOS_20 is helpful beyond those metrics — can you better explain what we gain by doing this extra analysis. This section also identifies some rain-dominated rivers wherein these metrics appear less correlated. Is this because the method breaks down? Or can we learn important information from this change in relationship?
DOS20 aims to capture snowmelt-streamflow connectivity; however, it does not imply anything about the contribution (volume) of snowmelt to streamflow. As such, this metric can potentially be implemented as a relatively easy way to benchmark hourly hydrological and land-surface models beyond typical daily streamflow metrics or point-scale continuous SWE measurements. Specifically, we see potential to use this information to validate the snowmelt dynamics of a model. We will be more specific about the value of DOS20 in the discussion.
About the value of section 3.1, we believe there are two key points to be stated. First, the diel method is more uncertain under rainier conditions, as it may potentially misclassify rainfall events as snowmelt (we now propose checking those), and second, under rainier conditions the timing of streamflow volume is likely to be more strongly controlled by the timing of rainfall than by the timing of snowmelt, and thus those sites deviate from the 1:1 line in DOS20 vs. DOQ25 and DOQ50.
2) You need to more explicitly discuss the difference between a stream's climate sensitivity of snowfall changing to rainfall vs. a climate sensitivity of earlier snowmelt.
2a. Many of the earlier papers on streamflow sensitivity to climate change highlighted basins in the transitional rain-snow zone as being most sensitive because snowfall shifts to rainfall. From my own experience, the diurnal cycle in streamflow is particularly hard to detect in these basins because rain-induced runoff is such a larger signal than snow-induced runoff, especially when both happen more or less at the same time. Therefore, I imagine that your snowmelt index uniquely does not work well in these basins (e.g., the Tolt example in your paper, or the NF American River example in Lundquist and Cayan 2002 Fig. 6). I could imagine that for these basins, you could even get DOS_20 moving later in the season with warming if early season events are all rain and only a later, non-rainy period exhibits snowmelt.
We agree that a better discussion of the effects of changes from snow to rain on our results is merited. We trained a simple model to predict DOS20 based only on basic climatological information. This model shows, as the reviewer suggests, smaller sensitivity of DOS20 to climate variation (Figure 5) in warmer and cloudier locations. However, our simple inter-annual regression-based metric shows consistent trends toward earlier DOS20, even in the warmest and rainiest basins.
We also agree that this effect could be better discussed with regard to Figure 6. The largest differences between NoahMP and the STS method were in the sunny, cold basins, which would be least likely to see shifts from snow to rain and to be biased by the rainier basins.
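(A minimal sketch of the kind of regression-based, space-for-time sensitivity estimate referred to above; the predictors, linear form, and names are placeholders rather than the manuscript’s actual model.)

```python
import numpy as np

def dos20_temperature_sensitivity(dos20, temp, precip):
    """Fit DOS20 ~ a + b*T + c*P across basin-years; return b in days per deg C."""
    X = np.column_stack([np.ones_like(temp), temp, precip])
    coef, *_ = np.linalg.lstsq(X, dos20, rcond=None)
    return coef[1]  # change in DOS20 timing per degree of warming
```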
2b. I imagine that including rain-on-snow or rain-dominated basins would bias your correlations with humidity because these tend to be more humid basins but also may have spurious results.
We try to incorporate as much site and inter-annual variability in the dataset as possible to increase the predictive power of the space-for-time approach, since historically cold sites will transition into warmer and more humid sites, like those that currently have rainier conditions. That being said, we recognize the challenges in reliably capturing snowmelt events where rainfall is important (as discussed under the previous major comment). It is relevant to highlight that the sites in the Pacific Northwest (#24, 25 and 31) that have low snowfall contributions (as highlighted in Figure 3) are ultimately not used for the sensitivity analysis and thus do not affect the conclusions drawn in the study. Nonetheless, this will be further clarified and discussed in the revised version of the manuscript, which will include an analysis of the importance of rainfall-caused diel fluctuations.
2c. I encourage the authors to think about rainfall vs snowfall and snowmelt sensitivities separately and to decide if they want to address both in this paper or only focus on the latter. Then, be very clear about this decision in the paper discussion.
It is not easy to disentangle the two, but we agree that our method is better suited to answering questions about snowmelt sensitivity and that this should be the focus of the paper. However, we recognize that our empirical analysis reflects both the effect of changing precipitation partitioning and snowmelt sensitivities.
3) You need to more clearly evaluate how well your NoahMP-WRF model set up is simulating streamflow timing in the current climate before examining the results of its climate sensitivity.
3a. It appears that you have a biased simulation of NoahMP-WRF — if the historic runoff date is off by 50 days (see line 260), the model is either simulating too much rain and too little snow or melting snow way too early. It’s hard to draw conclusions on sensitivity when using a biased model. Of course, if the model has less snow than the real world, it will be less sensitive to that snow disappearing. The paper would be much more meaningful if you included some evaluation of your NoahMP-WRF simulations — how do they compare to baseline observations and to other models run over the domain (similar western US climate-change papers).
The reviewer makes a good point, and we will improve and better highlight the description of the model performance. To clarify, these simulations, made by the National Center for Atmospheric Research (NCAR) and presented by Liu et al. (2017), have previously been tested in terms of their meteorology and snow components (Liu et al., 2017; Scaff et al., 2020). We do agree with Dr. Lundquist that one should make sure the model reliably represents a particular system before looking at its sensitivity to climate change. Nonetheless, this type of simulation has been used for climate change analyses (Musselman et al., 2017, 2018), although to our knowledge its runoff component has not been tested. Furthermore, the NoahMP model underlies the US National Water Model (https://water.noaa.gov/about/nwm), and thus its relevance to policy and research is high.
Detailing the exact biases of the NoahMP simulations in the past is beyond the scope of this study, but we will summarize previous efforts in this arena. We will improve our discussion and analysis to show that NoahMP-WRF predicts an earlier historical DOQ25 compared to our STS method and historical observations (current Figure 6A), whereas predictions of DOQ50 are more similar between the methods and the historical observations (Figure 6B). A key finding is that the NoahMP DOQ50 is less sensitive to change than the STS method in the snowier basins, where the STS method should be most reliable.
3b. Also, if the NoahMP-WRF simulations perform better in certain regions (if I’m correct, these were only carefully vetted for Colorado), you may also want to focus your analysis on those regions separately. Do you get closer agreement in areas where the model represents snow processes more accurately? Might a check for space-for-time sensitivity against model sensitivity be a good check for model fidelity?
For the historical DOQ25, the NoahMP-WRF model actually performed best in the rainier sites (see circled blue symbols in Figure 6a) and a few other sites classified as ‘cloudy’ and ‘partly cloudy’, whereas the Rocky Mountain sites, characterized by ‘sunny’ snowmelt events, were the most biased (see circles in Fig. 6a). This suggests that the timing of streamflow volume is better represented in areas where snowmelt processes are less important, although other variables such as topographic (and thus climatic) gradients can also be important.
4. Discussion should be better streamlined and organized. This may be a good place to address major comments 1-3 above.
We will improve the discussion based on Dr. Lundquist’s suggestions, which will hopefully address her main concerns.
Minor:
Abstract: 1st sentence, “may cause” — I think the literature is pretty conclusive that warming does cause snow to melt earlier. Abstract should define what you mean by the 20th percentile of snowmelt days — this is meaningless to someone only reading the abstract. What do you mean by colder places are more sensitive than warmer places? In what way? Earlier snowmelt? If there’s no snow, of course it wouldn’t be sensitive to that.
We will change the abstract to read “climate change will cause …”, and provide a more meaningful introduction to DOS20. We will clarify what we mean by “cold sites are more sensitive”, which refers to the fact that the timing of early streamflow volume changes the most at cold sites compared to warmer sites.
Line 120: “DAYMET dataset (daymet.ornl.gov), which in turn is based on ground observations” — it’s interpolated from existing ground observations — worth specifying as sometimes this is far from truth.
We will change it to read as suggested by the reviewer.
lines 202-205 The percent of streamflow volume by a certain date vs temperature has been well established in the early literature (Stewart et al. 2005). Also see Lundquist et al. 2004 for a review of different ways to define the “spring onset” from snow pillows and from a hydrograph: https://doi.org/10.1175/1525-7541(2004)005<0327:SOITSN>2.0.CO;2
We appreciate the references. Stewart is already mentioned in the discussion, and we will add Lundquist et al (2004).
line 215: Yes, these sites are low elevation, receiving primarily rain, and I think your methodology is identifying rain events as having a diurnal cycle.
We appreciate the confirmation. These sites are not used later in the sensitivity analysis and, as previously discussed, our method does not guarantee that rainfall-driven diel cycles are excluded, but we are now adding an independent check.
line 259: “greatly underestimated” — I think you mean than it’s modeled as earlier than observed, right? Underestimated makes me think that the magnitude of the streamflow is too low.
We mean that the date DOQ50 is underestimated by the model, but to avoid confusion we will change it to “earlier than observed”, as suggested by Dr. Lundquist.
References:
Liu, C., Ikeda, K., Rasmussen, R., Barlage, M., Newman, A. J., Prein, A. F., Chen, F., Chen, L., Clark, M., Dai, A., Dudhia, J., Eidhammer, T., Gochis, D., Gutmann, E., Kurkute, S., Li, Y., Thompson, G. and Yates, D.: Continental-scale convection-permitting modeling of the current and future climate of North America, Clim. Dyn., 49(1–2), 71–95, doi:10.1007/s00382-016-3327-9, 2017.
Musselman, K. N., Clark, M. P., Liu, C., Ikeda, K. and Rasmussen, R.: Slower snowmelt in a warmer world, Nat. Clim. Chang., 7(3), 214–219, doi:10.1038/nclimate3225, 2017.
Musselman, K. N., Lehner, F., Ikeda, K., Clark, M. P., Prein, A. F., Liu, C., Barlage, M. and Rasmussen, R.: Projected increases and shifts in rain-on-snow flood risk over western North America, Nat. Clim. Chang., 8(September), doi:10.1038/s41558-018-0236-4, 2018.
Scaff, L., Prein, A. F., Li, Y., Liu, C., Rasmussen, R. and Ikeda, K.: Simulating the convective precipitation diurnal cycle in North America’s current and future climate, Clim. Dyn., 55(1–2), 369–382, doi:10.1007/s00382-019-04754-9, 2020.
Citation: https://doi.org/10.5194/hess-2021-437-AC2
RC2: 'Comment on hess-2021-437', Anonymous Referee #2, 08 Oct 2021
The authors present a new means of considering the sensitivity of snowmelt timing and streamflow response under warming climate conditions based on space for time substitutions. Their metric (DOS_20) is based on diel fluctuations in streamflow that correlate with solar radiation (after a time lag of 6-18 hours). They use this metric to assess regional sensitivity to warming across an array of small montane basins in the western U.S. They compare their approach to one using a physically-based modeling framework, highlighting differences in snowmelt-streamflow sensitivities derived from each method.
I think the approach presented here can provide valuable insights into the implications climate warming holds for water forecasting and management. However, I found the paper somewhat difficult to follow. I believe significant revisions are necessary to improve the clarity of the analysis. These are enumerated below.
1. Devote more space to background information. Numerous concepts are discussed with minimal introduction (e.g. space for time substitution, mean annual autocorrelation, diel streamflow cycles, etc). I understand that the authors are snow hydrologists writing for other snow hydrologists, but the paper would be significantly easier to follow with a proper setup for many of the concepts being discussed.
2. Streamline extremely dense figures and captions. There is a ton of information included in each figure--particularly Figures 1-3. I think it would be beneficial to break some of these into multiple figures in order to make them more digestible. At the very least, the authors should consider changes such as increasing the font size (overall, but particularly in the tiny inset histograms) and increasing the clarity of the captions, even if that means making them longer. It took me a long time to understand that the "thick line" referenced in the Figure 1 caption referred to the border of the text box itself.
3. Reduce the number of abbreviations in the text. Overall, there are a lot of abbreviations in this manuscript. Certain sections (e.g. Section 3.3) are particularly dense with abbreviations, and correspondingly hard to follow. I would recommend cutting down on the number of abbreviations for clarity.
4. Elaborate on the NoahMP-WRF simulations. It's hard to draw conclusions on this section of the analysis, because relatively little information is given about these simulations. An important feature of NoahMP is that it has multiple options for simulating rain-snow partitioning and snowpack albedo. It also has multiple snowpack-related parameters to which both snow and streamflow are quite sensitive. Without knowing the model physics options and parameters used, it is difficult to conclude whether the biases the authors observed are a structural problem with the model or just a poor setup.
5. Rain on snow. This seems like an important point to discuss in a paper about snowpack and streamflow under climate warming. How well does this new metric handle rain-on-snow events? Can they be resolved and included/excluded? Or are they a confounding factor?
Citation: https://doi.org/10.5194/hess-2021-437-RC2
AC1: 'Reply on RC2', Sebastian Krogh, 13 Nov 2021
Answers provided in bold
The authors present a new means of considering the sensitivity of snowmelt timing and streamflow response under warming climate conditions based on space for time substitutions. Their metric (DOS_20) is based on diel fluctuations in streamflow that correlate with solar radiation (after a time lag of 6-18 hours). They use this metric to assess regional sensitivity to warming across an array of small montane basins in the western U.S. They compare their approach to one using a physically-based modeling framework, highlighting differences in snowmelt-streamflow sensitivities derived from each method.
I think the approach presented here can provide valuable insights into the implications climate warming holds for water forecasting and management. However, I found the paper somewhat difficult to follow. I believe significant revisions are necessary to improve the clarity of the analysis. These are enumerated below.
We greatly appreciate the positive comments.
- Devote more space to background information. Numerous concepts are discussed with minimal introduction (e.g. space for time substitution, mean annual autocorrelation, diel streamflow cycles, etc). I understand that the authors are snow hydrologists writing for other snow hydrologists, but the paper would be significantly easier to follow with a proper setup for many of the concepts being discussed.
We appreciate the reviewer's feedback, and we will provide a more detailed introduction to the terms highlighted by the reviewer.
- Streamline extremely dense figures and captions. There is a ton of information included in each figure--particularly Figures 1-3. I think it would be beneficial to break some of these into multiple figures in order to make them more digestible. At the very least, the authors should consider changes such as increasing the font size (overall, but particularly in the tiny inset histograms) and increasing the clarity of the captions, even if that means making them longer. It took me a long time to understand that the "thick line" referenced in the Figure 1 caption referred to the border of the text box itself.
We appreciate the comment and agree with the reviewer that the figures are quite dense in information. We will do our best to increase readability by increasing font sizes and figure extents, and by splitting some figures if necessary.
- Reduce the number of abbreviations in the text. Overall, there are a lot of abbreviations in this manuscript. Certain sections (e.g. Section 3.3) are particularly dense with abbreviations, and correspondingly hard to follow. I would recommend cutting down on the number of abbreviations for clarity.
This comment was also provided by Prof. Lundquist, and we are reducing the number of acronyms in the manuscript. In particular, STS and PGW will no longer be used and will be spelled out instead. However, we believe that DOS20, DOQ25 and DOQ50 are necessary to avoid making the paper even longer, and in our opinion they are also easier to follow.
- Elaborate on the NoahMP-WRF simulations. It's hard to draw conclusions on this section of the analysis, because relatively little information is given about these simulations. An important feature of NoahMP is that it has multiple options for simulating rain-snow partitioning and snowpack albedo. It also has multiple snowpack-related parameters to which both snow and streamflow are quite sensitive. Without knowing the model physics options and parameters used, it is difficult to conclude whether the biases the authors observed are a structural problem with the model or just a poor setup.
We will provide more details about the key aspects of these simulations relevant to our work, as suggested by the reviewer; in particular, we will add more details about snow-related processes. However, the full details of the simulations are provided by Liu et al. (2017).
Liu, C., Ikeda, K., Rasmussen, R., Barlage, M., Newman, A. J., Prein, A. F., Chen, F., Chen, L., Clark, M., Dai, A., Dudhia, J., Eidhammer, T., Gochis, D., Gutmann, E., Kurkute, S., Li, Y., Thompson, G. and Yates, D.: Continental-scale convection-permitting modeling of the current and future climate of North America, Clim. Dyn., 49(1–2), 71–95, doi:10.1007/s00382-016-3327-9, 2017.
- Rain on snow. This seems like an important point to discuss in a paper about snowpack and streamflow under climate warming. How well does this new metric handle rain-on-snow events? Can they be resolved and included/excluded? Or are they a confounding factor?
As also noted by Reviewer 1, rain-on-snow events are problematic for our method, as we have no explicit way to address the impact of rainfall due to the lack of reliable rain/snow observations. It is likely that our method does not capture rain-on-snow events because the streamflow response lacks (or is unlikely to have) a diurnal shape, while the solar radiation cycle can have discrete hourly changes as conditions switch between clear sky and cloudy (and back), resulting in very low correlations. We will improve our discussion to incorporate the reviewer's suggestion. Also, please note the new screening method for rainy days that we propose in the answer to Reviewer 1's first major comment.
Citation: https://doi.org/10.5194/hess-2021-437-AC1