Many relatively minor edits have been made to the abstract and introduction to make the objectives and approach clearer. Much of the writing is more direct this time as well. Perhaps it is only because I am more familiar with the article and the issues it addresses, but comparison of a few sentences shows more direct and frank language.
I think I better understand how one could argue that concordance implies confidence, but given how the authors eventually apply it, I am not sure it arises through the “top-down” framing they imply through the analogy to intermodel comparisons. Rather, it seems to emerge from, and be applied in, the sense that an expectation is being tested, and more evidence is better. This would still require clarifying the argument for the paper to present a coherent “framework.” I would disagree with a simple “rebuttable presumption” that projections showing concordance are more trustworthy, but I could agree that specific comparisons between different “projections” (or, specifically, trends) can rule out particular errors in reasoning. The more inconsistencies that can be ruled out, the more useful the information is.
That being said, the presentation of results hinges less on the “concordance implies confidence” assumption than it did before. In fact, it is not clear that the assumption needs to be carried as a thread or assertion throughout the paper. The paper does a good job of showing all three approaches and how they agree or disagree in different places. It also gives some examples of reconciliation between approaches, examples that do not require a presumption that similar projections are more likely to be correct. Taken as an example of showing and comparing projections with different logical bases across geographically disparate areas, the paper is excellent. It may require only minor revisions to subtly reframe the concordance argument.
The point of bringing up Luce et al. (2009) and Luce et al. (2013) was to illustrate that the 2009 discussion (as you note) was naïve in taking a trend in runoff and using it to project into the future. In making this leap, we drew on a history of trend studies that looked backward through time and assumed that the changes that had occurred to date were a result of “climate change” writ large and would likely continue. The ensuing discussion, contrasting these results with precipitation data trends (now known to be over-extrapolated) and GCM projections, led some to believe that the declining flow trend was a result of increased ET, due primarily to increased forest vegetation over the region (caused by wildfire suppression). However, the fact that up to 50% of some of the basins examined burned during the period, yet those basins suffered almost as great a flow decline, suggested otherwise. The more rigorous examination in 2013 revealed that the trend could not reasonably be explained by ET, but rather by precipitation. We were able to demonstrate the linkage between zonal winds and precipitation, and further between zonal winds and meridional pressure gradients. We also noted that the great majority of GCMs project reduced zonal wind speeds because of reduced meridional pressure/temperature gradients. The sensitivity across different climate models matched the slope of the interannual relationship between zonal wind speed and meridional pressures. This provides a physically consistent understanding of what has occurred in the past and relates it to what might happen in the future, contingent on different hypotheses embedded in the climate models that affect the development of meridional temperature gradients.
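To illustrate the kind of check involved, a minimal sketch follows (Python, entirely synthetic numbers; the variable names, magnitudes, and noise levels are my assumptions, not the data from Luce et al., 2013). It compares the observed interannual slope of zonal wind against the meridional pressure gradient with the slope implied by a scatter of GCM-projected changes:

    import numpy as np

    rng = np.random.default_rng(0)

    # Observed interannual anomalies (synthetic stand-ins for, e.g.,
    # winter means from a reanalysis); units are nominal.
    dgrad_obs = rng.normal(0.0, 1.0, 40)                 # meridional pressure-gradient anomaly (hPa)
    u_obs = 1.5 * dgrad_obs + rng.normal(0.0, 0.7, 40)   # zonal wind anomaly (m/s)

    # Interannual slope: least-squares fit of wind on pressure gradient
    slope_interannual = np.polyfit(dgrad_obs, u_obs, 1)[0]

    # Projected changes, one point per GCM (synthetic): models with larger
    # reductions in the meridional gradient project larger wind reductions.
    dgrad_gcm = rng.normal(-0.8, 0.3, 20)                # projected gradient change (hPa)
    du_gcm = 1.5 * dgrad_gcm + rng.normal(0.0, 0.3, 20)  # projected wind change (m/s)

    slope_cross_model = np.polyfit(dgrad_gcm, du_gcm, 1)[0]

    print(f"interannual slope:  {slope_interannual:.2f} (m/s)/hPa")
    print(f"cross-model slope:  {slope_cross_model:.2f} (m/s)/hPa")

Agreement between the two slopes is what ties the historical behavior to the model projections in a physically consistent way, rather than any presumption that the past trend will simply continue.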
In this case the annual streamflow trends stood apart from both the accepted historical precipitation estimates and the GCM precipitation projection trends, but those differences were a clue that something needed to be reconciled more thoroughly. Indeed, the original low-flow trends, the climate trends contributing to the low flows, and the climate projections agreed about the direction and magnitude of likely change, and it was the annual flows that provided the clue that something was amiss in this tidy story. In this case the problems were a lack of high-elevation precipitation data and a simplistic conceptual model relating low-elevation and high-elevation precipitation. It also turns out there were some incorrect temperature data contributing to an overestimate of temperature trends at elevation (Oyler et al., 2015). We further examined the geographic and temporal sensitivity of snowpack storage (Luce et al., 2014, which also contrasts the space-for-time and temporal-sensitivity approaches) and annual low flows (Kormos et al., 2016) to demonstrate that the hypothesis that precipitation has declined over the region better explains the trends, interannual variations, and geographic variations in trends than does the hypothesis that historical temperature increases have driven the trends.
These results argue against the kind of direct extrapolation we implied in 2009 and that others had used before. While temperatures are increasing with high certainty here and in many other places (paraphrasing the IPCC), projections of precipitation are notably uncertain. What we demonstrated for this corner of the globe is that, unless there is strong reason to suspect additional declines in precipitation, past trends may provide a poor basis for extrapolating to future conditions.
Perhaps there is a common perception in the literature, or in practice, that historical trends provide a reasonable basis for projection: if the trends we are seeing now result from climate change, perhaps we should expect to see more of the same in the future. A framework comparing pillars is one way of saying we should formally test that expectation by comparing it to historical trends in the weather and to information from GCM projections. We should then keep testing against other temporal trends and geographic patterns to ensure logical consistency. This frames pillar 1 as a sort of default “hypothesis” about climate change effects that can be compared to other ways of projecting and estimating sensitivity. As long as there is consistency, it is acceptable; any disagreement, however, is a flag that more thought is needed.
In contrast, the analogy between the multi-model ensembles used for climate simulation and the three approaches discussed in this paper is weak. The models used in ensembles like CMIP5 are parallel in purpose and basic design: they are all circulation models that rely on fluid dynamics theory. They differ in detail and in specific hypotheses about particular processes, especially precipitation. Without the capacity to discern among these different hypotheses about some aspects of future outcomes, the community has chosen to support all of them, so the diversity of answers is taken to reflect our epistemic uncertainty. In contrast, the three approaches used in this paper do not reflect alternative hypotheses about process; they represent calculations that would be expected to provide equivalent answers if the supposition that past trends will continue were true. In this framing, disagreement between any of the three could result from errors in data (as in the trend example earlier), errors in the hydrologic modeling, errors in the downscaling, errors in the climate models, or a future that diverges from past trends (also possible in the earlier trend example). For example, if pillars 1 and 3 disagree, the candidate explanations are specifically isolated to errors in flow or weather data, errors in the hydrologic model, or non-comparable trend periods.
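To make that diagnostic logic concrete, here is a minimal sketch (Python; the pillar labels, the consistency tolerance, and the mapping from disagreements to candidate causes, other than the pillar 1/pillar 3 case above, are my illustrative assumptions, not the authors’ implementation):

    def consistent(a, b, tol=0.5):
        """Crude consistency test: same sign, magnitudes within a fractional tolerance."""
        if a == 0.0 or b == 0.0:
            return abs(a - b) < tol
        return (a * b > 0) and abs(a - b) / max(abs(a), abs(b)) < tol

    # Candidate explanations when a pair of pillars disagrees. The P1/P3
    # entry follows the text above; the other entries are placeholders.
    DIAGNOSES = {
        ("P1", "P2"): ["data errors", "a future diverging from past trends"],
        ("P1", "P3"): ["flow or weather data errors", "hydrologic model errors",
                       "non-comparable trend periods"],
        ("P2", "P3"): ["downscaling errors", "climate model errors"],
    }

    def diagnose(trends):
        """Flag every inconsistent pillar pair and list what it would isolate."""
        flags = []
        for (a, b), causes in DIAGNOSES.items():
            if not consistent(trends[a], trends[b]):
                flags.append((f"{a} vs {b}", causes))
        return flags

    # Example: pillar 3 disagrees with the other two (values in % change per decade)
    trends = {"P1": -0.8, "P2": -0.7, "P3": 0.2}
    for pair, causes in diagnose(trends):
        print(pair, "->", "; ".join(causes))

The point is only that each pairwise disagreement narrows the set of plausible explanations; nothing in this logic requires averaging the approaches or presuming that agreement implies correctness.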
The argument that concordance implies confidence is presented most heavily in the introductory material and is comparatively limited in the results and discussion. Reframing from the perspective of testing a simple “expectation” may be more consistent with the authors’ intent, which recommends further examination in the event of disagreement rather than an averaging approach, as has been used with climate model ensembles. It also seems this would require little change in text beyond one section of the introductory material.
New References:
Kormos, P., Luce, C., Wenger, S. J., Berghuijs, W. R., 2016, Trends and sensitivities of low streamflow extremes to discharge timing and magnitude in Pacific Northwest mountain streams, Water Resour. Res., doi:10.1002/2015WR018125.
Luce, C. H., Lopez-Burgos, V., Holden, Z., 2014, Sensitivity of snowpack storage to precipitation and temperature using spatial and temporal analog models, Water Resour. Res., 50, 9447-9462, doi:10.1002/2013WR014844.
Oyler, J. W., Dobrowski, S. Z., Ballantyne, A. P., Klene, A. E., Running, S. W., 2015, Artificial amplification of warming trends across the mountains of the western United States, Geophys. Res. Lett., 42, 153-161.