Technical Note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol′ sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol′ method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol′ method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol′ method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.


Introduction
Distributed hydrologic models aim to improve simulations of watershed behavior by allowing forcing data and model parameters to vary across a spatial grid. Recent advances in hydrologic data collection and computing power have increased the appeal of distributed models while also allowing further increases in complexity (Smith et al., 2004, 2012). This added complexity is not without cost; a typical distributed model usually contains thousands more parameters than a lumped model, causing a commensurate leap in computational requirements as well as challenges in diagnosing model behavior (van Griensven et al., 2006; Gupta et al., 2008). Calibration of such highly parameterized models remains difficult, not only due to the computation involved, but also because of their highly interactive parameter spaces and nonlinear, multimodal objective spaces (Gupta et al., 1998; Carpenter et al., 2001). To address these challenges, this study explores diagnostic methods capable of characterizing the complex relationships between distributed model parameters and objectives efficiently and accurately.
Sensitivity analysis has long been used to derive diagnostic insight from hydrologic models by identifying the key input factors controlling model performance (Hornberger and Spear, 1981; Franchini et al., 1996; Freer et al., 1996; Wagener et al., 2001; Muleta and Nicklow, 2005; Sieber and Uhlenbrook, 2005; Bastidas et al., 2006; Demaria et al., 2007; Cloke et al., 2008; Van Werkhoven et al., 2008a, 2009; Wagener et al., 2009; Reusser et al., 2011; Reusser and Zehe, 2011; Herman et al., 2013). The most common applications of sensitivity analysis include factor fixing, in which insensitive inputs are assigned fixed values to simplify further analysis; factor prioritization, in which the most sensitive inputs are identified; and factor mapping, which identifies the regions of the input space in which a particular input is most sensitive (Saltelli et al., 2008). A number of input factors can be explored in a sensitivity analysis, including forcing variables, but in diagnostic applications it is common to analyze model parameters directly. In this study, we aim to analyze the ranking of model parameters (i.e., both those that are sensitive and those that are insensitive) as well as to compare their quantitative measures of sensitivity.
Sensitivity methods can be broadly divided into local methods and global methods. Local methods provide measures of importance around a single point in the parameter space. Global methods aim to reflect the importance of a parameter throughout the full multivariate space of a model. Relatively few studies have performed global sensitivity analysis for spatially distributed models due to the severe computational demands posed by sampling their high-dimensional parameter spaces. Distributed sensitivity studies in hydrology and land surface modeling have often addressed this problem by aggregating parameter values across the model grid or subgrids (e.g., Carpenter et al., 2001; Hall et al., 2005; Sieber and Uhlenbrook, 2005; Zaehle et al., 2005; Alton et al., 2006). Fewer still are studies which have performed a sensitivity analysis on a full set of spatially distributed parameters (e.g., Muleta and Nicklow, 2005; van Griensven et al., 2006; Tang et al., 2007a; Van Werkhoven et al., 2008b). These studies clearly show the benefits of performing a global sensitivity analysis on a distributed model without sacrificing resolution in the parameter space. This study hypothesizes that the need for such sacrifices (i.e., to reduce computational demands) can be reduced with a careful choice of sensitivity analysis method.
This study compares the efficiency and effectiveness of two state-of-the-art global sensitivity analysis methods: Sobol′ sensitivity analysis (Sobol′, 2001; Saltelli, 2002) and the method of Morris (1991). Sobol′ sensitivity analysis is a variance-based method that attributes variance in the model output to individual parameters and their interactions. In a comparison of several widely used sensitivity methods, the Sobol′ method was found to provide the most accurate and robust sensitivity indices, particularly in nonlinear models with strong parameter interactions (Tang et al., 2007b; Yang, 2011). However, the number of model evaluations required for the Sobol′ indices to converge increases rapidly with the number of parameters, making its efficiency questionable in the distributed case. The method of Morris (1991) measures global sensitivity using a set of local derivatives (elementary effects) taken at points sampled throughout the parameter space. The method of Morris can estimate parameter interactions by considering both the mean and variance of the elementary effects.
The two sensitivity analysis methods are implemented for the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) (Koren et al., 2004; Reed et al., 2004; Smith et al., 2004; Moreda et al., 2006), developed by the United States National Weather Service (NWS). The model is used to simulate the Blue River watershed, Oklahoma, USA, over a 6-month period using hourly time steps and forcing data. Sensitivity results from the Sobol′ and Morris methods are compared spatially and statistically to determine the extent to which the method of Morris provides computational savings while maintaining sensitivity indices sufficiently similar to those of the Sobol′ method. In turn, we investigate whether the method of Morris is a promising candidate to overcome the challenges to diagnostic analysis posed by the high-dimensional parameter spaces of distributed hydrologic models.

HL-RDHM model
The HL-RDHM, developed by the United States NWS, is a modeling framework for building lumped, semi-distributed, and fully distributed hydrologic models (Koren et al., 2004; Reed et al., 2004; Smith et al., 2004; Moreda et al., 2006). The model is structured using a 4 km × 4 km grid resolution derived from the Hydrologic Rainfall Analysis Project (HRAP), which corresponds to the NEXRAD (Next Generation Weather Radar) precipitation products developed by the US NWS. The water balance within each grid cell is modeled with the Sacramento Soil Moisture Accounting (SAC-SMA) model (Burnash and Singh, 1995). Figure 1c shows the water balance components of the SAC-SMA model in each grid cell. Routing between grid cells is modeled with a kinematic wave approximation to the St. Venant equations. This study performs sensitivity analysis on 14 parameters of the SAC-SMA model within each cell of the HRAP grid, as shown in Fig. 1c. Since the model contains 78 grid cells, a total of 78 × 14 = 1092 parameters is required to perform sensitivity analysis without spatial aggregation. The sampling ranges for these parameters are derived from prior work (Van Werkhoven et al., 2008b) and in consultation with the National Weather Service. Note that the correct choice of sampling ranges is critical to ensure representative model performance in sensitivity analyses (Sobol′, 2001; Nossent and Bauwens, 2012a).

Study area: Blue River, Oklahoma
The computational experiments in this study were performed for the Blue River basin in southern Oklahoma, one of the basins included in the Distributed Model Intercomparison Project Phase 2 (DMIP2) (Smith et al., 2012). Figure 1a shows the location of the Blue River. The watershed is represented by 78 HRAP grid cells, as shown in Fig. 1b, resulting in a total basin area of 1248 km². The model was forced using hourly NEXRAD precipitation data over the 6-month period from 16 November 2000 to 15 May 2001, preceded by a 3-week warm-up period. Figure 2 shows the hourly precipitation and streamflow data for the Blue River during the selected simulation period. As Fig. 2 indicates, the Blue River remains at low flow during much of the simulation period, punctuated by a series of large rainfall events.
Sensitivity analysis methods

Sobol′ sensitivity analysis
Sobol′ sensitivity analysis (Sobol′, 2001; Saltelli, 2002) is a global, variance-based method that attributes variance in the model output to individual parameters and the interactions between parameters. In general, the attribution of total output variance to individual model parameters and their interactions can be written as

D(f) = Σ_i D_i + Σ_{i<j} D_ij + … + D_12…p    (1)

where D(f) represents the total variance of the output metric f; D_i is the first-order variance contribution of the i-th parameter; D_ij is the second-order contribution of the interaction between parameters i and j; and the remaining terms capture successively higher-order interactions, up to D_12…p, the interaction among all p parameters.
The first-order and total-order sensitivity indices are defined as follows.

First-order index:    S_i = D_i / D    (2)

Total-order index:    S_Ti = 1 − D_∼i / D    (3)

The first-order index measures the fraction of the total output variance caused by parameter i apart from interactions with other parameters. The total-order index is one minus the fraction of total variance attributed to D_∼i, the variance contribution of all parameters except i. The total-order index thus removes parameter i from the analysis and attributes the resulting reduction in variance to that parameter (Homma and Saltelli, 1996). The difference between a parameter's first- and total-order indices represents the effects of its interactions with other parameters. In this study, we analyze the total-order indices to determine the ranking of the most sensitive model parameters and compare these to the related µ* statistic from the method of Morris. Sobol′ sensitivity indices were calculated according to the methods proposed by Sobol′ and Saltelli (Sobol′, 2001; Saltelli, 2002; Saltelli et al., 2008), in which sensitivity indices are approximated using numerical integration in a Monte Carlo framework. A global sample of the parameter space is taken using a quasi-random Sobol′ sequence to achieve uniform coverage of the space (Sobol′, 2001). The parameter sets generated from these sampling ranges are evaluated in the model, creating a distribution of output values f with mean f_0 and total variance D:

f_0 ≈ (1/N) Σ_{s=1}^{N} f(θ_s)    (4)

D ≈ (1/N) Σ_{s=1}^{N} f²(θ_s) − f_0²    (5)

Here, f_0 is the mean of the distribution of model outputs and θ_s represents the parameter set associated with sample s. Equations (4) and (5) represent the mean and variance calculations proposed in Saltelli et al. (2008). For adaptations to these calculations, which aim to improve the convergence of Sobol′ indices, the reader is referred to Saltelli (2002) and Nossent and Bauwens (2012b).
The variance contributions D_i and D_∼i are calculated according to Sobol′ (2001) and Saltelli et al. (2008). First, two matrices A and B are each assigned N sampled parameter sets. The sample set A is used to calculate the total variance as shown in Eqs. (4) and (5). The sample set B is used to resample or fix each parameter as necessary in the following expressions:

D_i ≈ (1/N) Σ_{s=1}^{N} f(θ_s^A) f(θ_s^(B_∼i, A_i)) − f_0²    (6)

D_∼i ≈ (1/N) Σ_{s=1}^{N} f(θ_s^A) f(θ_s^(A_∼i, B_i)) − f_0²    (7)

In Eqs. (6) and (7), the parameter sets θ_s are superscripted to indicate which parameters are sampled from which set. The sample set is denoted by the superscript A or B; the parameters taken from that set are denoted either by i (the i-th parameter) or ∼i (all parameters except i). This scheme allows the estimation of first- and total-order sensitivity indices with a total of N(p + 2) model evaluations, where p is the number of parameters for which indices are to be calculated.
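As an illustration of the N(p + 2) evaluation scheme (a sketch, not the open-source implementation used in this study), the Monte Carlo estimation of first- and total-order indices can be written in a few lines. The estimator forms below are common variants of Eqs. (6) and (7) — the Saltelli form for first-order indices and the Jansen form for total-order indices — and plain uniform random sampling stands in for the quasi-random Sobol′ sequence:

```python
import numpy as np

def sobol_indices(f, p, N, seed=0):
    """Estimate first- and total-order Sobol' indices for a model f of p
    uniform [0, 1] inputs, using N*(p + 2) model evaluations."""
    rng = np.random.default_rng(seed)
    A = rng.random((N, p))          # sample matrix A
    B = rng.random((N, p))          # independent sample matrix B
    fA = np.array([f(x) for x in A])
    fB = np.array([f(x) for x in B])
    f0 = fA.mean()
    D = (fA ** 2).mean() - f0 ** 2  # total variance, Eq. (5)

    S1, ST = np.empty(p), np.empty(p)
    for i in range(p):
        # AB: all columns from A except column i, which is taken from B
        AB = A.copy()
        AB[:, i] = B[:, i]
        fAB = np.array([f(x) for x in AB])
        S1[i] = np.mean(fB * (fAB - fA)) / D        # Saltelli first-order estimator
        ST[i] = 0.5 * np.mean((fA - fAB) ** 2) / D  # Jansen total-order estimator
    return S1, ST
```

For an additive test model such as f(x) = x₁ + 2x₂, the estimates converge to the analytical values S₁ = 0.2 and S₂ = 0.8 as N grows.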

Method of Morris
The method of Morris (1991) derives measures of global sensitivity from a set of local derivatives, or elementary effects, sampled on a grid throughout the parameter space. It is based on one-at-a-time (OAT) methods, in which each parameter x_i is perturbed by a step Δ_i along a grid to create a trajectory through the parameter space. For a given model with p parameters, one trajectory will contain a sequence of p such perturbations. Each trajectory yields one estimate of the elementary effect for each parameter (i.e., the ratio of the change in model output to the change in that parameter). Equation (8) shows the calculation of a single elementary effect for the i-th parameter:

EE_i = [f(x_1, …, x_i + Δ_i, …, x_p) − f(x)] / Δ_i    (8)
where f(x) represents the prior point in the trajectory. In alternative formulations, the numerator and denominator are normalized by the values of the function and the parameter x_i, respectively, at the prior point x (van Griensven et al., 2006). Using the single trajectory shown in Eq. (8), one can calculate the elementary effects of each parameter with only p + 1 model evaluations. However, by using only a single trajectory, this OAT method is highly dependent on the location of the initial point x in the parameter space and does not account for interactions between parameters. For this reason, the method of Morris (1991) performs the OAT method over N trajectories through the parameter space. This study employs the sampling approach originally proposed by Morris (1991), in which trajectories through the parameter space are generated by perturbing one factor at a time, beginning at a randomly sampled point. Recent advances in this area by Campolongo et al. (2007, 2011) and Ruano et al. (2012) provide trajectories that maximize coverage of the parameter space, ensuring that the sampled elementary effects yield accurate estimates of global sensitivity. These improvements suggest promising directions for future investigation. Once trajectories are sampled, the resulting set of elementary effects is averaged to give µ, which serves as an estimate of total-order effects. Similarly, the standard deviation of the set of elementary effects, σ, describes their variability throughout the parameter space and thus the extent to which parameter interactions are present. This study uses the improvement of Campolongo et al. (2007), in which an estimate of the total-order sensitivity of the i-th parameter, µ*_i, is computed from the mean of the absolute values of the elementary effects over the set of N trajectories, as shown in Eq. (9):

µ*_i = (1/N) Σ_{t=1}^{N} |EE_i^(t)|    (9)
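The trajectory sampling and the µ* statistic can be sketched as follows. This is a minimal illustration on the unit hypercube using the standard Morris step Δ = ℓ/(2(ℓ − 1)) for ℓ grid levels; it is not the open-source implementation used in the study, and the function and variable names are ours:

```python
import numpy as np

def morris_mu_star(f, p, N, levels=4, seed=0):
    """Estimate mu* (Campolongo et al., 2007) for a model f of p inputs
    on [0, 1], from N random OAT trajectories of p + 1 evaluations each."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))  # standard Morris step size
    ee = np.zeros((N, p))                  # elementary effects, one row per trajectory
    for t in range(N):
        # random base point on the grid, leaving room for the +delta step
        x = rng.integers(0, levels // 2, size=p) / (levels - 1)
        fx = f(x)
        for i in rng.permutation(p):       # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            f_new = f(x_new)
            ee[t, i] = (f_new - fx) / delta  # elementary effect, Eq. (8)
            x, fx = x_new, f_new
    return np.abs(ee).mean(axis=0)           # mu*, Eq. (9)
```

For a linear model the elementary effects equal the coefficients exactly, so µ* recovers them with any number of trajectories; for nonlinear, interactive models the spread of the effects (σ) carries the interaction signal.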

Computational experiment
The sensitivity analyses were performed on the 14 SAC-SMA model parameters indicated in Fig. 1. The lower and upper bounds for each parameter are based on the a priori gridded parameter values derived by the NWS (Koren et al., 2004) and extended for sensitivity analysis by Van Werkhoven et al. (2008b). These parameter ranges are included in the Supplement. Parameter values for each grid cell were sampled independently from uniform distributions. Rather than measuring the sensitivity of the simulated streamflow directly, we measure the sensitivity of the root mean squared error (RMSE) metric, calculated against the observed hourly streamflow values over the 6-month simulation period. This ensures that the sensitivity indices are grounded relative to the observed streamflow and describe the controls on model performance.
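For reference, the RMSE objective used as the sensitivity target is the standard formula below (`rmse` is our name for illustration, not code from HL-RDHM):

```python
import numpy as np

def rmse(sim, obs):
    """Root mean squared error between simulated and observed series."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return np.sqrt(np.mean((sim - obs) ** 2))
```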
The sample sizes and corresponding numbers of model evaluations required for the Sobol′ and Morris methods are shown in Table 1. For the Sobol′ method, sample sizes of N = 1000 and N = 6000 were used, resulting in just over 1 million and 6 million model evaluations, respectively. The latter value represents the limit of computational feasibility for this model at an hourly time step and was chosen to derive maximally accurate baseline values of the sensitivity indices. The two sample sizes were employed to verify convergence of the Sobol′ indices. Confidence intervals for the sensitivity indices, derived using the bootstrap method (Efron and Tibshirani, 1994; Archer et al., 1997), were monitored to ensure convergence of the Sobol′ method at the N = 6000 level. Convergence was considered acceptable if the 95 % confidence interval spanned less than 10 % of the sensitivity index value for the most sensitive parameters. For the method of Morris, sample sizes ranging from N = 20 to N = 100 were chosen to determine whether the approach can provide suitable results with orders of magnitude fewer model evaluations. Open-source implementations were used for the method of Morris (Pujol et al., 2013) and Sobol′ sensitivity analysis (Hadka and Reed, 2012). The sensitivity analyses were performed using the CyberSTAR high-performance cluster at Penn State University, which contains a combination of quad-core AMD Shanghai processors (2.7 GHz) and Intel Nehalem processors (2.66 GHz). Approximately 50 000 computing hours were required to complete the experiment.
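The bootstrap convergence check can be sketched as follows. This illustrates the percentile bootstrap of Efron and Tibshirani and the 10 %-of-index-value criterion described above; the helper names are ours, not from the study's implementation:

```python
import numpy as np

def bootstrap_ci(statistic, samples, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a sensitivity statistic."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    stats = np.array([statistic(samples[rng.integers(0, n, size=n)])
                      for _ in range(n_boot)])
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

def converged(statistic, samples, rel_width=0.10):
    """Convergence rule from the text: the 95% CI must span less than
    10% of the sensitivity index value."""
    lo, hi = bootstrap_ci(statistic, samples)
    return (hi - lo) < rel_width * statistic(samples)
```

With a tightly clustered set of index estimates the interval narrows and the criterion passes; for small samples of a noisy index it fails, signaling that more model evaluations are needed.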

Results and discussion
The results of the sensitivity analyses can be addressed through the lens of two primary questions: (1) what sample size is required for the Sobol′ method to return reliable sensitivity indices, and (2) how suitable are the indices returned by the method of Morris relative to the baseline created by the Sobol′ method?

Convergence of Sobol′ indices
Figure 3 shows the spatial maps of total-order Sobol′ sensitivity indices for the sample sizes N = 1000 and N = 6000. The four most sensitive parameters of the SAC-SMA model are shown, as well as the cell-level sum of sensitivity indices. The total-order indices vary over a small range, since the output variance must be distributed across the full set of distributed parameters, 1092 in total.
Figure 3 reveals several interesting spatial patterns of sensitivity. First, the most sensitive parameters are primarily upper- and lower-zone storage maxima. The lower-zone storage maxima, LZFPM (lower-zone free water primary maximum) and LZFSM (lower-zone free water supplemental maximum), are most sensitive in the headwater portion of the basin, while the upper-zone storage maximum UZFWM (upper-zone free water maximum) is most sensitive toward the outlet of the basin. The resulting summation of sensitivity indices shows a division of the most active cells, with one group in the headwaters and another near the outlet.
From Fig. 2, it is clear that most precipitation events during the simulation period are distributed across nearly all grid cells in the watershed. This suggests that much of the spatial variability of sensitivity in Fig. 3 is due to processes within the model itself rather than forcing patterns. The RMSE metric is most sensitive to errors in peak flows, so the sensitivity indices in Fig. 3 can be interpreted in the context of the several high-flow events shown in the hydrograph in Fig. 2. Toward the outlet of the basin, the primary runoff-generating mechanisms in the model are overflow exceeding UZFWM and drainage from the upper zone (controlled by UZK, the upper-zone drainage coefficient). The fact that the lower-zone drainage constants LZPK (lower-zone primary coefficient) and LZSK (lower-zone secondary coefficient) are not sensitive indicates that they act on a slower timescale and thus do not affect RMSE. In the headwaters, the lower-zone storage maxima (LZFPM and LZFSM) and the rate constant UZK are most sensitive, likely because these parameters must not allow too much direct runoff from the headwater region, which would cause the model to overshoot the observed flow peaks and degrade RMSE performance. While the temporal distribution of forcing can affect the sensitivity indices shown in Fig. 3, their spatial distribution can be attributed largely to the processes occurring within the model.
Also visible in Fig. 3 is the difference in Sobol′ sensitivity indices as a function of sample size. At a sample size of N = 1000, the most sensitive cells are identified, but it is clear that cells with intermediate sensitivity values largely remain unidentified. For example, it is common to see sensitive cells (red) adjacent to insensitive cells. Intuitively, we should expect to see a smoother spatial gradient of sensitivity in which the most sensitive cells are adjacent to intermediate-sensitivity cells, which in turn are adjacent to low-sensitivity cells. This is achieved to a larger extent with a sample size of N = 6000. Here, the sensitivity indices vary more smoothly in space, indicating that the N = 6000 case provides a baseline for total-order sensitivity indices. The bootstrap confidence intervals confirm convergence for the N = 6000 sample size. The N = 1000 case would not be sufficient to capture the full range of sensitivity, a fact that underscores the high computational requirements of the Sobol′ method. It is worth noting that the slow convergence of the Sobol′ method for this model is related to the large number of parameters over which variance must be decomposed, leading to small sensitivity values and a correspondingly narrow range of acceptable confidence bounds (see Nossent et al., 2011).

Comparison of Sobol′ and Morris indices
The Sobol′ sensitivity indices from the N = 6000 case form a set of target values against which the method of Morris is compared. Figure 4 compares this target to the lowest-sample Morris experiment, N = 20, for all 14 of the SAC-SMA parameters and the sums of parameter indices for each of the 78 grid cells. The Sobol′ indices offer a quantitative interpretation as a fraction of total variance, but the Morris indices do not; the latter are mapped from the range (0, 0.1) to (0, 1) to avoid this misinterpretation.
Figure 4 shows that the total-order indices calculated by the method of Morris with only N = 20 samples successfully capture the spatial patterns of the Sobol′ indices with N = 6000 samples. The Morris indices isolate the most sensitive parameters, along with their correct locations in the watershed: LZFPM, LZFSM, and UZK in the headwaters, and UZFWM, UZK, and ADIMP (additional impervious area) near the outlet. The method also correctly identifies the parameters that are insensitive over the simulation period: LZTWM (lower-zone tension water maximum), PCTIM (percent impervious area), PFREE (percolation coefficient), UZTWM (upper-zone tension water maximum), and RIVA (riparian vegetation area). The sums of indices are also comparable between the Sobol′ and Morris methods, with sensitive areas near the headwaters and outlet, and intermediate sums of sensitivity in the rest of the basin. In general, the Morris indices follow smooth spatial patterns, which aligns with intuition regarding sensitive regions of the watershed. From the sensitivity maps in Fig. 4, the method of Morris with a sample size of N = 20 is able to correctly identify sensitive and insensitive parameters, as well as their spatial patterns, at greatly reduced computational expense relative to the Sobol′ method.
The Morris sensitivity indices can also be compared statistically to the Sobol′ indices for the N = 6000 case to ensure sufficient similarity. Figure 5 compares the sensitivity indices for each method, as well as the sensitivity ranks (1-1092), for all of the Morris sample sizes from N = 20 to N = 100. The sensitivity indices are compared using the Spearman rank correlation coefficient, because a one-to-one correspondence between Sobol′ and Morris indices is not necessary, only a monotonic relationship. The rankings are compared with a linear (Pearson) correlation coefficient, because ideally these will exhibit a one-to-one correspondence.
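The two comparison statistics can be sketched in a few lines (hypothetical helper names; the rank transform below omits tie handling, which is adequate for continuous sensitivity indices):

```python
import numpy as np

def rankdata(x):
    """Assign ranks 1..n to the values of x (no tie handling)."""
    ranks = np.empty(len(x))
    ranks[np.argsort(x)] = np.arange(1, len(x) + 1)
    return ranks

def pearson(a, b):
    """Linear (Pearson) correlation between two index vectors."""
    return np.corrcoef(a, b)[0, 1]

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(rankdata(a), rankdata(b))
```

A perfectly monotonic but nonlinear relationship (e.g., b = a³) gives a Spearman correlation of 1 while the Pearson correlation falls below 1, which is why the indices themselves are compared by rank and only the rankings are compared linearly.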
The top panels in Fig. 5 show that the Morris µ* values for all sample sizes are well correlated with the Sobol′ indices at a sample size of N = 6000. Importantly, there appears to be little benefit in running the method of Morris with sample sizes greater than N = 20, since the correlation remains similar for higher sample sizes. The relationship between Morris µ* values and Sobol′ indices is approximately linear for low-sensitivity parameters. However, the relationship becomes nonlinear for high-sensitivity parameters, where the Morris µ* values appear to flatten out. This suggests that the method of Morris cannot reliably reproduce the precise ranking of high-sensitivity parameters provided by the Sobol′ method. However, the method of Morris successfully distinguishes sensitive from insensitive parameters, and a sample size of N = 20 is clearly sufficient to achieve this.
The bottom panels in Fig. 5 show that the sensitivity rankings given by the method of Morris are well correlated with those given by the Sobol′ method with N = 6000. Again, a sample size of N = 20 for the method of Morris appears sufficient to achieve a good correlation, and little is gained by increasing the sample size further. Of particular interest are the clusters of highly correlated parameters ranked near the most and least sensitive (ranks 1 and 1092, respectively). This indicates that the method of Morris can isolate the most and least sensitive parameters with high reliability, reinforcing its utility as a screening method. The outliers in the bottom panels of Fig. 5 reinforce the difficulty the method of Morris has in distinguishing among sensitive parameters; it correctly identifies them as sensitive, but struggles to rank them quantitatively. The largest outliers occur in the upper left of each plot, where the method of Morris attributes erroneously high rankings to certain parameters. These outliers correspond to parameters of average rank, whose low (but nonzero) sensitivity values are extremely difficult to differentiate from one another. Thus, these points highlight the limitations of the method of Morris for this model, but they do not detract from the success of the approach in correctly classifying the parameters with the highest and lowest sensitivity values.
Given that both the spatial and statistical comparisons between the Sobol′ and Morris sensitivity indices indicate the success of the method of Morris, it is worth examining the computation saved in achieving a highly similar set of sensitivity results. Figure 6 shows the location of each experiment in the space defined by the computation time and storage required. The largest Sobol′ experiment, with N = 6000, required over 6 million model evaluations, leading to more than 30 000 h of computation time and approximately 180 gigabytes of storage for the model output. By contrast, the smallest Morris experiment, with N = 20, required roughly 100 h of computation and 1 gigabyte of storage. This represents a factor-of-300 savings in both the runtime and storage dimensions relative to the Sobol′ method. As shown in Figs. 4 and 5, the sensitivity indices calculated by this lowest-sample Morris experiment are spatially and statistically comparable to those calculated by the highest-sample Sobol′ experiment. The Sobol′ method confers several advantages that the method of Morris does not provide, including first-order sensitivity indices and a large ensemble of model evaluations that can be reused in an uncertainty analysis or likelihood-based optimization framework. However, for the purpose of obtaining an accurate distinction between sensitive and insensitive parameters, it is clear that the method of Morris provides significant computational savings without significant degradation of solution quality.

Conclusions
The method of Morris is able to correctly screen the most and least sensitive parameters for a highly parameterized, spatially distributed watershed model with 300 times fewer model evaluations than the Sobol′ method. Even for this complex model, the efficient factorial sampling scheme of the method of Morris is sufficient to isolate the controls on model performance without any prior assumptions on the form of the model output. For many distributed modeling applications, the Sobol′ method requires a prohibitive number of model evaluations. In light of these results, the method of Morris proves to be a promising way forward for efficient global sensitivity analysis of distributed models. It also holds promise as a screening technique, identifying parameters that can safely be removed prior to more complex analyses such as the Sobol′ method or model calibration. Future work will include an investigation of time-varying sensitivity to determine the extent to which spatial sensitivity patterns change during wet and dry periods. The increasing use of spatially distributed hydrologic models requires that diagnostics such as these sensitivity analysis methods be evaluated not only in terms of their statistical effectiveness but also by their efficiency, to ensure that hydrologic modelers can obtain maximally reliable diagnostic insights at a reasonable computational cost.

Fig. 1. (A) Location of the Blue River basin in southern Oklahoma, USA. (B) The 78 HRAP grid cells of the Blue River basin (shaded). (C) The Sacramento Soil Moisture Accounting (SAC-SMA) model, which simulates the water balance in each grid cell.

Fig. 2. The hourly hydrograph of the 6-month simulation period for the Blue River basin, with a 3-week warm-up period. The precipitation amounts are based on the mean value across the 78 HRAP grid cells in the basin. The colors of the precipitation bars indicate the fraction of grid cells receiving more than 0.1 mm of precipitation, representing the spatial distribution of each hourly rainfall value.

Fig. 3. Maps of the total-order Sobol′ sensitivity indices for the four most sensitive parameters, as well as the total sum in each grid cell. The maps are shown for the N = 1000 and N = 6000 sample sizes. The lower sample size yields a coarse identification of sensitive and insensitive cells. The N = 6000 sample size shows smoother spatial patterns of sensitivity indices, suggesting that this level of sampling is required for reliable Sobol′ indices.

Fig. 4.
Fig. 4. Total-order sensitivity indices calculated by the Sobol′ method, with sample size N = 6000, and the method of Morris, with N = 20. The Morris indices are normalized to the range (0, 1) since they do not offer a quantitative interpretation as a percentage of variance. The method of Morris is able to correctly identify sensitive and insensitive parameters, as well as their spatial patterns, with far fewer model evaluations. The sums of sensitivity indices represent the addition of parameter indices within each grid cell.

Fig. 6. Computation time (hours) and storage (gigabytes) required for each experiment. The method of Morris with N = 20 represents a factor-of-300 computational savings compared to the Sobol′ method with N = 6000.

Table 1.
Sample sizes and numbers of model runs performed for each of the sensitivity analysis methods.