Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization
S. J. Noh et al.



Introduction
Data assimilation is a way to integrate information from a variety of sources to improve prediction accuracy, taking into consideration the uncertainty in both the measurement system and the prediction model. There have been considerable advances in hydrologic data assimilation for streamflow prediction (e.g. Kitanidis and Bras, 1980; Georgakakos, 1986; Vrugt et al., 2006; Clark et al., 2008; Seo et al., 2003, 2009). State-space filtering methods based on variations of the Kalman filter (KF) approach have been proposed and implemented because of their potential ability to explicitly handle uncertainties in hydrologic predictions. However, KF approaches for non-linear systems, such as the extended Kalman filter (EKF), have limitations in practical application due to their instability under strong non-linearity and the high computational cost of model derivative equations, especially for high-dimensional state-vector problems such as spatially distributed models. To cope with the drawbacks of the EKF, the ensemble Kalman filter (EnKF) was introduced by Evensen (1994). The EnKF is computationally efficient because it has no need for model covariance estimation, but it is still based on the assumption that all probability distributions involved are Gaussian. Further reviews of Kalman filter-based applications for hydrologic models are given in Vrugt et al. (2006), Moradkhani et al. (2005b), Moradkhani (2008), and Evensen (2009).
Another approach to data assimilation is variational assimilation (VAR), which has achieved widespread application in weather and oceanographic prediction models. In hydrologic investigations, VAR has been implemented for estimating spatial soil-moisture distributions by Reichle et al. (2001) and for assimilating potential evaporation and real-time observations of streamflow and precipitation to improve streamflow forecasts by Seo et al. (2003, 2009). Although variational methods are more computationally efficient than KF-based methods, the derivation of the adjoint model needed for minimisation of a cost function is difficult, especially in the case of non-linear, high-dimensional hydrological applications (e.g. Liu and Gupta, 2007).
Among data assimilation techniques, sequential Monte Carlo (SMC) methods, known as particle filters, constitute a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions being made about the nature of the distributions (Gordon et al., 1993; Musso et al., 2001; Arulampalam et al., 2002; Johansen, 2009). Unlike the various Kalman filter-based methods, which are basically limited to a linear correction step and the assumption of Gaussian errors, SMC methods have the advantage of being applicable to non-Gaussian state-space models. The application of these powerful and versatile methods has been increasing in various areas, including pattern recognition, target tracking, financial analysis, and robotics.
In recent years, these methods have received considerable attention in hydrology and earth sciences (e.g. Moradkhani et al., 2005a; Weert and El Serafy, 2006; Zhou et al., 2006; van Delft et al., 2009; van Leeuwen, 2009; Karssenberg et al., 2010). Since their first introduction to rainfall-runoff modelling by Moradkhani et al. (2005a), Weert and El Serafy (2006) have compared ensemble Kalman filtering and particle filtering for state updating of conceptual rainfall-runoff models. The SMC methods have also been applied to parameter estimation and uncertainty analysis of hydrological models. Smith et al. (2008) evaluate structural inadequacy in hydrologic models, Qin et al. (2009) estimate both soil moisture and model parameters, and Rings et al. (2010) implement hydrogeophysical parameter estimation. Uncertainty of a distributed hydrological model is analyzed by Salamon and Feyen (2009, 2010), and dual state-parameter updating of a conceptual hydrologic model is applied to flood forecasting by Noh et al. (2011). The diversity of assimilated data and models has also been increasing; a snow water equivalent prediction model (Leisenring and Moradkhani, 2010) and assimilation of remote sensing-derived water stages (Montanari et al., 2009) have been investigated. However, the framework to deal with the delayed response, which originates from the different time scales of hydrologic processes, routing, spatial heterogeneity of catchment characteristics, and forcing data, especially in a distributed hydrologic model, has not been thoroughly addressed in hydrologic data assimilation. Furthermore, alternative methods proposed in the literature to mitigate the loss of sample diversity (e.g. Musso et al., 2001; Arulampalam et al., 2002), which may cause collapse of the filtering system, have not been studied in hydrology. In this paper, we apply particle filters to a distributed hydrologic model in support of short-term hydrologic forecasting. A lagged particle filtering approach is proposed to consider the different response times of internal states in a distributed hydrologic model. The regularized particle filter with a Markov chain Monte Carlo (MCMC) move step is also adopted to improve sample diversity under the lagged filtering approach. A process-based distributed hydrologic model, WEP (Jia and Tamai, 1998; Jia et al., 2001, 2009), is implemented for sequential data assimilation through state updating of internal hydrologic variables. Particle filtering is parallelized and implemented in a multi-core computing environment via an open message passing interface (MPI).
The paper is organized as follows: Sect. 2 outlines Bayesian filtering theory and particle filters. In Sect. 3, a lagged filtering approach is introduced with an additional regularization step to reflect the different responses of internal processes in sequential data assimilation. Section 4 presents the case study results, demonstrating the applicability of the proposed particle filtering approach. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are evaluated for hindcasting of streamflow in the Katsura River catchment using the WEP model. Section 5 summarizes the results and conclusions.

Method of particle filters
In this section, we briefly describe the theory of Bayesian filtering and sequential Monte Carlo (SMC) filtering as its suboptimal solution in non-linear and non-Gaussian cases. We describe several variants of SMC filters, including sequential importance resampling (SIR) and the regularized particle filter (RPF), which are based on sequential importance sampling (SIS). Detailed descriptions of sequential Monte Carlo methods can be found in Arulampalam et al. (2002) and Moradkhani et al. (2005a).

Bayesian filtering theory and basic particle filtering methods
To define the problem of Bayesian filtering, consider a general dynamic state-space model, which is described as:

x_k = f(x_{k−1}, u_k, θ) + ω_k    (1)

y_k = h(x_k, θ) + ν_k    (2)

where f(·) is the model operator that propagates the state x from time step k−1 to k given the forcing u_k, and h(·) is the measurement function mapping the state to the observation y_k, both having parameters θ. ω_k and ν_k represent the model error and the measurement error, respectively, and W_k and V_k represent the covariances of these errors.
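As a concrete illustration of Eqs. (1) and (2), the sketch below defines a toy one-dimensional storage model. The operators f and h, the parameter value, and the error magnitudes are all hypothetical, chosen only to show where the model error ω_k and the measurement error ν_k enter.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, u, theta):
    # Hypothetical model operator: linear storage with recession factor theta
    return theta * x + u

def h(x, theta):
    # Hypothetical measurement operator: outflow as a fraction of storage
    return (1.0 - theta) * x

theta = 0.9          # model parameter (illustrative)
W, V = 0.5, 0.2      # std devs of model error omega_k and measurement error nu_k

x_prev, u_k = 10.0, 2.0
x_k = f(x_prev, u_k, theta) + rng.normal(0.0, W)   # Eq. (1)
y_k = h(x_k, theta) + rng.normal(0.0, V)           # Eq. (2)
```

In a distributed model the state x_k would be a high-dimensional vector of gridded variables rather than a scalar, but the structure of the two equations is the same.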
In Bayesian recursive estimation, if the system and measurement models are non-linear and non-Gaussian, it is not possible to construct the posterior probability density function (PDF) of the current state x_k given the measurements y_1:k = {y_i, i = 1, ..., k} analytically. When the analytic solution is intractable, an optimal solution can be approximated by SMC filters.
Sequential Monte Carlo (SMC) filters are a set of simulation-based methods that provide a flexible approach to computing the posterior distribution without any assumptions being made about the nature of the distributions. The key idea of SMC filters is based on point mass ("particle") representations of probability densities with associated weights:

p(x_k | y_1:k) ≈ Σ_{i=1}^{n} w_k^i δ(x_k − x_k^i)    (3)

where x_k^i and w_k^i denote the i-th posterior state ("particle") and its weight, respectively, and δ(·) denotes the Dirac delta function. Since it is usually impossible to sample from the true posterior PDF, an alternative is to sample from a proposal distribution, also called the importance density, denoted by q(x_k | y_k). After several steps of computation, the recursive weight update can be derived as:

w_k^i ∝ w_{k−1}^i [p(y_k | x_k^i) p(x_k^i | x_{k−1}^i) / q(x_k^i | x_{k−1}^i, y_k)]    (4)

The choice of importance density is one of the most critical issues in the design of SMC methods. The most popular choice is the transitional prior:

q(x_k | x_{k−1}^i, y_k) = p(x_k | x_{k−1}^i)    (5)

By substituting Eq. (5) into Eq. (4), the weight update becomes:

w_k^i ∝ w_{k−1}^i p(y_k | x_k^i)    (6)

The sequential importance sampling (SIS) algorithm shown above is a Monte Carlo filter that forms the basis for most SMC filters. A common problem with the SIS algorithm is the degeneracy phenomenon, in which after a few iterations all but one particle have negligible weight. A suitable measure of the degeneracy is the effective sample size n_eff, estimated as (Kong et al., 1994):

n_eff = 1 / Σ_{i=1}^{n} (w_k^i)^2    (7)

If the weights are uniform (i.e. w_k^i = 1/n for i = 1, ..., n), then n_eff = n. If all but one particle have zero weight, then n_eff = 1. The ratio of the effective particle number, n_ratio, is estimated as follows:

n_ratio = n_eff / n    (8)

The maximum of n_ratio is 1 when the weights are uniform. A small n_ratio indicates severe degeneracy and vice versa. n_ratio is used as the indicator of degeneracy in this study because it can be applied easily regardless of the particle number.
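The SIS weight update with the transitional prior, together with the effective sample size, can be sketched in a few lines; the likelihood values below are illustrative stand-ins for p(y_k | x_k^i).

```python
import numpy as np

def update_weights(w_prev, likelihood):
    # SIS update with the transitional prior as importance density (Eq. 6):
    # w_k^i is proportional to w_{k-1}^i * p(y_k | x_k^i), then normalized
    w = w_prev * likelihood
    return w / w.sum()

def effective_sample_size(w):
    # Eq. (7): n_eff = 1 / sum_i (w_i)^2
    return 1.0 / np.sum(w ** 2)

n = 4
w = np.full(n, 1.0 / n)                 # uniform initial weights
lik = np.array([0.9, 0.1, 0.1, 0.1])    # illustrative likelihoods p(y_k | x_k^i)
w = update_weights(w, lik)
n_ratio = effective_sample_size(w) / n  # Eq. (8); equals 1 for uniform weights
```

With one particle dominating the likelihood, n_ratio drops well below 1, which is exactly the degeneracy signal used to trigger resampling.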
The degeneracy phenomenon can be reduced by performing the resampling step whenever a significant degeneracy is observed. Thus, the SIR particle filter is derived from the SIS algorithm by performing the resampling step at every time index. The idea of resampling is simply that particles with very low weights are abandoned, while multiple copies of particles with high weights are kept, giving the uniformly weighted measure {x_k^i, n⁻¹}, which still approximates the posterior PDF p(x_k | y_1:k) (van Leeuwen, 2009).
Resampling is one of the key issues in SMC filters, and various resampling approaches have been introduced in the literature, such as multinomial resampling, residual resampling, stratified resampling, and systematic resampling. A comparative analysis and review of resampling approaches can be found in Douc et al. (2005) and van Leeuwen (2009). Systematic resampling, also known as stochastic universal sampling, is often preferred due to its computational simplicity and good empirical performance. It has also been shown that systematic resampling has the lowest sampling noise (Kitagawa, 1996). Hence, we use systematic resampling for all particle filtering cases in this study. It is worth noting that there are several choices of resampling method, and the proper method may differ depending on the characteristics of the hydrologic model: see Weert and El Serafy (2006), Rings et al. (2010), and Salamon and Feyen (2009) for residual resampling, and Salamon and Feyen (2010) and Moradkhani et al. (2005a) for systematic resampling. Although the SIR method has the advantage that the importance weights are easily evaluated, resampling at every iteration may lead to a sudden loss of diversity in particles, and the filter is sensitive to outliers (Ristic et al., 2004).
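Systematic resampling as described above can be implemented with a single uniform draw that places n evenly spaced pointers on the cumulative distribution of the weights; the weight values below are illustrative.

```python
import numpy as np

def systematic_resample(weights, rng):
    # Systematic (stochastic universal) resampling: one uniform draw places
    # n evenly spaced pointers on the cumulative distribution of the weights
    n = len(weights)
    positions = (rng.uniform() + np.arange(n)) / n
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                        # guard against floating-point round-off
    return np.searchsorted(cdf, positions)

rng = np.random.default_rng(42)
w = np.array([0.5, 0.3, 0.1, 0.1])       # illustrative normalized weights
idx = systematic_resample(w, rng)        # indices of the particles to copy
```

Because the pointers are evenly spaced, each particle is copied either floor(n·w_i) or ceil(n·w_i) times, which is the source of the low sampling noise noted by Kitagawa (1996).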


Regularized particle filter
The positive effects of the resampling step are to automatically concentrate particles in regions of interest of the state-space and to reduce particle degeneracy. However, particles with high weights are statistically selected many times. This leads to another problem, known as sample impoverishment, which is a loss of diversity among the particles, because the resultant sample will contain many repeated points (Ristic et al., 2004).
Some systematic techniques have been proposed to solve the problem of sample impoverishment. An alternative solution is to introduce a regularization step when the sample impoverishment becomes severe. The regularized particle filter (RPF) is based on regularization of the empirical distribution associated with the particle system using the kernel method (Musso et al., 2001). The main idea of RPF consists of changing the discrete approximation of the posterior distribution into a continuous approximation, so that the resampling step becomes simulation from an absolutely continuous distribution, hence producing a new particle system with n different particle locations. The concept of discrete and continuous approximation of the particle density is illustrated in Fig. 1. If the weights are concentrated on a limited number of particles, resampling from the discrete approximation (e.g., the SIR particle filter) may lead to a poor representation of the posterior density, whereas a continuous approximation of the regularized measure improves diversity in the resampling step.
In RPF, posterior particles are drawn from the continuous approximation

p(x_k | y_1:k) ≈ Σ_{i=1}^{n} w_k^i K_h(x_k − x_k^i)    (9)

where

K_h(x) = (1 / h^{n_x}) K(x / h)    (10)

is the rescaled kernel density K(·), h > 0 is the bandwidth, and n_x is the dimension of the state vector x. The kernel density is a symmetric probability density function on R^{n_x}. The kernel K(·) and bandwidth h are chosen to minimise the mean integrated square error (MISE) between the true posterior density and the corresponding regularized weighted empirical measure in Eq. (9), which is defined as

MISE(p̂) = E[∫ (p̂(x_k | y_1:k) − p(x_k | y_1:k))^2 dx_k]    (11)

where p̂(·|·) denotes the approximation to p(x_k | y_1:k) given by the right-hand side of Eq. (9). In the special case of equally weighted samples, w^i = 1/n for i = 1, ..., n, the optimal choice of the kernel is the Epanechnikov kernel,

K(x) = ((n_x + 2) / (2 c_{n_x})) (1 − ||x||^2)  if ||x|| < 1;  0 otherwise    (12)

where c_{n_x} is the volume of the unit hypersphere in R^{n_x}. It is worth noting that the kernel approximation becomes increasingly less appropriate as n_x (the dimensionality of the state) increases. The optimal bandwidth with a unit covariance matrix is

h_opt = A n^{−1/(n_x+4)}    (13)

with

A = [8 c_{n_x}^{−1} (n_x + 4) (2√π)^{n_x}]^{1/(n_x+4)}    (14)

RPF differs from SIR only in the additional regularization steps taken when sample impoverishment happens. The key step is

x_k^{i*} = x_k^i + h_opt D_k ε^i    (15)

where x_k^{i*} is a new particle generated from the kernel density, D_k is estimated from the empirical covariance matrix S_k such that D_k D_k^T = S_k, and ε^i is random noise drawn from the kernel. Note that the calculation of the empirical covariance matrix S_k is carried out prior to the resampling and is therefore a function of both the x_k^i and the w_k^i. The theoretical disadvantage of RPF is that its samples are no longer guaranteed to asymptotically approximate those from the posterior. This can be mitigated by including a Markov chain Monte Carlo (MCMC) move step (Gilks and Berzuini, 2001) based on the Metropolis-Hastings algorithm (Robert and Casella, 1999). The key idea is that a resampled particle is moved to a new state according to Eq. (15) only if u ≤ α, where u ∼ U[0, 1] and α is the acceptance probability:

α = min(1, p(y_k | x_k^{i*}) / p(y_k | x_k^i))    (16)

In Eq. (16), α becomes 1.0 when the likelihood of the new particle is greater than that of the previous particle. This means that the MCMC move step contributes to screening out bad particles in the regularization step, thus ensuring that particles asymptotically approximate samples from the posterior.
A single cycle of RPF with the MCMC move step is illustrated in Fig. 2. The basic procedure of RPF is the same as that of SIR before resampling. After the resampling step, entirely new samples are drawn from the continuous kernel. If a new particle is rejected in the MCMC move step, the particle resampled before regularization is used. Therefore, the efficiency of RPF depends on how many particles are preserved in the MCMC move step. Although this approach is frequently found to improve performance despite its less rigorous derivation, RPF has not previously been introduced in hydrologic data assimilation.
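The regularization and MCMC-move steps can be sketched as below. This is a minimal illustration, not the paper's implementation: a Gaussian kernel is substituted for the Epanechnikov kernel for simplicity, the bandwidth is passed in directly rather than computed from Eqs. (13)-(14), and the ensemble is a toy 2-D state.

```python
import numpy as np

def regularize(particles, weights, h_opt, rng):
    # Regularization step in the spirit of Eq. (15): resample, then jitter each
    # copy with kernel noise scaled by the empirical covariance of the ensemble.
    # A Gaussian kernel is used here for simplicity instead of Epanechnikov.
    n, n_x = particles.shape
    S = np.cov(particles.T, aweights=weights).reshape(n_x, n_x)
    D = np.linalg.cholesky(S + 1e-12 * np.eye(n_x))   # D @ D.T = S
    idx = rng.choice(n, size=n, p=weights)            # resampling step
    eps = rng.standard_normal((n, n_x))               # kernel noise epsilon^i
    return particles[idx] + h_opt * eps @ D.T

def mcmc_accept(lik_new, lik_old, rng):
    # Metropolis-Hastings style screening (Eq. 16): accept the moved particle
    # only if u <= alpha = min(1, L_new / L_old)
    alpha = min(1.0, lik_new / lik_old)
    return rng.uniform() <= alpha

rng = np.random.default_rng(3)
particles = rng.standard_normal((200, 2)) + 5.0   # toy 2-D state ensemble
weights = np.full(200, 1.0 / 200)
moved = regularize(particles, weights, h_opt=0.3, rng=rng)
```

Rejected moves simply keep the resampled particle, so the regularization can only reduce, never worsen, the quality of the accepted ensemble in terms of likelihood.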

Particle filter with lag time approach
Many hydrological processes operate, in response to precipitation, at similar length scales, but their time scales are delayed (Bloschl and Sivapalan, 1995). In a distributed hydrologic model there are many types of state variables, and each variable interacts with the others on different time scales. For example, in catchment modelling, internal state variables may refer to the two-dimensional distributions of soil moisture content, evapotranspiration, and overland flow, while an observable state may refer to the streamflow flux at the monitoring sites. There is a time lag before changes in the soil moisture distribution affect infiltration and sub-surface/surface runoff processes and the generated runoff is routed as streamflow to the measurement site. The components of a hydrologic model thus usually have different time scales, which need to be considered in the data assimilation process.
As stated by Salamon and Feyen (2010), this response time is usually greater than the interval of high-frequency discharge measurements. One simple approach is delayed updating, which uses longer time intervals before updating state variables. However, delayed updating omits large quantities of measurement information, and a fixed delay assumption may result in inappropriate estimation, because the response time changes continually depending on the current spatial distributions of the state and forcing variables. Furthermore, when system behaviour is relatively fast (e.g. hourly hydrologic or hydraulic modelling), delayed updating may miss the proper timing of assimilation. This can make it hard to implement sequential data assimilation techniques in hydrologic modelling. Thus, we propose a new lagged particle filtering approach based on the regularized particle filter, not only to consider different catchment responses, but also to use the whole of the measurement information for data assimilation. Figure 3 shows an example of the newly proposed lagged particle filtering approach with RPF. Here, k is the current time step, and j is the lag time required for responses of internal state variables to be transmitted to the observable variables. Note that it is better to set the lag time j large enough to cover plausible ranges, because the system response is time-variant.
The assimilation window of the lagged filtering is defined from time step k − j to k. The procedure of the lagged filtering is as follows: (1) to obtain a prediction at time step k, simulation starts from time step k − j. (2) When particles arrive at the current time k, the lagged weights are estimated according to the measurement. (3) Resampling is executed according to the lagged weights. (4) State variables at time step k − j + 1 are resampled simultaneously with those at the current time step. (5) If the effective particle number n_eff is less than the threshold (n_eff < n_thr), the regularization step is executed from time step k − j with new particle members generated from the kernel. (6) When each particle arrives at the current time step k, the acceptance probability α is calculated according to the lagged likelihood, as shown in Eq. (16). If a particle is rejected (u > α), the state variables before regularization are used without kernel perturbation. (7) For the next time step k + 1, simulation starts from time step k − j + 1 and follows the same procedure as in (1) to (6). In this way, the sequential data assimilation procedure is implemented at every time step without loss of measurement information. Compared to conventional particle filtering, the only additional procedure needed in the lagged regularized particle filtering is that state variables at time step k − j + 1 must be stored and resampled according to the lagged weights.
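The lagged filtering loop can be sketched as follows. Everything here is a stand-in: the model operator, the Gaussian likelihood with an assumed 0.5 std dev, the synthetic observations, and the particle count and lag are illustrative, and the regularization/MCMC steps (5)-(6) are omitted to keep the sliding-window logic visible.

```python
import numpy as np

rng = np.random.default_rng(1)
n, j, T = 100, 3, 10            # particles, lag, total time steps (illustrative)

def propagate(states):
    # Stand-in for the distributed model operator advancing one time step
    return 0.95 * states + rng.normal(0.0, 0.1, size=states.shape)

def likelihood(pred, obs):
    # Gaussian likelihood of the observed discharge (std dev assumed 0.5)
    return np.exp(-0.5 * ((pred - obs) / 0.5) ** 2)

states = rng.normal(5.0, 1.0, size=n)   # stored ensemble at time k - j
y_obs = 5.0 * np.ones(T)                # synthetic observations

for k in range(j, T):
    # steps (1)-(2): run each particle from k-j to k, weight it against y_k
    x = states.copy()
    for _ in range(j):
        x = propagate(x)
    w = likelihood(x, y_obs[k])
    w /= w.sum()
    # steps (3)-(4): resample the *stored* states with the lagged weights,
    # then advance the window one step so it ends at k-j+1 for the next cycle
    idx = rng.choice(n, size=n, p=w)
    states = propagate(states[idx])
```

The key point the sketch makes concrete is that only the ensemble at the start of the window needs to be stored; the weights computed at time k are applied to those stored states before the window slides forward.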
The lagged weight, w_lag^i, and the lagged likelihood, L_lag^i, can be calculated through various methods, including aggregation of past weights. In this study, however, the weight and likelihood at the last time step k (w_k^i, L_k^i) are simply used as the lagged weight and likelihood, respectively. Note that the use of weights without aggregation can give better results in cases of short-term forecasting.
Figure 4 summarises one cycle of the algorithm of RPF with the Markov chain Monte Carlo (MCMC) move step under the lagged filtering approach. The procedure connected with the dashed line denotes the regularization step. It is worth mentioning that the regularization step can be executed not only in the case of sample impoverishment, but also in the case of particle collapse, in which all particles have negligible weights that fall outside the measurement PDF. In that case, the regularization step effectively re-initializes the particle system.

Study area
The SMC methods are applied to the Katsura River catchment (Fig. 5) to show the applicability of the proposed particle filtering approach. This catchment is located in Kyoto, Japan, and covers an area of 1100 km² (887 km² at the Katsura station). The topography of the catchment is characterized by mountainous terrain upstream in the north and a flatter plain in the south. The elevation in the catchment ranges from 4 to 1158 m, with an average of about 325 m. The land use consists of forest (76.7 %), agricultural area (9.3 %), residential area (7.5 %), water body (2.0 %), public area (2.7 %), vacant land (1.2 %), and road (0.6 %). There are 13 rainfall observation stations, 1 meteorological observation station, and 4 river flow observation stations. Annual precipitation and mean temperature are about 1422 mm and 16.2 °C in Kyoto city (2001-2010). Precipitation is concentrated in the summer season from May to September. The Hiyoshi dam is located upstream. The controlled outflow record from the dam reservoir is given as inflow to the hydrologic model, and the model simulates rainfall-runoff processes downstream of the dam.

Hydrological model and particle filtering
The hydrologic model used is the water and energy transfer processes (WEP) model, which was developed for simulating spatially variable water and energy processes in catchments with complex land covers (Jia and Tamai, 1998; Jia et al., 2001). State variables of WEP include soil moisture content, surface runoff, groundwater tables, discharge and water stage in rivers, and heat flux components (Fig. 6). The spatial calculation unit of the WEP model is a square or rectangular grid. Runoff routing on slopes and in rivers is carried out by applying a one-dimensional kinematic wave approach from upstream to downstream. The WEP model has been applied in several watersheds in Japan, Korea, and China with different climate and geographic conditions (Jia et al., 2001, 2009; Kim et al., 2005; Qin et al., 2008).
The model setup uses a 250 m grid resolution and an hourly time step. We use hourly observed rainfall from 13 observation stations operated by the Ministry of Land, Infrastructure, Transport and Tourism in Japan (http://www1.river.go.jp/) and hourly observed meteorological data, including air temperature, relative humidity, wind speed, and duration of sunlight, from the Kyoto station, which is operated by the Japan Meteorological Agency (http://www.jma.go.jp/jma/index.html). The nearest neighbour interpolation method is used to represent the spatial distribution of rainfall.
An SRTM 90 m digital elevation map (DEM) is adopted (http://srtm.csi.cgiar.org/) and converted to 250 m resolution. Soil distribution is obtained from the website of the Food and Agriculture Organization of the United Nations (http://www.fao.org/nr/land/soils/en/). Physical properties of soil are derived from soil texture information using the ROSETTA model (Schaap et al., 2001). However, the saturated hydraulic conductivity of several soils is roughly adjusted for the data period of 2007, since soil properties estimated from large-scale soil maps vary greatly. For other parameters related to aquifers and vegetation, we apply parameter ranges from the earlier studies mentioned above. A no-flux boundary condition is specified at the catchment boundary for the groundwater flow. Artificial water use is approximately estimated as 3 m³ s⁻¹ and subtracted directly from the simulated discharge at the Katsura station. Ensemble simulation of 192 particles is conducted on a multi-processing computer (96 cores in the supercomputing system of Kyoto University) via parallel-computing techniques of the open message passing interface (MPI) (http://www.open-mpi.org/). The parallel programming code is written using a single-program multiple-data (SPMD) approach, meaning the same modelling procedure is run with different state variables. A master process aggregates particle statistics and controls the resampling/regularization steps. Message passing commands of MPI are used to transfer spatially distributed state variables from one particle to another in the resampling step.

Process and measurement error models
Particle filters perform suboptimal estimation of the system states by considering the uncertainty in both the measurement and modelling systems.Therefore, the choice of the error models is crucial to obtaining a better estimation (Weert and El Serafy, 2006).Another important point is to choosing hidden state variables for filtering.Since there are numerous state variables in a distributed hydrologic model, it is not practical to consider the uncertainty of all state variables with a limited number of particles.Therefore, it is necessary to choose a limited number of state variables, which process error of the modelling system is aggregated in, and is easily updated by observable variables.In this study, we select soil moisture content and overland flow in each grid as hidden state variables and streamflows at the Katsura station as an observable variable for data assimilation.Global multipliers are introduced to perturb state variables stochastically and effectively.In the case of soil moisture content, the total soil moisture depth at the previous time step S k−1 is aggregated for three soil layers within the catchment as: where θ l j and d l j are the volumetric soil moisture content (m 3 m −3 ) and the soil depth (m) in each layer, and l and m represent the number of soil layers and the total number of grids within the catchment, respectively.Then, process noise of the soil moisture content w soil k is added to the aggregated state variable S k−1 as: w soil k is assumed as Gaussian distribution N(0, σ 2 soil k ) having a heteroscedastic standard deviation as: In the above, α soil and β soil are adaptable parameters that can be obtained from sensitivity analysis.Although proper tuning of these adaptable parameters is important, their optimum value changes according to different data periods, which is another source of uncertainty in data assimilation.We will discuss the effects of adaptable parameters, especially α soil , on two different particle filters 
later.The value of β soil is set as 50 mm for the whole simulation.When the process error of soil moisture content w soil k is generated for each particle, the perturbed states of soil moisture θl j are calculated using multiplicative factor γ s as follows: In the above equations, if perturbed soil moisture at each grid and layer, θl j , becomes greater or smaller than the physical limitation, θl j is adjusted at its maximum (i.e.porosity) or minimum (i.e.wilting point).It is also worth noting that non-linearity of the distributed hydrologic model can alleviate loss of spatial diversity in the pertubation process, which is one of the disadvantages of global multipliers.For example, even if the same noise is applied, the spatial pattern of state variables can become different due to antecedent soil moisture and the non-linear system response for that.Similar noise definition for soil moisture has been applied for state updating of a distributed hydrologic model in the study of Kim et al. (2007).The perturbation of overland flow is also applied in a multiplicative way as: where q ov j and qov j are overland flow with and without process noise w ov k , respectively, which is assumed as a Gaussian distribution N (0, σ 2 ov k ).The standard deviation of overland flow noise σ ov k is parameterized as follows: where α ov and β ov are adaptable parameters with settings of −10 and 5 m 3 s −1 , respectively, as obtained from sensitivity analysis.y sim k−1 is the simulated discharge of data assimilation at the previous time step.c ov is the constant coefficent.The value of c ov is estimated through the sensitivity analysis and set as 0.02 for the whole simulation.This formulation was originally proposed by Seo et al. 
(2009) to enhance forecasts in periods of low flow. Equation (23) specifies progressively smaller uncertainty when the simulated flow falls below the threshold β_ov (m^3 s^-1). We adopt this error formulation because the error of overland flow routing is expected to decline in low-flow periods. The measurement error of the discharge is assumed to follow a Gaussian distribution N(0, σ^2_{obs,k}), similar to previous studies (Georgakakos, 1986; Weerts and El Serafy, 2006; Salamon and Feyen, 2010). The standard deviation of the measurement error is chosen as
$$\sigma_{\mathrm{obs},k} = \alpha_{\mathrm{obs}}\, y_k + \beta_{\mathrm{obs}}. \qquad (24)$$
In Eq. (24), α_obs is set to 0.1, corresponding to a 10 % measurement error, and the constant coefficient β_obs is set to 5 m^3 s^-1 to account for uncertainty in periods of low flow, such as artificial water use and dam reservoir control. The uncertainty of the forcing data is not considered in this study, to make it easier to evaluate the differences between the particle filters. A perturbation of 15 %, drawn from a uniform distribution, is applied to the initial soil moisture condition.
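The noise formulation above can be sketched in code as follows. This is a minimal illustration, not the authors' implementation: the catchment-mean aggregation, the linear form of σ_soil, and all function names are our assumptions reconstructed from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adaptable noise parameters quoted in the text (alpha_soil is tuned per period).
ALPHA_SOIL, BETA_SOIL = 0.05, 50.0   # beta_soil given as 50 mm
ALPHA_OBS, BETA_OBS = 0.1, 5.0       # Eq. (24): 10 % error plus 5 m3/s floor

def perturb_soil(theta, depth, porosity, wilting):
    """Perturb gridded soil moisture with one global multiplier per particle.

    theta, depth: arrays of shape (n_layers, n_grids); theta in m3/m3, depth in m.
    Returns perturbed soil moisture, clipped to its physical limits.
    """
    # Aggregate soil moisture depth over layers, averaged over grids (mm).
    s_prev = np.sum(theta * depth) * 1000.0 / theta.shape[1]
    # Heteroscedastic Gaussian noise (assumed linear form: sigma = alpha*S + beta).
    sigma = ALPHA_SOIL * s_prev + BETA_SOIL
    w = rng.normal(0.0, sigma)
    gamma = (s_prev + w) / s_prev        # global multiplicative factor
    # Clip to porosity (maximum) and wilting point (minimum) everywhere.
    return np.clip(gamma * theta, wilting, porosity)

def obs_error_std(y_obs):
    """Measurement-error standard deviation, Eq. (24)."""
    return ALPHA_OBS * y_obs + BETA_OBS
```

Note that the clipping step reproduces the adjustment described above: any perturbed value outside [wilting point, porosity] is reset to the nearest physical bound.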

Results and discussion
We implement two kinds of particle filters, SIR and the lagged RPF (LRPF), for hindcasting of streamflow using the WEP model. The resampling step is implemented in both SIR and LRPF. An additional regularization step is executed only in LRPF, when sample impoverishment occurs or the ensemble mean falls outside 20 % of the observed discharge. Simulation periods and observations are shown in Table 1.
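The resampling step and the LRPF trigger described above can be sketched as follows. The use of systematic resampling and of an effective-sample-size threshold for detecting impoverishment are our assumptions; the paper only states the 20 % ensemble-mean criterion explicitly.

```python
import numpy as np

rng = np.random.default_rng(1)

def systematic_resample(weights):
    """Systematic resampling: returns the indices of the selected particles."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # one uniform draw, n strata
    return np.searchsorted(np.cumsum(weights), positions)

def needs_regularization(weights, y_mean, y_obs, n_thresh=None, band=0.2):
    """LRPF trigger: sample impoverishment (here detected via a low effective
    sample size, an assumed criterion) or the ensemble mean falling outside
    20 % of the observed discharge."""
    n_eff = 1.0 / np.sum(np.asarray(weights) ** 2)  # effective sample size
    if n_thresh is None:
        n_thresh = 0.5 * len(weights)               # assumed threshold
    return n_eff < n_thresh or abs(y_mean - y_obs) > band * y_obs
```

With uniform weights, systematic resampling keeps every particle exactly once, which is why it preserves diversity better than multinomial resampling when degeneracy is mild.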
Hourly observed discharges at the Katsura station are used for the data assimilation, and observations at the Kameoka station are used for comparison. A five-day warm-up period is added before the data assimilation starts. Deterministic simulation results and 6-h-lead forecasts of each particle filter at the Katsura station for the years 2007, 2004, and 2003 are shown in Figs. 7, 8, and 9, respectively. Compared with the results of the other years, the differences in confidence intervals between the two filters are small in the year 2004 (Fig. 8), since the deterministic modelling results agree relatively well with the observations. The elapsed simulation time for the year 2007 is about 11 h for SIR and 16 h for LRPF, for a 2-month simulation with a 24-h-lead forecast at every time step.
Various ranges of the process noise parameter, α_soil, are simulated for each particle filter to assess the effect of the process noise on the forecast. The means and 90 % confidence intervals of 6-h-lead forecasts for varying values of α_soil are illustrated in Fig. 10. In the case of SIR, the confidence intervals of the forecast widen rapidly and the ensemble mean becomes unstable as α_soil increases. On the other hand, those of LRPF remain stable regardless of the process noise.
Figure 11 illustrates streamflow forecasts of varying lead times at the Katsura station via LRPF and SIR. The two particle filters show different patterns, especially in the rising limb of the hydrograph from time steps 1000 to 1010. As the lead time becomes shorter, forecasts via LRPF improve relative to SIR. Conversely, the two particle filters show similar forecasts from time steps 1050 to 1180, where the flow varies relatively smoothly. When the observed flows change sharply, even with the heteroscedastic error assumption, the process error momentarily becomes too small for the prior distribution to cover the observation distribution, which leads to sample impoverishment. In the case of LRPF, new particles, generated from the kernel and selected in the lagged time window, mitigate the loss of sample diversity, while the recovery of particle diversity needs more time steps in the case of SIR.
Figure 12 shows the sensitivity to the lag time of LRPF and to the process noise parameter, α_soil, for each particle filter, estimated for varying lead times in the year 2007 using the Nash-Sutcliffe efficiency, calculated as
$$\mathrm{NSE} = 1 - \frac{\sum_{k=1}^{T}\left(y_k - y_k^{\mathrm{sim}}\right)^2}{\sum_{k=1}^{T}\left(y_k - \bar{y}\right)^2},$$
where y_k is the observation, ȳ is the mean of the observations, y_k^sim is the forecasted streamflow at the measurement site, and T is the total number of time steps.
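The Nash-Sutcliffe efficiency defined above translates directly into code; this short sketch (with a hypothetical function name) mirrors the formula term by term:

```python
import numpy as np

def nash_sutcliffe(y_obs, y_sim):
    """Nash-Sutcliffe efficiency: 1 - sum((y - y_sim)^2) / sum((y - ybar)^2).
    NSE = 1 is a perfect fit; NSE <= 0 means the forecast is no better than
    simply predicting the mean of the observations."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    return 1.0 - np.sum((y_obs - y_sim) ** 2) / np.sum((y_obs - y_obs.mean()) ** 2)
```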
When the lag time is larger than 4 h, the difference in Nash-Sutcliffe efficiency (NSE) for varying lead times becomes negligible, as shown in Fig. 12a. A lag time of 8 h is therefore applied in the other LRPF simulations. NSE scores for varying lead times show different behaviours for each particle filter (Fig. 12b). While the NSE of LRPF is consistent regardless of the error assumption, with all the red lines overlapping along the lead time, that of SIR changes with the value of α_soil. Overall, LRPF shows improved NSE for any range of α_soil. The NSE shows rather significant differences between the two particle filters when computed for the high flows only (not shown).
Figure 13 shows the NSE of each particle filter for varying lead times in the years 2004 and 2003. Overall, LRPF forecasts show less variation than SIR forecasts, except for the 2003 forecast at Kameoka. Similarly to the year 2007 (Fig. 12b), the NSE scores of SIR in 2004 and 2003 drop sharply when the process error α_soil increases. Although the NSE scores of LRPF change less than those of SIR, the NSE differences of LRPF in 2003 increase with lead time. Relatively excessive perturbation in the regularization step for the smoothly varying flood events may be one reason. However, the differences in NSE appear to be negligible within 8-h lead times. The forecasts at Kameoka show reduced NSE scores for both particle filters. In the case of 2004, LRPF gives better forecasts within 4-h lead times, while SIR outperforms it at the other lead times in 2004 and 2003. Since the H-Q relationship at Kameoka is based on limited data, the Kameoka station appears to have larger uncertainty than Katsura. Due to the lack of data, a more extensive comparison is beyond the scope of this study. Nevertheless, we can observe that the statistical stability of LRPF is superior to that of SIR in terms of confidence intervals and accuracy under uncertain process noise, α_soil (not shown), similar to the results for 2007 (Fig. 10).
Table 2 shows statistics of streamflow forecasts with varying lead times at Katsura, including NSE, root mean square error (RMSE), and the correlation coefficient (COR), for a given process noise (α_soil = 0.03). The statistics in Table 2 indicate that LRPF is somewhat better than SIR, especially in the years 2007 and 2004. The improvement of LRPF over SIR is larger for shorter lead times and for the high flows (not shown). COR is high for both filters over all periods. It is worth noting that SIR has different optimum values of the process noise for different data periods, and thus its statistics vary strongly with the process noise (not shown), following the patterns in Figs. 12 and 13.
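The three verification metrics reported in Table 2 can be computed together as below; this is a generic sketch (hypothetical function name), using the standard definitions of RMSE and the Pearson correlation coefficient:

```python
import numpy as np

def forecast_stats(y_obs, y_sim):
    """NSE, RMSE, and correlation coefficient for one lead time."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_sim = np.asarray(y_sim, dtype=float)
    err = y_obs - y_sim
    return {
        "NSE": 1.0 - np.sum(err ** 2) / np.sum((y_obs - y_obs.mean()) ** 2),
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "COR": float(np.corrcoef(y_obs, y_sim)[0, 1]),
    }
```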

Conclusions
A lagged particle filtering approach was proposed as a framework to deal with the delayed response that originates from the different time scales of hydrologic processes in a distributed hydrologic model. The regularized particle filter with an MCMC move step was implemented to preserve sample diversity under the lagged filtering approach. As a process-based distributed hydrologic model, WEP was used to illustrate the strengths and weaknesses of the lagged regularized particle filter (LRPF) compared to SIR for short-term streamflow forecasting.
The two particle filters showed significantly improved forecasts compared to the deterministic modelling cases in different simulation periods. Various ranges of the process noise related to soil moisture were simulated for varying lead times. While SIR has different optimal values of the process noise and shows sensitive variation of its confidence intervals according to the process noise, LRPF gives consistent forecasts regardless of the process noise assumption. Owing to the preservation of particle diversity by the kernel, LRPF showed enhanced forecasts, especially when the discharge changed sharply in a short time (the year 2007) and when the flood peak was high (the year 2004). However, the relatively large perturbation by the kernel could produce negative effects when the flood peak was relatively small and the hydrograph varied smoothly (the year 2003).
SMC methods have significant potential for highly non-linear problems, especially for process-based distributed models in hydrologic investigation. However, the computational cost and the marginal adequacy of SMC methods for distributed modelling have been bottlenecks to their practical implementation. As shown in this study, the particle filtering process can be effectively parallelized and implemented in a multi-core computing environment via the MPI library. LRPF is expected to serve as a framework for sequential data assimilation in process-based distributed modelling. Its main benefits are improved forecasts for rapidly varying high floods and the stability of its confidence intervals under uncertain process noise. More extensive implementation for multi-site forecasting and effective sequential estimation of model parameters remain open problems.

Fig. 1. The concept of discrete and continuous approximation of particle density: (a) weighted empirical measure, and (b) regularized measure by kernel. Adapted from Musso et al. (2001).

Fig. 2. A single cycle of a regularized particle filter.
Let x_k denote the n_x-dimensional vector of the system state at time k. The operator f: R^{n_x} -> R^{n_x} expresses the system transition in response to the forcing data u_k (e.g. rainfall, weather data) and the parameters θ, and h denotes the measurement operator. The regularized particle filter (RPF) is based on a kernel approximation of the empirical distribution associated with the particle system (Musso et al., 2001). The main idea of RPF consists of changing the discrete approximation into a continuous one, so the resampling step draws from an absolutely continuous distribution, hence producing a new set of particle locations. The concept of discrete and continuous approximation of particle density is illustrated in Fig. 1. If the weights are concentrated on a few particles, resampling from the discrete approximation (e.g., in the SIR filter) gives a poor representation of the posterior density, while a continuous regularized measure improves the diversity in the resampling step. The kernel and its bandwidth are chosen to minimise the mean integrated square error between the posterior density and the corresponding regularized weighted measure.
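The regularization step itself can be sketched as below. For simplicity this uses a Gaussian kernel whitened by the empirical covariance of the ensemble; the specific kernel, the bandwidth value, and the function name are our assumptions, not the paper's exact choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def regularize(particles, h=0.1):
    """Draw new particles from a continuous (Gaussian-kernel) approximation.

    After resampling, each particle is jittered by h * L * eps, where L is the
    Cholesky factor of the ensemble covariance and eps ~ N(0, I), so the jitter
    respects the spread and correlation of the current ensemble.
    particles: array of shape (N, n_x).
    """
    n, nx = particles.shape
    cov = np.cov(particles.T).reshape(nx, nx)
    # Small diagonal guard against a rank-deficient ensemble covariance.
    chol = np.linalg.cholesky(cov + 1e-10 * np.eye(nx))
    return particles + h * rng.standard_normal((n, nx)) @ chol.T
```

Because the jitter is drawn from a smooth density centred on the resampled particles, duplicated particles become distinct again, which is exactly the diversity-restoring effect described above.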

Fig. 3. Particle traces in the regularization step under the lagged filtering approach.

Fig. 4. The flow diagram of regularized particle filter with MCMC move step in the lagged filtering approach.
Several approaches have been proposed to solve the problem of sample impoverishment. An alternative solution is to introduce the regularization step when the impoverishment becomes severe: new particles are drawn from the continuous kernel approximation of the posterior, where h is the kernel bandwidth, n_x is the dimension of the state vector, and the kernel K is a symmetric probability density function on R^{n_x}.

Fig. 7. Observed versus 6-h-lead forecasts at the Katsura station via LRPF and SIR (1 June-31 July 2007): (a) a deterministic modelling case; (b) LRPF; and (c) SIR. The blue line and area represent the mean value and 90 % confidence intervals, respectively. A gray dashed line represents a deterministic modelling case. The black dots represent observed discharge.

Fig. 8. Observed versus 6-h-lead forecasts at the Katsura station via LRPF and SIR (1 August-30 October 2004): (a) a deterministic modelling case; (b) LRPF; and (c) SIR. The blue line and area represent the mean value and 90 % confidence intervals, respectively. A gray dashed line represents a deterministic modelling case. The black dots represent observed discharge.

Fig. 9. Observed versus 6-h-lead forecasts at the Katsura station via LRPF and SIR (1 June-31 August 2003): (a) a deterministic modelling case; (b) LRPF; and (c) SIR. The blue line and area represent the mean value and 90 % confidence intervals, respectively. A gray dashed line represents a deterministic modelling case. The black dots represent observed discharge.

Fig. 10. Observed versus 6-h-lead forecasts at the Katsura station via LRPF and SIR for varying values of the process error parameter, α_soil (11 to 17 July 2007). The blue line and area represent the mean value and 90 % confidence intervals, respectively. A gray dashed line represents a deterministic modelling case. The black dots represent observed discharge.

Fig. 11. Observed versus forecasts of varying lead times at the Katsura station via LRPF and SIR with α_soil of 0.05 (11 to 17 July 2007): (a) the lagged regularized particle filter (LRPF); and (b) the SIR particle filter. The blue lines represent forecasts of varying lead times. A gray dashed line represents a deterministic modelling case. The black dots represent observed discharge.

Fig. 12. Nash-Sutcliffe model efficiency for varying values of the process error parameter, α_soil. The red lines represent the lagged regularized particle filter. The dashed lines represent the SIR particle filter. A dotted line represents a deterministic modelling case.

Fig. 13. Nash-Sutcliffe model efficiency for varying values of the process error parameter, α_soil. The red lines represent the lagged regularized particle filter. The dashed lines represent the SIR particle filter. A dotted line represents a deterministic modelling case.
The lag time of 8 h is applied in LRPF, and the applied values of α_soil are 0.05 for 2007 and 0.03 for 2004 and 2003. The forecasted streamflow via the two particle filters shown in Figs. 7 and 9 indicates good conformity between observation and simulation, while the deterministic modelling shows significant underestimation, especially in the high-flood regime. The 90 % confidence intervals of SIR are larger than those of LRPF, although the same error assumption is used.

Table 1. Simulation periods and observed flow.