Rainfall-Runoff Prediction at Multiple Timescales with a Single Long Short-Term Memory Network

Long Short-Term Memory networks (LSTMs) have been applied to daily discharge prediction with remarkable success. Many practical scenarios, however, require predictions at more granular timescales. For instance, accurate prediction of short but extreme flood peaks can make a life-saving difference, yet such peaks may escape the coarse temporal resolution of daily predictions. Naively training an LSTM on hourly data, however, entails very long input sequences that make learning difficult and computationally expensive. In this study, we propose two Multi-Timescale LSTM (MTS-LSTM) architectures that jointly predict multiple timescales within one model: they process long-past inputs at a single temporal resolution and branch out into each individual timescale for more recent input steps. We test these models on 516 basins across the continental United States and benchmark against the US National Water Model. Compared to naive prediction with a distinct LSTM per timescale, the multi-timescale architectures are computationally more efficient with no loss in accuracy. Beyond prediction quality, the multi-timescale LSTM can process different input variables at different timescales, which is especially relevant for operational applications where the lead time of meteorological forcings depends on their temporal resolution.

set of target timescales is sufficient. Lastly, in our exploratory experiments, using ODE-LSTMs to predict at timescales that were not part of training was actually worse (and much slower) than (dis-)aggregating the fixed-timescale predictions of our multi-timescale LSTM. For these reasons, we excluded ODE-LSTMs from the main evaluation in this study (nevertheless, Appendix C details some of our exploratory ODE-LSTM experiments).
In this paper, we show how LSTM-based architectures can jointly predict discharge at multiple timescales in one model. We make the following contributions:
-First, we outline two LSTM architectures that predict discharge at multiple timescales (Sect. 2.3). We capitalize on the fact that watersheds are damped systems: while the history of total mass and energy inputs is important, the impact of high-frequency variation becomes less important at long lead times. Our approach to providing multiple output timescales processes long-past input data at a coarser temporal resolution than recent time steps. This shortens the input sequences, since high-resolution inputs are only necessary for the last few time steps. We benchmark the daily and hourly predictions of these architectures against (i) a naive solution that trains individual LSTMs for each timescale and (ii) a traditional hydrologic model, the US National Water Model (NWM) (Sect. 3.1). Our results show that all LSTM solutions predict at significantly higher Nash-Sutcliffe efficiency (NSE) than the NWM at all timescales. While there is little difference in accuracy among the LSTMs, the naive model has much higher computational overhead than the multi-timescale LSTMs.
-Second, we propose a regularization scheme that reduces inconsistencies across timescales as they arise from naive, per-timescale prediction (Sect. 2.3.2). According to our results, the regularization not only reduces inconsistencies but also results in slightly improved predictions overall (Sect. 3.2).
-Third, we demonstrate that a multi-timescale LSTM can ingest individual and multiple sets of forcings for each target timescale, which closely resembles operational forecasting use-cases where forcings with high temporal resolution often have shorter lead times than forcings with low resolution (Sect. 3.4).

Data
In order to maintain continuity and comparability with previous benchmarking studies (e.g., Kratzert et al., 2019) on the CAMELS dataset (Addor et al., 2017), we conducted our experiments in a setup that follows those studies as closely as possible. Out of the 531 CAMELS basins used by previous studies, 516 basins have hourly stream gauge data available from the USGS Water Information System through the Instantaneous Values REST API. This service provides historical measurements at varying sub-daily resolutions (usually every 15 to 60 minutes), which we averaged to hourly and daily time steps for each basin. Since our forcing data and benchmark model data use UTC timestamps, we converted the USGS streamflow timestamps to UTC. While CAMELS provides only daily meteorological forcing data, we needed hourly forcings. To maintain some congruence with previous CAMELS experiments, we used the hourly NLDAS-2 product, which contains meteorological data since 1979 (Xia et al., 2012). We spatially averaged the forcing variables listed in Table 1 for each basin. Additionally, we averaged these basin-specific hourly meteorological variables to daily values.
We trained our models on the period from October 1, 1990, to September 30, 2003. All LSTM models used in this study take as inputs the eleven forcing variables listed in Table 1, concatenated at each time step with the same 27 static catchment attributes from the CAMELS dataset that Kratzert et al. (2019) used.

Benchmark Models
We used two groups of models as baselines for comparison with the proposed architectures: the LSTM proposed by Kratzert et al. (2019), naively adapted to hourly streamflow modeling, and the US National Water Model (NWM), a traditional hydrologic model used operationally by the National Oceanic and Atmospheric Administration.

Traditional Hydrologic Model
The US National Oceanic and Atmospheric Administration (NOAA) generates hourly streamflow predictions with the National Water Model, which is a process-based model based on WRF-Hydro (Gochis et al., 2020). Specifically, we benchmarked against the NWM v2 reanalysis product, which includes hourly streamflow predictions for the years 1993 through 2018.

Naive LSTM
Long Short-Term Memory (LSTM) networks (Hochreiter and Schmidhuber, 1997) are a flavor of recurrent neural network designed to model long-term dependencies between input and output data. LSTMs maintain an internal memory state that is updated at each time step by a set of learned functions called gates. These gates control the input-state relationship (through the input gate), the state-output relationship (through the output gate), and the memory timescales (through the forget gate).
Figure 1 illustrates this architecture. For a more detailed description of LSTMs, especially in the context of rainfall-runoff modeling, we refer to Kratzert et al. (2018).
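The gate updates described above can be sketched in a few lines. The following is a minimal NumPy implementation of a single LSTM time step; the stacked weight layout and variable names are our own illustration, not taken from any particular library or from the paper's code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step.

    x: input vector; h: hidden state; c: cell (memory) state.
    W, U, b hold the stacked weights/biases for the four components,
    in the order [input gate, forget gate, output gate, cell candidate].
    """
    n = h.size
    z = W @ x + U @ h + b            # stacked pre-activations
    i = sigmoid(z[0:n])              # input gate: how much to write
    f = sigmoid(z[n:2 * n])          # forget gate: how much memory to keep
    o = sigmoid(z[2 * n:3 * n])      # output gate: how much state to expose
    g = np.tanh(z[3 * n:4 * n])      # candidate memory update
    c_new = f * c + i * g            # update the memory state
    h_new = o * np.tanh(c_new)       # new hidden state (the LSTM's output)
    return h_new, c_new
```

Each gate is a sigmoid in (0, 1) that scales how much information flows into, out of, and through the memory state at this step.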
LSTMs can cope with longer time series than classic recurrent neural networks because they are not susceptible to vanishing gradients during the training procedure (Bengio et al., 1994;Hochreiter and Schmidhuber, 1997). Since LSTMs process input sequences sequentially, longer time series result in longer training and inference time. For daily predictions, this is not a problem, since look-back windows of 365 days appear to be sufficient for most basins, at least in the CAMELS dataset.
Therefore, our baseline for daily predictions is the LSTM model from Kratzert et al. (2019), which was trained on daily data with an input sequence length of 365 days.
For hourly data, even half of a year corresponds to more than 4300 time steps, which results in very long training and inference runtime, as we will detail in Sect. 3.3. In addition to the computational overhead, the LSTM forget gate makes it hard to learn long-term dependencies, because it effectively reintroduces vanishing gradients into the LSTM (Jozefowicz et al., 2015). Yet, we cannot simply remove the forget gate: both empirical LSTM analyses (Jozefowicz et al., 2015; Greff et al., 2017) and our exploratory experiments showed that this deteriorates results. To address this, Gers et al. (1999) proposed to initialize the bias of the forget gate to a small positive value (we used 3). This starts training with an open gate and enables gradient flow across more time steps.
We used this bias initialization trick for all our LSTM models, and it allowed us to include the LSTM with hourly inputs as the naive hourly baseline for our proposed models. The architecture for this naive benchmark is identical to the daily LSTM, except that we ingested input sequences of 4320 hours (180 days). Further, we tuned the learning rate and batch size for the naive hourly LSTM, since it receives 24 times as many samples as the daily LSTM. The extremely slow training impedes a more extensive hyperparameter search. Appendix D details the grid of hyperparameters we evaluated to find a suitable configuration, as well as further details on the final hyperparameters.
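To illustrate why the bias initialization helps, the following sketch (plain NumPy; the bias value 3 is the one stated above) compares the forget gate at the start of training. With a zero-initialized bias and near-zero pre-activations, the gate sits near 0.5 and discards roughly half the memory at every step; a bias of 3 starts the gate almost fully open:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Forget gate activation f = sigmoid(w.x + u.h + b_f); at the start of
# training the weighted terms are near zero, so the bias b_f dominates.
f_default = sigmoid(0.0)   # zero-initialized bias: gate half closed
f_open = sigmoid(3.0)      # bias initialized to 3: gate ~95% open

# Fraction of memory (and, roughly, gradient signal) surviving a
# 100-step look-back window under each initialization:
steps = 100
survival_default = f_default ** steps   # vanishes to ~1e-30
survival_open = f_open ** steps         # still on the order of 1e-2
```

This is why an open forget gate at initialization lets gradients reach many more time steps before training has to learn when to actually forget.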

Using LSTMs to Predict Multiple Timescales
We evaluated two different LSTM architectures that are capable of simultaneous predictions at multiple timescales. For the sake of simplicity, the following explanations use the example of a two-timescale model that generates daily and hourly predictions.
Nevertheless, the architectures we describe here generalize to other timescales and to more than two timescales, as we will show in an experiment in Sect. 3.5.
The first model, the shared multi-timescale LSTM (sMTS-LSTM), is a simple extension of the naive approach: we generate a daily prediction as usual, i.e., the LSTM ingests an input sequence of T_D time steps at daily resolution and outputs a prediction at the last time step (sequence-to-one prediction). Next, we reset the hidden and cell states to their values from time step T_D − T_H/24 and ingest the hourly input sequence of length T_H to generate the 24 hourly predictions that correspond to the last daily prediction. In other words, during each prediction step, we perform two forward passes through the same LSTM: one that generates a daily prediction and one that generates the 24 corresponding hourly predictions. We add a one-hot timescale encoding to the input sequence so that the LSTM can distinguish daily from hourly inputs. The key insight with this model is that the hourly forward pass starts with LSTM states from the daily forward pass, which act as a summary of long-term information up to that point. In effect, the LSTM has access to a large look-back window but, unlike the naive hourly LSTM, does not suffer from the performance impact of extremely long input sequences.
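The two forward passes and the state handoff can be sketched schematically as follows. This is an illustration of the index arithmetic only: `step_fn` is a dummy state-update function standing in for one LSTM step, and all names are ours, not from the paper's implementation:

```python
def smts_lstm_forward(daily_inputs, hourly_inputs, step_fn, init_state):
    """Schematic sMTS-LSTM forward pass.

    daily_inputs: sequence of T_D daily input vectors.
    hourly_inputs: sequence of T_H hourly input vectors (T_H divisible by 24).
    step_fn(state, x) -> state stands in for one LSTM step.
    Returns the daily output state and the list of hourly output states.
    """
    T_D, T_H = len(daily_inputs), len(hourly_inputs)
    assert T_H % 24 == 0, "hourly look-back must cover whole days"
    split = T_D - T_H // 24  # daily step at which the hourly pass branches off

    # First pass: daily resolution over the full look-back window.
    states = [init_state]
    for x in daily_inputs:
        states.append(step_fn(states[-1], x))
    daily_out = states[-1]

    # Second pass: reset to the state from step T_D - T_H/24, run hourly.
    state = states[split]
    hourly_out = []
    for x in hourly_inputs:
        state = step_fn(state, x)
        hourly_out.append(state)
    return daily_out, hourly_out
```

With T_D = 365 and T_H = 336 (the values used later in this paper), the hourly pass branches off from the daily state after step 365 − 336/24 = 351.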
The second architecture, illustrated in Fig. 2, is a more general variant of the sMTS-LSTM that is specifically built for multi-timescale predictions; hence, we call it the multi-timescale LSTM (MTS-LSTM). It works just like the shared version but splits the LSTM into two branches, one per timescale. We first generate a prediction with an LSTM acting at the coarsest timescale (here: daily), using a full input sequence of length T_D (e.g., 365 days). Next, we reuse the daily hidden and cell states from step T_D − T_H/24 as the initial states for an LSTM at a finer timescale (here: hourly), which generates the corresponding hourly predictions.

Per-Timescale Input Variables
An important advantage of the MTS-LSTM over the sMTS-LSTM architecture arises from its more flexible input dimensionality: because each timescale is processed in an individual LSTM branch, we can ingest different input variables for the different timescales. This can be a key differentiator in operational applications when, for instance, daily weather forecasts exist with a much longer lead time than the available hourly forecasts, or when using remote sensing data that is available only at certain overpass frequencies. In these cases, the MTS-LSTM can process the daily forcings in its daily LSTM branch and the hourly forcings in its hourly LSTM branch.
To evaluate this capability, we used two sets of forcings as daily inputs: the Daymet and Maurer forcing sets that are included in the CAMELS dataset (Newman et al., 2014). For lack of other hourly forcing products, we conducted two experiments: in one, we continued to use only the hourly NLDAS forcings. In the other, we additionally ingested the corresponding day's Daymet and Maurer forcings at each hour. Since the Maurer forcings only range until 2008, we conducted this experiment on the validation period from October 2003 to September 2008.

Cross-Timescale Consistency
Since the MTS-LSTM and sMTS-LSTM architectures generate predictions at multiple timescales simultaneously, we can incentivize predictions that are consistent across timescales. Unlike in other domains (e.g., computer vision; Zamir et al., 2020), consistency is well-defined in our application: predictions are consistent if the mean of every day's hourly predictions equals that day's daily prediction. Hence, we can explicitly state this constraint as a regularization term in our loss function. Building on the basin-averaged NSE loss from Kratzert et al. (2019), our loss function averages the losses from each individual timescale and regularizes with the mean squared difference between daily and day-averaged hourly predictions. Note that, although we describe the regularization with only two simultaneously predicted timescales, the approach generalizes to more timescales, as we can add the mean squared difference between each pair of timescale τ and the next-finer timescale τ′. Taken together, we used the following loss for a two-timescale daily-hourly model:

L = (1/2) Σ_{τ ∈ {D,H}} (1/B) Σ_{b=1}^{B} (1/N_b^τ) Σ_{t=1}^{N_b^τ} (ŷ_t^τ − y_t^τ)² / (σ_b + ε)²  +  (1/B) Σ_{b=1}^{B} (1/N_b^D) Σ_{t=1}^{N_b^D} (ŷ_t^D − (1/24) Σ_{h=1}^{24} ŷ_{t,h}^H)².

Here, B is the number of basins, N_b^τ is the number of samples for basin b at timescale τ, and y_t^τ and ŷ_t^τ are the observed and predicted discharge values for the t-th time step of basin b at timescale τ. σ_b is the observed discharge variance of basin b over the whole training period, and ε is a small value that guarantees numeric stability. We predicted discharge in the same unit (mm h⁻¹) at all timescales, so that the daily prediction is compared with an average across 24 hours.
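A NumPy sketch of this loss, simplified to a single basin so that the outer average over B basins is omitted (array names and shapes are our own illustration):

```python
import numpy as np

def regularized_nse_loss(y_daily, yhat_daily, y_hourly, yhat_hourly,
                         sigma_b, eps=0.1):
    """Two-timescale NSE loss with a cross-timescale consistency penalty,
    sketched for one basin.

    y_daily/yhat_daily: observed/predicted daily discharge, shape (n_days,).
    y_hourly/yhat_hourly: observed/predicted hourly discharge, (n_days, 24).
    sigma_b: the basin's observed discharge variance over training.
    eps: small constant for numeric stability.
    """
    # Per-timescale basin-normalized squared errors, averaged over the
    # two timescales.
    nse_daily = np.mean((yhat_daily - y_daily) ** 2) / (sigma_b + eps) ** 2
    nse_hourly = np.mean((yhat_hourly - y_hourly) ** 2) / (sigma_b + eps) ** 2
    per_timescale = 0.5 * (nse_daily + nse_hourly)

    # Regularization: mean squared difference between each daily prediction
    # and the mean of that day's 24 hourly predictions.
    consistency = np.mean((yhat_daily - yhat_hourly.mean(axis=1)) ** 2)
    return per_timescale + consistency
```

When the hourly predictions average exactly to the daily ones, the penalty term vanishes, so a perfectly consistent, perfectly accurate model attains a loss of zero.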

Predicting More Timescales
While all our experiments so far considered daily and hourly input and output data, the MTS-LSTM architecture generalizes to other timescales. We showed this capability in a setup similar to the operational National Water Model, albeit in a reanalysis setting: we trained an MTS-LSTM to predict the last 18 hours (hourly), 10 days (three-hourly), and 30 days (six-hourly) of discharge, all within one MTS-LSTM. Table 2 details the sizes of the input and output sequences for each timescale. To achieve a sufficiently long look-back window without using exceedingly long input sequences, we additionally predicted one day of daily streamflow but did not evaluate these daily predictions. In this setup, the quality of predictions remained high across all target timescales, as our results in Sect. 3.5 show.
Related to the selection of target timescales, a practical note: the state transfer from coarser to finer timescales requires careful coordination of timescales and input sequence lengths. To illustrate, consider a toy setup that predicts two- and three-hourly discharge. Each timescale uses an input sequence length of 20 steps (i.e., 40 and 60 hours, respectively). The initial two-hourly LSTM state should then be the (60 − 40)/3 = 6.67th three-hourly state; in other words, the step widths are asynchronous at the point in time where the LSTM branches split. Of course, one could simply select the 6th or 7th step, but then either two hours remain unprocessed or one hour is processed twice (once in the three-hourly and once in the two-hourly LSTM branch). Instead, we suggest selecting sequence lengths such that the LSTM splits at points where the timescales' steps align. In the above example, sequence lengths of 20 (three-hourly) and 21 (two-hourly) would align correctly: (60 − 42)/3 = 6.
Table 2. Input sequence length and prediction window for each predicted timescale. The daily input merely acts as a means to extend the look-back window to a full year without generating overly long input sequences; we do not evaluate the daily predictions.
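This sequence-length bookkeeping is easy to get wrong, so it can help to check it programmatically. The following small helper (illustrative code, not from the paper's implementation) returns the coarse-timescale step at which the fine branch splits off, or None if the branch point does not align:

```python
def split_step(seq_len_coarse, step_coarse, seq_len_fine, step_fine):
    """Coarse-timescale step index at which the fine branch splits off.

    step_*: timescale widths in hours; seq_len_*: number of input steps
    at each timescale. Returns None if the step widths are asynchronous
    at the branch point.
    """
    lookback_coarse = seq_len_coarse * step_coarse   # total hours, coarse
    lookback_fine = seq_len_fine * step_fine         # total hours, fine
    overhang = lookback_coarse - lookback_fine
    if overhang < 0 or overhang % step_coarse != 0:
        return None  # branch point falls between two coarse steps
    return overhang // step_coarse
```

For the toy example above, 20 three-hourly and 20 two-hourly steps do not align ((60 − 40)/3 = 6.67), whereas 20 and 21 steps split cleanly at coarse step 6.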

Evaluation Criteria
Following previous studies that used the CAMELS dataset (Kratzert et al., 2020; Addor et al., 2018), we benchmarked our models with respect to the metrics and signatures listed in Table 3. Hydrologic signatures are statistics of hydrographs and thus not natively a comparison between predicted and observed values. For these signatures, we calculated the Pearson correlation coefficient between the signatures of observed and predicted discharge across the 516 CAMELS basins used in this study.
To quantify the cross-timescale consistency of the different models, we calculated the root mean squared deviation between daily predictions and hourly predictions aggregated to daily values:

RMSD_b = sqrt( (1/N_b^D) Σ_{t=1}^{N_b^D} (ŷ_t^D − (1/24) Σ_{h=1}^{24} ŷ_{t,h}^H)² ).

To account for randomness in LSTM weight initialization, all reported LSTM results are calculated on hydrographs that result from averaging the predictions of ten independently trained LSTM models.
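The consistency metric above can be computed as follows (NumPy sketch; array names are our own, and both arrays are assumed to be in the same unit):

```python
import numpy as np

def cross_timescale_rmsd(yhat_daily, yhat_hourly):
    """Root mean squared deviation between daily predictions and hourly
    predictions aggregated (averaged) to daily values.

    yhat_daily: shape (n_days,); yhat_hourly: shape (n_days, 24).
    """
    hourly_as_daily = yhat_hourly.mean(axis=1)   # aggregate each day's 24 hours
    return float(np.sqrt(np.mean((yhat_daily - hourly_as_daily) ** 2)))
```

A value of zero means the model's hourly predictions average exactly to its daily predictions on every day.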
Results
Table 4 compares the median test-period evaluation metrics of the MTS-LSTM and sMTS-LSTM architectures with the benchmark naive LSTM models and the process-based National Water Model (NWM). Figure 3 illustrates the cumulative distributions of per-basin NSE values of the different models. By this metric, all LSTM models, even the naive ones, outperform the NWM at both hourly and daily time steps. All models perform slightly worse on hourly predictions than on daily predictions. Accordingly, the results in Table 4 list differences in median NSE between the NWM and the LSTM models ranging from 0.11 to 0.16 (daily) and around 0.19 (hourly). Among the LSTM models, the differences in all metrics are comparatively small. The sMTS-LSTM achieves the best median NSE at both timescales. The naive models produce NSE values that are slightly lower, but the difference is not statistically significant at α = 0.001 (Wilcoxon signed-rank test: p = 2.7 × 10⁻³ (hourly), p = 8.7 × 10⁻² (daily)). The NSE distribution of the MTS-LSTM is significantly different from that of the sMTS-LSTM (p = 3.7 × 10⁻¹⁹ (hourly), p = 3.2 × 10⁻²⁹ (daily)), but the absolute difference is small. We leave it to the judgment of the reader to decide whether or not this difference is negligible from a hydrologic standpoint.
Table 3 (excerpt). Hydrologic signatures:
-High flow dur.: average duration of high-flow events (number of consecutive steps > 9 times the median daily/hourly flow); Clausen and Biggs (2000), Table 2 in Westerberg and McMillan (2015).
-High flow freq.: frequency of high-flow days/hours (> 9 times the median daily/hourly flow); Clausen and Biggs (2000), Table 2 in Westerberg and McMillan (2015).
-Low flow dur.: average duration of low-flow events (number of consecutive days/hours < 0.2 times the mean flow); Olden and Poff (2003), Table 2 in Westerberg and McMillan (2015).
-Low flow freq.: frequency of low-flow days/hours (< 0.2 times the mean daily/hourly flow); Olden and Poff (2003), Table 2 in Westerberg and McMillan (2015).

Benchmarking
Beyond NSE, all LSTM models exhibit lower peak-timing errors than the NWM. For hourly predictions, the median peak-timing error of the sMTS-LSTM is around three and a half hours, compared to more than six hours for the NWM. Naturally, the peak-timing errors for daily predictions are smaller, since the error is measured in days. (Figure: maps of per-basin peak-timing errors; note the different color scales for daily and hourly peak-timing error.) The spatial patterns of these errors are consistent with previous work that indicates there is improvement to be made in arid climate regimes (Kratzert et al., 2020; Frame et al., 2020). Similarly, the lower section of Table 4 lists correlations between hydrologic signatures of observed and predicted discharge: the NWM has the highest correlations for frequencies of high flows, low flows, and zero-flows, as well as for flow duration curve slopes. The peak-timing error shows similar spatial patterns; notably, the hourly peak-timing error also shows higher values along the southeastern coastline.

Cross-Timescale Consistency
Since the (non-naive) LSTM-based models jointly predict discharge at multiple timescales, we can incentivize predictions that are consistent across timescales. As described in Sect. 2.3.2, this happens through a regularized NSE loss function that penalizes inconsistencies.
To gauge the effectiveness of this regularization, we compare inconsistencies between timescales in the best benchmark model, the sMTS-LSTM, with and without regularization. As a baseline, we also compare against the cross-timescale inconsistencies from two independent naive LSTMs. Table 5 lists the mean, median, and maximum root mean squared deviation between the daily predictions and the hourly predictions when aggregated to daily values. Without regularization, simultaneous prediction with the sMTS-LSTM yields smaller inconsistencies than the naive approach (i.e., separate LSTMs at each timescale). Cross-timescale regularization further reduces inconsistencies and results in a median root mean squared deviation of 0.376 mm d⁻¹.
Besides reducing inconsistencies, the regularization term appears to have a small but beneficial influence on the overall skill of the daily predictions: With regularization, the median NSE increases slightly from 0.755 to 0.762. Judging from the hyperparameter tuning results, this appears to be a systematic improvement (rather than a fluke), because for both sMTS-LSTM and MTS-LSTM, at least the three best hyperparameter configurations use regularization.

Computational Efficiency
In addition to differences in accuracy, the LSTM architectures differ considerably in computational overhead, and therefore runtime. The naive hourly model must iterate through 4320 input sequence steps for each prediction it makes, whereas the MTS-LSTM and sMTS-LSTM only require 365 daily and 336 hourly steps. Consequently, where the naive hourly LSTM takes more than one day to train on an NVIDIA V100 GPU, the MTS-LSTM trains in just over 6 hours and the sMTS-LSTM in just over 8 hours.
Moreover, while training is a one-time effort, the runtime advantage is even larger during inference: The naive model requires around 9 hours runtime to predict 10 years of hourly data for 516 basins on an NVIDIA V100 GPU. This is about 40 times slower than the MTS-LSTM and sMTS-LSTM models, which both require around 13 minutes for the same task on the same hardware-and the multi-timescale models generate daily predictions in addition to hourly ones.

Per-Timescale Input Variables
While the MTS-LSTM yields slightly worse predictions than the sMTS-LSTM in our benchmark evaluation, it has the important ability to ingest different input variables at each timescale. The following two experiments show how harnessing this feature can increase the accuracy of the MTS-LSTM beyond its shared version. In the first experiment, we used two daily input forcing sets (Daymet and Maurer) and one hourly forcing set (NLDAS). In the second experiment, we additionally ingested the daily forcings into the hourly LSTM branch. In line with the results of Kratzert et al. (2020), we expect that additional hourly forcing datasets will have a further positive impact on hourly accuracy (beyond the improvement we see with additional daily forcings).

Predicting More Timescales
In the above experiments, we evaluated our models on daily and hourly predictions only. The MTS-LSTM architectures, however, generalize to other timescales.

Discussion and Conclusion
The purpose of this work was to generalize LSTM-based rainfall-runoff modeling to multiple timescales. This task is not as trivial as simply running different deep learning models at different timescales due to long look-back periods, associated memory leaks, and computational expense. With MTS-LSTM and sMTS-LSTM, we propose two LSTM-based rainfall-runoff models that make use of the specific (physical) nature of the simulation problem.
The results show that the advantage LSTMs have over process-based models on daily predictions extends to sub-daily predictions. An architecturally simple approach, what we here call the sMTS-LSTM, can process long-term dependencies at much smaller computational overhead than a naive hourly LSTM. Nevertheless, the high accuracy of the naive hourly model shows that LSTMs, even with a forget gate, can cope with very long input sequences. Additionally, the LSTMs produce hourly predictions that are almost as good as their daily predictions, while the NWM's accuracy drops significantly from daily to hourly predictions. A more extensive hyperparameter tuning might further increase the accuracy of the naive model; however, this is difficult to test because of the large computational expense of training LSTMs with high-resolution input sequences that are long enough to capture all hydrologically relevant history.
Table 6. Median validation metrics across all 516 basins for the MTS-LSTMs trained on multiple sets of forcings (multi-forcing A uses daily Daymet and Maurer forcings as additional inputs into the hourly models, multi-forcing B uses just NLDAS as inputs into the hourly models).
The high quality of the sMTS-LSTM model indicates that the "summary" state between daily and hourly LSTM components contains as much information as the naive model extracts from the full hourly input sequence. This is an intuitive assumption, since the informative content of high-resolution forcings diminishes as we look back farther.
The MTS-LSTM adds to this the ability to use distinct sets of input variables for each timescale. This is an important operational use-case, as forcings with high temporal resolution often have shorter lead times than low-resolution products. In addition, per-timescale input variables allow for ingesting input data with lower temporal resolution, such as remote sensing products, without interpolation. Besides these conceptual considerations, this feature boosts model accuracy beyond the best single-forcings model, as we can ingest multiple forcing products at each timescale. The results from ingesting mixed input resolutions into the hourly LSTM branch (hourly NLDAS and daily Daymet and Maurer) highlight the flexibility of machine learning models and show that daily forcing histories can contain enough information to support hourly predictions. Yet, there remain a number of steps to be taken before these models can be truly operational:
-First, like most academic rainfall-runoff models, our models operate in a reanalysis setting rather than performing actual lead-time forecasts.
-Second, operational models predict other hydrologic variables in addition to streamflow. Hence, multi-objective optimization of LSTM water models is an open branch of future research.
-Third, the implementation we use in this paper generates predictions at the more granular timescales only when it predicts a low-resolution step, at which point it predicts multiple fine-resolution steps to offset the lower frequency. For instance, at daily and hourly target timescales, the LSTM predicts 24 hourly steps once a day. In a reanalysis setting, this does not matter, but in a forecasting setting, one would need to generate hourly predictions more often than daily predictions. Note, however, that this is merely a restriction of our implementation, not an architectural one: by allowing variable-length input sequences, we could produce one hourly prediction each hour (rather than 24 each day).
This work represents one step toward developing operational hydrologic models based on deep learning. Overall, we believe that the MTS-LSTM is the most promising model for future use. It can integrate forcings of different temporal resolutions, generates accurate and consistent predictions at multiple timescales, and its computational overhead both during training and inference is far smaller than that of individual models per timescale.
Code and data availability. We trained all our machine learning models with the neuralhydrology Python library (https://github.com/neuralhydrology/neuralhydrology). All code to reproduce our models and analyses is available at https://github.com/gauchm/mts-lstm.

Appendix A: A Peak-Timing Error Metric
Especially for predictions at high temporal resolutions, it is important that a model not only captures the correct magnitude of a flood event, but also its timing. We measure this with a "peak-timing" metric that quantifies the lag between observed and predicted peaks. First, we heuristically extract the most important peaks from the observed time series: starting with all observed peaks, we discard those with a topographic prominence smaller than the observed standard deviation and subsequently remove the smallest remaining peak until all peaks are at least 100 steps apart. Then, we search for the largest predicted value within a window of one day (for hourly data) or three days (for daily data) around each observed peak. Given the pairs of observed and predicted peak times, the peak-timing error is their mean absolute difference.
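Given the set of observed peak indices, the matching-and-scoring step can be sketched as follows (plain Python; the peak extraction itself, with the prominence filtering described above, is omitted, and the function and argument names are our own):

```python
def peak_timing_error(obs_peaks, predicted, window):
    """Mean absolute time lag between observed peaks and the largest
    predicted value in a window around each peak.

    obs_peaks: indices of observed peaks in the time series.
    predicted: list of predicted discharge values.
    window: search radius in steps around each observed peak.
    """
    lags = []
    for p in obs_peaks:
        lo = max(0, p - window)
        hi = min(len(predicted), p + window + 1)
        segment = predicted[lo:hi]
        # Index of the largest prediction inside the search window.
        p_pred = lo + max(range(len(segment)), key=segment.__getitem__)
        lags.append(abs(p_pred - p))
    return sum(lags) / len(lags)
```

The error is reported in the native step unit, which is why hourly and daily peak-timing errors are not directly comparable numbers.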

Appendix B: Negative Results
Throughout our experiments, we found LSTMs to be a highly resilient architecture: We tried many different approaches to multi-timescale prediction, and a large fraction of them worked reasonably well, albeit not quite as well as the MTS-LSTM models we present in this paper. Nevertheless, we believe it makes sense to report some of these "negative" results-models that turned out not to work as well as the ones we finally settled on. Note, however, that the following reports are based on exploratory experiments with a few seeds and no extensive hyperparameter tuning.

B1 Delta Prediction
Extending our final MTS-LSTM, we tried to facilitate hourly predictions for the LSTM by ingesting the corresponding day's prediction into the hourly LSTM branch and only predicting each hour's deviation from the daily mean. If anything, however, this approach slightly deteriorated the prediction accuracy (and made the architecture more complicated).
We then experimented with predicting 24 weights for each day that distribute the daily streamflow across the 24 hours.
This would have yielded the elegant side-effect of guaranteed consistency across timescales: the mean hourly prediction would always be equal to the daily prediction. Yet, the results were clearly worse, and, as we show in Sect. 3.2, we can achieve near-consistent results by incentive (regularization) rather than enforcement. One possible reason for the reduced accuracy is that it may be harder for the LSTM to learn two different things-predicting hourly weights and daily streamflow-than to predict the same streamflow at two timescales.

B2 Cross-Timescale State Exchange
Inspired by residual neural networks (ResNets) that use so-called skip connections to bypass layers of computation and allow for a better flow of gradients during training (He et al., 2016), we devised a "ResNet-multi-timescale LSTM" where after each day, we combine the hidden state of the hourly LSTM branch with the hidden state of the daily branch into the initial daily and hourly hidden states for the next day. This way, we hoped, the daily LSTM branch might obtain more fine-grained information about the last few hours than it could infer from its daily inputs. While the daily NSE remained roughly the same, the hourly predictions in this approach became much worse.

B3 Multi-Timescale Input, Single-Timescale Output
For both sMTS-LSTM and MTS-LSTM, we experimented with ingesting both daily and hourly data into the models, but only training them to predict hourly discharge. In this setup, the models could fully focus on hourly predictions rather than trying to satisfy two possibly conflicting goals. Interestingly, however, the hourly-only predictions were worse than the combined multi-timescale predictions. One reason for this effect may be that the state summary that the daily LSTM branch passes to the hourly branch is worse, as the model obtains no training signal for its daily outputs.

Appendix C: Time-Continuous Prediction with ODE-LSTMs
As we already alluded to in the introduction, the combination of ordinary differential equations (ODEs) and LSTMs presents another approach to multi-timescale prediction-one that is more accurately characterized as time-continuous prediction (as opposed to our multi-objective learning approach that predicts at arbitrary but fixed timescales). The ODE-LSTM passes each input time step through a normal LSTM, but then post-processes the resulting hidden state with an ODE that has its own learned weights. In effect, the ODE component can adjust the LSTM's hidden state to the time step size. For operational streamflow prediction, we believe that our MTS-LSTM approach is better suited, since ODE-LSTMs cannot directly process different input variables for different target timescales. That said, from a scientific standpoint, we think that the idea of training a model that can then generalize to arbitrary granularity is of great interest (e.g., toward a more comprehensive and interpretable understanding of hydrologic processes).
Although the idea of time-continuous predictions seemed promising, in our exploratory experiments it was better to use an MTS-LSTM and aggregate (or dis-aggregate) its predictions to the desired target temporal resolution. Note that, due to the slow training of ODE-LSTMs, we carried out the following experiments on ten basins (training one model per basin; we used smaller hidden sizes, higher learning rates, and trained for more epochs to adjust the LSTMs to this setting). Table C1 gives examples for predictions at untrained timescales: Table section (A) shows the mean and median NSE values across the ten basins when we trained the models on daily and 12-hourly target data but then generated hourly predictions (for the MTS-LSTM, we obtained hourly predictions by uniformly spreading the 12-hourly prediction across 12 hours). Table section (B) shows the results when we trained on hourly and three-hourly target data but then predicted daily values (for the MTS-LSTM, we aggregated eight three-hourly predictions into one daily time step). These initial results show that, in almost all cases, it is better to (dis-)aggregate MTS-LSTM predictions than to use an ODE-LSTM.

Appendix D: Hyperparameter Tuning
For the multi-timescale models, we performed a two-stage tuning. In the first stage, we tuned architectural model parameters (regularization, hidden size, sequence length, dropout), training each configuration for 30 epochs at a batch size of 512 and a learning rate that starts at 0.001, reduces to 0.0005 after ten epochs, and to 0.0001 after 20 epochs. We selected the configuration with the best median NSE (we considered the average of daily and hourly median NSE) and, in the second stage, tuned its learning rate and batch size. Table D1 lists the parameter combinations we explored.
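The first-stage learning-rate schedule can be written as a simple epoch-to-rate mapping (a minimal sketch of the stated schedule; the function name is ours):

```python
def first_stage_lr(epoch):
    """Learning rate used in the first tuning stage: 0.001 for epochs
    0-9, 0.0005 for epochs 10-19, and 0.0001 from epoch 20 onward."""
    if epoch < 10:
        return 1e-3
    if epoch < 20:
        return 5e-4
    return 1e-4
```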