<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing with OASIS Tables v3.0 20080202//EN" "https://jats.nlm.nih.gov/nlm-dtd/publishing/3.0/journalpub-oasis3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:oasis="http://docs.oasis-open.org/ns/oasis-exchange/table" xml:lang="en" dtd-version="3.0">
  <front>
    <journal-meta><journal-id journal-id-type="publisher">hess</journal-id><journal-title-group>
    <journal-title>Hydrology and Earth System Sciences</journal-title>
    <abbrev-journal-title abbrev-type="publisher">HESS</abbrev-journal-title>
  </journal-title-group><issn>1607-7938</issn><issn pub-type="discussion">1812-2116</issn><publisher>
    <publisher-name>Copernicus GmbH (Copernicus Publications)</publisher-name>
    <publisher-loc>Göttingen, Germany</publisher-loc>
  </publisher></journal-meta>
    <article-meta>
      <title-group><article-title>Strategies for incorporating static features into global deep learning models</article-title><alt-title>Strategies for incorporating static features into global DL models</alt-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author" corresp="yes" rid="aff1">
          <name><surname>Liesch</surname><given-names>Tanja</given-names></name>
          <email>tanja.liesch@kit.edu</email>
        <ext-link>https://orcid.org/0000-0001-8648-5333</ext-link></contrib>
        <contrib contrib-type="author" corresp="no" rid="aff1">
          <name><surname>Ohmer</surname><given-names>Marc</given-names></name>
          
        <ext-link>https://orcid.org/0000-0002-2322-335X</ext-link></contrib>
        <aff id="aff1"><label>1</label><institution>Institute for Applied Geosciences (AGW), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany</institution>
        </aff>
      </contrib-group>
      <author-notes><corresp id="corr1">Correspondence: Tanja Liesch (tanja.liesch@kit.edu)</corresp></author-notes><pub-date><day>9</day><month>April</month><year>2026</year></pub-date>
      
      <volume>30</volume>
      <issue>7</issue>
      <fpage>1877</fpage><lpage>1890</lpage>
      <history>
        <date date-type="received"><day>19</day><month>August</month><year>2025</year></date>
           <date date-type="rev-request"><day>3</day><month>December</month><year>2025</year></date>
           <date date-type="rev-recd"><day>18</day><month>February</month><year>2026</year></date>
           <date date-type="accepted"><day>30</day><month>March</month><year>2026</year></date>
      </history>
      <permissions>
        <copyright-statement>Copyright: © 2026 Tanja Liesch and Marc Ohmer</copyright-statement>
        <copyright-year>2026</copyright-year>
      </permissions>
      <abstract><title>Abstract</title>

      <p id="d2e88">Global deep learning (DL) models are increasingly used in hydrology and hydrogeology to model time series data across multiple sites simultaneously. To account for site-specific behavior, static input features are commonly included in these models. Although the way static features are integrated into the model architecture can influence performance, this choice is seldom systematically evaluated. In this study, we systematically compare four strategies for incorporating static features into a global DL model for groundwater level prediction: two approaches commonly used in water science (repetition, concatenation) and two adopted from related disciplines (attention, conditional initialization). The models are evaluated using a large-scale groundwater dataset from Germany, tested under both in-sample (temporal generalization) and out-of-sample (spatiotemporal generalization) settings, and with both environmental and time-series-derived static features.</p>

      <p id="d2e91">Our results show that all integration methods perform rather similarly in terms of average metrics, though their performance varies across wells and settings. The repetition approach achieves slightly better overall performance but is computationally inefficient due to the redundant replication of static features. Therefore, it may be worthwhile to explore alternative integration strategies that can offer comparable results with lower computational cost. Importantly, model performance is influenced more strongly by the informativeness and process relevance of the static features than by the specific integration method. These findings underscore the importance of careful feature selection and provide practical guidance for the design of global deep learning models in hydrologic applications.</p>
  </abstract>
    </article-meta>
  </front>
<body>
      

<sec id="Ch1.S1" sec-type="intro">
  <label>1</label><title>Introduction</title>
      <p id="d2e103">In recent years, so-called global or regional deep learning models have gained increasing popularity in hydrology <xref ref-type="bibr" rid="bib1.bibx16 bib1.bibx17" id="paren.1"/> and related fields such as hydrogeology <xref ref-type="bibr" rid="bib1.bibx4 bib1.bibx12" id="paren.2"/>. In contrast to traditional local models, i.e., models trained on individual basins or wells, these approaches leverage multiple time series simultaneously within a single model. This has two key advantages: First, global models are, at least in theory, capable of generalizing to ungauged basins or sites (e.g., generating spatially continuous groundwater level surfaces). Second, it has been shown that they can achieve superior performance not only in terms of average metrics <xref ref-type="bibr" rid="bib1.bibx18" id="paren.3"/>, but also in predicting extreme events <xref ref-type="bibr" rid="bib1.bibx6 bib1.bibx18" id="paren.4"/>.</p>
      <p id="d2e118">To account for the fact that each unit (e.g., a basin or groundwater observation well) may respond differently to the same dynamic inputs (such as meteorological drivers), global models typically incorporate a set of static input features that describe the unique properties of each unit. Without such static features, a global model can only learn an average response to the dynamic inputs, which often results in reduced predictive performance <xref ref-type="bibr" rid="bib1.bibx17 bib1.bibx12" id="paren.5"/>. These static features commonly include characteristics that influence how the output variable (e.g., surface runoff or groundwater level) reacts to meteorological forcing, such as land use, topography, soil type, or geological and aquifer properties.</p>
      <p id="d2e124">From a technical perspective, it is not trivial how dynamic and static inputs should be jointly processed in a global deep learning model. While several options exist, this aspect has received little attention in hydrology and hydrogeology. Comparative studies are rare, and methodological discussions are mostly lacking. An exception is <xref ref-type="bibr" rid="bib1.bibx15" id="text.6"/>, who evaluated three static–dynamic fusion methods as part of their hyperparameter tuning. However, this was a secondary aspect of their study, which primarily focused on reconstructing daily runoff.</p>
      <p id="d2e130">Most existing studies adopt the simplest solution: replicating the static features at each time step to match the shape of the dynamic inputs, and then feeding both into a time series model such as a long short-term memory network (LSTM). One of the first to implement this approach were <xref ref-type="bibr" rid="bib1.bibx16" id="text.7"/>, and since it yielded good results (and a later attempt to use static features as inputs to the static input gate of the LSTM performed worse, <xref ref-type="bibr" rid="bib1.bibx17" id="altparen.8"/>), many subsequent studies likely followed this strategy without further experimentation <xref ref-type="bibr" rid="bib1.bibx19 bib1.bibx9" id="paren.9"/>.</p>
      <p id="d2e143">However, it seems intuitive that feeding time-invariant data directly into a time series model may not be optimal – or at the very least, not particularly efficient. An alternative approach, increasingly found in water-related studies, involves processing static features in a separate, non-sequential model component, such as a simple feed-forward network or multi-layer perceptron (MLP). The resulting representation is then concatenated with the output of the dynamic (time-series) model component – typically an LSTM – at a later stage <xref ref-type="bibr" rid="bib1.bibx12 bib1.bibx24 bib1.bibx23 bib1.bibx2" id="paren.10"/>.</p>
      <p id="d2e149">Despite this emerging diversity, no study in the water sciences has, to our knowledge, systematically compared different strategies for integrating static features into global deep learning models. In contrast, the integration of dynamic and static inputs has been widely recognized as a methodological challenge in other disciplines, where it has been shown that the choice of integration strategy can significantly influence model performance.</p>
      <p id="d2e152">For instance, <xref ref-type="bibr" rid="bib1.bibx32" id="text.11"/> combined static user profile data (e.g., demographic attributes, preferences) with sequential dynamic inputs (design decisions over time) to predict human design behavior. They explored different fusion strategies in a deep recurrent neural network, including early fusion (concatenation at the input level), mid-level fusion (after temporal encoding), and late fusion (combining representations just before output). <xref ref-type="bibr" rid="bib1.bibx25" id="text.12"/> explored blood glucose forecasting using both static patient information (e.g., age, disease duration, treatment plan) and dynamic time-series data (glucose levels). They used a modular architecture with parallel processing of static and dynamic inputs, followed by concatenation. <xref ref-type="bibr" rid="bib1.bibx27" id="text.13"/> investigated static feature integration strategies in the context of energy consumption forecasting, using short, regular time series (e.g., temperature, occupancy) alongside static building attributes (e.g., size, type, insulation). They systematically compared repetition, concatenation, conditional initialization, and feature-wise transformations of static inputs in RNNs. <xref ref-type="bibr" rid="bib1.bibx22" id="text.14"/> applied early fusion by concatenating static features with dynamic meteorological time series for crop yield prediction. Similarly, <xref ref-type="bibr" rid="bib1.bibx37" id="text.15"/> used concatenation of dense and LSTM-based encodings for predicting medical crowdfunding outcomes and found additional benefit from temporal attention in some cases.</p>
      <p id="d2e170">The goal of this study is to systematically compare different approaches for processing dynamic and static input data within a global deep learning model for groundwater level prediction. To this end, we reviewed relevant literature from various disciplines and incorporated both the commonly used methods in water sciences as well as additional approaches that have shown promising results in other fields.</p>
      <p id="d2e173">We used a subset of a recently published, machine-learning-ready long-term groundwater level dataset for Germany <xref ref-type="bibr" rid="bib1.bibx30 bib1.bibx29" id="paren.16"/>. The subset includes 667 groundwater wells, selected from the full dataset based on their Nash–Sutcliffe Efficiency (NSE) in an initial single-well benchmark model. Only wells with an <inline-formula><mml:math id="M1" display="inline"><mml:mrow><mml:mtext>NSE</mml:mtext><mml:mo>&gt;</mml:mo><mml:mn mathvariant="normal">0.7</mml:mn></mml:mrow></mml:math></inline-formula> were included, in order to exclude those clearly influenced by non-meteorological factors such as pumping. The dataset comprises groundwater level data, meteorological forcings and static environmental features for each well.</p>
      <p id="d2e191">In addition, we conducted a parallel analysis using so-called time series features, i.e., statistical descriptors derived directly from the groundwater level time series (e.g., periodicity, seasonality) in place of the environmental static features provided by the dataset. This decision was motivated by previous findings showing that environmental static features, unlike time series features, were not well suited for spatial generalization <xref ref-type="bibr" rid="bib1.bibx12" id="paren.17"/>.</p>
      <p id="d2e198">We tested four different approaches for integrating static features, each evaluated in both an in-sample (IS) setting (i.e., generalization in time) and an out-of-sample (OOS) setting (i.e., generalization in space and time), using both environmental and time-series-based static inputs. The aim was to investigate whether the method used to incorporate static features influences model performance, whether this effect differs between the IS and OOS settings, and whether the type and quality of static features impact the results.</p>
      <p id="d2e201">In both settings, we also included a baseline model without static features. While in the IS setting, static features may serve primarily as identifiers or allow the model to benefit from training data associated with similar wells, in the OOS setting, static features are crucial for spatial generalization. They provide the only mechanism by which the model can infer that wells with similar static properties should respond similarly to identical dynamic inputs. By comparing the performance of models with and without static features – especially in the OOS setting – we assess to what extent each integration strategy is able to leverage static input features to improve generalization.</p>
      <p id="d2e204">While time series features have shown superior performance in practice, they rely on past groundwater level observations and therefore do not allow for true spatial generalization to unmonitored sites. A trained model using these features can only be applied to wells where historical groundwater data are available – since such data are required to compute the features. In these cases, the wells could also be included directly in the training process, rendering spatial generalization unnecessary. In contrast, true spatial generalization in groundwater modeling is only possible when using spatially continuous environmental static features. In this study, we use time series features as a proxy for informative static inputs, because previous work has shown that commonly available environmental static features in groundwater applications are often not informative enough to support generalization <xref ref-type="bibr" rid="bib1.bibx12" id="paren.18"/>.</p>
</sec>
<sec id="Ch1.S2">
  <label>2</label><title>Data</title>
<sec id="Ch1.S2.SS1">
  <label>2.1</label><title>Groundwater level data</title>
      <p id="d2e225">We used a subset of a recently published, machine-learning-ready long-term groundwater level dataset for Germany <xref ref-type="bibr" rid="bib1.bibx30 bib1.bibx29" id="paren.19"/>. The subset comprises 667 groundwater wells, each with a total record length of 32 years of weekly data from 1991–2022. Wells were selected from the full dataset based on their Nash–Sutcliffe Efficiency (NSE) in an initial single-well benchmark model. Only wells with an <inline-formula><mml:math id="M2" display="inline"><mml:mrow><mml:mtext>NSE</mml:mtext><mml:mo>&gt;</mml:mo><mml:mn mathvariant="normal">0.7</mml:mn></mml:mrow></mml:math></inline-formula> were included, in order to exclude those that are clearly influenced by non-meteorological factors such as pumping. As a result, it can be assumed that the groundwater dynamics of the selected wells are primarily driven by meteorological forcing. The wells are spatially well distributed across Germany (Fig. <xref ref-type="fig" rid="F1"/>) and represent a wide range of hydrogeological and climatic conditions <xref ref-type="bibr" rid="bib1.bibx20" id="paren.20"><named-content content-type="pre">see provided time series plots in</named-content></xref>. To provide an illustrative overview of the variability in groundwater dynamics across wells, Fig. <xref ref-type="fig" rid="F2"/> shows example time series from representative monitoring sites. The selected examples highlight differences in seasonal amplitude, trend behavior, and long- and short-term variability, illustrating the heterogeneity that global models must capture.</p>
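The NSE selection criterion can be computed directly from observed and simulated series. A minimal NumPy sketch with illustrative values (the actual benchmark simulations are not reproduced here):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 minus the ratio of the squared model
    error to the squared deviation of the observations from their mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# A perfect simulation yields NSE = 1; predicting the mean yields NSE = 0.
obs = np.array([1.0, 2.0, 3.0, 4.0])
print(nse(obs, obs))                       # 1.0
print(nse(obs, np.full(4, obs.mean())))    # 0.0
```

Wells enter the subset only if this value exceeds 0.7 in the single-well benchmark.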

      <fig id="F1" specific-use="star"><label>Figure 1</label><caption><p id="d2e254">Location of the 667 selected groundwater wells, along with their mean groundwater level from 1991–2022.</p></caption>
          <graphic xlink:href="https://hess.copernicus.org/articles/30/1877/2026/hess-30-1877-2026-f01.png"/>

        </fig>

      <fig id="F2" specific-use="star"><label>Figure 2</label><caption><p id="d2e265">Example groundwater level time series from monitoring wells included in the dataset. The examples illustrate differences in seasonal amplitude, trends, and long- and short-term variability, highlighting the heterogeneity of groundwater dynamics across sites.</p></caption>
          <graphic xlink:href="https://hess.copernicus.org/articles/30/1877/2026/hess-30-1877-2026-f02.png"/>

        </fig>

</sec>
<sec id="Ch1.S2.SS2">
  <label>2.2</label><title>Meteorological forcings as dynamic input data</title>
      <p id="d2e282">The meteorological forcings were also obtained from the dataset mentioned above. They include weekly aggregated variables such as mean, maximum, and minimum air temperature, precipitation sum, and relative humidity, all derived from the HYRAS dataset provided by the German Meteorological Service (DWD) <xref ref-type="bibr" rid="bib1.bibx33" id="paren.21"/>. Additional variables – also from DWD – include actual, potential, and reference evapotranspiration (FAO), as well as soil moisture and soil temperature at 5 cm depth <xref ref-type="bibr" rid="bib1.bibx5" id="paren.22"/>. Further inputs, such as snow water equivalent, snowfall, snowmelt, and surface and subsurface runoff, were sourced from the ERA5-Land reanalysis dataset <xref ref-type="bibr" rid="bib1.bibx28" id="paren.23"/>.</p>
</sec>
<sec id="Ch1.S2.SS3">
  <label>2.3</label><title>Environmental features as static input data</title>
      <p id="d2e303">The selection of static environmental features was based on the dataset by <xref ref-type="bibr" rid="bib1.bibx30 bib1.bibx29" id="text.24"/>. These features include hydrogeological and soil characteristics (e.g., aquifer type, hydraulic conductivity, soil type, recharge), topographic attributes (elevation, slope, aspect, flow direction, etc.), and land use information.</p>
      <p id="d2e309">From the full set of static features provided in the dataset <xref ref-type="bibr" rid="bib1.bibx30 bib1.bibx29" id="paren.25"/>, variables related to well depth, screen characteristics, pumping, and pressure state were excluded, as these were sparsely available for the majority of monitoring wells. All categorical static features were label encoded for use in the machine learning models.</p>
</sec>
<sec id="Ch1.S2.SS4">
  <label>2.4</label><title>Time series features as static input data</title>
      <p id="d2e323">Time series features are quantitative metrics derived from groundwater level time series that describe specific aspects of their temporal dynamics. In this study, we adopt the feature definitions introduced by <xref ref-type="bibr" rid="bib1.bibx38" id="text.26"/> and also applied in <xref ref-type="bibr" rid="bib1.bibx12" id="text.27"/>. The resulting set consists of a redundancy-reduced selection of nine time series features, which has previously been used successfully to cluster large sets of groundwater wells based on their dynamic behavior <xref ref-type="bibr" rid="bib1.bibx38" id="paren.28"/>.</p>
      <p id="d2e335">All time series features were recomputed in this study using only information available within the respective training period. No data from the validation or test periods was used for feature derivation, thereby preventing information leakage. A complete list and detailed descriptions of the time series features are provided in Appendix A.</p>
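As an illustration of how such descriptors are derived from the training period alone, the following NumPy sketch computes two hypothetical examples (a lag-52 seasonal autocorrelation and a linear trend slope); the nine features actually used are those listed in Appendix A, not these:

```python
import numpy as np

def seasonal_autocorr(series, lag=52):
    """Illustrative seasonality descriptor: autocorrelation at a
    one-year lag for weekly data (not one of the nine actual features)."""
    x = np.asarray(series, float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def linear_trend(series):
    """Illustrative trend descriptor: least-squares slope per time step."""
    x = np.asarray(series, float)
    return float(np.polyfit(np.arange(len(x)), x, 1)[0])

# Features are computed on the training period only, to avoid leakage:
rng = np.random.default_rng(0)
t = np.arange(520)                                   # 10 years of weekly values
train = np.sin(2 * np.pi * t / 52) + 0.01 * t + 0.1 * rng.standard_normal(520)
print(seasonal_autocorr(train))   # high for a strongly seasonal series
print(linear_trend(train))        # recovers the imposed upward trend
```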
</sec>
</sec>
<sec id="Ch1.S3">
  <label>3</label><title>Methods</title>
<sec id="Ch1.S3.SS1">
  <label>3.1</label><title>Long Short-Term Memory network</title>
      <p id="d2e354">In all model setups, we used Long Short-Term Memory (LSTM) networks <xref ref-type="bibr" rid="bib1.bibx13" id="paren.29"/> to process the dynamic time series data. LSTMs are the most commonly applied machine learning architecture in water science; we therefore refrain from providing a detailed description here.</p>
</sec>
<sec id="Ch1.S3.SS2">
  <label>3.2</label><title>Incorporation of static input data</title>
      <p id="d2e368">We implemented four different approaches to incorporate static input data into the models (Fig. <xref ref-type="fig" rid="F3"/>). The first two represent the most widely used methods in water science to date, while the latter two were selected based on their promising performance in comparative studies from other disciplines.</p>

      <fig id="F3" specific-use="star"><label>Figure 3</label><caption><p id="d2e375">Model architectures used to incorporate static features into the global deep learning model. <inline-formula><mml:math id="M3" display="inline"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mtext>steps_in</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> denotes the length of the input sequences, <inline-formula><mml:math id="M4" display="inline"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mtext>dyn</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> the number of dynamic input features (<inline-formula><mml:math id="M5" display="inline"><mml:mrow><mml:msub><mml:mi>d</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi mathvariant="normal">…</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>d</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>), and <inline-formula><mml:math id="M6" display="inline"><mml:mrow><mml:msub><mml:mi>n</mml:mi><mml:mtext>stat</mml:mtext></mml:msub></mml:mrow></mml:math></inline-formula> the number of static features (<inline-formula><mml:math id="M7" display="inline"><mml:mrow><mml:msub><mml:mi>s</mml:mi><mml:mn mathvariant="normal">1</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:msub><mml:mi>s</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msub><mml:mo>,</mml:mo><mml:mi mathvariant="normal">…</mml:mi><mml:mo>,</mml:mo><mml:msub><mml:mi>s</mml:mi><mml:mi>n</mml:mi></mml:msub></mml:mrow></mml:math></inline-formula>). Numbers in brackets indicate the number of neurons in LSTM and dense layers. In the attention model, <inline-formula><mml:math id="M8" display="inline"><mml:mi>h</mml:mi></mml:math></inline-formula> represents the hidden state and <inline-formula><mml:math id="M9" display="inline"><mml:mi>a</mml:mi></mml:math></inline-formula> the attention weight. 
In the conditional model, <inline-formula><mml:math id="M10" display="inline"><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> and <inline-formula><mml:math id="M11" display="inline"><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> denote the hidden and cell states used to initialize the first LSTM layer in the dynamic branch.</p></caption>
          <graphic xlink:href="https://hess.copernicus.org/articles/30/1877/2026/hess-30-1877-2026-f03.png"/>

        </fig>

      <p id="d2e512"><list list-type="bullet">
            <list-item>

      <p id="d2e517"><italic>Repetition of static data</italic>: The static features are replicated at each time step and fed into the LSTM alongside the dynamic inputs, as first proposed in <xref ref-type="bibr" rid="bib1.bibx16" id="text.30"/>. In the following, we refer to this architecture as the repetition model (rep model).</p>
            </list-item>
            <list-item>

      <p id="d2e528"><italic>Concatenation of separately processed static data</italic>: The static features are processed separately using fully connected layers, and their output is then concatenated with the final hidden state of the LSTM. The architecture corresponds to that used in <xref ref-type="bibr" rid="bib1.bibx30" id="text.31"/> and has previously been applied in studies such as <xref ref-type="bibr" rid="bib1.bibx27" id="text.32"/> and <xref ref-type="bibr" rid="bib1.bibx22" id="text.33"/>. We refer to this architecture as the concatenation model (conc model).</p>
            </list-item>
            <list-item>

      <p id="d2e545"><italic>Attention mechanism on static features</italic>: In this approach, the static features are first processed by a fully connected layer, and the resulting output is used to generate attention weights, which are then applied to the LSTM hidden states to compute a weighted sum as the final representation. This model was proposed by <xref ref-type="bibr" rid="bib1.bibx22" id="text.34"/>, and was inspired by earlier work from <xref ref-type="bibr" rid="bib1.bibx7" id="text.35"/> and <xref ref-type="bibr" rid="bib1.bibx10" id="text.36"/>. We refer to this as the attention model (att model).</p>
            </list-item>
            <list-item>

      <p id="d2e562"><italic>Initialization of hidden and cell states using static features</italic>: Here, the static features are first processed by fully connected layers, and the resulting output is used to initialize the hidden and cell states of the LSTM. This allows the static context to directly influence the dynamic sequence processing from the beginning. This method was employed by <xref ref-type="bibr" rid="bib1.bibx27" id="text.37"/> and <xref ref-type="bibr" rid="bib1.bibx22" id="text.38"/>, and in a slightly modified form by <xref ref-type="bibr" rid="bib1.bibx37" id="text.39"/>. Following the terminology in those studies, we refer to this architecture as the conditional model (cond model).</p>
            </list-item>
          </list></p>
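The four variants differ mainly in where the static information enters the network. The repetition variant, for instance, operates purely at the input level and can be sketched in a few lines of NumPy (array sizes are placeholders matching the setup described below, not fixed requirements):

```python
import numpy as np

n_steps, n_dyn, n_stat = 52, 5, 9        # one year of weekly inputs
dynamic = np.random.rand(n_steps, n_dyn)  # meteorological forcings
static = np.random.rand(n_stat)           # one time-invariant vector per well

# Repetition: copy the static vector to every time step and concatenate
# it with the dynamic inputs along the feature axis.
static_rep = np.tile(static, (n_steps, 1))            # (52, 9)
lstm_input = np.concatenate([dynamic, static_rep], axis=1)
print(lstm_input.shape)   # (52, 14) - fed to the LSTM as one sequence
```

The redundancy is evident: every row of `static_rep` is identical, yet all of them are processed by the recurrent layer.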
</sec>
<sec id="Ch1.S3.SS3">
  <label>3.3</label><title>Model setup</title>
<sec id="Ch1.S3.SS3.SSS1">
  <label>3.3.1</label><title>General settings and hyperparameters</title>
      <p id="d2e593">All models were implemented with a single LSTM layer of size 128 to ensure consistency across integration strategies. The repetition model was additionally evaluated with 256 units, as the replication of static features at each time step more than doubles the dimensionality of the dynamic input. Throughout the Results section, only the outcomes from the 256-unit configuration are presented, as they yielded slightly better performance than the 128-unit baseline.</p>
      <p id="d2e596">In all model architectures, every neural layer – except for the output layer – is followed by a dropout layer with a dropout rate of 0.3 for regularization. All models were trained using a batch size of 512 and a maximum of 20 epochs, combined with early stopping based on validation loss, with a patience of 5. A learning rate scheduling scheme was applied, targeting a final learning rate of 0.001. All models were trained using mean squared error (MSE) as the loss function.</p>
      <p id="d2e599">No extensive hyperparameter optimization was conducted in this study. Instead, a consistent baseline architecture and a common set of hyperparameters were applied across all integration strategies to ensure comparability. This design choice enables a controlled evaluation of the different strategies for incorporating static features, rather than maximizing absolute predictive performance for individual model configurations.</p>
      <p id="d2e602">The concatenation model includes a second model branch consisting of a dense layer with 128 neurons to process the static input features. The outputs of this branch are concatenated with the final hidden state of the LSTM branch, followed by a dense layer of size 256 and a final dense output layer with one neuron.</p>
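To make the fusion point concrete, the tensor shapes of the concatenation model can be traced in NumPy; the weight matrices below are random stand-ins (the trained model learns them, and dropout and biases are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)

h_T = rng.standard_normal(128)       # final LSTM hidden state (dynamic branch)
static = rng.standard_normal(9)      # static features for one well

# Static branch: one fully connected layer of 128 units.
W_s = rng.standard_normal((9, 128))
z_s = relu(static @ W_s)                              # (128,)

# Late fusion: concatenate both branches, then dense(256) -> dense(1).
fused = np.concatenate([h_T, z_s])                    # (256,)
W_f, W_out = rng.standard_normal((256, 256)), rng.standard_normal(256)
y = relu(fused @ W_f) @ W_out                         # scalar prediction
print(fused.shape)
```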
      <p id="d2e606">The attention model applies a static-driven attention mechanism in which attention weights over the LSTM outputs are computed from the static features, which are first passed through a dense layer of size 128. The attention mechanism uses another dense layer to compute attention scores of size equal to the number of time steps. The resulting attention-weighted representation of the LSTM outputs is then passed through a dense layer of size 256, followed by a dropout layer and a final linear output layer.</p>
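The static-driven attention step can likewise be traced shape-by-shape; the softmax normalization of the scores is our assumption of how scores become weights, and all weight matrices are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0.0)
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()

n_steps, H = 52, 128
lstm_out = rng.standard_normal((n_steps, H))   # LSTM hidden state at each step
static = rng.standard_normal(9)

# Dense(128) on the statics, then a second dense layer producing one
# attention score per time step; softmax turns scores into weights.
W1, W2 = rng.standard_normal((9, 128)), rng.standard_normal((128, n_steps))
a = softmax(relu(static @ W1) @ W2)            # (52,) attention weights

# The weighted sum over the LSTM outputs is the final representation.
context = a @ lstm_out                         # (128,)
print(a.sum(), context.shape)
```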
      <p id="d2e609">The conditional model features a second branch that processes the static features through a dense layer of size 128, followed by a dropout layer and a linear projection to a vector of size <inline-formula><mml:math id="M12" display="inline"><mml:mrow><mml:mn mathvariant="normal">2</mml:mn><mml:mi>H</mml:mi></mml:mrow></mml:math></inline-formula>, where <inline-formula><mml:math id="M13" display="inline"><mml:mi>H</mml:mi></mml:math></inline-formula> denotes the number of LSTM units. This vector is split into two parts of size <inline-formula><mml:math id="M14" display="inline"><mml:mi>H</mml:mi></mml:math></inline-formula>, which are used to initialize the hidden state <inline-formula><mml:math id="M15" display="inline"><mml:mrow><mml:msub><mml:mi>h</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> and cell state <inline-formula><mml:math id="M16" display="inline"><mml:mrow><mml:msub><mml:mi>c</mml:mi><mml:mn mathvariant="normal">0</mml:mn></mml:msub></mml:mrow></mml:math></inline-formula> of the first LSTM layer in the dynamic branch. To improve temporal abstraction, we added a second LSTM layer with 128 units following the first. The first LSTM layer is initialized using the static features (conditional setup), while the second LSTM summarizes the sequence into a single representation. We observed that this additional LSTM layer significantly improved model performance compared to a single-layer configuration.</p>
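The projection and splitting of the static encoding into initial states can be sketched as follows (random stand-in weights; <inline-formula><mml:math display="inline"><mml:mi>H</mml:mi></mml:math></inline-formula> = 128 as in the model, dropout omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
relu = lambda x: np.maximum(x, 0.0)

H = 128
static = rng.standard_normal(9)

# Static branch: dense(128), then a linear projection to a 2H vector.
W1, W2 = rng.standard_normal((9, 128)), rng.standard_normal((128, 2 * H))
z = relu(static @ W1) @ W2                     # (256,)

# Split into the initial hidden and cell states of the first LSTM layer,
# so the static context shapes sequence processing from step one.
h0, c0 = z[:H], z[H:]
print(h0.shape, c0.shape)   # (128,) (128,)
```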
      <p id="d2e658">An overview of all model architectures is provided in Fig. <xref ref-type="fig" rid="F3"/>.</p>
      <p id="d2e663">All models were evaluated on the final 10 years of the dataset, spanning the period from 2013 to 2022. The preceding years were used for training (1991–2007) and validation with early stopping (2008–2012). The input sequence length for the dynamic inputs was set to 52 weeks (i.e., one year) for all models. Performance metrics were computed based on the median predictions of an ensemble of 10 independently trained global models with identical architecture, data splits, and hyperparameters, differing only in their random weight initialization.</p>
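The ensemble aggregation amounts to a member-wise median across the ten runs, sketched here with placeholder predictions:

```python
import numpy as np

# Ten ensemble members differ only in their random weight initialization;
# the reported prediction is the member-wise median at each time step.
n_members, n_test_steps = 10, 520           # 10 test years of weekly values
preds = np.random.default_rng(3).standard_normal((n_members, n_test_steps))
ensemble_pred = np.median(preds, axis=0)    # (520,) - one series per well
print(ensemble_pred.shape)
```

Performance metrics are then computed on `ensemble_pred` rather than on any individual member.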
</sec>
<sec id="Ch1.S3.SS3.SSS2">
  <label>3.3.2</label><title>In-sample setting</title>
      <p id="d2e674">In the in-sample (IS) setting, the models are trained on the training data from all wells, and predictions are made for the designated test period. This setting represents a generalization in time, as the model learns from each well and is evaluated on future data from the same wells.</p>
</sec>
<sec id="Ch1.S3.SS3.SSS3">
  <label>3.3.3</label><title>Out-of-sample setting</title>
      <p id="d2e685">The out-of-sample (OOS) setting was implemented as a 10-fold cross-validation (CV). The 667 wells were randomly divided into ten folds; in each run, the model was trained on the training period data from nine folds, and the tenth fold was held out for testing – again using the same test period. This setup requires the model to generalize across both space and time, as it is evaluated on wells it has not seen during training.</p>
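The fold construction splits wells, not time steps; a NumPy sketch (the random seed and split utility are our choices, as the exact mechanism is not specified):

```python
import numpy as np

# 667 wells are randomly divided into ten folds; each model trains on the
# training-period data of nine folds and is tested on the held-out wells.
n_wells, n_folds = 667, 10
rng = np.random.default_rng(42)
well_ids = rng.permutation(n_wells)
folds = np.array_split(well_ids, n_folds)

for k, test_wells in enumerate(folds):
    train_wells = np.concatenate([f for i, f in enumerate(folds) if i != k])
    # train on `train_wells`, then predict the test period for `test_wells`
print([len(f) for f in folds])   # fold sizes of 66-67 wells each
```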
</sec>
</sec>
</sec>
<sec id="Ch1.S4">
  <label>4</label><title>Results and Discussion</title>
<sec id="Ch1.S4.SS1">
  <label>4.1</label><title>Environmental static features</title>
      <p id="d2e705">In the in-sample (IS) setting using static environmental features, the repetition model achieves the best performance across all error metrics, closely followed by the conditional and concatenation models (Table <xref ref-type="table" rid="T1"/>, Fig. <xref ref-type="fig" rid="F4"/>a). The median NSE is 0.81 for the repetition model, 0.80 for the conditional model, and 0.79 for the concatenation model. The attention model lags slightly behind, with a median NSE of 0.77. As expected, all models incorporating static features outperform the dynamic-only baseline, which yields a median NSE of 0.73.</p>
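For reference, the Nash–Sutcliffe Efficiency used throughout this section is one minus the ratio of the simulation error variance to the variance of the observations; NSE equals 1 for a perfect simulation and 0 for a simulation no better than the observed mean. A minimal NumPy implementation:

```python
import numpy as np

def nse(obs, sim):
    """Nash–Sutcliffe Efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

perfect = nse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])   # 1.0
mean_sim = nse([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])  # 0.0 (simulating the mean)
```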

<table-wrap id="T1" specific-use="star"><label>Table 1</label><caption><p id="d2e715">Overview of metrics for all model approaches: In-sample (IS), out-of-sample (OOS), environmental static features (env), time series static features (ts), models without static features (dynonly), repetition model (rep), concatenation model (conc), attention model (att) and conditional model (cond).</p></caption><oasis:table frame="topbot"><oasis:tgroup cols="13">
     <oasis:colspec colnum="1" colname="col1" align="left"/>
     <oasis:colspec colnum="2" colname="col2" align="right"/>
     <oasis:colspec colnum="3" colname="col3" align="right"/>
     <oasis:colspec colnum="4" colname="col4" align="right"/>
     <oasis:colspec colnum="5" colname="col5" align="right" colsep="1"/>
     <oasis:colspec colnum="6" colname="col6" align="right"/>
     <oasis:colspec colnum="7" colname="col7" align="right"/>
     <oasis:colspec colnum="8" colname="col8" align="right"/>
     <oasis:colspec colnum="9" colname="col9" align="right" colsep="1"/>
     <oasis:colspec colnum="10" colname="col10" align="right"/>
     <oasis:colspec colnum="11" colname="col11" align="right"/>
     <oasis:colspec colnum="12" colname="col12" align="right"/>
     <oasis:colspec colnum="13" colname="col13" align="right"/>
     <oasis:thead>
       <oasis:row>
         <oasis:entry colname="col1">Model Approach</oasis:entry>
         <oasis:entry rowsep="1" namest="col2" nameend="col5" align="center" colsep="1">NSE </oasis:entry>
         <oasis:entry rowsep="1" namest="col6" nameend="col9" align="center" colsep="1">RMSE </oasis:entry>
         <oasis:entry rowsep="1" namest="col10" nameend="col13" align="center"><inline-formula><mml:math id="M17" display="inline"><mml:mrow><mml:msup><mml:mi>R</mml:mi><mml:mn mathvariant="normal">2</mml:mn></mml:msup></mml:mrow></mml:math></inline-formula></oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1"/>
         <oasis:entry colname="col2">min</oasis:entry>
         <oasis:entry colname="col3">mean</oasis:entry>
         <oasis:entry colname="col4">med</oasis:entry>
         <oasis:entry colname="col5">max</oasis:entry>
         <oasis:entry colname="col6">min</oasis:entry>
         <oasis:entry colname="col7">mean</oasis:entry>
         <oasis:entry colname="col8">med</oasis:entry>
         <oasis:entry colname="col9">max</oasis:entry>
         <oasis:entry colname="col10">min</oasis:entry>
         <oasis:entry colname="col11">mean</oasis:entry>
         <oasis:entry colname="col12">med</oasis:entry>
         <oasis:entry colname="col13">max</oasis:entry>
       </oasis:row>
     </oasis:thead>
     <oasis:tbody>
       <oasis:row>
         <oasis:entry colname="col1">IS dynonly</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M18" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.32</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.73</oasis:entry>
         <oasis:entry colname="col5">0.92</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.19</oasis:entry>
         <oasis:entry colname="col9">6.27</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M19" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.76</oasis:entry>
         <oasis:entry colname="col13">0.94</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS env rep</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M20" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.18</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.79</oasis:entry>
         <oasis:entry colname="col4">0.81</oasis:entry>
         <oasis:entry colname="col5">0.94</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.25</oasis:entry>
         <oasis:entry colname="col8">0.16</oasis:entry>
         <oasis:entry colname="col9">5.05</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M21" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.82</oasis:entry>
         <oasis:entry colname="col12">0.84</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS env conc</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M22" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.00</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.77</oasis:entry>
         <oasis:entry colname="col4">0.79</oasis:entry>
         <oasis:entry colname="col5">0.93</oasis:entry>
         <oasis:entry colname="col6">0.06</oasis:entry>
         <oasis:entry colname="col7">0.27</oasis:entry>
         <oasis:entry colname="col8">0.17</oasis:entry>
         <oasis:entry colname="col9">4.84</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M23" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.80</oasis:entry>
         <oasis:entry colname="col12">0.82</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS env att</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M24" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.93</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.74</oasis:entry>
         <oasis:entry colname="col4">0.77</oasis:entry>
         <oasis:entry colname="col5">0.94</oasis:entry>
         <oasis:entry colname="col6">0.06</oasis:entry>
         <oasis:entry colname="col7">0.29</oasis:entry>
         <oasis:entry colname="col8">0.18</oasis:entry>
         <oasis:entry colname="col9">5.86</oasis:entry>
         <oasis:entry colname="col10">0.03</oasis:entry>
         <oasis:entry colname="col11">0.77</oasis:entry>
         <oasis:entry colname="col12">0.80</oasis:entry>
         <oasis:entry colname="col13">0.95</oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">IS env cond</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M25" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.50</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.77</oasis:entry>
         <oasis:entry colname="col4">0.80</oasis:entry>
         <oasis:entry colname="col5">0.93</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.27</oasis:entry>
         <oasis:entry colname="col8">0.17</oasis:entry>
         <oasis:entry colname="col9">5.41</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M26" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.80</oasis:entry>
         <oasis:entry colname="col12">0.82</oasis:entry>
         <oasis:entry colname="col13">0.95</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS dynonly</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M27" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.29</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.73</oasis:entry>
         <oasis:entry colname="col5">0.92</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.20</oasis:entry>
         <oasis:entry colname="col9">6.21</oasis:entry>
         <oasis:entry colname="col10">0.01</oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.76</oasis:entry>
         <oasis:entry colname="col13">0.94</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS env rep</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M28" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.57</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.74</oasis:entry>
         <oasis:entry colname="col5">0.93</oasis:entry>
         <oasis:entry colname="col6">0.06</oasis:entry>
         <oasis:entry colname="col7">0.31</oasis:entry>
         <oasis:entry colname="col8">0.19</oasis:entry>
         <oasis:entry colname="col9">6.29</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M29" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.76</oasis:entry>
         <oasis:entry colname="col13">0.94</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS env conc</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M30" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.85</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.74</oasis:entry>
         <oasis:entry colname="col5">0.93</oasis:entry>
         <oasis:entry colname="col6">0.06</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.19</oasis:entry>
         <oasis:entry colname="col9">6.17</oasis:entry>
         <oasis:entry colname="col10">0.01</oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.76</oasis:entry>
         <oasis:entry colname="col13">0.95</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS env att</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M31" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.17</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.73</oasis:entry>
         <oasis:entry colname="col5">0.94</oasis:entry>
         <oasis:entry colname="col6">0.06</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.19</oasis:entry>
         <oasis:entry colname="col9">6.74</oasis:entry>
         <oasis:entry colname="col10">0.02</oasis:entry>
         <oasis:entry colname="col11">0.73</oasis:entry>
         <oasis:entry colname="col12">0.76</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">OOS env cond</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M32" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.26</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.74</oasis:entry>
         <oasis:entry colname="col5">0.93</oasis:entry>
         <oasis:entry colname="col6">0.06</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.19</oasis:entry>
         <oasis:entry colname="col9">6.34</oasis:entry>
         <oasis:entry colname="col10">0.01</oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.77</oasis:entry>
         <oasis:entry colname="col13">0.95</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS dynonly</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M33" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.37</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.73</oasis:entry>
         <oasis:entry colname="col5">0.92</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.19</oasis:entry>
         <oasis:entry colname="col9">6.27</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M34" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.54</oasis:entry>
         <oasis:entry colname="col13">0.94</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS ts rep</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M35" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.28</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.78</oasis:entry>
         <oasis:entry colname="col4">0.81</oasis:entry>
         <oasis:entry colname="col5">0.95</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.26</oasis:entry>
         <oasis:entry colname="col8">0.16</oasis:entry>
         <oasis:entry colname="col9">5.16</oasis:entry>
         <oasis:entry colname="col10">0.05</oasis:entry>
         <oasis:entry colname="col11">0.63</oasis:entry>
         <oasis:entry colname="col12">0.81</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS ts conc</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M36" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">0.64</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.78</oasis:entry>
         <oasis:entry colname="col4">0.81</oasis:entry>
         <oasis:entry colname="col5">0.96</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.26</oasis:entry>
         <oasis:entry colname="col8">0.16</oasis:entry>
         <oasis:entry colname="col9">4.79</oasis:entry>
         <oasis:entry colname="col10">0.03</oasis:entry>
         <oasis:entry colname="col11">0.66</oasis:entry>
         <oasis:entry colname="col12">0.82</oasis:entry>
         <oasis:entry colname="col13">0.97</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">IS ts att</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M37" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">1.78</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.76</oasis:entry>
         <oasis:entry colname="col4">0.80</oasis:entry>
         <oasis:entry colname="col5">0.96</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.28</oasis:entry>
         <oasis:entry colname="col8">0.17</oasis:entry>
         <oasis:entry colname="col9">5.70</oasis:entry>
         <oasis:entry colname="col10">0.02</oasis:entry>
         <oasis:entry colname="col11">0.67</oasis:entry>
         <oasis:entry colname="col12">0.79</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">IS ts cond</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M38" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">0.25</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.77</oasis:entry>
         <oasis:entry colname="col4">0.79</oasis:entry>
         <oasis:entry colname="col5">0.95</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.27</oasis:entry>
         <oasis:entry colname="col8">0.17</oasis:entry>
         <oasis:entry colname="col9">4.68</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M39" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.68</oasis:entry>
         <oasis:entry colname="col12">0.80</oasis:entry>
         <oasis:entry colname="col13">0.95</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS dynonly</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M40" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.29</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.69</oasis:entry>
         <oasis:entry colname="col4">0.73</oasis:entry>
         <oasis:entry colname="col5">0.92</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.32</oasis:entry>
         <oasis:entry colname="col8">0.20</oasis:entry>
         <oasis:entry colname="col9">6.21</oasis:entry>
         <oasis:entry colname="col10">0.01</oasis:entry>
         <oasis:entry colname="col11">0.72</oasis:entry>
         <oasis:entry colname="col12">0.76</oasis:entry>
         <oasis:entry colname="col13">0.94</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS ts rep</oasis:entry>
         <oasis:entry colname="col2">0.29</oasis:entry>
         <oasis:entry colname="col3">0.80</oasis:entry>
         <oasis:entry colname="col4">0.82</oasis:entry>
         <oasis:entry colname="col5">0.95</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.24</oasis:entry>
         <oasis:entry colname="col8">0.16</oasis:entry>
         <oasis:entry colname="col9">3.89</oasis:entry>
         <oasis:entry colname="col10"><inline-formula><mml:math id="M41" display="inline"><mml:mrow><mml:mo>&lt;</mml:mo><mml:mn mathvariant="normal">0.01</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col11">0.82</oasis:entry>
         <oasis:entry colname="col12">0.85</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS ts conc</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M42" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">0.74</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.77</oasis:entry>
         <oasis:entry colname="col4">0.80</oasis:entry>
         <oasis:entry colname="col5">0.96</oasis:entry>
         <oasis:entry colname="col6">0.05</oasis:entry>
         <oasis:entry colname="col7">0.27</oasis:entry>
         <oasis:entry colname="col8">0.16</oasis:entry>
         <oasis:entry colname="col9">5.18</oasis:entry>
         <oasis:entry colname="col10">0.01</oasis:entry>
         <oasis:entry colname="col11">0.81</oasis:entry>
         <oasis:entry colname="col12">0.83</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS ts att</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M43" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">2.16</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.75</oasis:entry>
         <oasis:entry colname="col4">0.78</oasis:entry>
         <oasis:entry colname="col5">0.96</oasis:entry>
         <oasis:entry colname="col6">0.02</oasis:entry>
         <oasis:entry colname="col7">0.25</oasis:entry>
         <oasis:entry colname="col8">0.36</oasis:entry>
         <oasis:entry colname="col9">8.06</oasis:entry>
         <oasis:entry colname="col10">0.03</oasis:entry>
         <oasis:entry colname="col11">0.78</oasis:entry>
         <oasis:entry colname="col12">0.81</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">OOS ts cond</oasis:entry>
         <oasis:entry colname="col2"><inline-formula><mml:math id="M44" display="inline"><mml:mrow><mml:mo>-</mml:mo><mml:mn mathvariant="normal">0.92</mml:mn></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3">0.77</oasis:entry>
         <oasis:entry colname="col4">0.79</oasis:entry>
         <oasis:entry colname="col5">0.94</oasis:entry>
         <oasis:entry colname="col6">0.04</oasis:entry>
         <oasis:entry colname="col7">0.25</oasis:entry>
         <oasis:entry colname="col8">0.34</oasis:entry>
         <oasis:entry colname="col9">5.99</oasis:entry>
         <oasis:entry colname="col10">0.03</oasis:entry>
         <oasis:entry colname="col11">0.79</oasis:entry>
         <oasis:entry colname="col12">0.82</oasis:entry>
         <oasis:entry colname="col13">0.96</oasis:entry>
       </oasis:row>
     </oasis:tbody>
   </oasis:tgroup></oasis:table></table-wrap>

      <fig id="F4" specific-use="star"><label>Figure 4</label><caption><p id="d2e1925">Comparison of the Nash–Sutcliffe Efficiency (NSE) across all approaches, shown as boxplots and cumulative distribution functions. Results are presented for the in-sample (IS) setting with environmental static features (env) <bold>(a)</bold>, the out-of-sample (OOS) setting with environmental static features <bold>(b)</bold>, the IS setting with time-series-derived static features (ts) <bold>(c)</bold>, and the OOS setting with time-series-derived static features <bold>(d)</bold>. The models are denoted as follows: dynonly (no static features), rep (repetition model), conc (concatenation model), att (attention model), and cond (conditional model).</p></caption>
          <graphic xlink:href="https://hess.copernicus.org/articles/30/1877/2026/hess-30-1877-2026-f04.png"/>

        </fig>

      <p id="d2e1947">A closer look at the results reveals that model performance often depends on the individual well. For the majority of wells, the results align with the overall trend: the repetition model performs best, followed by the conditional, concatenation, and attention models in that order. However, there are also wells that deviate from this pattern. This becomes evident when counting how many wells were best predicted by each model based on NSE, regardless of the margin.</p>
      <p id="d2e1950">Out of the 667 wells, 376 were best modeled using the repetition approach, followed by 109 wells for which the conditional model performed best. The concatenation model was optimal for 81 wells, and the attention model for 51 wells. Interestingly, 50 wells were best modeled by the dynamic-only (dynonly) model, suggesting that these wells did not benefit from the inclusion of static features.</p>
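The per-well comparison amounts to taking, for each well, the argmax of NSE across the five approaches, regardless of margin. A sketch with random placeholder scores (the actual per-well NSE values are not reproduced here):

```python
import numpy as np

models = ["rep", "cond", "conc", "att", "dynonly"]
rng = np.random.default_rng(0)
nse_scores = rng.random((667, len(models)))   # dummy (n_wells, n_models) NSE matrix

best = np.argmax(nse_scores, axis=1)          # index of best model per well
counts = {m: int(np.sum(best == i)) for i, m in enumerate(models)}
```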
      <p id="d2e1953">We did not analyze this aspect in detail, as deploying different architectures for different wells would likely be impractical, even if a relationship to time series dynamics could be established. Nonetheless, the finding is noteworthy, and future work could investigate whether this variation is coincidental or, as we suspect, related to the underlying temporal behavior of individual wells.</p>
      <p id="d2e1956">In the out-of-sample (OOS) setting, the performance of all models incorporating static features is markedly lower than in the IS setting, with median NSE values decreasing from 0.77–0.81 (IS) to 0.73–0.74 (OOS), as shown in Table <xref ref-type="table" rid="T1"/> and Fig. <xref ref-type="fig" rid="F4"/>b. The performance differences between the various approaches are minimal, and all models perform approximately on par with the dynamic-only baseline. This indicates that none of the models were able to effectively generalize in space using the available environmental static features.</p>
      <p id="d2e1963">The lower performance in the OOS setting, compared to the IS setting, can be attributed to three possible factors:</p>
      <p id="d2e1966"><list list-type="bullet">
            <list-item>

      <p id="d2e1971"><italic>Reduced training data due to cross-validation.</italic> In the 10-fold cross-validation setup, the amount of training data is reduced by 10 % in each fold. While this effect could theoretically be minimized using a leave-one-out cross-validation strategy – training the model on all but one well and testing on the remaining one – this would require <inline-formula><mml:math id="M45" display="inline"><mml:mi>n</mml:mi></mml:math></inline-formula> separate model runs (where <inline-formula><mml:math id="M46" display="inline"><mml:mi>n</mml:mi></mml:math></inline-formula> is the number of wells), making it computationally infeasible. Consequently, 10-fold CV remains a widely accepted and practical alternative.</p>
            </list-item>
            <list-item>

      <p id="d2e1993"><italic>Uncertainties in large-scale static environmental datasets.</italic> Environmental static features derived from large-scale datasets are subject to inherent uncertainties. Many hydrogeologically relevant properties influencing groundwater responses to meteorological inputs – such as hydraulic conductivity – are not measured directly at individual well locations. Instead, they are often interpolated from sparse observations, inferred from aquifer material classifications, or derived from regional-scale models. In addition, coarse spatial resolution may obscure local heterogeneity. These sources of uncertainty do not invalidate the use of such features, but they may limit their ability to accurately represent site-specific subsurface conditions in a global modeling framework.</p>
            </list-item>
            <list-item>

      <p id="d2e2001"><italic>Non-representative well selection with respect to static features.</italic> The set of wells used for training and testing may not be fully representative with respect to their static environmental feature characteristics. While the overall number of wells and their spatial distribution may be sufficient to capture general hydrogeological conditions, the selection of wells in this study was primarily driven by data availability and modeling considerations rather than by an explicit assessment of static feature representativeness. We did not systematically evaluate whether the selected wells span the full range of static feature variability present in the broader dataset. Consequently, certain combinations or extremes of static environmental characteristics may be underrepresented in the training data, potentially limiting the model’s ability to generalize to unseen wells.</p>
            </list-item>
          </list>The latter two points are also likely explanations for the observation that models using environmental static features do not outperform the model without static inputs. This finding is consistent with the results of <xref ref-type="bibr" rid="bib1.bibx12" id="text.40"/>, who reported similar outcomes using a much smaller dataset. Therefore, it appears increasingly likely that the limitation lies in the limited process relevance and inherent uncertainties of the static environmental features, rather than in the representativeness of the well selection.</p>
</sec>
<sec id="Ch1.S4.SS2">
  <label>4.2</label><title>Time series static features</title>
      <p id="d2e2020">As shown by <xref ref-type="bibr" rid="bib1.bibx12" id="text.41"/>, time-series features can lead to improved performance in the OOS setting. Based on this finding, we repeated all analyses using time-series features as static inputs, in order to evaluate whether and to what extent the performance of the different integration approaches changes when informative static features are used. In this context, “informative” refers to features that enable the models to genuinely generalize across different wells based on the information provided.</p>
      <p id="d2e2026">In the in-sample (IS) setting with time-series features as static inputs, the results are particularly noteworthy. While the performance of the repetition model does not improve (median NSE: 0.81 vs. 0.81 previously), the concatenation and attention models appear to benefit from the inclusion of informative static features (median NSE: 0.81, 0.80 vs. 0.79, 0.77 previously) (Table <xref ref-type="table" rid="T1"/>, Fig. <xref ref-type="fig" rid="F4"/>c). The conditional model performs slightly worse (median NSE: 0.79 vs. 0.80 previously).</p>
      <p id="d2e2033">Although the repetition model remains the top performer for 238 wells, the concatenation model catches up, now performing best for 198 wells. The attention model is best for 125 wells. The conditional model falls further behind, being optimal for only 60 wells, while the dynamic-only model remains best for 46 wells.</p>
      <p id="d2e2036">In the IS setting, models appear to benefit even from “meaningless” static inputs, likely by using them as a form of unique identifier <xref ref-type="bibr" rid="bib1.bibx12" id="paren.42"/>. However, when informative static features are provided – such as time-series-derived descriptors – the models gain the ability to generalize more effectively based on these inputs. This ability, however, is not equally pronounced across all integration strategies. While the concatenation and attention models show clear improvements with informative static features, the repetition model's performance remains largely unchanged, and the conditional model even shows a slight decline.</p>
      <p id="d2e2043">In the out-of-sample (OOS) setting, as expected, all models show improved performance compared to when environmental static features were used. All integration approaches now outperform the dynamic-only model across all error metrics. In terms of median NSE, the repetition model again performs best (0.82), followed by the concatenation (0.80), conditional (0.79), and attention (0.78) models. The dynamic-only model remains at a lower performance level, with a median NSE of 0.73 (Table <xref ref-type="table" rid="T1"/>, Fig. <xref ref-type="fig" rid="F4"/>d). Notably, the OOS results using time-series-based static features are only slightly lower than the in-sample (IS) results, and in the case of the repetition model even slightly higher. These results confirm that the models are able to extract relevant, i.e. physically interpretable, information from time-series-based static features and use it to generalize across space.</p>
      <p id="d2e2050">When counting the wells for which each model performs best, the repetition model takes the lead with 378 wells, followed by the concatenation model with 134 wells. The conditional and attention models lag behind, performing best for only 64 and 58 wells, respectively. The dynamic-only model comes last, performing best for just 33 wells.</p>
</sec>
<sec id="Ch1.S4.SS3">
  <label>4.3</label><title>Computational effort</title>
      <p id="d2e2061">In terms of computational effort, the different approaches exhibit noticeable differences. While we were unable to consistently track exact runtimes – due to parallel execution across machines with varying computational resources to reduce total runtime – distinct trends emerged. The attention model was the fastest overall, closely followed by the concatenation model. In contrast, both the repetition and conditional models were markedly slower, with the repetition model also demanding considerably more memory.</p>
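The memory gap between the strategies follows directly from the input shapes each variant feeds to the LSTM. As a minimal sketch in Keras (hypothetical layer sizes and input dimensions, not the study's actual hyperparameters), the two cost extremes – repetition and concatenation – can be expressed as:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

T, N_DYN, N_STAT = 52, 2, 9   # hypothetical: time steps, dynamic and static inputs

# Repetition: static features are tiled along the time axis and fed into the
# LSTM together with the dynamic inputs. The per-sample input tensor grows
# from (T, N_DYN) to (T, N_DYN + N_STAT), which is why this variant
# requires the most memory.
dyn = keras.Input(shape=(T, N_DYN))
stat = keras.Input(shape=(N_STAT,))
stat_rep = layers.RepeatVector(T)(stat)           # (T, N_STAT)
x = layers.Concatenate()([dyn, stat_rep])         # (T, N_DYN + N_STAT)
x = layers.LSTM(32)(x)
out = layers.Dense(1)(x)
repetition = keras.Model([dyn, stat], out)

# Concatenation: the LSTM sees only the dynamic inputs; the static vector is
# fused with the LSTM output just before the dense head, so the recurrent
# part stays small.
dyn2 = keras.Input(shape=(T, N_DYN))
stat2 = keras.Input(shape=(N_STAT,))
h = layers.LSTM(32)(dyn2)                         # (32,)
z = layers.Concatenate()([h, stat2])              # (32 + N_STAT,)
out2 = layers.Dense(1)(z)
concatenation = keras.Model([dyn2, stat2], out2)
```

The conditional variant (static features mapped onto the LSTM's initial states) and the attention variant would replace only the fusion step; the sketch illustrates why repetition scales its memory footprint with both sequence length and the number of static features.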
</sec>
<sec id="Ch1.S4.SS4">
  <label>4.4</label><title>Comparison with results from other disciplines</title>
      <p id="d2e2072">When comparing our findings to studies from other domains, several consistent patterns emerge regarding the value and integration of static features in deep learning models. However, the effectiveness of particular integration strategies also appears to be domain-specific and strongly dependent on the nature and informativeness of the static features.</p>
      <p id="d2e2075">In general, studies across domains agree that informative static features can substantially improve predictive performance and spatial generalization – but only when integrated using an appropriate strategy. Our results support this trend: when switching from environmental to time-series-based static features, some of the tested architectures showed clear improvements in both IS and OOS settings, while others remained unchanged or even declined.</p>
      <p id="d2e2078">In design science, <xref ref-type="bibr" rid="bib1.bibx32" id="text.43"/> also showed that informed integration methods outperform simpler schemes. Among these, mid-level fusion performed best, allowing the model to first encode temporal dependencies before incorporating static context. This strategy corresponds most closely to our conditional architecture, which also achieved strong performance, albeit not the highest in our experiments.</p>
      <p id="d2e2084">In medicine, <xref ref-type="bibr" rid="bib1.bibx25" id="text.44"/> found that parallel encoding and late fusion of static and dynamic features improved generalization across patients. This setup is conceptually similar to our concatenation architecture. They reported that this approach significantly improved performance, particularly for unseen patients, emphasizing that parallel processing followed by concatenation was a key to successful generalization. In our case, while the concatenation strategy also performed well under out-of-sample conditions – especially when static features carried meaningful information – the repetition model still outperformed it overall.</p>
      <p id="d2e2091"><xref ref-type="bibr" rid="bib1.bibx27" id="text.45"/> found that both concatenation and conditional initialization strategies achieved the best trade-off between predictive accuracy and computational efficiency. These findings are at least partially consistent with our results, in which both the conditional and concatenation models also achieved strong performance. In particular, the concatenation model represents a favorable compromise between predictive accuracy and computational efficiency.</p>
      <p id="d2e2096">In agricultural modeling, <xref ref-type="bibr" rid="bib1.bibx22" id="text.46"/> compared models with and without static inputs and found that early fusion via concatenation of static and dynamic features yielded the best performance in both cross-validation and out-of-sample prediction. Their findings support our conclusion that concatenation is effective when static inputs are informative.</p>
      <p id="d2e2102"><xref ref-type="bibr" rid="bib1.bibx37" id="text.47"/> found that their best performance came from concatenating static and dynamic representations, with some benefit from temporal attention – which aligns with our finding that attention-based models can benefit from informative static features, though in our case, attention still slightly trailed behind concatenation and repetition.</p>
      <p id="d2e2107">Taken together, these studies confirm that the best-performing integration method depends not only on model architecture, but also on the quality and informativeness of the static features. Our work adds to this understanding by systematically evaluating several approaches in a large-scale groundwater modeling context and confirming that when static features are informative, concatenation offers an effective and efficient compromise, though with slightly lower predictive performance compared to the repetition model.</p>
</sec>
</sec>
<sec id="Ch1.S5" sec-type="conclusions">
  <label>5</label><title>Conclusions</title>
      <p id="d2e2120">To address the central research question – to what extent does the choice of integration strategy for static features influence the performance of global deep learning models for groundwater level prediction? – the answer is: it depends.</p>
      <p id="d2e2123">First, the answer depends on whether the task involves an in-sample (IS) setting, i.e., generalization in time only, or an out-of-sample (OOS) setting, which requires generalization in both space and time. Second, it depends on the type and informativeness of the available static features.</p>
      <p id="d2e2126">If static features provide only limited predictive information for the target variable – as was observed for the large-scale environmental features considered here – they may still improve performance in the IS setting, likely because they help distinguish between wells <xref ref-type="bibr" rid="bib1.bibx12" id="paren.48"/>. Under these conditions, performance differences among the various integration approaches remain relatively small, provided that static features are included at all. The repetition approach performs best in this setting, likely because it feeds the static features directly into the LSTM, enabling the model to exploit site-specific information effectively. Nonetheless, the conditional and concatenation models offer viable alternatives: they achieve nearly the same performance while providing benefits such as faster training, greater stability, and lower memory usage.</p>
      <p id="d2e2132">None of the models demonstrated strong generalization in the OOS setting when relying solely on the environmental static features. We attribute this primarily to the inherent uncertainties and scale limitations associated with large-scale environmental datasets. In such cases, the inclusion of static features with limited process relevance for the prediction task provides little additional benefit, regardless of the integration method.</p>
      <p id="d2e2136">When more informative static features are used – in our case, time-series-derived features that directly summarize groundwater dynamics – performance improves across all model approaches, particularly in the OOS setting. While the repetition model continues to achieve the highest performance, evaluation metrics remain comparable across integration strategies.</p>
      <p id="d2e2139">Overall, we conclude that all tested approaches for incorporating static features into global deep learning models achieve comparable performance, with only subtle differences across integration strategies. No extensive hyperparameter optimization was conducted for any of the model variants. Instead, a consistent baseline architecture and a common set of hyperparameters were used to ensure a controlled and comparable evaluation of their relative performance. While more extensive tuning could potentially improve absolute performance levels, the primary objective of this study was to assess differences between integration strategies under consistent experimental conditions rather than to maximize predictive performance for individual models.</p>
      <p id="d2e2142">More importantly, our results indicate that the informativeness and process relevance of static features exert a stronger influence on model performance than the specific integration strategy, particularly for out-of-sample predictions. While this may appear intuitive in theory, it is often challenging to realize in practice – especially in groundwater modeling, where spatially consistent and process-relevant static data are not always readily available.</p>
      <p id="d2e2145">Finally, when comparing our results to those from other disciplines, we find strong cross-disciplinary support for the conclusion that the optimal approach depends heavily on the amount, diversity (e.g., in terms of time series dynamics), and informativeness of the available data – especially the static inputs. These characteristics vary not only between disciplines but often also across datasets within the same field.</p>
      <p id="d2e2148">As a final remark, we note that the relatively simple repetition approach achieved consistently strong results in our study. However, this method is not particularly efficient, as it involves replicating static features at every time step, which increases memory consumption and computational cost. Depending on the specific characteristics of a given dataset and the available resources, it may therefore be worthwhile to explore alternative integration strategies that offer a better balance between efficiency and performance.</p>
      <p id="d2e2152">While our findings are directly applicable to groundwater level modeling with larger datasets, they may also be relevant to related domains, such as surface water runoff prediction. However, they may not be universally transferable. A careful selection of both input features and integration strategy remains essential to achieve the best possible model performance.</p>
</sec>

      
      </body>
    <back><app-group>

<app id="App1.Ch1.S1">
  <label>Appendix A</label><title/>

<table-wrap id="TA1"><label>Table A1</label><caption><p id="d2e2170">Time-series-derived static features used in this study. Feature definitions follow <xref ref-type="bibr" rid="bib1.bibx38" id="text.49"/> and <xref ref-type="bibr" rid="bib1.bibx12" id="text.50"/>. All features were recomputed using only data from the respective training period to prevent information leakage.</p></caption><oasis:table frame="topbot"><oasis:tgroup cols="4">
     <oasis:colspec colnum="1" colname="col1" align="left"/>
     <oasis:colspec colnum="2" colname="col2" align="justify" colwidth="85pt"/>
     <oasis:colspec colnum="3" colname="col3" align="justify" colwidth="240pt"/>
     <oasis:colspec colnum="4" colname="col4" align="left"/>
     <oasis:thead>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">short name</oasis:entry>
         <oasis:entry colname="col2" align="left">feature name</oasis:entry>
         <oasis:entry colname="col3" align="left">description</oasis:entry>
         <oasis:entry colname="col4">citation</oasis:entry>
       </oasis:row>
     </oasis:thead>
     <oasis:tbody>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">RR</oasis:entry>
         <oasis:entry colname="col2" align="left">Range Ratio</oasis:entry>
         <oasis:entry colname="col3" align="left">Detection of superimposed long-periodic signals; sensitive to outliers; calculated as the ratio of the mean annual range to the overall range</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.51"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">Skew</oasis:entry>
         <oasis:entry colname="col2" align="left">Skewness</oasis:entry>
         <oasis:entry colname="col3" align="left">Boundedness, inhomogeneities, outliers, asymmetry of the probability distribution</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.52"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">P52</oasis:entry>
         <oasis:entry colname="col2" align="left">Annual Periodicity</oasis:entry>
         <oasis:entry colname="col3" align="left">Strength of the annual cycle; calculated by correlating (Pearson) the mean annual (52 weeks) periodicity with the complete time series</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.53"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">SDdiff</oasis:entry>
         <oasis:entry colname="col2" align="left">Standard Deviation of Differences</oasis:entry>
         <oasis:entry colname="col3" align="left">Flashiness, frequency, and rapidity of short-term changes; calculated as the standard deviation of first derivatives</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.54"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">LRec</oasis:entry>
         <oasis:entry colname="col2" align="left">Longest Recession</oasis:entry>
         <oasis:entry colname="col3" align="left">Long descending head sequences; longest sequence without rising head values</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.55"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">jumps</oasis:entry>
         <oasis:entry colname="col2" align="left">Jumps</oasis:entry>
         <oasis:entry colname="col3" align="left">Inhomogeneities/breaks; calculated as the absolute and standardized maximum change of the mean of two successive years</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.56"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">SB</oasis:entry>
         <oasis:entry colname="col2" align="left">Seasonal Behaviour</oasis:entry>
         <oasis:entry colname="col3" align="left">Timing of the annual maximum; agreement with expected average seasonality (minimum in September, maximum in March)</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx38" id="text.57"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row rowsep="1">
         <oasis:entry colname="col1">med01</oasis:entry>
         <oasis:entry colname="col2" align="left">Median <inline-formula><mml:math id="M47" display="inline"><mml:mrow><mml:mo>[</mml:mo><mml:mn mathvariant="normal">0</mml:mn><mml:mo>,</mml:mo><mml:mn mathvariant="normal">1</mml:mn><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col3" align="left">Boundedness; median after scaling to <inline-formula><mml:math id="M48" display="inline"><mml:mrow><mml:mo>[</mml:mo><mml:mn mathvariant="normal">0</mml:mn><mml:mo>,</mml:mo><mml:mn mathvariant="normal">1</mml:mn><mml:mo>]</mml:mo></mml:mrow></mml:math></inline-formula></oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx11" id="text.58"/>
                </oasis:entry>
       </oasis:row>
       <oasis:row>
         <oasis:entry colname="col1">HPD</oasis:entry>
         <oasis:entry colname="col2" align="left">High Pulse Duration</oasis:entry>
         <oasis:entry colname="col3" align="left">Average duration of heads exceeding the 80th percentile of non-exceedance</oasis:entry>
         <oasis:entry colname="col4">
                  <xref ref-type="bibr" rid="bib1.bibx34" id="text.59"/>
                </oasis:entry>
       </oasis:row>
     </oasis:tbody>
   </oasis:tgroup></oasis:table></table-wrap>
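Several of the features in Table A1 can be recomputed in a few lines of standard Python. The sketch below (assumptions: weekly sampling, a pandas Series with a DatetimeIndex, and illustrative synthetic data; the function name is hypothetical and the exact definitions follow the cited literature) reproduces three of the simpler indices – RR, SDdiff, and med01:

```python
import numpy as np
import pandas as pd

def ts_features(gwl: pd.Series) -> dict:
    """Illustrative recomputation of three Table A1 indices from a weekly
    groundwater-level series with a DatetimeIndex."""
    # RR: ratio of the mean annual range to the overall range
    annual_range = gwl.groupby(gwl.index.year).agg(lambda s: s.max() - s.min())
    rr = annual_range.mean() / (gwl.max() - gwl.min())
    # SDdiff: standard deviation of first differences ("flashiness")
    sd_diff = gwl.diff().std()
    # med01: median after scaling the series to [0, 1]
    scaled = (gwl - gwl.min()) / (gwl.max() - gwl.min())
    med01 = scaled.median()
    return {"RR": rr, "SDdiff": sd_diff, "med01": med01}

# Synthetic 10-year weekly series with an annual cycle plus noise; in the
# study, features are computed on the training period only to avoid leakage.
idx = pd.date_range("2000-01-03", periods=520, freq="W")
rng = np.random.default_rng(0)
series = pd.Series(
    np.sin(2 * np.pi * np.arange(520) / 52) + 0.1 * rng.standard_normal(520),
    index=idx,
)
feats = ts_features(series)
```

By construction, RR lies in (0, 1] (each annual range cannot exceed the overall range) and med01 in [0, 1], which makes these indices directly comparable across wells.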

</app>
  </app-group><notes notes-type="codedataavailability"><title>Code and data availability</title>

      <p id="d2e2411">The code used in this study is publicly available on Zenodo (<ext-link xlink:href="https://doi.org/10.5281/zenodo.19452437" ext-link-type="DOI">10.5281/zenodo.19452437</ext-link>, <xref ref-type="bibr" rid="bib1.bibx21" id="altparen.60"/>). All data used in this study are publicly available via Zenodo (<ext-link xlink:href="https://doi.org/10.5281/zenodo.16601180" ext-link-type="DOI">10.5281/zenodo.16601180</ext-link>; <xref ref-type="bibr" rid="bib1.bibx20" id="altparen.61"/>).</p>
  </notes><notes notes-type="authorcontribution"><title>Author contributions</title>

      <p id="d2e2429">TL conceived the study, designed the experiments, and carried out all computations. MO contributed to discussions on methodology and created the figures. TL wrote the original draft of the manuscript; MO reviewed and edited the text.</p>
  </notes><notes notes-type="competinginterests"><title>Competing interests</title>

      <p id="d2e2435">The contact author has declared that neither of the authors has any competing interests.</p>
  </notes><notes notes-type="disclaimer"><title>Disclaimer</title>

      <p id="d2e2441">Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. The authors bear the ultimate responsibility for providing appropriate place names. Views expressed in the text are those of the authors and do not necessarily reflect the views of the publisher.</p>
  </notes><ack><title>Acknowledgements</title><p id="d2e2449">All programming was done in Python version 3.12 <xref ref-type="bibr" rid="bib1.bibx35" id="paren.62"/> and the associated libraries, including NumPy <xref ref-type="bibr" rid="bib1.bibx8" id="paren.63"/>, Pandas <xref ref-type="bibr" rid="bib1.bibx26" id="paren.64"/>, Tensorflow <xref ref-type="bibr" rid="bib1.bibx1" id="paren.65"/>, Keras <xref ref-type="bibr" rid="bib1.bibx3" id="paren.66"/>, SciPy <xref ref-type="bibr" rid="bib1.bibx36" id="paren.67"/>, Scikit-learn <xref ref-type="bibr" rid="bib1.bibx31" id="paren.68"/> and Matplotlib <xref ref-type="bibr" rid="bib1.bibx14" id="paren.69"/>. The authors further acknowledge support by the state of Baden-Württemberg through bwHPC.</p></ack><notes notes-type="financialsupport"><title>Financial support</title>

      <p id="d2e2479">The article processing charges for this open-access publication were covered by the Karlsruhe Institute of Technology (KIT).</p>
  </notes><notes notes-type="reviewstatement"><title>Review statement</title>

      <p id="d2e2486">This paper was edited by Christa Kelleher and reviewed by two anonymous referees.</p>
  </notes><ref-list>
    <title>References</title>

      <ref id="bib1.bibx1"><label>Abadi et al.(2016)Abadi, Barham, Chen, Chen, Davis, Dean, Devin, Ghemawat, Irving, Isard et al.</label><mixed-citation> Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., and Zhen, X.: Tensorflow: A system for large-scale machine learning, in: 12th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 16), 265–283, 2016.</mixed-citation></ref>
      <ref id="bib1.bibx2"><label>Arsenault et al.(2023)Arsenault, Martel, Brunet, Brissette, and Mai</label><mixed-citation>Arsenault, R., Martel, J.-L., Brunet, F., Brissette, F., and Mai, J.: Continuous streamflow prediction in ungauged basins: long short-term memory neural networks clearly outperform traditional hydrological models, Hydrol. Earth Syst. Sci., 27, 139–157, <ext-link xlink:href="https://doi.org/10.5194/hess-27-139-2023" ext-link-type="DOI">10.5194/hess-27-139-2023</ext-link>, 2023.</mixed-citation></ref>
      <ref id="bib1.bibx3"><label>Chollet et al.(2015)</label><mixed-citation>Chollet, F. and contributors: Keras, <uri>https://github.com/fchollet/keras</uri> (last access: 7 April 2026), 2015.</mixed-citation></ref>
      <ref id="bib1.bibx4"><label>Clark et al.(2022)Clark, Pagendam, and Ryan</label><mixed-citation>Clark, S. R., Pagendam, D., and Ryan, L.: Forecasting Multiple Groundwater Time Series with Local and Global Deep Learning Networks, Int. J. Env. Res. Pub. He., 19, 5091, <ext-link xlink:href="https://doi.org/10.3390/ijerph19095091" ext-link-type="DOI">10.3390/ijerph19095091</ext-link>, 2022.</mixed-citation></ref>
      <ref id="bib1.bibx5"><label>DWD-CDC(2024)</label><mixed-citation>DWD-CDC: Daily grids of evapotranspiration, soil moisture, soil temperature, <uri>https://opendata.dwd.de/climate_environment/CDC/grids_germany/daily/</uri> (last access: 7 April 2026), 2024.</mixed-citation></ref>
      <ref id="bib1.bibx6"><label>Frame et al.(2022)Frame, Kratzert, Klotz, Gauch, Shalev, Gilon, Qualls, Gupta, and Nearing</label><mixed-citation>Frame, J. M., Kratzert, F., Klotz, D., Gauch, M., Shalev, G., Gilon, O., Qualls, L. M., Gupta, H. V., and Nearing, G. S.: Deep learning rainfall–runoff predictions of extreme events, Hydrol. Earth Syst. Sci., 26, 3377–3392, <ext-link xlink:href="https://doi.org/10.5194/hess-26-3377-2022" ext-link-type="DOI">10.5194/hess-26-3377-2022</ext-link>, 2022.</mixed-citation></ref>
      <ref id="bib1.bibx7"><label>Guo et al.(2019)Guo, Lin, and Antulov-Fantulin</label><mixed-citation>Guo, T., Lin, T., and Antulov-Fantulin, N.: Exploring interpretable LSTM neural networks over multi-variable data, in: International conference on machine learning, PMLR, 2494–2504, <uri>https://proceedings.mlr.press/v97/guo19b.html</uri> (last access: 7 April 2026), 2019.</mixed-citation></ref>
      <ref id="bib1.bibx8"><label>Harris et al.(2020)Harris, Millman, van der Walt, Gommers, Virtanen, Cournapeau, Wieser, Taylor, Berg, Smith, Kern, Picus, Hoyer, van Kerkwijk, Brett, Haldane, del Río, Wiebe, Peterson, Gérard-Marchant, Sheppard, Reddy, Weckesser, Abbasi, Gohlke, and Oliphant</label><mixed-citation>Harris, C. R., Millman, K. J., van der Walt, S. J., Gommers, R., Virtanen, P., Cournapeau, D., Wieser, E., Taylor, J., Berg, S., Smith, N. J., Kern, R., Picus, M., Hoyer, S., van Kerkwijk, M. H., Brett, M., Haldane, A., del Río, J. F., Wiebe, M., Peterson, P., Gérard-Marchant, P., Sheppard, K., Reddy, T., Weckesser, W., Abbasi, H., Gohlke, C., and Oliphant, T. E.: Array programming with NumPy, Nature, 585, 357–362, <ext-link xlink:href="https://doi.org/10.1038/s41586-020-2649-2" ext-link-type="DOI">10.1038/s41586-020-2649-2</ext-link>, 2020.</mixed-citation></ref>
      <ref id="bib1.bibx9"><label>Hashemi et al.(2022)Hashemi, Brigode, Garambois, and Javelle</label><mixed-citation>Hashemi, R., Brigode, P., Garambois, P.-A., and Javelle, P.: How can we benefit from regime information to make more effective use of long short-term memory (LSTM) runoff models?, Hydrol. Earth Syst. Sci., 26, 5793–5816, <ext-link xlink:href="https://doi.org/10.5194/hess-26-5793-2022" ext-link-type="DOI">10.5194/hess-26-5793-2022</ext-link>, 2022.</mixed-citation></ref>
      <ref id="bib1.bibx10"><label>He et al.(2016)He, Zhang, Ren, and Sun</label><mixed-citation>He, K., Zhang, X., Ren, S., and Sun, J.: Deep residual learning for image recognition, in: Proceedings of the IEEE conference on computer vision and pattern recognition, 770–778, <uri>https://openaccess.thecvf.com/content_cvpr_2016/papers/He_Deep_Residual_Learning_CVPR_2016_paper.pdf</uri> (last access: 7 April 2026), 2016.</mixed-citation></ref>
      <ref id="bib1.bibx11"><label>Heudorfer et al.(2019)Heudorfer, Haaf, Stahl, and Barther</label><mixed-citation>Heudorfer, B., Haaf, E., Stahl, K., and Barther, R.: Index-based characterization and quantification of groundwater dynamics, Water Resour. Res., 55, 5575–5592, <ext-link xlink:href="https://doi.org/10.1029/2018WR024418" ext-link-type="DOI">10.1029/2018WR024418</ext-link>, 2019.</mixed-citation></ref>
      <ref id="bib1.bibx12"><label>Heudorfer et al.(2024)Heudorfer, Liesch, and Broda</label><mixed-citation>Heudorfer, B., Liesch, T., and Broda, S.: On the challenges of global entity-aware deep learning models for groundwater level prediction, Hydrol. Earth Syst. Sci., 28, 525–543, <ext-link xlink:href="https://doi.org/10.5194/hess-28-525-2024" ext-link-type="DOI">10.5194/hess-28-525-2024</ext-link>, 2024.</mixed-citation></ref>
      <ref id="bib1.bibx13"><label>Hochreiter and Schmidhuber(1997)</label><mixed-citation> Hochreiter, S. and Schmidhuber, J.: Long short-term memory, Neural. Comput., 9, 1735–1780, 1997.</mixed-citation></ref>
      <ref id="bib1.bibx14"><label>Hunter(2007)</label><mixed-citation> Hunter, J. D.: Matplotlib: A 2D graphics environment, Comput. Sci. Eng., 9, 90–95, 2007.</mixed-citation></ref>
      <ref id="bib1.bibx15"><label>Kraft et al.(2025)Kraft, Schirmer, Aeberhard, Zappa, Seneviratne, and Gudmundsson</label><mixed-citation>Kraft, B., Schirmer, M., Aeberhard, W. H., Zappa, M., Seneviratne, S. I., and Gudmundsson, L.: CH-RUN: a deep-learning-based spatially contiguous runoff reconstruction for Switzerland, Hydrol. Earth Syst. Sci., 29, 1061–1082, <ext-link xlink:href="https://doi.org/10.5194/hess-29-1061-2025" ext-link-type="DOI">10.5194/hess-29-1061-2025</ext-link>, 2025.</mixed-citation></ref>
      <ref id="bib1.bibx16"><label>Kratzert et al.(2018)Kratzert, Klotz, Brenner, Schulz, and Herrnegger</label><mixed-citation>Kratzert, F., Klotz, D., Brenner, C., Schulz, K., and Herrnegger, M.: Rainfall–runoff modelling using Long Short-Term Memory (LSTM) networks, Hydrol. Earth Syst. Sci., 22, 6005–6022, <ext-link xlink:href="https://doi.org/10.5194/hess-22-6005-2018" ext-link-type="DOI">10.5194/hess-22-6005-2018</ext-link>, 2018.</mixed-citation></ref>
      <ref id="bib1.bibx17"><label>Kratzert et al.(2019)Kratzert, Klotz, Shalev, Klambauer, Hochreiter, and Nearing</label><mixed-citation>Kratzert, F., Klotz, D., Shalev, G., Klambauer, G., Hochreiter, S., and Nearing, G.: Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets, Hydrol. Earth Syst. Sci., 23, 5089–5110, <ext-link xlink:href="https://doi.org/10.5194/hess-23-5089-2019" ext-link-type="DOI">10.5194/hess-23-5089-2019</ext-link>, 2019.</mixed-citation></ref>
      <ref id="bib1.bibx18"><label>Kratzert et al.(2024)Kratzert, Gauch, Klotz, and Nearing</label><mixed-citation>Kratzert, F., Gauch, M., Klotz, D., and Nearing, G.: HESS Opinions: Never train a Long Short-Term Memory (LSTM) network on a single basin, Hydrol. Earth Syst. Sci., 28, 4187–4201, <ext-link xlink:href="https://doi.org/10.5194/hess-28-4187-2024" ext-link-type="DOI">10.5194/hess-28-4187-2024</ext-link>, 2024.</mixed-citation></ref>
      <ref id="bib1.bibx19"><label>Lees et al.(2021)Lees, Buechel, Anderson, Slater, Reece, Coxon, and Dadson</label><mixed-citation>Lees, T., Buechel, M., Anderson, B., Slater, L., Reece, S., Coxon, G., and Dadson, S. J.: Benchmarking data-driven rainfall–runoff models in Great Britain: a comparison of long short-term memory (LSTM)-based models with four lumped conceptual models, Hydrol. Earth Syst. Sci., 25, 5517–5534, <ext-link xlink:href="https://doi.org/10.5194/hess-25-5517-2021" ext-link-type="DOI">10.5194/hess-25-5517-2021</ext-link>, 2021.</mixed-citation></ref>
      <ref id="bib1.bibx20"><label>Liesch(2025)</label><mixed-citation>Liesch, T.: Groundwater level time series, meteorological forcings and static feature dataset for 667 wells in Germany, Zenodo [data set], <ext-link xlink:href="https://doi.org/10.5281/zenodo.16601180" ext-link-type="DOI">10.5281/zenodo.16601180</ext-link>, 2025.</mixed-citation></ref>
      <ref id="bib1.bibx21"><label>Liesch(2026)</label><mixed-citation>Liesch, T.: KITHydrogeology/dynamic_static: Strategies for Incorporating Static Features into Global Deep Learning Models – Code Release v1.0.0 (v1.0.0), Zenodo [code], <ext-link xlink:href="https://doi.org/10.5281/zenodo.19452437" ext-link-type="DOI">10.5281/zenodo.19452437</ext-link>, 2026.</mixed-citation></ref>
      <ref id="bib1.bibx22"><label>Liu et al.(2022)Liu, Yang, Mohammadi, Song, Bi, and Wang</label><mixed-citation>Liu, Q., Yang, M., Mohammadi, K., Song, D., Bi, J., and Wang, G.: Machine Learning Crop Yield Models Based on Meteorological Features and Comparison with a Process-Based Model, Artificial Intelligence for the Earth Systems, 1, e220002, <ext-link xlink:href="https://doi.org/10.1175/AIES-D-22-0002.1" ext-link-type="DOI">10.1175/AIES-D-22-0002.1</ext-link>, 2022.</mixed-citation></ref>
      <ref id="bib1.bibx23"><label>Martel et al.(2025)Martel, Arsenault, Turcotte, Castañeda-Gonzalez, Brissette, Armstrong, Mailhot, Pelletier-Dumont, Lachance-Cloutier, Rondeau-Genesse et al.</label><mixed-citation>Martel, J.-L., Arsenault, R., Turcotte, R., Castañeda-Gonzalez, M., Brissette, F., Armstrong, W., Mailhot, E., Pelletier-Dumont, J., Lachance-Cloutier, S., Rondeau-Genesse, G., and Caron, L.-P.: Exploring the ability of LSTM-based hydrological models to simulate streamflow time series for flood frequency analysis, Hydrol. Earth Syst. Sci., 29, 4951–4968, <ext-link xlink:href="https://doi.org/10.5194/hess-29-4951-2025" ext-link-type="DOI">10.5194/hess-29-4951-2025</ext-link>, 2025.</mixed-citation></ref>
      <ref id="bib1.bibx24"><label>Martel et al.(2025)Martel, Brissette, Arsenault, Turcotte, Castañeda-Gonzalez, Armstrong, Mailhot, Pelletier-Dumont, Rondeau-Genesse, and Caron</label><mixed-citation>Martel, J.-L., Brissette, F., Arsenault, R., Turcotte, R., Castañeda-Gonzalez, M., Armstrong, W., Mailhot, E., Pelletier-Dumont, J., Rondeau-Genesse, G., and Caron, L.-P.: Assessing the adequacy of traditional hydrological models for climate change impact studies: a case for long short-term memory (LSTM) neural networks, Hydrol. Earth Syst. Sci., 29, 2811–2836, <ext-link xlink:href="https://doi.org/10.5194/hess-29-2811-2025" ext-link-type="DOI">10.5194/hess-29-2811-2025</ext-link>, 2025.</mixed-citation></ref>
      <ref id="bib1.bibx25"><label>Marx et al.(2023)Marx, Di Stefano, Leutheuser, Chin-Cheong, Pfister, Burckhardt, Bachmann, and Vogt</label><mixed-citation>Marx, A., Di Stefano, F., Leutheuser, H., Chin-Cheong, K., Pfister, M., Burckhardt, M.-A., Bachmann, S., and Vogt, J. E.: Blood glucose forecasting from temporal and static information in children with T1D, Frontiers in Pediatrics, 11, 1296904, <ext-link xlink:href="https://doi.org/10.3389/fped.2023.1296904" ext-link-type="DOI">10.3389/fped.2023.1296904</ext-link>, 2023.</mixed-citation></ref>
      <ref id="bib1.bibx26"><label>McKinney(2010)</label><mixed-citation>McKinney, W.: Data Structures for Statistical Computing in Python, in: Proceedings of the 9th Python in Science Conference, edited by: van der Walt, S. and Millman, J., SciPy, Austin, Texas, 56–61, <ext-link xlink:href="https://doi.org/10.25080/Majora-92bf1922-00a" ext-link-type="DOI">10.25080/Majora-92bf1922-00a</ext-link>, 2010.</mixed-citation></ref>
      <ref id="bib1.bibx27"><label>Miebs et al.(2020)Miebs, Mochol-Grzelak, Karaszewski, and Bachorz</label><mixed-citation> Miebs, G., Mochol-Grzelak, M., Karaszewski, A., and Bachorz, R. A.: Efficient strategies of static features incorporation into the recurrent neural network, Neural Process. Lett., 51, 2301–2316, 2020.</mixed-citation></ref>
      <ref id="bib1.bibx28"><label>Muñoz-Sabater et al.(2021)Muñoz-Sabater, Dutra, Agustí-Panareda, Albergel, Arduini, Balsamo, Boussetta, Choulga, Harrigan, Hersbach, Martens, Miralles, Piles, Rodríguez-Fernández, Zsoter, Buontempo, and Thépaut</label><mixed-citation>Muñoz-Sabater, J., Dutra, E., Agustí-Panareda, A., Albergel, C., Arduini, G., Balsamo, G., Boussetta, S., Choulga, M., Harrigan, S., Hersbach, H., Martens, B., Miralles, D. G., Piles, M., Rodríguez-Fernández, N. J., Zsoter, E., Buontempo, C., and Thépaut, J.-N.: ERA5-Land: a state-of-the-art global reanalysis dataset for land applications, Earth Syst. Sci. Data, 13, 4349–4383, <ext-link xlink:href="https://doi.org/10.5194/essd-13-4349-2021" ext-link-type="DOI">10.5194/essd-13-4349-2021</ext-link>, 2021.</mixed-citation></ref>
      <ref id="bib1.bibx29"><label>Ohmer et al.(2025)Ohmer, Liesch, Habbel, Heudorfer, Gomez, Clos, Nölscher, and Broda</label><mixed-citation>Ohmer, M., Liesch, T., Habbel, B., Heudorfer, B., Gomez, M., Clos, P., Nölscher, M., and Broda, S.: GEMS-GER: A Machine Learning Benchmark Dataset of Long-Term Groundwater Levels in Germany with Meteorological Forcings and Site-Specific Environmental Features, Zenodo [data set], <ext-link xlink:href="https://doi.org/10.5281/zenodo.16736908" ext-link-type="DOI">10.5281/zenodo.16736908</ext-link>, 2025.</mixed-citation></ref>
      <ref id="bib1.bibx30"><label>Ohmer et al.(2026)Ohmer, Liesch, Habbel, Heudorfer, Gomez, Clos, Nölscher, and Broda</label><mixed-citation>Ohmer, M., Liesch, T., Habbel, B., Heudorfer, B., Gomez, M., Clos, P., Nölscher, M., and Broda, S.: GEMS-GER: a machine learning benchmark dataset of long-term groundwater levels in Germany with meteorological forcings and site-specific environmental features, Earth Syst. Sci. Data, 18, 77–95, <ext-link xlink:href="https://doi.org/10.5194/essd-18-77-2026" ext-link-type="DOI">10.5194/essd-18-77-2026</ext-link>, 2026.</mixed-citation></ref>
      <ref id="bib1.bibx31"><label>Pedregosa et al.(2011)Pedregosa, Varoquaux, Gramfort, Michel, Thirion, Grisel, Blondel, Prettenhofer, Weiss, Dubourg et al.</label><mixed-citation>Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., and Cournapeau, D.: Scikit-learn: Machine learning in Python, J. Mach. Learn. Res., 12, 2825–2830, 2011.</mixed-citation></ref>
      <ref id="bib1.bibx32"><label>Rahman et al.(2020)Rahman, Yuan, Xie, and Sha</label><mixed-citation>Rahman, M. H., Yuan, S., Xie, C., and Sha, Z.: Predicting human design decisions with deep recurrent neural network combining static and dynamic data, Design Science, 6, e15, <ext-link xlink:href="https://doi.org/10.1017/dsj.2020.12" ext-link-type="DOI">10.1017/dsj.2020.12</ext-link>, 2020.</mixed-citation></ref>
      <ref id="bib1.bibx33"><label>Razafimaharo et al.(2020)Razafimaharo, Krähenmann, Höpp, Rauthe, and Deutschländer</label><mixed-citation>Razafimaharo, C., Krähenmann, S., Höpp, S., Rauthe, M., and Deutschländer, T.: New high-resolution gridded dataset of daily mean, minimum, and maximum temperature and relative humidity for Central Europe (HYRAS), Theor. Appl. Climatol., 142, 1531–1553, <ext-link xlink:href="https://doi.org/10.1007/s00704-020-03388-w" ext-link-type="DOI">10.1007/s00704-020-03388-w</ext-link>, 2020.</mixed-citation></ref>
      <ref id="bib1.bibx34"><label>Richter et al.(1996)Richter, Baumgartner, Powell, and Braun</label><mixed-citation>Richter, B., Baumgartner, J., Powell, J., and Braun, D.: A method for assessing hydrologic alteration within ecosystems, Conserv. Biol., 10, 1163–1174, 1996.</mixed-citation></ref>
      <ref id="bib1.bibx35"><label>van Rossum(1995)</label><mixed-citation>van Rossum, G.: Python Programming Language, <uri>https://www.python.org/</uri> (last access: 7 April 2026), 1995.</mixed-citation></ref>
      <ref id="bib1.bibx36"><label>Virtanen et al.(2020)Virtanen, Gommers, Oliphant, Haberland, Reddy, Cournapeau, Burovski, Peterson, Weckesser, Bright, van der Walt, Brett, Wilson, Millman, Mayorov, Nelson, Jones, Kern, Larson, Carey, Polat, Feng, Moore, VanderPlas, Laxalde, Perktold, Cimrman, Henriksen, Quintero, Harris, Archibald, Ribeiro, Pedregosa, van Mulbregt, and SciPy 1.0 Contributors</label><mixed-citation>Virtanen, P., Gommers, R., Oliphant, T. E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S. J., Brett, M., Wilson, J., Millman, K. J., Mayorov, N., Nelson, A. R. J., Jones, E., Kern, R., Larson, E., Carey, C. J., Polat, İ., Feng, Y., Moore, E. W., VanderPlas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E. A., Harris, C. R., Archibald, A. M., Ribeiro, A. H., Pedregosa, F., van Mulbregt, P., and SciPy 1.0 Contributors: SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python, Nat. Methods, 17, 261–272, <ext-link xlink:href="https://doi.org/10.1038/s41592-019-0686-2" ext-link-type="DOI">10.1038/s41592-019-0686-2</ext-link>, 2020.</mixed-citation></ref>
      <ref id="bib1.bibx37"><label>Wang et al.(2022)Wang, Jin, Hu, Feng, and Cheng</label><mixed-citation>Wang, T., Jin, F., Hu, Y. J., Feng, L., and Cheng, Y.: Making Early and Accurate Deep Learning Predictions to Help Disadvantaged Individuals in Medical Crowdfunding, Prod. Oper. Manag., 10591478241231846, <ext-link xlink:href="https://doi.org/10.1177/10591478241231846" ext-link-type="DOI">10.1177/10591478241231846</ext-link>, 2022.</mixed-citation></ref>
      <ref id="bib1.bibx38"><label>Wunsch et al.(2022)Wunsch, Liesch, and Broda</label><mixed-citation>Wunsch, A., Liesch, T., and Broda, S.: Feature-based Groundwater Hydrograph Clustering Using Unsupervised Self-Organizing Map-Ensembles, Water Resour. Manag., 36, 39–54, <ext-link xlink:href="https://doi.org/10.1007/s11269-021-03006-y" ext-link-type="DOI">10.1007/s11269-021-03006-y</ext-link>, 2022.</mixed-citation></ref>

  </ref-list></back>
</article>
