Intersecting Near-Real Time Fluvial and Pluvial Inundation Estimates with Sociodemographic Vulnerability to Quantify a Household Flood Impact Index

Abstract. Increased interest in combining compound flood hazards and social vulnerability has driven recent advances in flood impact mapping. However, current methods to estimate event-specific compound flooding at the household level require high-performance computing resources frequently not available to local stakeholders. Government and non-government agencies currently lack methods to repeatedly and rapidly create flood impact maps that incorporate local variability of both hazards and social vulnerability. We address this gap by developing a methodology to estimate a flood impact index at the household level in near-real time, utilizing high resolution elevation data to approximate event-specific inundation from both pluvial and fluvial sources in conjunction with a social vulnerability index. Our analysis uses the 2015 Memorial Day flood in Austin, Texas as a case study and proof of concept for our methodology. We show that 37% of the Census Block Groups in the study area experience flooding from only pluvial sources and are not identified in local or national flood hazard maps as being at risk. Furthermore, averaging hazard estimates to cartographic boundaries masks household variability, with 60% of the Census Block Groups in the study area having a coefficient of variation around the mean flood depth exceeding 50%. Comparing our pluvial flooding estimates to a 2D physics-based model, we classify household impact accurately for 92% of households. Our methodology can be used as a tool to create household compound flood impact maps that provide computationally efficient information to local stakeholders.

The use of high resolution terrain data in fluvial inundation mapping has been covered in previous work; we refer the reader to the GeoFlood publication (Zheng et al., 2018) and references therein. Since the novelty of our study lies in the integration of a pluvial flooding estimate and vulnerability in near-real time into this existing approach, we provide more background on these specific components.

Modeling Surface Water in Depressions
A variety of processes form depressions along different sections of alluvial plains, ranging from centimeters to kilometers in scale, and play a critical role in sediment deposition and water accumulation, suggesting the necessity of including such features in flood management and forecasting (Syvitski et al., 2012). Prior to the recent increase in availability of lidar data, depressions in coarser resolution DEMs (30 meters and coarser) were viewed as errors in the data collection process and were subsequently filled in or removed to ensure that water flowed continuously downstream (Li et al., 2011; Callaghan and Wickert, 2019). Flood-fill, breaching, carving, and combination algorithms modify the DEM by raising and/or lowering cells to create a depressionless surface (Jenson and Domingue, 1988; Martz and Garbrecht, 1999; Soille et al., 2003; Lindsay and Creed, 2005). Alternatives to modifying elevation data also exist, such as a least-cost drainage path algorithm that is able to pass through depressions (Metz et al., 2011). Regardless of the method used, these algorithms produce hydrologically connected elevation surfaces by ignoring or removing depressions in the DEM, discounting their significant hydrologic impact (Callaghan and Wickert, 2019). With lidar technology and the availability of high resolution DEMs (1-meter and finer), topographic analyses can incorporate existing depressions, both naturally occurring and anthropogenic.
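For illustration, the fill-style approaches discussed above can be sketched with a priority-queue flood fill. This is a minimal sketch, not the exact algorithms cited: cells are filled from the grid edge inward, and any cell raised by the fill is a depression cell that would be erased by a depressionless surface.

```python
import heapq
import numpy as np

def priority_flood_fill(dem):
    """Return a depressionless copy of `dem` using a priority-queue fill.
    Water is assumed to drain off the grid edge."""
    filled = dem.astype(float).copy()
    rows, cols = filled.shape
    visited = np.zeros_like(filled, dtype=bool)
    heap = []
    # Seed the queue with every edge cell (the drainage boundary).
    for r in range(rows):
        for c in range(cols):
            if r in (0, rows - 1) or c in (0, cols - 1):
                heapq.heappush(heap, (filled[r, c], r, c))
                visited[r, c] = True
    while heap:
        z, r, c = heapq.heappop(heap)  # lowest frontier cell first
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not visited[nr, nc]:
                visited[nr, nc] = True
                # Raise pit cells to the spill elevation of their outlet.
                filled[nr, nc] = max(filled[nr, nc], z)
                heapq.heappush(heap, (filled[nr, nc], nr, nc))
    return filled

dem = np.array([[3, 3, 3, 3],
                [3, 1, 2, 3],
                [3, 3, 3, 3]], dtype=float)
filled = priority_flood_fill(dem)
depressions = filled > dem  # cells raised by filling = depression cells
```

Comparing `filled` against the original DEM recovers exactly the depression cells that such algorithms remove.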

A variety of methods utilizing remote sensing and automation techniques can identify depressions. Identification methods typically begin by comparing a filled and unfilled DEM (i.e., a depressionless DEM and the original DEM) to identify areas that are different. From here, methodologies vary slightly in their ability to eliminate noise in data and to represent the complex nested hierarchy of depressions. Some methods utilize elevation profiles, simplified hierarchical trees (Wu and Lane, 2016), or filtering based on threshold variables for surface area, depth, or volume (de Carvalho Júnior et al., 2013).
For a domain of this size and complexity (e.g., an urban watershed), the algorithm chosen for this study is Fill-Spill-Merge, a mass-conserving approach that uses a network-based algorithm (Barnes et al., 2019a, b).
Fill-Spill-Merge utilizes a depression hierarchy and represents the topologic and topographic complexity of depressions across a landscape as a network. Sub-depressions can merge to form meta-depressions, and a depression hierarchy tree can selectively fill and breach depressions based on the volume of water in them. The Fill-Spill-Merge workflow is as follows: First, Fill-Spill-Merge calculates the depression hierarchy, flow directions, and label matrix needed to route water over the landscape. Second, water is routed to its lowest downslope pit, assigning it to the appropriate leaf in the hierarchy. Third, moving through each leaf, water that overflows from a depression is redistributed to siblings and parents within the hierarchy. Fourth, the algorithm determines the final depths based on whether the depression is completely filled, partially filled, or empty.
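The routing steps above can be caricatured with a toy depression hierarchy. The sketch below is a drastic simplification of the published algorithm (node capacities stand in for depression volumes, and the redistribution order is simplified); it is meant only to convey the leaf-to-sibling-to-parent routing, not to reproduce Barnes et al. (2019a).

```python
class Depression:
    """A node in a toy depression hierarchy: leaves are pits, internal
    nodes are meta-depressions holding the extra volume between their
    children's spill level and their own."""
    def __init__(self, capacity, children=()):
        self.capacity = capacity
        self.children = list(children)
        self.water = 0.0

def pour(node, volume):
    """Fill this node up to its capacity; return the overflow."""
    take = min(node.capacity - node.water, volume)
    node.water += take
    return volume - take

def route(node, leaf_inflows):
    """Steps 2-4 in miniature: water sinks to leaf pits, overflow is
    redistributed to siblings, then to the parent meta-depression;
    whatever escapes the root reaches the 'ocean'."""
    if not node.children:                 # leaf pit receives its runoff
        return pour(node, leaf_inflows.pop(0))
    excess = sum(route(child, leaf_inflows) for child in node.children)
    for sibling in node.children:         # siblings absorb overflow first
        excess = pour(sibling, excess)
    return pour(node, excess)             # then the meta-depression itself

# Two pits (capacity 1.0 each) nested in a meta-depression (extra 1.0):
a, b = Depression(1.0), Depression(1.0)
root = Depression(1.0, children=(a, b))
ocean = route(root, [2.0, 1.5])  # runoff volumes entering each pit
```

Here pit `a` overflows into its sibling `b`, both overflow into the meta-depression, and the remaining 0.5 units escape to the "ocean".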
The implementation of the depression hierarchy and the routing process between leaves, siblings, and parents makes this algorithm's computation time independent of the runoff depth, drastically increasing its computational speed at higher runoff values: compared to cell-by-cell algorithms, speedups range from a factor of 2,000 to 63,000 (Barnes et al., 2019a). Fill-Spill-Merge's ability to efficiently route water over a complex landscape is therefore ideal for determining the extent and depths of pluvial flood waters. While Fill-Spill-Merge was originally tested on coarse resolution DEMs (15-meter to 120-meter cell size), this analysis applies Fill-Spill-Merge to a higher resolution DEM (1-meter resolution).

Recent Compound Flooding Advancements
Recent advancements in the field of flood hazard mapping related to this study fall into two broad categories, both utilizing high resolution (5-meter horizontal resolution or better) elevation data: (1) large scale (e.g., global, national, regional) compound flood mapping efforts for multiple return periods (Bates et al., 2021) and (2) the acceleration of hydrodynamic models using advanced computing techniques (e.g., graphical processing units, or GPUs) and numerical weather forecasting (Ming et al., 2020). Both advancements have their advantages, including highlighting national and global spatial patterns of future flood hazards (Bates et al., 2021) and forecasting extreme events, in some cases with substantial lead time (e.g., producing results at 10-meter horizontal resolution within 2 hours) (Ming et al., 2020). However, the use of high-performance flood modeling technologies is still in its infancy, and hydrodynamic models remain burdened with massive data input requirements (Ming et al., 2020; Guo et al., 2021). Furthermore, the historical records required for some of these national models that are built on return periods simply do not exist for small and medium-sized channels, such as those in this study.

Adaptive capacity is the degree to which an individual or community is able to respond to or cope with changes quickly and easily (Smit and Wandel, 2006). Exposure and sensitivity characteristics reflect the likelihood of a system experiencing a specific event and the characteristics of the system that influence its response to said event. Exposure and sensitivity are influenced by variables including social, political, cultural, and economic conditions, which in turn influence and constrain adaptive capacity (Smit and Wandel, 2006). Understanding the interconnected relationships among exposure, sensitivity, and adaptive capacity is important to estimate the degree to which stakeholders can mitigate environmental hazards (Smit and Wandel, 2006). Social vulnerability, as seen by social scientists, serves as a proxy for a community's sensitivity. SVIs are therefore built on sociodemographic data and can incorporate multi-hazard exposure estimates for a final metric that represents a community's resilience (Smit and Wandel, 2006).
The original calculation and most frequently cited tool for estimating social vulnerability within the United States is the Social Vulnerability Index SoVI® (Cutter et al., 2003). SoVI® synthesizes 42 socio-economic and built environment variables to quantify social vulnerability to environmental hazards and generate a comparative metric that facilitates the examination of the differences between U.S. counties (Cutter et al., 2003). Since its inception, it has been revised numerous times (SoVI® 2010-2014) and reduced to 29 socio-economic variables. Since then, numerous social vulnerability indices, both global and regional, including those created by the United Nations Development Programme (UNDP, 2010) and the Centers for Disease Control and Prevention (CDC) (Flanagan et al., 2011), have been developed and widely used. Different constructs and variations of SVIs have different levels of predictive power, and therefore require fine-tuning for each specific use (Rufat et al., 2019). Both SoVI® and the CDC's SVI, two of the most commonly cited SVIs that specifically focus on the US, estimate social vulnerability at the county level. Due to the vulnerability heterogeneity that exists within counties, variance can go undetected, which can adversely affect vulnerable populations. With the onset of sociodemographic data available at resolutions higher than counties, researchers have applied similar methodologies to those by Cutter et al. (2013) at higher resolution boundaries.
Previous attempts have been made to disaggregate social vulnerability variables to a finer scale, such as individual tax parcels (Nelson et al., 2015). General methodologies follow the same core concept of using dasymetric mapping techniques, which utilize ancillary datasets to divide mapped areas into new but still relevant zones, such as tax parcels. This method is commonly used with cadastral data (land use/land cover data) to divide other geographic boundaries. Nelson et al. (2015) discuss using cadastral-informed selective disaggregation logic to both extract relevant social vulnerability variables from tax parcel layers and dissolve Census Block Group variables to produce a parcel level SVI estimate. Our analysis dissolves Census Block Group variables to residential parcels, but does not use a selective disaggregation logic. While geographic tax parcel data are widely available (e.g., parcel boundaries), some associated variables (e.g., housing type, property value, gross rent, etc.) are not consistently reported across counties, regions, and states. Therefore, for vulnerability uniformity purposes, this analysis relies on Census Block Group sociodemographic data dissolved to residential parcels.

Definitions of risk diverge: some quantify risk as expected monetary losses (Tsakiris, 2014), while others are concerned with the probability of a disaster causing harm (Kron, 2005). These diverging definitions stem from varying uses and understandings of the principal components of risk, including exposure, hazard, vulnerability, and impact. This study uses the latter definition, defining risk as a probability, as the former definition can be misleading in the context of social vulnerability for this study (i.e., monetary risk might highlight more affluent/wealthy residents who are, in theory, less vulnerable). Exposure is broadly accepted to be the inventory or physical count of elements in an area where a hazard occurs, including the number of people, buildings, cultural sites, etc. (Cardona et al., 2012).

The definition of a hazard is where researchers begin to diverge. The IPCC defines a hazard as a possible, future occurrence of a natural or human-induced physical event that may have adverse effects on vulnerable and exposed elements (Cardona et al., 2012). This implies a probability component to a hazard, as it examines future possible occurrences. However, an alternative definition, used by the United Nations International Strategy for Disaster Reduction (ISDR), defines a hazard as a potentially damaging physical event, phenomenon, or human activity that may cause the loss of life, injury, property damage, social and economic disruption, or environmental degradation (ISDR, 2009). With this definition, a map of inundation depths of an affected area is equivalent to a hazard in terms of flooding (Tsakiris, 2014). We choose to use this definition of a hazard as we are not currently considering probability and are rather using known flood characteristics to create our inundation map estimate.
Similar to risk, vulnerability has also taken numerous definitions, falling into two categories depending on what end users consider in the vulnerability estimate. The IPCC defines vulnerability as the degree to which a system is susceptible to, or unable to cope with, the adverse effects of a hazard or, more broadly, climate change (Cardona et al., 2012). This is more closely related to the social vulnerability definition in social science fields, relating vulnerability to adaptive capacity (Smit and Wandel, 2006). Other definitions of vulnerability are more encompassing, including variables such as degree of exposure, capacity of the system, magnitude of a hazard, or value of assets exposed (Samuels and Gouldby, 2009; Tsakiris, 2014).

Increased development, and the subsequent expansion of impervious surfaces, increases people's potential exposure to both pluvial and fluvial flooding. Dividing Austin, Texas in the middle is the Colorado River, which is dammed by the Tom Miller Dam to the north-west (upstream) and the Longhorn Dam to the south-east (downstream). There are also numerous major creeks throughout the northern and southern sections of Austin. This study focuses on the region of Austin that is north of the Colorado River, containing the majority of new developments, major creeks, and population groups within Austin (Figure 1). Furthermore, this area encompasses a wide range of demographic groups stretching from West to East Austin, as well as encompassing the downtown and University of Texas areas. When discussing hazard, vulnerability, and impacts at the parcel level, our analysis only considers residential parcels within the formally defined Austin neighborhood boundary. Streamflow records for this region extend back to 1927, the farthest back that uninterrupted records reach. All stream reaches in this study reached their peak instantaneous flow rates within the three hours immediately following the end of the precipitation (i.e., by 20:00 CST).
The data sources and tools used in our analysis were deliberately chosen for their broad accessibility, allowing this methodology to be applied across the US with little to no data availability concerns (Table 2).
Stream reaches, their boundaries, streamflow discharge, and rainfall are all publicly available, provided by the United States Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA) (Table 1). 1-meter DEMs for the contiguous United States are also broadly available from the USGS, as well as through other state and regional agencies. Parcel boundaries are well defined across the country, and while a single national source is not publicly available, most city and state agencies will provide this information for free. For example, the Texas Natural Resources Information System (TNRIS) currently has parcel data for 228 of 254 Texas counties available for free.

The ACS 5-Year Estimates are period estimates representing data collected over the previous 60 months, the largest sample size of the ACS reports. For example, the 2017 data used in this analysis are an aggregation of data collected from 2013 through 2017. This large sample size dampens outliers and potential errors in sociodemographic data.
ACS 5-Year Estimates are available for all Block Groups across the US, the highest spatial resolution at which the Census Bureau publishes these data, and are therefore able to capture variation in the demographic makeup of a region. Block Groups have a population ranging from 600 to 3,000 people, depending on whether the Block Group is in a more rural or urban location.
ACS 5-Year Estimate reports at the Block Group level are not without disadvantages. Block Groups are not perfect delineations of neighborhoods, and can unintentionally group dissimilar neighborhoods (e.g., a predominantly black neighborhood grouped with an adjacent predominantly white neighborhood might not capture socio-economic differences, giving a false illusion of neighborhood homogeneity), creating a large margin of error in some estimations. ACS 5-Year Estimates are also the least current datasets available due to their 5-year look-back nature. This 5-year look-back period also limits comparisons that can be made between datasets. For example, the 2017 ACS 5-Year Estimate used in this study could not be compared to the 2018 5-Year Estimate, as they would have four out of five years of overlapping coverage. However, compared to other ACS reports and the difficulties and expenses of other survey data sources, the advantages of using the ACS 5-Year Estimates outweigh the disadvantages presented. This analysis uses social and demographic data from the 2017 ACS 5-Year Estimates report, as it best captures the socio-economic conditions of 2015 (i.e., 2015 is the midpoint of the 2017 dataset). In applications of this methodology for future planning and emergency response, the most relevant 5-Year Estimate will be the most recently released.

Methodology and Workflow
The following subsections detail the methodology and workflow for calculating the flood hazard map, sociodemographic vulnerability, and flood impact index at the parcel level (Figure 2).

Flood Hazard at the Parcel Level
The 1-meter DEM was first processed using the GeoNet workflow (Passalacqua et al., 2010; Sangireddy et al., 2016). GeoNet extracts channel networks from high resolution topography data through the application of nonlinear filtering and the identification of geodesic paths as curves of minimum cost. GeoNet uses a Perona-Malik non-linear smoothing image filter (set to 50 iterations) to remove observational noise and irregularities within the DEM. This non-linear filter uses gradient information to define the diffusion coefficient, preferentially smoothing regions outside and within the channel rather than across its boundary, in order to maintain clear channel boundaries. GeoNet can calculate both a geometric and a Laplacian curvature based on the desired use; we chose the geometric curvature in order to normalize across the entire study region. GeoNet then combines curvature, flow direction, and slope in a cost function representing travel between two points to determine the geodesic curve from the channel head to the basin outlet. GeoFlood integrates terrain and hydrological outputs from GeoNet and, through the application of the Height Above Nearest Drainage (HAND) method, creates an inundation map (extent and depths of flood waters) along the delineated stream channels for a given input flow rate (Nobre et al., 2011; Zheng et al., 2018).
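A minimal Perona-Malik iteration illustrates the edge-preserving smoothing described above. The `kappa` and time-step values here are illustrative assumptions, not GeoNet's settings, and boundaries are handled periodically for brevity.

```python
import numpy as np

def perona_malik(z, n_iter=50, kappa=0.1, dt=0.2):
    """Edge-preserving smoothing sketch: the diffusion coefficient decays
    with gradient magnitude, so flat areas are smoothed while sharp
    'channel bank' edges are preserved. Boundaries wrap (np.roll)."""
    u = z.astype(float).copy()
    for _ in range(n_iter):
        total = np.zeros_like(u)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            grad = np.roll(u, shift, axis=axis) - u  # neighbour difference
            c = np.exp(-((grad / kappa) ** 2))       # diffusion coefficient
            total += c * grad
        u += dt * total                              # explicit update
    return u

rng = np.random.default_rng(0)
noisy = np.zeros((16, 16))
noisy[:, 8:] = 1.0                       # a sharp step, like a channel bank
noisy += rng.normal(0, 0.05, noisy.shape)
smooth = perona_malik(noisy)
```

After 50 iterations the small-gradient noise is diffused away while the large-gradient step between the two halves survives nearly intact, which is the behavior GeoNet exploits to keep channel boundaries crisp.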
The HAND method relies on a flow direction raster as one of its primary inputs, thus requiring a hydrologically connected, or "hydrologically coherent" (Nobre et al., 2011), DEM where all depressions, pits, and flat areas are removed. Therefore, the resulting estimated fluvial inundation depths do not consider depressions. Given a known centerline water depth, h, at a river segment, the HAND raster is used to produce a water depth grid of the inundated area, F(h), within the local catchment draining to that segment. The water depth, d, at any location, i, is therefore:

d_i = h - HAND_i for HAND_i < h, and d_i = 0 otherwise. [1]

The Fill-Spill-Merge algorithm determines the pluvial inundation depths and extents using a uniform runoff depth across the study region. The previous five days leading up to the storm event under investigation all recorded some level of precipitation.
Furthermore, the storm itself exhibited flash flood characteristics, with 80% of the precipitation (over 10 cm) falling within two hours. These conditions led to saturated soils for the majority of downtown Austin, justifying the use of rainfall depth as an equivalent for runoff depth. We utilized a uniform rainfall depth, as it is a more accurate representation of an input that would be available in a near-real-time scenario than a gridded satellite precipitation measurement. Fill-Spill-Merge routes the rainfall depth through the depression hierarchy to its lowest downstream point before redistributing it to nodes with enough volume to contain the rainwater, with the excess being sent to the "ocean" (Barnes et al., 2019a). Fill-Spill-Merge requires an input elevation equal to the lowest elevation across the DEM to serve as the "ocean", the super-sink of the network that all water not remaining in a depression drains to. To accommodate this, we added an artificial elevation set to 0 feet along the entire perimeter of the DEM. Given a known volume of water in a depression, V_w, and the N raster cells within that depression, c_i = c_1, ..., c_N, each with known length (l), width (w), and ground elevation Z_i, the water level in the depression, Z_w, is therefore:

Z_w = (V_w + l w Σ_i Z_i) / (N l w), [2]

with each cell in the depression having a water elevation equal to the computed Z_w. For a more detailed explanation of this algorithm, we refer the reader to the Fill-Spill-Merge publication (Barnes et al., 2019a) and references therein.
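Because only cells below the water surface hold water, solving for the level in practice fills cells from the lowest upward; when every cell is submerged, this reduces to the Z_w relation above. A small sketch (unit cell footprint by default, hypothetical elevations):

```python
def depression_water_level(cell_elevations, V_w, l=1.0, w=1.0):
    """Water surface elevation Z_w for a depression holding volume V_w,
    filling cells of footprint l*w from the lowest upward. When all N
    cells are submerged this equals (V_w + l*w*sum(Z_i)) / (N*l*w)."""
    zs = sorted(cell_elevations)
    area = l * w
    level, n = zs[0], 1              # trial water level, submerged cells
    for z in zs[1:]:
        capacity = (z - level) * area * n  # volume to raise water to z
        if V_w <= capacity:
            break
        V_w -= capacity
        level, n = z, n + 1
    return level + V_w / (area * n)  # spread the remainder over n cells
```

For cell elevations [0, 1, 2] and V_w = 4, all three cells are submerged and the level matches the closed-form expression, (4 + 0 + 1 + 2) / 3.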
Since the rainfall event lasted approximately 5 hours and all stream reaches had their maximum instantaneous streamflows within 3 hours following the storm, we chose to use the peak discharge of each reach, independent of time, as a proxy for the worst-case scenario of fluvial flooding. Similarly, we consider the total cumulative rainfall depth as the worst-case scenario for pluvial flooding. The inundation extents produced by GeoFlood and Fill-Spill-Merge (Eqs. [1] and [2]) are merged to estimate the compound hazard: using raster math functions, the fluvial and pluvial inundation estimates are summed. This summation specifically highlights areas that will experience both fluvial and pluvial flooding.
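Assuming unit-consistent depth rasters, the merge reduces to simple raster math; a minimal numpy sketch with illustrative values, deriving fluvial depths from a HAND grid as described earlier:

```python
import numpy as np

# Fluvial depths from the HAND relation: d = h - HAND where HAND < h
hand = np.array([[0.2, 1.5], [0.6, 3.0]])  # height above nearest drainage (m)
h = 1.0                                    # centerline water depth (m)
fluvial = np.where(hand < h, h - hand, 0.0)

# Pluvial depths, e.g. from Fill-Spill-Merge (illustrative values, m)
pluvial = np.array([[0.05, 0.3], [0.0, 0.1]])

compound = fluvial + pluvial               # summed compound hazard
both = (fluvial > 0) & (pluvial > 0)       # cells flooded from both sources
```

The `both` mask isolates exactly the cells where depths accumulate from the two flood sources.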
We determine residential parcel hazard by overlaying the inundation and parcel layers and extracting the highest flood depth that intersects each parcel. Numerous factors affect an individual's exposure to a hazard, including but not limited to the flood duration, depth of water, velocity of storm water, and water quality (Middelmann-Fernandes, 2010). Therefore, depths are reclassified into discrete hazard levels rather than only min-max normalized; if depths were only min-max normalized, a small regional flood would appear to have a similar hazard to a large regional flood. A household's hazard level therefore refers to the reclassified maximum inundation depth, d_max, at that parcel (Eq. [3]).
Before being multiplied by SVI, the reclassified flood depths are normalized to a 0-1 scale, with one representing the highest flood hazard and zero representing no flooding.
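The per-parcel extraction, reclassification, and normalization can be sketched as follows; the bin edges and depth values are illustrative assumptions, not the study's reclassification thresholds.

```python
import numpy as np

depth = np.array([[0.0, 0.4, 1.1],
                  [0.2, 0.4, 2.6]])   # merged inundation depths (m)
parcel_id = np.array([[1, 1, 2],
                      [1, 3, 2]])     # parcel each raster cell falls in

def parcel_hazard(depth, parcel_id, bins=(0.15, 0.5, 1.0, 2.0)):
    """Max depth per parcel, reclassified into discrete hazard levels
    (illustrative bin edges), then scaled to 0-1 with the deepest
    class mapping to one and dry parcels to zero."""
    ids = np.unique(parcel_id)
    dmax = np.array([depth[parcel_id == i].max() for i in ids])
    level = np.digitize(dmax, bins)   # 0 = below the first edge (dry-ish)
    norm = level / len(bins)
    return dict(zip(ids.tolist(), norm.tolist()))

hazard = parcel_hazard(depth, parcel_id)
```

Parcel 2, whose deepest cell exceeds the top bin edge, lands in the highest class (1.0), while parcels 1 and 3 share a low class despite covering different numbers of cells.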

Sociodemographic Vulnerability at the Parcel Level
We collected sociodemographic vulnerability data at the Block Group level from Bixler et al. (2021), who applied a principal component analysis to reduce the original 29 socio-economic variables to six components. The 18 variables that remained are therefore the most significant socio-economic variables (of the original 29) impacting an individual's social vulnerability (Table 3). The six components are listed in descending order of variance explained; for example, the variables in the Wealth component account for 17.53% of the original variance between all of the variables. The removed variables have less descriptive power. We manually adjusted the cardinality of each component so that a higher variable value indicated a higher vulnerability (Table 3); for example, Wealth has a negative cardinality because a higher per capita income makes an individual less vulnerable. The sign-adjusted component scores were summed into a final score, which was again normalized from 0-1 (with one being the most vulnerable). The residential parcel SVI score is the SVI score of the block group to which that parcel belongs.
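The cardinality adjustment, summation, and normalization can be sketched with illustrative component scores (three of the six components shown for brevity; all values are assumptions):

```python
import numpy as np

# Illustrative component scores per block group (rows); the first
# column is a Wealth-like component, so its cardinality is flipped:
# richer block groups should score as less vulnerable.
scores = np.array([[0.2, 0.9, 0.4],
                   [0.8, 0.1, 0.6],
                   [0.5, 0.5, 0.5]])
cardinality = np.array([-1.0, 1.0, 1.0])

raw = (scores * cardinality).sum(axis=1)           # sign-adjusted sum
svi = (raw - raw.min()) / (raw.max() - raw.min())  # normalize to 0-1
```

After normalization, the most vulnerable block group scores 1 and the least vulnerable scores 0, matching the relative interpretation of the index.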
4.3 Flood Impact at the Parcel Level

As previously described, impact is the product of hazard and vulnerability (Eq. [5]). Therefore, household impact is calculated by multiplying the normalized flood hazard value by the parcel's normalized SVI score.
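The household impact calculation is then a per-parcel product of the two normalized indices; a minimal sketch with hypothetical parcel values:

```python
# Normalized parcel hazard (after reclassification and 0-1 scaling) and
# block group SVI dissolved to parcels; all values are illustrative.
hazard = {"parcel_a": 1.0, "parcel_b": 0.25, "parcel_c": 0.0}
svi    = {"parcel_a": 0.2, "parcel_b": 0.9,  "parcel_c": 0.7}

impact = {p: hazard[p] * svi[p] for p in hazard}  # impact index per parcel
```

Note that a dry parcel has zero impact regardless of vulnerability, while a highly vulnerable parcel with moderate hazard can outrank a low-vulnerability parcel at maximum hazard.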

Pluvial Flooding Comparison
GeoFlood has been shown to capture the general fluvial inundation patterns of flood events, with inundation extents overlapping 60-90% of FEMA inundation extents (Zheng et al., 2018, 2022). To evaluate our pluvial estimates, we compare Fill-Spill-Merge to ProMaIDes, a free software for risk-based assessment for river, urban, and coastal flooding developed at RWTH Aachen University and University Magdeburg-Stendal, Germany (Grimm et al., 2012; Bachmann, 2012, 2021). The hydrodynamic analysis implemented in ProMaIDes is based on a finite volume approach solving the diffusive wave equations and includes a multistep backward differentiation method for the temporal discretization (Tsai, 2003).
The 2D model domain for the hydrodynamic model is one subbasin within the Shoal Creek Watershed, covering approximately 5 km². The hydrodynamic model can be driven by spatially and temporally varying rainfall input. However, to enhance comparability, we applied a uniform rainfall depth of 13.2 cm and a uniform roughness coefficient for the model area.

Following the initial preprocessing steps (i.e., initializing GeoNet and GeoFlood), we computed the flood inundation layers (fluvial and pluvial components) in under 28 minutes on a Linux machine with a 4.2 GHz i5-10210U processor (4 cores, 8 threads).

In the following figures (excluding Figure 3), inset areas (A) and (B) compare two different locations within Austin, TX and represent the same areas across all figures. Inset (A) to the north highlights an area dominated by fluvial flooding; inset (B) to the south highlights an area dominated by pluvial flooding.

To compare the inundation extent estimates from Fill-Spill-Merge to the physics-based model, we overlaid and intersected both rasters (Figure 3). The intersected raster was then classified into four categories of wet-wet, wet-dry, dry-wet, and dry-dry, with each term in each pair referring to one of the raster layers (i.e., wet-wet refers to a cell that is flooded in both rasters, whereas wet-dry refers to a cell that is flooded in only one raster) (Johnson et al., 2019). We define accuracy of the Fill-Spill-Merge model as the number of wet-wet cells divided by the sum of the wet-wet, wet-dry, and dry-wet cells. We found the Fill-Spill-Merge model to be 31% accurate when excluding any inundated depths less than 1 cm. When the lower limit of allowable depths is increased to 6 cm and 15 cm, the accuracy increases to 44% and 66.5% respectively, suggesting that Fill-Spill-Merge performs comparably well at depths that are more likely to affect the final impact index. Fill-Spill-Merge predominantly underestimates inundated extents when compared to the model, and this occurs at larger intersections and along some roadways (Figure 3, insets A, B, C).
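The wet-wet agreement metric can be written compactly; a sketch with illustrative depth rasters and a 1-cm dry threshold:

```python
import numpy as np

def fsm_accuracy(a, b, min_depth=0.01):
    """Wet-wet cells divided by all cells wet in at least one raster
    (wet-wet + wet-dry + dry-wet); depths below min_depth count as dry."""
    wet_a, wet_b = a >= min_depth, b >= min_depth
    wet_wet = np.sum(wet_a & wet_b)
    mismatch = np.sum(wet_a ^ wet_b)  # wet-dry plus dry-wet cells
    return wet_wet / (wet_wet + mismatch)

fsm   = np.array([[0.02, 0.00], [0.20, 0.005]])  # illustrative depths (m)
model = np.array([[0.02, 0.05], [0.00, 0.000]])
acc = fsm_accuracy(fsm, model)
```

Raising `min_depth` reproduces the thresholding described above: shallow disagreement cells drop out of the denominator, so agreement at impactful depths is scored on its own.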

With the overarching goal to produce a comparable impact map in a fraction of the time, we thus compare reclassified parcel-level results between the two approaches.

Overall floodwater extents increase when considering both pluvial and fluvial sources (Figure 5). However, pluvial and fluvial flooding do not affect all locations equally, with some locations being affected more by fluvial flooding and others more by pluvial flooding. Of the 177 block groups within the study area, 67 (37.9%) experience flooding from only pluvial sources. Flood mapping that exclusively considers fluvial sources would not identify these block groups' potential flood hazard. Only five block groups have an increase in flood extents greater than 100%, suggesting that while pluvial flooding can greatly increase inundation extents across a city or region, fluvial flooding remains the dominant source of flood waters (i.e., the majority of flooding comes from fluvial sources) in those block groups that already experience fluvial flooding. This increase in floodwater extents is also visible by catchment area, showing that the increase is substantial across entire watersheds and not limited to certain locations along a stream reach (Table 4). The increase in floodwater extents within catchment areas when considering the combined effects of fluvial and pluvial flood sources ranges from 40% to 156%.

Analyzing flood hazard results by block group reveals a high level of variability, both between and within block groups (Figure 6). A high coefficient of variation (standard deviation divided by mean) signals a wide distribution, meaning that the mean hazard within a given boundary will significantly over- and under-estimate household hazard. This is represented in Figure 6 by the circles, with darker, larger circles indicating a higher coefficient of variation. Furthermore, the high dispersion in block group averages suggests that aggregating at a higher-level boundary (e.g., county) would result in similarly high coefficients of variation.
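The per-block-group coefficient of variation follows directly from parcel depths; a sketch with hypothetical values contrasting a uniform and a mixed block group:

```python
import numpy as np

# Hypothetical parcel-level max flood depths (m) grouped by block group
parcel_depths = {"bg_uniform": [0.1, 0.1, 0.1, 0.1],
                 "bg_mixed":   [0.0, 0.2, 1.5, 3.0]}

# Coefficient of variation: standard deviation divided by the mean
cv = {bg: float(np.std(d) / np.mean(d)) for bg, d in parcel_depths.items()}
```

The uniform block group has a CV of zero, so its mean depth describes every household; the mixed block group's CV exceeds 100%, so its mean simultaneously over- and under-estimates household hazard.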

FSM's and Hydrodynamic Model's Parcel Classification Comparison
Reporting hazard values by residential parcel allows this variability and dispersion to be captured in the final impact calculation (Figure 7). The reclassification of hazard values (Eq. [3]) allows for easier comparisons between regions, and thus quicker identification of potential hot spots. High hazard results appear predominantly along streamlines, which is expected as fluvial channel floodplains offer more locations for higher depths compared to topographic depressions, which are much smaller in scale. Conversely, areas farther from streamlines overwhelmingly fall in the lowest hazard level and are thus less directly impacted by high depth values (Figure 7).

Sociodemographic Vulnerability
Clear geographic disparities exist between the eastern and western portions of the study area in terms of the SVI estimates (Figure 8). Each residential parcel's SVI value is equivalent to the SVI value of the block group it coincides with. It is important to remember that the SVI estimate shown is relative; its absolute value is arbitrary and meaningful only in comparison between locations. Parcels with a score of 1 are the most vulnerable, and parcels with a score of 0 are the least vulnerable. The purpose of dissolving SVI down to the parcel level is to intersect it with our household hazard estimate to compute a parcel-specific impact.

Impact
There is a clear distinction in the flood impact index between the east and west portions of the study area; however, individual block groups themselves also contain variability (Figure 9). Some locations have varying levels of impact within the same block group, which aggregated estimates would not capture. This is especially prevalent in areas with a higher concentration of high impact households. Furthermore, high impact parcels exist in areas not directly adjacent to stream reaches.

High-Resolution Compound Flooding's Role in Increasing Parcel Level Hazard
Flood hazard is a function of both inundation extents and depths. Extent determines the breadth of flood waters, with larger flood extents forcing response and recovery efforts to spread out over large areas. Depth determines the level of damage, with a higher depth related to a higher level of damage. A significant source of hazard in urban areas that is often ignored is from 445 pluvial sources (Houston et al., 2011;Grahn and Nyberg, 2017). The exclusion of pluvial flooding from flood mitigation and emergency response planning will result in a drastic under representation of flood water extents which could impact millions of households across the United States (Wing et al., 2018). With 38% of all Census Block Groups in our study area only impacted by pluvial flooding, our results show that pluvial flooding cannot be excluded from flood hazard maps ( Figure 5).
Leading flood hazard maps (e.g., FEMA floodplain maps) and numerous flood risk studies (Burton and Cutter, 2008; Fekete, …) do not capture pluvial flooding's impact on roadways. We show that pluvial flooding specifically leads to ponded water on impervious surfaces such as roadways, intersections, and parking lots that would otherwise not be identified as being inundated (Figure 4).
Standing water depths greater than 13-cm can reach the undercarriage of most passenger cars, inhibiting safe evacuation routes (Moftakhari et al., 2018). Any increase in velocity or depth can block emergency response vehicles from reaching inundated areas. The co-occurrence of multiple types of flooding will increase depths (i.e., occurring at the same location), extents (i.e., occurring at the same time), or a combination of both (Wahl et al., 2015). In our study area, compound flooding predominantly increases extents (Figure 4). Fluvial flooding is associated with higher depths concentrated along stream reaches, while pluvial flooding is associated with lower depths spread over larger areas (Figure 7). Low-depth pluvial flooding can be described as "nuisance flooding," which can disrupt transportation networks, impact public safety, and potentially damage property (Moftakhari et al., 2018). Fluvial and pluvial floodwaters require specific mitigation actions; it is therefore important to quantify this distinction, given the place-based nature of flooding.
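The depth thresholds cited above can be turned into a simple reclassification rule. The category names below are our own illustrative shorthand, not the paper's hazard classes; the thresholds come from the literature cited in the text:

```python
def traffic_disruption(depth_cm):
    """Classify standing-water depth by its effect on vehicles, using the
    thresholds cited in the text: undercarriage contact near 13 cm
    (Moftakhari et al., 2018) and traffic halted above 15 cm
    (Pregnolato et al., 2017). Category names are illustrative."""
    if depth_cm <= 0:
        return "dry"
    if depth_cm < 13:
        return "nuisance"
    if depth_cm <= 15:
        return "undercarriage contact"
    return "impassable"
```

A rule of this form is how a continuous depth raster is reduced to the ordinal classes used in the household impact calculation.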
The City of Austin's FloodPro software, the city's leading source of floodplain information, lacks pluvial flooding information and therefore significantly underreports exposure. Including high-resolution pluvial flooding estimates is necessary for understanding the potential impacts to local infrastructure, residents, and emergency services. High-resolution compound flooding estimates can drastically improve the impact of local and regional flood policies by more accurately addressing flood issues that would otherwise go unnoticed.

Impact of Aggregating Hazard and Impact to Cartographic Boundaries
One of the leading purposes of mapping flood hazards with social vulnerability is to identify the most impacted populations and individuals. However, aggregating and reporting estimates to cartographic boundaries can significantly mask household-level variability, thus misclassifying some high- and low-impacted households. This misidentification can inhibit the proper allocation of mitigation and emergency response services. Our results show that when household hazard is averaged to Census Block Groups, 60% of all Block Groups have a coefficient of variation higher than 50%, demonstrating that a central tendency statistic reported over a cartographic boundary is not representative of actual flood conditions (Figure 7).
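The dispersion statistic used here is straightforward to compute. A sketch with hypothetical household depths (in metres) shows how two block groups with the same mean depth can have very different coefficients of variation:

```python
from statistics import mean, pstdev

def coefficient_of_variation(depths):
    """CV of household flood depths within one block group: population
    standard deviation divided by the mean, expressed as a percentage."""
    mu = mean(depths)
    return 100.0 * pstdev(depths) / mu

# Hypothetical block groups with identical mean depth (0.5 m) but very
# different spread across households:
uniform = [0.5, 0.5, 0.5, 0.5]   # CV = 0%: the mean is fully representative
mixed = [0.0, 0.1, 0.4, 1.5]     # CV > 100%: the mean hides dry and deep homes
```

In the second case the block-group mean (0.5 m) describes none of the four households well, which is exactly the masking effect the 50% CV threshold is meant to flag.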
The majority of recent research on social vulnerability to floods aggregates exposure, hazard, impact, consequence, or the subsequent risk estimates to Census Tract, zip code, or county boundaries (Burton and Cutter, 2008; Cutter et al., 2013; Chakraborty et al., 2014; Wing et al., 2020; Tate et al., 2021). The two primary reasons for aggregating results are (i) the exploratory nature and large geographic scale of these studies, which aim to identify broad regions of interest, and (ii) that the aggregated boundary is the resolution of the available socio-economic data. In this study, hazard is heterogeneous within Block Groups (Figure 7). Since social vulnerability estimates do not vary within a block group (Figure 8), the observed heterogeneity in the final impact estimate comes solely from the variability in hazard (Figure 9). While aggregated results can draw attention to broad regions of risk, household-level data are required to properly classify who will be impacted. This is not the first study to incorporate tax parcel data to estimate hazard at the household level (Nelson et al., 2015; Fahy et al., 2019); however, previous studies have relied on 100-year floodplain data that lack pluvial estimates. Recent studies have also computed high-resolution compound floodplains for a multitude of return periods (Bates et al., 2021). However, return period information has implementation limitations in city planning and natural resource management scenarios, as end users prefer information reported in more easily understandable and concrete reference points, such as depth values (Luke et al., 2018).
The methodology proposed in this study is not intended to replace large-scale pluvial and compound flood mapping techniques that also utilize high-resolution DEMs. As stated in Bermúdez et al. (2018) and Bulti and Abebe (2020a), full hydrodynamic 1D and 2D drainage models are well established for simulating urban pluvial floods and are available in a number of commercial software packages, including SOBEK, XP-SWMM 2D, MIKE FLOOD, and InfoWorks ICM (see references therein). Furthermore, Tate et al. (2021) demonstrated that high-resolution elevation data can be incorporated into full hydrodynamic models at the national scale. While broad exploratory and aggregated studies can assist with equally scaled mitigation and planning programs at the national and state level (e.g., FEMA's National Flood Insurance Program or the Texas Water Development Board's Flood Intended Use Program), household estimates are necessary for local planning and action plans to effectively serve those who are most impacted. If our final impact estimates were aggregated to the block group level, high-impact households would be masked and not identified. Similarly, low-impact households could be labeled inaccurately, leading to a misappropriation of resources. Highly impacted households are not necessarily limited to high-vulnerability neighborhoods; it is therefore important to view and report impact and risk estimates in an unbiased manner and at the highest resolution possible. Additionally, this simplified model has fewer input data requirements and requires less technical expertise to produce inundation scenario maps, an accessibility that full hydrodynamic models do not offer.

Pluvial Flooding Comparison
While GeoFlood's accuracy and comparability to full hydrodynamic model results have already been studied (Zheng et al., 2018), Fill-Spill-Merge's applicability as a pluvial flooding estimate has not. The advantages and disadvantages of a terrain-based pluvial flooding estimate relative to a hydrodynamic model fall into two categories: time and accuracy.
The single subbasin used in the hydrodynamic model, 5 km² in size, represents only 2% of the entire watershed studied and took over 11 hours to compute. This is despite parameter choices made to reduce computational time, such as uniform rainfall and roughness coefficients, reduced rainfall and follow-up time, and a downsampled DEM. While the hydrodynamic model could be further optimized for speed, the terrain-based estimate for the entire study area can be processed in less than thirty minutes. Rapidly occurring floods (i.e., flooding within 6 hours of the onset of precipitation) are some of the most hazardous natural events (Hapuarachchi et al., 2011). Short-term, storm-specific hazard and impact estimates require the speed of our estimation methodology, which can play a critical role in deploying emergency communications before a flooding event begins.

When we compared the terrain-based pluvial inundation estimate to the hydrodynamic model, we found a spatial extent accuracy of 31%, which increased to 66.5% when the lowest depth classification was ignored (Eq. [3]). The mismatch in inundation extents predominantly occurred along intersections and roadways, which do not affect our household-level classification since these locations do not intersect residential parcels. This is supported by our 92% household classification agreement, especially considering that 237 of the 251 misclassified parcels were off by only one class. The differences in the depth estimates of Fill-Spill-Merge and the hydrodynamic model are minimized when we examine maximum parcel depths. Identifying the households with the highest impact is the most important function of the reclassification methodology. 69% of the misclassified parcels are miscategorized between not experiencing the hazard (i.e., no flooding) and receiving less than 15-cm of flooding (i.e., the lowest classification), and therefore have little effect on the final impact calculation.
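The parcel-level comparison reduces to counting matching ordinal depth classes between the two models. The sketch below uses hypothetical class assignments (0 = no flooding … 4 = deepest) for eight parcels:

```python
def compare_classes(model_a, model_b):
    """Return (fraction of parcels assigned the same class by both models,
    number of parcel pairs whose classes differ by exactly one)."""
    assert len(model_a) == len(model_b)
    same = sum(1 for a, b in zip(model_a, model_b) if a == b)
    off_by_one = sum(1 for a, b in zip(model_a, model_b) if abs(a - b) == 1)
    return same / len(model_a), off_by_one

# Hypothetical depth classes for eight parcels from the two models:
terrain = [0, 0, 1, 1, 2, 3, 0, 4]   # terrain-based (Fill-Spill-Merge)
hydro   = [0, 1, 1, 1, 2, 2, 0, 4]   # hydrodynamic reference
agreement, near_misses = compare_classes(terrain, hydro)
```

Reporting near misses separately matters here: a mismatch of one class between "dry" and "less than 15-cm" carries far less weight in the final impact index than a mismatch spanning several classes.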
Comparing simplified conceptual models to full hydrodynamic models is a common method of verifying that the simplified models can produce comparable results in a fraction of the time (Lhomme et al., 2008; Bernini and Franchini, 2013; Zheng et al., 2018). A validation of the proposed methodology would involve comparing estimates to historical observations (McGrath et al., 2018); however, these data do not exist for the 2015 Memorial Day flood in Austin, Texas.

There are inherent challenges associated with SVIs and with reporting results in terms of relative risk that will require future, more in-depth analyses. Studies have shown that social vulnerability models tailored to specific hazards and outcomes perform better than generic social vulnerability indices (Tellman et al., 2020). Furthermore, the performance of generic indices has been shown to be statistically biased when the model configuration is manipulated (Tate, 2013). Similarly, while some studies show that flood exposure is higher for socially vulnerable populations (Lee and Jung, 2014; Rolfe et al., 2020), other studies show that populations with low social vulnerability can experience the highest exposure to flood hazards under certain circumstances (Fielding and Burningham, 2005; Bin and Kruse, 2006; Ueland and Warf, 2006; Chakraborty et al., 2014).
Resiliency and vulnerability indices are not created equally, and researchers should clearly state the objectives and structure underlying their metrics to support validation of the results against established goals (Bakkensen et al., 2017). We selected the SoVI® algorithm and variable set because of its widespread adoption and because our proof-of-concept workflow is designed to accept any SVI-like variable.
The simplistic nature of SVIs allows near-instantaneous estimation, but SVIs cannot capture the full complexity of vulnerability (Rufat et al., 2015). SVIs can inadvertently weight variables inaccurately (e.g., household income carrying the same vulnerability weight as median age), creating a biased depiction of vulnerability over a region, misidentifying at-risk individuals, and perpetuating risk. SVIs should incorporate city-specific information, including variables such as distance to critical infrastructure (e.g., hospitals) or access to resources (e.g., gas, food, electricity, transportation, and water), to ensure proper representation of all residents. Further consideration needs to be given to estimating social vulnerability at the household level.
Census data, especially at the Block Group level, can have large margins of error, and assuming that values for areal units apply at the household level requires more specific analysis. One option that has been used to address this concern is primary household survey data (Collins et al., 2015). Despite these limitations, generic social vulnerability indices continue to see prolific use in disaster and emergency research and are beneficial in identifying potentially at-risk individuals (Tellman et al., 2020; Tate et al., 2021).
There are also challenges associated with estimating flood hazard. The methods used to estimate exposure are a simplification of much more complex flood mechanics and do not account for variables such as storm drainage networks, movement around buildings and structures, and timing and velocity. A known limitation of topography-based inundation models is that they lack a timing mechanism and can therefore only show a single state of inundation (Bulti and Abebe, 2020b; Fritsch et al., 2016; Lhomme et al., 2008). While this workflow can produce estimates in near-real time, it is important to consider these estimates in the broader context of flood modeling and to account for the inherent uncertainties of terrain-based flood mapping. In the context of pluvial flooding, specifically nuisance flooding at lower depths, estimates are directly affected by DEM accuracy. The DEM used has a vertical accuracy of 6-cm, which is significant when considering flood depths between 3 and 10-cm (Moftakhari et al., 2018). While uncertainty and its communication can have substantial impacts on regulatory and response processes (Downton et al., 2005; Luke et al., 2018), there is also evidence that flood emergency managers are willing to trade larger uncertainties for faster information (McCarthy et al., 2007). As shown, pluvial flooding has a direct impact on roadways and intersections, suggesting its predominant impact may be the disruption of traffic and emergency services, especially considering the exponential decrease in vehicular traffic caused by standing water and the complete halting of traffic when depths exceed 15-cm (Pregnolato et al., 2017). Furthermore, initial conditions, such as whether there is a base amount of ponded water, can be incorporated into this methodology by routing a volume/depth through the topography before any additional runoff is routed.
There are other topography-based inundation methods that incorporate additional flooding factors, such as land-use information including infiltration and friction (Chu et al., 2013b; Appels et al., 2011; Antoine et al., 2009; Lhomme et al., 2008).

However, it is important to note the increased computational time involved with such methods. For example, the Rapid Flood Spreading Model (RFSM) requires only an input flood volume, elevation, and surface roughness (Bernini and Franchini, 2013; Lhomme et al., 2008). Due to the inclusion of surface roughness, the RFSM method is approximately 3.8 times slower than the Fill-Spill-Merge method utilized in our study (calculated by comparing the ratio of DEM cells to computational time in each study: 502 million cells in 28 minutes with our method versus 387 thousand cells in 5 seconds with theirs). In a near-real time scenario where the entire storm event occurs in less than 5 hours, the difference between a computational time of 28 minutes (our computational time) and 106 minutes (our computational time multiplied by 3.8, the estimated speed ratio of RFSM to Fill-Spill-Merge) is substantial. Future research will examine how to improve FSM's ability to estimate lower-depth inundation extents (i.e., less than 15-cm) as well as how road network disruptions affect a household's ability to access critical resources in near-real time (e.g., grocery stores, gas stations, pharmacies, and hospitals).
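The quoted speed ratio follows directly from the cell counts and runtimes reported in the two studies:

```python
# Throughput comparison from the figures stated in the text:
# Fill-Spill-Merge (this study) vs RFSM (Lhomme et al., 2008;
# Bernini and Franchini, 2013).
fsm_cells, fsm_seconds = 502e6, 28 * 60   # 502 million cells in 28 minutes
rfsm_cells, rfsm_seconds = 387e3, 5       # 387 thousand cells in 5 seconds

fsm_rate = fsm_cells / fsm_seconds        # cells processed per second (FSM)
rfsm_rate = rfsm_cells / rfsm_seconds     # cells processed per second (RFSM)
speed_ratio = fsm_rate / rfsm_rate        # ~3.86, quoted as ~3.8 in the text
```

The comparison assumes throughput scales linearly with cell count across the two very different domain sizes, which is a coarse but serviceable approximation for a back-of-the-envelope timing argument.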

While it is necessary to understand both short-term and long-term risk, as each requires unique actions and policies, this study is a specific attempt to identify short-term impacts for a known storm event. Long-term future flood risks, driven by the projected increase in the frequency of extreme weather events due to climate change, will require additional analyses. Future flood risk calculations can incorporate this workflow by using modeled storm characteristics and projected sociodemographic information. As a supplemental tool, this workflow can contribute to other research, response, and mitigation efforts.

Conclusion
The proposed workflow in this study creates a storm-specific urban flood impact index at the parcel level using high-resolution topographic data, near-real-time pluvial and fluvial flood estimates, and a region-specific social vulnerability index. The application of this workflow to the Memorial Day flood in Austin, TX showed that estimating fluvial flooding alone is not enough to predict urban flood hazards. Our pluvial hazard estimate, computed in near-real time, determines the parcel-level impact index accurately 94.4% of the time when compared to a full hydrodynamic model. Furthermore, we show that aggregating results to cartographic boundaries masks the dispersion of hazards and impacts, making it difficult to identify priority locations in planning, management, and emergency response scenarios. Through the inclusion of a social vulnerability index, end users are better able to identify the individuals facing the greatest impact (a product of flood hazard and vulnerability).

This methodology's power lies in its ability to calculate a 1-meter horizontal resolution inundation estimate for large urban areas (>250 km²) in under 30 minutes on a personal computer while using strictly open-source data that are theoretically available for the entire United States (i.e., Census data, USGS stream gauge data, rainfall data, DEMs, and residential parcels can generally be found free online for the majority of the country). Therefore, this framework can estimate a region- and storm-specific impact index in near-real time anywhere with widely available computing resources. Future work will explore including more flooding and vulnerability factors, such as non-census sociodemographic data, social and governance networks, coastal compound flooding, and local infrastructure data, to improve impact estimates.
Code and data availability. All data used in this analysis were publicly obtained from their respective sources, including NOAA, USGS, TNRIS, and the US Census Bureau. The GeoFlood and Fill-Spill-Merge codes can be found on their respective GitHub pages (https://github.com/r-barnes/Barnes2020-FillSpillMerge, https://github.com/passaH2O/GeoFlood). All data and the associated codes used can be retrieved from these sources.