This work is distributed under the Creative Commons Attribution 4.0 License.
INSPIRE Game: Integration of vulnerability in impact-based forecasting of urban floods
Abstract. Extreme precipitation events (EPEs) and flash floods cause enormous damage to life and property in cities. Precipitation forecasts help predict extreme events; however, they have limitations in anticipating the impacts of such events. Impact-based forecasts (IBFs), which integrate information on hazard, exposure and vulnerability, can anticipate impacts and support emergency decisions. In this study, we present a serious game experiment, called the INSPIRE game, that evaluates the roles of hazard, exposure and vulnerability in a flash flood situation triggered by an EPE. Participants make decisions in two rounds based on the extreme precipitation and flood that occurred over Mumbai on 26 July 2005. In the first round, participants make decisions for an EPE forecast for later in the afternoon. In the second round, they make decisions for the compound event of extreme precipitation, river flooding and high tide. Decisions were collected from 123 participants, predominantly researchers, PhD students and Master's students. Results show that participants' use of information to make decisions depended on the severity of the situation. A larger proportion of participants used the precipitation forecast and exposure information to make correct decisions in the first round, and the precipitation forecast and vulnerability information in the second. Higher levels of education and research experience enabled participants to discriminate between the severities of the events and to use the appropriate information set presented to them. Additionally, when offered the choice between qualitative and quantitative rainfall information, 64% of the participants preferred qualitative. Finally, we discuss the relevance and potential of integrating vulnerability into IBFs using inferences derived from the serious game.
Status: final response (author comments only)
CC1: 'Comment on hess-2024-116', Ashish Meena, 25 Apr 2024
This is interesting work to integrate vulnerability in decision-making. However, I would like to ask the authors whether this integration is limited to the indicator-based vulnerability assessment adopted in the study?
Citation: https://doi.org/10.5194/hess-2024-116-CC1
CC2: 'Reply on CC1', Akshay Singhal, 06 May 2024
Thank you for the comment. The integration of vulnerability presented in the study is not limited to the indicator-based method of vulnerability assessment used here. We selected the indicator-based method because it provides a range of perspectives and dimensions based on different indicators. It also clearly differentiates between overall 'vulnerability' and 'exposure' and allows a comparative understanding of the two concepts. However, any other method of vulnerability assessment can be used to achieve this integration.
Citation: https://doi.org/10.5194/hess-2024-116-CC2
RC1: 'Referee comment on hess-2024-116', Anonymous Referee #1, 03 Jun 2024
Overview:
This paper reviews the use of a 'serious game' to explore how different types of information influence flood management decisions, here tested in the city of Mumbai, India. I comment here as someone who is familiar with urban flood management, but less familiar with the use of 'serious games.' Overall, I think this is an important concept to explore: how including information about exposure and vulnerability (rather than just the hazard itself) could improve decision making. However, as articulated below, I do think multiple aspects need to be clarified or explained more.
General comments:
It seems like there could be some more description of previous work in this realm, including this helpful recent review of serious games and flood risk (https://wires.onlinelibrary.wiley.com/doi/full/10.1002/wat2.1589).
It could be helpful to give more meaningful abbreviations to E1, E2, E3 that indicate the type of information given, to help the reader more easily interpret the figures.
I suggest that you discuss the tradeoffs among the kinds of information that can be made available (e.g. relative time to prepare, ease or cost of obtaining it, etc.), and explain why participants are not simply given all of the best available information (i.e. high-resolution rainfall, exposure and vulnerability).
It would be helpful to expand on discussion about the 10% of participants that took actions in Gamma and were perceived to have misunderstandings. Could you learn anything from those participants that could improve the game?
In discussing the implications of the results, the statement is made around line 315 that information on vulnerability helps make better decisions than exposure alone. Yet just before this, it was stated that vulnerability information yielded the highest score in only one round (R2). Please be clearer and avoid misleading or overly general conclusions.
Detailed comments:
pg 2, line 25- in this paragraph, an obvious reason is missing: increasing population, and thus more potential impacts.
pg 5- mention somewhere in this section what type of players this is targeted to, e.g. an educated audience that would play the role of flood manager vs. the general public.
game scoring- it would be good to mention here that detailed information on decisions is in the appendix. Also, please clarify whether the managers on whom correct scores are based have 'hindsight' on what the best decision was based on what actually happened in the real event.
pg 5, line 144- it is mentioned that a modified precipitation forecast is used as the original severely underpredicted the actual event. This is worth more discussion later, given that flood managers cannot make good decisions when the information that they have ends up not at all aligning with the actualized event.
pg 6, line 158- this line noting '55% of its actual value' is confusing. Perhaps re-word to something like 'each indicator value of Alpha was scaled by 55%.'
pg7, line 185- it would be helpful to briefly explain how the indices (e.g. vulnerability) are calculated or their primary data inputs, rather than simply referring to a reference for all information.
pg9- line 251- it would be good to expand upon 'not overly straightforward' to connect to the fact that this demonstrates how flood managers can have trouble identifying the optimal outcome in the midst of the event.
pg 10, line 274- when discussing how medians in R2 have shifted higher but tails are negative, should have 'but' instead of 'and' in 'located close together, and the tails of their distributions...'
Chart of E1,2,3 info could be helpful, like a more detailed version of Figure 2 left side.
Figure 1- should be a bit larger so that sub-figure panels and associated text are more visible.
Citation: https://doi.org/10.5194/hess-2024-116-RC1
RC2: 'Comment on hess-2024-116', Anonymous Referee #2, 04 Jun 2024
The paper by Singhal et al. describes a serious game mimicking the decision process during an extreme flood event. The game is based on the record floods that affected Mumbai in 2005. Overall, the paper is well written and relatively clear except for certain method points discussed below. The topic of using games to help understand and improve emergency management is highly relevant for the HESS journal in the global context of increased population in flood-prone areas and a changing climate. By establishing a certain distance between the players and reality, a game constitutes an efficient tool to explore extreme and often dramatic events.
However, the game presented by the authors suffers from several fundamental flaws that make it unsuitable for publication in its present form. Two major flaws are discussed in the following section with more detailed comments provided in a subsequent part of the review report. All comments are numbered to facilitate later reference.
>>> Major comments
Comment #1 - No consideration of ethics: serious games are qualified as “serious” because they are closely related to real situations and, hence, can have a powerful impact on their players. More generally, a serious game is a social experiment on human beings which requires a detailed assessment of the ethics of the process to ensure that players are protected from harm. The authors never mention this aspect, which is surprising considering the policy of their respective institutions on this matter (IISER, 2021; Université Grenoble Alpes, 2024). Following Fisher & Anushko (2008), ethical considerations (1) must address potential conflicts of interest between the researchers and the participants, (2) must ensure informed consent of participants, and (3) must ensure equitable treatment of participants regardless of their cultural or socio-economic background. We noticed several elements in the authors’ game design that would require careful review in the light of these three principles:
(1.1) The 2005 Mumbai floods were an extremely traumatic experience. There is a high risk of participants being negatively affected by the game if they were associated with the event. There is no information in the paper on how the participants were recruited, whether they volunteered, or whether the purpose of the game was clearly explained to them.
(1.2) India is a country with a large cultural, linguistic and socio-economic diversity. It is not clear how this diversity was represented in the group of participants beyond their academic level. For example, the game seems to be based on questions asked in English with answers provided in the same language through a spreadsheet. This disproportionately favours participants with an academic background such as PhD students or researchers who, unsurprisingly, scored best in the game.
(1.3) The participants are not told that the game is about Mumbai until this fact is revealed at the end of the game. This practice is a deceptive method which is highly debated in social sciences (Fisher & Fyrberg, 1994). Although not firmly prohibited, we are personally sceptical about its benefits due to the lack of trust it generates between participants and game organisers. This aspect may not be significant here due to the absence of an ongoing relationship between the authors and participants (the game seems to be a “one-off”). However, it could trigger problematic situations in relation to our point 1.1 above if participants suddenly realise that they have been playing with data from a flood and a city they are familiar with.
Comment #2 - Bias in research analysis: The method presented by the authors suffers from several biases that could affect their conclusions and limit their applicability to real-life emergency decision-making. More specifically:
(2.1) The authors excluded responses from participants that made more decisions for the “Gamma” district (Line 129 of the manuscript). This is not acceptable as it modifies the outcome of the game arbitrarily. There could be many reasons why participants decided to take such decisions. For example, they could have favoured economic interests in the “Gamma” district against population safety in other districts. Such decisions are morally questionable but they remain part of the game nonetheless.
(2.2) The game duration is extremely short with 25 minutes for the two rounds and an additional 15 minutes of questions and discussion. In addition, the game is played individually without any interaction between the participants except during the last 15 minutes. Consequently, the game does not explore human interactions and coordination at all, which are fundamental in analysing emergency response (Drabek, 1985).
(2.3) All decision variables are colour coded, which removes the ability for the participants to weight quantitatively the information provided. We appreciate the author’s intent to simplify the information and allow the participants to compare disparate data. However, this is not the reality of an emergency decision process where flood managers must deal with sometimes confusing data.
(2.4) Vulnerability data are presented to participants at the same time as, or even after, rainfall forecast data. The game setup seems to reproduce the case of an untrained manager going through her or his very first flood who discovers vulnerability hot spots at the same time as rainfall forecasts arrive. This is not realistic for a seasoned manager who knows the city well. We suggest reconsidering this point and presenting the vulnerability data to the players well in advance, so that they understand the layout of the city before the game starts. The lack of context understanding seems to be confirmed by the game results, where participants obtained better scores in the second round than in the first (see Line 254).
(2.5) The game assumes that there is a “correct” answer for every round, defined by local experts. This aspect is quite disturbing, as it is difficult to know what the best decision is in a city as complex as Mumbai when facing a flood as extreme as 2005. In addition, there is little information about who the experts are and whether participants accept them as experts, whereas the definition of a correct decision in this case is likely to be highly contested. We suggest considering more diverse forms of reward, such as achieving consensus (if debate were allowed between participants) or showing consistency throughout the game (an important quality of emergency decisions).
>>> Minor comments
Line 25, “Three main reasons may be …”: a fourth, more fundamental reason is simply that extreme rainfall does not necessarily translate into high hazard. There are hydrological (e.g. antecedent conditions, non-linear runoff generation, ...) and hydrodynamic (e.g. topography, levee systems, backwater effects, ...) factors that complicate flooding processes and reduce the value of rainfall information.
Line 85, “Gupta and Nair”: the reference is not about the Mumbai flood but about floods in Chennai and Bangalore. Please remove this reference and replace it by a more appropriate one.
Line 105, “Flood Manager”: this role needs to be defined in greater detail. There is a great diversity of flood managers, ranging from liaison officers to operators of major infrastructure. Please clarify this point and explain how it was presented to the participants.
Line 115, “Meteorological Department, Department of Town Planning, Department of River Management, Department of Coast Management and the media cell”: why are there so many organisations providing information and only one role for the participants (Flood Manager)? Please clarify why it is important to distinguish the information provider and its effect on the responses during the game.
Line 142, “The accumulated rainfall forecast, used in the game, is a slight modification”: Please clarify if there was an attempt to reproduce the skill level of recent rainfall forecast. This information is important to assess if the forecasts are realistic for current decision making in Mumbai.
Line 147, “The information of exposure and vulnerability is statistically calculated”: this sentence is not clear. Remove this statement and refer to the following sections which explain the process.
Line 152, “Vulnerability and Exposure analysis”: the concept of exposure is confusing. As indicated by the authors and following Gallopín (2006), vulnerability can be decomposed into exposure, sensitivity and adaptive capacity. Consequently, exposure is a part of vulnerability, not an independent concept. However, the section title at line 152 suggests that it is distinct. Please clarify.
Line 156, “standardized”: please remove this word. The authors are simply calculating the value of each indicator based on the proportion of area flooded in the ward, assuming a homogeneous distribution of the indicator across the ward. Standardized has a different meaning, which often relates to subtracting the mean and dividing by the standard deviation.
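The area-proportional calculation described in this comment can be sketched as follows; the function name and the example numbers are illustrative placeholders, not values taken from the manuscript:

```python
# Illustrative sketch of the scaling the comment describes: a ward-level
# indicator is scaled by the fraction of the ward that is flooded, assuming
# the indicator is homogeneously distributed across the ward.
def flooded_indicator(indicator_value: float,
                      flooded_area_km2: float,
                      ward_area_km2: float) -> float:
    """Portion of a ward-level indicator attributable to the flooded area."""
    if ward_area_km2 <= 0:
        raise ValueError("ward area must be positive")
    # Clamp to [0, 1] so a fully flooded ward keeps its full indicator value
    fraction = min(max(flooded_area_km2 / ward_area_km2, 0.0), 1.0)
    return indicator_value * fraction

# e.g. an indicator of 40,000 in a ward with half of its area flooded
print(flooded_indicator(40000, 5.0, 10.0))  # 20000.0
```

This is not standardization in the statistical sense; it is a simple areal weighting, which is why the wording change is suggested.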
Line 160, “normalized”: Please define this normalisation.
Line 193, “based on the beta distribution”: this approach seems overcomplicated for the definition of simple indicators. The use of the beta distribution adds the uncertainty associated with the choice of the distribution and its parameter values. We suggest replacing this by the quantiles of the indicators across the 24 wards.
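The quantile-based alternative suggested in this comment could look like the following sketch; the indicator values are randomly generated placeholders, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder indicator values for the 24 wards (not the study's data)
indicator = rng.uniform(0.0, 1.0, size=24)

# Empirical quartiles of the indicator across the wards
q = np.quantile(indicator, [0.25, 0.5, 0.75])

# Assign each ward an ordinal class 0..3 by comparison with the quartiles;
# no parametric distribution (such as the beta) needs to be fitted
classes = np.digitize(indicator, q)
print(np.bincount(classes))  # roughly six wards per class by construction
```

The classes follow directly from the empirical distribution of the 24 wards, avoiding the extra uncertainty of choosing a distribution family and estimating its parameters.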
Line 220, “qualitative rainfall forecasts”: please clarify how rainfall forecasts are colour coded.
Line 302, “level of education does play a role in decision-making”: it is not obvious that researchers are best placed to take high risk decisions under intense time pressure. We believe that this statement is in fact the result of the multiple biases introduced by the game described in the previous section.
>>> References
Drabek, T. E. (1985). Managing the Emergency Response. Public Administration Review, 45, 85–92. https://doi.org/10.2307/3135002
Fisher, C. B., & Anushko, A. E. (2008). Research ethics in social science. The SAGE Handbook of Social Research Methods, 95–109.
Fisher, C. B., & Fyrberg, D. (1994). Participant partners: College students weigh the costs and benefits of deceptive research. American Psychologist, 49(5), 417–427. https://doi.org/10.1037/0003-066X.49.5.417
Gallopín, G. C. (2006). Linkages between vulnerability, resilience, and adaptive capacity. Global Environmental Change, 16(3), 293–303. https://doi.org/10.1016/j.gloenvcha.2006.02.004
IISER. (2021). Manual on R&D Project Management with Guidelines (p. 82). Indian Institute of Science Education and Research Bhopal. https://www.iiserb.ac.in/assets/all_upload/pdf/548351c53d6600ee2db8ebb02b804208.pdf
Université Grenoble Alpes. (2024). Le comité d’éthique et de déontologie. https://www.univ-grenoble-alpes.fr/universite/engagements/ethique-et-deontologie/le-comite-d-ethique-et-de-deontologie-1145514.kjsp?RH=1665562627143
Citation: https://doi.org/10.5194/hess-2024-116-RC2
Viewed
| HTML | PDF | XML | Total | BibTeX | EndNote |
|---|---|---|---|---|---|
| 431 | 75 | 15 | 521 | 7 | 11 |