Classification of Time-lapse RGB Imagery for a Remote Greenlandic River

Hydrology and Earth System Sciences Discussions (HESSD)

This discussion paper is/has been under review for the journal Hydrology and Earth System Sciences (HESS). Please refer to the corresponding final paper in HESS if available.


Abstract
River systems in remote environments are often challenging to monitor and understand where traditional gauging apparatus are difficult to install or where safety concerns prohibit field measurements. In such cases, remote sensing, especially terrestrial time-lapse imaging platforms, offers a means to better understand these fluvial systems. One such environment is found at the proglacial Isortoq River in southwest Greenland, a river whose constantly shifting floodplain and remote Arctic location make gauging and in situ measurements all but impossible. In order to derive relevant hydraulic parameters for this river, two RGB cameras were installed in July of 2011, and these cameras collected over 10 000 half-hourly time-lapse images of the river by September of 2012. Existing approaches for extracting hydraulic parameters from RGB imagery require manual or supervised classification of images into water and non-water areas, a task that was impractical for the volume of data in this study. As such, automated image filters were developed that removed images with environmental obstacles (e.g. shadows, sun glint, snow) from the processing stream. Further image filtering was accomplished via a novel automated histogram similarity filtering process. This similarity filtering allowed successful (mean accuracy 79.6 %) supervised classification of filtered images from training data collected from just 10 % of those images. Effective width, a hydraulic parameter highly correlated with discharge in braided rivers, was extracted from these classified images, producing a hydrograph proxy for the Isortoq River between 2011 and 2012. This hydrograph proxy shows agreement with historic flooding observed in other parts of Greenland in July 2012 and offers promise that the imaging platform and processing methodology presented here will be useful for future monitoring studies of remote rivers.

Introduction
Proglacial streams and rivers along land-terminating edges of the Greenland Ice Sheet are among the world's most difficult fluvial systems to study in the field, owing to their remoteness, harsh climate, and braided morphology. Discharge variations in large proglacial rivers are of particular scientific interest, as these systems typically derive water from the interior ablation surface of the Greenland Ice Sheet and are thus useful for inferring runoff mass losses from the ice sheet (Rennermalm et al., 2013; Smith et al., 2014). However, their high sediment loads, unstable banks, and dynamic braided channels present challenges to traditional in situ river gauging techniques, and long term hydrographs for these rivers are rare. While not unique to Greenland, these challenges are particularly evident there, with more than 100 large (> 1 km width) braided rivers exiting the ice sheet with no observations of discharge whatsoever.
Regardless of the technology used, each remotely sensed image must first be classified into areas of water and non-water, a task for which numerous methodologies exist. In satellite remote sensing, NIR wavelengths can reliably detect open water surfaces.
However, satellite imagery often lacks the required spatial and temporal resolution to adequately capture hydrologic phenomena, especially for smaller rivers. This has led to the use of non-metric, true color (RGB) digital camera imagery to capture water surfaces as an inexpensive and image-on-demand alternative to satellite and airborne platforms, especially for braided rivers. To calculate hydraulic parameters (e.g. effective width, braiding index, sinuosity, or bed slope elevation), these studies have commonly classified water surfaces within images either manually or by supervised classification (Egozi and Ashmore, 2008; Bertoldi et al., 2009; Hundey and Ashmore, 2009; Ashmore et al., 2011; Welber et al., 2012). Another parameter estimation approach relies on water surface delineation from automatically generated DEMs constructed from stereo-imagery and other data sources (Chandler et al., 2002; Ashmore and Sauks, 2006; Bird et al., 2010; Bertoldi et al., 2010). While each of these studies successfully calculated hydrologic parameters from remotely sensed images, their manual, time-intensive approaches are impractical for large data volumes. This is especially an issue for the long term hydrologic monitoring sorely needed in many remote rivers, as the image platform and processing developed by Ashmore and Sauks (2006) and Welber et al. (2012), for instance, could easily generate tens of thousands of images per year. Automated DEM generation methods would seem a ready alternative, yet these require numerous fixed targets of known position to persist from image to image, which are seldom found or are difficult to install on dynamic braided river systems owing to their constantly shifting morphology.
If such image platforms are to be viable for long term monitoring studies, a systematic procedure for automatic image quality selection and classification, preferably for RGB image data, is needed.
To that end, this paper proposes a semi-automated processing stream designed to classify and extract hydraulic parameters of interest from large volumes of RGB image data collected from a fixed terrestrial platform, and demonstrates its efficacy on a remote Greenlandic river. Automated filters are developed that remove obstacles to image classification based on easily calculated environmental variables, and an image similarity filter is developed that allows supervised classification of many images from minimal training data. Here, these filtering and classification techniques are employed to extract effective width W_e (inundation area divided by reach length), a hydraulic parameter that has been shown to be highly correlated with discharge in braided rivers and has been successfully extracted from remotely sensed data in proglacial environments (Smith et al., 1996; Smith, 1997; Ashmore and Sauks, 2006; Smith and Pavelsky, 2008; Ashmore et al., 2011). To evaluate the robustness of the extraction, we assess image classification accuracy using manually generated ground truth data.

Data
This study was conducted on the proglacial Isortoq River in southwestern Greenland.
The Isortoq, one of the largest braided rivers draining the Greenland ice sheet, issues from the Issunguata Sermia glacier terminus with discharge dominated by meltwater outflow from the ablating ice surface (Smith et al., 2014). In July 2011, two Nikon D200 model RGB cameras (focal lengths of 24 and 50 mm) were installed 250 m above a reach of the Isortoq braid plain approximately 3.1 km downstream of the ice edge.
The camera system was identical to that developed by the Extreme Ice Survey project (www.extremeicesurvey.org) for use in severe Arctic conditions. In addition to the cameras, a modified battery pack and electronic controller were housed inside a weatherproof case with an abrasion-resistant viewing window. The case was mounted on a survey tripod and powered by a 12 V gel battery recharged by solar panel. The cameras were oriented so as to image sections of the braid plain of approximately 1.5 km × 2.0 km and 2.0 km × 2.3 km, respectively (Fig. 1), and captured one image every 30 min when light conditions permitted.
Camera data collection commenced 22 July 2011, and over 20 000 images were retrieved from the cameras by 10 September 2012, covering most of two melt seasons. The camera setup proved robust: the light sensor operated properly, the position of the cameras remained unchanged, and the batteries powering the cameras were still functional after the one year collection period for the wide focus camera. However, a presumed Arctic fox chewed through the cables connecting the battery to the camera for the more narrowly focused platform and halted data collection only two months after installation. Therefore, all analyses presented in this paper refer to the wide focus camera, which remained continuously operable throughout the study period 22 July 2011-10 September 2012.

Methods
Classifying the RGB image data into water and non-water areas to extract W_e presented several technical challenges for the 10 327 images that were collected by the wide focus camera from July 2011 to September 2012. Existing approaches for hydraulic parameter extraction from RGB data require either manual or supervised classification of water within each image and are thus inappropriate for the large data volumes generated in this study. Unsupervised classification techniques provide a straightforward alternative for large time-lapse camera datasets, yet also present additional challenges, as the images collected here are extremely diverse and differing soil moisture in the braid plain gives the appearance of multiple classes of output. Environmental factors such as time-varying solar angles, blowing sand, dense fog, shadowing, snow and rain on the camera lens, and acute sun glint from the water surface are especially prevalent in the Isortoq image data. These factors were all addressed, and W_e accurately extracted, by the processing workflow described below and presented in Fig. 2.

Environmental filtering
The first task for extracting W_e was to filter the large amount of image data into those images that were most easily classified into water and non-water areas by eliminating images containing the environmental obstacles described above. Once images are classified, water area (and therefore W_e) may be calculated. Several filters were developed to remove these poor quality images. First, images acquired during periods of non-flow (before and after melt season activity) were culled. Next, images with shadowing were culled by calculating the zenith and azimuth angles of the sun relative to the river plain. Through visual inspection of the image time series, zenith angles less than 65° and azimuth angles within a certain range were found to produce shadows created by steep valley walls that prevented accurate classification (note valley walls, Figs. 1 and 2). Next, images that exhibited excessive sun glint were removed. Sun glint was defined as occurring when an image exhibited either a ratio of the 95th brightness percentile to the 5th brightness percentile greater than 1.8, or contained more than 1 % of pixels with a brightness value greater than 215. This filter was necessary, as sun glint was observed both on open water and on saturated sand, making distinction between these very different fluvial environments difficult (Fig. 2). Successful application of these winter, shadow, and sun glint filters culled 9487 images from the image time series, leaving 840 images free of environmental obstacles that still represented every day of the two melt seasons.
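The sun glint test described above (95th/5th brightness percentile ratio greater than 1.8, or more than 1 % of pixels brighter than 215) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `has_sun_glint` and the assumption that each image is reduced to a flat list of per-pixel brightness values (0-255) are ours.

```python
def percentile(values, p):
    """Nearest-rank percentile (0 <= p <= 100) of a list of numbers."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, int(round(p / 100.0 * (len(s) - 1)))))
    return s[idx]

def has_sun_glint(brightness, ratio_thresh=1.8, bright_val=215, bright_frac=0.01):
    """Flag an image as glinted if the 95th/5th percentile brightness ratio
    exceeds ratio_thresh, or if more than bright_frac of pixels exceed
    bright_val. Thresholds follow the values stated in the text."""
    p95 = percentile(brightness, 95)
    p5 = percentile(brightness, 5)
    if p5 > 0 and p95 / p5 > ratio_thresh:
        return True
    # Fraction of saturated (very bright) pixels
    frac_bright = sum(1 for v in brightness if v > bright_val) / len(brightness)
    return frac_bright > bright_frac
```

An image flagged by either condition would be culled from the processing stream before classification.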

Similarity filtering
Even with these stringent filters, unsupervised classification was still unable to delineate water surfaces with satisfactory accuracy, and the number of images remaining was still too large for supervised classification to be feasible. As such, a semi-supervised classification approach was developed. To perform this classification, another image filtering step was needed to find images that were similar enough to one another to share training data from a small sample of images in a supervised classification. The presence of dense fog, blowing sand, or cloudiness changes the brightness values of the imagery, so even images collected with identical solar geometry can be difficult to classify in an unsupervised manner. A similarity filter was developed that selected images that not only had similar solar geometry, but also had the same brightness and illumination and were all free of environmental obstacles not covered by the first filtering. This similarity filtering was accomplished by calculating and comparing the histograms of each of the red, green, and blue bands for each image. Histograms of brightness values that fell into 100 bins evenly spaced from 0 to 255 were calculated for each band of each image. Using the same bins for each image ensured that cross comparison of images would not be affected by stretching of the image data. Once these histograms were generated, the RMSE between histogram counts per bin was computed in a band-by-band pairwise permutation, giving a per-image and per-band indication of the similarity of every image to each other image. These band-by-band RMSE values were then averaged to arrive at an overall measure of image similarity. This metric was used to identify the 20 % of the images that were most similar to each other, resulting in 168 images that were collected at similar sun angles without any environmental obstacles. In addition, the similarity filter also produced images that contained four basic elements: dark (non-sun-lit, turbid) water, bright (sun-lit or non-turbid) water, dark (wet) sand, and bright (dry) sand.
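The histogram comparison above can be sketched as follows. This is an illustrative reconstruction under our own assumptions: images are represented as dicts mapping band names ('r', 'g', 'b') to flat lists of pixel values, and the function names are hypothetical.

```python
import math

NBINS = 100  # 100 evenly spaced bins over 0-255, as described in the text

def band_histogram(values, nbins=NBINS, vmax=255):
    """Counts of pixel values in nbins fixed, evenly spaced bins over [0, vmax].
    Using the same bins for every image keeps comparisons stretch-independent."""
    counts = [0] * nbins
    for v in values:
        b = min(nbins - 1, int(v / (vmax + 1) * nbins))
        counts[b] += 1
    return counts

def image_dissimilarity(img_a, img_b):
    """Mean of the per-band RMSEs between histogram bin counts of two images.
    Lower values indicate more similar brightness/illumination conditions."""
    rmses = []
    for band in ('r', 'g', 'b'):
        ha = band_histogram(img_a[band])
        hb = band_histogram(img_b[band])
        rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(ha, hb)) / len(ha))
        rmses.append(rmse)
    return sum(rmses) / len(rmses)
```

In the paper's workflow this metric would be computed pairwise over all filtered images, with the 20 % of images most similar to one another retained for shared training data.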

Georectification and classification
Once the final filtering of images was complete, images were cropped to exclude the wide upstream floodplain and georectified into ground coordinates using a 4th degree polynomial transformation implemented in ENVI v4.8 (Fig. 2). Eighty ground control points were manually extracted from a 2 m panchromatic WorldView-2 image acquired on 23 September 2011 (paired with a camera image collected 10 min later) and used to define the basis for the transformation. This warping polynomial was subsequently applied to all filtered images. After georectification, each image pixel had dimensions of 1 m by 1 m, an appropriate resolution for camera data collected at this scale. These georectified pixels allowed calculation of water surface area, and thus W_e, from the classified images.

To classify images into water and non-water areas for W_e extraction, training data representing four classes (dark water, bright water, dark sand, and bright sand) were manually collected from a random 10 % sample (16 images) of the similarity filtered images. The RGB statistics generated from these training polygons were applied to all images passing the similarity filtering and used to train a maximum likelihood supervised classification method performed in ENVI v4.8 for each image. This process requires that each image has nearly identical RGB composition in order to be successful, which was guaranteed by the similarity filtering.
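Given a classified, georectified image, the effective width calculation (inundated area divided by reach length) reduces to counting water pixels. The sketch below assumes a binary water mask and an illustrative reach length; the function name and inputs are ours, not the paper's.

```python
def effective_width(water_mask, pixel_area_m2=1.0, reach_length_m=2000.0):
    """Effective width W_e (m): total inundated area divided by reach length.
    water_mask is a 2-D list of 0/1 values (1 = water). With the 1 m x 1 m
    georectified pixels described in the text, water area in m^2 equals the
    water-pixel count. reach_length_m is illustrative (the target reach in
    Fig. 1 is roughly 2000 m long)."""
    water_pixels = sum(sum(row) for row in water_mask)
    area_m2 = water_pixels * pixel_area_m2
    return area_m2 / reach_length_m
```

For example, 10 water pixels of 1 m² over a 10 m reach would give W_e = 1 m.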

Accuracy assessment
The semi-supervised classification described here proved an effective and unbiased classification method. Figure 3 shows the overall accuracy, user's accuracy for water, and user's accuracy for non-water as a function of W_e from a random sample of 56 images (33 % of filtered images). Accuracy was assessed using approximately 500 semi-random manually derived assessment points for each class (water and non-water) per image. Of particular interest were both the overall accuracy (total number of correctly classified assessment points divided by total number of assessment points, ∼ 500) and the user's accuracy for water and non-water (percentage of image pixels classified correctly as assessed by the training data). These metrics provide an assessment of classification performance from the standpoint of each classified image: the paradigm that speaks directly to the fidelity of extracted W_e. Accuracy assessment indicates that overall accuracy is acceptable (mean accuracy for the assessment sample is 79.6 %), and neither overall accuracy (r = −0.11) nor water user's accuracy (r = 0.35) shows strong correlation with W_e. This lack of correlation indicates that the classification of water is not affected by the extent of water inundation in the scene. There is a strong correlation (r = −0.79) between the user's accuracy of non-water pixels and W_e, but this negative correlation is a reflection of the difficulty of classifying the small number of non-water pixels remaining in scenes where the braid plain was nearly completely flooded. The reason for this successful classification was the similarity of filtered images, which was guaranteed by the histogram matching procedure described above. W_e was extracted over a target reach located where the image data provided complete bank-to-bank coverage, indicated by the magenta polygon in Fig. 2.
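The accuracy metrics above can be expressed compactly. This sketch assumes per-class tallies of correctly classified assessment points (the paper uses roughly 500 points per class per image); the function name is hypothetical.

```python
def accuracies(n_water_correct, n_water_total, n_nonwater_correct, n_nonwater_total):
    """Overall accuracy and per-class user's accuracies from counts of
    correctly classified assessment points.
    Overall accuracy = correct points / total points across both classes.
    User's accuracy = correct points within a class / total points assessed
    for that class (assumed here to be tallied per classified class)."""
    overall = ((n_water_correct + n_nonwater_correct)
               / (n_water_total + n_nonwater_total))
    users_water = n_water_correct / n_water_total
    users_nonwater = n_nonwater_correct / n_nonwater_total
    return overall, users_water, users_nonwater
```

Plotting these per-image values against the corresponding extracted W_e yields the correlations reported above (e.g. r = −0.79 for non-water user's accuracy versus W_e).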

Extracted W_e hydrograph
The W_e hydrograph shown in Fig. 4 is a proxy for discharge variations in the Isortoq River from 2011-2012. Gaps in the date record indicate that there were no images that passed filtering on those dates, even though images were acquired half hourly. This is a result of prolonged rain events, heavy fog, or strong winds that caused images to be non-similar during these days. Despite these gaps, the data record still provides near daily coverage, indicating that filtering did not substantially affect the temporal distribution of the output data. Of note is the large peak in W_e seen in July of 2012, coinciding with historic melting of the Greenland ice sheet (Hall et al., 2013; Tedesco et al., 2013) and destruction of the Watson River bridge in the town of Kangerlussuaq (Smith et al., 2014), located approximately 15 km south of the Isortoq River.
Figure 4 also reveals that the relative magnitude of W_e during this melt event was an order of magnitude greater than W_e in low flow stages. This shows that the Isortoq River behaves like other braided rivers with non-cohesive bed material, as its width adjusts rapidly to changing discharge. In addition, the peak W_e observed here corresponds to almost complete floodplain occupation by the river, highlighting the difficulty of installing traditional gauging equipment at this site.

Conclusions
This paper has demonstrated the efficacy of a fixed position RGB time-lapse camera platform for hydraulic parameter extraction for a large proglacial braided river in a remote area of Greenland. The operational camera delivered over 10 000 half hourly images in just over one year of collection, and demonstrated remarkable resilience in the Greenlandic winter. Such a platform is useful for extraction of multiple hydraulic parameters, including effective width (W_e), a proxy for discharge variations. To fully realize this monitoring potential, the W_e variations extracted for each image could be calibrated with a rating curve built from intermittent field data.
The above accuracy assessments indicate that the semi-supervised classification method produced accurate and unbiased results. An accurately delineated water surface is necessary to preserve the fidelity of extracted hydraulic parameters. The processing techniques described in this paper fall short of completely automated processing, yet this paper does present an analysis protocol that achieves a consistent standard of classification from images that are automatically selected for ease of classification. Furthermore, the similarity filtering presented herein allows for supervised classification of numerous images from minimal training data, enabling long term hydrologic records to be maintained without onerous manual classification of imagery or photogrammetrically challenging DEM extraction.

Figure 1. Example images taken on 17 July 2012 of the Isortoq River by the two camera systems, as well as the cameras themselves (foreground and background, a). The Issunguata Sermia Glacier is seen in the background, and nearly all water in this river is derived from its melting terminus. Only the wide focus camera (c) has a continuous data record from 2011-2012, as a presumed Arctic fox severed the wiring on the narrow focus camera. The yellow polygon in the wide focus image shows the target reach for W_e extraction, covering an area of approximately 1000 by 2000 m.