Although precipitation has been measured for many centuries,
precipitation measurements are still beset with significant inaccuracies.
Solid precipitation is particularly difficult to measure accurately, and
wintertime precipitation measurement biases between different observing
networks or different regions can
exceed 100 %. Using precipitation gauge results from the World
Meteorological Organization Solid Precipitation Intercomparison Experiment
(WMO-SPICE), errors in precipitation measurement caused by gauge uncertainty,
spatial variability in precipitation, hydrometeor type, crystal habit, and
wind were quantified. The methods used to calculate gauge catch efficiency
and correct known biases are described. Adjustments, in the form of
“transfer functions” that describe catch efficiency as a function of air
temperature and wind speed, were derived using measurements from eight
separate WMO-SPICE sites for both unshielded and single-Alter-shielded
precipitation-weighing gauges. For the unshielded gauges, the average
undercatch for all eight sites was 0.50 mm h⁻¹.
Like many atmospheric measurements, precipitation is subject to the observer effect, whereby the act of observing affects the observation itself. Hydrometeors falling towards a precipitation gauge can be deflected away from the gauge inlet due to changes in the speed and direction of the airflow around the gauge that are caused by the gauge itself (e.g. Sevruk et al., 1991). The magnitude of this effect varies with wind speed; wind shielding; the shape of the precipitation gauge; and the predominant size, phase, and fall velocity of the hydrometeors (Colli et al., 2015; Folland, 1988; Groisman et al., 1991; Theriault et al., 2012; Wolff et al., 2013). Because all these factors affect the amount of undercatch, it is difficult to accurately describe and adjust the resultant errors for all gauges, in all places, in all types of weather. This has been an active area of research for over 100 years (e.g. Alter, 1937; Heberden, 1769; Jevons, 1861; Nipher, 1878), with significant findings for manual measurements described in the WMO precipitation intercomparison experiment performed in the 1990s (Goodison et al., 1998; Yang et al., 1995, 1998b). More recently, studies of the magnitude and importance of such measurement errors have been performed using both analytical (Colli et al., 2015, 2016; Nespor and Sevruk, 1999; Theriault et al., 2012) and observational approaches (e.g. Chen et al., 2015; Ma et al., 2015; Rasmussen et al., 2012; Wolff et al., 2013, 2015). The ultimate goal of all this research is to facilitate the creation of accurate and consistent precipitation records spanning different climates and different measurement networks, including measurements by different gauges and shields (e.g. Førland and Hanssen-Bauer, 2000; Scaff et al., 2015; Yang and Ohata, 2001).
Due to the importance of precipitation measurements for hydrological, climate, and weather research, and also due to the many unanswered questions and uncertainties that are associated with solid precipitation, in particular (Førland and Hanssen-Bauer, 2000; Goodison et al., 1998; Sevruk et al., 2009), the WMO began planning an international intercomparison focused on solid precipitation in 2010. The primary goals of this intercomparison include the assessment of new automated gauges and wind shields as well as the development of adjustments for these gauges and wind shields.
In this work, results from the WMO Solid Precipitation Intercomparison Experiment (WMO-SPICE) were used to develop adjustments for different types of weighing gauges, within different types of shields. Due to the nature of this unique dataset, including periods of precipitation from many different sites, gauges, and shields, new analysis techniques were used to develop adjustments and quantify the errors inherent in applying such corrections. The focus of this work is on the unshielded and single-Alter-shielded precipitation-weighing gauges. Based on results of a previous CIMO survey (Nitu and Wong, 2010), WMO-SPICE selected two of the most ubiquitous configurations used in national networks for the measurement of solid precipitation. As requested by WMO-SPICE, both unshielded and single-Alter-shielded weighing gauges were present at all of the WMO-SPICE testbeds. Eight of these sites also had a Double Fence Automated Reference (DFAR) configuration (Nitu, 2012), which is essentially an automated weighing gauge within a Double Fence Intercomparison Reference (DFIR) shield. The DFIR is described in detail in the previous WMO Solid Precipitation Intercomparison report (Goodison et al., 1998). The measurements from these gauges provided a unique opportunity to develop and test wind corrections for unshielded and single-Alter-shielded gauges at multiple sites.
A testbed with a well-shielded gauge installed either within well-maintained bushes (Yang, 2014) or large, octagonal, concentric wooden fences (Golubev, 1986, 1989; Yang et al., 1998b) is required to develop a precipitation transfer function. Such transfer functions are typically developed at one site and used to adjust precipitation measurements recorded elsewhere. This raises the obvious question of how universal such adjustments are. At a given site, the catch efficiency (CE) of an unshielded or a single-Alter-shielded gauge can vary significantly (e.g. Ma et al., 2015; Rasmussen et al., 2012; Theriault et al., 2012; Wolff et al., 2015). Some of this variability is driven by differences in ice crystal shape (habit), mean hydrometeor fall velocity, and hydrometeor size (e.g. Theriault et al., 2012), so it follows that there may be significant variability in catch efficiency from site to site. In addition, issues such as complex topography may contribute to the site-specific variability in catch efficiency. The magnitude of this site-to-site variability has not been previously quantified.
In the present study, measurements from eight different WMO-SPICE testbeds, varying significantly in their siting, elevation, and climate, allowed us to address the important and long-standing question of how applicable a single transfer function is for multiple sites. Measurements from all sites, spanning two winter seasons from 2013–2015, were combined to develop robust multi-site transfer functions, marking a significant departure and improvement over past work, in which typically only one (Folland, 1988; Smith, 2009; Wolff et al., 2015) or sometimes two (Kochendorfer et al., 2017) sites were used to develop transfer functions. In addition, the resultant multi-site or “universal” transfer functions were tested on each site individually, revealing the magnitude of site-specific errors and biases for the different types of transfer functions developed.
Precipitation measurements from eight sites were included in this analysis,
each of which had a DFAR configuration. They include the Canadian CARE site
(CARE), the Norwegian Haukeliseter site (Hauk), the Swiss Weissfluhjoch site
(Weis), the Finnish Sodankylä site (Sod), the Canadian Caribou Creek site
(CaCr), the Spanish Formigal site (For), the US Marshall site (Ma), and the
Canadian Bratt's Lake site (BrLa). Site locations are shown in Fig. 1. These
sites are described in detail in the WMO-SPICE commissioning reports.
The locations of the WMO-SPICE testbeds included in this study.
Except for Formigal, Spain, where DFAR measurements were only available
during the winter of 2014–2015, measurements from the winter seasons (1
October–30 April) of 2013–2014 and 2014–2015 were used in this analysis.
For WMO-SPICE, the reference precipitation-weighing gauges were designated by
the International Organizing Committee as the OTT Pluvio² weighing gauges.
Comparison of unshielded Pluvio² precipitation measurements.
Site names, abbreviations (Abbr.), latitude, and mean gauge height wind
speed for the WMO-SPICE testbeds included in this study.
Data from each site were processed using a standardized quality control
procedure. The procedure was developed and tested using WMO-SPICE data and
consisted of the following steps:
- A format check, to ensure that the data had the correct number of
records per day, as expected based on the instrument sampling resolution
(data with repeated time stamps were removed, and any missing time stamps
were filled with "null" values);
- A range check, to identify and remove values that were outside of the
manufacturer-specified output range for each sensor;
- A jump filter, to identify and remove points exceeding the maximum
point-to-point variation expected for a given sensor;
- A smoothing step, employing a Gaussian filter to mitigate the influence of
high-frequency noise on instrument data (applied only to data from weighing
gauges).
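The steps above can be sketched as follows. This is a minimal illustration, not the WMO-SPICE implementation: the function and parameter names are invented here, and the actual thresholds were defined separately for each instrument and sampling resolution.

```python
import numpy as np

def quality_control(values, out_min, out_max, max_jump, smooth_sigma=None):
    """Sketch of the quality control steps for a 1 min series.
    Names and defaults are illustrative, not from the text."""
    v = np.asarray(values, dtype=float)

    # Range check: drop values outside the manufacturer-specified range.
    v = np.where((v < out_min) | (v > out_max), np.nan, v)

    # Jump filter: drop points exceeding the maximum point-to-point change.
    dv = np.abs(np.diff(v))
    jump = np.concatenate([[False], dv > max_jump])
    v = np.where(jump, np.nan, v)

    # Gaussian smoothing (weighing gauges only) to damp high-frequency noise.
    if smooth_sigma is not None:
        half = int(3 * smooth_sigma)
        x = np.arange(-half, half + 1)
        kernel = np.exp(-0.5 * (x / smooth_sigma) ** 2)
        kernel /= kernel.sum()
        v = np.convolve(np.nan_to_num(v, nan=0.0), kernel, mode="same")
    return v
```

Removed values are marked with NaN here, standing in for the "null" values and flags described above.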
The above steps were applied uniformly at a central archive to the measurements, with thresholds and filter parameters defined separately for each instrument and sampling resolution. Data were also flagged according to the results of the above checks and filters. Data and flags were output at 1 min temporal resolution. In addition, a manual assessment was used to identify and account for any periods during which instrument data were not available (e.g. instrument maintenance, site power outage) or during which instrument performance may have been compromised (e.g. frozen sensors). These steps are described in more detail in Reverdin (2016).
To ensure that the datasets used for the development of transfer functions
were as consistent as possible across all sites, and represented
precipitating periods with a high degree of confidence, a methodology was
established to identify precipitation events for further analysis using the
1 min, quality-controlled data. The occurrence of a precipitation event was
predicated on the fulfilment of two criteria: first, the precipitation
accumulated by the weighing gauge in the DFAR over a 30 min interval exceeded
a minimum threshold of 0.25 mm; and second, a precipitation detector reported
precipitation for at least 60 % of the interval.
The rationale behind the use of 30 min intervals and a 0.25 mm accumulation threshold for precipitation events is detailed elsewhere (Kochendorfer et al., 2017). To summarize this rationale, the 0.25 mm threshold was found to reduce the effects of measurement noise on the selection of events, while the 30 min interval provided a large sample size of events. The 60 % threshold for precipitation occurrence – corresponding to 18 min within a 30 min interval – was chosen primarily to ensure that the accumulation reported by the weighing gauge in the DFAR represented falling precipitation, and not one or more “dumps” of precipitation accumulated on the gauge housing or orifice into the bucket, or any other type of false accumulation. Separate tests, which are not described in detail here, indicated that the number of 30 min events selected was insensitive to the specific value of the threshold, provided it was between 50 and 80 %.
The event selection procedure was applied uniformly to the 1 min quality-controlled datasets from each site. The accumulation reported by the weighing gauge in the DFAR and each single-Alter-shielded and unshielded weighing gauge was determined for each event. Ancillary parameters, such as ambient temperature (minimum, maximum, and mean) and mean wind speed, were also determined for each event. Flags created in the quality control process were also aggregated and reported for each event. The resultant site event datasets (SEDs), which included all 30 min precipitation events selected within a winter season for a specific site, provided the basis for the development of transfer functions.
Additional filtering was performed on the resultant 30 min SEDs for the
transfer function development. At several sites, data from wind directions
associated with wind-shadowing from towers, shields, buildings, and other
obstructions were removed from the record. Single-Alter gauge measurements at
CARE, for example, were affected by a neighbouring wind shield for a limited
sector of wind directions, and events from those directions were excluded.
Unrealistic wind speed measurements were also removed.
Minimum accumulation thresholds were also used for all the gauges under test
(UT), to account for random variability in the event accumulation values. The
use of a minimum threshold was necessary because even the DFAR precipitation
measurements were subject to random variability. Tests performed using
identical gauge–shield combinations revealed that the application of a
minimum threshold to only one gauge arbitrarily included some events near the
threshold and excluded others and thereby biased the results towards the gauge used for the event selection.
Following Kochendorfer et al. (2017), the minimum threshold for the gauge
under test was estimated using Eq. (1):

Th_UT = Th_DFAR × CE_UT, (1)

where Th_DFAR is the 0.25 mm DFAR event threshold and CE_UT is the mean
catch efficiency of the gauge under test. Minimum thresholds were calculated
in this manner for both the unshielded and the single-Alter-shielded gauges.
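A minimal sketch of the Eq. (1) relation, assuming it scales the DFAR threshold by the mean catch efficiency of the gauge under test; the mean CE values used below (0.25 unshielded, 0.43 single-Alter) are back-computed from the 0.025 mm and 0.043 mm light-event thresholds quoted later in the text, and are illustrative rather than tabulated values.

```python
def min_threshold(ref_thresh, mean_ce):
    # Eq. (1) as read here: DFAR threshold scaled by the mean catch
    # efficiency of the gauge under test (interpretation, see lead-in).
    return ref_thresh * mean_ce

# With the 0.1 mm light-event reference threshold, these back-computed
# mean CE values reproduce the quoted 0.025 mm and 0.043 mm thresholds.
un_thresh = min_threshold(0.1, 0.25)   # unshielded
sa_thresh = min_threshold(0.1, 0.43)   # single-Alter-shielded
```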
To develop transfer functions for both 10 m and gauge height wind speeds,
the best available wind speed sensors at every site were used to estimate the
10 m and gauge height wind speeds. Because the availability and quality of
wind speed measurements varied from site to site, the methods used for wind
speed estimation also varied. Generally, the log-profile law was applied to
estimate the change in wind speed with height, assuming neutral surface layer
stability.
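The neutral-stability log-profile adjustment can be sketched as follows; the roughness length z0 is an assumed, site-specific value, not one taken from the text.

```python
import math

def log_profile(u1, z1, z2, z0=0.01):
    """Neutral-stability log-law estimate of the wind speed at height z2
    from a measurement u1 at height z1:
        u(z) ∝ ln(z / z0), so u(z2) = u1 * ln(z2/z0) / ln(z1/z0).
    z0 (m) is an assumed roughness length (sketch only)."""
    return u1 * math.log(z2 / z0) / math.log(z1 / z0)
```

For example, a gauge height measurement can be transferred upward to 10 m, with the estimate growing with height as the log law requires.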
In addition to the selection of events used to create the transfer functions,
light precipitation events (accumulated DFAR precipitation between 0.1 and
0.25 mm in 30 min) were also examined.
For the light precipitation analysis, four sites were selected: CARE, Bratt's Lake, Haukeliseter, and Sodankylä (Fig. 1). For automated gauges, the natural fluctuation (noise) around zero can easily be confused with light precipitation. To help distinguish between noise and precipitation, a minimum threshold of 0.1 mm in 30 min was chosen for the reference gauge. In addition, independent verification of precipitation occurrence was provided by a sensitive precipitation detector, following the same methodology used for the SEDs (Sect. 2.2.2). Light events were selected only when the precipitation detector observed precipitation for at least 18 min of the 30 min period.
The above criteria were applied to the quality-controlled, 1 min datasets from the selected sites. The selected events were filtered further using the procedures outlined in Sect. 2.2.3. For the tested unshielded and single-Alter-shielded gauges, minimum threshold values equivalent to those used for the SEDs were estimated by replacing 0.25 mm in Eq. (1) with 0.1 mm. The resultant unshielded (single-Alter-shielded) 0.025 mm (0.043 mm) minimum threshold values were applied, with all 30 min periods with less than the minimum threshold excluded from the analysis. Where possible, the gauge height wind speed was used in the computation.
Because many precipitation measurements are only available over longer time periods, 12 and 24 h precipitation accumulations were also used for the testing and development of transfer functions. These longer precipitation accumulations were created using a minimum threshold of 1.0 mm per each 12 or 24 h period as measured by the DFAR and a minimum of 15 min of precipitation as measured by the precipitation detector. Minimum and maximum thresholds for the unshielded and single-Alter-shielded measurements were calculated and applied using the same methods that were applied to the 30 min precipitation measurements, described in the section titled “Accumulation thresholds for gauges under test”. Accompanying mean air temperatures and wind speeds were also calculated for the 12 and 24 h precipitation accumulations.
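A sketch of how the 12 and 24 h accumulation periods described above could be assembled from 1 min data, using the stated thresholds (1.0 mm per period and at least 15 min of detected precipitation); the names are illustrative.

```python
import numpy as np

def select_long_periods(dfar_1min, detector_1min, hours=12,
                        accum_thresh=1.0, min_detector_min=15):
    """Assemble 12 or 24 h accumulation periods: keep a period when the
    DFAR accumulation reaches the threshold and the detector reports
    precipitation for at least the minimum number of minutes (sketch)."""
    window = hours * 60
    periods = []
    for start in range(0, len(dfar_1min) - window + 1, window):
        accum = float(np.sum(dfar_1min[start:start + window]))
        det_minutes = int(np.sum(detector_1min[start:start + window]))
        if accum >= accum_thresh and det_minutes >= min_detector_min:
            periods.append((start, accum))
    return periods
```

Mean air temperature and wind speed would be computed over the same windows for use with the transfer functions.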
A single transfer function of catch efficiency, expressed as a continuous function of both air temperature and wind speed, was fitted to the combined measurements from all sites (Eq. 3).
Without explicitly including air temperature, separate transfer functions of wind speed alone (Eq. 4) were also derived for mixed and solid precipitation.
Due to the prevalence of air temperature measurements in observing networks,
and the fact that not all of the WMO-SPICE sites included precipitation type
measurements, the 30 min mean air temperature was used to classify
precipitation as solid, mixed, or liquid.
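As an illustration of an Eq. (4)-style curve, the sketch below assumes a hypothetical exponential form CE = a·exp(−U/b) + c; the functional form and coefficient names are assumptions made here for illustration, not taken from the coefficient tables.

```python
import math

def ce_eq4(U, a, b, c):
    # Hypothetical Eq. (4)-style curve: catch efficiency as a function of
    # wind speed alone, with separate (a, b, c) sets fitted for mixed and
    # for solid precipitation. Assumed form, for illustration only.
    return a * math.exp(-U / b) + c
```

With such a form, CE tends toward a + c in calm conditions and decays toward the floor c as the wind speed increases.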
The resultant transfer functions were only valid for the range of wind speeds
available in the measurements from which they were derived. At high mean wind
speeds, where precipitation measurements were scarce or non-existent, the
transfer functions were unconstrained and inaccurate. A wind speed
threshold was therefore required, above which the measured wind speed was
replaced with the threshold value when applying the transfer functions. This wind speed threshold
was chosen by assessing the availability of high wind speed results for all
air temperatures after plotting catch efficiency in three dimensions as a
function of air temperature and wind speed. The mean wind speed threshold at
gauge height was 7.2 m s⁻¹.
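Applying a transfer function then amounts to dividing the measured accumulation by the predicted catch efficiency, with the wind speed capped at the threshold. A minimal sketch, with `ce_func` standing in for any fitted CE(U) curve:

```python
def adjust(p_measured, U, ce_func, U_thresh=7.2):
    """Adjust a gauge measurement by its predicted catch efficiency,
    capping the wind speed at the transfer-function threshold (7.2 m/s at
    gauge height, per the text). ce_func is any CE(U) curve; sketch only."""
    return p_measured / ce_func(min(U, U_thresh))
```

Capping the wind speed prevents the unconstrained high-wind tail of the fitted curve from producing very large over-corrections.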
Transfer functions were applied to obtain the adjusted SA and UN precipitation measurements at each site. Statistics were then calculated for each site by comparing the adjusted SA or UN gauge precipitation to the DFAR precipitation. This approach was chosen because it produced more universal multi-site transfer functions, with a single transfer function describing all of the available sites within WMO-SPICE, while simultaneously providing realistic estimates of the magnitudes of site biases that occurred due to local variations in climate and siting. Transfer function statistics were also produced for the entire dataset by combining the adjusted SA–UN measurements from all of the sites together and comparing them to the corresponding DFAR measurements.
Four different statistics were used to quantify errors in the different types
of measurements and adjustments. These statistics were all based on the
30 min precipitation measurements. The RMSE was calculated based on the difference between the measurements under test
and the corresponding DFAR measurements. The mean bias was also calculated
from the difference between the mean of the precipitation measurements under
test and the mean of the corresponding DFAR precipitation measurements. The
correlation coefficient (r) between the measurements under test and the DFAR
measurements was also calculated, as was the percentage of events agreeing
with the DFAR to within 0.1 mm (PE0.1mm).
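A sketch of the four 30 min statistics; the function and variable names are illustrative, with the agreement percentage computed for the 0.1 mm criterion used in the results.

```python
import numpy as np

def error_stats(p_test, p_ref, tol=0.1):
    """RMSE, mean bias, correlation coefficient, and percentage of events
    agreeing with the reference to within `tol` mm (sketch only)."""
    p_test = np.asarray(p_test, dtype=float)
    p_ref = np.asarray(p_ref, dtype=float)
    d = p_test - p_ref
    rmse = float(np.sqrt(np.mean(d ** 2)))
    bias = float(np.mean(p_test) - np.mean(p_ref))
    r = float(np.corrcoef(p_test, p_ref)[0, 1])
    pct_within = float(np.mean(np.abs(d) <= tol) * 100.0)
    return rmse, bias, r, pct_within
```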
To test the appropriateness of the transfer functions for the adjustment of 12 and 24 h precipitation records, the 30 min transfer functions were applied to 12 and 24 h precipitation measurements. To better evaluate the effects of using these longer time periods, new Eq. (3) type transfer functions were also derived using the 12 and 24 h measurements and the same methods as described for the 30 min events. Error statistics for the 12 and 24 h measurements were calculated and compared for both the 30 min transfer functions and the appropriate 12 and 24 h transfer functions.
Unshielded gauge measurements from all eight sites were used to create and
test transfer functions. The resultant unshielded Eq. (3) transfer function
coefficients for both gauge height and 10 m wind speeds are given in
Table 2. Equation (4) transfer function coefficients were also produced for both mixed and solid precipitation
(Table 3). The three-dimensional (Eq. 3) transfer functions in Fig. 3 are
shown as a function of wind speed only, with separate curves evaluated at
fixed air temperatures.
The
The
Transfer functions describing the unshielded (UN) catch efficiency
(CE) as a function of the gauge height wind speed.
Comparison of uncorrected, unshielded precipitation measurements with the
corresponding DFAR measurements.
As an example of the effects of the application of the transfer functions, the corrected and uncorrected unshielded measurements are compared to the DFAR measurements for CARE and Marshall in Fig. 4. The necessity of the adjustments is apparent from the uncorrected measurements (Fig. 4a and c), which often show smaller accumulations than those reported by the weighing gauge in the DFAR. Both the improvement and the remaining uncertainty in the corrected measurements are demonstrated in Fig. 4b and d. Adjusted and unadjusted measurements like these were used to produce error statistics for all eight sites.
The associated RMSE, bias, correlation coefficient (r), and PE0.1mm statistics for the unshielded measurements at all eight sites are shown in Fig. 5.
Unshielded gauge statistics for all eight sites, calculated from the
difference between the DFAR precipitation and both the uncorrected (dark
blue) and the corrected precipitation. The corrected precipitation
measurements were based on the Eq. (3) transfer function for both the 10 m
wind speed and the gauge height wind speed.
The resultant errors were not affected significantly by the type of transfer
function or wind speed measurement height used, although there were small
differences between the approaches at individual sites.
The Eq. (3) and Eq. (4) transfer functions generally produced similar results, with the exception of Weissfluhjoch, where the differences between Eq. (3) and Eq. (4) apparent in Fig. 2, at and just below the wind speed threshold, resulted in more significant errors for the Eq. (3) transfer function (Fig. 5). These differences may be specific to the population of data that was available to create and test these transfer functions, rather than being indicative of a general advantage of one correction type over the other.
The most significant exception to the general success of the unshielded
transfer functions was at Weissfluhjoch, where the RMSE actually increased
after adjustment (Fig. 5a). Most of the measurements at Weissfluhjoch were
improved by adjustment, as indicated by the significant increase in the
frequency of adjusted measurements within 0.1 mm of the reference
(PE0.1mm, Fig. 5). The over-correction of a smaller number of measurements
recorded at high wind speeds, however, increased the overall RMSE (Fig. 6).
Errors in the 30 min precipitation, estimated from the difference
between the DFAR and the corrected, unshielded gauges at Weissfluhjoch.
The general trend found for the Weissfluhjoch errors was valid for all sites, with the RMSE, bias, and correlation driven by the high wind speed results. This is partially because at high wind speeds in cold, snowy conditions, the transfer function adjustment more than doubled the amount of measured precipitation. Such large adjustments could significantly enhance errors in the adjusted catch efficiency, especially when the measured catch efficiency was higher than typical; at high wind speeds, a relatively small error in the measured precipitation is doubled or even tripled, resulting in errors in the adjusted precipitation of similar magnitude to the DFAR measurement itself. For this reason alone, errors in the adjusted catch efficiency look significantly different than errors in the measured CE. This highlights the value of determining errors and biases in the adjusted precipitation measurements rather than errors in the transfer functions. If necessary, cross-validation (e.g. Kochendorfer et al., 2017) or other statistical bootstrapping techniques can be used to independently validate transfer functions and estimate errors in transfer functions.
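The amplification described above can be illustrated numerically; the values below are arbitrary examples, not measurements from the paper.

```python
# Arbitrary illustrative values: at high wind speed the transfer function
# assumes a small catch efficiency, so dividing by it multiplies any raw
# catch error by 1/CE (here 2.5x).
p_true = 1.0                              # DFAR (true) precipitation, mm
ce_typical = 0.40                         # CE assumed by the transfer function
p_gauge = 0.50 * p_true                   # gauge caught more than typical
p_adjusted = p_gauge / ce_typical         # adjustment divides by CE
raw_error = p_gauge - ce_typical * p_true # 0.10 mm error in the raw catch
adj_error = p_adjusted - p_true           # inflated to 0.25 mm
```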
Another general trend in the results was that unshielded measurements from the sites with complex topography were more difficult to adjust, with the transfer functions performing worse at the mountainous sites. The average unshielded RMSE for the mountainous sites (Haukeliseter, Formigal, and Weissfluhjoch) was decreased from 0.48 mm (58.6 %) to 0.43 mm (53.7 %) by the adjustments, while for the other sites (CARE, Sodankylä, Caribou Creek, Marshall, and Bratt's Lake) it was decreased from 0.27 mm (42.0 %) to 0.20 mm (31.5 %). The mean of the absolute value of the unshielded biases for the mountainous sites was decreased from 0.33 mm (41.5 %) to 0.14 mm (18.0 %), and for the other sites it was decreased from 0.18 mm (28.4 %) to 0.04 mm (6.5 %) by the adjustments. The errors in both the adjusted and the unadjusted mountainous measurements were much larger than the errors from the other sites.
Transfer functions describing the single-Alter (SA) catch efficiency
(CE) as a function of the gauge height wind speed.
Single-Alter gauge statistics for all eight sites, calculated from the
difference between the DFAR precipitation and both the uncorrected (dark
blue) and the adjusted precipitation. The adjusted precipitation measurements
were based on the Eq. (3) transfer function for both the 10 m wind speed
and the gauge height wind speed.
The single-Alter-shielded measurements from all eight sites were combined and used to develop transfer functions. Table 2 describes the resultant Eq. (3) transfer functions, and Table 3 describes the Eq. (4) transfer functions. Transfer functions created using both the gauge height wind speeds and the 10 m wind speeds were produced for the single-Alter-shielded measurements. The wind speed thresholds used when applying the transfer functions are shown in Tables 3 and 4. For wind speeds exceeding the threshold values, the transfer functions should be applied by forcing the actual wind speed to the wind speed threshold, as discussed in Sects. 2.2.5 and 3.1.
The resultant Eq. (3) and Eq. (4) transfer functions for single-Alter
measurements showed greater similarity to each other (Fig. 7) than the
unshielded transfer functions (Fig. 3). They were also similar to the
Kochendorfer et al. (2017) transfer functions (Fig. 7), developed using
earlier measurements from only Haukeliseter and Marshall. Application of the
single-Alter transfer functions reduced the RMSE at most of the sites in
comparison to the unadjusted measurements (Fig. 8a). In addition, the results
were relatively insensitive to the methods used to produce the adjusted
measurements. The RMSE values were quite similar using Eq. (3) and Eq. (4), and
they were not affected significantly by the wind speed measurement height.
Like the unshielded measurements, the biases (Fig. 8b) and the percentage of
events within 0.1 mm of the DFAR (PE0.1mm) were generally improved by the
adjustments.
The single-Alter-shielded results from the mountainous Haukeliseter, Formigal, and Weissfluhjoch sites demonstrated the same trend as the mountainous unshielded measurements, with larger errors in both the corrected and the uncorrected mountainous measurements, and much smaller RMSE and biases for the flat sites. The single-Alter-shielded mean RMSE for the mountainous sites was only improved from 0.35 mm (42.8 %) to 0.33 mm (41.6 %) by adjustment, compared to the flat sites with a mean uncorrected RMSE of 0.18 mm (27.9 %) that was improved to 0.13 mm (21.0 %). For the single-Alter-shielded gauges, the mean of the absolute values of the biases for the mountainous sites were improved from 0.23 mm (29.0 %) to 0.15 mm (18.4 %), and for other sites it was improved from 0.11 mm (18.0 %) to 0.03 mm (4.7 %). The general trend was that both before and after adjustment, the RMSE and the biases were much larger for the mountainous sites than for the other sites. The unadjusted mountainous measurements exhibited larger uncorrected errors, and these errors remained larger than the other, flatter sites after correction.
In addition to the RMSE values shown in Figs. 5a and 8a, which describe the uncertainty of the adjusted measurements, the uncertainty of the CE transfer functions was also estimated. As described by Fortin et al. (2008), the uncertainty of an adjusted precipitation measurement is affected by the uncertainty of the transfer function and the magnitude of both the precipitation measurement and the adjustment. CE uncertainty estimates may be more difficult to interpret than the RMSE values included in Figs. 5a and 8a, but they can be used to calculate uncertainty estimates specific to new measurements and sites.
The RMSE values of the Eq. (3) transfer functions describing the unshielded CE were 0.18 for both the gauge height and 10 m height wind speed transfer functions. The magnitude of the RMSE values for the different unshielded Eqs. (3) and (4) transfer functions were similar and varied between 0.18 and 0.21. For the single-Alter-shielded transfer functions, the uncertainty varied from 0.18 to 0.19. When binned by wind speed, the resultant transfer function uncertainties were relatively insensitive to wind speed, and 0.2 can be used as a representative value for the uncertainty of all of the transfer functions, for both snow and mixed precipitation.
As described in Sect. 2.2.8, for the testing of the transfer functions, wind speeds greater than the wind speed threshold were replaced with the wind speed threshold. Due to the inaccuracy of transfer functions at very high wind speeds, where data available to constrain the resultant fit were scarce, failure to implement the wind speed threshold could cause large errors due to over-corrections at some sites. Although all measurements were used in the development of the transfer functions, the high wind speed precipitation measurements were typically more accurately corrected using the wind speed threshold than the measured wind speed.
In addition, changing the wind speed threshold, and thereby changing the
magnitude of the applied transfer function at high wind speeds, had a
significant effect on the resultant site-specific errors and biases at some
sites. Changing the gauge height wind speed threshold from 7.2
to 5 m s⁻¹, for example, substantially changed the magnitude of the
corrections applied to high wind speed events, and therefore also the errors
and biases at the windiest sites.
Blowing snow may have also contributed to errors found at high wind speeds,
with higher-than-normal catch efficiencies typically observed in blowing snow
events (e.g. Goodison, 1978; Schmidt, 1982). At Bratt's Lake, where the
effects of blowing snow were quite pronounced and there was independent
confirmation of blowing snow, these events were removed fairly easily by
removing all of the high-wind events from the record.
Although the Pluvio
The transfer functions were developed using 30 min events with at least 0.25 mm of DFAR-measured accumulation, so their performance for lighter precipitation events was assessed separately.
The four sites considered in the assessment of light events were characterized by different climate conditions. A total of 629 light precipitation events were observed at the low elevation, sub-arctic Sodankylä site, and 361 light events were observed at CARE, with its continental climate. Haukeliseter, which is located in a mountainous region well above the tree line, experienced 260 light events. The smallest number of light events (62) was observed at the Bratt's Lake site, which is located in a prairie region with flat terrain.
Similar to the previous analysis, statistics were computed for each site by
comparing the DFAR precipitation measurements to both the unadjusted and the
adjusted light precipitation measurements (Fig. 9). The results for the
unshielded and single-Alter-shielded gauges were similar with regard to the
benefit of the transfer function applications. The RMSE values were improved
at three locations, with the only exception being the windiest site
(Haukeliseter). The mean biases were improved, and often became positive,
indicating that the applied transfer function corrected the underestimation
in most cases. The percentage of events that agreed with the reference
accumulation within the 0.05 mm range was also improved at all sites. After
adjustment, many more cases fell
within the 0.05 mm error threshold, even for Haukeliseter, where 8 %
more SA and 24 % more UN gauge observations were closer to the DFAR. The
highest correlation between the reference and SA (UN) gauges was observed at
Sodankylä, with unadjusted correlations of 0.87 (0.76) that were improved
even further by adjustment. For Haukeliseter, the correlation decreased after
adjustment due to over-corrected outliers. This was caused in part by events
with unadjusted SA and UN measurements equal to or greater than the
corresponding DFAR measurements. For example, following Eq. (4), the
predicted catch efficiency at a 7 m s⁻¹ wind speed is low, so a measurement
that already equalled the DFAR accumulation was substantially over-corrected
when divided by such a small catch efficiency.
SLED gauge statistics for four sites, calculated from the difference
between the DFAR precipitation and both the uncorrected (dark blue) and the
corrected precipitation.
Errors in the 30 min SLED precipitation, estimated from the
difference between the DFAR and the corrected single-Alter-shielded
measurements.
Because many historical precipitation measurements are only available for 12
and 24 h periods, the effects of applying the derived 30 min transfer
functions to such measurement periods were examined. To help quantify the
sensitivity of a transfer function to the accumulation time period used for
its derivation, adjustments were also derived using both the 12 and 24 h
periods. Error statistics for the adjusted 12 h measurements were estimated
by applying both the 12 h transfer function and the 30 min transfer
function to the same 12 h unshielded precipitation measurements (Fig. 11).
The differences between the resultant RMSEs were small, and varied from site
to site, with the 30 min transfer function producing smaller RMSE on average
(All, Fig. 11a). The biases in the 30 min transfer functions were more
negative than the 12 h transfer function biases, with the mean bias for all
the measurements slightly underestimated in comparison to the near-zero 12 h
transfer function bias (All, Fig. 11b). Differences among the resultant
correlation coefficients and PE0.1mm values were small.
Error statistics for 12 h unshielded precipitation measurements that are uncorrected (blue), corrected using the 30 min derived transfer functions (green), and corrected using the 12 h derived transfer functions (yellow) are compared.
Error statistics for 24 h unshielded precipitation measurements that are uncorrected (blue), corrected using the 30 min derived transfer functions (green), and corrected using the 24 h derived transfer functions (yellow) are compared.
The differences between the 30 min and the 12 or 24 h transfer function biases may have arisen because conditions during precipitation were windier or colder than the averages over the longer periods. The 30 min adjustments, for which the mean temperature and wind speed were more representative of conditions during precipitation, typically under-corrected the longer accumulation periods slightly, because the 12 or 24 h mean conditions were typically warmer and less windy than the conditions during precipitation itself. This agrees with the analysis of high-frequency meteorological measurements from Jokioinen, Finland, from the WMO Solid Precipitation Intercomparison (Goodison et al., 1998), which compared storm-average and 12 h average air temperatures and wind speeds. Significant variability was noted, and using the 12 h average air temperature and wind speed rather than storm-average values in the transfer functions was shown to slightly under-correct the precipitation measurements, because the storm-average temperature at Jokioinen was typically lower than the 12 h average temperature and the storm-average wind speed was typically higher than the 12 h average wind speed.
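This sensitivity to the averaging period can be demonstrated with a toy example in which all of the precipitation in a 12 h period falls during its windiest half-hours. The wind-only exponential catch-efficiency curve and its coefficient are illustrative assumptions, not fitted values:

```python
import math

def catch_efficiency(wind, a=0.06):
    # Simplified wind-only catch-efficiency curve (illustrative).
    return math.exp(-a * wind)

# Synthetic 12 h period of 30 min intervals: (precipitation mm, wind m/s).
# Precipitation falls only during the windy intervals.
half_hours = [(0.2, 7.0)] * 8 + [(0.0, 2.0)] * 16

# Adjust each 30 min interval using its own wind speed.
adj_30min = sum(p / catch_efficiency(u) for p, u in half_hours if p > 0)

# Adjust the 12 h total using the 12 h mean wind speed.
total = sum(p for p, _ in half_hours)
mean_wind = sum(u for _, u in half_hours) / len(half_hours)
adj_12h = total / catch_efficiency(mean_wind)

# The calmer 12 h mean wind gives a higher CE, so the 12 h
# adjustment is smaller: the longer period is under-corrected.
```

Here adj_12h is smaller than adj_30min, mirroring the slight under-correction of the longer accumulation periods described above.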
Additional uncertainty was introduced into the 12 and 24 h precipitation measurements because it was not possible to screen for wind direction; 12 and 24 h average wind directions were not always representative of the different wind directions recorded during precipitation, so at some sites these measurements may have been affected by wind shadowing from neighbouring obstructions. In addition, the assumption of neutral conditions used to estimate the gauge-height or 10 m wind speed from the available measurements may not always have been valid for the 12 and 24 h mean wind speeds. For the 30 min measurements this assumption is defensible, because skies are typically overcast during precipitation, whereas clear skies are associated with strong surface heating and cooling and large vertical air temperature gradients. Longer periods of 12 or 24 h, however, may include both precipitation and clear skies, as well as both stable and unstable surface-layer conditions. The effects of these issues are likely small, especially given the demonstrated performance of the 30 min adjustments over longer time periods, but they may nevertheless merit closer inspection in future work.
Transfer functions developed and tested on eight separate sites were shown to
reduce the biases in both unshielded and single-Alter-shielded weighing gauge
precipitation measurements. For the unshielded gauges, before adjustment the
mean bias from all sites was
The mountainous sites of Formigal, Haukeliseter, and Weissfluhjoch were more difficult to correct, with larger biases and RMSEs remaining after adjustment. Higher wind speeds at the mountainous sites cannot fully explain this, as only one of the mountainous sites was much windier than the others, and the mean site errors were not well correlated with mean site wind speed. One possible explanation is that it was more difficult to measure representative wind speeds at the mountainous sites. The Weissfluhjoch wind speed measurements provide some support for this hypothesis: wind speeds were available from two different locations simultaneously during the winter of 2014–2015, and they differed significantly from each other for all wind directions. Additional wind speed measurements, including more locations, more heights, and three-dimensional sonic anemometer measurements of the mean vertical wind speed, might have helped identify the possible effects of large-scale standing circulations that could have contributed to these discrepancies at the more complex sites. It is also possible that site-specific relationships between precipitation type, crystal habit, and air temperature at the mountainous sites contributed to errors in the adjusted measurements. However, because one mountainous site was over-corrected (Weissfluhjoch) and the other two were under-corrected (Formigal and Haukeliseter), it is not possible to recommend general modifications to the transfer functions for use at mountainous sites. These results indicate that the transfer functions developed and presented here should be evaluated at new testbeds in mountains and complex terrain, and in other areas subject to high winds and unusual precipitation.
As indicated by the RMSE values, significant differences between the DFAR measurements and both the unshielded and the single-Alter-shielded measurements persisted after adjustment. For example, excluding gauges from the same three mountainous sites, the mean RMSE of the adjusted unshielded gauges was still 0.20 mm, or 31.5 % of the mean 30 min precipitation. The mean RMSE of the adjusted single-Alter-shielded gauge measurements at the flat sites (0.13 mm, or 21.0 %) was lower than the unshielded-gauge RMSE, confirming the increased accuracy provided by a single-Alter shield, but it was still significant. These errors in the adjusted measurements were presumably caused by a combination of factors, such as random spatial variability of precipitation across an individual site, sensor-induced noise in the precipitation measurements, the multiplicative effect of the transfer function corrections at high wind speeds (which can double or even triple both the amount of precipitation and the accompanying errors), and the effects of variability of crystal habit on catch efficiency. This suggests that to produce more accurate measurements, a better understanding of the interaction of the snowflake trajectory past a given gauge and wind shield combination is needed. Recent computational fluid dynamics studies of airflow and snowflake trajectories past simple representations of gauges and Alter shields provide initial insights into this complex interaction (Colli et al., 2015; Theriault et al., 2012).
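For reference, the error statistics used throughout this comparison (bias, RMSE, and Pearson correlation between the reference and gauge accumulations) can be computed as in the following sketch; the function name and example values are hypothetical:

```python
import math

def error_stats(reference, gauge):
    """Bias, RMSE, and Pearson correlation of gauge accumulations
    relative to a reference (e.g. the DFAR)."""
    n = len(reference)
    diffs = [g - r for r, g in zip(reference, gauge)]
    bias = sum(diffs) / n  # mean(gauge - reference); negative = undercatch
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mr = sum(reference) / n
    mg = sum(gauge) / n
    cov = sum((r - mr) * (g - mg) for r, g in zip(reference, gauge))
    var_r = sum((r - mr) ** 2 for r in reference)
    var_g = sum((g - mg) ** 2 for g in gauge)
    corr = cov / math.sqrt(var_r * var_g)
    return bias, rmse, corr
```

Note that a gauge catching a constant fraction of the reference yields a negative bias and a non-zero RMSE while still correlating perfectly with the reference, which is why bias, RMSE, and correlation are reported separately.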
Two different types of weighing gauges – the Geonor T-200B3 and the OTT
Pluvio2 – were used in this study.
Overall, the adjusted single-Alter-shielded measurements were found to be more accurate than the adjusted unshielded measurements. For all of the adjusted measurements, the average unshielded RMSE was 0.31 mm (42.8 %), and for the single-Alter shield the RMSE was 0.21 mm (30.6 %). The errors in the unadjusted single-Alter measurements were also generally smaller than the unadjusted unshielded measurements. This is consistent with the design philosophy of shields, which is to reduce the horizontal wind speed inside the shield, and thereby reduce the effects of the gauge on the flow around it.
The pre-SPICE transfer functions, which were created using both Marshall and Haukeliseter measurements for the single-Alter-shielded gauge and only the Marshall site for the unshielded gauge (Kochendorfer et al., 2017), were quite similar to the more universal multi-site transfer functions developed here. This indicates that, despite notable differences among the eight sites included in this study, robust transfer functions can be created using measurements from only one or two sites, provided that those sites experience typical catch efficiencies. For example, if Formigal had been used to develop a transfer function for the Weissfluhjoch site (or vice versa), the resultant errors in the adjusted measurements would have been large, as Formigal was on average under-corrected by the multi-site transfer function and Weissfluhjoch was over-corrected. This also demonstrates the added value of using multiple sites to develop and test transfer functions.
The transfer functions also performed well on the light precipitation events, with improved biases and increases in the number of events that were within 0.05 mm of the DFAR measured precipitation. These results did not indicate that there was a significant change in the catch efficiency for light precipitation. They also confirmed the effectiveness of the transfer functions on these independent measurements, as the light precipitation events were excluded from the datasets used to create the transfer functions.
Application of the derived transfer functions to 12 and 24 h precipitation accumulations indicated that the transfer functions derived using 30 min periods can be applied to longer time periods. This is important for historic precipitation records, which are often only available every 12 or 24 h. In general, the sensitivity to the period chosen to derive the transfer function was small, and it varied from site to site. Most importantly, when tested on 12 and 24 h precipitation measurements, the differences between the error statistics describing transfer functions derived from 30 min, 12 h, and 24 h accumulations were in all cases smaller than the variability between sites. This indicates that when these transfer functions are applied to new sites, errors due to the variability in climatology will be larger than errors caused by longer measurement frequencies.
The data used in the preparation of this manuscript will be made available to the public after the WMO-SPICE Final Report has been published.
The authors declare that they have no conflict of interest.
Many of the results presented in this work were obtained as part of the Solid Precipitation Intercomparison Experiment (SPICE) conducted on behalf of the World Meteorological Organization (WMO) Commission for Instruments and Methods of Observation (CIMO). The analysis and views described herein are those of the authors at this time, and do not necessarily represent the official outcome of WMO-SPICE. Mention of commercial companies or products is solely for the purposes of information and assessment within the scope of the present work, and does not constitute a commercial endorsement of any instrument or instrument manufacturer by the authors or the WMO.
This article is part of the special issue “The World Meteorological Organization Solid Precipitation InterComparison Experiment (WMO-SPICE) and its applications (AMT/ESSD/HESS/TC inter-journal SI)”. It is not associated with a conference.
We thank Hagop Mouradian of Environment and Climate Change Canada for contributing the mapped site locations (Fig. 1). This research was funded by the Korean Ministry of Land, Infrastructure and Transport through a grant (16AWMP-B079625-03) from the Water Management Research Program. We also thank Eckhard Lanzinger, Vincent Fortin, and Kay Helfricht for providing thoughtful reviews of the originally submitted version of this manuscript, and greatly improving the quality of this paper. Edited by: Samuel Morin Reviewed by: Vincent Fortin, Eckhard Lanzinger, and Kay Helfricht