This study produced frequency maps with more spatial detail with respect to mesoscale variation in snowpack accumulation. The following sections describe the use of a snow accumulation and ablation model to provide quality control of SWE observations from nine NWS stations and to estimate continuous SWE records at 46 stations over Minnesota and the eastern Dakotas (Figure 1). The NWS station data are a raw data set that has never been subjected to any analysis or quality control process for identifying errors (Schmidlin 1990). By comparing simulated data with observed data, erroneous observed values can be identified and discarded from further analyses. The daily maximum value of SWE for the period March 1-15 was tabulated for each of the 40 years, 1953-1992. These years were selected because 1953 is when the NWS began observing the water equivalent of snow on the ground. The 1, 2, 4, 10, 20, and 50 percent probabilities of exceedence were computed for each station by fitting the data to an extreme value Type I distribution. Contour maps for each probability level were produced, and significant features are discussed.
Figure 1. Station Network.
This study required each data point to have a complete 40 year record of maximum SWE for the first fifteen days of March. The exclusive use of observed SWE would limit the study to the nine National Weather Service first-order stations. To achieve the desired density of data points, other sources of data were needed.
SWE data from 1953 to 1992, drawn from three sources over Minnesota and the eastern Dakotas, were analyzed. The first source contains nine complete sets of daily records from the National Weather Service first-order stations within the study area. The National Weather Service's snow accumulation and ablation model was used for quality control of these data as described earlier. The second SWE data set is made up of National Weather Service cooperative observer weekly reports. These data were archived in map form and had to be manually extracted. The last set consisted of U.S. Army Corps of Engineers (COE) snow survey data. The snow surveys were done at prescribed locations but were not done every year. These data mainly cover northern Minnesota and North Dakota and were retrieved from microfilm at the St. Paul Corps of Engineers. The second and third data sets are not complete over the 40 year period, possibly because data were lost or observations were simply not taken. These two sets required augmentation and quality control, which was also done with the snow model.
The NWS snow accumulation and ablation model is a conceptual, continuous simulation model that mathematically represents each of the significant physical processes involved in the buildup and melt of a snowpack (Anderson 1973). All energy processes are parameterized in terms of air temperature, and the model accounts for SWE over time. The flowchart in Figure 2 depicts each of these processes as they would occur during each six-hour iteration of the model. The computational time interval of the model is six hours, limited by the coarsest interval into which the input data can be disaggregated: precipitation can be disaggregated into hourly data, but temperature only into six-hour data, so temperature is the limiting factor (the reasoning is discussed later). Therefore, all input must be in terms of six-hourly mean values. These processes will be described, in terms of their parameterization, in the following sections.
Figure 2. Flowchart of Snow Accumulation and Ablation Model (Anderson 1973).
The snow model requires precipitation input to be a continuous sequence of six-hourly amounts. The climatological database contains mostly stations that report precipitation on a daily basis and many fewer that report on an hourly basis. Many of these stations have incomplete records over their reporting history. Therefore, procedures were needed to:
Preliminary investigations and modeling found that using only the precipitation record from the station being modeled was not producing satisfactory results. This was attributed to biases in the precipitation record caused by some combination of:
To produce better precipitation data for input to the snow model, it was decided to compute mean areal precipitation (MAP) for a defined area around the station of interest. This procedure had a 'correcting' effect on biased data in that no one observation overwhelmingly dominated the final result. The use of mean precipitation over an area is generally applicable in snow situations because winter storms in this region of the country produce fairly uniform precipitation amounts over large areas. The opposite situation is summer thundershowers, which produce highly varying amounts over the same large areas. The averaging process is also generally applicable in non-mountainous areas (Anderson 1973).
The computer software used to compute these mean areal values is the National Weather Service's Mean Areal Precipitation for Model Calibration program. This program can estimate daily station precipitation and distribute it on an hourly basis. It also produces the mean areal precipitation values in six-hour intervals over the entire period of interest. This MAP procedure allowed for snowpack modeling at stations where precipitation records were not complete over the period of interest, but where snow observations were.
The program essentially contains five parts:
These five parts will be discussed in detail.
The area around the station being estimated is divided into four quadrants by North-South and East-West lines. The closest station with observed data in each quadrant is selected as an estimator. In Figure 3, points D, F, I, and L would be used as estimators. The weight of each estimator is equal to the reciprocal of the square of its distance from the station of interest. The estimate of precipitation at the station of interest is the weighted average of the four stations. If one or more quadrants contain no point of known precipitation, then the average involves only the remaining quadrants. In all estimation routines, it is assumed that orographic influences do not have a major effect on storm accumulation at any individual location. Therefore, station normals are not used to adjust estimates.
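The quadrant procedure described above can be sketched as follows. This is an illustrative reimplementation, not the operational NWS program; the station layout and the helper name are assumptions.

```python
import math

def quadrant_idw_estimate(target, stations):
    """Estimate precipitation at `target` (x, y) from the closest
    reporting station in each of the four quadrants, weighted by the
    reciprocal of the squared distance. Quadrants are formed by
    North-South and East-West lines through the target; empty
    quadrants simply drop out of the weighted average."""
    tx, ty = target
    closest = {}  # quadrant -> (distance, precip)
    for (sx, sy), precip in stations:
        if precip is None:              # station not reporting
            continue
        dx, dy = sx - tx, sy - ty
        quad = (dx >= 0, dy >= 0)       # four quadrant labels
        d = math.hypot(dx, dy)
        if quad not in closest or d < closest[quad][0]:
            closest[quad] = (d, precip)
    num = sum(p / d ** 2 for d, p in closest.values())
    den = sum(1.0 / d ** 2 for d, _ in closest.values())
    return num / den
```

With only two quadrants occupied, the average involves just those two stations, mirroring the rule in the text.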
Figure 3. Example of typical observation station network.
There are two situations in which hourly precipitation values must be estimated, and each is handled differently. The first is when hourly values are simply missing; the second is when hourly values are missing but an accumulated value is reported at the end of the missing period.
The missing hourly value for a station is computed by a weighted average, using the square of the reciprocal of the distance between stations as the weight. The equation is as follows: (1)
The missing hourly precipitation reported as an accumulation is distributed on a proportional basis. The equation is as follows: (2)
where Ax, Ai, and (di,x) are as in eq.(1).
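The proportional distribution of equation 2 can be sketched as follows. The function name and the uniform fallback for an all-zero estimate are illustrative assumptions, not the operational code.

```python
def distribute_accumulation(total, hourly_estimates):
    """Distribute an accumulated precipitation total over the missing
    hours in proportion to the estimated hourly values (in the spirit
    of eq. 2: each hour receives a share of the reported accumulation
    proportional to its weighted-average estimate)."""
    s = sum(hourly_estimates)
    if s == 0:
        # no signal from surrounding stations: spread uniformly
        n = len(hourly_estimates)
        return [total / n] * n
    return [total * e / s for e in hourly_estimates]
```

The distributed values always sum back to the reported accumulation, which is the property the procedure is designed to preserve.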
At this point, all hourly stations have complete records, and distribution of daily values into hourly values is possible.
The daily observations are converted to hourly values by using equation 2 with the variables now described as:
An area around the snow station to be modeled is described by at least three latitude-longitude pairs. Weights for each precipitation station affecting the area are computed using the Thiessen method. Figure 4 illustrates a typical mean areal precipitation area and associated Thiessen station weights.
Figure 4. Station 1 is the modeled station. Stations with weight = 0 are hourly observation stations.
The computation of MAP is done by multiplying the hourly precipitation by the station weight for all stations in the area and summing over six-hour intervals to create the desired continuous time-series.
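The MAP computation just described can be sketched as follows, assuming the Thiessen weights have already been determined. Station names and the data layout are illustrative.

```python
def mean_areal_precip_6h(hourly, weights):
    """Compute a continuous six-hourly MAP time series.
    hourly:  {station: list of hourly precipitation values}
    weights: {station: Thiessen weight}, summing to 1 over the area.
    Each hourly areal value is the weight-multiplied sum over
    stations; six consecutive hours are then summed."""
    n = len(next(iter(hourly.values())))
    map_hourly = [sum(weights[s] * hourly[s][t] for s in weights)
                  for t in range(n)]
    return [sum(map_hourly[t:t + 6]) for t in range(0, n, 6)]
```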
The snow accumulation and ablation model requires mean temperature input on a six-hour basis. Since historical temperature data exists only as daily maxima and minima, a computational procedure is needed to convert these daily maxima and minima to six-hour mean values. The computational software for this process is the National Weather Service's Mean Areal Temperature for Model Calibration program.
The computation begins by assigning a maximum and minimum temperature to each day on the basis of station observation time. If the observation time is in the a.m., then the reported maximum temperature is for the previous day and the minimum temperature is for the current day. For stations with a p.m. observation time, both maximum and minimum temperatures are assumed to have occurred on the current day.
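The observation-time rule above can be sketched as follows. The function name, the return layout, and the use of an hour-of-day threshold to distinguish a.m. from p.m. observers are illustrative assumptions.

```python
def assign_max_min(obs_hour, tmax, tmin, day):
    """Assign reported max/min temperatures to calendar days based on
    observation time: for an a.m. observation the reported maximum
    belongs to the previous day and the minimum to the current day;
    for a p.m. observation both belong to the current day."""
    if obs_hour < 12:   # a.m. observation
        return {day - 1: {"tmax": tmax}, day: {"tmin": tmin}}
    return {day: {"tmax": tmax, "tmin": tmin}}
```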
The second step in the procedure is to estimate a station's missing maximum and minimum temperature values. This procedure is much the same as for precipitation estimation. The area around each station is divided into four quadrants, and the closest station in each quadrant with non-missing, non-estimated data is used. The estimation procedure in non-mountainous areas assumes that the mean maximum and minimum temperatures are the same at all stations and, therefore, that the spatial temperature gradients are a linear function of distance (Anderson 1973). The resultant equation for temperature estimation is: (3)
Now, all stations have a continuous maximum and minimum temperature record over the period of interest.
The next step converts the max-min data to six-hourly mean data. At this point, an assumption is made to use a fixed diurnal temperature fluctuation since the actual time of daily maximum and minimum is unknown. The empirical relationships used in this procedure to produce six-hourly mean temperatures were derived from research at the Central Sierra Snow Laboratory near Donner Summit, California, and the NOAA-ARS Cooperative Snow Research Station near Danville, Vermont (Anderson 1973). The relationships used are:
This imposition of a fixed diurnal temperature pattern will introduce errors into the snowpack modeling. The most significant errors occur in late fall and late winter, when large storm systems draw unseasonably warm air up from the south into the Upper Midwest. It is not uncommon for the temperature to increase during the nighttime hours, so the low temperature for the day may not occur in the early morning hours. In these situations the passage of weather fronts drives the temperature cycle, whereas the model assumes temperatures are controlled by a solar-driven diurnal cycle. Under such a variant temperature cycle, precipitation events may be mis-classified by the model (snow when it actually rained and rain when it actually snowed). Most of these situations are easily identified in the snow modeling process by analyzing event snowfall records. The input temperatures are then manually corrected to produce the appropriate simulated precipitation type.
The last step in processing the temperature data for model input is to create areal means. The arguments for using a mean areal value for precipitation input also apply to temperature input. Basically, a single point temperature observation may not accurately reflect the temperature acting on the snowpack over a broad area. If a single temperature station with a consistent bias is used in modeling, the resultant calibrated model parameters may not be representative of the physical surroundings that affect the snowpack. The use of areal means in modeling resulted in a station's calibrated parameters falling within reasonable ranges for its physical surroundings (as outlined by Anderson, 1973).
Areal mean temperature was computed by laying a grid over the defined area (Figure 5). For each grid point in the area, a station weight (1.0/di,x) is computed for the closest station in each quadrant. The normalized sum of these station weights becomes the areal weight for that station. A continuous series of six-hour mean areal values is then computed for the defined period.
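The grid-based areal weighting can be sketched as follows. The inputs and names are illustrative; the weight 1/d per closest quadrant station follows the description above.

```python
import math

def grid_areal_weights(grid_points, stations):
    """For each grid point, find the closest station in each of the
    four quadrants and credit it a weight of 1/d; the accumulated
    station weights are then normalized to sum to 1 over the area."""
    totals = {name: 0.0 for name, _ in stations}
    for gx, gy in grid_points:
        closest = {}  # quadrant -> (distance, station name)
        for name, (sx, sy) in stations:
            dx, dy = sx - gx, sy - gy
            quad = (dx >= 0, dy >= 0)
            d = math.hypot(dx, dy) or 1e-9   # guard collocated points
            if quad not in closest or d < closest[quad][0]:
                closest[quad] = (d, name)
        for d, name in closest.values():
            totals[name] += 1.0 / d
    s = sum(totals.values())
    return {name: w / s for name, w in totals.items()}
```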
Figure 5. Station 1 same as in Figure 4.
A model parameter, compared against air temperature, determines whether the current precipitation is rain or snow. Typically, this value is set to 0°C, since a majority of the time when it snows the air temperature is at or below freezing. Some precipitation events may still be mis-classified by the model, because snow can occur at temperatures above freezing and rain at temperatures below it. If it can be determined that one of these situations consistently occurs, the parameter may be set to a temperature above or below freezing. In this study, the value was set to 0°C for all locations, and any significant event that was mis-classified was corrected for manually by adjusting the temperature input.
If precipitation is classified as snow, a mean correction factor is applied to it. The correction factor is a parameter determined by calibration; it corrects for gage catch deficiency during snowfall and accounts for net vapor transfer and gains or losses due to drifting. Because it is a mean correction factor, precipitation errors may still exist for individual storms; over the course of a season, these errors are assumed to cancel each other out. The adjusted amount of precipitation is added to the existing snow cover.
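The classification and correction steps can be sketched together as follows. The correction factor value of 1.2 is an illustrative assumption, not a calibrated value from the study.

```python
def new_snow_water_equiv(precip_mm, air_temp_c, scf=1.2, rain_snow_temp=0.0):
    """Classify precipitation as snow when the air temperature is at or
    below the rain/snow threshold (0 °C here, as in the study) and apply
    the mean snow correction factor (SCF) to the snow amount. Rain adds
    nothing to the pack through this pathway; it is handled by the
    rain-on-snow melt computations instead."""
    if air_temp_c <= rain_snow_temp:
        return precip_mm * scf   # corrected new-snow water equivalent
    return 0.0
```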
The energy exchange at the snow-air interface is considered to occur under three conditions:
The melt versus non-melt periods are delimited by another model parameter, referred to as the base-melt temperature. If the air temperature is below the base-melt temperature, it is assumed that no melt is occurring at the snow surface. The base-melt temperature for this study was set to 0°C for all locations. The parameter may be set above or below 0°C to correct for temperature biases or any other observed conditions that would warrant the change. Each of the three conditions is discussed in the following sections.
Snowmelt factors during rain periods are quite distinct from those during non-rain periods. During rain-on-snow events, the dominant energy transfer processes are known if a few reasonable assumptions are made (U.S. Army Corps of Engineers 1956):
The energy balance of a melting snowpack can be expressed as: (8)
Units of all quantities are millimeters of water equivalent.
The energy balance equation is reformulated as detailed in Anderson (1973) to result in a melt equation for rain on snow events (9).
Air temperature and precipitation are data inputs to the model. Air pressure is computed from the station elevation using the "standard atmosphere" altitude-pressure relationship. The wind function is an empirical equation developed by Anderson (1973) that encompasses the average 6-hour wind movement at a height of 1 meter during rain-on-snow events. If the amount of rain exceeds 2.5 millimeters during a six-hour period, melt is computed by the method described above.
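The altitude-pressure step can be sketched with a common textbook form of the standard atmosphere; the exact formula used by the model is not given in the text, so this is an assumption.

```python
def standard_atmosphere_pressure(elev_m):
    """Station air pressure (hPa) from elevation (m) via the standard
    atmosphere lapse formula: 1013.25 hPa at sea level, decreasing
    with altitude through the troposphere."""
    return 1013.25 * (1.0 - 2.25577e-5 * elev_m) ** 5.25588
```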
The energy balance equations are not used to compute melt during non-rain periods because of the wide variability of meteorological conditions that can occur during such periods: sunny versus overcast, dry versus humid, calm versus windy. The meteorological parameters for these conditions can be determined for an experimental location, allowing the use of an energy balance equation to compute melt, but these parameters are not part of the historical meteorological database for all locations. Therefore, an empirical equation based on air temperature is used (Anderson 1973): (10)
Several studies have shown that the melt factor varies seasonally to account for snowpack condition. A study at the Central Sierra Snow Laboratory (Anderson 1968) found that the seasonal variation in the melt factor could be represented by a sine function (Figure 6). The equation for the melt factor is: (11)
MFMAX and MFMIN are model parameters to be determined in the calibration process. This equation was tested by Anderson (1973) at several locations throughout the continental United States and found to be adequate at each.
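The sinusoidal seasonal variation of equation 11 can be sketched as follows. The phase convention (days counted from about March 21, so the factor peaks near June 21 and bottoms near December 21) and the MFMAX/MFMIN values are illustrative assumptions consistent with Figure 6, not the study's calibrated values.

```python
import math

def melt_factor(day_of_year, mfmax=1.0, mfmin=0.2):
    """Seasonal melt factor (mm per degree C per 6 h): a sine curve
    oscillating between MFMIN (winter) and MFMAX (summer), in the
    spirit of eq. 11 and Figure 6."""
    n = day_of_year - 80   # days since ~March 21 (equinox)
    return ((mfmax + mfmin) / 2.0
            + math.sin(2.0 * math.pi * n / 366.0) * (mfmax - mfmin) / 2.0)
```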
Figure 6. Seasonal variation in melt factors used during non-rain periods (Anderson 1968).
The gain and loss of heat from the snowpack during this period is assumed to be proportional to the temperature gradient in the upper portion of the snowpack. The surface temperature is approximated by the air temperature. The temperature within the snowpack is a damped reflection of changes in the surface temperature (Anderson 1973). The snowpack temperature is represented in the model by an antecedent temperature index (ATI). This index weights the previous surface temperatures by amounts that decrease with time. The index equation contains a parameter that allows the user to regulate how much weight is placed on those prior temperatures. In short, the user controls how much memory the pack has of prior temperatures.
The gain or loss of heat from the snowpack is expressed as: (12)
The proportionality factor is needed because heat transfer by conduction, the primary mode of heat transfer within a snow cover, tends to increase as the season progresses and the cover density increases. The model assumes this factor varies in the same manner as the melt factor Mf. Therefore: (13)
where: NMF is a model parameter, referred to as the maximum negative melt factor.
This parameter and the parameter in the antecedent temperature index calculation are adjusted in the model calibration process.
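The antecedent temperature index and the non-melt heat exchange of equations 12 and 13 can be sketched together as follows. The update form of the index and the parameter values (TIPM, NMF) are illustrative assumptions in the spirit of the text, not the study's calibrated values.

```python
def update_ati(ati, air_temp, tipm=0.5):
    """Antecedent temperature index: an exponentially weighted average
    of past surface (air) temperatures. TIPM (0 < TIPM <= 1) controls
    how much 'memory' the pack has of prior temperatures."""
    return ati + tipm * (air_temp - ati)

def heat_exchange(ati, surface_temp, mf, mfmax, nmf):
    """Heat gain or loss during non-melt periods: proportional to the
    gradient between the index (pack) temperature and the surface
    temperature, with the proportionality factor varying seasonally as
    NMF * Mf / MFMAX (eqs. 12-13 in spirit)."""
    return nmf * (mf / mfmax) * (ati - surface_temp)
```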
The heat storage within the snowpack is represented within the model by a heat deficit. The deficit indicates the amount of heat, in terms of millimeters of water, needed to bring the pack to an isothermal state at 0°C with the same liquid water content as the last time the pack was at 0°C (U.S. Army Corps of Engineers 1956).
The heat deficit can change in three ways. First, there is energy exchange during non-melt periods. Second, the total heat deficit is increased by the heat deficit of new snowfall. Lastly, the heat deficit is reduced by melt or rainwater freezing within the snowpack.
A snowpack retains liquid water against gravity much the way soils do. In the model, the amount of liquid water that can be held is a percentage by weight of the solid portion (U.S. Army Corps of Engineers 1956). This percentage is a model parameter to be set during calibration.
The excess liquid water within the pack is transmitted through the snowpack and becomes the outflow from the snowpack. The model uses empirical equations to first lag the flow and then attenuate it. These equations were developed using data from the Central Sierra Snow Laboratory lysimeter (Anderson 1973). The equations were developed for 'ripe' snow, which is well aged, has a spherical crystal structure, and is isothermal at 0°C. The equations are used under all snowpack conditions because there is a lack of data and knowledge on the transmission of water through a fresh snow cover (Anderson 1973).
The amount of snowpack melted by heating from the soil is parameterized in terms of millimeters per day. This parameter is typically a very minor one, especially in the upper Midwest where the soil is typically frozen for most of the snow season (Anderson 1973). The parameter values for this study were typically zero or very small.
The objective of the calibration is to adjust the above described parameters such that the model accurately simulates the snowpack water equivalent over the accumulation and ablation period. The model outputs a daily value of SWE which is compared to field observed values. The parameters are adjusted in a methodical process as outlined by Anderson (1973). Evaluation of the validity of the resultant set of parameter values is solely a subjective determination of how accurately the simulated SWE matches the observed SWE.
The snow model calibration began with the first-order NWS stations because of their continuous daily records of snow water equivalent. A five-year period out of the 40 years of interest was selected to represent a wide variety of snowpack accumulations: some years had high peak seasonal accumulations, some had low. It was also desirable to have as variable a set of snow cover conditions as possible in order to adequately test all the adjustable parameters. Ideally, the calibration data set should not be part of the model verification data set, but historical data limitations did not allow for that option.
The first step in calibration is to analyze the model output for obvious precipitation mis-classification errors and correct them as described earlier. This is also the first step in the simulation process. The model parameters are then subjectively optimized by comparing predicted snowpack SWE with observed SWE. The subjective criteria applied in the comparisons were that the observed peak matched the simulated peak and that the simulated snowpack melted off near the day the observations indicated no snowpack. The optimized parameters were then used to model another five-year 'checkout' period to ensure adequate snow cover simulation. At this point, minor parameter adjustments were made to 'fine tune' the model. The original five years were modeled again to ensure the fine tuning did not simply fit a bias in the five-year checkout period. The calibrated parameters were then used to simulate the entire 40 years' worth of data.
The model calibration process at cooperative weather observer stations was slightly different than at the NWS stations. The cooperative stations did not have a continuous record of SWE observations for model calibration. Calibration was done on a five-year period, as described earlier, using the meager number of SWE observations available and the snow depth records available from historical archive tapes. These data were used to assist in the optimization of the model parameters as described earlier. One significant difference is that calibration began with an initial parameter set from a calibrated first-order station in the area, or a blended set of parameters from two nearby first-order stations. This aided calibration because, with such limited data available, it would have been easy to bias the calibration by simply trying to 'fit' the data. The calibrated parameters were then used to simulate 40 years' worth of SWE data.
The entire 40-year period was simulated for each of the nine NWS stations as a quality control measure. By comparing the simulated water equivalent values with the observations, observed spikes and human observer biases were identified and eliminated from further analyses. Spikes show up as observations far above what would be physically expected given the snow cover conditions; they most often appeared immediately following a snowfall event (Figure 7). Observer bias shows up as 5, 6, or 7 days of identical observed values that are physically unlikely given the snow cover. These 5, 6, or 7 day periods coincide with NWS observer shift rotations: one observer calculates the water equivalent for their shift period, then another observer takes over for the next period of 5, 6, or 7 days. There are a couple of explanations for this pattern. First, an observer may make an observation on their first shift and reuse that value for the next several days if snowpack conditions do not visually change over that time. Alternatively, some observers may not select a representative site for measurements, or some combination of the two may occur. This sometimes results in a stair-step pattern in the observations (Figure 8) that did not physically occur in the snow cover. The model was used to select which observations to disregard.
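A minimal sketch of the spike check is shown below. The tolerance threshold is an illustrative assumption; in the study, rejection was a manual, model-guided judgment rather than a fixed cutoff.

```python
def flag_spikes(observed, simulated, tolerance=1.5):
    """Return the indices of observed SWE values that sit far above the
    simulated value (candidate 'spike' errors, as in Figure 7).
    `tolerance` is in the same units as the SWE series."""
    return [i for i, (obs, sim) in enumerate(zip(observed, simulated))
            if obs - sim > tolerance]
```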
Figure 7. Note observed SWE value of 2.4 inches on Nov 28. This observation is rejected as a 'spike' error.
Figure 8. Stair-step pattern of observed SWE. Note that during the period from February 1 through mid-March, the observed SWE changes several times for no apparent reason and does not increase for the significant snowfall at the end of February. Also note that the last two observed SWE values indicate a snowpack density greater than 1.0. These observations must be flagged as possible errors.
The 'accurate simulation' of 40 years of daily SWE values at cooperative observer sites was also used as a quality control tool for observed SWE values. An 'accurate simulation' (Figure 9) is one that closely matches any observed data, does not indicate physical impossibilities such as a pack density greater than 1.0, and accurately simulates when the snowpack is completely melted. Rejecting an observed value was not as clear-cut as in the NWS station case because the observations here were sporadic at best. Therefore, subjective rejection of a value had to be carefully considered.
Figure 9. Typical seasonal snowpack simulation for cooperative observer site.
Another simulation tool that proved useful was the snow model's ability to update the water equivalent state based on an observed SWE. This was especially useful in years with extreme conditions during snowfall. One extreme is high winds during snowfall, which cause higher than usual gage catch deficiency and extreme redistribution of snow on the ground. Another is a large number of events misclassified by the model, e.g., snowfall at temperatures higher than the model's threshold temperature. Updating the model SWE with observed values generally resulted in the simulation accurately depicting observed snow cover states for the rest of the season (Figure 10). The end result of a simulation for a station was a continuous daily record of estimated/observed SWE values where none existed before.
Figure 10. Example of updating model state of SWE. Note SWE updated from 5.8 inches to 4.6 inches on February 8. Model simulation subsequently quite accurate.
With the modeling simulation done at all 55 stations, a data set was assembled for the frequency analysis. The data of interest were the maximum daily snow cover water equivalents for each period of March 1-15 over the years 1953-1992. This results in 40 annual values for each of the 55 stations (an example data set appears in Appendix 1).
5. FREQUENCY ANALYSIS
The data series for a sample of stations was plotted on extreme value probability paper using the Gringorten plotting position (Figure 11). The x-axis was transformed to a reduced Gumbel variate as described in Maidment (1993). (14)
The probabilities can be computed by reformulating the variate equation. A straight-line plot on extreme value probability paper indicates that the data likely come from an extreme value distribution; since all samples approximated a straight line, it was decided to fit the data to an Extreme Value Type I (Gumbel EV I) distribution. Many others have found satisfactory results fitting extreme precipitation to the Gumbel EV I distribution, including Richards et al. (1983), Hershfield (1961), and Miller (1964).
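The plotting positions and the reduced variate can be sketched as follows; the Gringorten formula (i − 0.44)/(n + 0.12) and the transform y = −ln(−ln q) are standard textbook forms (Maidment 1993), with q taken here as a non-exceedence probability.

```python
import math

def gringorten_positions(n):
    """Non-exceedence probabilities for a ranked sample of size n
    (rank 1 = smallest value), using the Gringorten plotting position
    (i - 0.44) / (n + 0.12)."""
    return [(i - 0.44) / (n + 0.12) for i in range(1, n + 1)]

def gumbel_reduced_variate(q):
    """Reduced Gumbel variate y = -ln(-ln(q)). Data drawn from an
    EV I distribution plot as a straight line against y."""
    return -math.log(-math.log(q))
```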
Figure 11. Typical graphical probability plot on extreme value paper. Note the plot approximates a straight line indicating that the data likely fits the distribution.
A computer program was written to compute the snow water equivalent for the stated probability levels using the inverted cumulative distribution function (Maidment 1993) for the Gumbel EV I distribution: (15)
where alpha and zeta are parameter estimators detailed below and
representing the 50, 20, 10, 4, 2, and 1 percent probabilities of exceedence.
The estimators alpha and zeta are obtained using sample L moments as illustrated by Maidment (1993). The L moments are calculated in terms of probability-weighted moment estimators (16)
where x̄ is the sample mean and λ1, λ2 are the first two L moments.
The adequacy and robustness of L moment estimators for the Gumbel (EV I) distribution have been illustrated by many and summarized by Maidment (1993). An example calculation is illustrated in Appendix 1.
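The fitting and quantile steps can be sketched with the textbook L-moment estimators for the Gumbel distribution (Maidment 1993). This is a sketch in the spirit of equations 15 and 16, not the study's actual program; the probability-weighted moment b1 and the constants are standard forms.

```python
import math

GAMMA = 0.5772156649  # Euler-Mascheroni constant

def fit_gumbel_lmoments(sample):
    """Estimate Gumbel (EV I) parameters alpha (scale) and zeta
    (location) from the first two sample L moments, computed via
    probability-weighted moments on the ascending-ordered sample."""
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum(i * x[i] for i in range(n)) / (n * (n - 1))
    lam1, lam2 = b0, 2.0 * b1 - b0      # first two L moments
    alpha = lam2 / math.log(2.0)
    zeta = lam1 - GAMMA * alpha
    return alpha, zeta

def gumbel_quantile(alpha, zeta, exceed_prob):
    """SWE with the given probability of exceedence, from the inverted
    Gumbel CDF: x = zeta - alpha * ln(-ln(1 - p))."""
    return zeta - alpha * math.log(-math.log(1.0 - exceed_prob))
```

Evaluating `gumbel_quantile` at p = 0.50, 0.20, 0.10, 0.04, 0.02, and 0.01 reproduces the six probability levels mapped in the study.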
The probability plot correlation test, developed by Filliben and illustrated in Maidment (1993), was used to test the goodness of fit of the data to the Gumbel (EV I) distribution. The test measures the correlation r between the ordered observations xi and the fitted quantiles wi = G⁻¹(1 − qi) determined by the plotting positions qi for each xi; in effect, it tests how close to a straight line the data fall on the graphical plot. A value of r close to 1.0 suggests the observations could have been drawn from the fitted distribution. Maidment (1993) reports tables of critical values of r. When all 55 stations were tested under the hypothesis that the data come from an EV I distribution, no station was rejected at the 0.01 significance level, a result consistent with the data being drawn from an EV I distribution.
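The correlation statistic can be sketched as follows. This computes the Pearson r between the ordered observations and the fitted Gumbel quantiles at Gringorten plotting positions; here the plotting positions are treated as non-exceedence probabilities (sign conventions for q vary between texts), and the comparison against Filliben's critical values is left to the reader's tables.

```python
import math

def filliben_r(sample, alpha, zeta):
    """Probability plot correlation coefficient for a fitted Gumbel
    distribution: r near 1.0 supports the hypothesis that the sample
    was drawn from the fitted distribution."""
    x = sorted(sample)
    n = len(x)
    # fitted quantiles at Gringorten plotting positions
    w = [zeta - alpha * math.log(-math.log((i - 0.44) / (n + 0.12)))
         for i in range(1, n + 1)]
    mx, mw = sum(x) / n, sum(w) / n
    cov = sum((a - mx) * (b - mw) for a, b in zip(x, w))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sw = math.sqrt(sum((b - mw) ** 2 for b in w))
    return cov / (sx * sw)
```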
6. RESULTS AND DISCUSSION
The results of the simulations of SWE for the 1-15 March period are maps, presented in Appendix 2, for the 50, 20, 10, 4, 2, and 1 percent probabilities of exceedence. A hydrologist can use these maps to assess the significance, with respect to history, of a late winter snowpack before the onset of spring runoff. A snowpack water content with a low probability of exceedence for a particular area or drainage basin indicates that the area has an unusually high snowpack water content, and the situation should be analyzed for increased flood risk from snowmelt runoff.
The hydrologist producing a snowmelt flood outlook will use the SWE frequency maps, combined with other meteorological and hydrological information, to delineate the portions of larger river basins most susceptible to flooding. The maps are a valuable tool because a given value of snow water equivalent most likely does not have the same probability of exceedence across a large river basin. Therefore, two areas in different parts of the same drainage basin, with the same water equivalent, may not have the same flood potential. This difference in flood potential can have significant economic impacts with regard to flood control preparations. Clear communication of flood potential is essential to effective preparations, and it is improved by placing meteorological and hydrological variables such as snowpack water content into historical perspective using probability of exceedence maps.
The pattern of snow accumulation over Minnesota and the eastern Dakotas shows some distinct features. From the analysis of the probability maps (Figure 12 and Appendix 2), a topographic map (Figure 13), and a forest cover map (Figure 14), some inferences can be drawn as to why the contours of equal snow water equivalent occur as they do. First, the heavy accumulation over northeast Minnesota can be attributed to the high terrain adjacent to the large water body of Lake Superior (arrow A on Figure 13); the elevation change in this area is about 1,400 feet. The lake remains open much of the snow season, providing an ample source of moisture for precipitation-producing storm systems when the low-level flow is from the east or southeast, and it is this low-level flow that feeds extra moisture into storms. During heavy precipitation events in this area, the low-level flow is forced to rise as it encounters the high terrain, a process called orographic lifting. This extra vertical motion, combined with the ample moisture, results in copious amounts of precipitation from the same storm system that produces less precipitation in adjacent areas; there is a direct relationship between the amount of vertical motion and the amount of precipitation produced by a storm.
Figure 12. Maximum SWE (inches) March 1-15. 2 percent probability of exceedence. Arrow C indicates contour gradient.
Figure 13. Elevation contours in 100s feet. Arrows A and B indicate areas of higher terrain.
Figure 14. Forested region in hatched area. Adapted from Minnesota Land Use Map (1969).
Another area in the region that experiences this orographic lifting lies in west central Minnesota and northeast South Dakota (arrow B on Figure 13). The high terrain to the west of this area, called the Buffalo Ridge, rises about 1200 feet from the Minnesota River valley to the top of the ridge. As low pressure storm systems pass to the south of this area, the east-to-southeasterly low-level flow on their northern side is forced up this slope, adding lift. The track of heavy snow-producing storm systems is typically to the south of this area (Reitan 1974). Thus, this area receives more precipitation (Figure 15) and snow accumulation than the areas immediately surrounding it.
Another landscape feature that can affect seasonal snow accumulation is forest cover. The effect of forest cover on a snowpack is well documented in the literature, including U.S. Army Corps of Engineers (1956), Brooks et al. (1991), and others. A forested area experiences a much slower melt rate than open areas. Therefore, it can be inferred that if two areas receive approximately the same amount of liquid precipitation as snow, but one area is forested and the other is not, then the forested area should have a higher water content in its snowpack at the end of the accumulation season. The probability maps show a tight gradient in the contours at point C (Figure 12). In the figure, the seven inch contour defines the southern and western boundary of this gradient. The gradient lies over an area that receives approximately the same amount of precipitation over the winter season (Figure 15). The disparity in seasonal snow accumulation can be explained by examining a forest cover map (Figure 14) and observing that the gradient overlies the boundary between the generally forested area to the north and east and the open area to the south and west.
Figure 15. Normal precipitation (inches) December-February. Adapted from Baker (1967).
7. RECOMMENDATIONS AND CONCLUSIONS
The author recommends that the area of analysis be expanded across the Upper Midwest using the methods described here. Also, the density of data points could be increased over Minnesota and the eastern Dakotas to better delineate the local features discussed earlier.
One avenue of further research would be to investigate other possible areas of enhanced accumulation, such as the high terrain of southeast Minnesota. Another would be to investigate causes, such as downslope winds, of areas having less accumulation than their surroundings. This warming and drying wind was called the "chinook," or "snow-eater," by the Plains Indians; when it occurs, it causes rapid snowmelt. The wind results from air being compressed and warmed as it descends the eastern slopes of the Rocky Mountains and moves across the Great Plains toward the Mississippi River. The effective eastward range of this phenomenon may be delineated by studies of seasonal snow accumulation patterns.
It has been shown that a conceptual model can be used both as a means of quality control for observed hydrologic data and to augment historical records for periods of missing data. The fitting of the data to an Extreme Value Type I distribution was tested and accepted as the probability model. The contour maps produced from the probability calculations indicated two regions of Minnesota and one of eastern South Dakota that favor more snow accumulation than surrounding areas, and possible explanations for these patterns were discussed. A byproduct of this study is that 46 locations now have a synthetic database of continuous seasonal snow water equivalent values where none existed before. These data could be used for further investigations.
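The exceedence calculation at the core of the study can be sketched in a few lines. The example below is a minimal illustration only: it assumes a method-of-moments fit of the Extreme Value Type I (Gumbel) distribution (the study's exact fitting procedure is not detailed here), and it uses synthetic numbers as a stand-in for one station's 40-year record of March 1-15 maximum SWE.

```python
import numpy as np

# Synthetic stand-in for one station's 40 annual maxima of SWE (inches),
# March 1-15; the real study used observed and model-augmented records.
rng = np.random.default_rng(0)
swe_maxima = rng.gumbel(loc=4.0, scale=1.5, size=40)

# Method-of-moments fit of the Extreme Value Type I (Gumbel) distribution:
# scale (beta) from the sample standard deviation, location (mu) from the mean.
EULER_GAMMA = 0.5772157
beta = swe_maxima.std(ddof=1) * np.sqrt(6.0) / np.pi
mu = swe_maxima.mean() - EULER_GAMMA * beta

def swe_at_exceedence(p):
    """SWE value exceeded with probability p, from the fitted CDF
    F(x) = exp(-exp(-(x - mu) / beta))."""
    return mu - beta * np.log(-np.log(1.0 - p))

# The six probability levels mapped in the study.
for p in (0.01, 0.02, 0.04, 0.10, 0.20, 0.50):
    print(f"{p:4.0%} exceedence: {swe_at_exceedence(p):5.2f} in")
```

Repeating this fit at each of the 46 stations and contouring, say, the 2 percent values over the region would yield a map like Figure 12.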
REFERENCES
Anderson, E.A., 1968: Development and Testing of Snow Pack Energy Balance Equations. Water Resources Research, 4, 19-37.
____________, 1973: National Weather Service River Forecast System—Snow Accumulation and Ablation Model. NOAA Technical Memorandum NWS HYDRO-17, U.S. Department of Commerce, Silver Spring, MD, 217pp.
Baker, D.G., D.A. Haines, J.H. Strub, 1967: Climate of Minnesota, Part V. Precipitation Facts, Normals, and Extremes, 43pp.
Brooks, K.N., P.F. Ffolliott, H.M. Gregersen, J.L. Thames, 1991: Hydrology and the Management of Watersheds, Iowa State University Press, Ames, IA, 392pp.
Hershfield, D.M., 1961: Rainfall Frequency Atlas of the United States for Durations from 30 Minutes to 24 Hours and Return Periods from 1 to 100 years. U.S. Weather Bureau Technical Paper No. 40, U. S. Department of Commerce, Washington D.C., 115pp.
Maidment, D.R., 1993: Handbook of Hydrology, McGraw-Hill, Inc., New York, NY, 1315pp.
Miller, J.F., 1964: Two- to Ten-Day Precipitation for Return Periods of 2 to 100 Years in the Contiguous United States. U.S. Weather Bureau Technical Paper No. 49, U. S. Department of Commerce, Washington D.C., 29pp.
Minnesota Land Use Map, 1969.
Reitan, C.H., 1974: Frequencies of cyclones and cyclogenesis for North America, 1951-1970. Monthly Weather Review, 102, 861-868.
Richards, F., J.F. Miller, E.A. Zurndorfer and N.S. Foat, 1983: Water Available for Runoff for 1 to 15 Days Duration and Return Periods of 2 to 100 Years for Selected Agricultural Regions in the Northwest United States. NOAA Technical Report NWS 36, U.S. Department of Commerce, Silver Spring, MD, 59pp.
Schmidlin, T.W., 1990: A critique of the climatic record of "water equivalent of snow on the ground" in the United States. J. Appl. Meteor., 29, 1136-1141.
U.S. Army Corps of Engineers, 1956: Snow Hydrology. North Pacific Division, Portland OR, 437pp.
U.S. Department of Commerce, 1964: Frequency of maximum water equivalent of March snow cover in north central United States. U.S. Weather Bureau Technical Paper No. 50, U.S. Weather Bureau, Washington, DC, 24pp.
____________, 1972: National Weather Service River Forecast System Forecast Procedures. NOAA Technical Memorandum NWS-HYDRO-14, Washington, DC., 251pp.