Advances in geographic information science (GIS) technologies such as remote sensing, spatial databases, and spatially explicit models have proven extremely useful in the exposure assessment process. By adopting such technologies, landscape-level exposure models can be developed by integrating spatial parameters such as those shown in eqn [2]. These methods recognize that if a site is spatially heterogeneous with respect to either contamination or animal use, exposure models must be modified to include the dynamics imposed by those spatial factors, thereby improving the estimated parameters in eqn [2]. When using fish and wildlife as receptor species for mechanisms of contaminant accumulation, transport, and redistribution, and as ecological endpoints, the foundations and principles of animal-habitat relationships and the interaction between spatial pattern and ecological processes must be properly modeled, with particular attention to (1) spatial relationships between fish and wildlife and their habitats, (2) spatial and temporal interactions, and (3) influences of spatial heterogeneity on biotic and abiotic processes. Below, the basic elements needed to estimate the spatially explicit parameters used in most exposure models are outlined.

Through various methods of data capture, such as remote sensing, global positioning system (GPS), and field survey, detailed biophysical characteristics of the landscape can be represented in a GIS. In the form of map layers, a GIS can store the spatial patterns of individual geographic phenomena, such as habitat, land use, hydrology, population, topography, road networks, and other infrastructural information, in a spatial database. The map layers are geographically referenced in a common coordinate system so that the layers are projected onto a scaled-down plane surface that enables distance measurement, area calculation, and map overlay.

Historically, map layers have often been too general to support fine-resolution predictions of how receptors may be utilizing contaminated areas, particularly when the layers were constructed with a focus on timber management and harvest rather than being designed to describe ecosystem structure. For example, LANDSAT imagery has 30 m spatial resolution (i.e., each pixel represents a ground area of 30 m × 30 m) and is commonly used for mapping the distribution of vegetation. Recently, the emergence of high-resolution remotely sensed imagery such as QuickBird, airborne visible/infrared imaging spectrometer (AVIRIS), and light detection and ranging (LIDAR) has enabled researchers to map three-dimensional information about the landscape with spatial resolution of <1 m and hundreds of spectral channels. Through various techniques of digital image processing, including image filtering, band ratioing, feature/pattern extraction, and spectral classification, biophysical characteristics of the landscape can be extracted from remotely sensed imagery. The specific technology used to map vegetation in the study site depends on the stage of the bioaccumulation model being estimated; that is, the scale needed to determine transfer factors from soil to plant species to estimate bioavailability is very different from the scales needed to estimate the distributions of wildlife endpoint species. In many cases, field survey is necessary for verification, calibration, and validation purposes.

Once all the map layers are in digital format, the data can be compiled into a spatial database in which many spatial relationships can be explored and analyzed within and among the map layers. To assist risk assessors, the spatial database provides important information about how the focal wildlife species may use contaminated areas and how contaminants may move in the environment. Such a database is extremely useful in identifying potential data gaps and which data sources are available to assist in a risk assessment. In some cases, information on the spatial distribution of contaminants and waste units is not available, and methods of spatial interpolation can be used to generate new information based on known values at surrounding locations (see the following section).

For most areas it is difficult to map the distribution of contaminants in the soil or sediment. The most notable exception is gamma-ray detection of radioisotopes, which can be achieved through remote-sensing flyovers of disturbed areas. However, when the contaminants of concern cannot be measured remotely, or the scale of such flyover data is too coarse, a sampling regime must be implemented to determine their distribution. Once samples are obtained, contaminant distributions can be mapped using appropriate spatial interpolation techniques.

The first law of geography (Tobler's law) states that things near to one another tend to be more related and similar than things farther apart. Built on this concept, spatial interpolation methods estimate values at unsampled locations from the distances to neighboring sample points, near and far. Inverse distance weighting (IDW), local polynomial, global polynomial, spline, and radial basis functions (RBFs) are deterministic interpolators that apply an established mathematical formula to the sample points. A second family of interpolation methods consists of geostatistical methods based on statistical models that incorporate autocorrelation (statistical relationships among the measured points). Not only do these techniques have the capability of producing prediction surfaces, but they can also provide some measure of the accuracy of those predictions using cross-validation techniques. Kriging is the most widely used geostatistical interpolator. An important feature of geostatistical analysis is the generation of an empirical semivariogram to estimate the spatial correlation among the sampling points. The semivariogram quantifies how the correlation between two points changes as they move closer together or farther apart; it is a useful tool in its own right and defines the variance structure of the geostatistical model.
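As a minimal sketch of the deterministic family, IDW estimates a value at an unsampled location as a distance-weighted average of nearby samples, with weights decaying as an inverse power of distance. The sample coordinates and concentrations below are hypothetical, as is the function name:

```python
import numpy as np

def idw_interpolate(xy_known, values, xy_query, power=2):
    """Inverse distance weighting: estimate values at unsampled
    locations as a weighted average of sampled points, with weights
    decaying as distance**(-power) (Tobler's law in action)."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    xy_query = np.atleast_2d(np.asarray(xy_query, float))
    est = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                    # query coincides with a sample
            est[i] = values[d == 0][0]
            continue
        w = 1.0 / d**power
        est[i] = np.sum(w * values) / np.sum(w)
    return est

# Hypothetical soil samples: (x, y) locations and concentrations (mg/kg)
samples = [(0, 0), (0, 10), (10, 0), (10, 10)]
conc = [4.0, 8.0, 6.0, 10.0]
print(idw_interpolate(samples, conc, [(5, 5)]))  # equidistant -> mean, 7.0
```

Unlike kriging, this gives no error surface; the choice of `power` is ad hoc rather than fitted from a semivariogram.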

The development of many spatially explicit exposure models for estimating the adverse impacts of specific toxicants and toxicant mixtures on the environment was fueled in part by the demand for understanding the fate of contaminants in terms of environmental risk and environmental justice. In general, an exposure model provides a framework that combines the identified controls (i.e., factors) using one of many possible functions to assess ecological risk. Depending on the basis of the underlying algorithms, most existing predictive models can be broadly classified as physical-based, statistical-based, or rule-based. Depending on how a model treats randomness in time and space, exposure models can be further categorized as deterministic or stochastic (Figure 1). A deterministic model does not consider randomness at all; that is, a given set of input parameters always yields the same output prediction. A stochastic model allows the quantification of uncertainties in time and space, so that the same set of input parameters may yield different results. In exposure modeling, uncertainties may stem from a lack of input data or of understanding of the physical reality, such as seasonality of the ecosystem, random behavior of individuals, etc.

Figure 1 Classification of spatially explicit exposure models by algorithmic basis and by treatment of randomness (deterministic vs. stochastic). Examples span physical-based (plume dispersion, Freundlich adsorption), statistical-based (regression, Monte Carlo), and rule-based (decision tree, CART, neural network, cellular automata) models.

The physical-based models adopt established laws or mathematical equations that describe physical processes, for example, the Freundlich adsorption equation, toxicokinetic models, plume dispersion models, etc. This type of model is commonly used in modeling the exposure and uptake of toxicants by the endpoints through media such as air, water, and soil. In general, a physical-based model is firmly grounded in physical laws and can be applied broadly to many endpoints. However, such models often require many physical parameters that may not be readily available (particularly as spatial data), and the accuracy of model prediction is limited by the extent of field calibration.
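A worked example of the deterministic physical-based class is the Freundlich isotherm, q = K_f · C^(1/n), relating sorbed concentration to equilibrium aqueous concentration. The constants below are illustrative placeholders, not fitted values:

```python
def freundlich_sorbed(c_w, k_f, n):
    """Freundlich adsorption isotherm: q = K_f * C**(1/n), where q is
    the sorbed concentration (mg/kg), C the equilibrium aqueous
    concentration (mg/L), and K_f, n empirically fitted constants.
    Same inputs always yield the same output -- a deterministic model."""
    return k_f * c_w ** (1.0 / n)

# Hypothetical parameters: K_f = 10, n = 2, aqueous conc. = 4 mg/L
print(freundlich_sorbed(4.0, 10.0, 2.0))  # 10 * 4**0.5 = 20.0 mg/kg
```

In practice K_f and n come from batch sorption experiments, which is precisely the field-calibration burden noted above.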

The statistical-based models explore the relationship (which can be attributive or spatial in nature) between identified controls and the level of exposure at the endpoints using a probability distribution function, for example, generalized linear models, spatial statistical methods, etc. In most cases, this approach is based on empirical data collected in the field or in the laboratory. Many researchers have utilized common statistical techniques to conduct ecological risk assessment (ERA), such as logistic regression, kriging, Monte Carlo simulation, etc. Inferences drawn from such models are usually restricted to the tested study areas or to ecosystems with similar biophysical characteristics.
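As a sketch of the statistical-based approach, a logistic regression can relate an exposure covariate to a binary effect. The data here are synthetic and the gradient-ascent fit is a minimal stand-in for a proper GLM routine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dose-response data: exposure level vs. binary observed effect
dose = rng.uniform(0.0, 3.0, 200)
p_true = 1.0 / (1.0 + np.exp(-(2.0 * dose - 3.0)))   # assumed true curve
effect = (rng.random(200) < p_true).astype(float)

# Fit logistic regression by gradient ascent on the mean log-likelihood
X = np.column_stack([np.ones_like(dose), dose])      # intercept + dose
beta = np.zeros(2)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (effect - p) / len(dose)

# A positive slope (beta[1]) indicates risk increasing with exposure
```

As the text cautions, coefficients fitted this way describe only the sampled site; transferring them assumes comparable biophysical conditions.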

The rule-based (or agent-based) models assess the ecological impact of exposure by exploring the underlying mechanisms to simulate the process. The governing rules may be established from the literature or from field experts, derived from observed data used for training and validation (e.g., neural networks, decision trees), or even set arbitrarily (e.g., cellular automata, weighted linear combination). Within this category of model, one of the most controversial components is how to determine the weights of individual controls (i.e., their impacts) in computing the exposure level. The most common weights include the population of receptor species and the space-time interaction between the endpoints and stressors.

When conducting an ecological assessment, it is often desirable to estimate the risk to a population rather than to an 'at-risk individual'. To model population exposure, one must estimate the proportion of the local population exposed at levels that exceed toxic thresholds; this represents the proportion of the population potentially at risk. Specifically, the proportion of a population potentially at risk is represented by the number of individuals that may use habitat within the waste unit(s). To properly estimate exposure, the movement of contaminated individuals within and between populations (metapopulations) may also be of interest, especially when the proportion of new recruits is important for estimating the effects of the contaminant on fecundity and survival.
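The proportion-at-risk calculation reduces to counting simulated individual exposures that exceed a toxicity threshold. The exposure values and threshold below are hypothetical:

```python
import numpy as np

def proportion_at_risk(exposures, toxicity_reference_value):
    """Fraction of individual exposures exceeding a toxic threshold --
    a simple estimate of the proportion of the local population
    potentially at risk."""
    exposures = np.asarray(exposures, float)
    return float(np.mean(exposures > toxicity_reference_value))

# Hypothetical individual exposures (mg/kg body weight per day)
exp_sim = [0.2, 0.5, 1.1, 0.9, 1.8, 0.3, 2.4, 0.7]
print(proportion_at_risk(exp_sim, 1.0))  # 3 of 8 exceed -> 0.375
```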

Also, investigators are often interested in making inferences about the mean exposure of a receptor species at a waste site, but it may be erroneous to assume that the distribution of the mean is the same as that of the population. Hence, a similar procedure is needed to estimate the distribution of mean exposure. By estimating the number of individuals (n) that would use the waste site(s), n home ranges for the waste site(s) can be randomly sampled, the n exposures calculated, and their average taken based on eqn [2]. This procedure is repeated (usually 1000+ times) for each site, creating what is commonly referred to as a Monte Carlo random sample of average exposures. The resulting simulations provide an estimate of the distribution of mean exposure, which can be summarized using histograms and quantiles. The 2.5th and 97.5th elements of the ranked means are the estimated lower and upper bounds, respectively, of the 95% confidence interval. The mean exposures and their corresponding 95% confidence intervals provide the information necessary for hypothesis testing about the mean exposure at the waste units. In practice, a researcher could test the hypothesis that the mean exposure is zero, or below (above) a given regulatory limit, by using the appropriate confidence bound (upper or lower). Another approach is to combine the results of Monte Carlo simulation of exposure with literature-derived population density data to evaluate the likelihood and magnitude of population-level effects on wildlife.
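The resampling procedure above can be sketched as follows; the per-home-range exposures are hypothetical, and simple resampling with replacement stands in for drawing home ranges across the site:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_exposure_ci(individual_exposures, n_individuals, n_sims=1000):
    """Monte Carlo estimate of the distribution of mean exposure:
    repeatedly draw n_individuals home-range exposures (with
    replacement), average them, and take the 2.5th and 97.5th
    percentiles of the simulated means as the 95% interval."""
    pool = np.asarray(individual_exposures, float)
    means = np.array([
        rng.choice(pool, size=n_individuals, replace=True).mean()
        for _ in range(n_sims)
    ])
    ci_lo, ci_hi = np.percentile(means, [2.5, 97.5])
    return means.mean(), ci_lo, ci_hi

# Hypothetical per-home-range exposures at a waste unit (mg/kg/day)
exposures = [0.4, 0.6, 1.2, 0.8, 2.0, 0.5, 1.5, 0.9, 0.7, 1.1]
m_hat, ci_lo, ci_hi = mean_exposure_ci(exposures, n_individuals=8)
# If a regulatory limit lies above ci_hi, mean exposure is plausibly below it.
```

The hypothesis test described in the text then amounts to checking whether zero or the regulatory limit falls outside the interval [ci_lo, ci_hi].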
