TIES 2004 / ACCURACY 2004 Contributed Program Abstracts

 

A Space-Time Dynamic Model Based on Image Warping

Sofia Aberg, Finn Lindgren and Anders Malmberg

Centre for Mathematical Sciences

Division of Mathematical Statistics

Lund University

Box 118, SE-221 00

Lund, Sweden

(s_aberg, finn, andersm)@maths.lth.se

Abstract: In this paper we present a spatio-temporal dynamic model which can be realized using image warping, a technique used to match images. Image warping is a non-linear transformation which maps all positions in one image plane to positions in a second plane. Using thin-plate splines, this transformation is defined by a small set of matching points, a feature that makes the method work even for large data sets. In our case the dynamics of the process are described by warping transformations between consecutive images in the space-time series. Finding these transformations is a trade-off between a good match of the images and a smooth, physically plausible deformation. This leads to a penalized likelihood method for finding the transformation. The deformations that can be described with this approach include all affine transformations as well as a large class of non-linear deformations. Our method is applied to the problem of nowcasting radar precipitation.

Key words: Dynamic models; Spatio-temporal modelling; Image warping; Thin-plate splines
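
For readers unfamiliar with the technique, the sketch below shows the basic warping device using SciPy's thin-plate-spline radial basis interpolator. The matching points are hypothetical; this illustrates the transformation, not the authors' implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# hypothetical control points matched between two consecutive radar images (pixels)
src = np.array([[10., 12.], [80., 15.], [45., 60.], [20., 85.], [90., 90.]])
dst = np.array([[12., 14.], [78., 20.], [50., 58.], [18., 88.], [93., 86.]])

# thin-plate-spline warp: a smooth non-linear plane-to-plane map
# defined entirely by the small set of matching points
warp = RBFInterpolator(src, dst, kernel='thin_plate_spline')

# where a coarse grid of positions in image 1 is carried by the deformation
grid = np.mgrid[0:100:20, 0:100:20].reshape(2, -1).T.astype(float)
print(warp(grid)[:4])
```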

 

A Methodology for Estimating Digital Elevation Model Uncertainty in Flood Inundation Modelling

Z. Akyürek*, M. Yılmaz**, N. Usul**

* Middle East Technical University, Natural and Applied Sciences Geodetic and Geographic Information Technologies, 06531-Ankara

zakyurek@metu.edu.tr

** Middle East Technical University, Civil Eng. Dept. Water Resources Lab 06531-Ankara

myilmaz@metu.edu.tr, nurusul@metu.edu.tr

Abstract. In the last decade, with the increasing demand for geographical and statistical data in everyday life, the use of Geographic Information Systems (GIS) has been increasing almost everywhere. While increasing the efficiency and visualization of well-known methods, implementing GIS with classic techniques has brought new problems. No GIS dataset is truly free of error, and it is now generally accepted that problems of error in GIS analysis should be considered in great detail. One such problem, error propagation, is observed in almost all GIS-based analyses. In GIS-integrated flood inundation modeling, where the topographic conditions of the river network are extracted from a DEM, error inherent in the DEM will propagate through the analysis to its outputs. In this study the Monte Carlo simulation method is utilized to evaluate DEM uncertainty while investigating its propagation in flood inundation modeling. In each simulation a DEM realization is created by introducing an error field generated randomly from a normal distribution. Flood inundation modeling is then performed over this error-perturbed DEM and the simulation result is stored. The effect of spatial autocorrelation on flood inundation modeling is investigated by applying filters to the error fields. Finally, the propagation of DEM uncertainty is assessed from the analysis of all simulation results.

Keywords: Uncertainty Propagation Modeling, GIS, Flood Inundation Modeling
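
A minimal sketch of the simulation loop described above, under assumed error parameters; the terrain is synthetic, `run_flood_model` is a placeholder for the authors' inundation code, and the Gaussian filter stands in for the autocorrelation filter applied to the error field.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# synthetic smooth terrain standing in for the real DEM (metres)
dem = 100.0 + 10.0 * gaussian_filter(rng.normal(0, 10, (200, 200)), 8)

n_sim, sigma_z, corr_cells = 100, 1.5, 3.0
results = []
for _ in range(n_sim):
    noise = rng.normal(0.0, 1.0, dem.shape)
    field = gaussian_filter(noise, corr_cells)    # impose spatial autocorrelation
    field *= sigma_z / field.std()                # rescale to the assumed DEM error
    realization = dem + field                     # one equiprobable DEM realization
    # results.append(run_flood_model(realization))  # placeholder for the flood model
```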

 

Detecting the Influence of Protection on Landscape Transformation in Southwestern Ghana

Clement Aga Alo and R. Gil Pontius Jr

Clark University

Department of International Development, Community and Environment

Graduate School of Geography

950 Main Street

Worcester MA 01610-1477

USA

calo@clarku.edu

Abstract. We examine the transitions among six land cover categories in southwestern Ghana and compare the transitions inside protected areas to those outside protected areas. Land cover maps for 1990 and 2000 are compared, and transition matrices for the protected and unprotected areas are each analyzed to identify the most systematic landscape transitions. We examine the amount of gain of a category relative to the distribution of the other categories in 1990 in order to compute the amount of gain that would be expected in each category due to a random process of gain. We then compare the observed gain to the expected gain to detect systematic transitions. In a similar manner, we compare the observed loss to the expected loss due to a random process of loss. A non-random gain and a non-random loss in a particular transition imply a systematic process of change.

Results show that, in the protected areas, closed forest transitions systematically to Ground & Bushfires, but outside the protected areas closed forest transitions systematically to Bush & Scattered Trees. Evidently, the process of land transformation inside the protected areas differs from that outside. We discuss the implications of these conclusions for forest management.
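
The expected-gain computation can be sketched compactly. The code below follows the random-process-of-gain logic described above (each category's observed gain is distributed among the other categories in proportion to their 1990 sizes), applied to a hypothetical 3-category cross-tabulation.

```python
import numpy as np

def expected_gain(T):
    """Expected transitions under a random process of gain.
    T[i, j] = proportion of the landscape moving from category i (time 1)
    to category j (time 2); the diagonal is persistence."""
    p1 = T.sum(axis=1)                    # category proportions at time 1
    gain = T.sum(axis=0) - np.diag(T)     # observed gross gain of each category
    n = len(T)
    E = np.array([[T[i, j] if i == j else
                   gain[j] * p1[i] / (p1.sum() - p1[j])
                   for j in range(n)] for i in range(n)])
    return E

T = np.array([[0.30, 0.05, 0.02],
              [0.01, 0.40, 0.08],
              [0.03, 0.01, 0.10]])        # hypothetical cross-tabulation
print(T - expected_gain(T))               # large differences flag systematic change
```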

 

Comparison among different ordination techniques to analyse aquatic ecosystems

Paola Annoni *, Elisabetta Garofalo **
* Università Milano-Bicocca - Dip. Statistica, paola.annoni@unimib.it
** CESI S.p.A. – via Rubattino 54, Milano, garofalo@cesi.it

Abstract: In this study we compare different ordination techniques to detect the effect of environmental variability, both physical and chemical, on aquatic species. The spatial and/or temporal distributions of biological species are considered as response variables, while the variables that describe environmental characteristics play the role of explanatory variables.

Methods used in this field of ecology can be classified into the following categories: indirect gradient analyses and direct gradient analyses. Indirect analyses consist of two steps: first ordination and then interpretation on the basis of environmental gradients. Direct analyses incorporate the environmental information into the ordination algorithm to detect the ordination that is best explained by the explanatory variables. These techniques are usually called canonical ordination techniques.

In this paper we selected some ordination techniques, belonging to both indirect and direct gradient analyses, and applied them to an Italian case study. The dataset comes from experimental campaigns carried out in a highly impacted aquatic ecosystem.

Comparison among the results allowed us to point out the properties and shortcomings of the selected methodologies when applied to aquatic ecosystems.

Keywords: species-environment relationships, aquatic ecosystem, canonical ordination.
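
As one concrete instance of a direct (canonical) ordination, redundancy analysis can be sketched in a few lines of linear algebra. This is a generic illustration with simulated data, not necessarily among the techniques the authors selected.

```python
import numpy as np

def rda(Y, X):
    """Bare-bones redundancy analysis: ordinate the part of the species
    matrix Y (sites x species) that is explained by environment X."""
    Yc = Y - Y.mean(axis=0)
    Xd = np.column_stack([np.ones(len(X)), X - X.mean(axis=0)])
    B, *_ = np.linalg.lstsq(Xd, Yc, rcond=None)
    fitted = Xd @ B                              # constrained part of Y
    U, s, Vt = np.linalg.svd(fitted, full_matrices=False)
    return U * s, s**2 / (Yc**2).sum()           # site scores, variance fractions

rng = np.random.default_rng(0)
env = rng.normal(size=(30, 2))                   # two environmental gradients
species = env @ rng.normal(size=(2, 8)) + rng.normal(0, 0.5, (30, 8))
scores, explained = rda(species, env)
print(explained[:2])                             # variance on the first two axes
```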

 

Statistical Approaches for Comparing Soil Salinity Levels of Different Subsurface Drip Irrigation (SDI) Systems Using 2-dimensional Irregular Soil Profile Grids

Kathryn Bartimote*, Peter C. Thomson+, Mick A. Battam*, Bruce G. Sutton*

*Faculty of Agriculture, Food & Natural Resources

+ Faculty of Veterinary Science

University of Sydney NSW 2006

Australia

bartimotek@agric.usyd.edu.au

Abstract. An experiment was conducted to assess the differences in the effects of three lateral subsurface dripline spacings (1.0 m, 1.2 m, and 1.5 m) on salt accumulation. The electrical conductivity (EC1:5, dS/m), a measure of salinity, was recorded at various grid positions within each of nine soil pits (three for each spacing). However, the x-y coordinates from which soil was sampled are not consistent across soil pits.

Four analysis approaches will be outlined. Firstly, contour plots show soil salinity patterns in relation to each water source, in particular the salt accumulation at the periphery of the wetting fronts.

The second approach fits 2-D loess smoothing functions to the individual soil pit data arrays. As well as providing a graphical interpretation of how the salinity varied within each soil pit (width and depth), this allows the mean salinity level across the three treatments to be formally compared.

The third is a REML (residual maximum likelihood) analysis where the spatial correlation between neighboring field positions is modeled in the random component of the mixed model. The variogram was used to identify the most appropriate method of modeling the spatial correlation.

Keywords: spatial, 2-dimension, irregular grid, correlation, subsurface irrigation
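
A bare-bones version of a 2-D loess-style (local linear, tricube-weighted) smoother, which handles irregular sampling positions directly. The pit data below are simulated; a production analysis would use a full loess implementation.

```python
import numpy as np

def loess2d(x, y, z, xq, yq, span=0.3):
    """Local linear (loess-style) smoother for irregularly spaced 2-D data."""
    out = np.empty(len(xq))
    k = max(int(span * len(z)), 3)                    # size of the local window
    for i, (qx, qy) in enumerate(zip(xq, yq)):
        d = np.hypot(x - qx, y - qy)
        idx = np.argsort(d)[:k]
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        A = np.column_stack([np.ones(k), x[idx] - qx, y[idx] - qy])
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(A * sw[:, None], sw * z[idx], rcond=None)
        out[i] = beta[0]                              # fitted value at the query point
    return out

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1, (2, 120))                    # irregular sampling positions
z = np.exp(-((x - 0.5)**2 + (y - 0.3)**2) / 0.05) + rng.normal(0, 0.05, 120)
print(loess2d(x, y, z, np.array([0.5]), np.array([0.3])))
```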

 

Spatial Accuracy Assessment for Biological Collections:

Best practices for Collecting, Managing, and Using Biodiversity Data

 

Reed Beaman

Peabody Museum

Yale University

New Haven, CT, USA

reed.beaman@yale.edu

Arthur Chapman

Centro de Referência em Informação Ambiental (CRIA)

Campinas, Sao Paulo, Brazil

John Wieczorek

Museum of Vertebrate Zoology

University of California, Berkeley, CA, USA

Lawrence Speers

Global Biodiversity Information Facility (GBIF)

Copenhagen, Denmark

Abstract. More than 1 billion specimens are currently curated in museums and herbaria worldwide. These collections are the foundation of knowledge about the Earth’s past and present biological diversity. Some tens of millions of these are currently databased. The rate of databasing is on the increase with the development of tools and methodologies which assist in the process and, more recently, with the creation of the Global Biodiversity Information Facility (GBIF) with its aim to "make the world's primary data on biodiversity freely and universally available via the Internet".

Two categories of error are particularly common in biological collections: errors in spatial position (geocoding or georeferencing) and errors in taxonomic circumscription. Assessment of accuracy for both these categories is essential prior to using biodiversity data for ecological analysis, predictive modeling and synthesis, and conservation and natural resource management. This paper examines a number of best practices for preventing, detecting, and correcting spatial errors associated with biological collection information. We discuss guidelines, methodologies and tools that can assist data collectors, managers, and users to follow best practices in digitizing, documenting and validating biodiversity information.

Keywords: biodiversity, biological collections, georeferencing, best practices

 

Indexing Structure For Handling Uncertain Spatial Data

Bir Bhanu, Rui Li, Chinya Ravishankar, Michael Kurth, Jinfeng Ni

College of Engineering,

University of California, Riverside

{bhanu, rli}@vislab.ucr.edu; {ravi, kurthm, jni}@cs.ucr.edu

Abstract. Consideration of uncertainty in the manipulation and management of spatial data is important. Unlike traditional fuzzy approaches, in this paper we use a probability-based method to model and index uncertain data in the application of Mojave Desert endangered species protection. The query is a feature vector describing the habitat for a certain species, and we are interested in finding geographic locations suitable for that species. We select appropriate layers of the geo-spatial data affecting species life, called habitat features, and model the uncertainty for each feature as a mixture of Gaussians. We partition the geographic area into grid cells, assign an uncertain feature vector to each cell, and develop a filter-and-refine indexing method. The filter part is a bottom-up binary tree based on the automated clustering result obtained using the EM algorithm. The refine part processes the filtered results based on the "similarity" between the query and the properties of each cell. We compare the performance of our proposed indexing structure with the R-tree on the TIGER/Line dataset with synthetic uncertainty and on real Mojave Desert data, and show that our scheme outperforms the R-tree.

Keywords: Uncertainty, Indexing, Mixture of Gaussians, Expectation-Maximization, Mojave Desert
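
The per-cell uncertainty model can be illustrated with scikit-learn, whose GaussianMixture is fitted by the EM algorithm. The habitat features and the query below are hypothetical, and the full filter-and-refine tree is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical samples of habitat features for one grid cell
# (e.g. elevation, slope, vegetation index)
cell_samples = rng.normal(loc=[610.0, 4.0, 0.21],
                          scale=[15.0, 0.8, 0.03], size=(200, 3))

# model the cell's feature uncertainty as a mixture of Gaussians (EM fit)
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(cell_samples)

# 'refine' step idea: score a habitat query vector against the cell's mixture;
# cells with higher log-likelihood are more suitable for the species
query = np.array([[600.0, 3.5, 0.20]])
print(gmm.score_samples(query))
```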

 

The Extended Probability Vector Method for Pixel-scale Assessment of Uncertainty in Remotely Sensed Data Classification

Yanchen Bo

Department of Environmental Science and Engineering

Tsinghua University

Beijing 100084, P.R.China

boyc@tsinghua.edu.cn

Jinfeng Wang

LREIS

Institute of Geographical Sciences and Natural Resource Research

Chinese Academy of Sciences

Beijing 100101, P.R.China

Abstract. The assessment of uncertainty in the classification of remotely sensed data is a critical problem in both academic research and applications. The conventional solution to this problem is based on the error matrix (i.e. confusion matrix) and the kappa statistics derived from it. This solution has several inherent limitations. Firstly, the uncertainty is presented at the class scale, so no spatial variation of uncertainty is shown; secondly, there is uncertainty inherent in the reference data; thirdly, the kappa statistic is influenced by the underlying sampling technique of the reference data; and finally, the way uncertainty is presented in the error matrix is not convenient for visualization. For these reasons, assessment of classification uncertainty at the pixel scale is necessary to investigate the spatial variation pattern of uncertainty, to visualize uncertainty, and to avoid sampling effects.

A probability vector-based method has been developed for assessing classification uncertainty at the pixel scale. However, the use of this method is severely limited because the probability vector can be derived only through Bayesian classification. In practice, other classifiers such as artificial neural network, minimum distance, Mahalanobis distance and fuzzy classifiers are widely used for remote sensing data classification. To assess the uncertainty of maps produced by these classifiers at the pixel scale, this paper presents an extended probability vector method, which extends the probability vector-based method to classifiers other than the Bayesian classifier. The extension is realized through a transformation that converts the "membership vector" produced by the various classifiers into a "transformed probability vector" comparable to the probability vector of the Bayesian classifier. Finally, the probability residual and the entropy derived from the extended probability vector are used as indicators of absolute and relative uncertainty, respectively. The uncertainties of different classifiers are compared at the pixel scale, and some examples of pixel-scale uncertainty are illustrated.
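
A minimal sketch of the two indicators. The simple sum-to-one rescaling below is one obvious way to turn a membership vector into something probability-like, and is not necessarily the paper's transformation.

```python
import numpy as np

def transformed_probability_vector(membership):
    """Rescale a classifier's membership vector to sum to one (one simple choice)."""
    m = np.asarray(membership, dtype=float)
    return m / m.sum()

def uncertainty_indicators(p):
    p = transformed_probability_vector(p)
    residual = 1.0 - p.max()                  # absolute uncertainty at the pixel
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum() / np.log(p.size)  # relative, in [0, 1]
    return residual, entropy

print(uncertainty_indicators([0.1, 0.7, 0.2]))   # fairly certain pixel
print(uncertainty_indicators([0.3, 0.4, 0.3]))   # highly uncertain pixel
```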

 

An Integrated Framework for Assessing Uncertainties in Environmental Data, Illustrated for Different Types of Data and Different Complexities of Problem

James Brown1 and Gerard B.M. Heuvelink2

1 Institute for Biodiversity and Ecosystem Dynamics, Universiteit van Amsterdam, Nieuwe Achtergracht 166, 1018 WV Amsterdam, The Netherlands, brown@science.uva.nl

2 ALTERRA and Laboratory of Soil Science and Geology, Wageningen University and Research Centre, P.O. Box 37, 6700 AA, Wageningen, The Netherlands, gerard.heuvelink@wur.nl

Abstract. Understanding the limitations of environmental data is essential both for managing environmental systems effectively and for encouraging the responsible use of scientific research when knowledge is limited and priorities varied. Explicit assessments of data quality, and the uncertainties associated with data quality, are important in this context.

Using a combination of quantitative and qualitative techniques for assessing probabilities, and acknowledging the importance of possibilistic techniques where probabilities are inappropriate, an integrated methodology is presented for handling uncertainties about environmental data.

The methodology is based on a three-fold distinction between the magnitudes of uncertainty in data (and the ancillary information, such as data ‘support’, required to interpret this correctly), the sources of uncertainty in data, and the ‘goodness’ of an uncertainty model (critical self-reflection). In particular, emphasis is placed upon the conditions and parameters required for estimating quantitative probability models, for which a number of data categories are introduced, and on the application of simplifying assumptions to quantitative models of uncertainty.

The methodology is illustrated with examples for different types of environmental data, including a discrete time-series, a categorical spatial variable and a continuous space-time variable, and for different complexities of problem. Here, three scenarios are introduced, including an ‘information-rich’ case, where probabilistic estimates of uncertainty are easily made, an ‘intermediate case’, and an ‘information-poor’ case, where the perceived quality of data, as well as the ‘goodness’ of an uncertainty model, becomes more case (person) dependent.

Keywords: Data uncertainty; probability; possibility; sources of uncertainty

 

Deepening the Information of Environmental Indices

Francesca Bruno, Daniela Cocchi, Meri Raggi.

Università di Bologna, Italy

{bruno, cocchi, raggi}@stat.unibo.it

Abstract. The main role of environmental indices is the synthetic representation of the investigated phenomena, at the cost of losing some information. Environmental indices are currently used for different media, such as air pollution and water quality, to synthesize complex phenomena. Many features can be explored in depth to enrich the evaluation of a synthetic index, ranging from variability measures associated with indices to analyses aimed at recovering important information that is lost during index construction. The importance of adequately characterizing variability and uncertainty in the determination of indices has been emphasized in both scientific and policy-oriented documents. There is a variety of methods for characterizing uncertainty and variability, covering a broad range of complexity, from the simple comparison of discrete points to probabilistic techniques like Monte Carlo analysis. In this paper, we propose and discuss some methods for associating information about influential covariates and uncertainty measures with synthetic environmental indices.

Keywords: environmental index, synthetic evaluations, uncertainty.

 

Combining Precipitation Measurements from Multiple Sources for Estimation of Rainfall Rate and Associated Uncertainty

Tamre Cardoso, Peter Guttorp, and Sandra Yuter

University of Washington

Box 354322

Seattle, Washington 98195-4322

tamre@blarg.net

Abstract. Surface rain rate is an important climatic variable, and many entities are interested in obtaining accurate rain rate estimates. Rain rate, however, cannot be measured directly by currently available instrumentation. A hierarchical Bayes model is used as the framework for estimating rain rate parameters through time, conditional on observations from multiple instruments such as rain gauges, ground radars, and disdrometers. The hierarchical model incorporates relationships between physical rainfall processes and the data that are collected. A key feature of this model is the evolution of drop-size distributions (DSD) as a hidden process. An unobserved DSD is modeled as two independent components: 1) an AR(1) time-varying mean with GARCH errors for the total number of drops evolving through time, and 2) a time-varying exponential distribution for the size of drops. From the modeled DSDs, precipitation parameters of interest, including rain rate, are calculated along with associated uncertainty. This model formulation deviates from the common notion of rain gauges as "ground truth"; rather, information from the various precipitation measurements is incorporated into the parameter estimates. The model is implemented using Markov chain Monte Carlo methods.

Keywords: Precipitation, Rain rate, Hierarchical Models, MCMC

 

Iterated Confirmatory Factor Analysis for Pollution Source Apportionment

William F. Christensen

Brigham Young University

Department of Statistics

230 TMCB

Provo, UT 84602-6575

william@stat.byu.edu

Abstract. Many approaches for pollution source apportionment have been considered in the literature, most of which are based on the chemical mass balance equations. The simplest approaches for identifying the pollution source contributions require that the pollution source profiles are known. When little or nothing is known about the nature of the pollution sources, exploratory factor analysis, confirmatory factor analysis, and other multivariate approaches have been employed. In recent years, there has been increased interest in more flexible approaches which assume little knowledge about the nature of the pollution source profiles but are still able to produce nonnegative and physically realistic estimates of pollution source contributions. Confirmatory factor analysis can yield a physically interpretable and uniquely estimable solution, but requires that at least some of the rows of the source profile matrix be known. Here, we discuss the iterated confirmatory factor analysis (ICFA) approach, which takes on aspects of both confirmatory and exploratory factor analysis by assigning varying degrees of constraint to the elements of the source profile matrix while iteratively adapting the hypothesized profiles to conform to the data.

Keywords: Chemical mass balance, multivariate receptor model, air quality modeling

 

Statistical Methods for Estimating the Spatial Average Over an Irregularly-Shaped Study Region

Mary C. Christman

University of Maryland

mc276@umail.umd.edu

Abstract. In fisheries management, it is critical to have efficient estimators of quantities such as standing stock biomass or species abundance, since otherwise decisions rest on incomplete and possibly inaccurate information. Using data collected from fishery-independent surveys in the Chesapeake Bay (eastern U.S.), we compare several methods for estimating relative abundance from catch-per-unit-effort (CPUE) data. Of interest is estimating such quantities as average CPUE or total relative abundance for a study area that is irregular in shape. The methods are: an approximation to block kriging, approximate block kriging in the presence of trend, and design-based estimation based on multistage stratified random sampling. In this article we describe a method for estimating a spatial average and its SE using an approximation to block kriging that incorporates a trend component. What makes this work distinct from universal block kriging is the potential use of covariates other than the spatial indices common in universal kriging, and the use of block kriging over an irregular shape. We show that the kriging error for the spatial mean based on the new method is lower than the variance obtained under the design-based method. The method is general and can be applied in many similar situations.

Keywords: block kriging, stratified random sampling, fisheries, transects
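
The core device can be sketched as follows: the block mean is kriged by averaging point-to-block covariances over a discretization of the irregular region. The exponential covariance, its parameters, and the data below are all assumptions for illustration, and the trend component is omitted.

```python
import numpy as np
from scipy.spatial.distance import cdist

def cov(h, sill=1.0, rng_=500.0):
    return sill * np.exp(-h / rng_)           # assumed exponential covariance

def block_kriging_mean(xy, z, block_pts):
    """Ordinary kriging of the mean over a block discretised by block_pts."""
    n = len(xy)
    K = np.zeros((n + 1, n + 1))
    K[:n, :n] = cov(cdist(xy, xy))
    K[n, :n] = K[:n, n] = 1.0                 # unbiasedness constraint
    c0 = np.empty(n + 1)
    c0[:n] = cov(cdist(xy, block_pts)).mean(axis=1)   # point-to-block covariances
    c0[n] = 1.0
    w = np.linalg.solve(K, c0)
    est = w[:n] @ z
    cbb = cov(cdist(block_pts, block_pts)).mean()     # within-block covariance
    return est, cbb - w @ c0                  # estimate and kriging variance

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1000, (50, 2))            # hypothetical survey stations
z = np.sin(xy[:, 0] / 300) + rng.normal(0, 0.1, 50)   # hypothetical CPUE index
block = rng.uniform(200, 800, (400, 2))       # points discretising the region
print(block_kriging_mean(xy, z, block))
```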

 

Combining Different Information Sources in Evaluating the Uncertainty Associated with Air Pollutant Emission Inventories

Daniela Cocchi, Enrico Fabrizi, Carlo Trivisano*

Dipartimento di Scienze Statistiche "P. Fortunati"

Università di Bologna, Italy

trivi@stat.unibo.it

Abstract. The EU program CORINE aims to provide unified guidelines for estimating the volume of emissions in all member countries. The underpinning methodology is based on a deterministic model. There exist some proposals for evaluating the uncertainty associated to point estimates of emissions in most of the cases based on bootstrap methods. In this work, we propose alternative methods based on hierarchical Bayesian models. The use of Bayesian models allows us to integrate different data sources, like ad hoc empirical studies, results published in the literature and subjective expert evaluations. The goal is the construction of global uncertainty measures, able to keep the different components of uncertainty which are present in evaluating the level of emissions into account. These components are the measurement errors and the approximations that are implicit in defining the emission factors themselves. Also the residual heterogeneity of the single emission sources with respect to the estimation computed on the basis of the emission factors is relevant. The use of hierarchical models allows us to control jointly different sources of variability like the measurement errors and the variability associated to the definition of emission factors.

Keywords: emission inventories, hierarchical Bayesian models, uncertainty

 

 

Using Spatial Data to Model Animal Abundance as a Mixture Process

M. Elizabeth Conners

Research Fishery Biologist

NMFS Alaska Fisheries Science Center

7600 Sand Point Way NE, Seattle WA 98115-6349

liz.conners@noaa.gov, (206) 526-4465

Abstract. The estimation of animal abundance (and other environmental variables) is often hampered by a highly irregular or patchy distribution of the animal over the study area. Spatial patchiness is often attributed to specific habitat preferences, but in many cases the exact habitat required, or the abundance of different habitats in the study area, is unknown. This is especially true in fisheries, where many important habitat variables are difficult to observe. This presentation uses Atka mackerel in the Aleutian Islands (Alaska, U.S.) as an example of an organism with an extremely patchy spatial distribution. A model-based estimation of Atka mackerel abundance treats sample observations as arising from a mixture process, where the underlying distribution is a mixture of two or more components, and the component class associated with each observation is unobserved. A combination of historical data and physical covariates is used to create a spatial model of the component classes. This spatial model is then combined with sample observations to estimate component densities and overall abundance. This approach can provide both increased precision of abundance estimates and enhanced understanding of the target species.

Keywords: mixture distribution, spatial modeling, fisheries

 

Spatio-temporal Analysis of Extreme Values from Lichenometric Studies

and their Relationships to Climate

 

Daniel Cooley and Philippe Naveau

Dept of Applied Mathematics, University of Colorado,

Boulder, CO 80309-0526 USA

Daniel.Cooley@colorado.edu

Vincent Jomelli and Antoine Rabate

Laboratoire de Géographie Physique

CNRS-Meudon-Bellevue, France

Abstract. Arctic and alpine regions are very important for understanding the effects of climate change and other geophysical phenomena. The lack of relevant time series in such environments gave rise to lichenometry, the study of lichen growth for the purpose of dating rock features such as glacial moraines. Although lichenometry has been practiced for years, it has lacked a solid statistical basis. The statistical challenge is to propose a spatio-temporal model for extreme lichen diameters and to investigate the spatial structure between different glaciers with different environmental factors. We develop a spatio-temporal bivariate model (lichen sizes and their associated dates) based on extreme value theory. The statistical framework is a random effect model for the Generalized Extreme Value (GEV) distribution whose parameters vary as a function of the geographical location of the site and of temporal effects.

In addition to providing for the first time a probabilistic framework to the field of lichenometry, the flexibility of our statistical model allows us to integrate the error associated with the dating process (i.e. estimating the age of each moraine). To validate our statistical methodology, simulated examples were analyzed and tested. Finally, the proposed techniques are applied to 14 different glaciers in Bolivia.

Keywords: Climate Change, Maxima, Extreme Value Theory
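
At a single site, the building block of such a model is a GEV fit to the largest diameters; a minimal sketch with made-up measurements (the paper's random-effect structure across sites and dates is not shown):

```python
import numpy as np
from scipy.stats import genextreme

# hypothetical largest-lichen diameters (mm) from one moraine
diam_max = np.array([71.0, 68.5, 74.2, 69.9, 72.8, 70.4, 75.1, 67.3])

c, loc, scale = genextreme.fit(diam_max)      # maximum-likelihood GEV fit
print(f"shape={c:.3f}, location={loc:.1f}, scale={scale:.2f}")
# 0.95 quantile of the maximum diameter implied by the fitted GEV
print(genextreme.ppf(0.95, c, loc=loc, scale=scale))
```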

 

Characterizing Design-based Properties of a Spatial Sample to Quantify Design-based Variance of Model-based Estimators

Cynthia Cooper

Department of Statistics

Oregon State University,

44 Kidder Hall

Corvallis Oregon 97331 USA

cooper@stat.orst.edu

Abstract. The choice of a design-based or model-based (i.e. restricted randomization) strategy of sample design depends on the objectives of a study. Design-based objectives are employed for robust estimation of population totals or averages. Parameter estimation of an assumed underlying random process and kriging prediction are model-based objectives. There are circumstances where multiple objectives may suggest the application of both strategies. One may want a model-free estimate of population characteristics, but require model-based estimation for small areas. For policy makers, there are liabilities to both approaches. The design-based approach requires no model assumptions, but strictly design-based small-area estimates can have inadequate precision. A characterization of a sample by its inclusion probability density allows for a model-free approach to quantifying the model-based estimator variability, fortifying the basis of a model-based small-area prediction. This study characterizes some restricted random samples by their inclusion probability densities. The first- and second-order inclusion probability densities are applied to estimate the design-based variance of a model-based prediction. The inclusion probability densities are used to weight the contributions of observations to a model-based estimate or prediction (such as kriging) in order to determine the design-based variability of the model-based estimator.

Keywords: Design-based variance, model-based variance, small area estimation, continuous domain sampling, kriging

 

A Comparison of Fisher’s Linear Discriminant Function and the Maximum Likelihood Method for Land Cover Classification

Miriam A. Cope

Center for GIS Research

California State Polytechnic University, Pomona

macope@csupomona.edu

R. Gil Pontius, Jr.

Clark University

rpontius@clarku.edu

Abstract. This paper examines error due to location and error due to quantity in land cover classification using Fisher’s Linear Discriminant Function and Maximum Likelihood. An artificial data set designed to create perfectly known means and variances of cases was divided into two samples to calibrate and validate the model. For each classifier, impacts of pure, impure and missing signatures were examined. Results demonstrated almost equal classification accuracy (overall Kappa) for both classifiers. However, differences occurred in the agreement between the classification image and truth image for each classifier with respect to accuracy of location and accuracy of quantity.

Keywords: Kappa index of agreement, error, Fisher’s linear discriminant function, maximum likelihood
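
The comparison can be reproduced in outline with scikit-learn; quadratic discriminant analysis with per-class covariances corresponds to the Gaussian maximum likelihood classifier common in remote sensing (up to the handling of priors). The synthetic signatures below are illustrative, not the paper's data.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
# synthetic spectral signatures with known means and variances
means = np.array([[40.0, 30.0], [55.0, 45.0], [70.0, 35.0]])
X = np.vstack([rng.multivariate_normal(m, np.diag([25.0, 16.0]), 300) for m in means])
y = np.repeat([0, 1, 2], 300)

cal, val = slice(0, None, 2), slice(1, None, 2)   # calibration/validation halves
for name, clf in [("Fisher LDA", LinearDiscriminantAnalysis()),
                  ("Gaussian ML (QDA)", QuadraticDiscriminantAnalysis())]:
    clf.fit(X[cal], y[cal])
    kappa = cohen_kappa_score(y[val], clf.predict(X[val]))
    print(name, "overall kappa =", round(kappa, 3))
```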

 

The Effects of Blurred Plot Coordinates on Ozone Risk Assessments

John Coulston

North Carolina State University

jcoulson@fs.fed.us

Greg Reams

USDA Forest Service

Southern Research Station

3041 Cornwallis Road

Research Triangle Park, NC 27709 USA

greams@fs.fed.us

Gretchen Smith

University of Massachusetts at Amherst

Abstract. Tropospheric ozone occurs at phytotoxic levels in the United States. The USDA Forest Service Forest Inventory and Analysis Program (FIA) employs a biomonitoring approach to document direct foliar injury to plants from ozone on a national scale. Analysts use this information to estimate the area of forestland and the biomass of susceptible species in four ozone injury risk categories. The estimates are based on the intersection of an interpolated ozone injury surface and other geospatial data such as forest cover or forest biomass. Typically, kriging or inverse distance weighting interpolators are used. These interpolators require the coordinates of each ozone biomonitoring plot. However, because of privacy issues, FIA uses two methods to manipulate plot locations to ensure landowner privacy. The influence these manipulations have on the accuracy of both estimates described above is unknown. We investigate the influence by comparing interpolated surfaces created from actual coordinates and manipulated coordinates using cross-validation techniques. Next we examine the impact that the coordinate manipulation has on estimates of the area of forestland and the biomass of susceptible species in four ozone injury risk categories. We expect this analysis to provide insight into the acceptable upper limit for coordinate manipulation of biomonitoring plot locations.

Keywords: forest inventory and analysis, spatial statistics, cross-validation, biomonitoring, Food Security Act of 1985
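
A leave-one-out cross-validation comparison of surfaces built from true and blurred coordinates can be sketched with a simple IDW interpolator; all data below are simulated, and `injury` is a hypothetical biosite index.

```python
import numpy as np

def idw(xy, z, q, power=2.0):
    """Inverse-distance-weighted prediction at query points q."""
    d = np.linalg.norm(xy[:, None, :] - q[None, :, :], axis=2) + 1e-12
    w = d ** -power
    return (w * z[:, None]).sum(axis=0) / w.sum(axis=0)

def loo_rmse(xy, z):
    """Leave-one-out cross-validation error of the IDW surface."""
    preds = np.array([idw(np.delete(xy, i, 0), np.delete(z, i), xy[i:i+1])[0]
                      for i in range(len(z))])
    return np.sqrt(np.mean((preds - z) ** 2))

rng = np.random.default_rng(7)
xy_true = rng.uniform(0, 100, (80, 2))                  # true plot locations
injury = np.sin(xy_true[:, 0] / 20) + rng.normal(0, 0.1, 80)
xy_blur = xy_true + rng.normal(0, 2.0, xy_true.shape)   # 'blurred' locations
print(loo_rmse(xy_true, injury), loo_rmse(xy_blur, injury))
```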

 

 

An Object-based Method for Quantifying Errors in Meteorological Models

C. A. Davis, B. G. Brown, R. Bullock, M. Chapman, K. Manning, and A. Takacs

National Center for Atmospheric Research

P.O. Box 3000

Boulder, CO 80307 USA

cdavis@ucar.edu

Abstract. Measures-based characterizations of forecast error (e.g. root-mean-squared error) fail to provide useful information as increasingly complex spatial structures become evident in numerical weather forecasts. The perceived utility of such forecasts often lies in their ability to predict localized and episodic events. Subtle forecast timing and location errors yield low skill scores by traditional measures because the phenomena of interest (e.g. precipitation, turbulence, icing) contain large spatial gradients. Yet the occurrence of such features in forecasts can provide forecasters with important clues to the possible occurrence of important weather events. An alternative verification strategy is to decompose a numerical forecast into objects whose positions and attributes can be objectively compared between models and observations. In this talk, we describe a recently developed method for defining rain areas for the purpose of verifying precipitation produced by numerical models integrated on grids as fine as 4 km. The phenomenological focus is on heavy rainfall systems during the warm season. The model is the recently developed Weather Research and Forecasting (WRF) model, designed for numerical prediction on grids of 1-10 km spacing and time scales of 0-48 h. Observations consist of a national precipitation analysis (the so-called Stage IV product, on a 4-km grid, from the National Centers for Environmental Prediction). Objects are defined in both forecasts and observations based on a convolution (smoothing) and thresholding procedure. Nearby objects are merged according to a rule set involving separation distance and orientation. Objects in the two datasets are matched, and a statistical analysis of matched pairs is performed. In addition, the raw rainfall values within each object are retained, and the distribution of intensities is analyzed as another object attribute. Extension of this method to a wide variety of environmental forecast verification problems is also discussed. This approach allows us to assess the spatial accuracy of these types of forecasts more appropriately than previously used methodologies.

Keywords: Forecast verification, precipitation, object identification, spatial forecasts
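
The convolution-threshold-label pipeline at the heart of the method is easy to sketch with SciPy. Parameter values and the nearest-centroid matching rule below are placeholders, and the paper's merge rules involving orientation are omitted.

```python
import numpy as np
from scipy import ndimage

def rain_objects(field, sigma=2.0, thresh=5.0):
    """Convolve (smooth), threshold, and label contiguous rain areas."""
    smooth = ndimage.gaussian_filter(field, sigma)
    mask = smooth >= thresh
    labels, n = ndimage.label(mask)
    cents = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return labels, cents

def match_objects(cen_fcst, cen_obs, max_dist=20.0):
    """Pair each forecast object with the nearest observed object, if close enough."""
    pairs = []
    for i, cf in enumerate(cen_fcst):
        d = [np.hypot(cf[0] - co[0], cf[1] - co[1]) for co in cen_obs]
        j = int(np.argmin(d))
        if d[j] <= max_dist:
            pairs.append((i, j, round(d[j], 1)))
    return pairs

field = np.zeros((100, 100))
field[20:30, 20:30] = 12.0                 # a synthetic 'rain area'
shifted = np.roll(field, (4, 6), axis=(0, 1))
_, cf = rain_objects(field)
_, co = rain_objects(shifted)
print(match_objects(cf, co))               # one matched pair, displacement ~7 units
```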

 

Aerial Camera Network Design for 3D Urban Battlefield Reconstruction

Benoit Debaque

Systèmes d'Imagerie Évolués/Advanced Imaging Systems

INO

Benoit.Debaque@ino.ca

Abstract. A 3D urban world can be recovered from ground, aerial or satellite imagery. This paper deals with the problem of where to place aerial cameras in order to obtain a minimal error when reconstructing the urban area. This problem is also known as flight planning in aerial triangulation in the photogrammetric community. Software exists for automatic flight planning in aerial triangulation, assuming that no hostile areas will be flown over. Here we propose to place the camera points of view so as to minimize the 3D reconstruction error while also avoiding such areas. The problem also has to be tackled for regular flight planning, since some buildings or monuments cannot be flown over.

 

Multidimensional User Manual (MUM): A Tool to Manage and Communicate Data Quality Information

Rodolphe Devillers 1,2, Yvan Bédard1 and Robert Jeansoulin3

1 Centre de Recherche en Géomatique (CRG), Pavillon Casault, Université Laval, Québec, G1K 7P4, Canada

rodolphe.devillers.1@ulaval.ca

2 Université de Marne-la-Vallée, Institut Francilien des GéoSciences, France

3 Laboratoire des Sciences de l’Information et des Systèmes (LSIS), Centre de Mathématiques et d’Informatique (CMI), Université de Provence (Aix-Marseille I), 39 Rue Joliot Curie, 13453 Marseille Cedex 13, France.

Abstract. Geospatial data are increasingly being used in various domains such as natural resources, environment, transportation and urban planning. Experts in geomatics as well as non-expert users can easily access geospatial data, display them on inexpensive GIS or free viewers, and manipulate them to support decision processes, often without any knowledge of the quality of the data used (e.g. precision, completeness, currency). This frequently leads to misunderstandings about the possible uses of data and their inherent limits, sometimes resulting in important social, economic or legal problems.

This paper presents a project named Multidimensional User Manual (MUM) that aims at decreasing the risks of data misuse by providing contextual and aggregated information to geospatial data users regarding different aspects of data quality. Information about data quality is integrated and structured at different levels of aggregation within a multidimensional database. It is then communicated to users via a cartographic interface with contextual indicators providing information on different aspects of data quality. Users can easily and rapidly navigate through several types of quality information at different levels of detail using OLAP (On-Line Analytical Processing) operators. Quality information can also be mapped in order to visualize its spatial heterogeneity.

Keywords: Geospatial data quality, Quality-aware GIS, quality visualisation, Multidimensional database, Metadata

 

Non-parametric Confidence Bands for β Diversity Profiles

T. Di Battista and S.A. Gattone

Dipartimento di Metodi Quantitativi e Teoria Economica

Università "G. D’Annunzio" Chieti

Viale Pindaro, 42 65127 Pescara Italy

dibattis@dmqte.unich.it

Abstract. The aim of this work is to construct simultaneous confidence bands for β-diversity profiles, a parametric family of diversity indices for a biological population. In this framework, drawbacks arise when simultaneous inference has to be performed. Moreover, biologists rarely have large sample sizes at hand, so the derivation of asymptotic sampling distributions may be hazardous. We try to overcome these problems by building simultaneous confidence bands using a non-parametric bootstrap procedure.

Keywords: Biological population, Biodiversity, Bootstrap, Simultaneous Confidence Regions.
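
The bootstrap machinery can be sketched as below for the Patil-Taillie profile (one common choice of β-diversity profile; the species counts are made up). Note that the quantiles shown give a pointwise band; a genuinely simultaneous band requires additional calibration, e.g. over the supremum across β.

```python
import numpy as np

def beta_profile(counts, betas):
    """Patil-Taillie diversity profile evaluated at the given beta values."""
    p = counts / counts.sum()
    p = p[p > 0]
    return np.array([(1 - (p ** (b + 1)).sum()) / b for b in betas])

def bootstrap_band(counts, betas, B=2000, level=0.95, seed=0):
    """Pointwise bootstrap band for the profile (multinomial resampling)."""
    rng = np.random.default_rng(seed)
    n = int(counts.sum())
    p = counts / counts.sum()
    boot = np.array([beta_profile(rng.multinomial(n, p), betas) for _ in range(B)])
    return np.quantile(boot, [(1 - level) / 2, (1 + level) / 2], axis=0)

betas = np.linspace(0.1, 3.0, 30)            # avoid beta = 0 (Shannon limit)
counts = np.array([120, 60, 30, 15, 8, 4])   # hypothetical species abundances
lo, hi = bootstrap_band(counts, betas)
print(lo[0], hi[0])                          # band at beta = 0.1
```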

 

Effect of Support Size on the Accuracy of Spatial Models:

Findings of Rockfall Simulations on Forested Slopes

 

Luuk Dorren*,1, Gerard Heuvelink2 and Frédéric Berger1

1 Cemagref; 2, rue de la Papeterie, B.P. 76, 38402, Saint Martin d’Hères

France, Tel: +33 4 7676 2806; luuk.dorren@cemagref.fr

2 Laboratory of Soil Science and Geology, Wageningen University; P.O. Box 37, 6700 AA Wageningen, The Netherlands; gerard.heuvelink@wur.nl

Abstract. The accuracy of model output is expected to increase with a decreasing support size of the input data, due to the increased level of detail. This paper examines whether this is true for various spatial models developed for simulating rockfall. We analyze the effect of the support size on the accuracy of a set of models and their parameters. The validation data were obtained from real-size rockfall experiments in which high-speed video cameras recorded the trajectories and velocities of more than 200 individual falling rocks with diameters between 0.8 and 1.5 meters. These observed data are thoroughly compared with the output of the various models. One of the main findings is that a larger support size can be a more important cause of model error than poor data quality.

 

Comparison of Populations with Negative Binomial Distribution When Parameters Depend on Covariates

Lucie Doudová

Masaryk University

Brno

Czech Republic

ldoudova@centrum.cz

Abstract. Data with a negative binomial distribution with parameters μ and κ will be considered in this contribution. If κ is known, the standard GLM technique can be applied. The situation where κ is unknown and depends on covariates is more complicated. The study will focus on the comparison of populations with negative binomial distributions where both parameters μ and κ depend on covariates. The robustness of the comparison will then be studied by simulation.

Finally, the obtained results will be applied to an environmental study. Populations of roe deer (Capreolus capreolus) from different territories in the Czech Republic will be compared and analysed.
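
For the known-κ baseline, the fit is a standard negative binomial regression; a minimal sketch with simulated counts and one covariate (statsmodels estimates the dispersion as α = 1/κ by maximum likelihood):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.uniform(0, 1, 200)               # hypothetical territory covariate
X = sm.add_constant(x)
mu = np.exp(0.5 + 1.2 * x)               # mean through a log link
k = 1.5                                  # NB shape parameter, fixed here
y = rng.negative_binomial(k, k / (k + mu))

res = sm.NegativeBinomial(y, X).fit(disp=0)
print(res.params)                        # intercept, slope, alpha = 1/kappa
```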

 

Statistical Inverse Methods for Marine Ecosystem Models

Michael Dowd

Dept. of Mathematics & Statistics

Dalhousie University

Halifax, N.S. Canada, B3H 3J5

Phone: 902-494-1048

Fax:902-494-5130

mdowd@mathstat.dal.ca

Abstract. Statistical approaches to the inverse problem associated with combining time-dependent dynamic models of marine ecosystems with observations are investigated. The goal is to combine prior information, in the form of model dynamics and substantive knowledge about uncertain parameters, with available measurements in order to produce posterior estimates of time-varying ecological state variables, along with their uncertainty. Ecological models of interacting populations are represented as stochastic difference equations. The estimation, or inverse, problem is then treated from the perspective of nonlinear, nonGaussian state space models. Some simple marine ecosystem models and observations are used to illustrate important aspects of the Bayesian approach to this inverse problem.

Keywords: ecosystem, state space models, inverse problems
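
A toy illustration of the state-space estimation problem: a bootstrap particle filter tracking a stochastic logistic-growth "population" from noisy observations. The dynamics, noise levels and observation model are all invented for this sketch, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def step(x):
    """Stochastic logistic growth: a toy stand-in for ecosystem dynamics."""
    return x + 0.3 * x * (1 - x / 10.0) + rng.normal(0, 0.2, x.shape)

# simulate a 'true' state trajectory and noisy observations of it
T, N = 50, 1000
truth = np.empty(T)
truth[0] = 2.0
for t in range(1, T):
    truth[t] = step(truth[t-1:t])[0]
obs = truth + rng.normal(0, 0.5, T)

# bootstrap particle filter: posterior mean of the state given the observations
particles = rng.normal(2.0, 1.0, N)
post_mean = np.empty(T)
for t in range(T):
    if t > 0:
        particles = step(particles)                      # propagate prior dynamics
    w = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2) # Gaussian likelihood
    w /= w.sum()
    particles = rng.choice(particles, N, p=w)            # resample by weight
    post_mean[t] = particles.mean()
print(np.round(post_mean[:5], 2))
```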

 

Is It Possible to Model DEM Uncertainty with a Single Model of Spatial Data Errors?

Charles R. Ehlschlaeger

Department of Geography

Western Illinois University

1 University Circle

Macomb, IL 61455 USA

cre111@wiu.edu

Abstract. Modeling digital elevation model (DEM) uncertainty can be as "simple" as developing a distribution of application results using Monte Carlo simulation (MCS). MCS requires equiprobable realizations of input maps and parameters in order for the distribution of application results to be an accurate reflection of what could be reality. For a DEM, this process involves designing a single model that represents the distribution of elevation errors and a measure of spatial autocorrelation. Commonly, conditional stochastic modeling using mathematical principles from the kriging process is used to develop the DEM uncertainty model.

Two critical questions need to be asked: 1) Is the kriging process an appropriate model for developing a distribution of DEM errors? 2) Can any single DEM uncertainty model be representative for every application? This paper explores these questions and proposes a methodology similar to Beven’s Generalized Likelihood Uncertainty Estimation (GLUE) process.

Keywords: DEM, uncertainty modeling, Monte Carlo simulation

 

An Iterative Uncertainty Assessment Technique for Environmental Modeling

D.W. Engel, A.M. Liebetrau, K.D. Jarman, T.A. Ferryman, T.D. Scheibe, and B.T. Didier

Pacific Northwest National Laboratory

P.O. Box 999

Richland, Washington 99352 USA

dave.engel@pnl.gov

Abstract. The reliability of and confidence in predictions from model simulations are crucial because these predictions can significantly affect risk assessment decisions. For instance, the fate of contaminants at the Department of Energy’s Hanford site is a critical problem that impacts long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties due to model input uncertainties (which include uncertainties in physical or fitted parameters, in initial and boundary conditions, and in model parameterizations). The approach is designed for application to a wide diversity of problems across multiple scientific domains. An analysis has been performed on a simplified contaminant fate transport and groundwater flow model to illustrate this approach. The results show that our iterative method for computing uncertainty estimates of specified precision is more efficient (i.e., requires less computing time) than traditional approaches based upon simple random sampling and other non-iterative sampling methods.

Keywords: iterative, uncertainty, risk, groundwater

 

Two-stage Wavelet Analysis Assessment of Dependencies in Time Series of Disease Incidence

Nina H. Fefferman, Jyotsna S. Jagai, Elena N. Naumova

Tufts University School of Medicine

Family Medicine and Community

Health

Tufts University

136 Harrison Ave. Boston, MA 02140

elena.naumova@tufts.edu

Abstract. In epidemiology, techniques that examine periodicity in time series data can be used to understand weekly, biannual or seasonal patterns of disease. However, a simple understanding of periodicity is not sufficient to examine the possible influence of variation in incubation period, distributed sources of infection, and environmental factors, especially if these influences affect the rate of disease on various spatio-temporal scales.

In order to examine the feasibility of using wavelets to assess dependencies over different spatio-temporal scales in a time series of disease incidence, we abstracted 10 years of daily records of ambient temperature and precipitation in addition to daily disease incidence data for Massachusetts for five enterically transmitted diseases.

We eliminated periodic fluctuation in both seasonal and weekly case reporting using various techniques (Fourier transformation, "loess" smoothing, and ARIMA) on each time series of disease data. Different methods were employed in order to examine the possible effect of removed periodicities on the variance of the data. We then performed a wavelet decomposition to examine the residuals from these analyses on a variety of temporal scales and examined the resulting correlations to the environmental data.

Keywords: wavelet decomposition, variance heterogeneity, biosurveillance, outbreak modeling
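
The decomposition step can be sketched with PyWavelets on a simulated incidence series: remove the seasonal cycle (here a one-harmonic Fourier fit; the paper also tried loess and ARIMA), then examine the residual variance by scale.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
t = np.arange(3650)                          # ~10 years of daily reports
seasonal = 5 + 3 * np.sin(2 * np.pi * t / 365.25)
cases = rng.poisson(np.clip(seasonal, 0.1, None)).astype(float)

# remove the seasonal cycle first (a one-harmonic Fourier fit),
# then decompose the residual series by scale
design = np.column_stack([np.ones_like(t, dtype=float),
                          np.sin(2 * np.pi * t / 365.25),
                          np.cos(2 * np.pi * t / 365.25)])
coef, *_ = np.linalg.lstsq(design, cases, rcond=None)
resid = cases - design @ coef

coeffs = pywt.wavedec(resid, 'db4', level=6)
for lev, c in enumerate(coeffs[1:], start=1):
    print(f"detail level {lev}: variance {np.var(c):.2f}")
```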

 

Moderate Resolution Maps of Forest Characteristics: Challenges in Their Development and Assessment of Their Appropriate Uses

Mark Finco

Remote Sensing Band

Forest Inventory and Analysis

USDA Forest Service

Salt Lake City, UT

mfinco@fs.fed.us

Abstract: USFS Forest Inventory and Analysis (FIA) data have historically been used to produce estimates of forest population totals over large geographic areas. Recent emphasis has been placed on delivering this traditional forest resource information to a larger and more diverse audience by producing regional maps of forest characteristics. Maps provide a more flexible, easily understood, and visually accessible product that enhances the general utility of the information traditionally provided in tables and graphs. Over the last year, the FIA Remote Sensing Band and the USFS Remote Sensing Applications Center (RSAC) have collaboratively developed a strategy for producing national forest characteristics geospatial datasets. The first of these national maps is of forest biomass across the contiguous United States, Alaska and Puerto Rico.

In this paper we investigate and discuss the limitations and appropriate application of these maps. In this age of Geographic Information Systems (GIS), there is a great temptation to misuse moderate resolution datasets by calculating and comparing summaries for relatively small areas. Standard remote sensing methodologies for calculating map accuracy based on a confusion matrix are inadequate for providing the map user with concrete guidance on how the map should be used. Here we present alternatives to the confusion matrix for assessing map accuracy and discuss appropriate applications of the FIA national biomass map.

 

Neighbor Comparisons of Cloud Ceiling Height and Visibility Observations

Tressa L. Fowler*, Jamie T. Braid, and Anne Holmes

Research Applications Program

National Center for Atmospheric Research

P.O. Box 3000

Boulder, CO 80307-3000

tressa@ucar.edu

Abstract. Ceiling and visibility forecasts are issued for use in flight planning, particularly for general aviation. METAR station information is used for verification of these forecasts. Unfortunately, stations are sparse in many areas and quite dense in others. Additionally, the quality and type of information available from METAR stations may differ considerably depending on whether the station is automated or manual, the location, the type of instrumentation, the level of training and expertise of a manual observer, etc. Thus, in order to reduce the effect of these differences on verification, it is desirable to identify a consistent subset of METAR stations to be used for verification.

A comparison of stations with their neighbors can aid in the identification process. All METAR stations in the CONUS are matched with their neighbors within 20 and 40 kilometer radii. The agreement of ceiling and visibility observations between the stations is assessed. Disagreement may be due to erroneous observations or environmental conditions that differ considerably over the distance between stations. Preliminary findings of the neighbor comparison indicate that observations between neighboring stations tend to correspond for a high percentage of the non-events. However, during ceiling and visibility events, measurements at neighboring stations correspond less often.
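
The neighbor-matching step can be sketched with a k-d tree; the four stations below are hypothetical, and a crude equirectangular projection to kilometres stands in for proper geodesic distances.

```python
import numpy as np
from scipy.spatial import cKDTree

# hypothetical station coordinates (degrees)
lon = np.array([-105.10, -105.00, -104.80, -97.40])
lat = np.array([40.00, 40.05, 39.90, 35.20])

# crude equirectangular projection to km so the radius query is metric
R = 111.32                                   # km per degree of latitude
xy = np.column_stack([lon * R * np.cos(np.radians(lat)), lat * R])

tree = cKDTree(xy)
print(sorted(tree.query_pairs(r=20.0)))      # station pairs within 20 km
print(sorted(tree.query_pairs(r=40.0)))      # ... and within 40 km
```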

 

Evaluating Patterns of Spatial Relations to Enhance Data Quality

David Gadish

California State University Los Angeles

5151 State University Drive

Los Angeles, CA, 90032

dgadish@calstatela.edu

Abstract - Effective use of data stored in spatial databases requires methods for evaluation and enhancement of the quality of the data. Spatial data quality can be evaluated using a measure of internal validity, or consistency, of a data set. Capturing spatial data consistency is possible with a multi-step approach.

A distance measure is used to detect implicit spatial relations between neighboring objects. The next step involves identifying the types of relations between these neighboring objects using topology based constraints.

The semantic information of objects, together with topological relations are combined to discover patterns, or rules, in the data. These rules are based on the analysis of the relations between each object and each of its neighbors, as well as between each object and all of its neighbors.

Patterns of spatial relations, represented as rules, are validated using available metadata, as well as trend analysis and Monte Carlo simulation techniques. These rules can then serve as the basis for automated detection of inconsistencies among spatial objects, where possible inconsistencies are flagged when one or more rules are violated. Detected inconsistencies can then be adjusted, thus increasing the quality of spatial data sets.

Key Words - Consistency, Patterns, Spatial Relations

 

Analysis of Sulphur Dioxide Trends Across Europe

Marco Giannitrapani, Ron Smith, Marian Scott, and Adrian Bowman

Department of Statistics
Mathematics Building
University Gardens
University of Glasgow
G12 8QW Glasgow, Scotland, United Kingdom
marco@stats.gla.ac.uk

Abstract. From the 1970s, a co-ordinated international programme monitoring acidifying air pollution was initiated in direct response to observed acidification. At the same time, several international protocols on the reduction of acidifying and eutrophying emissions (SO2, SO4, etc.) were also agreed.

This work presents an evaluation of the observed spatial and temporal trends in SO2 in Europe for the last quarter of the twentieth century, on the basis of data from EMEP (Co-operative Programme for Monitoring and Evaluation of the Long-Range Transmission of Air Pollutants in Europe). The policy question of interest is whether the protocols have resulted in a real improvement in environmental quality and a real change in the acidifying environment.

In the first part of the work, we report on non-parametric modelling of the temporal trends, accounting for the effect of meteorological covariates using generalised additive models (GAMs). The need for circular smoothers for some covariates (specifically wind direction and week of the year), and the correlation of the data, led us to develop and fit a generalised version of a local linear regression smoother. The model fitting used a reformulated version of the back-fitting algorithm that provides the projection matrix at convergence, which is used for model testing purposes. A generalised version of a bivariate local linear regression smoother has also been fitted and tested against the additive model, in order to test for changes in seasonality.

The second part of the work has considered the spatial patterns in the SO2 field and their temporal evolution. A regression surface has been fit to each month, the spatial correlation modelled and a time series analysis of the spatial parameters carried out.

At each time point, different spatial surfaces (i.e. linear models, nonparametric smoothers, etc.) have been fitted, and variograms have been fitted to the residuals. Kriging analyses have produced estimates of the spatial distributions.

We report on our findings.

 

Optimizing METAR Network Design for Verification of Cloud Ceiling Height and Visibility Forecasts

Eric Gilleland

National Center for Atmospheric Research

Research Applications Program

Boulder, CO 80307-3000

ericg@ucar.edu

Abstract. Methods are given and explored for thinning METAR stations in order to make more meaningful verification analyses of cloud ceiling and visibility forecasts. Verification of these forecasts is performed using data from surface METAR stations, which are densely located in some areas and only sparsely located in others. Forecasts, which are made over an entire grid, may be penalized multiple times for an incorrect forecast if many METAR stations are situated closely together. A coverage design technique in conjunction with a probability of detection analysis is employed to find an "optimal" network design to better score forecasts over densely instrumented regions. Preliminary results for a network of 48 monitors in the San Francisco Bay area suggest that the removal of some stations would provide a more accurate verification of forecasts.
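
One simple space-filling stand-in for the thinning idea is greedy distance-based selection; this is an illustration only, not the coverage design technique used in the paper.

```python
import numpy as np

def thin_stations(xy, min_dist):
    """Greedy thinning: keep a station only if every already-kept station
    is at least min_dist away (coordinates in km)."""
    kept = []
    for i, p in enumerate(xy):
        if all(np.hypot(*(p - xy[j])) >= min_dist for j in kept):
            kept.append(i)
    return kept

rng = np.random.default_rng(4)
stations = rng.uniform(0, 100, (48, 2))     # hypothetical 48-station network
print(len(thin_stations(stations, 15.0)))   # stations surviving a 15 km spacing rule
```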

 

Analysis of U.S. Rainfall Using a Threshold Model in Extreme Value Theory

Amy Grady

National Institute of Statistical Sciences

agrady@niss.org

Richard L. Smith

Department of Statistics

University of North Carolina at Chapel Hill

rls@email.unc.edu

Abstract. In the past decade, marked emphasis in climate change research has been placed on extremes and their environmental impacts. In particular, precipitation has been and continues to be studied, not only because current research indicates positive trends but also because of the direct impacts precipitation extremes can have – e.g. floods, landslides, and infrastructural damage. Focusing on the continental U.S., the general consensus within the climate change community indicates significant positive trends in both average rainfall and its extremes across most of the U.S. In fact, work of Karl et al. of the National Climatic Data Center suggests that the trends in the extremes are driving the trends in the total rainfall. To date, most of the analytical work on these data has been largely empirical – first aggregating the data across large regions into summary statistics and then performing trend analyses on these aggregated statistics. As increased emphasis is placed on impact analysis and on the validation of general circulation models with respect to extremes, the need for more sophisticated statistics has also increased. Here, we analyze the individual precipitation series from 5873 stations across the continental U.S. from the Historical Climatology Network using a threshold model from extreme value theory. To combine the stations, we utilize a spatial integration model which takes into account not only a spatial covariance that models the dependence between the stations’ parameters but also measurement error.

Keywords: U.S. Precipitation, Extreme Value Theory, Threshold Models, Spatial Integration Models

 

Using Zero-inflated Count Assumptions to Model Macroinvertebrate Abundance Data

Brian Gray1, Roger Haro2, Jim Rogala1 and Jennifer Sauer1

1Upper Midwest Environmental Sciences Center

U.S. Geological Survey, La Crosse, WI USA

brgray@usgs.gov

2River Studies Center

University of Wisconsin, La Crosse

La Crosse, WI USA

Abstract. Zero-inflated count distributional assumptions have become increasingly popular for modelling counts with "excess zeroes." This modelling approach assumes that structural zeroes complement zeroes arising from count distributional assumptions. For this reason, zero-inflated approaches have been helpful for modelling ostensible count data with zeroes deriving from observer error or from sampling under circumstances unsuitable for generating counts. The benefits and potential drawbacks associated with zero-inflated count models are discussed with reference to zero-inflated negative binomial (Poisson-gamma mixture) models of mayfly abundance data. For these data, a count model was substantially improved by the addition of a postulated zero process (χ2 LRT statistic = 78.0, df = 3). The model assumed mean counts varied across a range of habitat conditions while the zero process varied by “suitable,” “intermediate” and “unsuitable” habitat categories. The odds of a zero process were estimated as 0.00, 0.25 and 3.90, respectively. Negative binomial dispersion parameter estimates decreased following inclusion of the zero process component, notably for heterogeneous, high-energy habitats. This example demonstrates that zero-inflated models may usefully augment count models in circumstances where count processes appear unlikely.
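
A minimal version of such a model can be fitted with the statsmodels package, as sketched below on simulated data (the mayfly data are not reproduced here, and the covariates are invented); a zero-inflated negative binomial is compared against a plain negative binomial via a likelihood ratio statistic, mirroring the χ2 comparison quoted above.

# Zero-inflated negative binomial sketch with simulated abundance data.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(3)
n = 500
habitat = rng.uniform(0, 1, n)                    # habitat condition covariate
mu = np.exp(0.5 + 1.5 * habitat)                  # count mean varies with habitat
counts = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))
# Structural zeros are more likely in "unsuitable" (low) habitat.
structural_zero = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(4.0 * habitat - 1.0))
counts[structural_zero] = 0

X = sm.add_constant(habitat)                         # count-model design
Z = sm.add_constant((habitat < 0.33).astype(float))  # zero-process design
zinb = ZeroInflatedNegativeBinomialP(counts, X, exog_infl=Z, p=2).fit(maxiter=200)
nb = sm.NegativeBinomial(counts, X).fit(maxiter=200)
lrt = 2 * (zinb.llf - nb.llf)                        # LR statistic for the zero process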

Keywords: negative binomial distribution, LTRMP, zero inflated count models

 

Characterization of the Spatial and Parameter Variability in a Subtropical Wetland

S. Grunwald, B.E. Weinrich, K.R. Reddy, and J. P. Prenger

Soil and Water Science Department

University of Florida

2169 McCarty Hall

PO Box 110290

Gainesville, FL 32611, USA

SGrunwald@mail.ifas.ufl.edu

Abstract. The eutrophication of subtropical wetland ecosystems results in changes of biogeochemical patterns, pedo- and biodiversity, and ecological function. We investigated an impacted subtropical wetland, which is undergoing natural succession. We collected 20 biogeochemical soil properties at 267 sites to characterize the current ecological status. This exhaustive dataset served as a reference. Our goal was to identify properties which accounted for much of the spatial and the parameter variability. We used Conditional Sequential Gaussian Simulation (CSGS) to generate realizations of biogeochemical properties to characterize spatial patterns and to assess explicitly the uncertainty of predictions. We used Principal Component (PC) Analysis to transform a number of possibly correlated variables into a smaller number of uncorrelated variables, reducing the dimensionality of the attribute space. CSGS was used to generate realizations of the PCs. Each biogeochemical property was mapped onto a PC indicating its significance in explaining variability in the dataset. We randomly reduced the number of observations before generating realizations and tested the accuracy with a validation dataset. Results for P indicated that a sample density of > 2.52 sites/100 ha was the minimum needed to reproduce the spatial patterns and variability across the site.

Our results are valuable to document the current ecological spatial patterns in this wetland and the constraints to characterize spatial and parameter variability.
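
The dimension-reduction step can be sketched as follows, with simulated stand-ins for the 20 properties at 267 sites; only the PCA step is shown, since a faithful CSGS implementation is beyond a short example.

# PCA sketch: correlated biogeochemical properties compressed into a few
# uncorrelated components (all data simulated for illustration).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
latent = rng.normal(size=(267, 3))                 # a few underlying factors
loadings = rng.normal(size=(3, 20))
props = latent @ loadings + rng.normal(0, 0.5, size=(267, 20))

pca = PCA(n_components=5)
scores = pca.fit_transform(StandardScaler().fit_transform(props))
print(pca.explained_variance_ratio_.round(2))      # variance explained per PC
# Each property's loading on a PC indicates how much of its variability
# that component carries; the PC scores can then be simulated with CSGS.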

Keywords: stochastic simulation, spatial variability, parameter variability, biogeochemical properties, wetlands

 

The Adoption of Variance Based Methods for Sensitivity Analysis of a Combined (slope) Hydrological and Stability Model (CHASM)

N. Hamm*, J. W. Hall* and M. G. Anderson**

*Department of Civil Engineering

**School of Geographical Sciences

University of Bristol

Queen's Building

University Walk

Bristol BS8 1TR

United Kingdom

n.hamm@bristol.ac.uk

Abstract. This research investigated the sensitivity analysis and uncertainty analysis of a combined (slope) hydrological and stability model (CHASM) (Wilkinson et al., 2002). CHASM has been developed at the University of Bristol over the past two decades and its fast execution time makes it appropriate for the implementation of large numbers of simulations. We implemented the variance–based sensitivity analysis approach (Chan et al., 2000), which allowed analysis of first and higher order effects attributed to the model parameters and input data. This was then extended to investigate the significance of spatial structure in the parameter uncertainty (Hall et al., in review). Some general sensitivity analysis had been conducted on CHASM in the past (Anderson & Kemp, 1992); however, this type of formal sensitivity analysis is novel in its application to geotechnical models of this type. It was found to be valuable for gaining a detailed understanding of the statistical importance of the model parameters, leading to an enhanced understanding of the physical processes. This is a necessary precursor for investigating the uncertainty imposed on the model output as attributed to climate change scenarios (which lead to uncertainty in the input data). Finally, the fast execution time means that this simulation–based analysis can be used to investigate the viability of less computationally expensive (but more approximate) methods for sensitivity and uncertainty analysis.
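
A variance-based analysis of the kind cited from Chan et al. (2000) can be sketched with the SALib Python package, as below. CHASM itself is not publicly scriptable here, so a toy stand-in function with invented parameter names and bounds is used; only the sampling-and-analysis pattern is the point.

# Variance-based (Sobol) sensitivity analysis sketch using SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["cohesion", "friction_angle", "conductivity"],  # illustrative
    "bounds": [[1.0, 20.0], [20.0, 40.0], [1e-6, 1e-4]],
}

X = saltelli.sample(problem, 1024)        # Saltelli sampling design

def stand_in_model(x):
    """Toy factor-of-safety surrogate, NOT the CHASM model."""
    c, phi, k = x
    return c / 10.0 + np.tan(np.radians(phi)) - 1e4 * k

Y = np.apply_along_axis(stand_in_model, 1, X)
Si = sobol.analyze(problem, Y)            # first-order and total effects
print(Si["S1"], Si["ST"])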

Anderson, M.G. and Kemp, M.J. (1992) Towards an improved specification of slope hydrology in the analysis of slope instability problems in the tropics, Progress in Physical Geography, Vol. 5, pp29–52.

Chan, K., Tarantola, S., Saltelli, A. and Sobol, I.M. (2000) Variance–based methods. In Saltelli, A., Chan, K. and Scott, E.M. (Eds.) Sensitivity Analysis, John Wiley and Sons, New York, pp167–197.

Hall, J.W., Tarantola, S., Bates, P.D. and Horritt, M.S. (in review) Distributed sensitivity analysis of flood inundation model calibration.

Wilkinson, P.L., Anderson, M.G., Lloyd, D. and Renaud, J–P. (2002) Landslide hazard and bioengineering: towards providing improved decision support through integrated numerical model development, Environmental Modelling and Software, Vol. 17, pp333–344.

Keywords: CHASM (combined hydrological and stability model); Variance–based sensitivity analysis; Uncertainty analysis; Geotechnical models

 

Reducing the Effect of Positional Uncertainty in Field–based Measurements of Reflectance on the Atmospheric Correction of Airborne Remotely Sensed Imagery

N. Hamm, P. M. Atkinson and E. J. Milton

School of Geography

University of Southampton

Southampton SO17 1BJ

United Kingdom

n.hamm@soton.ac.uk

Abstract. The empirical line method (ELM) is used widely for the atmospheric correction of remotely sensed imagery. The ELM is based on a linear regression model, where at–surface reflectance (measured in the field) is the dependent variable and at–sensor radiance (from the remotely sensed imagery) is the predictor variable. Hence it is necessary to pair the field measurements with the spatially coincident remotely sensed measurements. The ELM has also been extended to ensure that the field and remotely sensed data are defined on the same support by using block kriging and block conditional–simulation to aggregate the field measurements to pixel–sized supports (Hamm et al., 2002). Implementations generally assume (implicitly) that the location of the field measurements and the geometric correction of the imagery are perfect. This research relaxes that assumption and has sought to quantify the uncertainty in the implementation of the ELM that is attributed to uncertainty in the location of the field measurements. This can be constrained by sample size and pixel size (which also affect prediction accuracy). Hence, if the remote sensing practitioner requires a specific prediction accuracy, this allows us to specify both the number of field measurements that need to be collected and the accuracy with which their locations need to be recorded. The approach of Hamm et al. (in press) has been considerably refined in two ways. First, we have adopted a model–based rather than simulation–based approach to investigating positional uncertainty. Second, we have gone beyond demonstrating the problem to quantifying it and providing advice for reducing its impact. This has also provided further methodological benefits by demonstrating the impact of positional uncertainty on geostatistical estimation and prediction.
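
At its core the ELM is a per-band linear regression, as in the following minimal sketch with synthetic numbers; positional uncertainty enters by perturbing which image pixel each field spectrum is paired with, which is what the refined analysis quantifies.

# Empirical line method sketch for one spectral band, synthetic values.
import numpy as np

rng = np.random.default_rng(5)
radiance = rng.uniform(20, 120, size=15)           # at-sensor radiance, 15 targets
true_gain, true_offset = 0.006, -0.02
reflectance = true_gain * radiance + true_offset + rng.normal(0, 0.005, 15)

gain, offset = np.polyfit(radiance, reflectance, 1)  # fit the empirical line
image = rng.uniform(20, 120, size=(100, 100))        # one band of imagery
surface_reflectance = gain * image + offset          # corrected image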

Hamm, N., Atkinson, P.M. and Milton, E.J. (in press) Evaluating the effect of positional uncertainty in field measurements on the atmospheric correction of remotely sensed imagery. In Sánchez–Vila, X. and Carrera, J. (eds.) geoENV IV: Geostatistics for Environmental Applications, Kluwer, Dordrecht.

Hamm, N., Atkinson, P.M. and Milton, E.J. (2002) Resolving the support when combining remotely sensed and field data: the case of atmospheric correction of airborne imagery using the empirical line method. In Hunter G. and Lowell, K. (eds) Accuracy 2002, Proceedings of the 5th International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences, Melbourne, July 2002, pp339-347.

Keywords: Remote sensing; atmospheric correction; positional uncertainty in spatial analysis; accuracy assessment.

 

Bayesian Palaeoclimate Reconstruction

John Haslett

Dept of Statistics

Trinity College

Dublin Ireland

John.Haslett@tcd.ie

M. Whiley, S. Bhattacharya, J.R.M Allen, B. Huntley, and F. Mitchell

Abstract. We consider the problem of reconstructing pre-historic climates using fossil data extracted from lake sediment cores. A hierarchical Bayesian modelling approach is presented and its use demonstrated in a relatively small but statistically challenging exercise, the reconstruction of pre-historic climate at Glendalough in Ireland using fossil pollen data. This computationally intensive method extends current approaches by (a) explicitly modelling uncertainty at all stages and (b) reconstructing entire climate histories.

The statistical issues raised relate to the use of compositional data (pollen) with covariates (climate) which are available at many modern sites but are missing for the fossil data. The compositional data arise as mixtures and the missing covariates have a temporal structure.

Novel aspects of the method include non-parametric modelling of two-dimensional response surfaces, the exploitation of parallel processing and the use of a random walk with long-tailed increments.
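
The long-tailed random walk can be sketched in a few lines; the fragment below (illustrative only, not the paper's full hierarchical model) shows why Student-t increments admit occasional abrupt climate changes that a Gaussian walk of matched scale would suppress.

# Random walk with long-tailed (Student-t) increments, purely illustrative.
import numpy as np

rng = np.random.default_rng(6)
n_steps = 500
increments = rng.standard_t(df=3, size=n_steps) * 0.1  # heavy-tailed steps
climate_history = np.cumsum(increments)                # latent climate path
# With df = 3 most steps are small, but rare large jumps still occur,
# unlike a Gaussian random walk of matched scale.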

We present some details of the study, contrasting its reconstructions with those generated by a method in use in the palaeoclimatology literature. We suggest that the method provides a basis for resolving important challenging issues in palaeoclimate research but draw attention to several challenging statistical issues.

Keywords: Compositional data, Gaussian processes, long-tailed increments, Markov chain Monte Carlo

 

Space-time Kalman filtering of soil redistribution

G.B.M. Heuvelink1), J.M. Schoorl, B. Minasny, A. Veldkamp and D. Pennock

1)ALTERRA and Department of Soil Science and Geology, Wageningen University and Research Centre, P.O. Box 47, 6700 AA

Wageningen, The Netherlands

gerard.heuvelink@wur.nl

Abstract. Soil redistribution is the net result of erosion and sedimentation. Knowledge of how soil is redistributed in space and time is of great importance for land managers, be it from an agricultural, land planning or land protection perspective. Assessment of soil redistribution in a given landscape over a given period of time may be done using process-based and empirical approaches. Process-based approaches rely on knowledge of how various environmental processes acting in the landscape cause soil to move from one place to another. Empirical approaches rely on observations of soil redistribution, which are interpolated in space and time using (geo)statistical methods. In this paper we use space-time Kalman filtering to combine the two basic approaches. At each time step, the Kalman filter first predicts soil redistribution using process knowledge as contained in the state equation. Next, these predictions are conditioned to the observations. The case study that is used to illustrate the methodology concerns a six hectare part of the Hepburn research site, located on the hummocky till plains of Saskatchewan, Canada. Tillage erosion causes soil to move downward along the steepest gradient, whereby the amount of soil loss per year is assumed linearly related to slope angle. Observations of soil redistribution are derived using Cesium 137 as a tracer.
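
The predict-then-condition cycle can be sketched in scalar form as below; the study's filter is of course spatio-temporal, with a process-based state equation for tillage erosion, and all numbers here are invented.

# One-dimensional Kalman filter sketch of the predict/condition cycle.
import numpy as np

def kalman_step(x, P, z, a, q, r):
    """One Kalman cycle: x, P = state mean/variance; z = observation."""
    x_pred = a * x            # state equation: process-based prediction
    P_pred = a * a * P + q    # prediction variance grows by process noise q
    K = P_pred / (P_pred + r) # Kalman gain: weight given to the observation
    x_new = x_pred + K * (z - x_pred)   # condition prediction on observation
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(7)
x, P = 0.0, 1.0
truth = 0.0
for t in range(50):
    truth = 0.95 * truth + rng.normal(0, 0.1)     # simulated redistribution state
    z = truth + rng.normal(0, 0.3)                # Cs-137-derived observation
    x, P = kalman_step(x, P, z, a=0.95, q=0.01, r=0.09)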

Keywords: geostatistics, space-time interpolation, tillage erosion, data assimilation

 

Survival Data: Kernel Estimates and Dynamical Models

Ivana Horova, Zdenek Pospisil, and Jiri Zelinka

Department of Applied Mathematics

Masaryk University in Brno

60200 Brno

Czech Republic

horova@math.muni.cz

Abstract. The purpose of this contribution is to present both nonparametric and deterministic modelling of survival data. We consider the model of random censorship, where the data are censored from the right. This type of censorship is often met in many applications, especially in clinical research and in life testing of complex technical systems. First, the hazard function for a given breast carcinoma data set is estimated by a kernel method. On the other hand, the data on events observed over time can be described by a deterministic dynamical model depending on parameters. The obtained kernel estimate of the hazard function then serves as a basis for the identification of the parameters in the deterministic model. Further, these parameters make it possible to find the solution of this dynamical model. This procedure offers a suitable tool for analyzing hazard rates.
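
A kernel hazard estimate of the type described can be sketched as a smoothing of Nelson-Aalen increments, as below; the data are simulated stand-ins for the breast carcinoma data set.

# Kernel hazard-rate estimate under right censoring (Ramlau-Hansen-type
# smoothing of Nelson-Aalen increments), with simulated data.
import numpy as np

def kernel_hazard(times, events, grid, bandwidth):
    """times: follow-up times; events: 1 = event observed, 0 = censored."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    n = len(t)
    at_risk = n - np.arange(n)              # subjects still at risk at t[i]
    increments = d / at_risk                # Nelson-Aalen jump sizes
    u = (grid[:, None] - t[None, :]) / bandwidth
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)   # Gaussian kernel
    return (K * increments).sum(axis=1) / bandwidth

rng = np.random.default_rng(8)
true_time = rng.exponential(5.0, 200)
censor = rng.exponential(8.0, 200)
times = np.minimum(true_time, censor)
events = (true_time <= censor).astype(float)
grid = np.linspace(0.1, 10, 100)
hazard = kernel_hazard(times, events, grid, bandwidth=1.0)
# For an exponential(5) survival time the true hazard is 1/5 = 0.2.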

Keywords: Hazard function, kernel, dynamical model

 

Powers of ANOVA Tests for Variables with General Distribution from Exponential Class

Zuzana Hrdličková

Department of Applied Mathematics, Masaryk University Brno,

Janáčkovo nám. 2a, 662 95 Brno

Czech Republic

zuzka@math.muni.cz

Abstract. This contribution derives an approximation to the power of the test, based on the scaled deviance statistic, that is used in generalized linear models for comparing population means (ANOVA type models). The algorithm for calculating the asymptotic power of the above-mentioned test will be described, and this algorithm will be implemented in MATLAB. The asymptotic power will be compared with the simulated power of the test for small sample sizes (for variables with exact distributions such as the Poisson or Gamma). Finally, the obtained results will be applied to environmental data to find a suitable experimental design.

Keywords: power function, exponential class of distributions, generalized linear model, ANOVA type model

 

Model Testing for Spatial Strong-mixing Data

 

Rosaria Ignaccolo* and Nunziata Ribecco**

*Dipartimento di Statistica e Matematica Applicata

Università degli Studi di Torino, Italy

ignaccolo@econ.unito.it

** Dipartimento di Scienze Statistiche

University degli Studi di Bari, Italy

ribecco@dss.uniba.it

Abstract. In analysing the distribution of a variable in space, each value is subject not only to the source of the phenomenon but also to its localisation. In this paper, we fit the model of the distribution, taking explicitly into account the spatial autocorrelation among the observed data. To this end we first suppose that the observations are generated by a strong-mixing random field. Then, after estimating the density of the considered variable, we construct a test statistic in order to verify the goodness of fit of the observed spatial data. The proposed class of tests is a generalization of the classical chi-square test and of the Neyman smooth test. The asymptotic behaviour of the test is analysed and some indications about its implementation are provided.

Keywords: goodness of fit; correlated data; spatial process; mixing random field.

 

Accounting for Error Propagation in the Development of a Leaf Area Index (LAI) Reference Map to Assess the MODIS MOD15A LAI Product

J.S. Iiames1*, R. Congalton2, D. Pilant3, and T. Lewis3

1Environmental Protection Agency

Research Triangle Park, NC USA

iiames.john@epa.gov

2University of New Hampshire

Durham, NH

3Environmental Protection Agency

Research Triangle Park, NC USA

Abstract. The ability to effectively use remotely sensed data for environmental spatial analysis is dependent on understanding the underlying procedures and associated variances attributed to the data processing and image analysis technique. Equally important is understanding the error associated with the given reference data used to assess the accuracy of the image product. This paper details measurement variance accumulated in the development of a leaf area index (LAI) reference map used to assess the accuracy of the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD15A LAI 1000-m product in the southeastern United States.

MODIS LAI was compared with reference data derived from the Landsat Enhanced Thematic Mapper (ETM+) during the 2002 field season in the Albemarle-Pamlico Basin in Virginia and North Carolina. Ground-based optical LAI estimates were correlated with various ETM+ derived vegetation indices (VIs) at a 30-m pixel resolution. These 30-m pixels were scaled up to the 1000-m MODIS LAI pixel resolution and averaged to give one LAI value. A detailed listing of error propagation for this reference data set includes uncertainty associated with: (1) two integrated optical LAI field estimating techniques (the Tracing Radiation and Architecture of Canopies (TRAC) instrument, and hemispherical photography), (2) choice of site-specific VIs, and (3) image-to-image registration.

Keywords: MODIS, LAI, error propagation, accuracy

 

Complex Systems Analysis using Space-Time Information Systems and Model Transition Sensitivity Analysis

Geoffrey M. Jacquez

BioMedware, Inc.

516 North State Street

Ann Arbor, MI 48104

Jacquez@BioMedware.com

Abstract: Real-world systems are dynamic, complex and geographic, yet many modeling tools for analyzing complex systems are not spatial, and GIS do not adequately represent time. This presentation describes two new approaches: Space-Time Information Systems (STIS) and Model Transition Sensitivity Analysis (MTSA).

Current GIS are based on spatial data models (the "what, where" dyad) that inadequately characterize the "what, where, when" triad needed for effective representation of complex systems. Purely spatial GIS cannot deal readily with space-time georeferencing or space-time queries, and instead are best suited to "snapshots" of static systems. These deficiencies prompted many geographers to call for a "higher-dimensional GIS" (a STIS) to better represent space-time dynamics. When formulating models of complex systems, critical choices are made regarding model type and complexity. Model type is the mathematical approach employed, for example, a deterministic model versus a stochastic model. Model complexity is determined by the amount of abstraction and simplification employed during model construction. A growing body of work demonstrates that the choice of model type and complexity has substantial impacts on simulation results and on model-based decisions. This presentation describes STIS and MTSA approaches that allow researchers to move seamlessly from a deterministic model, to its stochastic counterpart, and on to its individual event history representation.

Keywords: Complex systems; space-time information systems; Model Transition Sensitivity Analysis

 

Application of Geostatistics to Estimating Stock Size in Estuaries: A Non-Euclidean Approach to Variogram Calculation and Kriging

Olaf Jensen1, Mary Christman2, Glenn Moglen3, and Thomas Miller1

1Center for Environmental Science

Chesapeake Biological Laboratory

University of Maryland, MD 20688 USA

jensen@cbl.umces.edu

2Dept. Animal and Avian Sciences

University of Maryland

College Park, MD 20742 USA

3Department of Civil and Environmental Engineering

University of Maryland

College Park, MD 20742 USA

Abstract. Effective management of marine resources requires that we develop accurate, unbiased estimates of the abundance of those resources. Traditionally, abundances have been estimated from design-based survey methods. More recently, model-based geostatistical approaches have gained favor. Typically, a Euclidean distance metric is used for variogram calculation and kriging. While such a metric makes sense within convex polygons, it has the potential to introduce error in highly invaginated regions such as estuaries. A natural alternative is the use of "through the water distance" (TTWD). We used TTWD-based kriging to estimate the abundance of blue crab (Callinectes sapidus) within the Chesapeake Bay from 11 years of survey data. To do so, we developed efficient and adaptable algorithms in a GIS to calculate TTWD. The algorithms are based on a rasterized water body and a cost-distance function that estimates the shortest path through the water between two points. TTWD-based variogram and kriging routines were developed and used to predict the distribution and abundance of crabs throughout the Chesapeake Bay. Here we compare three different methods: the traditional survey method, kriging using Euclidean distance, and kriging using TTWD. Cross-validation confirmed that the TTWD metric resulted in consistently more accurate predictions than the Euclidean metric. Consistent reductions in the range and sill parameters of the variogram were observed as well.
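
The TTWD idea can be sketched with a shortest-path computation on a rasterized water body, as in the following Python fragment using Dijkstra's algorithm from scipy on a 4-neighbour grid graph; the study's GIS cost-distance implementation is analogous but more elaborate, and the geometry here is invented.

# "Through the water distance" sketch on a rasterized water body.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

water = np.ones((40, 40), dtype=bool)
water[10:30, 15:18] = False                 # a peninsula blocking direct paths

ny, nx = water.shape
idx = lambda r, c: r * nx + c
G = lil_matrix((ny * nx, ny * nx))
for r in range(ny):
    for c in range(nx):
        if not water[r, c]:
            continue
        for dr, dc in ((0, 1), (1, 0)):     # link each water cell to E and S
            rr, cc = r + dr, c + dc
            if rr < ny and cc < nx and water[rr, cc]:
                G[idx(r, c), idx(rr, cc)] = 1.0   # cell-size units

start = idx(5, 5)
dist = dijkstra(G.tocsr(), directed=False, indices=start)
ttwd = dist[idx(35, 35)]                    # in-water distance between stations
euclid = np.hypot(35 - 5, 35 - 5)           # compare with straight-line distance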

Keywords: estuary, distance metric, Geographic Information System, geostatistics, variogram

 

Statistical Aspect of Stepwise Optimal Procedure

Kazutomo Kawamura

The National Defense Academy (NDA)

Dept of Mathematics, Hashirimizu 1-10-20

Yokosuka, 239-8686 Japan

kawamura@nda.ac.jp

 

Abstract. For a given lattice space, as in Figure 1, we consider a path formed by the movement of a point P from an initial corner point A to the opposite corner point B. At each step the movement is one of two single (1-step) moves: to the east (E) or to the south (S).

We consider a cost or risk function on the space having two particular types of distribution: uniform and exponential. These particular distributions were chosen to model cost or risk in the natural environment; specifically, to evaluate the risk in jungle exploration and the cost or risk of an aircraft flight.

A path can be evaluated by calculating the sum of the costs paid along it. The aim of this work is to find an affordable optimal procedure, one which yields a path with a low evaluation.

Together with Yoshihiro Honda we have developed and run PC simulation software. The simulation draws cost matrices N = 1000 times to pursue the optimality, and on each simulated cost matrix derives the k-step optimal path (k = 0, 1, 2, …) and its total cost. Simply put, we obtain N = 1000 observations of the total costs of the k-step optimal paths on the simulated matrices.

In conclusion, we can state the following. By statistical investigation, we have confirmed that the procedure with 1-step optimality has remarkably good properties. We also found that the distribution of the total cost is almost the same regardless of which cost distribution is applied.
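
The setting can be sketched as follows: on each simulated matrix of random cell costs, a full dynamic-programming optimum is compared with a greedy 1-step-lookahead path (a simplified reading of the k-step procedure, with assumptions about cost placement that may differ from the paper's).

# Lattice-path sketch: DP optimum vs greedy 1-step lookahead, with costs
# assumed to sit on cells and moves restricted to east or south.
import numpy as np

def dp_optimal_cost(C):
    """Minimal total cost from top-left to bottom-right, moving E or S."""
    m, n = C.shape
    V = np.full((m, n), np.inf)
    V[-1, -1] = C[-1, -1]
    for i in range(m - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            if (i, j) == (m - 1, n - 1):
                continue
            south = V[i + 1, j] if i + 1 < m else np.inf
            east = V[i, j + 1] if j + 1 < n else np.inf
            V[i, j] = C[i, j] + min(south, east)
    return V[0, 0]

def greedy_cost(C):
    """1-step lookahead: always move to the cheaper neighbouring cell."""
    m, n = C.shape
    i = j = 0
    total = C[0, 0]
    while (i, j) != (m - 1, n - 1):
        south = C[i + 1, j] if i + 1 < m else np.inf
        east = C[i, j + 1] if j + 1 < n else np.inf
        if south < east:
            i += 1
        else:
            j += 1
        total += C[i, j]
    return total

rng = np.random.default_rng(9)
trials = [(dp_optimal_cost(C), greedy_cost(C))
          for C in rng.exponential(1.0, size=(1000, 8, 8))]
opt, greedy = np.array(trials).T            # N = 1000 simulated cost matrices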

Keywords: Mathematical Programming, Dynamic Programming, Random Cost Function, Step-wise Optimality, Risk Control.

 

Locational Errors in Spatial Point Patterns: Assessing the Spatial Accuracy

Konstantin Krivoruchko1 and Jorge Mateu2

1Environmental Systems Research Institute

380 New York Street

Redlands, CA 92373-8100, USA

kkrivoruchko@esri.com

2Department of Mathematics

Campus Riu Sec

University Jaume I, E-12071

Castellon, Spain

mateu@mat.uji.es

Abstract. Modern technology, including remote sensing, digital aerial photographs and spectrometer imaging, is used nowadays to develop inventories, posing new statistical problems. The objective of modern forestry includes a multitask problem field, for which forest ecology, landscape ecology and related statistical methods become increasingly important. Forestry uses numerous methods for statistical analysis. A considerable part of them belongs to spatial statistics, which includes point process statistics as a particularly interesting area. Typically, point process analysis and modeling in forestry comes down to such details as the locations of single trees and tree characteristics such as diameter at breast height, stem increment during a given time span, species code or degree of damage by environmental factors.

In the context of spatial (marked) point patterns we have point locations in the form of coordinates (Cartesian, UTM, etc.) and marks associated with each location. It is still laborious to obtain such data sets by classical measurement methods. New technological developments are changing the situation drastically. For example, distance measurements can be made using laser techniques and the satellite-based global positioning system (GPS). And individual trees can be observed through image analysis with reasonable precision, with the exception of trees that are very close together or of small trees growing within the canopy of bigger trees. A compromise is indirect observation, such as aerial images combined with ground-based measurements. In any case, even in the most perfect situation, there exists a locational error concerning the precision of the coordinates of each point location.

In this paper we focus on the analysis, determination and specification of locational error in point patterns. We evaluate the importance of locational error as it affects the interaction structure of the spatial pattern and define practical measures to control the amount of locational error. We analyze several real data sets from environmental applications.

Keywords: Environmental processes, Locational errors, Spatial accuracy, Spatial point patterns, Uncertainty.

 

Minimizing Information Loss in Continuous Representations: A Fuzzy Classification Technique based on Principal Components Analysis

Barry Kronenfeld

Department of Geography

University at Buffalo, The State University of New York

105 Wilkeson Quad, Buffalo, NY 14150

bjk3@buffalo.edu

Abstract. The increasing use of continuous classification methods to generalize environmental data has led to a persistent question of how to determine class membership values, as well as how to interpret these values once they have been determined. This paper integrates the above two problems as complementary aspects of the same data reduction process. Within this process, it is shown that a fuzzy classification technique based on Principal Components Analysis will minimize the amount of information lost through classification. The PCA-based fuzzy classification technique is analogous to linear spectral unmixing models in remote sensing, and differs from algorithms such as fuzzy k-Means in that primary attention is focused on preserving an accurate representation of the underlying attribute data, rather than maximizing the internal consistency of each class. This focus suggests that PCA-based fuzzy classification may be particularly appropriate for data modeling applications. The technique is demonstrated and compared to fuzzy k-Means by using forest inventory data from the U.S. Forest Service’s Forest Inventory & Analysis (FIA) program to produce fuzzy ecological regions of the eastern United States.

Keywords: Data reduction, fuzzy classification, principal components analysis, FIA, ecoregions

 

Using Popular Coefficients of Agreement to Assess Soft-Classified Maps at Multiple Resolutions

Kristopher Kuzera

Clark University

IDCE

950 Main Street

Worcester, MA 01610-1477

kkuzera@clarku.edu

R. Gil Pontius, Jr.

rpontius@clarku.edu

Abstract. The cross-tabulation matrix is the basis for several popular statistics when comparing two maps of a common categorical variable. It is straightforward to compute the matrix when the pixels are hard classified. It is more challenging to compute the matrix when the pixels are soft classified, which is necessary to compare maps at multiple resolutions. This paper contrasts two methods to compute the matrix for soft-classified pixels. The first method is based on conventional set theory, hence on a multiplication rule. The second method is based on a hybrid conventional-fuzzy set theory, hence a composite minimum-multiplication rule. For each method, we analyze several popular summary statistics: overall proportion correct, user’s accuracy, producer’s accuracy, conditional Kappa by row, conditional Kappa by column, tau, the Nishii-Tanaka coefficient, and the standard Kappa index of agreement along with its variants Kno and Klocation. We examine the behavior of each statistic at multiple resolutions to see which statistics give which information at which scales. We illustrate the methods by comparing maps of land cover in and near Worcester, Massachusetts, from 1971 and 1999.
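
The two rules can be illustrated for a single soft-classified pixel as below; summing such per-pixel matrices over the map yields the cross-tabulation matrix from which the listed statistics are computed. The minimum-rule off-diagonal allocation shown here is one simple normalization, not necessarily the exact composite rule of the paper.

# Per-pixel cross-tabulation for soft memberships a (map 1) and b (map 2).
import numpy as np

a = np.array([0.6, 0.3, 0.1])     # soft memberships, map 1
b = np.array([0.5, 0.4, 0.1])     # soft memberships, map 2

# Multiplication rule: expected joint membership under independence.
M_mult = np.outer(a, b)

# Minimum rule on the diagonal: agreement for category k is min(a_k, b_k);
# remaining membership is spread off-diagonal by a normalized product.
agree = np.minimum(a, b)
M_min = np.diag(agree)
resid_a, resid_b = a - agree, b - agree
if resid_a.sum() > 0:
    M_min += np.outer(resid_a, resid_b) / resid_b.sum()

# Both per-pixel matrices sum to 1, so they aggregate cleanly over the map.
assert np.isclose(M_mult.sum(), 1.0) and np.isclose(M_min.sum(), 1.0)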

Keywords: coefficient, matrix, resolution, scale, soft classification.

 

From Space to Scale: An Assessment of Uncertainty in Regionalized Multivariate Analysis

Guillaume Larocque

Natural Resource Sciences Department

Macdonald Campus of McGill University

21,111 Lakeshore

Ste-Anne-de-Bellevue

Québec, Canada H9X 3V9

guillaume.larocque@mail.mcgill.ca

Pierre Dutilleul

Department of Plant Science

Macdonald Campus of McGill University

Bernard Pelletier

Natural Resource Sciences Department and Department of Plant Science

Macdonald Campus of McGill University

James W. Fyles

Natural Resource Sciences Department

Macdonald Campus of McGill University

Abstract. Regionalized multivariate analysis (RMA) and factorial kriging analysis are multivariate geostatistical methods developed in the last two decades to unveil scales of variation in spatial datasets and to assess relationships among regionalized variables at specific scales. Although they have been widely used in the environmental and earth sciences, there has been no adequate assessment of the uncertainty associated with the estimation of parameters of interest in those methods. In this paper, we identify the sources of uncertainty in RMA and develop a theoretical framework to quantify the bias and the variance of estimators of common parameters such as sills, correlation coefficients and R-squares. Our framework is based on the generalized least-squares fit of the linear model of coregionalization. Through Monte Carlo simulations, we demonstrate the effectiveness of our procedure and show that large sample sizes are often needed to obtain accurate and precise estimates in RMA. An example with ecological data illustrates that an assessment of uncertainty is essential to avoid erroneous conclusions about the phenomenon studied.

Keywords: uncertainty, coregionalization, multivariate geostatistics, scale, ecology.

 

Uncertainty Analysis of an Environmental Model Chain

Ulrich Leopold (1), Gerard B.M. Heuvelink (2), Aaldrik Tiktak (3)

Institute for Biodiversity and Ecosystem Dynamics

Universiteit van Amsterdam

Amsterdam, The Netherlands

uleopold@science.uva.nl

Department of Soil Science and Geology

Wageningen University

Wageningen, The Netherlands

Department of Soil and Groundwater

National Institute for Public Health and Environment

Bilthoven, The Netherlands

Abstract. Agricultural activities in the Netherlands cause high nitrogen and phosphorus emissions from soil to ground- and surface water. To study and predict ground- and surface water pollution under different agricultural scenarios, a model chain (STONE) has been developed. STONE has three main components. These are the emission model CLEAN, the atmospheric transport and deposition model OPS and the soil model ANIMO. There are also links to additional models, such as to the hydrological model SWAP and the crop growth model QUADMOD. The models incorporated in STONE operate at different spatial scales (i.e., supports), mainly because each of them describes different processes and uses different input data. Aggregation and disaggregation procedures have been developed to adapt the input supports to the model support and to bring the STONE output to a desired block support of 500 × 500 m. STONE predictions are then made on a regular grid covering relevant parts of the Netherlands. Apart from support problems, another problem is that input and model uncertainties are introduced and propagated to the STONE output. The goal of this study is to quantify the uncertainty in STONE output and to determine the main causes of uncertainty. The uncertainty analysis is based on a Monte Carlo simulation approach and is complicated by the fact that it must take spatial correlation, cross-correlation and differences in spatio-temporal support into account.

 

Comparison of Methods for Normalisation and Trend Testing of Water Quality Data

Claudia Libiseller

Dept. of Mathematics

Division of Statistics

Linköping University, SE-58183

Linköping, Sweden

cllib@mai.liu.se

Abstract. To correctly assess trends in water quality data it is necessary to take into account influencing variables, such as discharge or water temperature. We distinguish two basic methods: (i) one-step procedures, like the Partial Mann-Kendall (PMK) test or multiple regression, and (ii) two-step procedures, which include a normalisation step followed by a trend test on the residuals. Which procedure is most appropriate depends strongly on the relationship between the response variable under consideration and the influencing variables. For example, PMK tests can be superior if there are long and varying time lags in the water quality response. Two-step procedures are particularly useful when the shape of the temporal trend is of great interest. They can, however, be misleading if one of the influencing variables itself exhibits a trend or long-term tendency. In this paper we present the pros and cons of some trend testing techniques, using Swedish water quality data for illustration.
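
As a baseline for the procedures compared, the plain Mann-Kendall test is sketched below (normal approximation, no tie correction, invented data); the PMK test additionally conditions on an influencing covariate, which is beyond this fragment.

# Plain Mann-Kendall trend test with a normal approximation.
import numpy as np
from scipy import stats

def mann_kendall(y):
    n = len(y)
    # S counts concordant minus discordant pairs over all i < j.
    s = sum(np.sign(y[j] - y[i]) for i in range(n) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s)     # continuity-corrected z-score
    p = 2 * stats.norm.sf(abs(z))
    return s, z, p

rng = np.random.default_rng(10)
years = np.arange(30)
concentration = 5.0 - 0.04 * years + rng.normal(0, 0.5, 30)  # declining series
print(mann_kendall(concentration))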

 

Evaluating Classified MODIS Satellite Imagery as a Stratification Tool

Greg Liknes and Mark Nelson

USDA Forest Service

1992 Folwell Avenue

St. Paul, MN 55108 USA

gliknes@fs.fed.us

Abstract. The Forest Inventory and Analysis (FIA) program of the USDA Forest Service collects forest attribute data on permanent plots arranged on a hexagonal grid across all 50 states and Puerto Rico. Due to budget constraints and the natural variability among plots, sample sizes sufficient to satisfy national FIA precision standards are seldom achieved for most inventory variables unless the estimation process is enhanced with ancillary data. When used as the basis for creating strata with stratified estimation, classified satellite imagery has been demonstrated to be an effective source of such ancillary data. In particular, the National Land Cover Dataset (NLCD), a national landcover classification based on satellite imagery, has been used to produce substantial increases in the precision of statewide forest inventory estimates.

Because inventories are conducted on an annual basis, it is desirable to create strata using a product that is updated more frequently than the NLCD. In particular, data from the MODIS sensor are available every 1-2 days, although at a much coarser spatial resolution than the Landsat data used in the creation of the NLCD (250m or 500m vs 30m). In this study, the effectiveness of strata created by classifying MODIS satellite imagery in increasing precision of FIA estimates is compared to that of strata created from the NLCD.
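
The mechanics of stratified estimation with map-derived strata can be sketched as below, with invented plot values and stratum weights; the ratio of simple-random-sampling variance to stratified variance is the kind of precision gain being compared between MODIS- and NLCD-based strata.

# Stratified estimation sketch: strata weights come from pixel counts of
# the classified map; variance compared against simple random sampling.
import numpy as np

rng = np.random.default_rng(11)
# Plot-level forest proportion in two map-derived strata.
strata = {"forest": rng.normal(0.8, 0.1, 60), "nonforest": rng.normal(0.1, 0.1, 40)}
W = {"forest": 0.45, "nonforest": 0.55}       # map-derived stratum weights

mean_st = sum(W[h] * y.mean() for h, y in strata.items())
var_st = sum(W[h] ** 2 * y.var(ddof=1) / len(y) for h, y in strata.items())

pooled = np.concatenate(list(strata.values()))
var_srs = pooled.var(ddof=1) / len(pooled)
print(var_srs / var_st)    # >1 means the strata increased precision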

Keywords: Stratified estimation, MODIS, spatial resolution

 

Sources of Uncertainty in Landuse Pressure Mapping

Linda Lilburne and Heather North

Landcare Research

P.O. Box 69

Lincoln, Canterbury, New Zealand

lilburnel@landcareresearch.co.nz

Abstract. Probability of groundwater contamination due to leaching of nitrate from agricultural sources can be modelled at a regional scale. This requires spatial information on soil properties, climate, land use, standard management practices, and aquifer vulnerability. One of the key management pressures in Canterbury, New Zealand is the practice of leaving land fallow during the winter months, because any nitrate that is present is likely to be leached since there is no plant uptake. Remote sensing imagery has been successfully used to identify land that is bare for significant periods in the winter and early spring. This was achieved by use of simple rules on a temporal sequence of three Landsat 7 images.

This paper analyses the sources of error in mapping fallow ground using remote sensing image sequences. Potential sources of error include suitability of the logical model, timing of image acquisition, scale mismatch, incorrect reference data, geometric errors between images, and radiometric variation both within and between images (month to month and year to year). In particular we investigate our ability to correctly classify land into bare, sparsely vegetated and fully vegetated categories according to percentage cover of vegetation, using detailed field data along with the images. A probabilistic approach to providing estimates of uncertainty is applied to draw together the most important of these error sources.

Keywords: landuse pressure; temporal; radiometric error

 

A Spatial Mixed Effects Model to Determine the Prevalence of Parasitic Trematodes in Host Species along the New England Coast.

Ernst Linder*, Jeb Byers, Zhaozhi Fan, Andrew Cooper

* Department of Mathematics and Statistics

University of New Hampshire
Durham, NH 03824, USA

elinder@math.unh.edu

Abstract. Parasitic trematodes in shallow marine habitats often have life cycles in which they progress through hosts at various trophic levels, e.g. from a snail to a crab or fish to a shorebird. We seek to address what factors most strongly influence trematode species diversity and overall prevalence within these communities and to determine at what spatial scale they operate. To address this question, we collected data at several intertidal sites along the coast of New England to track the trematodes’ population dynamics at the level of the snail hosts (Littorina littorea).

Snails were examined for their infection status and various within site predictors for infection were tested, such as shoreline height, individual snail size, etc. In this paper we further discuss the spatial scales and spatial variation in the determining factors of trematode infection. In particular we examine various spatial interaction schemes for random effects in a mixed effects regional model for trematode prevalence that incorporates within site and between site factors. The irregularity of the shoreline dictates an unusual "lattice", yet the spread of infection is determined by shorebird movement which we assume to be linear. This calls for various weighting schemes in the spatial interaction model.

In the regional model we examine the effects of host density, wave exposure, site topography, and bird abundance on trematode infection, and examine their spatial scales. Model estimation is performed within a hierarchical model framework and utilizes Markov chain Monte Carlo schemes. By understanding the determinants of trematode population dynamics across spatial scales, we can begin to predict and address their impacts on commercially and ecologically important nearshore marine species.

Keywords: Spread of Infection, Marine Zoology, Hierarchical Model, CAR, MCMC.

 

Maximum Likelihood Estimation of Regression Parameters with Spatially Misaligned Data

Lisa Madsen

Department of Statistics

44 Kidder Hall

Oregon State University

lmadsen@stat.orst.edu

David Ruppert

School of Operations Research and Industrial Engineering

Cornell University

Abstract. Suppose X(s), Y(s), and ε(s) are stationary spatially autocorrelated Gaussian processes that are related as Y(s) = β0 + β1X(s) + ε(s) for any location s. Our problem is to estimate the β's when Y and X are not necessarily observed at the same locations. This situation may arise when the data are recorded by different agencies or when there are missing X values.

A natural but naïve approach is to predict (“krige”) the missing X's at the locations where Y is observed, and then use least squares to estimate ("regress") the β's as if these X's were actually observed. This krige-and-regress estimator β̂KR is consistent, even when the spatial covariance parameters are estimated. If we use β̂KR as a starting value for a Newton-Raphson maximization of the likelihood, the resulting maximum likelihood estimator β̂ML is asymptotically efficient. We can then use the asymptotic distribution of β̂ML for inference.
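
The krige-and-regress starting value can be sketched as below, with sklearn's Gaussian process regressor standing in for a geostatistical kriging step and all data simulated; the subsequent Newton-Raphson likelihood maximization is not shown.

# Krige-and-regress sketch: predict X at the Y locations, then regress.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy import stats

rng = np.random.default_rng(12)
sx = rng.uniform(0, 10, (80, 2))               # locations where X is observed
sy = rng.uniform(0, 10, (60, 2))               # locations where Y is observed

def field(s):                                   # smooth synthetic spatial field
    return np.sin(s[:, 0]) + np.cos(0.7 * s[:, 1])

X_obs = field(sx) + rng.normal(0, 0.1, len(sx))
Y_obs = 2.0 + 3.0 * field(sy) + rng.normal(0, 0.2, len(sy))   # beta = (2, 3)

gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.01)).fit(sx, X_obs)
X_at_y = gp.predict(sy)                        # "kriged" X at the Y locations
beta_kr = stats.linregress(X_at_y, Y_obs)      # krige-and-regress estimate
print(beta_kr.intercept, beta_kr.slope)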

As an illustration, we relate ozone levels observed at EPA monitoring stations to economic variables observed for most counties in the United States.

Keywords: Spatial regression, Maximum likelihood, Misaligned data

 

Effect of Category Aggregation on Measurement of Land-Use and Cover Change

Nicholas R. Malizia and R. Gil Pontius, Jr.

Clark University

George Perkins Marsh Institute

nmalizia@clarku.edu

Abstract. This paper investigates the influence of land category aggregation on measurement of land-use and land-cover change (LUCC). Category aggregation is common practice; however, the substantial effect the aggregation process can have on change measurement is commonly ignored.

This paper examines aggregated and unaggregated landscapes using cross-tabulation matrices. The technique to analyze the cross-tabulation matrices investigates the landscape change in greater depth than a traditional analysis. This methodology partitions the total change into net change (quantity change) and swap (location change). We derive the mathematics that dictate the effect of aggregation and then illustrate the principles using both simplified examples and empirical data. The data are from Human-Environment Regional Observatory (HERO) sites across the United States, which are funded by the National Science Foundation. The effect of aggregation was especially pronounced in our Kansas study area. At the detailed Anderson Level II classification, 60% of this landscape changes between 1985 and 2001, while only 13% changes at the more general Anderson Level I classification.
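
The partition just described can be computed directly from a cross-tabulation matrix of proportions, as in this small sketch with an invented 3-category landscape.

# Net change vs swap from a cross-tabulation matrix P
# (rows = categories at time 1, columns = time 2, entries = proportions).
import numpy as np

P = np.array([[0.30, 0.05, 0.05],
              [0.10, 0.25, 0.05],
              [0.05, 0.05, 0.10]])    # illustrative 3-category landscape

total_change = 1.0 - np.trace(P)                  # all off-diagonal proportion
net_change = 0.5 * np.abs(P.sum(axis=1) - P.sum(axis=0)).sum()  # quantity change
swap = total_change - net_change                  # location change
print(total_change, net_change, swap)
# Aggregating categories merges rows and columns of P, which can only
# reduce (never increase) the measured total change, the effect examined above.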

While this issue is crucial when examining land-use and cover change, it is relevant also to any aggregation of categorical data. This paper reveals the mathematics that govern this phenomenon.

Keywords: LUCC, aggregation, scale

 

The Credible Diversity Plot: A Graphic for the Comparison of Biodiversity

Christopher J. Mecklin

Department of Mathematics and Statistics

Murray State University

Murray, KY 42071

christopher.mecklin@murraystate.edu

Abstract. A plethora of statistics exists for the estimation of ecological diversity. Common choices include the Shannon index and the Simpson index. Both of these indices are a function of the proportion of individuals found in each species. Essentially, we are estimating the parameters of a multinomial distribution in order to compute a diversity index. Unfortunately, the arbitrary selection of a diversity index can lead to conflicting results. However, both Shannon's and Simpson's measures are special cases of Rényi entropy. Others have developed 'diversity profiles', which use Rényi entropy to graphically compare the diversity index collected from different locations or from the same location over time. We extend this work by first using the Gibbs sampler to find an interval estimate of Rényi entropy. We then use this estimate to construct a new graphic called the 'credible diversity profile'. Case studies illustrating our graphic will be given.
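
The profile itself is a one-line formula, sketched below; the Bayesian interval estimation via the Gibbs sampler, which turns the profile into a credible profile, is not reproduced.

# Rényi entropy profile: Shannon (alpha -> 1) and Simpson-related
# (alpha = 2) diversity as special cases of one curve.
import numpy as np

def renyi_entropy(p, alpha):
    p = p[p > 0]
    if np.isclose(alpha, 1.0):
        return -(p * np.log(p)).sum()            # Shannon limit
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

counts = np.array([50, 30, 10, 5, 3, 2])         # individuals per species
p = counts / counts.sum()
alphas = np.linspace(0, 3, 31)
profile = [renyi_entropy(p, a) for a in alphas]
# If one site's profile lies above another's for all alpha, it is
# unambiguously more diverse; crossing profiles explain why different
# indices (Shannon vs Simpson) can rank the sites differently.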

Keywords: Statistical Ecology, Biodiversity, Entropy, Bayesian Estimation, Gibbs Sampler

 

Environmental Applications of Discriminant Analysis for Variables with Elliptically Contoured Distributions

Jaroslav Michalek,

Department of Applied Mathematics and Informatics,

Faculty of Economics and Administration,

Lipova 41a, 602 00 Brno, Czech Republic

michalek@econ.muni.cz

Abstract. Classification problems are very frequent in environmental applications, when it is necessary to separate distinct sets of observations. One of the best known and most frequently used methods is classification for two or more multivariate normal populations, where, as is well known, the normality assumptions lead to an exact solution of the classification problem.

In many environmental applications the assumption of normality is too limited and other classification techniques such as regression trees or methods based on generalized linear models are used.

In this contribution we concentrate on the classical discriminant problem, but the normality assumptions are replaced by the assumption that the populations we need to classify have elliptically contoured distributions. Special cases of elliptically contoured distributions include, for example, the multivariate normal distribution, multivariate Cauchy distribution, multivariate Student's distribution, multivariate Kotz distribution and many others. For this type of distribution the discriminant function will be derived, and this method of classification will be compared with the other classification methods mentioned above.

The last part of the contribution will be devoted to a goodness-of-fit test for multivariate elliptically contoured distributions, and the described method of classification will be used to analyze real data sets obtained in a nature reserve in the Czech Republic.

Keywords: Discriminant analysis, Discriminant function, Elliptically contoured distribution, Cauchy, Kotz, normal and Student multivariate distributions.

 

A Comparison of Methods for Incorporating Spatial Dependence in Predictive Vegetation Models

Jennifer Miller1* and Janet Franklin2

1Department of Geology and Geography

West Virginia University

P.O. Box 6300 White Hall

Morgantown, WV 26506-6300 USA

jmiller@geo.wvu.edu

2Department of Biology

San Diego State University

5500 Campanile Dr.

San Diego, CA 92182-4614 USA

Abstract. Predictive vegetation modeling can be defined as predicting the distribution of vegetation across a landscape based upon the relationship between the spatial distribution of vegetation and important environmental variables. Often these predictive models are developed without considering the spatial pattern in biogeographical data. When explicitly included in the model, spatial dependence can increase the predictive ability significantly. Here we develop presence/absence models for eleven vegetation alliances in a portion of the Mojave Desert (California, USA) using classification trees and generalized linear models, and explore two methods of incorporating spatial dependence in the models. The first method involves using geostatistical interpolation techniques to "fill in the blanks" of the sample data to derive an additional variable of neighborhood presence/absence. The second method assumes that spatial dependence in the model residuals represents one or more unmeasured yet important environmental variables, and adds the interpolated residuals to the original predictions. Accuracy was assessed with receiver-operating characteristic (ROC) plots, using a portion of the sample data not used for model development. In general, incorporating spatial dependence improved the accuracy of more common alliances, and had mixed results with rare alliances. The residual interpolation method had more consistently positive results in terms of increased accuracy.

Keywords: predictive vegetation modeling, spatial dependence, accuracy assessment

 

A Spatial Accuracy Assessment of Two Predictive Vegetation Modeling Methods

Jennifer Miller1* and Janet Franklin2

1Department of Geology and Geography

West Virginia University

P.O. Box 6300 White Hall

Morgantown, WV 26506-6300 USA

jmiller@geo.wvu.edu

2Department of Biology

San Diego State University

5500 Campanile Dr.

San Diego, CA 92182-4614 USA

Abstract. Predictive vegetation modeling seeks to quantify the relationship between vegetation distribution and environmental gradients and applies the resulting model to unsampled areas to generate vegetation maps. The sample data are typically divided into a portion used to build the models (training data), with the remaining used to assess the model (test data). Map accuracy is typically assessed using various metrics (kappa, AUC, error matrices), but none of these provides information on the spatial variation of accuracy. Spatial variation in accuracy may be related to landscape features or complexity and is an important, and often missing, component to an accuracy assessment report. In addition, many of the statistical methods commonly used can produce markedly different results using alternative portions of the same dataset. This research focuses on two conceptually different but commonly used methods: logistic regression and classification tree models, which are model-driven and data-driven, respectively. The data were randomly partitioned into 100 sets of train/test data and the models were developed and assessed to compare the range of overall accuracies for each model. The misclassification accuracy for each observation was calculated and a surface was interpolated from this to provide information on the spatial variation of accuracy for each model.

Keywords: spatial accuracy, misclassification probability, vegetation maps

 

Evaluating the Accuracy of Soil Erosion Estimates Associated with Exceptional Rainfall Events in South-East England

Mustafa Mokrech(1), Nick Drake(2), and John Wainwright(2)

(1) Department of Geography

UAE University

P.O.Box 17771

Al-Ain, UAE

m.mokrech@uaeu.ac.ae

(2) Department of Geography

King's College London

Strand, London WC2R 2LS

Nick.Drake@kcl.ac.uk

john.wainright@kcl.ac.uk

Abstract. This study concerns the accuracy of land degradation estimates obtained using a GIS approach. The Thornes soil erosion model is used; this model is parsimonious in its data requirements, allowing easy application. The investigation of soil erosion is conducted in Kent (South-East England). Field studies showed that substantial soil erosion, caused by extreme rainfall events during the last two years, occurred particularly in fields of winter wheat that were largely bare (2-5% cover) at the time.

Quality models for all relevant data are developed, and applicable techniques to propagate the errors of these data through the soil erosion model are provided. Linear inequality constrained mixture modelling is used to derive a map of vegetation cover. The error of this parameter is quantified via up-scaling from measurements obtained at field sites through air photos to TM imagery. The error of the overland flow estimated using the SCS (Soil Conservation Service) model has been obtained using a Monte Carlo error propagation technique. Rainfall is interpolated from the 200 rain gauges in the region using kriging. Soil texture and organic matter are derived by kriging point data from the national soils inventory to estimate the error of erodibility. The error of the derived slope is estimated using Monte Carlo simulation. The resulting uncertainty in the erosion rate is found to be generally high.

Keywords: Soil erosion, quality modelling, error propagation.

 

Investigating the Accuracy of High Resolution Slope Estimates Using Fractal Method and Error Propagation

Mustafa Mokrech(1), Nick Drake(2), and John Wainwright(2)

(1) Department of Geography

UAE University

P.O.Box 17771

Al-Ain, UAE

m.mokrech@uaeu.ac.ae

(2) Department of Geography

King's College London

Strand, London WC2R 2LS

Nick.Drake@kcl.ac.uk, john.wainright@kcl.ac.uk

Abstract. Accurate estimates of slope, derived from topographic data, are fundamental for hydrologic, geomorphic and other environmental models. These estimates are mainly influenced by errors in source data, slope algorithm and scale effects. In order to quantify and analyse these factors, we have used Monte Carlo simulation to investigate the propagation of errors in these slope estimates from DEMs and digital contour data, and a local fractal model is developed to calculate slope at a specified scale.

The Monte Carlo simulations into the effects of error propagation demonstrate that slopes generated from digital contour data are more realistic than those obtained from DEMs. Thus, different errors are predicted for the same study area using these different data types.

The main error not considered by the Monte Carlo simulations is the reduction in slope as the spatial scale of the DEM increases. We have used a fractal method to address this problem. The box and variogram techniques have been evaluated to calculate the local fractal dimension in order to estimate slope at a specified scale. The variogram method performs better than the box technique. Furthermore, we have investigated differences in slope estimate due to different window sizes. As the window size increases, the number of scale lengths considered increases and the fractal dimension tends to decrease. This finding allows us to place constraints on the range of scale over which slope can be accurately predicted using the local fractal method.
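
The variogram route to a local fractal dimension can be sketched in one dimension as below: for a fractal profile γ(h) ~ h^(2H), so H follows from the slope of log γ against log h (simulated Brownian profile; a 2-D surface uses D = 3 - H).

# Variogram-based fractal dimension of a transect, one-dimensional sketch.
import numpy as np

rng = np.random.default_rng(13)
z = np.cumsum(rng.normal(0, 1, 2000))     # Brownian profile, H = 0.5 expected

lags = np.arange(1, 30)
gamma = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
H = slope / 2.0
D = 2.0 - H     # for a 1-D profile; a 2-D surface uses D = 3 - H
print(H, D)
# Slope at a target scale L can then be estimated by extrapolating the
# variogram: the expected elevation difference over L is sqrt(2 * gamma(L)).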

Keywords: Slope estimate, error propagation, slope quality assessment, fractal slope

 

Spatial Analysis of Wind Damage in the Boundary Waters Canoe Area Wilderness

W. Keith Moser, Mark D. Nelson, Greg C. Liknes

North Central Research Station

USDA Forest Service

1992 Folwell Avenue

St. Paul, Minnesota 55108  USA

wkmoser@fs.fed.us

Abstract. In July of 1999, devastating winds impacted almost 400,000 acres in northern Minnesota, USA. Centered primarily on the Boundary Waters Canoe Area Wilderness (BWCAW) on the US-Canadian border, the wind storm resulted in massive blowdowns scattered throughout this forest-and-lakes landscape. At the request of the Superior National Forest, the North Central Forest Inventory and Analysis group of the U.S. Forest Service conducted an intensified inventory of the BWCAW. Our analysis examined three themes: 1) temporal trends in forest vegetation response to the disturbance, 2) a comparison of disturbance response within the blowdown area to undisturbed areas within the BWCAW but outside the blowdown area, and 3) the spatial arrangement of varying levels of disturbance within the blowdown damage area. This paper deals with the last topic – the spatial arrangement of disturbance and the methodologies for assessing these patterns. Using data from the NCRS FIA plots and GIS layers of aspect, soil type and depth, topography and bodies of water, we will present an assessment of the temporal and spatial patterns of disturbance response and extrapolated successional trends. By examining the change in overstory structure, we will infer the patterns of the strong winds that were present that day. We will examine several hypotheses, including:

  1. Overstory damage was influenced by pre-windstorm species composition.
  2. Overstory damage was proportional to depth to bedrock and distance from a body of water.
  3. Overstory damage was proportional to the size of the nearest upwind body of water.
  4. Overstory damage was proportional to the structural homogeneity of the pre-storm forest.

This paper will also discuss the methodologies of integrating these different information sources, the opportunities and constraints, and how each layer assists in accuracy assessment of the other layers.

Keywords:  blowdown, spatial analysis, inventory, accuracy assessment

 

Accurate spatial databases: the role of ontologies

Mir Abolfazl Mostafavi1, Geoffrey Edwards1, Robert Jeansoulin2

1Centre de Recherche en Géomatique

Pavillon Louis-Jacques-Casault

Université Laval, Sainte-Foy (Québec)

G1K 7P4 Canada

mir-abolfazl.mostafavi@scg.ulaval.ca

Laboratoire d'Informatique de Marseille,

Université de Provence, Marseille, France

jeansoul@gyptis.univ-mrs.fr

Abstract. This paper proposes a logical approach to studying the ontological consistency of spatial databases. The process of evaluating spatial database consistency is carried out at two levels. At the ontological level, the internal consistency of the specifications is considered. The specifications consist of the taxonomy of the spatial data defined by the producer and the spatial relations between objects. The relations may be defined purely geometrically or constrained by the semantics of spatial objects. These relations should be consistent and complete. At the data level, real objects and their relations are studied with respect to the specifications. For this purpose, the national topographic database of Canada was selected as a case study. The ontology of the spatial database is translated into a knowledge base coded in Prolog. The development of rules that define inconsistencies, and the querying of the knowledge base to determine the existence of such inconsistencies, were carried out on a very large fact base (over 300,000 statements), in a manner that was both complete and automatic. The overall approach appeared to be justified. The results obtained from several experiments illustrate the potential of the proposed method for the quality assessment of spatial databases at both the ontological and data levels.

Keywords: Spatial data, Consistency, Ontology, Logical methods

 

Adjustment Procedures to Account for Non-Ignorable Missing Data in Environmental Surveys

Breda Munoz and Virginia Lesser

Department of Statistics
44 Kidder Hall

Oregon State University

Corvallis, OR 97330

breda@stat.orst.edu

Abstract. The weighting adjustment procedure is a well-known technique used in survey practice for handling missing data. In this approach, sampling units (respondents and non-respondents) are classified into weighting classes, and the sampling weight for each respondent unit is adjusted by the inverse of an estimate of its response probability (also known as a propensity score). Optimal weighting classes are selected so that, conditional on the adjustment variables, the variable of interest is independent of the response indicator. We explore the assumptions needed to construct optimal adjustment cells in the case of non-ignorable missing data in environmental surveys. We propose a modified Horvitz-Thompson estimator for the population total of the spatial random process of interest and study some of its properties. By using the weighting class adjustment, we account for the non-ignorable missing data.
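
As a minimal sketch of the weighting-class idea, not the authors' modified estimator (the weights, classes and responses below are hypothetical), each respondent's design weight is divided by the estimated response probability of its class:

    import numpy as np

    # Hypothetical data: design weights, weighting-class labels and
    # response indicators (1 = respondent, 0 = nonrespondent).
    w = np.array([10.0, 10.0, 12.0, 12.0, 15.0, 15.0])
    cls = np.array(["A", "A", "A", "B", "B", "B"])
    resp = np.array([1, 0, 1, 1, 1, 0])
    y = np.array([3.0, 0.0, 5.0, 2.0, 4.0, 0.0])  # observed for respondents only

    # Estimated response probability per class (respondents / sampled units).
    phat = {c: resp[cls == c].mean() for c in np.unique(cls)}
    p = np.array([phat[c] for c in cls])

    # Weighting-class adjustment: respondent weights divided by p, so a
    # Horvitz-Thompson-type total is computed over respondents only.
    w_adj = np.where(resp == 1, w / p, 0.0)
    print("adjusted total:", np.sum(w_adj * y))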

Keywords: Environmental Surveys, Missing Data, Non-ignorable, Weighting classes

 

Peripheral Trends of High Areas in Surface Data

Wayne L. Myers

The Pennsylvania State University

124 Land & Water Research Bldg.

University Park, PA 16802 USA

wlm@psu.edu

Abstract. The use of contemporary spatial analysis systems for environmental monitoring tends to promote an emphasis on high versus low areas in surface data. The analysis of elevated zones as upper-level sets is indeed important, but trends on the periphery of elevated areas can also be informative. Some elevated areas drop precipitously at their edges, others drop gradually, and still others have proximate reversals of decline leading up again to other areas that are elevated to a greater or lesser degree. The reversals of downward trend may be small and subtle, or sustained as major features of the surface. Small reversals having high spatial frequency can be considered a type of noise engendering local uncertainty, for which suppression may be advantageous. Mapping and characterizing these transitional areas leads to major opportunities for studying etiology, since these are zones in which causal factors are also varying and thus relatively amenable to study. Echelon analysis enables systematic investigation of such peripheral trend areas that would otherwise be difficult. The approach is explained and illustrated in terms of biodiversity information.
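
A minimal illustration of extracting an upper-level set at one threshold (echelon analysis proper builds a hierarchy of such sets across all thresholds; the surface below is hypothetical):

    import numpy as np
    from scipy import ndimage

    # Hypothetical surface on a grid, e.g. a biodiversity index.
    z = np.array([[1, 2, 1, 0],
                  [2, 5, 4, 1],
                  [1, 4, 2, 1],
                  [0, 1, 1, 3]], dtype=float)

    # Upper-level set at threshold t: all cells where z >= t; its connected
    # components are the elevated zones. Echelon analysis repeats this over
    # decreasing thresholds to build a hierarchy of zones.
    t = 3.0
    labels, n = ndimage.label(z >= t)
    print(n, "elevated zones at threshold", t)
    print(labels)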

Keywords: surface analysis, spatial trends, upper-level zones, echelons

 

Ecosystem transformations and perceptions: A tourist spot scenario

S Nairy, L Noronha, S Sreekesh

The Energy and Resources Institute, India

ksnairy@teri.res.in

Abstract. This paper is based on a larger case study that investigated the role that tourist-induced and other population movements play in causing ecosystem changes in a coastal state of India. It focuses on the changes in three important coastal ecosystems, namely agro-ecosystems locally known as khazan lands, mangroves, and sand dunes, in villages with different levels of tourism development.

The main objective of this paper is to report on the pattern of ecosystem transformation in tourist destinations and to assess whether perceptions of changes in ecosystems vary with tourism development, the direction of perception and the status of ecosystems. The study used five representative coastal villages with different levels of tourism development.

The pattern of ecosystem transformation with respect to tourism development was studied using land cover changes from 1966 to 1999. A household survey was conducted to study ecosystem perceptions among households across villages with different levels of tourism development.

Spatial analysis reveals that there is an overall decrease in all three ecosystems and this is supported by the perceptions of the local people. The study suggests that it is not tourism per se, but the mode in which it is practised that brings about the changes in coastal ecosystems. These transformations influenced by tourism indicate that policy interventions are required to regulate tourism activity in ecologically vulnerable areas.

Key words: ecosystems, tourism development, perceptions

 

Overdispersion models for the violation of Nitrate concentration limits in Mid-Atlantic Region Watersheds

Nagaraj K. Neerchal1, Minglei Liu1, and Earl Greene2

1Department of Mathematics and Statistics

UMBC

Baltimore, MD 21250

nagaraj@math.umbc.edu

2United States Geological Survey

Baltimore, MD.

Abstract. We investigate possible approaches for the spatial analysis of the number of violations in which nitrate concentrations exceed the regulatory limits. We consider a spatial analysis where the basic unit of analysis is a watershed. We study distributional properties of the number of 'hot' wells (defined as wells that exceed the regulatory nitrate limit) in a watershed. Because of possible within-watershed correlation, models based on the beta-binomial distribution and a more recently introduced finite mixture distribution are possible candidates for such data. We will introduce these overdispersion models in a logistic regression relating the number of violations to various explanatory variables such as land cover pattern and geology type. Advantages and disadvantages of modeling data at the watershed level will also be discussed.
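
As a sketch of the overdispersion component alone, not the full regression model (the counts below are hypothetical), the beta-binomial log-likelihood for the number of 'hot' wells per watershed can be maximised directly:

    import numpy as np
    from scipy.special import gammaln, betaln
    from scipy.optimize import minimize

    # Hypothetical counts: y 'hot' wells out of n sampled wells per watershed.
    y = np.array([0, 1, 3, 0, 5, 2])
    n = np.array([4, 6, 8, 5, 9, 7])

    def negloglik(logab):
        """Beta-binomial negative log-likelihood; a and b are kept positive
        by working on the log scale."""
        a, b = np.exp(logab)
        logcomb = gammaln(n + 1) - gammaln(y + 1) - gammaln(n - y + 1)
        return -(logcomb + betaln(y + a, n - y + b) - betaln(a, b)).sum()

    fit = minimize(negloglik, x0=[0.0, 0.0])
    a, b = np.exp(fit.x)
    print("a =", a, " b =", b, " mean violation rate:", a / (a + b))

In the regression setting of the paper, the mean a/(a + b) would additionally be linked to explanatory variables such as land cover and geology.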

Keywords: Logistic regression, beta-binomial distribution, finite mixture distribution.

 

Estimating amphibian occupancy rates in ponds under complex survey designs

Anthony R. Olsen

USEPA NHEERL

Western Ecology Division

200 S.W. 35th Street

Corvallis, OR 97333

Olsen.Tony@epamail.epa.gov

Abstract. Monitoring the occurrence of specific amphibian species in ponds is one component of the US Geological Survey's Amphibian Monitoring and Research Initiative. Two collaborative studies were conducted in Olympic National Park and the southeastern region of Oregon. The number of ponds in each study region precludes visiting every one to determine the presence of particular amphibian species. A two-stage cluster probability survey design was implemented to select a subset of ponds for monitoring. The first-stage primary sampling units are 5th-field hydrologic units and the second-stage units are individual ponds located within each selected hydrologic unit. A common problem is that during a single visit to a pond it is possible not to detect an amphibian species even when it is present; that is, the probability of detection is less than one. The objective of the survey is to estimate the proportion of ponds in each region that are occupied. Methods for estimating site occupancy rates when detection probabilities are less than one were developed by MacKenzie et al. (2002) under the assumption of a simple random sample. Using the notion of generalized estimating functions, their procedures are generalized to cover not only two-stage cluster samples but more general complex survey designs.
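
For reference, a sketch of the simple-random-sample likelihood of MacKenzie et al. (2002), the starting point that the paper generalizes (the detection histories below are hypothetical):

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical detection histories: rows = ponds, columns = repeat
    # visits (1 = detected, 0 = not detected).
    Y = np.array([[1, 0, 1],
                  [0, 0, 0],
                  [0, 1, 0],
                  [0, 0, 0],
                  [1, 1, 0]])
    K = Y.shape[1]

    def negloglik(params):
        """psi = occupancy probability, p = per-visit detection probability,
        both parameterised on the logit scale."""
        psi, p = 1 / (1 + np.exp(-np.asarray(params)))
        det = Y.sum(axis=1)
        # Ponds with at least one detection are certainly occupied.
        ll_pos = np.log(psi) + det * np.log(p) + (K - det) * np.log(1 - p)
        # All-zero histories: either occupied but missed, or unoccupied.
        ll_zero = np.log(psi * (1 - p) ** K + 1 - psi)
        return -np.where(det > 0, ll_pos, ll_zero).sum()

    fit = minimize(negloglik, x0=[0.0, 0.0])
    psi, p = 1 / (1 + np.exp(-fit.x))
    print("occupancy:", psi, " detection:", p)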

Key words: complex survey design, amphibian monitoring, maximum likelihood estimation, site occupancy, detection probability

 

Assessing the spatial uncertainty of boundaries on forest maps using an ecological similarity index

Maria-Gabriela Orzanco

Centre de recherche en géomatique

Université Laval, Pavillon Casault

Québec (Québec) - G1K 7P4 - CANADA

(418) 656 2131 ext. 12677

Maria-Gabriela.Orzanco@scg.ulaval.ca

Kim Lowell

Centre de recherche en géomatique

Université Laval, Pavillon Casault

Québec (Québec) - G1K 7P4 - CANADA

Kim.Lowell@scg.ulaval.ca

Marie-Josée Fortin

Landscape Ecology Laboratory

University of Toronto

Toronto (Ontario) - M5S 3G5 – CANADA

mjfortin@zoo.utoronto.ca

Abstract. Forest stand boundaries are usually represented in a vegetation map as fine lines all having the same width. Thus spatial accuracy and precision are considered to be uniform over the entire map. However, real forest boundaries -- i.e., those observed on the ground -- differ in degree of definition and contrast between adjacent stands. This study examines the association between ecological contrast and context of forest boundaries and the uncertainty associated with their spatial location.

In this study, the contrast between adjacent forest stands was quantified via an ecological similarity index (ESI). The attributes used to estimate the value of this index for each mapped boundary were species composition, density class, height class, age class and global contrast. In addition, the context, or neighbourhood, of each boundary was estimated using a local autocorrelation index (local Moran's I) computed on the contrast values. The relationships among the context, the contrast and the existence uncertainty (consistency) were explored using discriminant analysis; consistency was measured by overlaying three multi-temporal forest maps.
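
A minimal sketch of the local Moran's I computation (up to the usual scaling constant; the contrast values and weights matrix below are hypothetical):

    import numpy as np

    # Hypothetical ESI-based contrast values for six boundaries and a 0/1
    # spatial weights matrix linking neighbouring boundaries.
    x = np.array([0.8, 0.7, 0.2, 0.9, 0.1, 0.3])
    W = np.array([[0, 1, 0, 1, 0, 0],
                  [1, 0, 1, 0, 0, 0],
                  [0, 1, 0, 0, 1, 1],
                  [1, 0, 0, 0, 1, 0],
                  [0, 0, 1, 1, 0, 1],
                  [0, 0, 1, 0, 1, 0]], dtype=float)

    # Local Moran's I for unit i is z_i * sum_j w_ij z_j on standardised
    # values; positive values flag clusters of similar contrast.
    z = (x - x.mean()) / x.std()
    print(z * (W @ z))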

It was found that the consistency was most closely related to the contrast between the forest types present on either side of a forest stand boundary. Context was found to have no relationship to stand boundary consistency.

Keywords: spatial uncertainty, forest boundary, ecological contrast and context

 

Accuracy Assessment and Uncertainty in Baseline Projections for Land-Use Change Forestry Projects

 

Louis Paladino and R Gil Pontius Jr

Clark University

Graduate School of Geography

Department of International Development and Environment

loupaladino@yahoo.com

Abstract. This paper uses validation techniques to estimate uncertainty in the prediction of future disturbance on a landscape. Interpreted satellite imagery from 1975 to 1992 was used to calibrate the land change model. Data from 1992 to 2000 were used to assess the goodness-of-fit of the validation, as measured by the statistic Kappa for Location (Klocation), which is a variant of the traditional Kappa index of agreement. Based on the goodness-of-fit in the year 2000, Klocation is extrapolated to predict the goodness-of-fit for the year 2026. The extrapolation of Klocation allows the scientist to predict the accuracy with which the model locates future disturbance.

For the validation year of 2000, Klocation is 0.22, which means that the model is 22% of the way between random and perfect in predicting the location of disturbed versus undisturbed land. The predicted Klocation in the year 2026 is 0.008. Therefore, the estimated probability that a pixel will be disturbed in 2026, given that the model says it will be disturbed, is 1.8%. The probability that a pixel will be disturbed given that the model says it will be undisturbed is 1.0%. The results allow us to understand the uncertainty in using models for land-use change forestry project baseline estimates.
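
For reference, a sketch of how Kappa for Location is commonly computed from a prediction-versus-reference cross-tabulation (the matrix below is hypothetical, not the paper's data):

    import numpy as np

    # Hypothetical cross-tabulation of prediction (rows) vs reference
    # (columns) as cell proportions, categories (disturbed, undisturbed).
    P = np.array([[0.10, 0.15],
                  [0.12, 0.63]])

    po = np.trace(P)                              # observed proportion correct
    pe = P.sum(axis=1) @ P.sum(axis=0)            # agreement expected by chance
    pmax = np.minimum(P.sum(axis=1), P.sum(axis=0)).sum()  # max given quantities

    print("Klocation:", (po - pe) / (pmax - pe))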

Keywords: land change, prediction, uncertainty

 

Impact of Spatial Extent on the Analysis of Land Change

John Pearson

jpearson@clarku.edu

R Gil Pontius, Jr

Clark University

950 Main Street,

Worcester MA 01610

rpontius@clarku.edu

Abstract. This presentation examines the sensitivity of the measurement of land change to variation in the spatial extent of the study area. We consider land change between 1971 and 1999 in Central Massachusetts using windows of various sizes and positions. We also consider the interaction between extent and resolution to evaluate the combined effect of changes in scale. Results show that the measurement of land change can vary tremendously as a function of extent, but that the variation is constrained by mathematical principles. It is important for scientists to know how spatial scale can influence the analysis of land change, because in many cases the spatial extent and resolution are dictated arbitrarily by issues such as data availability or familiarity with the study area.

Keywords: Extent, Resolution, Scale, Land Change

 

A Method for Classifying Land Parcels as Receptive or Unreceptive to

Nitrate Leaching

Edzer J. Pebesma

Dept. of Physical Geography

Utrecht University

P.O. Box 80.115, 3508 TC

Utrecht, The Netherlands

e.pebesma@geog.uu.nl

Jaap de Gruijter

ALTERRA

Wageningen University and Research Centre

Wageningen, The Netherlands

Gerard B.M. Heuvelink

ALTERRA and Laboratory of Soil Science and Geology

Wageningen University and Research Centre

Wageningen, The Netherlands

Abstract. For legislative purposes, every agricultural land parcel in the Netherlands has to be classified as being receptive ('dry') or unreceptive ('wet') to nitrate leaching to groundwater, as farmers are allowed to supply different amounts of manure to each of them. A parcel is considered wet when the fraction of the parcel where the mean highest water table (MHW) is above 40 cm and the mean lowest water table (MLW) is above 120 cm exceeds 2/3. Mean water tables are defined as the mean of the three highest (lowest) water tables from a biweekly measured series over at least eight years. For the majority of parcels, however, no measurements are available, and prediction from nearby measurement sites, aided by information on terrain altitude, the drainage network and regional hydrology, was used. To obtain approximate estimates of the probability of parcels being wet (or dry), geostatistical simulation was used that incorporates the spatial correlation of MHW and MLW, their spatial cross-correlation, and their regression relationships to explanatory variables. Applying a cutoff value to this probability leads to a classification. The final classification scheme aims to weigh the risks of falsely assigned wet parcels (an environmental risk) against those of falsely assigned dry parcels (a financial risk for the farmer, potentially leading to legal conflicts with the legislator). The paper addresses some details of the classification procedure, discusses risk trade-offs, and presents results.
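
A minimal sketch of the simulation-based classification step (the random fields below are simple placeholders standing in for proper conditional geostatistical realizations):

    import numpy as np

    rng = np.random.default_rng(1)

    # Placeholders for geostatistical realizations: nsim joint simulations
    # of MHW and MLW depths (cm) over the ncell cells of one parcel.
    nsim, ncell = 500, 100
    mhw = rng.normal(35, 10, (nsim, ncell))
    mlw = mhw + rng.normal(75, 15, (nsim, ncell))

    # Per-cell 'wet' criterion: MHW shallower than 40 cm and MLW shallower
    # than 120 cm; the parcel is wet in a realization if more than 2/3 of
    # its cells qualify.
    wet_frac = ((mhw < 40) & (mlw < 120)).mean(axis=1)
    p_wet = (wet_frac > 2 / 3).mean()

    cutoff = 0.5  # trades the environmental risk against the financial risk
    print("P(wet) =", p_wet, "->", "wet" if p_wet > cutoff else "dry")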

Keywords: geostatistics, simulation, classification, change of support, groundwater

 

Detecting true land change with confused classifiers

R. Gil Pontius Jr. and Christopher D. Lippitt

Clark University

Department of International Development, Community and Environment (IDCE)

Graduate School of Geography

950 Main Street

Worcester MA 01610-1477

rpontius@clarku.edu

Abstract. This paper describes a method to assess uncertainty in the measurement of change among categories of land cover between two points in time. Our method acknowledges that error is an inherent part of map production, so differences between maps can be due both to real landscape change and to map error. For example, if two producers create a map of the same location for the same time, we would expect some disagreement between those two maps due to producer error. If we compare two maps of the same location from different times, we would expect to see disagreement between the two maps for two reasons: 1) producer error, and 2) true landscape change between the points in time. Our method uses matrix algebra and conventional techniques of accuracy assessment to separate the disagreement due to error from the disagreement due to true landscape change. We illustrate the method with maps from 1971 and 1999 of several land categories in Central Massachusetts. The method shows that there is detectable change from Forest to Built, in spite of the fact that a substantial portion of the difference between the maps can be attributed to map error.
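
A toy illustration of the underlying logic, not the authors' matrices (the error structures, composition and observed disagreement below are hypothetical): the disagreement expected from producer error alone is computed first, and the residual is suggestive of true change:

    import numpy as np

    # Hypothetical confusion structures P(mapped | true) for the two dates,
    # categories (Forest, Built); rows = true class, columns = mapped class.
    E1 = np.array([[0.92, 0.08],
                   [0.10, 0.90]])
    E2 = np.array([[0.90, 0.10],
                   [0.08, 0.92]])
    true_shares = np.array([0.7, 0.3])   # hypothetical landscape composition

    # If nothing changed, the two maps agree on a pixel of true class c with
    # probability sum_k P1(k|c) * P2(k|c), assuming independent errors.
    same = (E1 * E2).sum(axis=1)
    expected_error_disagreement = (true_shares * (1 - same)).sum()

    observed_disagreement = 0.22         # hypothetical observed value
    print("expected from error alone:", round(expected_error_disagreement, 3))
    print("residual suggestive of change:",
          round(observed_disagreement - expected_error_disagreement, 3))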

Keywords: accuracy, change, error, GIS, uncertainty

 

Use of Remotely Sensed Image-Banks in Forest Information Systems: Data-Fusion of imagery over space and time for forest change detection.

Keith Rennolls

School of Computing and Mathematical Sciences,

University of Greenwich,

London SE10 9LS, UK.

k.rennolls@greenwich.ac.uk

Abstract. Forest Information Services and Systems (FISs) are becoming increasingly prevalent (e.g. GFIS, EFIS, Canada FIS, ...), largely owing to the ease of developing distributed systems using internet technology. Differing aims and differing circumstances mean that such systems vary both in design and implementation. The consistency of information and data resources over space and time can also influence what kind of information is included in an FIS. In particular, rapid monitoring of fast-acting processes, such as fire or wind-blow, makes primary use of remotely sensed imagery. However, other systems, often those with a rather longer time scale, often do not make use of remotely sensed imagery. The first part of this paper will review the main FISs currently developed, with a particular focus on the extent to which they make use of historical satellite remotely sensed imagery.

In fact, there are major technical challenges associated with making full and efficient use of remotely sensed imagery that has a range of differing time-stamps and resolutions and takes the form of partial coverings of the landscape. Even in the simplest situation, when we have one raster image with some ground-truth calibration data, the use of statistical calibration techniques on the multi-spectral data (for classification and rectification) is not straightforward, in theory or in practice. In the more complex situation, when an FIS has available an historical image-bank in which the imagery and training data are usually sparse in space and time, the problems of time-and-space-dependent statistical classification are very challenging. Some of these technical issues, and the use of spatio-temporal smoothing models to achieve an "image-fusion", will be discussed. Such an approach potentially allows spatial interpolation and extrapolation, temporal prediction, and possibly forest change detection, all with increased accuracy.

Keywords: Remotely sensed images, classification, accuracy, rectification, spatio- temporal model, image-fusion, change detection.

 

Assessing Uncertainty in Large Area Maps Generated for Land Cover Change Monitoring

John Rogan1, Janet Franklin2, Jennifer Miller3 , and Doug Stow4

1 Clark School of Geography

Clark University

Worcester, MA 01610

jrogan@clarku.edu

2 Department of Biology

San Diego State University

5500 Campanile Drive

San Diego, CA 92182

3 Department of Geology and Geography

West Virginia University

425 White Hall
P.O. Box 6300

Morgantown, WV 26506

4 Department of Geography

San Diego State University

5500 Campanile Drive

San Diego, CA 92182

Abstract. In light of the increasing use of large area map products, remote sensing practitioners and users increasingly require reliable map accuracy information to determine the adequacy of these maps for their specific purposes. Accuracy assessment in large area mapping is an especially important research topic because, as the size of the area to be mapped increases, the positional and thematic accuracy will generally decrease, as a function of landscape heterogeneity and a large degree of within-class variation. Although the topic of uncertainty is a central theme in land cover and land cover change mapping, it has received a relatively modest amount of attention in the remote sensing literature. Therefore, the aim of this paper was to provide spatially explicit means to visualize the magnitude and source of geometric misregistration error and thematic map error in the multitemporal image data sets used in a change mapping study in California. Results provide an effective means to spatially visualize image misregistration and thematic map error. Also, the creation of 'error' surfaces facilitated a robust analysis of the cause of spatial errors in large area maps, and of the relationship between positional errors and thematic errors.

Keywords: Map accuracy, geometric misregistration, land cover change

 

Proximate Causes of Vegetation Change in the Boorowa Region, Canberra, Australia: a remote sensing modelling approach

Abdolrassoul Salmanmahiny and Brian J. Turner

SRES

The Australian National University

0200, Canberra, Australia

Rassoul.Mahiny@anu.edu.au

Abstract. Past change in remnant vegetation patches was modelled using remotely sensed MSS and TM imagery and GIS. Images covering 27 years, from 1973 to 2000, were used to detect change in vegetation through post-classification comparison and to model it through neural network and logistic regression methods. Physical factors, image-based layers and landscape metrics provided the 19 predictor variables used in the modelling. The area of study is the catchment of the Boorowa River in New South Wales, Australia, around 110 km northwest of Canberra.

The relative operating characteristic (ROC) and a modified version of the multi-resolution goodness-of-fit (MGF) test, together with visual comparison, were used to assess the success of the modelling approaches. Also, the relative effect of each of the 19 predictor variables was evaluated through the ROC and MGF methods using 19 reduced-variable models and the full model.

Overall, the neural network method performed slightly better than the logistic regression. A surrogate layer for agricultural intensity (MAXNDVI) and the ratio of MSS band 4 to NDVI (MSS4toNDVI) were found to be the most important predictor variables of change in vegetation. This was also the case with the logistic regression method, where slope and Para (the perimeter-to-area ratio of the patches) were additionally shown to be important predictor variables.

The behaviour of vegetation change as the dependent variable against the important predictor variables was studied using response graphs produced with the neural network modelling method.

Keywords: Vegetation change, remote sensing, modelling, logistic regression, neural networks

 

Digital Road Map Accuracy Evaluation: A Buffer-based Approach

Shashi Shekhar, Xiaobin Ma, and Hui Xiong

Computer Science Department

University of Minnesota

4-192 EE/CSci Building

200 Union SE

Minneapolis, MN 55455 USA

(shekhar,xiaobin,huix)@cs.umn.edu

Weili Wu

Computer Science Department

University of Texas

Dallas, Texas

weiliwu@utdallas.edu

Max Donath and Pi-Ming Cheng

Mechanical Engineering Department

University of Minnesota

Minneapolis, MN

(donath,pmcheng)@me.umn.edu

Abstract. The quality of digital road maps plays a critical role in many GIS applications. For example, the next generation of automatic road user charge systems will need to be able to identify the road on which a vehicle is travelling and report the total number of miles driven by a vehicle. This requirement imposes a direct challenge on the accuracy of digital road maps. Certainly, failure to manage and evaluate errors in digital road maps may limit or invalidate road user charge assessment. Some research has been proposed to evaluate the accuracy of digital road maps. However, most of it is point-based and not scalable to large road networks. In this paper, we present a spatial framework for evaluating digital road map accuracy. In this framework, we apply a buffer-based line-string association measure for field test site selection. This measure is line-based and can help discover high-affinity road patterns, i.e., patterns of roads located very close together, which are important for digital road map evaluation. Furthermore, we propose a line-segment-based digital road map accuracy measure that uses well-defined buffer computation. As demonstrated by our experiments on the Minnesota Twin Cities basemap, our approach is applicable to large road networks and is effective for assessing positional accuracy in digital road maps. Finally, we find that the accuracy of popular digital road maps does not satisfy the requirements of applications such as road user charge systems. In fact, current digital road maps are not designed for road user charge systems and may lead to inaccurate and unfair charges.
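
A minimal sketch of a buffer-based positional accuracy measure in the spirit described (the geometries and tolerance below are hypothetical), using shapely:

    from shapely.geometry import LineString

    # Hypothetical reference centerline and its digital-map counterpart.
    reference = LineString([(0, 0), (50, 0), (100, 5)])
    tested = LineString([(0, 2), (48, 1), (100, 9)])

    # Fraction of the tested line's length that falls within a buffer of
    # half-width tol around the reference line: a simple positional measure.
    tol = 5.0
    inside = tested.intersection(reference.buffer(tol)).length
    print("fraction within", tol, "map units:", inside / tested.length)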

 

Quantifying the Efficacy of Multicriteria Generalization (MCG) of Geospatial Data for AEM Groundwater Modeling

Gaurav Sinha1, Warit Silavisesrith1, and James R. Craig2

1Department of Geography

2Department of Civil, Structural & Environmental Engineering

University at Buffalo, NY, USA

gsinha@acsu.buffalo.edu

Abstract. Regional-scale environmental simulation models need generalization of high-resolution data to multiple lower resolutions to obtain an acceptable numerical solution within a reasonable interval of time. Methods for geospatial data generalization have been developed in the past, but mainly to address cartographic concerns. We therefore use a new framework called Multicriteria Generalization (MCG) that generalizes geospatial data under constraints determined by both cartographic and geophysical considerations. These constraints are derived from experts' domain knowledge and realized within the generalization system as multiple interactive criteria. This paper evaluates the efficacy of MCG for a groundwater model (SPLIT) that relies on the vector-based analytic element method (AEM) to conceptualize and implement the groundwater system. The tradeoff between computation time and the errors introduced in model predictions is analyzed at several generalization levels for different weighted combinations of input criteria. To minimize the uncontrolled perturbations introduced within the groundwater model due to modifications of elemental interaction patterns, only polylinear analytic elements are generalized in this study. The results are used to advocate the future use of this framework for applications other than analytic element models of groundwater flow.

Keywords: Multicriteria, Generalization, Analytic Element, Groundwater

 

Circumventing the Spatial Dependence Problem in a Diffusing Contaminant Cloud

C.J. Smith, T.P. Schopflocher and P.J. Sullivan

The Department of Applied Mathematics

The University of Western Ontario

London Canada N6A 5B7

pjsul@uwo.ca

Abstract. The sudden, accidental release of a quantity of contaminant into the environment occurs frequently, for example in the rupture of storage tanks or in train derailments. Contaminant concentration values are reduced by molecular diffusion, and the natural way to describe the contaminant concentration field is to use the probability density function (P.D.F.) p(θ; x, t), which depends on the position located by vector x at time t. The P.D.F. is very difficult to work with both theoretically (extremely complicated and intractable equations describe its evolution) and experimentally (approximating an ensemble average of a nonstationary and nonhomogeneous statistic is difficult, even in a laboratory flow). The expected mass fraction (E.M.F.) is an alternate and relevant measure which describes the fraction of released mass located in concentration intervals. It has the theoretical advantages of simpler moment evolution equations and of fewer realizations being required to approximate an ensemble average.

In this paper we use the α-β moment prescription of Chatwin and Sullivan to investigate the kurtosis K versus skewness S relationship,

K = A S² + B,        (1)

where A and B are order-one constants, put forward by Mole and Clarke. Equation (1) has been shown to describe data from continuous emissions in the atmospheric boundary layer over a range of stability classes, and laboratory data on clouds with varying release densities, even in the presence of fences in the flow. A great advantage of (1) is that it can be validated from isolated, fixed-point measurements in the flow. The interpretation of (1) in terms of the α-β moment prescription provides confirmation that this simple moment structure can be used to generate, through moment inversion, an approximate representation of the E.M.F. with few parameters. With this procedure a robust description of concentration reduction in a diffusing contaminant cloud is provided, using approximations that are consistent with the inevitable error in the specification of the release conditions and which avoid the problems associated with spatially distributed measures.
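
Relation (1) can be checked from fixed-point records: compute S and K at each measurement point and fit K against S². A sketch with synthetic data (for gamma-distributed values the exact relation is K = 1.5 S² + 3, so the fit recovers order-one constants):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Synthetic stand-in for fixed-point concentration records at m points;
    # gamma-distributed values satisfy K = 1.5 S^2 + 3 exactly.
    m, n = 12, 2000
    series = [rng.gamma(shape, 1.0, n) for shape in rng.uniform(0.3, 3.0, m)]

    S = np.array([stats.skew(c) for c in series])
    K = np.array([stats.kurtosis(c, fisher=False) for c in series])

    A, B = np.polyfit(S**2, K, 1)   # least-squares fit of K = A*S^2 + B
    print("A =", A, " B =", B)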

Keywords: Environment, Atmospheric Diffusion, Moments, Probability Density Function

 

Detecting Pattern in Biological Stressor Response Relationships Using Model Based Cluster Analysis

Eric P. Smith

Department of Statistics

Virginia Tech, Blacksburg, VA 24061

epsmith@vt.edu

Ilya Lipkovich

Eli Lilly and Company

Indianapolis, IN

Keying Ye

Department of Statistics

Virginia Tech, Blacksburg, VA 24061

Abstract. Environmental monitoring of aquatic systems involves the collection of biological, chemical and physical measures of the system. An important concern is the effect of chemical and physical stressors on the biological community. To study this, regression methods are often used. From a management perspective, interest is in the types of relationships that might occur and the spatial extent of their occurrence. The question of type and extent of relationship is difficult, as models are not constant over space. We approach this problem using model-based cluster analysis. For the cluster analysis we used a penalized classification likelihood that allows variation in both the variables involved in the model and the observations. This allows a set of data to be partitioned into groups of observations and variables that support various models. By constraining the observations to be spatially linked, the extent of the relationship may be determined. The method is applied to the analysis of a data set describing stressor-response relationships in one of the Ohio ecoregions.
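
As a loose illustration of model-based clustering with a penalized criterion, not the authors' penalized classification likelihood (the data below are synthetic), a Gaussian mixture selected by BIC:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)

    # Synthetic stand-in for stressor/response observations at sites.
    X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(4, 1, (30, 2))])

    # Fit mixtures of increasing size and keep the one minimising BIC,
    # a penalized-likelihood model-selection criterion.
    models = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(1, 5)]
    best = min(models, key=lambda m: m.bic(X))
    print("clusters:", best.n_components)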

Keywords: cluster analysis, stressor-response modeling, biological monitoring

 

Spatio-Temporal Modeling of the Abundance of Spawning Coho on the Oregon Coast

Ruben A. Smith and Don Stevens Jr.

Department of Statistics
44 Kidder Hall

Oregon State University

Corvallis, OR 97330

rsmith@stat.orst.edu

Abstract. In 1998, the Oregon Department of Fish and Wildlife (ODFW) implemented a probability design to monitor coho salmon and aquatic habitat in Oregon coastal streams. Exploratory analysis of the observed abundance of spawning coho from 1998 to 2002 suggests the presence of spatial and temporal correlation. This research describes the formulation and assessment of a spatio-temporal model for the abundance of coho on the Oregon Coast. We consider a hierarchical Bayesian approach to formulate a model where systematic structure and spatial and temporal correlation are considered in a sequence of conditional models. This type of model allows the evaluation of the spatio-temporal variability of the process given the observed counts.
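
A generic sketch of such a sequence of conditional models (illustrative notation only, not the authors' exact specification):

    \begin{aligned}
    y_{st} \mid \lambda_{st} &\sim \mathrm{Poisson}(\lambda_{st}), \\
    \log \lambda_{st} &= \mathbf{x}_{st}^{\top}\boldsymbol{\beta} + u_s + v_t, \\
    u &\sim \text{spatial Gaussian process}, \qquad v_t = \rho\, v_{t-1} + \epsilon_t,
    \end{aligned}

where y_{st} is the spawner count at site s in year t, and the posterior is explored by Markov chain Monte Carlo.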

Keywords: Bayesian, Geostatistics, Markov Chain Monte Carlo, Spatio-temporal.

 

A Bayesian Approach to Determining the "Paternity" of Environmental Contamination

Douglas E. Splitstone

Splitstone and Associates

4530 William Penn Hwy. #110

Murrysville, Pennsylvania 15668 USA

Michael E. Ginevan

Blasland, Bouck & Lee, Inc.

1943 Isaac Newton Square East

Suite 240

Reston VA 20190 USA

michael@ginevan.com

Abstract. Most evaluations of the contribution of different sources to environmental contamination at different locations begin with a set of measurements of chemical species at those locations. One then calculates either a variance-covariance or a correlation matrix and performs some type of factor analysis. The resulting multivariate patterns shown in the factor solution are then used to identify the contribution of different sources to contamination at different locations. The present discussion takes a different approach, which derives from the Bayesian analysis of genetic data to determine the likely paternity of a given child given a set of data from potential fathers, the mother and the child. We demonstrate the use of two variants of this approach. One assumes independence of chemical contaminants, while the other considers the specific multivariate signature of each source. In both, we begin with a non-informative prior that assumes all sources are equally likely to have generated a given point contamination, and use the data to calculate a posterior probability of each source being responsible for the contamination.
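
A toy version of the independence variant (the signatures, measurement error and sample below are hypothetical): with a uniform prior, the posterior over sources is proportional to the likelihood of the observed profile under each source's signature:

    import numpy as np
    from scipy.stats import norm

    # Hypothetical mean signatures (3 analytes) for 3 candidate sources and
    # a common measurement standard deviation.
    signatures = np.array([[10.0, 2.0, 0.5],
                           [ 6.0, 4.0, 1.5],
                           [ 9.0, 1.0, 2.5]])
    sigma = 1.0
    sample = np.array([9.2, 1.4, 2.2])   # profile at the contaminated point

    # Independence variant: likelihood is a product over analytes; with a
    # uniform prior the posterior is proportional to the likelihood.
    loglik = norm.logpdf(sample, loc=signatures, scale=sigma).sum(axis=1)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    print("posterior source probabilities:", post.round(3))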

Keywords: Source allocation, Paternity, Bayesian Inference

 

A Multidimensional, Composite Index for Assessing Environmental Sustainability

Tanja Srebotnjak and Daniel Esty

Yale School of Forestry and Environmental Studies

Tanja.Srebotnjak@Yale.edu

Yale Law School

Yale University, New Haven, CT
Daniel.Esty@Yale.edu

Abstract. We consider environmental sustainability as the environmental component of sustainable development and investigate the impact of socio-economic driving forces as well as institutional and technological capacities on progress towards a more environmentally sustainable society. For this purpose a composite Environmental Sustainability Index (ESI) is developed by extending the classic Pressure-State-Response framework. The ESI comprises 20 core indicators, which are aligned along five dimensions: Environmental Systems, Environmental Stresses, Human Vulnerability, Social and Institutional Capacity, and Global Stewardship. Each indicator itself represents an aggregate of several variables selected on the basis of their explanatory relevance for the respective indicator. Using the country level as the geospatial and political reference unit, 142 countries are ranked with regard to the degree of environmental sustainability they have attained. The individual environmental sustainability indices permit inter- and intra-country analysis of the impact of the multiple dimensions of economic and social development on the health of ecosystems and the maintenance of adequate environmental quality. The usefulness of the index as a policymaking tool is exemplified in a statistical analysis of the relationships between economic activity and environmental sustainability.

Keywords: index development, indicators, pressure-state-response framework, environmental sustainability

 

Comparison of variance estimators for two-dimensional, spatially-structured sample designs

Don L. Stevens, Jr. and Susan Hornsby

Department of Statistics

Oregon State University

44 Kidder Hall

Corvallis, Or 97331

stevens@stat.orst.edu

Abstract. Stevens and Olsen (2003a) derived a neighborhood variance estimator for spatially-balanced samples of an environmental resource. The estimator was developed for the Generalized Random Tessellation Stratified (GRTS) design (Stevens & Olsen, 2003b; Stevens & Olsen, 1999), but should also work well for any sampling design that distributes points over a two-dimensional domain in a more-or-less regular fashion. In particular, the estimator should work well for strictly systematic designs, as well as various perturbations that are "almost" regular, e.g., non-aligned systematic, randomized tessellation stratified, or Breidt's Markov chain designs. Here we compare the performance of the Stevens-Olsen estimator to other variance estimators that have been proposed for systematic or almost regular designs, using a variety of two-dimensional surfaces with known spatial covariance structure or known spatial pattern.

Stevens, D. L., Jr. and A. R. Olsen. 1999. Spatially restricted surveys over time for aquatic resources. JABES 4:415-428.

Stevens, D. L., Jr. and A. R. Olsen. 2003a. Variance estimation for spatially balanced samples of environmental resources. Environmetrics 14:593-610.

Stevens, D. L., Jr. and A. R. Olsen. 2003b. Spatially-balanced sampling of natural resources. JASA, in press.

Key words: spatial sampling, spatial pattern, spatial correlation

 

Spatially-Explicit Ecological Risk Assessment and Foraging Models

R.N. Stewart (u74@ornl.gov)

S.T. Purucker (purucker@utk.edu)

C.J.E. Welsh (cwelsh@utk.edu)

University of Tennessee

The Institute for Environmental Modeling

1416 Circle Drive
569 Dabney Hall
Knoxville, TN 37996-1610

Abstract. Geographical Information Systems (GIS) are increasingly being coupled with traditional ecological exposure techniques to incorporate spatial variability into ecological risk assessments. The additional inclusion of movement models that simulate foraging and other behaviors allows for the comprehensive assessment of population exposures and risks. Inter- or intra-species differences in movement strategies and habitat valuation can have substantial effects on cumulative exposures, even when species share a common range. Movement models allow important receptor information, such as foraging area/home range, relative desirability of habitat areas, contaminant distribution, and individual behavior, to be more realistically captured in the assessment, and their effects on the cumulative exposure distribution to be explored. Information concerning the habitat is integrated through the use of a Habitat Suitability Index (HSI) model, and different movement strategies that utilize HSI data are reviewed and compared, including habitat preference, correlated, and uncorrelated random walk algorithms. Implications for ecological decision-making are discussed, and freeware (SADA) that implements the movement algorithms is demonstrated.
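
A minimal sketch of an HSI-weighted (habitat-preference) random walk on a grid; the grid and preference rule below are hypothetical, and the SADA algorithms differ in detail:

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical habitat suitability index (HSI) grid, values in [0, 1].
    hsi = rng.uniform(0, 1, (20, 20))
    pos, track = (10, 10), [(10, 10)]

    for _ in range(50):
        r, c = pos
        # Candidate moves: the 8-neighbourhood, clipped at the grid edge.
        nbrs = [(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
                and 0 <= r + dr < hsi.shape[0] and 0 <= c + dc < hsi.shape[1]]
        # Habitat-preference walk: step probability proportional to HSI.
        w = np.array([hsi[n] for n in nbrs])
        pos = nbrs[rng.choice(len(nbrs), p=w / w.sum())]
        track.append(pos)

    print("visited", len(set(track)), "distinct cells")

Cumulative exposure would then be accumulated along the track from a contaminant concentration grid.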

 

Incorporating Soft Information into Sampling Strategies

R.N. Stewart (u74@ornl.gov)

S.T. Purucker (purucker@utk.edu)

J.J. Roberts-Niemann (robertsn22@hotmail.com)

University of Tennessee

The Institute for Environmental Modeling

1416 Circle Drive
569 Dabney Hall
Knoxville, TN 37996-1610

Abstract. Incorporating soft information to optimize sampling locations has much potential benefit in reducing the time and resources necessary to characterize contaminated sites. Examples of soft information include prior knowledge that is quantifiable or measured, such as historical data or geological conditions. This information may also be subjective, such as historical events or professional judgment about the site. Different strategies that use this information are reviewed, with an emphasis on how they can be tied explicitly to both spatial and risk analysis models and directly support decision-making processes. These methods can assist investigators with initial sample locations, hot spot confirmation, refining areas of concern, reducing model uncertainty, and locating areas of extreme values. Given previously existing data, these methods can support a single secondary sampling design, either to refine existing information or to support a post-remedial survey. Freeware (SADA) that implements these approaches while simultaneously incorporating spatial information, risk analysis, and decision criteria is demonstrated.

 

Uncertainty in estimates of the proportion and area of land cover types from an accuracy assessment of a land cover map for North Dakota

Laurence L. Strong and Wesley E. Newton

USGS Northern Prairie Wildlife Research Center

8711 37th Street SE

Jamestown, ND 58401

701 253-5524

Larry_Strong@usgs.gov

Abstract. Land cover maps developed from remotely sensed imagery are becoming increasingly available to natural resource managers. Users need to be aware of the accuracy and limitations of these land cover maps so that their suitability for use can be assessed. We describe a probability-based accuracy assessment of a land cover map for North Dakota.

The sample design was a stratified random single-stage cluster sample of 253 square-mile sample units selected with unequal probability among 8 strata defined by combinations of four physiographic regions and four anthropogenic land cover proportion classes. Ground surveys and aerial photographs were used to create land cover maps for the sample units. We present unbiased estimates of the area of land cover types from a statistical analysis of data collected for the accuracy assessment. We contrast these estimates with the areas of land cover types obtained from simple pixel counts of the land cover map. We discuss sources of uncertainty in the estimates for the land cover types.

Keywords: Remote sensing, accuracy assessment, probability sampling, maplets

 

Visualizing Spatial Uncertainty of Geologic Structure Based on Multiple Simulations

P. Switzer, T. Zhang, and A. Journel

Department of Geological and Environmental Sciences

Stanford University

switzer@stanford.edu

Abstract. Training images that incorporate geologic structure are combined with observations to produce simulated maps. Training images are used to build probabilistic models of structural configurations on several spatial scales, such as complex layering and channel structure of sands and shales in petroleum reservoirs. Practical implementation involves reducing the configuration space to a manageable level. A given configuration is assigned a series of "scores" that are based on weighted linear combinations, and the probabilistic model then operates in the space of configuration scores. The model is then sampled to obtain multiple realizations of structural pattern that are conditional on the observations and are derived from the statistical properties of training images. These multiple simulations are used collectively to visually represent the uncertainty of spatial structure, conditional on the observations.

Key words: spatial uncertainty, multiple simulation, geologic structure

 

A hierarchical spatial count model with application to imperiled grassland birds

Wayne E. Thogmartin1, John R. Sauer2, and Melinda G. Knutson1

1 Upper Midwest Environmental Sciences Center

U.S. Geological Survey

2603 Fanta Reed Road, La Crosse, WI 54603, U.S.A.

La Crosse, WI, U.S.A.

wthogmartin@usgs.gov

2 USGS Patuxent Wildlife Research Center

Laurel, MD, U.S.A.

Abstract. We utilized a Markov chain Monte Carlo approach to spatially predict the abundance of five rare grassland birds (Bobolink, Grasshopper Sparrow, Sedge Wren, Upland Sandpiper, Henslow's Sparrow) in the upper midwestern US. Twenty-one years of North American Breeding Bird Survey counts were modeled as a hierarchical loglinear function of explanatory variables describing habitat, spatial relatedness between route counts, year effects, and nuisance effects associated with differences in observers. The model included a conditional autoregressive term representing the correlation between adjacent routes. Explanatory habitat variables in the model included land cover composition and configuration, climate, terrain physiognomy, and human influence. The model hierarchy was due to differences in route counts between observers over time. We fitted this model with WinBUGS. Preliminary evaluation of the models based on independent data suggested generally good agreement with model predictions. Discrepancies between the evaluation data and model predictions were due, in some unknown measure, to the insertion of errors when translating the statistical model into a mapped model.
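
In generic notation, a hierarchy consistent with this description (a sketch, not the authors' exact WinBUGS code) is:

    \begin{aligned}
    y_{it} \mid \lambda_{it} &\sim \mathrm{Poisson}(\lambda_{it}), \\
    \log \lambda_{it} &= \mathbf{x}_i^{\top}\boldsymbol{\beta} + \gamma_t + \omega_{o(i,t)} + \phi_i, \\
    \phi_i \mid \phi_{-i} &\sim \mathrm{N}\!\left( \frac{1}{n_i} \sum_{j \sim i} \phi_j,\; \frac{\tau^2}{n_i} \right),
    \end{aligned}

where y_{it} is the count on route i in year t, x_i holds the habitat covariates, gamma_t is a year effect, omega_{o(i,t)} an observer (nuisance) effect, and phi_i the conditional autoregressive term over the n_i neighbouring routes.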

Keywords: MCMC, overdispersed Poisson regression, spatially-correlated counts, species-habitat models

 

Linking climatic variables to the applicability of biomass equations at the landscape scale

C.-H. Ung and M.-C. Lambert

Natural Resources Canada

Canadian Forest Service

1055, rue du P.E.P.S., P.O. Box 3800

Sainte-Foy (Quebec) Canada G1V 4C7

ung@cfl.forestry.ca

Abstract. Previous research has shown that biomass growth, when expressed by the height growth of dominant trees, is strongly related to both stand attributes and climatic factors. However, in the current allometric equations used to predict stand biomass, only forest stand attributes (stand density, stand height) are considered as explanatory variables. We demonstrate that adding climatic variables to the allometric equation can significantly decrease the error of the biomass estimates when a large climatic range is considered at the landscape scale.
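
Schematically, such an extension makes the allometric coefficients climate-dependent; the symbols below are illustrative assumptions, not the authors' fitted equation:

    B = \beta_0\, N^{\beta_1} H^{\beta_2}, \qquad \beta_k = \beta_{k0} + \beta_{k1} T + \beta_{k2} P,

where B is stand biomass, N stand density, H stand height, and T and P are climatic covariates such as temperature and precipitation.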

Keywords: biomass, allometry, stand structure, climatic effects

 

Regional Hydrologic Methods for Use in Health Studies

Richard M. Vogel

Department of Civil and Environmental Engineering

Tufts University

Medford, MA 02155

richard.vogel@tufts.edu

Abstract. It is now well understood that agricultural, urban and other anthropogenic influences can have a significant impact on human health. As a result, there is increasing interest in, and need for, relating environmental processes to human health outcomes. This presentation will summarize a variety of regional statistical hydroclimatological studies which have developed tools for estimating the hydrological response of watersheds from readily available geospatial datasets. Existing regional statistical hydrologic models for predicting floods, droughts, sediment and other hydroclimatologic variables will be reviewed, as will hydrologic methods for assessing the regional probability distribution and the stochastic structure of a variety of hydroclimatologic variables. Finally, examples will be provided of the use of regional hydroclimatologic and land-use models for predicting water quality outcomes such as river bacteria concentrations and groundwater nitrate levels.

Keywords: hydrology, hydroclimatology, floods, droughts, regional.

 

Method of Evaluation by Order Theory (METEOR) applied on the Topic of Water Contamination with Pharmaceuticals

Kristina Voigt

GSF - National Research Center for Environment and Health

Institute of Biomathematics and Biometry

Department of Biostatistics

85764 Neuherberg, Germany

kvoigt@gsf.de

Rainer Brüggemann

Leibniz-Institute of Freshwater Ecology and Inland Fisheries

12587 Berlin, Germany

Abstract. To achieve sustainable development in the environmental and health sectors, it is absolutely necessary to keep groundwater, and consequently drinking water, free of contaminants. Unfortunately, several chemicals are detected in water media: surface water, wastewater, groundwater and drinking water. The detection of pharmaceuticals in these water media poses major problems. An intensive literature survey will be performed for 12 selected drugs. The attributes considered are the availability of the pharmaceuticals in the media surface water, sewage sludge, groundwater and drinking water. Another criterion examined is the possibility of removing the pharmaceutical during the drinking water treatment process. Hence a 12 x 5 data matrix results. In a different evaluation approach, the publications can be chosen as objects and the 12 pharmaceuticals taken as attributes. The methods applied are the Hasse diagram technique (HDT) and the Method of Evaluation by Order Theory (METEOR). The aim of the aggregation procedure performed with METEOR is to obtain a unique prioritisation scheme. Several different weighting procedures are considered and performed:

    Equal weighting

    W-Matrix: one attribute is left out

    Different weighting, normalization to 1

The results of these elaborated multi-criteria evaluation approaches might give deeper insight into the data situation concerning the contamination of water with pharmaceuticals, and might also underline the urgent need first to decrease and then to stop this contamination.
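
The core of HDT is a componentwise dominance comparison; a small sketch (the attribute matrix below is hypothetical) that lists the order relations one would draw in a Hasse diagram:

    import numpy as np

    # Hypothetical 4 x 3 evaluation matrix (objects x attributes).
    names = ["drug_A", "drug_B", "drug_C", "drug_D"]
    X = np.array([[1, 2, 1],
                  [2, 3, 2],
                  [1, 3, 1],
                  [3, 1, 2]])

    # HDT order: object i <= object j iff i is <= j in every attribute;
    # incomparable pairs (like drug_D here) stay unordered.
    for i in range(len(names)):
        for j in range(len(names)):
            if i != j and np.all(X[i] <= X[j]):
                print(names[i], "<=", names[j])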

Keywords: Order theory, Hasse diagram technique, ranking approach, pharmaceuticals, groundwater

 

Reduced models of the retention of nitrogen in catchments

Karl Wahlin, Davood Shahsavani, and Anders Grimvall

Department of Mathematics

Linköpings universitet

SE-58183 Linköping, Sweden

kawah@mai.liu.se

Abstract. As a rule, process-oriented deterministic models of substance flows through catchment areas can satisfactorily explain prevailing spatial distributions of riverine loads of nitrogen. Furthermore, such models can usually clarify most of the seasonal variation and a considerable fraction of the short-term temporal fluctuations in the nitrogen loads. However, it is less certain how well the models can predict several-year-long lags in the water quality response to interventions in a drainage area. We carried out ensemble runs of the model INCA-N to elucidate how changes in the supply of nitrogen to agricultural land can influence meteorologically normalised riverine loads of nitrogen. In particular, we found that the shape of the impulse response function is determined primarily by the hydro-mechanical model parameters, whereas the parameters governing the turnover of nitrogen in soil mainly influence the magnitude of the model output. This raises the question of whether the soil nitrogen processes included in the model are elaborate enough to explain the widespread observations of slow water quality responses to changes in agricultural practices.

Keywords: model reduction, nitrogen retention, catchments

 

Assessing the Accuracy of Line Descriptions in Presettlement Land Survey Records, USA

Yi-Chen Wang

Department of Geography

105 Wilkeson Quad.

University at Buffalo, The State University of New York

Buffalo, NY 14261

ycwang@buffalo.edu

Abstract. Presettlement Land Survey Records (PLSRs) are the most important source of information for gaining insights into the original landscape prior to major human disturbances. PLSRs may contain bearing tree data, survey line descriptions, plat maps, and township summaries. Bearing tree data are most often used in presettlement landscape studies because the exact locations of trees are easier to determine, but the limitation of their sparse sample size is noted. Information recorded in line descriptions includes vegetation summaries, soil quality, and disturbance occurrences, and line descriptions have thus been suggested as auxiliary information for presettlement landscape studies. It is important to understand the quality of line descriptions before incorporating these data.

This study examines the attribute and positional accuracy of the Holland Land Company survey line descriptions in western New York. The attribute accuracy issue of whether species listed in line descriptions were recorded in order of abundance is examined using probabilities generated from geostatistical modeling of bearing tree data. The positional accuracy of landscape features such as streams and rivers is investigated using current stream distributions from USGS 7.5-minute topographic maps. Based on these analyses, researchers can make more appropriate use of the line descriptions.

Keywords: presettlement land survey records, attribute accuracy, positional accuracy

 

Spatial Autoregressive Conditional Heteroscedasticity

Jun Yan

Department of Statistics and Actuarial Science

University of Iowa

Iowa City, IA 52242

j-yan@uiowa.edu

Abstract. In a Conditional Autoregressive (CAR) model for regular lattice data, the conditional variance of a response given its neighbors is usually modeled as a function of the number of its neighbors, independent of the spatial location. A CAR model may fit a dataset well when there is not much difference in the variabilities across spatial locations. In situations where the spatial uncertainty clusters, a spatial model is needed for the conditional heteroscedasticity itself. It may improve the efficiency of mean parameter estimates, and sometimes it is of scientific interest in its own right. Volatility models long available for time series are not readily applicable because spatial data lack the unidirectional flow of time. This work explores two ways to model spatial heteroscedasticity, presenting analogs of the autoregressive conditional heteroscedasticity models and stochastic volatility models of time series analysis. First, the conditional variance of a response is modeled by the second moments of its neighbors. Second, a latent process for the spatial heteroscedasticity is specified by a CAR model. Inference will be developed within the Bayesian framework. Simulation studies and real data applications will be used to verify the usefulness of the methodology.
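
In generic notation, the two variants could take the following forms (illustrative only, not the paper's exact specification):

    \begin{aligned}
    \text{(i)}\quad & y_i \mid y_{-i} \sim \mathrm{N}(\mu_i,\, h_i), \qquad
        h_i = \alpha_0 + \frac{\alpha_1}{n_i} \sum_{j \sim i} y_j^2, \\
    \text{(ii)}\quad & \log h_i \mid \log h_{-i} \sim \mathrm{N}\!\left( \frac{1}{n_i} \sum_{j \sim i} \log h_j,\; \frac{\sigma^2}{n_i} \right),
    \end{aligned}

where j ~ i denotes the n_i lattice neighbours of site i; (i) is the ARCH analog and (ii) the latent (stochastic volatility) analog.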

Keywords: Conditional autoregressive model, Gaussian Markov random field, Heteroscedasticity, Spatial nonstationarity, Volatility

 

Processing of environmental data - a comparison of two methods

Jiri Zelinka1, Ivana Horova1, Vitezslav Vesely2

1Department of Applied Mathematics

Faculty of Science

Masaryk University in Brno

Czech Republic

2Department of Applied Mathematics and Computer Science

Faculty of Economics

Masaryk University in Brno

Czech Republic

zelinka@math.muni.cz

Abstract: Kernel functions (kernels) can be used in many types of non-parametric methods: estimation of the density function of a random variable, of the hazard function, or of the regression function. These are among the most efficient non-parametric methods. Another non-parametric method uses so-called frames - overcomplete systems of functions of some type.

This paper compares kernel smoothing with frame smoothing using frames of a special kind, constructed from kernel functions. Both smoothing procedures are applied to environmental data, and the results obtained will be presented graphically.
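
As a reference point for the kernel half of the comparison, a minimal Nadaraya-Watson kernel regression sketch (synthetic data; the frame-based smoother is not shown):

    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic stand-in for an environmental series: noisy signal over x.
    x = np.sort(rng.uniform(0, 10, 200))
    y = np.sin(x) + rng.normal(0, 0.3, x.size)

    def nw_smooth(x0, x, y, h):
        """Nadaraya-Watson kernel regression with a Gaussian kernel of
        bandwidth h, evaluated at the points x0."""
        k = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
        return (k * y).sum(axis=1) / k.sum(axis=1)

    grid = np.linspace(0, 10, 101)
    print(nw_smooth(grid, x, y, h=0.5)[:5])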

Keywords: kernels, kernel smoothing, frames.