Parallel geostatistics for sparse and dense datasets

Abstract

Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over short time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process; in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. In less time-critical applications, for example when interacting directly with the data for exploratory analysis, it is also helpful that the algorithms respond within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly when maximum likelihood methods are employed. Although the storage requirements scale only linearly with the number of observations in the dataset, the memory and computational costs scale quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, often more, and other mechanisms for parallel computation, such as Grid-based systems, are also becoming increasingly available. However, there currently seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. Recognising that different natural parallelisms exist, and can be exploited, depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely and densely sampled data on a variety of architectures, ranging from the dual-core processors found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets, and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
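
The abstract refers to likelihood approximations in the spirit of Vecchia [1988], in which the joint Gaussian likelihood is factorised into conditional terms that can be evaluated independently and hence in parallel. The sketch below is a minimal, hedged illustration of that idea, not the authors' implementation: the one-dimensional setting, zero-mean assumption, exponential covariance and all function and parameter names are illustrative choices, and the parallelism uses Python's multiprocessing purely as a stand-in for the multi-core or multi-node architectures discussed in the paper.

# A minimal sketch of a Vecchia-style approximate log-likelihood evaluated in
# parallel. All names, the 1-D setting, the zero-mean assumption and the
# exponential covariance are illustrative choices, not the authors' code.
import numpy as np
from multiprocessing import Pool


def exp_cov(d, sill=1.0, length=10.0, nugget=1e-6):
    """Exponential covariance of separation d (parameters are placeholders)."""
    return sill * np.exp(-np.abs(d) / length) + nugget * (d == 0)


def conditional_term(args):
    """One Vecchia factor: log p(y_i | y of the m preceding points)."""
    yi, xi, y_nb, x_nb = args
    C = exp_cov(x_nb[:, None] - x_nb[None, :])   # neighbour covariance matrix
    c = exp_cov(x_nb - xi)                       # cross-covariance to point i
    w = np.linalg.solve(C, c)
    mu = w @ y_nb                                # conditional (kriging) mean
    var = exp_cov(0.0) - w @ c                   # conditional variance
    return -0.5 * (np.log(2.0 * np.pi * var) + (yi - mu) ** 2 / var)


def vecchia_loglik(y, x, m=10, processes=2):
    """Approximate log p(y) as a sum of conditional terms; the terms are
    independent, so they can be farmed out to separate processes or nodes."""
    order = np.argsort(x)                        # simple 1-D ordering
    y, x = y[order], x[order]
    tasks = [(y[i], x[i], y[max(0, i - m):i], x[max(0, i - m):i])
             for i in range(1, len(y))]
    with Pool(processes) as pool:
        terms = pool.map(conditional_term, tasks)
    # Marginal of the first (zero-mean) observation completes the factorisation.
    v0 = exp_cov(0.0)
    return sum(terms) - 0.5 * (np.log(2.0 * np.pi * v0) + y[0] ** 2 / v0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 100.0, size=500)
    y = rng.standard_normal(500)                 # synthetic data for the demo
    print(vecchia_loglik(y, x, m=10, processes=2))

In a practical maximum likelihood setting this approximate log-likelihood would be maximised over the covariance parameters; because each conditional term only touches a small neighbourhood, the dominant cost per term is fixed and the terms distribute naturally across cores or cluster nodes.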

Publication DOI: https://doi.org/10.1007/978-90-481-2322-3_32
Divisions: College of Engineering & Physical Sciences > Systems analytics research institute (SARI)
Additional Information: geoENV 2008, 8-10 September 2008, Southampton (UK). The original publication is available at www.springerlink.com
Uncontrolled Keywords: spatially-referenced datasets, satellite-based sensors, monitoring networks, individual sensors, environmental decision making, generation of maps, specific locations, real-time data, geostatistical operations, interpolation, map-generation, emergency, risk, evacuation, exploratory analysis, grid based systems, data likelihood, parallel maximum likelihood variogram estimation, parallel prediction algorithms, Walker Lake data set
ISBN: 9789048123216
Last Modified: 29 Oct 2024 16:29
Date Deposited: 01 Mar 2011 11:43
Related URLs: http://www.spri ... 67xr3558222517/ (Publisher URL)
PURE Output Type: Chapter
Published Date: 2008
Authors: Ingram, Benjamin R.
Cornford, Dan (ORCID Profile 0000-0001-8787-6758)

Download

Version: Accepted Version

