Spatial Statistics and Applications

Organizer and Session Chair: Hao Zhang, Purdue University


Modeling Point Processes with Spatially Dependent Marks

Chae Young Lim and Sarat C. Dass
Michigan State University

Abstract: This paper develops statistical models and methodology for marked spatial point processes with two important characteristics: (1) the underlying point pattern distribution is able to represent a variety of clustering tendencies in a spatial domain, and (2) the marks exhibit strong correlation depending on the spatial closeness of the associated points in the realized point pattern. Modern approaches to analyzing spatial point processes are used to model this relatively complex process, recognizing that assumptions of stationarity are not valid for the observed point patterns. Inference is carried out in a Bayesian MCMC framework, where a dimension-changing Reversible Jump step is incorporated to update the number of clusters of the spatial point pattern. The proposed class of models is fitted to fingerprint images in the NIST Special Database 4 to demonstrate the flexibility of fit to different kinds of fingerprint feature patterns.
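The two characteristics above can be illustrated by simulating a clustered point pattern whose marks are drawn from a Gaussian process, so that marks at nearby points are strongly correlated. This is only a generative sketch with made-up parameter values (cluster count, offspring spread, correlation range); it does not reproduce the paper's model or its Reversible Jump MCMC inference.

```python
import numpy as np

rng = np.random.default_rng(42)

# Parent-offspring (Neyman-Scott style) clustered point pattern on [0,1]^2:
# each parent spawns a Poisson number of offspring scattered around it.
n_clusters = 4
parents = rng.uniform(0, 1, size=(n_clusters, 2))
pts = np.vstack([
    p + rng.normal(0, 0.03, size=(rng.poisson(15), 2)) for p in parents
])

# Marks from a Gaussian process with exponential covariance exp(-d / range),
# so spatially close points receive similar mark values.
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
cov = np.exp(-d / 0.1)
marks = rng.multivariate_normal(np.zeros(len(pts)), cov + 1e-8 * np.eye(len(pts)))
```

The small jitter added to the diagonal keeps the covariance matrix numerically positive definite when points nearly coincide.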


An Analysis of Kansas Temperature Data

Chunsheng Ma
Wichita State University

Abstract: This talk will address a strategic plan to analyze the daily minimum and maximum air temperature data in Kansas using stochastic and statistical techniques and high-performance computers. We start by developing time series models for temperature at each station around the state of Kansas, with careful specification of trend and seasonal terms, and then work on spatial models among stations at each day. Combining the purely temporal and purely spatial information, the spatio-temporal temperature data set is then fitted by applying a spatio-temporal autoregressive model to the residual field.
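The first stage described above, a station-level time series model with trend and seasonal terms, can be sketched as a harmonic regression fitted by ordinary least squares. The synthetic series, frequency, and coefficient values below are illustrative assumptions, not the author's data or implementation; the residuals are what a second, spatial modeling stage would then take as input.

```python
import numpy as np

rng = np.random.default_rng(1)
days = np.arange(3 * 365)  # three years of daily observations

# Synthetic daily temperature: linear trend + annual cycle + noise
omega = 2 * np.pi / 365.25
temp = 0.002 * days + 10 * np.cos(omega * days - 1.0) + rng.normal(0, 2, days.size)

# Design matrix: intercept, linear trend, one pair of seasonal harmonics
X = np.column_stack([
    np.ones_like(days, dtype=float),
    days,
    np.cos(omega * days),
    np.sin(omega * days),
])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)

# Detrended, deseasonalized residual field for the spatial stage
residuals = temp - X @ coef
```

Additional harmonic pairs (e.g., a semi-annual term) can be appended as extra columns of the design matrix if the seasonal pattern is not purely sinusoidal.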


Correcting for Signal Attenuation from Noisy Proxy Data in Climate Reconstructions

Bo Li
Purdue University

Abstract: Regression-based climate reconstructions scale one or more noisy proxy records against a (generally) short instrumental data series. Based on that relationship, the indirect information is then used to estimate that particular measure of climate back in time. A well-calibrated proxy record, if stationary in its relationship to the target, should faithfully preserve the mean amplitude of the climatic variable. However, it is well established in the statistical literature that traditional regression parameter estimation can lead to substantial amplitude attenuation if the predictors carry significant amounts of noise. This issue is known as the "measurement error" problem. Climate proxies derived from tree rings, ice cores, lake sediments, etc., are inherently noisy, and thus all regression-based reconstructions could suffer from this problem. Some recent applications attempt to ward off amplitude attenuation, but implementations are often complex or require additional information, e.g., from climate models. Here we explain the cause of the problem and propose an easy, generally applicable, data-driven strategy to effectively correct for attenuation, even at annual resolution. The impact is illustrated in the context of a Northern Hemisphere mean temperature reconstruction. An inescapable trade-off for achieving an unbiased reconstruction is an increase in variance, but for many climate applications the change in mean is a core interest.
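The attenuation mechanism described above can be demonstrated in a few lines: when the predictor is observed with noise, the ordinary least squares slope shrinks toward zero by the reliability ratio lambda = var(x) / (var(x) + var(u)). The sketch below is a generic textbook correction that divides the naive slope by an estimate of lambda, assuming the noise variance is known; it is not the talk's proposed data-driven strategy, and all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 2.0                     # true regression slope
sigma_x2, sigma_u2 = 1.0, 1.0  # variance of the true signal and of proxy noise

x = rng.normal(0.0, np.sqrt(sigma_x2), n)      # true climate signal
w = x + rng.normal(0.0, np.sqrt(sigma_u2), n)  # noisy proxy record
y = beta * x + rng.normal(0.0, 0.5, n)         # instrumental target

# Naive OLS of y on the noisy proxy attenuates the slope:
# E[beta_naive] = beta * lambda, here 2.0 * 0.5 = 1.0
beta_naive = np.cov(w, y)[0, 1] / np.var(w)

# Correction: divide by an estimate of the reliability ratio
# (sigma_u2 is assumed known here; in practice it must be estimated).
lam_hat = (np.var(w) - sigma_u2) / np.var(w)
beta_corrected = beta_naive / lam_hat
```

The corrected slope is approximately unbiased but has a larger variance than the naive one, the same bias-variance trade-off noted at the end of the abstract.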