Tag Archives: Geostatistical Analyst

Finding locations for new monitoring sites using the Densify Sampling Network tool.

Introduction

Sampling design is a critical part of any study involving modeling and estimation based on data sampled from natural resources or other phenomena occurring in the landscape. Statistical considerations related to sampling are part of a larger …

New in Geostatistical Analyst 10.1: Areal Interpolation

For version 10.1, we’ve taken on a classic problem in GIS: how to reallocate data from one set of polygons to a different set of polygons.  For example, demographers frequently collect data from various sources, so their data might be a mixture of census block groups, postal codes, and county boundaries.  However, to perform an accurate analysis, they might need all of their data in the same administrative units.

While there are various methods for going from small polygons to large polygons (from census blocks to postal codes, for example), the benefit of areal interpolation is that it additionally provides a statistically accurate framework for going from large polygons to small polygons.  By convention, the starting polygons are called the “source” polygons, and the ending polygons are called the “target” polygons.
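The reallocation idea above can be illustrated with a much simpler scheme than the statistical method the tool actually uses: plain area weighting, where each source polygon's count is split among target polygons in proportion to shared area. This is only a conceptual sketch — all names and numbers below are hypothetical, and the Geostatistical Analyst tool uses a kriging-based approach rather than this simple weighting.

```python
def area_weighted_reallocate(source_values, intersection_areas, source_areas):
    """Reallocate counts from source polygons to target polygons in
    proportion to the area each source shares with each target.

    source_values: {source_id: count}
    intersection_areas: {(source_id, target_id): shared area}
    source_areas: {source_id: total area of the source polygon}
    """
    target_values = {}
    for (src, tgt), shared in intersection_areas.items():
        weight = shared / source_areas[src]  # fraction of the source inside the target
        target_values[tgt] = target_values.get(tgt, 0.0) + source_values[src] * weight
    return target_values

# Two census blocks split across two postal codes (hypothetical numbers):
values = {"blockA": 100, "blockB": 50}
areas = {"blockA": 10.0, "blockB": 4.0}
overlap = {("blockA", "zip1"): 6.0, ("blockA", "zip2"): 4.0,
           ("blockB", "zip2"): 4.0}
# blockA contributes 60 to zip1 and 40 to zip2; blockB contributes 50 to zip2
print(area_weighted_reallocate(values, overlap, areas))
```

Simple weighting like this assumes the variable is spread uniformly within each source polygon; the appeal of areal interpolation is precisely that it replaces that assumption with a statistical model, which is what also makes the large-to-small direction defensible.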

Posted in Analysis & Geoprocessing | 3 Comments

New in Geostatistical Analyst 10.1: Empirical Bayesian Kriging

Those of you familiar with kriging interpolation know that it is not always the easiest technique to implement successfully. For a long time we've wanted to make a geoprocessing tool that can automate kriging, but the obstacle has always been the complexity of calculating good default parameters. At 10.1, through a combination of subsetting and simulations, we have a solution to the problem with a method called empirical Bayesian kriging (EBK). The method is available in the Geostatistical Wizard and as a geoprocessing tool in the Geostatistical Analyst toolbox.
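The subsetting idea mentioned above can be illustrated in miniature: instead of committing to a single parameter estimate from the full dataset, estimate the same quantity on many random subsets and keep the whole distribution of estimates. This toy sketch uses the sample variance as a stand-in for a semivariogram parameter; the real EBK algorithm fits and simulates semivariograms, which this does not attempt.

```python
import random

def toy_subset_estimates(values, subset_size, n_subsets, seed=0):
    """Toy illustration of the subsetting idea only: estimate a statistic
    (here the sample variance, standing in for a semivariogram parameter)
    on many random subsets, yielding a distribution of estimates rather
    than a single value whose uncertainty is ignored.
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_subsets):
        sample = rng.sample(values, subset_size)
        mean = sum(sample) / subset_size
        var = sum((v - mean) ** 2 for v in sample) / (subset_size - 1)
        estimates.append(var)
    return estimates

# Spread of estimates across subsets hints at parameter uncertainty:
data = [random.Random(42).gauss(0, 1) for _ in range(200)]
print(min(toy_subset_estimates(data, 30, 50)), max(toy_subset_estimates(data, 30, 50)))
```

Propagating that distribution of fitted parameters into the final prediction standard errors is what lets EBK avoid the usual kriging assumption that the semivariogram is known exactly.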

Dealing with extreme values in kriging

Introduction

One of the most common problems we have when attempting to interpolate data using kriging is the presence of outliers in the data. An outlier is a data value that is either very large or very small compared to the rest of the data. Outliers often result from malfunctions in the monitoring equipment or typos during data entry, such as accidentally dropping a decimal point. These erroneous data points should be manually corrected or removed before attempting to interpolate. However, not all outliers are the result of machine or human error. Some outliers are valid values, and this blog post will demonstrate how to deal with this kind of outlier.
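Before deciding whether an extreme value is an error or a valid observation, it helps to flag candidates systematically. One common, simple screening rule (not the method from this post, which is about handling outliers during interpolation itself) is to flag values beyond 1.5 times the interquartile range from the quartiles:

```python
def iqr_outliers(values, k=1.5):
    """Flag values lying more than k * IQR outside the quartiles."""
    xs = sorted(values)
    n = len(xs)

    def quantile(q):
        # Linear interpolation between the closest ranks.
        pos = q * (n - 1)
        lo, hi = int(pos), min(int(pos) + 1, n - 1)
        frac = pos - lo
        return xs[lo] * (1 - frac) + xs[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    spread = k * (q3 - q1)
    return [v for v in values if v < q1 - spread or v > q3 + spread]

# Hypothetical monitoring readings with one suspiciously large value:
data = [12, 13, 11, 14, 12, 13, 95]
print(iqr_outliers(data))  # → [95]
```

Flagged values still require judgment: a value like 95 might be a dropped decimal point (9.5) or a genuine extreme event, and only the second kind should survive into the interpolation.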

Posted in Analysis & Geoprocessing | 5 Comments

Automating geostatistical interpolation using template layers

Do you regularly perform geostatistical interpolation using the same parameters? We know that copying and pasting the same parameters over and over can introduce errors and can be pretty tedious. However, this process can be streamlined using the Create Geostatistical Layer tool.
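The idea behind a template layer can be sketched in plain Python: capture the interpolation parameters once, then apply them to any number of datasets so they are never re-entered by hand. The function and parameter names below are invented for illustration — the actual workflow uses the Create Geostatistical Layer geoprocessing tool with a saved geostatistical layer as the model source.

```python
def make_interpolator(template_params):
    """Return a function that 'interpolates' any dataset with one fixed,
    reusable set of (hypothetical) parameters, mimicking how a template
    layer carries its parameters to new input data."""
    def interpolate(dataset_name):
        # A real tool would run the interpolation; here we just record
        # which dataset was paired with which template parameters.
        return {"dataset": dataset_name, **template_params}
    return interpolate

# Define the parameters once...
template = {"method": "kriging", "model": "spherical", "lag_size": 500}
run = make_interpolator(template)

# ...then reuse them across many datasets without retyping anything:
for name in ["ozone_2010", "ozone_2011", "ozone_2012"]:
    print(run(name))
```

Centralizing the parameters this way also means a later change (say, a new lag size) is made in exactly one place rather than in every copy-pasted run.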