Tag Archives: Geostatistical Analyst
Sampling design is a critical part of any study involving modeling and estimation based on data sampled from natural resources or other phenomena occurring in the landscape. Statistical considerations related to sampling are part of a larger …
For version 10.1, we’ve taken on a classic problem in GIS: how to reallocate data from one set of polygons to a different set of polygons. For example, demographers frequently collect data from various sources, so their data might be a mixture of census block groups, postal codes, and county boundaries. However, to perform an accurate analysis, they might need all of their data in the same administrative units.
While there are various methods for going from small polygons to large polygons (from census blocks to postal codes, for example), the benefit of areal interpolation is that it additionally provides a statistically accurate framework for going from large polygons to small polygons. By convention, the starting polygons are called the “source” polygons, and the ending polygons are called the “target” polygons.
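To make the source/target idea concrete, here is a plain-Python sketch of the simple area-weighted reallocation that areal interpolation improves upon. All names are illustrative, and the sketch assumes each count is spread uniformly over its source polygon — the geostatistical areal interpolation in 10.1 replaces that uniformity assumption with a kriging-based statistical model.

```python
# Illustrative sketch (not Esri's implementation): area-weighted
# reallocation of a count variable from source to target polygons.

def reallocate(source_values, overlap_areas):
    """source_values: {source_id: count}
    overlap_areas: {(source_id, target_id): area of intersection}
    Returns {target_id: estimated count}, assuming each count is
    spread uniformly over its source polygon."""
    # Total area of each source polygon (sum of its overlaps).
    source_area = {}
    for (s, t), a in overlap_areas.items():
        source_area[s] = source_area.get(s, 0.0) + a

    target_values = {}
    for (s, t), a in overlap_areas.items():
        share = a / source_area[s]  # fraction of source s inside target t
        target_values[t] = target_values.get(t, 0.0) + source_values[s] * share
    return target_values

# Two census blocks split across two postal codes (areas are made up):
blocks = {"A": 100, "B": 50}
overlaps = {("A", "Z1"): 3.0, ("A", "Z2"): 1.0, ("B", "Z2"): 2.0}
print(reallocate(blocks, overlaps))  # {'Z1': 75.0, 'Z2': 75.0}
```

Note that the totals are conserved: the 150 counted in the source blocks are fully redistributed across the postal codes.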
Those of you familiar with kriging interpolation know that it is not always the easiest technique to implement successfully. For a long time we’ve wanted to make a geoprocessing tool that can automate kriging, but the problem has always been in the complexity of calculating good default parameters. At 10.1, through a combination of subsetting and simulations, we have a solution to the problem with a method called empirical Bayesian kriging (EBK). The method is available in the Geostatistical Wizard and as a geoprocessing tool in the Geostatistical Analyst toolbox.
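The building block behind any kriging workflow, including EBK's subset-and-simulate approach, is the empirical semivariogram: half the mean squared difference between pairs of measurements, binned by separation distance. Here is a plain-Python sketch of that computation — the function name and binning scheme are illustrative, not Esri's implementation.

```python
import math

def empirical_semivariogram(points, values, bin_width, n_bins):
    """points: list of (x, y); values: parallel list of measurements.
    Returns [(lag, gamma)] per distance bin, where gamma(h) is half the
    mean squared difference over pairs whose separation falls in the bin."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            b = int(d // bin_width)
            if b < n_bins:
                sums[b] += (values[i] - values[j]) ** 2
                counts[b] += 1
    # Report each non-empty bin at its midpoint lag.
    return [((b + 0.5) * bin_width, sums[b] / (2 * counts[b]))
            for b in range(n_bins) if counts[b]]
```

Where a classical workflow fits one semivariogram model to this curve by hand, EBK repeats the estimation on many local subsets of the data and simulates a distribution of semivariograms from each, which is what lets it compute sensible parameters automatically.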
One of the most common problems when attempting to interpolate data using kriging is the presence of outliers in the data. An outlier is a data value that is either very large or very small compared to the rest of the data. Outliers often result from malfunctions in the monitoring equipment or from typos during data entry, such as an accidentally dropped decimal point. These erroneous data points should be manually corrected or removed before attempting to interpolate. However, not all outliers are the result of machine or human error. Some outliers are valid values, and this post will demonstrate how to deal with this kind of outlier.
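One quick way to screen for the "very large or very small" values described above is the standard interquartile-range (boxplot) rule. This is a generic illustration, not the method the post goes on to describe; the function name and the k=1.5 cutoff are conventional choices, not Esri defaults.

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], the common
    boxplot rule of thumb (k=1.5 is a convention, not an Esri default)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

readings = [4.1, 3.9, 4.3, 4.0, 4.2, 41.0]  # 41.0: a dropped decimal point?
print(iqr_outliers(readings))  # [41.0]
```

A flagged value like 41.0 still needs a judgment call: correct it if it is a typo, remove it if the instrument failed, but keep it if it is a real measurement — which is exactly the case the post addresses.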
Do you regularly perform geostatistical interpolation using the same parameters? Copying and pasting the same parameters over and over is tedious and can introduce errors. Fortunately, this process can be streamlined using the Create Geostatistical Layer tool.
The Geostatistical Analyst (GA for short) uses sample points taken at different locations in a landscape to create (interpolate) a continuous surface. The sample points are measurements of some phenomenon, such as radiation leaking from a nuclear power plant, an oil spill, or elevation. Geostatistical Analyst derives a surface from the values at the measured locations to predict values for every location in the landscape.
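As a minimal illustration of what "predicting a value at an unmeasured location from nearby samples" means, here is inverse distance weighting, the simplest deterministic interpolator. This sketch is not Geostatistical Analyst's kriging machinery — kriging adds a statistical model on top — and the function name and parameters are illustrative only.

```python
import math

def idw(samples, x, y, power=2):
    """Predict a value at (x, y) from measured sample points using
    inverse distance weighting: nearer samples get larger weights.
    samples: list of ((sx, sy), value) pairs."""
    num = den = 0.0
    for (sx, sy), z in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return z  # exact hit on a sample point
        w = d ** -power
        num += w * z
        den += w
    return num / den

samples = [((0, 0), 10.0), ((2, 0), 20.0)]
print(idw(samples, 1, 0))  # midpoint of two equidistant samples: 15.0
```

Evaluating such a predictor on a grid of (x, y) locations is what produces the continuous surface described above; kriging differs in how the weights are chosen (from a fitted semivariogram rather than raw distances), which also yields a standard error for each prediction.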