Category: Analysis & Geoprocessing

Using Python 2.5.4 with ArcGIS 9.3.1

This blog post was written by David Wynne, Product Engineer on the Geoprocessing Team in ESRI’s Software Products group in Redlands.

The Python community announced the release of Python 2.5.4 in late December 2008. Python 2.5.4 is a bug-fix release.

While ArcGIS 9.3.1 shipped with Python 2.5.1, Python 2.5.4 has now been certified and is supported. In limited testing with ArcGIS 9.3, no issues were encountered. To take advantage of the fixes in Python 2.5.4, it can simply be installed on top of Python 2.5.1; it is not necessary to uninstall Python 2.5.1 first.*

To see the release notes and to download the release, go here.

* If you uninstalled Python 2.5.1 and you work with the Spatial Statistics geoprocessing tools, you must reinstall Numerical Python (NumPy) from the ArcGIS installation DVDs.

For more information, see FAQ: Can Python 2.5.4 be used with ArcGIS?

Posted in Analysis & Geoprocessing | Tagged , | 7 Comments

Spatial Statistics: What’s so HOT about Spatial Pattern Analysis?

This blog post was written by Lauren Scott, Geoprocessing/Spatial Statistics Product Engineer in the Software Products Group at ESRI in Redlands.

Hot Spot Analysis is just one of the pattern analysis tools in the Spatial Statistics Toolbox. You can use these tools to explore spatial patterns in order to answer questions like:

  • Where are crime rates unexpectedly high?
  • Are there regions in the country where people live longer?
  • Where do we find anomalous spending patterns?
  • Are there sharp boundaries between affluence and poverty?
  • Is the disease remaining geographically fixed or is it spreading?
  • Which features are most concentrated?
  • Does the spatial pattern of the virus mirror the spatial pattern of the population at risk?
  • Which site is most accessible?
  • Where is the population center?
  • Which species has the broadest territory?

To learn more about spatial pattern analysis, check out some of these resources:

- The ESRI Guide to GIS Analysis, Volume 2
- Understanding Spatial Statistics in ArcGIS 9, a free one-hour Web seminar
- The Spatial Statistics Toolbox online documentation
- View a five-minute video showing a hot-spot analysis of 911 emergency call data (click on “Using Spatial Statistics Tools”)
- Download a hot-spot analysis model from the Geoprocessing Resource Center.
- Extend Crime Analysis with ArcGIS Spatial Statistics Tools, Spatial Statistics Provide New Insights, or Spatial Patterns of Disease Inspire New Ideas on Possible Causes in ArcUser Online
- Spatial Pattern Analysis concepts are discussed in the ArcGIS 9.3 Web help and include Modeling Spatial Relations, What is a Z Score?  What is a P value?, and Spatial Weights.
- Technical workshop slides are available from the ESRI Public Health and Homeland Security Conferences.

Have fun!


Spatial Statistics: Free Virtual Campus Web Seminar on Regression Analysis Basics

This blog post was written by Lauren Scott, Geoprocessing/Spatial Statistics Product Engineer in the Software Products Group at ESRI in Redlands.

There are a number of good resources where you can learn about the new regression analysis tools in ArcGIS 9.3. The newest is a free one-hour Web seminar, Regression Analysis Basics in ArcGIS 9.3, available for download from the ESRI Virtual Campus.

Other Regression Analysis resources include:



2009 Developers Summit

The 2009 Developers Summit has come and gone. Thanks to all who attended! We enjoyed meeting with you. We’ve uploaded the geoprocessing presentations to the Model and Script tool gallery, and session videos have been posted to the Media Gallery.

Slides, scripts, and data from the two workshops on Designing and Building geoprocessing tools are available in the Model and Script tool gallery, or by clicking here.

Slides and other materials from the Building Geoprocessing Services session are available here.

Slides and scripts for the Python Scripting Advanced Techniques session are available here.


Lidar Solutions in ArcGIS_part4: Estimating Forest Density and Height

This blog post was written by Clayton Crawford, a Product Engineer in the Software Products Group’s 3D Team in Redlands.

This is the fourth in a series on Lidar Solutions in ArcGIS.

Estimating Forest Canopy Density and Height

Canopy density and height are used as variables in a number of applications. These include the estimation of biomass, forest extent and condition, and biodiversity. Canopy density, or canopy cover, is the ratio of vegetation to ground as seen from the air. Canopy height measures how far above the ground the top of the canopy is. Lidar can be used to determine both.

What follows are the steps to calculate canopy density and height from lidar points. First, you need lidar that’s been classified into ground hits (bare earth) versus non-ground hits. This type of classification is usually performed by your data provider. Second, you need to consider when the lidar was collected and the type of vegetation in the study area. If there are a lot of deciduous trees and the data collection was performed during leaf-off conditions, the density calculation is not going to work.

Loading points into the Geodatabase
To calculate canopy density, load the ground (bare earth) lidar points into one multipoint feature class and the above-ground points into another. Assuming your data are in LAS format, you do this with the LAS To Multipoint tool, specifying the proper class codes to filter on. Here are the LAS class codes as defined in the LAS 1.1 Standard:

Any LAS file made in the last few years should use these codes if the points have been classified. Unfortunately, there’s still some ambiguity in the standard. For example, we know class ‘2’ is ground, but class ‘8’ is ground as well. Class ‘8’, or model key, points are a special set of ground points used for contouring or other applications requiring a thinned set of ground points. Whether you have them depends on how the data was processed. If you don’t know, specify both classes; if it turns out there aren’t any model key points, it won’t hurt. Vegetation has a similar issue. Sometimes vendors place everything that’s above ground into class ‘1’ because they haven’t performed a more detailed classification. So, if you’re unsure of the specifics of your data’s classification, load non-ground points using classes 1, 3, 4, and 5; that’s a reasonable catch-all to get your vegetation points. Note: if buildings or other manmade non-ground features are in class ‘1’, you’ll get them too, and they’ll skew the results somewhat.
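The class codes and the catch-all filtering described above can be sketched in plain Python. This is illustrative only: the `split_by_class` helper is hypothetical bookkeeping, and in ArcGIS you would instead pass these codes to the LAS To Multipoint tool’s class code parameter. The code values themselves are from the ASPRS LAS 1.1 specification.

```python
# LAS 1.1 point classification codes (ASPRS standard)
LAS_CLASS = {
    0: "Created, never classified",
    1: "Unclassified",
    2: "Ground",
    3: "Low Vegetation",
    4: "Medium Vegetation",
    5: "High Vegetation",
    6: "Building",
    7: "Low Point (noise)",
    8: "Model Key-point",
    9: "Water",
}

GROUND_CODES = {2, 8}            # ground plus model key points
NON_GROUND_CODES = {1, 3, 4, 5}  # catch-all for vegetation returns

def split_by_class(points):
    """Split (x, y, z, class_code) tuples into ground and non-ground lists."""
    ground = [p for p in points if p[3] in GROUND_CODES]
    non_ground = [p for p in points if p[3] in NON_GROUND_CODES]
    return ground, non_ground

pts = [(0, 0, 10.2, 2), (1, 0, 14.8, 5), (2, 1, 10.1, 8), (3, 1, 12.0, 1)]
ground, veg = split_by_class(pts)
# ground holds the class 2 and 8 points; veg holds the class 5 and 1 points
```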

Calculating the density
The most effective way to determine canopy density is to divide the study area into many small, equal-sized units. Do this through rasterization: in each raster cell, compare the number of above-ground hits to the total number of hits. The trick is to figure out an appropriate cell size. It needs to be at least 4 times the average point spacing; you can go larger, but not smaller.

1. Use the Point To Raster tool on the above ground points with the COUNT option.


2. Convert any resulting NoData cells to 0 so that subsequent operations treat cells containing no points as a count of 0. This is accomplished using the Is Null tool followed by Con.


Next, use the Con tool.

3. Repeat steps 1 and 2 with the ground points.

4. Add the above ground and ground rasters to get a total count per cell using the Plus tool.


5. All the rasters we’ve made so far are long integers. We need one to be floating point in order to get floating-point output from the Divide tool that we’ll use in step 6. Do this by sending the output from Plus through the Float tool.


6. Now use the Divide tool between the above-ground count raster and the floating-point total count raster. This gives the ratio from 0.0 to 1.0, where 0.0 represents no canopy and 1.0 very dense canopy.
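The six steps above amount to simple raster algebra. As a sketch, here is the same arithmetic on small NumPy arrays standing in for the rasters; NaN plays the role of NoData, while the real workflow runs the Is Null, Con, Plus, Float, and Divide geoprocessing tools.

```python
import numpy as np

# Per-cell point counts from the two Point To Raster runs (step 1).
# np.nan stands in for NoData cells.
above = np.array([[3., np.nan], [5., 1.]])
ground = np.array([[np.nan, 4.], [2., 3.]])

# Steps 2-3: Is Null + Con -- treat NoData counts as 0.
above = np.where(np.isnan(above), 0.0, above)
ground = np.where(np.isnan(ground), 0.0, ground)

# Step 4: Plus -- total returns per cell.
total = above + ground

# Steps 5-6: Float + Divide -- the density ratio, guarding empty cells.
density = np.where(total > 0, above / np.maximum(total, 1e-9), 0.0)
# e.g. density[1, 0] is 5/7: five above-ground hits out of seven total
```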

The following image represents canopy density. The lightest areas have little to no vegetation. These are areas where a large percentage of lidar shots could ‘see’ the ground. The dark green areas, where lidar could not penetrate to ground as well, indicate denser vegetation canopy.

Calculating the height
To determine canopy height, subtract the bare earth surface (DEM) from the first return surface (DSM). To learn how to make these surfaces, see the second post in this series: Creating raster DEMs and DSMs from large lidar point collections.

Once you have your first return and bare earth rasters, the Minus tool gives you the difference, which, over forest, represents the canopy height.
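In raster algebra terms the operation is just a per-cell subtraction, which can be sketched with NumPy standing in for the Minus tool:

```python
import numpy as np

dsm = np.array([[112.5, 130.0], [108.0, 125.5]])  # first return surface
dem = np.array([[110.0, 109.5], [108.0, 110.0]])  # bare earth surface

canopy_height = dsm - dem  # what the Minus tool computes, cell by cell
# zero where first return == bare earth (open ground), larger under canopy
```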

The following image represents canopy height above ground, ranging from blue (little to no height) to orange (the tallest).

Lidar can be used to calculate the density and height of vegetation. This is useful for a variety of purposes including biomass and carbon estimates as well as forest management.

That concludes part four of Lidar Solutions in ArcGIS. Subscribe to this blog or check back in a month or so for a discussion on the creation of intensity images from lidar.


Tips and tricks – accessing feature shape in Calculate Field

With the Calculate Field tool, you can easily create expressions that use some property of a feature’s shape, such as length or area. You can also convert length and area to different units. The illustration below shows calculating the MILES field to the length (in miles) of each line feature.


Lidar Solutions in ArcGIS_part3: Data Area Delineation from Lidar Points

This blog post was written by Clayton Crawford, a Product Engineer in the Software Products Group’s 3D Team in Redlands.

This is the third blog in a series on Lidar Solutions in ArcGIS. Links to the first and second posts are below. 

Data Area Delineation from Lidar Points

It’s common for lidar or photogrammetric data for a survey to be delivered without a detailed data area boundary. Often, the xy extents of the survey are defined by a tile system that covers an area of interest and the data fills these tiles (Figure 1) or the data are simply guaranteed to cover some minimal extent and there is no explicit or absolute boundary other than what can be inferred (Figure 2). Either way, the area of coverage is usually not a cleanly filled rectangle.


The problem

If a surface is made without declaring the data area up front (that is, without including a clip polygon when defining a terrain dataset or TIN), then some areas that are actually voids around the perimeter get treated as data areas. Analytic results in these areas are unreliable because height estimates are based on samples that can be far away.

Take, for example, the data shown in Figure 3. The graphic on the left depicts a dense collection of lidar points shown in green. The gaps in the interior are water bodies (where lidar is typically omitted). The irregularly shaped data boundary is easy to see but unless an explicit extent is provided, in the form of a clip polygon, TIN and terrain related tools will fill in voids, greatly oversimplifying the actual data extent.

We know areas outside the data collection extent should be excluded from the surface. The problem is coming up with the polygon that provides an accurate representation of this extent. Hand digitizing is laborious. There’s an easier way.

The solution

What we need to do is synthesize a data boundary from the points that can be used to enforce a proper interpolation zone in the surface (Figure 4).

The key to the equation is the average point spacing. This is the primary variable to use when going after the data area. Surveys usually have explicit minimums on point spacing in order to provide control for interpolators. Areas that don’t meet density requirements are exceptions; they usually fall into one of the following categories: water bodies, obscured areas, and ‘holidays’ (send the latter back to the data provider for repair). The vast majority of the data will meet sample density specifications. Point spacing is usually reported in metadata. If you don’t know it, try the 3D Analyst Point File Information tool, or eyeball it using the Measure tool in ArcMap. To learn more about point spacing, use the link at the top.

Once you know the point spacing, follow these steps:

  1. Rasterize the points with Point To Raster.
  2. Assign one value to all data cells with Con.
  3. Fill small NoData areas with Expand.
  4. Reduce the overall extent of data cells with Shrink.
  5. Vectorize the raster with Raster To Polygon.
  6. Use a VBA script to remove small holes (interior rings) from polygon(s).

Note that a Spatial Analyst license is required to run a number of these tools.
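To make the Expand and Shrink steps concrete, here is a minimal NumPy sketch that treats the rasterized data cells as a boolean mask. It is a simplification of the real tools: each pass looks only at the four orthogonal neighbors, and neighbors off the edge of the grid are simply ignored.

```python
import numpy as np

def expand(mask):
    """One pass of Expand: a cell becomes data if it or any 4-neighbor is data."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def shrink(mask):
    """One pass of Shrink: a cell stays data only if all its 4-neighbors are data."""
    out = mask.copy()
    out[1:, :] &= mask[:-1, :]
    out[:-1, :] &= mask[1:, :]
    out[:, 1:] &= mask[:, :-1]
    out[:, :-1] &= mask[:, 1:]
    return out

# Steps 1-2: rasterized point counts turned into a binary data mask
mask = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [0, 1, 0, 1, 1, 0],   # small interior NoData hole at (2, 2)
    [0, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
], dtype=bool)

filled = expand(mask)    # step 3: the interior hole is closed
shrunk = shrink(filled)  # step 4: the grown boundary is pulled back in
```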

The remainder of this document goes into more detail on each of these steps.

Point To Raster

Rasterizing the points aggregates the area they cover and provides a good data structure to work with in subsequent steps. You just need to tell the tool which cell assignment type to use and the output cellsize. Use COUNT as the assignment type. As for the cellsize, specify a value that’s several times larger than the average point spacing; otherwise, you’ll get a lot of noise because the points aren’t evenly spaced. From the standpoint of processing efficiency and noise reduction, the larger the cellsize the better, but there’s a tradeoff with the tightness of fit in the end result. A good place to start is 4 times the average point spacing (Figure 5).


Con

The use of the Con tool in this workflow is simply to turn all data cells into cells with one value. This value defines a raster ‘zone’ that will be expanded in the next step. All that’s needed is to take the output from Point To Raster and provide a constant value for a positive expression. All non-zero cells will be considered positive and assigned the constant. Since COUNT was used as the cell assignment method during rasterization, any cell with a point in it must have a value greater than zero (Figure 6).


Expand

Unless you used a very coarse cellsize relative to your average point spacing, many NoData cells likely remain. Most of these can be eliminated using Expand (Figure 7). You want to remove most of them so the polygon produced during vectorization in a later step isn’t full of a million holes; that would be unnecessarily expensive.


 Expand will grow the zone of interest outward. In our case the zone is all the data cells, coded with a value of 1. This effectively eliminates small gaps found in the interior.

It’s okay if some isolated NoData areas remain in the output. You just don’t want thousands upon thousands. The remainder will be handled in the last step.


Shrink

While Expand eliminates isolated NoData cells, it also grows the data area outward, so the boundary needs to be brought back in a little. Clip polygons need to be smaller than the actual point extent so that, when the terrain or TIN estimates z values along the polygon boundary, points can be found on both sides. This is needed to get good z estimates. To reduce the raster’s data boundary, use Shrink (Figure 9).

At this point you should have a relatively clean raster with the extent of its data cells slightly within the lidar point extent.

Raster To Polygon

The Raster To Polygon tool converts the raster to a polygon feature class. Make sure the Simplify polygons option is checked. If it’s not, the output will be stair-stepped, rather than smooth, and will contain more vertices than necessary (Figure 10).

At this point, the process is almost complete. You need to review the output for correctness. Chances are there’s one more thing that needs to be done: the removal of remaining holes inside your clip polygon.

Remove Interior Rings from Polygons VBA Script

If holes are present in the resulting polygon, grab the RemoveInteriorRings VB script from ArcScripts online. It will edit out the internal rings, leaving just the exterior boundaries. To run the script, follow these steps:

  1. Have your polygon feature class output from Raster To Polygon as the first layer in a map document.
  2. Go into VBA inside ArcMap via Tools>Macros>Visual Basic Editor.
  3. From the Project window right click on the project, select Import File, and bring in the VBA module you downloaded from ArcScripts.
  4. Run the RemoveInteriorRings macro.
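Conceptually, the script’s job is simple: a polygon is an exterior ring plus zero or more interior rings (holes), and removing the holes means keeping only the exterior ring. Here is a language-neutral sketch of that idea in Python; the actual VBA script performs this edit against the feature class through ArcObjects, and the ring-list representation below is purely illustrative.

```python
# A polygon modeled as a list of rings: ring 0 is the exterior boundary,
# any further rings are interior holes.
def remove_interior_rings(polygons):
    """Return the polygons with all interior rings (holes) dropped."""
    return [[rings[0]] for rings in polygons]

clip = [
    [  # one polygon: exterior ring plus two leftover holes
        [(0, 0), (10, 0), (10, 10), (0, 10)],
        [(2, 2), (3, 2), (3, 3)],
        [(6, 6), (7, 6), (7, 7)],
    ],
]
cleaned = remove_interior_rings(clip)
# cleaned[0] now holds only the exterior ring
```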

You should now have a clip polygon that can be used when defining a terrain dataset or TIN. It should conform to the data extent of the points but fall slightly inside them (Figure 11).

That concludes this part of Lidar Solutions in ArcGIS. Subscribe to this blog or check back in a month or so for a discussion on the estimation of forest canopy density and height from lidar.

The first post can be found here: 

The second post can be found here: 



Lidar Solutions in ArcGIS_part2: Creating raster DEMs and DSMs from large lidar point collections

This blog is written by Clayton Crawford, a Product Engineer in the Software Products Group’s 3D Team in Redlands.

This is the second blog in a series on Lidar Solutions in ArcGIS. The first can be found here: Lidar Solutions in ArcGIS_part1: Assessing Lidar Coverage and Sample Density.


Raster, or gridded, elevation models are one of the most common GIS data types. They can be used in many ways for analysis and are easily shared. Lidar provides us with the opportunity to make high quality elevation models of two distinct flavors: first return and ground. A first return surface includes tree canopy and buildings and is often referred to as a Digital Surface Model (DSM). The ground, or bare earth, contains only the topography and is frequently called a Digital Elevation Model (DEM).



ArcGIS provides tools to take large lidar point collections, and optionally other surface related information like photogrammetric breaklines, and use them to produce high quality raster surfaces.

Coming up with a plan

Before continuing, some basic factors need to be evaluated:

  • Extent of lidar coverage
  • Number of lidar points and point density
  • Desired output raster resolution
  • Extent of output raster(s)
  • Format of output raster(s)

Consideration of these factors will help determine whether you’ll be producing one raster or a collection. Part of this process requires that you figure out how many rows and columns you’re willing to have in one raster. This depends on what you intend to do with the raster in terms of analysis, display, and potential sharing or distribution of the data. The desire to work off one dataset for analysis can conflict with practical constraints on dataset size. Another thing to consider is the amount of lidar you have. Trying to process 10 billion lidar points as one dataset, while possible, is likely to prove unwieldy. It’s pretty much a given you’ll be making multiple rasters from that amount of lidar, so consider splitting up the lidar processing as well. Not only does this keep individual datasets at reasonable sizes, it keeps processing times shorter. The longer a process takes to execute, the more likely something will go wrong (e.g., a power outage). We all want to reduce risk, right?

If you’ve determined you need to split up your data, the next question is how. Will it be based on a regular grid system, political boundaries, or division based on an anticipated application? Since lidar collections tend to have multiple uses, splitting them up based on a regular grid system or political divisions like county boundaries makes the most sense. An engineer can mosaic the different pieces he or she needs for an individual project. If the intended use is weighted heavily for one type of application, such as hydrology, then use divisions logical for the application. For example, in the case of hydrology, watershed boundaries are a good candidate.

ArcGIS supports many raster formats, so you have a choice of what format to write to. This decision is best based on the intended use of the product. If it’s to be shared with the general public you might think about distributing in either TIFF or JPEG format. For analysis on the ArcGIS platform, the ESRI GRID format is best. This is the native format for many functions so to improve I/O efficiency, and therefore processing time, using GRID is the way to go.

The first step in getting from lidar points to raster is loading the points into a GDB. If you haven’t already done so, review the first part of this series. Use the link at the beginning of this blog to get to it.

Using the Point to Raster tool

If your only source of data is the lidar, you can use the Point to Raster tool to produce a raster elevation model. While this does not produce the highest quality result possible, lidar tends to be so dense that for many applications the accuracy is good enough, and the convenience and speed of this tool make it worthwhile.

  • If producing a bare earth surface, or DEM, use just the ground lidar points.
  • Set the Value field parameter on the tool to Shape to use the z values from the multipoint vertices.
  • Set the Cell assignment type to either MIN or MEAN. MIN will bias output heights to local lows while MEAN is more general purpose.
  • To produce a first return surface, or DSM, use the 1st return lidar points with the MAX option of the tool since you want to bias the output to local highs.

The Point to Raster tool produces gridded elevation models from lidar point sets.
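As a rough sketch of what the cell assignment types do, here is the binning logic in plain Python. This is illustrative only: the hypothetical `rasterize` helper is not the ArcGIS tool, it just shows how MIN, MEAN, and MAX reduce the z values that fall in each cell.

```python
from collections import defaultdict

def rasterize(points, cellsize, assignment):
    """Bin (x, y, z) points into cells and reduce each cell's z values.

    assignment: a function such as min, max, or a mean -- standing in for
    the Point to Raster tool's MIN, MAX, and MEAN cell assignment types.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cellsize), int(y // cellsize))].append(z)
    return {cell: assignment(zs) for cell, zs in cells.items()}

def mean(zs):
    return sum(zs) / len(zs)

pts = [(0.2, 0.3, 101.0), (0.8, 0.6, 100.0), (1.5, 0.4, 103.0)]
dem = rasterize(pts, 1.0, min)   # DEM: bias output heights to local lows
dsm = rasterize(pts, 1.0, max)   # DSM: bias output heights to local highs
# cell (0, 0) received two points, so dem and dsm differ there
```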

While Point to Raster offers the easiest and fastest way to produce a raster from lidar, it has a significant drawback: you can end up with many NoData cells, since it only returns values for cells that have one or more points in them. The problem is exacerbated when only using ground points to make a DEM, because gaps occur where vegetation and buildings obscure the ground. To reduce the salt-and-pepper effect of NoData vs. data cells, you can increase the output cellsize relative to the average point spacing. You can also reduce the number of NoData cells after running Point to Raster by using this expression in the Spatial Analyst calculator on the output (in this example the output from Point to Raster is called ‘point2ras’):


You can run the expression multiple times to fill in larger NoData areas, but I wouldn’t recommend running it more than a couple of times. It’s better to just accept larger void areas as a consequence of using this approach.
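The expression itself appears in the image above; conceptually it is a conditional focal mean: where a cell is NoData, fill it with the mean of its valid neighbors, otherwise leave the cell alone. Here is a NumPy sketch of that logic, an interpretation of the idea rather than the tool’s exact behavior:

```python
import numpy as np

def fill_nodata_once(grid):
    """One conditional focal-mean pass: each NoData (NaN) cell is replaced by
    the mean of the valid cells in its 3x3 window; data cells are untouched."""
    out = grid.copy()
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            if np.isnan(grid[i, j]):
                win = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                if not np.all(np.isnan(win)):
                    out[i, j] = np.nanmean(win)
    return out

z = np.array([[100., np.nan], [102., 104.]])
filled = fill_nodata_once(z)
# the NaN cell becomes the mean of its three valid neighbors (102.0);
# the data cells pass through unchanged
```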



Using the Terrain Dataset

If you have photogrammetric breaklines to go along with your lidar, or need higher quality results than can be produced with the Point to Raster tool, use the terrain dataset. For an overview of the terrain dataset and related help topics check out the online help here.


On the left is a surface made without breaklines along the river banks. The image on the right has breakline enforcement. Breaklines are important for maintaining the definition of water related features in the elevation model.

Breaklines are used to capture linear discontinuities in the surface. The most common types are edge of pavement, lake shorelines, single line drains for small rivers and double line drains for large rivers. Sometimes breaklines are also collected to help define and sculpt the surface without necessarily representing discontinuities. Examples of these include contour-like form lines and the crests of rounded ridges.

Breaklines, while frequently used in bare earth models, tend to be detrimental when used with first return surfaces because they can be in conflict with the above ground points. For example, breaklines capturing road edge of pavement can be coincident in XY but different in Z to points in tree canopy overhanging the road. Because of this, consider excluding breaklines from your first return surface or at least those where you know there’s potential conflict.

The most efficient means of organizing breaklines for use in a terrain dataset (see the table below) is to separate them into different feature classes based on Surface Feature Type (SFType). SFType controls how the features are enforced in the model and how the natural neighbor interpolator, used during rasterization, interprets the surface as it crosses over these features. A distinct break in slope will occur across ‘hard’ features but not across ‘soft’ features.

Common input measurement types and recommended feature class storage types and SFType settings for terrain dataset definition.

For the sake of terrain performance, it’s best to place all hardlines together in one feature class. That might not be possible, for example, if you need to keep road and water features separate. That’s OK; just keep in mind that the fewer feature classes used to define a terrain, the better.

The ‘Replace’ SFType deserves special mention. This type is used to force everything inside a polygon to a constant height. It’s used mostly for lakes when there’s inadvertently other data inside them, such as lidar points whose heights are not exactly the same as the shoreline, preventing the water bodies from being flat. Use of the Replace SFType incurs more processing cost than normal hard or soft lines, so it’s best avoided. Ideally there ought not be lidar samples in your water bodies (consider adding this as a stipulation in the contract with your data provider), but if there are, you can either use the Replace SFType to handle them or get rid of the offending points before building your terrain using the Erase geoprocessing tool.

If you’ll be producing both bare earth and first return surfaces via terrain datasets, load the lidar points into two multipoint feature classes: one for the ground points and one for the above-ground points. Your bare earth terrain is defined with a reference to just the ground points. Your first return terrain references the same ground point feature class plus the above-ground points. Yes, two different terrains can reference the same feature class.

Starting with ArcGIS 9.3, terrain datasets can be pyramided using one of two point thinning filters: z-tolerance and windowsize. For DEM production you can use either. If you intend to rasterize from the full resolution point set, then use the windowsize filter for terrain construction because it’s significantly faster. If you’re willing to use thinned data for analysis, which is reasonable if the lidar is oversampled for your needs, use the z-tolerance filter. While more time consuming, it’s most appropriate because it provides an estimate of vertical accuracy of the thinned representation. For DSM production use the windowsize filter with the MAX option.

Use the Terrain To Raster tool to produce your rasterized elevation model. This provides options for interpolation, output cellsize, and which pyramid level to use from the terrain.


The Terrain to Raster tool produces gridded elevation models from terrain datasets.

For interpolation, the natural neighbors option is your best bet. While not as fast as linear interpolation, it generally produces better results, both in terms of aesthetics and accuracy. Set the output cellsize relative to the lidar point sample density; you won’t gain anything by using a cellsize that’s substantially smaller than the average point spacing. Also, make sure to set the analysis extent, through the geoprocessing environment, when extracting subsets. A snap raster can also be useful for aligning raster outputs.


Using either the Point To Raster or Terrain To Raster geoprocessing tools, you can process hundreds of millions, even billions, of lidar points into high-resolution gridded DEMs and DSMs. These can then be used with the large collection of raster analysis tools available in ArcGIS. They’re also great for making maps (see graphic below) and, due to their simple data structure, easy to share.

Color hillshade for a DSM of Jasper County, South Carolina. Made from a terrain dataset built on top of 1.7 billion lidar points for the county.

That’s it for the second part of this series on Lidar Solutions in ArcGIS. Check back for the next part: Data Area Delineation from Lidar Points.


Tips and tricks – Error handling in Python script tools

I recently had a project that involved creating lots of Python script tools. Since other people were going to use these tools, I wanted my error handling to be robust and informative. This post covers some tips and tricks I discovered during that project.


Lidar Solutions in ArcGIS_part1: Assessing Lidar Coverage and Sample Density

This blog post was written by Clayton Crawford, a Product Engineer in the Software Products Group’s 3D Team in Redlands.

This post is the first in a series called “Lidar Solutions in ArcGIS”. The series will cover Lidar processing tasks and workflows, show you how to manage these vast point collections, and outline approaches for mining information from them.

Let me state an important point up front: this series is about Lidar point processing. If you have Lidar-derived raster data, it won’t be of direct use, but if you need to learn how to make those rasters, read on. Also important to note: the type of Lidar discussed here is collected from a plane or helicopter with a laser scanner pointed downward. With this type of Lidar you can make bare earth surfaces for topographic mapping and 1st return surfaces that include vegetation and buildings. It’s not about the type of Lidar where data is collected at side-on angles.

Note that some of the tasks covered in the series require a 3D Analyst extension license.

Here are the topics I plan to cover:


Lidar Solutions in ArcGIS_part1: Assessing Lidar Coverage and Sample Density

One basic QA/QC process is to ensure the Lidar points delivered by your data provider have the coverage and density expected. You want to catch problems early and have them resolved before continuing. Two geoprocessing tools are useful in this regard: Point File Information, found in the 3D Analyst toolbox, and Point To Raster, located in core Conversion Tools.

Point File Information
The Point File Information tool reports basic statistics about one or more point data files on disk. The tool’s primary purpose is to help you review and summarize the data before loading it into your geodatabase. LAS (the industry-standard format for Lidar data) and ASCII format files are supported as input. Since Lidar projects often utilize collections of data files, sometimes in the hundreds or even thousands, the tool lets you specify folder names in addition to individual files. When given a folder, it reads all files inside it that have the suffix you specify.


For each input point file it outputs one polygon with accompanying attribution to a target feature class. The polygon graphically depicts the xy extent, or bounding box, of the data in the file. Attributes include file name, point count, z-min, z-max, and point spacing.


The point spacing reported by Point File Information is not exact and deserves some discussion. For the sake of performance, it uses a rough estimate that simply compares the area of the file’s bounding box with the point count. It’s most accurate when the rectangular extent of the file being examined is filled with data. Files that are only partially occupied with data, such as those with significant numbers of points excluded over large water bodies or on the perimeter of a study area, will not have accurate estimates. Therefore, the reported point spacing is more meaningful as a summary when looking at trends across collections of files. One useful thing to do with the output feature class is to display it in ArcMap, open its attribute table, and sort the point spacing field in ascending order. You can also symbolize on the point spacing field using a graduated color ramp.
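That rough estimate is easy to reproduce: divide the bounding-box area by the point count and take the square root. The sketch below shows the general idea, not necessarily the tool’s exact formula:

```python
import math

def estimate_point_spacing(xmin, ymin, xmax, ymax, point_count):
    """Rough average point spacing: assume points evenly fill the bounding
    box, so each point occupies area/count and the spacing between points
    is the square root of that per-point area."""
    area = (xmax - xmin) * (ymax - ymin)
    return math.sqrt(area / point_count)

# A 1000 m x 1000 m tile with one million points -> ~1 m spacing
spacing = estimate_point_spacing(0, 0, 1000, 1000, 1000000)
```

This also makes the caveat above concrete: if half the tile is open water with no points, the same formula overestimates the spacing in the populated half.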


Point File Information works quickly on LAS files because it only needs to scan their headers to obtain the information it’s looking for. It takes significantly longer with ASCII files because the tool has to read all the data in them.
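To see why header scanning is so fast, here is a minimal sketch that pulls the point count and XY extent straight from the fixed byte offsets of a LAS 1.2 public header (point count at offset 107, the max/min X and Y doubles starting at offset 179), without touching any point records. It builds a tiny synthetic header rather than opening a real file, and it skips details a production reader would handle, such as version differences and variable-length records:

```python
import struct

def make_las12_header(point_count, xmin, xmax, ymin, ymax):
    """Build a minimal 227-byte LAS 1.2 public header (synthetic, demo only)."""
    hdr = bytearray(227)
    hdr[0:4] = b"LASF"                             # file signature
    hdr[24] = 1; hdr[25] = 2                       # version 1.2
    struct.pack_into("<H", hdr, 94, 227)           # header size
    struct.pack_into("<I", hdr, 107, point_count)  # number of point records
    # max X, min X, max Y, min Y (little-endian doubles at offset 179)
    struct.pack_into("<dddd", hdr, 179, xmax, xmin, ymax, ymin)
    return bytes(hdr)

def summarize_las_header(data):
    """Read count and bounding box from the header alone."""
    assert data[0:4] == b"LASF", "not a LAS file"
    (count,) = struct.unpack_from("<I", data, 107)
    xmax, xmin, ymax, ymin = struct.unpack_from("<dddd", data, 179)
    return count, (xmin, ymin, xmax, ymax)

hdr = make_las12_header(1000000, 0.0, 1000.0, 0.0, 1000.0)
print(summarize_las_header(hdr))  # (1000000, (0.0, 0.0, 1000.0, 1000.0))
```

An ASCII file offers no such shortcut: every line has to be parsed just to learn how many points there are and where they fall.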

Assuming everything checks out OK, the next step is to load your Lidar points into a multipoint feature class with the LAS To Multipoint or ASCII 3D To Feature Class tool. Put this feature class in a feature dataset if you intend to build a terrain dataset from the points. While you have the choice between LAS and ASCII format files, LAS is generally the better way to go: LAS files contain more information and, being binary, can be read by the importer more efficiently.

Once the points are loaded into a multipoint feature class you can use the Point to Raster tool to get a more in-depth view of the point distribution.

Point to Raster
The Point to Raster tool creates rasters from points, and it also supports multipoints. It’s a generic tool with many options and uses. For evaluating Lidar point density, the tool’s COUNT option is the one to use. It uses the number of points falling in a raster cell as the cell value. Being able to look at this graphically over the extent of the project area is revealing.

There are a couple of parameters on the Point to Raster tool whose values for this exercise aren’t obvious. First is the Value Field parameter. It doesn’t matter what this is set to, because the Value Field is ignored when the Cell Assignment type is set to COUNT. Then there’s the cellsize. You might think the average point spacing is a good choice, but this typically results in too many empty, or NoData, cells because Lidar points just aren’t that evenly spaced. Also, the output raster could end up being unnecessarily large. Instead, it’s better to go with a cellsize that’s several times larger than the average point spacing, but small enough to identify gaps or voids that warrant further investigation. A reasonable size is four times the point spacing. As an example, if your data is sampled at 1 meter and you set the cellsize to 4 meters, you can expect, on average, to get 16 points in a cell.
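The COUNT logic and the cellsize rule of thumb can be sketched in plain Python. This bins point coordinates into square cells and tallies them; it illustrates the idea behind the COUNT cell assignment, not the tool itself. With points on a regular 1 m grid and a 4 m cell, each cell collects 4 x 4 = 16 points:

```python
from collections import Counter

def count_points_per_cell(points, cellsize):
    """Bin (x, y) points into square cells of the given size and count
    the points in each. Returns {(col, row): count}."""
    counts = Counter()
    for x, y in points:
        counts[(int(x // cellsize), int(y // cellsize))] += 1
    return counts

# Synthetic points on a regular 1 m grid covering a 16 x 16 m tile.
points = [(x + 0.5, y + 0.5) for x in range(16) for y in range(16)]

# With a 4 m cell, every cell should hold 4 * 4 = 16 points.
counts = count_points_per_cell(points, 4.0)
print(min(counts.values()), max(counts.values()))  # 16 16
```

Real Lidar points are far less regular than this synthetic grid, which is exactly why a cell only one spacing wide would leave many cells empty while a 4x cell reliably catches several points.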

You can also evaluate the density for different types of points. While most of the time you’ll probably just check the density for all returns, it can be useful to look at those that fall in a certain class, like ‘ground’. For example, this can give you an idea of how good your ground penetration is in vegetated areas. The Point to Raster tool can’t make the distinction between point types, though. Instead, you control which points get used by how you create the multipoint feature class: the LAS To Multipoint tool provides options for loading points by class code and return number.
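Conceptually, loading only ground points is just a filter on the class code (in the ASPRS LAS specification, class 2 is ground, 5 is high vegetation, 6 is building). A plain-Python illustration with made-up records might look like this:

```python
# Each record: (x, y, z, class_code); class 2 = ground per the LAS spec.
records = [
    (10.2, 4.1, 102.5, 2),   # ground
    (10.9, 4.3, 117.8, 5),   # high vegetation
    (11.4, 4.0, 101.9, 2),   # ground
    (11.8, 4.6, 109.3, 6),   # building
]

ground = [(x, y, z) for x, y, z, code in records if code == 2]
print(len(ground))  # 2
```

Running the density check on a ground-only feature class, and comparing it against the all-returns density, shows where the canopy is blocking penetration.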

Once your raster has been created, have a look at it in ArcMap. Use a color ramp renderer so it’s easy to distinguish cells with high counts from those with low counts. You can also set the NoData color to something that stands out. Look for variance in density and for data voids. Have your vendor explain anything that doesn’t look right.

Hopefully, you’ll find your data meets specifications and lacks surprises. It’s worth the effort to check.

That’s it for this installment of Lidar Solutions in ArcGIS. Subscribe to this blog or check back in a couple weeks for a discussion on the creation of raster DEMs/DSMs from Lidar.


Posted in Analysis & Geoprocessing | Tagged , , | 3 Comments