Tag: ArcGIS 10

Canvas Map Color Styles

By Jaynya Richards, Esri Research Cartographer


In this blog entry, we announce a new set of color styles that you can apply to your operational overlays or to additional reference layers when using the Esri Light Gray Canvas Basemap. We created these styles to provide you with additional resources designed specifically with the Light Gray Canvas Basemap in mind.

Continue reading

Posted in Mapping

Cleaning up line data with geoprocessing

Editing and data compilation are less commonly thought of as operations that can be automated through geoprocessing. However, ArcGIS 10 introduced the Editing toolbox, which contains a set of geoprocessing tools to perform bulk edits. These tools combined with others in the geoprocessing environment can automate data import and maintenance work. Automated data compilation tools are especially useful for importing data into a geodatabase but can also be employed on a regular schedule to perform routine quality assurance (QA) checks. In this entry, I will discuss the use of geoprocessing to clean CAD line data as part of the import process.

Importing data with geoprocessing
Lines that are created without the use of spatial integrity measures, such as snapping or topology, almost always contain some inconsistencies. Such errors are especially likely in data that originated in formats such as CAD, shapefile, or KML. Fortunately, many common topological issues can be resolved in an automated manner by using ModelBuilder to link together tools that import data into a geodatabase and perform standard data cleanup techniques.

I have a CAD file for a new subdivision that needs to be integrated with my existing GIS parcel data. The GIS data must be kept to stringent accuracy standards, so I need to fix any issues where lines do not connect to each other, overlap, or are duplicated. Rather than risk reducing the quality of the main parcels geodatabase, I can create a local temporary geodatabase where I can preprocess the CAD lines before introducing the features into the production geodatabase. Although the CAD file contains buildings, roads, text, registration tic marks, and other features, I plan to use only the parcel lot lines.

I have built a model that imports the CAD lines into a temporary scratch workspace, cleans and processes the lines, and then copies the corrected lines into an output file geodatabase. When importing CAD data into a geodatabase, I can choose from several available tools, including CAD to Geodatabase or Feature Class to Feature Class. The CAD to Geodatabase tool converts all the geometries in a drawing to individual feature classes, such as a line feature class for the parcel lines, an annotation feature class for CAD text, and so on. In my case, I am using the Feature Class to Feature Class tool because I need only the lot line geometry from the CAD file. This tool also makes the model reusable because it can import many different formats, not just CAD. In addition, the Feature Class to Feature Class tool accepts an SQL expression, so I can further refine the import to include only the CAD features that satisfy an attribute query for lot lines (in this case, “Layer” = ‘LOT-L’).
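For those who prefer scripting this step outside ModelBuilder, here is a minimal Python (arcpy) sketch of the import. The CAD drawing path and scratch geodatabase name are hypothetical; the where clause matches the lot-line query described above.

import arcpy

# Hypothetical paths; substitute your own CAD drawing and scratch workspace.
cad_lines = r"C:\Data\Subdivision.dwg\Polyline"
scratch_gdb = r"C:\Data\Scratch.gdb"

# Import only the lot lines from the CAD drawing by applying an SQL expression
# to the CAD Layer field (the scripted equivalent of the Feature Class To
# Feature Class tool in the model).
arcpy.FeatureClassToFeatureClass_conversion(
    cad_lines, scratch_gdb, "LotLines", "\"Layer\" = 'LOT-L'")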

Performing automated quality assurance on lines
Once the CAD parcel lot lines are imported into a geodatabase feature class, I can begin running tools to perform automated QA processes. Many of these tools are found in the Editing toolbox, although tools from other toolboxes can also be used for data compilation QA tasks. For example, I can start by using the Integrate tool in the Data Management toolbox to address minor inconsistencies in the line work. Integrate makes features coincident if they fall within the specified x,y tolerance. By using a small tolerance on Integrate (and other similar tools), I can avoid editing the data beyond the level of cleanup I intended. In addition, since I am running the tools on a copy of the data outside my production database, I can run them repeatedly to refine tolerance values and fix more issues in an automated manner. The intermediate data created as the model runs is maintained and can be reviewed in the scratch geodatabase.

After the dataset is integrated, I check for duplicated lines with the Delete Identical tool (Data Management toolbox). The dashed lines connecting to this tool represent preconditions, which are used to control the order of operations in a model. For example, the Integrated Lines output is a precondition to the Delete Identical tool. This way, the Delete Identical tool will not execute until the lines have been integrated.
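In a script, the precondition is simply the order of the calls. A minimal sketch of the same two steps, with a hypothetical feature class path and an assumed tolerance value:

import arcpy

lines = r"C:\Data\Scratch.gdb\LotLines"

# Make nearly coincident line work coincident within a small x,y tolerance;
# keeping the tolerance small avoids editing beyond the intended cleanup.
arcpy.Integrate_management(lines, "0.1 Feet")

# Remove duplicated lines; comparing on the Shape field finds features with
# identical geometry.
arcpy.DeleteIdentical_management(lines, ["Shape"])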

The next part of the model identifies lines that are dangles. With the Feature Vertices to Points tool in the Data Management toolbox, I create a new point feature class containing the line endpoints that are not connected to any other lines. I can then use Select Layer By Location to identify the lines that intersect these dangling endpoints. The resulting selection represents lines with dangles.
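A scripted version of this dangle check might look like the following sketch (paths are hypothetical). The DANGLE option writes a point only where a line endpoint does not touch any other line.

import arcpy

lines = r"C:\Data\Scratch.gdb\LotLines"
dangles = r"C:\Data\Scratch.gdb\Dangles"

# Create points at line endpoints that are not connected to any other line.
arcpy.FeatureVerticesToPoints_management(lines, dangles, "DANGLE")

# Select the lines that intersect those dangling endpoints.
arcpy.MakeFeatureLayer_management(lines, "lines_lyr")
arcpy.SelectLayerByLocation_management("lines_lyr", "INTERSECT", dangles)
count = int(arcpy.GetCount_management("lines_lyr").getOutput(0))
print("{0} lines have dangles".format(count))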

Many of these dangle errors can be fixed by running the Editing toolbox’s Trim Line, Extend Line, and Snap tools. Effective use of the Editing toolbox geoprocessing tools can improve productivity because the tools apply edits in bulk, such as to all features or all selected features; in most cases, the equivalent interactive editing command applies to only one feature at a time. Because I exposed the tolerances as variables and model parameters, I can easily run the model with different values: the tolerance settings appear as input boxes on the tool’s dialog box. For example, I am willing to extend or trim the lines from this CAD dataset initially up to a maximum length of five feet. After that pass, I want to inspect the lines visually to see how many issues remain, to ensure that I will not be making incorrect edits if I increase the tolerance value. I can change the tolerance as needed depending on the accuracy of the lines I am importing.
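The corresponding script calls look like the sketch below, with the five-foot value held in a variable so the pass is easy to rerun with a different tolerance. The snap rule shown (snapping line ends to other line ends) is an assumption, not necessarily the model's exact setting.

import arcpy

lines = r"C:\Data\Scratch.gdb\LotLines"
tolerance = "5 Feet"  # maximum length to trim or extend on the first pass

# Remove dangling line ends shorter than the tolerance.
arcpy.TrimLine_edit(lines, tolerance, "KEEP_SHORT")

# Extend dangling line ends up to the tolerance until they reach another line.
arcpy.ExtendLine_edit(lines, tolerance, "EXTENSION")

# Snap any remaining line ends to nearby line ends within the tolerance.
arcpy.Snap_edit(lines, [[lines, "END", tolerance]])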

In addition, since my organization’s spatial integrity rules indicate the parcel lines should be split and not intersect themselves, I can use a sequence of spatial and attribute queries to find the locations where lines have intersecting endpoints. Lines are often split so that each length can be attributed separately.

Once these processes have run, the lines are output into a feature dataset in a geodatabase and are much cleaner topologically. After the model completes, I can run the Feature Vertices to Points tool again on the cleaned output to see the remaining dangles and compare the current number of dangling endpoints (the yellow circles in the graphic) to the number in the original CAD lines (the red circles). While there may be a few remaining issues, there are fewer than before running the model. At this point, I can build a geodatabase topology to check for and repair any other errors. When I am satisfied that the lines meet the standards for our spatial data, I can import them into the production database.
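The before-and-after comparison can also be scripted with the Get Count tool; here is a short sketch with hypothetical feature class names:

import arcpy

def dangle_count(lines, out_points):
    """Write dangle points for the input lines and return how many there are."""
    arcpy.FeatureVerticesToPoints_management(lines, out_points, "DANGLE")
    return int(arcpy.GetCount_management(out_points).getOutput(0))

before = dangle_count(r"C:\Data\Scratch.gdb\LotLines_raw",
                      r"C:\Data\Scratch.gdb\Dangles_before")
after = dangle_count(r"C:\Data\Scratch.gdb\LotLines",
                     r"C:\Data\Scratch.gdb\Dangles_after")
print("Dangles before: {0}, after: {1}".format(before, after))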

For more information:
The sample tools and data can be downloaded from the Editing Labs group on ArcGIS.com. An ArcInfo license is required to run the tools.

Posted in Analysis & Geoprocessing, Editing

Displaying your geoprocessing raster output in your web application (using Silverlight)

When you are developing a geoprocessing service that generates a raster dataset as output (for example, a viewshed or hillshade analysis), you will find that the output raster dataset cannot be rendered directly in your web application.

In order to display the geoprocessing raster output in the web application, you need to follow these steps:

Continue reading

Posted in Analysis & Geoprocessing, Developer, Imagery, Services, Web

Generating overviews only where needed

Overviews generated for mosaic datasets take additional storage space. Many users build overviews for their entire mosaic dataset, or use lower-resolution images in place of overviews, but sometimes this is not suitable. That’s why there are many options you can define when creating your overviews.

In this example, I’m going to show you how you can use a feature class to limit the extent of the overviews.

I have 17 IKONOS scenes spread around the Greater Toronto metropolitan area in Canada (shown in green). I also have a polygon layer of some neighborhoods (‘neighbourhoods’ for the Canadians) in this area (shown in gray).
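The post walks through the steps in detail, but as a rough sketch of the idea: the Define Overviews geoprocessing tool accepts a template dataset whose footprint limits where overview items are defined, and Build Overviews then generates only those items. The paths below are hypothetical, and the keyword names and option strings are assumptions worth verifying against the Define Overviews and Build Overviews documentation.

import arcpy

mosaic = r"C:\Data\Imagery.gdb\Toronto"                 # hypothetical mosaic dataset
neighborhoods = r"C:\Data\Imagery.gdb\Neighbourhoods"   # polygons limiting the overview extent

# Define overview items only where the template polygons are, then generate
# just those items (do not let Build Overviews define tiles elsewhere).
arcpy.DefineOverviews_management(mosaic, in_template_dataset=neighborhoods)
arcpy.BuildOverviews_management(mosaic, define_missing_tiles="NO_DEFINE_MISSING_TILES")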

Continue reading

Posted in Imagery, Services, Web

Overviews and pyramids: Part 2 of 2, Doesn’t my mosaic dataset use both?

This is part 2 in a 2-part blog. Part 1 provided an overview of pyramids and overviews. Part 2 provides you with some guidance on generating them when creating a mosaic dataset.

When you are viewing a mosaic dataset and you’re zoomed in to a small number of images, the pyramids are typically used to generate the image you’re seeing. When you zoom out and your view contains many images, overviews are typically being displayed.

How do you know what you’re seeing?

If you want to know what is being displayed, open the attribute table. Then right-click the Image layer and click Selection > Select Visible Rasters. This will select the corresponding items in the attribute table (and will highlight their footprints, if displayed).

General design

If the raster data contained within the mosaic dataset has pyramids, then generally fewer overviews are needed. In the diagram below there are three source images, each with a pixel size of 0.5. The pyramids are built for each source image—in this case only two levels. The pyramids are resampled by a factor of 2 (this is the default). The overviews are then generated where the pyramids end, but using a resampling factor of 3 (this is the default). The overviews are a mosaic of the images and are limited by a tile size. In this example, there are only two overviews at the first level, even though there are three images. Eventually, there is only one overview at the top level.

 

You can have various combinations. For example, the source images above may not have pyramids. In that case, the overviews will start with a pixel size of 1.5 (1:5670), which is a factor of 3, and they will continue until they reach an overview size appropriate for displaying the entire image quickly.

If pyramids have been generated, then fewer overviews will be needed and it will take less time to generate the overviews.
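To make the arithmetic concrete, here is a small illustrative sketch that computes the pixel-size ladder for the example above: a source pixel size of 0.5, a pyramid factor of 2, and an overview factor of 3. It assumes the first overview level is three times the last pyramid level; the actual number of levels ArcGIS defines also depends on the mosaic extent and tile size.

# Illustrative arithmetic only: pixel sizes of pyramid and overview levels.
def level_sizes(start, factor, levels):
    sizes, size = [], start
    for _ in range(levels):
        size *= factor
        sizes.append(size)
    return sizes

source = 0.5
pyramids = level_sizes(source, 2, 2)                       # [1.0, 2.0]
overviews_with_pyramids = level_sizes(pyramids[-1], 3, 3)  # [6.0, 18.0, 54.0]
overviews_without_pyramids = level_sizes(source, 3, 4)     # [1.5, 4.5, 13.5, 40.5]

print("Pyramid levels: {0}".format(pyramids))
print("Overviews (pyramids built): {0}".format(overviews_with_pyramids))
print("Overviews (no pyramids): {0}".format(overviews_without_pyramids))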

Guidelines

Generally, displaying overviews will be faster than displaying the pyramids for each raster within the mosaic dataset, because overviews are often larger in extent than the source files, so fewer files need to be opened. You may consider building overviews instead of raster pyramids when using

  • Preprocessed tiled imagery, such as orthophoto quads
  • Edge-joined (non-overlapping) imagery, which will not be affected by changing mosaic methods
  • Imagery that will be processed on-the-fly, when the parameters and mosaicking method will not be changed

When building more complex mosaic datasets, especially where you will be taking advantage of the mosaic methods and on-the-fly processing, it can be advantageous to build pyramids on the source rasters and to build overviews only where they are needed, such as when

  • The raster datasets are larger than 5000 columns
  • The rasters overlap and the mosaic methods will be used to control the order
  • On-the-fly processing will occur on the source rasters at all scales
  • Images are not static preprocessed rasters

When building pyramids or overviews, use the appropriate resampling method and compression (as sketched below):

  • For imagery, bilinear resampling and JPEG compression are recommended
  • For discrete data, nearest neighbor resampling and LZW compression are recommended
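For pyramids at ArcGIS 10, the resampling method and compression are controlled through the Pyramid geoprocessing environment rather than tool parameters. The sketch below assumes the documented environment string pattern (PYRAMIDS levels, resampling, compression, quality, skip-first); the exact keywords are worth verifying against the Pyramid environment help. Note that .ovr pyramids offer LZ77 as their lossless compression option, which plays the role of the LZW recommendation above.

import arcpy

# Continuous imagery: bilinear resampling with JPEG compression.
arcpy.env.pyramid = "PYRAMIDS -1 BILINEAR JPEG 75 NO_SKIP"
arcpy.BuildPyramids_management(r"C:\Data\ortho_tile_0423.tif")   # hypothetical path

# Discrete data (for example, a classified raster): nearest neighbor with
# lossless compression.
arcpy.env.pyramid = "PYRAMIDS -1 NEAREST LZ77 75 NO_SKIP"
arcpy.BuildPyramids_management(r"C:\Data\landcover.tif")         # hypothetical path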

Overviews are generated using the defaults of the mosaic dataset, such as the default mosaic method. If you have multiple overlapping scenes over time, you won’t see the different dates until you’ve zoomed in to a level where the source images (or their pyramids) are displayed. For example, in the diagram above, if there were overlapping source images from different dates, you could use the Time Slider to examine each image; however, you would not be able to do this until you were zoomed in to approximately 1:22 000.

When your view is zoomed in to a small number of raster datasets, pyramids are typically involved. When the view zooms out to cover a mosaic composed of many individual datasets, overviews are typically involved.

Additionally, if you plan to view individual rasters from within the mosaic dataset using Lock Raster (for example), the rendering may be faster at different scales for raster datasets with their own pyramids (and statistics). Previewing the rasters from the mosaic dataset’s attribute table will also be faster.

 

Submitted by: Melanie Harlow

Posted in Imagery, Services

Managing layer visibility and selections

Managing layer visibility and selections with the table of contents’ List By Visibility mode

The table of contents in ArcGIS 10 has several ways of listing the layers in the map: by drawing order, source location of the data, whether layers are visible, and whether layers are selectable. A particular list type may be more useful than others depending on the current mapping task. For example, List By Drawing Order is best at setting which layers draw on top of others and List By Source works well to help repair broken data links for layers from different workspaces. In an earlier post, I focused on the table of contents’ List By Selectable mode when I wrote about refining the selected features while editing. In this post, I am going to show how I can use List By Visibility to manage layer visibility and selections.

Continue reading

Posted in Editing

Overviews and pyramids: Part 1 of 2, What are they and why do I need them?

This is part 1 in a 2-part blog. Part 1 provides an overview of pyramids and overviews. Part 2 will provide you with some guidance on generating them when creating a mosaic dataset.

Basically—overviews are not pyramids and pyramids are not overviews.  But pyramids generated by ArcGIS have an .ovr extension (short for overview)…Wait, did I just write that?

Yes, the storage format for a pyramid is an .ovr file. But please don’t confuse this with overviews. Fortunately, overviews are organized in a folder named *.overviews. Both are similar but pyramids are created for raster datasets and overviews are created for mosaic datasets.

How pyramids and overviews compare:

Description: Both are lower-resolution (downsampled) images of the original data.

Purpose: Both improve display speed and performance.

Created for
  • Pyramids: Raster datasets
  • Overviews: Mosaic datasets

Format
  • Pyramids: Written as .ovr files, with a few exceptions. Pyramids stored externally as *.ovr or *.rrd, or internally (e.g., MrSID), can also be read.
  • Overviews: Written as .tif files.

Storage
  • Pyramids: A single file that generally resides next to the source raster dataset and uses the same name.
  • Overviews: By default, a folder next to the geodatabase with a *.overviews extension, or internal storage for ArcSDE. The storage location is customizable.

Storage size: 2 to 10% of the original raster datasets, for both.

Downsampling factor
  • Pyramids: 2
  • Overviews: 3 (default)

Extent
  • Pyramids: Each pyramid level covers the entire raster dataset. You can specify the number of levels to generate.
  • Overviews: Can cover part of or all of a mosaic dataset. Each level can consist of one or more images.

Options when building
  • Pyramids: Number of levels to create, resampling method, compression method and quality.
  • Overviews: Number of levels to create, tile size, base pixel size, resampling method, compression method and quality, output location, extent, and sampling factor.

Why are you generating them?

Pyramids aren’t mandatory—but without them, the display speed of your raster dataset can be prohibitively slow, especially if the datasets are very large.

Overviews aren’t mandatory—but they are highly recommended. You can choose not to create them, or you can generate them for only particular parts of the mosaic dataset, such as a heavily visited part of the imagery. However, if you don’t create them, you may not see any imagery (you may see a wireframe or gray images instead), since there is a limit to the number of rasters that will be processed at one time (which you can change). Without overviews, the mosaic dataset may also display slowly because of all the processing.

How to create pyramids

If a raster dataset doesn’t have pyramids, you will often be prompted to create them when you display the data in an ArcGIS application, such as ArcMap. But it’s better to create them before you use the data. Pyramids can be created using geoprocessing tools; there are a few tools to choose from, depending on whether you have one or many datasets to process. To learn about these, see Building pyramids using geoprocessing tools. You can change the properties of the pyramids, such as the resampling method and compression, via the geoprocessing environments.
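When there are many datasets, a short script (or the Batch Build Pyramids tool) can handle them in one pass; here is a minimal sketch with a hypothetical folder path:

import arcpy

# Build pyramids for every raster dataset in a folder before the data is used.
arcpy.env.workspace = r"C:\Data\Orthos"
for raster in arcpy.ListRasters():
    arcpy.BuildPyramids_management(raster)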

Learn more about pyramids

How to create overviews

To create overviews, they are first defined and then generated. When they are defined, the application analyzes the mosaic dataset and, using the parameters set for the overviews, determines how many are needed, at what levels, and where. They are then added as items in the mosaic dataset, which appear as new rows in the attribute table. At this point, only the rows have been created to identify the properties and number of the overviews. Next, the overview files are generated. Both defining and generating can be done with one tool—Build Overviews. However, if you need to modify any properties, such as defining a new output location or tile size, you must run Define Overviews first (to define the properties and add the items to the attribute table), then run Build Overviews to generate the overview files.

Once you have generated them, the mosaic dataset keeps track of any changes made to it, such as updating an image, adding or removing images, or altering the footprints. By running the Build Overviews tool or the Synchronize Mosaic Dataset tool with the appropriate options, the overviews will be updated.
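In a script, the one-step and two-step workflows look like the following sketch. The mosaic dataset path and output folder are hypothetical, and the optional keyword shown is an assumption worth checking against the Define Overviews documentation.

import arcpy

mosaic = r"C:\Data\Imagery.gdb\Ortho_MD"

# One step: define and generate overviews using the mosaic dataset defaults.
arcpy.BuildOverviews_management(mosaic)

# Two steps: define first to control properties such as the output location
# or tile size, then generate the overview images.
arcpy.DefineOverviews_management(mosaic, overview_image_folder=r"C:\Data\Overviews")
arcpy.BuildOverviews_management(mosaic)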

Learn more about overviews


Submitted by: Melanie Harlow

Posted in Imagery, Services

Expanding the power of the Attribute Assistant

Here at Esri, we always try to help one another out. Well, the Local Government team asked if we could expand the functionality of the Attribute Assistant and build a series of construction tools to help with address data management workflows, and we obliged. We came up with some pretty cool new rules for the Attribute Assistant and some interesting construction tools that will streamline address maintenance workflows. Even though these have been designed for managing address information, you may find them very helpful. Let’s first take a look at the new Attribute Assistant rules.

The first rule we needed to create was CASCADE_ATTRIBUTE. This rule allows you to change an attribute in a table or layer and push that new value to every feature that contains the old value. In the address world, we have implemented a Master Street Name table. Say a road was renamed: we can go into the table, change the road name, and the rule will open the Road Centerline layer and make that change to every road with the old name, then open the Site Address Point layer and update the road name as well. Pretty cool, huh?

The second rule we created was VALIDATE_ATTRIBUTE_LOOKUP. This rule validates an attribute change against a lookup table or layer. Let’s look at how you would use this in address land. If I create a new road and want to make sure the road name matches a road in the Master Street Name table, I can set this rule up to monitor my Street Name field and check that value against the Master Street Name table. The cool thing about this rule is that all I have to do is enter a piece of the road information. If it finds more than one record in the street name table, it presents a prompt where you can select any of the matching values. How is that for data validation?

I also mentioned we are working on some new construction tools. These are still in development, but here is what we are working toward. One tool will let the user click a reference point (in our case, an address point that represents the location on the centerline from which the address is derived), create that point if it does not exist, and then create a series of site address points with information from that reference point or the centerline underneath it. So basically, you can create a series of points, all with information from the source point. The second is a tool to draw a new road, split any intersecting roads, and prorate their address information.

If you want to try the new rules out, as well as a few other enhancements, we have posted a beta of the tools on the forum.

Thanks,
Mike from the Water Team

Posted in Water Utilities

Raster XML file contains a histogram in ArcGIS 10

At 10.0, ArcGIS began storing the histogram for a raster dataset when it generates the statistics. This allows the application to provide more capabilities, such as adding stretches like Percent Clip. If you export the statistics file, you can take a look at all the information.

Prior to 10.0, the exported XML file looked like this:
<?xml version="1.0"?>
<RasterStatistics xml:lang="en">
<DatasetName>dem30</DatasetName>
<Band Name="dem30">
<min>2081</min>
<max>3447</max>
<mean>2544.409567056246</mean>
<stddev>208.6604762988481</stddev>
</Band>
</RasterStatistics>

At 10.0, the exported XML file contains the histogram:
<?xml version="1.0"?>
<RasterStatistics xml:lang="en">
<DatasetName>dem30</DatasetName>
<Band Name="dem30">
<min>2081</min>
<max>3447</max>
<mean>2544.409567056246</mean>
<stddev>208.6604762988481</stddev>
<Histogram>
<npixels>2597680</npixels>
<min>2081</min>
<max>3447</max>
<v>23</v>
<v>45</v>
<v>54</v>
. . .
</Histogram>
</Band>
</RasterStatistics>
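If you want to work with the exported statistics programmatically, the histogram can be read with nothing more than the Python standard library. Here is a small sketch; the exported file name is hypothetical.

import xml.etree.ElementTree as ET

# Read an exported raster statistics file and summarize the histogram.
tree = ET.parse(r"C:\Data\dem30_statistics.xml")
band = tree.getroot().find("Band")
hist = band.find("Histogram")

counts = [int(v.text) for v in hist.findall("v")]
print("Band: {0}".format(band.get("Name")))
print("Pixel count: {0}".format(hist.find("npixels").text))
print("Histogram bins: {0}".format(len(counts)))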

 

Submitted by: Melanie Harlow

Posted in Imagery