Monthly Archives: August 2011

Making and using tiled pattern fill symbols

By Wes Jones, Esri Design Cartographer

TPFS blog thumbnail

My mother loves to sew and knit. My grandmother was a professional seamstress and made countless wedding dresses. I have to admit, I like the odd sew from time to time. Growing up, I spent a lot of time at the local fabric store. When I say a lot of time, I mean a lot of time. The rolls of fabric became a babysitter while my mom sifted through the new designs and patterns. Continue reading

Posted in Imagery, Mapping, Migrate | Tagged , , | 2 Comments

Making golf course maps with ArcMap

By Wes Jones, Esri Design Cartographer

Golf blog thumbnail

How many of you out there are golfers? Can any of you remember your lowest round? I can – I shot a marvelous 83! Sure it was over 9 holes, but I was playing from the Pro’s tee box…

Whether you golf or not, how many of you have had to make a map of a golf course? A while back, one of you asked us on Ask a Cartographer how to make the checkered pattern found on a golf course:

Continue reading

Posted in Mapping, Migrate | Tagged , , , | 2 Comments

Versatile New Maps: The Medium Scale Hydro Basemap and Hydro Reference Overlay

In the bad old days, you may have done some work for a client and gotten to the point where you just wanted to make a map for a meeting or a report.  One of the most time-consuming parts of making a map from scratch was finding data for rivers, streams, and lakes, then turning each feature an appropriate blue, giving each line the appropriate symbol, and symbolizing each stream segment.  It takes time to find a dataset that is good enough, at the right scale, and that looks good once everything is symbolized.  It may take hours to find the right data, symbolize it, and label everything.

In my past life as a consultant, sometimes I had to start from scratch like this. I spent time finding and downloading data at an appropriate scale, checking to see whether it looked OK on my map, then symbolizing each stream and lake at least a little (most places I have worked don't need glaciers symbolized) so they showed up with the right symbols.  Once that was all set, I still hadn't started labeling each stream, which can take quite a while to get right.

Sometimes, to save my clients money, I would give up and use USGS DRGs, turning on only the blue symbols.  I often thought, wouldn't it be wonderful if there were some kind of national hydrologic map service on the internet that you could just add to your map and have it just work?

Now there is, in the form of the Esri Mapping Center Hydro Team's Hydro Basemap.  We have made a medium-scale hydro basemap of the United States, covering scales from 1:147,000 to 1:18,000.  And we think it will make a lot of things easier for our community.

The Hydro Basemap of the United States is based on the NHD, but with a focus on analytical cartography.  These maps are made to show hydrologic networks connecting through a system.  The Hydro Basemap is simply the Hydro Reference Overlay plus relief.  At times you may have your own relief or basemap and may not need any background behind the rivers and lakes; in that case you don't want the whole Hydro Basemap, just the Hydro Reference Overlay.  The presence of relief is the only difference between the two concepts, and the two products are close companions to one another.

Hydro Basemap

Gardiner, Maine area, Esri Mapping Center Team Hydro Basemap of the United States

Actually, the Mapping Center Team has gone well beyond the concept I described earlier.  What we built is a hydro basemap that is ready for a multiscale experience.  You can take one of the applications we built, such as the High Water Map, repurpose the JavaScript application for your needs, and the basemap is ready for you to use.  Just add your data.

 Hydro Basemap mashup with USGS Gauges

A map created using the Hydro Basemap, mashed up with the USGS river gauge service.

As you zoom in and out, streams turn on and off, and labels rearrange for you.  As you zoom in, more and more stream segments appear that are important to your map view.  As you zoom out, smaller streams that would clutter your map view are selected out of the cache and removed.  Streams do not turn off and on indiscriminately or based solely on size or flow; a sophisticated algorithm is at work that prioritizes small streams with big, important names (such as the upper Mississippi River in Minnesota).  In addition, different parts of the country have different methods of stream prioritization, and these are respected.  There is no one-size-fits-all method for pruning streams as scales change, since different parts of the country have different soils and drainage characteristics. Don't worry; we have done this for you so you don't have to.

It's easy to get used to something like this because (as we like to think at the Mapping Center) it's how things should have been all along. We at the Mapping Center Hydro Team are proud of this product and would like you to give it a spin and see how you like it.  We'd like to hear your comments to know how easy this is for you to use.  I wish this had existed when I was a consultant.

Special thanks to Michael Dangermond for providing this post.

Posted in Hydro | Tagged , , , , , , , , , , , , , , , | 2 Comments

Managing and sharing elevation data in ArcGIS 10

Get out your reading glasses and find a comfortable chair. A new 3-part topic has been added to the WebHelp to guide users in managing and sharing their collections of elevation data with ArcGIS 10.  The first part is a discussion about elevation data. The second part discusses the data management plan and issues to consider. The third part walks you through the steps to manage and publish the elevation data.

This 3-part topic focuses on managing your data using a mosaic dataset and is complementary to Imagery: Data management patterns and recommendations.


Written by: Melanie Harlow 

Posted in Imagery, Services, Web | Tagged , , , , , | 2 Comments

New and Improved Business Analyst Server APIs Available on the Resource Center

  By Chris Wilcox

Check out the updated Flex and Silverlight sample application on the Business Analyst Server Resource Center.  These samples were upgraded to accompany the Business Analyst Server 10.0 Service Pack 2. In addition to using the latest software release, we’ve added the Average Drive Time Report, Match Level Summary Report and Summarize Points Report tasks to the Flex samples with this update.

I recommend taking a look at the new Business Analyst Server Flex and Business Analyst Server Silverlight APIs. They are there to help inspire, instruct and construct your own Business Analyst Server Applications. Follow the links below to the samples:

Business Analyst Server Flex API Samples

Business Analyst Server Silverlight Samples

Posted in Location Analytics | Tagged , , , , | Leave a comment

Python Multiprocessing – Approaches and Considerations

The multiprocessing Python module provides functionality for distributing work between multiple processes, taking advantage of multiple CPU cores and larger amounts of available system memory. When analyzing or working with large amounts of data in ArcGIS, there are scenarios where multiprocessing can improve performance and scalability. However, there are many cases where multiprocessing can negatively affect performance, and even some instances where it should not be used.

There are two approaches to using multiprocessing for improving performance or scalability:

  • Processing many individual datasets
  • Processing datasets with many features

The goal of this article is to share simple coding patterns for performing multiprocessing effectively in geoprocessing workflows. The article also covers relevant considerations and limitations, which are important when attempting to implement multiprocessing.
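Stripped of the arcpy calls, the basic coding pattern the examples below share can be sketched in plain Python; the worker function here is just a stand-in for real geoprocessing work:

```python
import multiprocessing

def worker(value):
    # Stand-in for real work, such as a geoprocessing call.
    return value * value

def main():
    inputs = range(10)
    # Pool() starts one process per CPU core by default;
    # pass processes=n to override that.
    pool = multiprocessing.Pool()
    results = pool.map(worker, inputs)
    # Synchronize the main process with the workers for a clean shutdown.
    pool.close()
    pool.join()
    return results

if __name__ == '__main__':
    print(main())
```

The same skeleton (top-level worker function, `Pool`, `map`, then `close`/`join`) carries over directly to the arcpy examples that follow.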

1. Processing large numbers of datasets

The first example performs a specific operation on a large number of datasets in a workspace or set of workspaces. In cases like this, taking advantage of multiprocessing can help get the job done faster. The following code uses the multiprocessing module to define a projection, add a field, and calculate the field for a large list of shapefiles. The Python code creates a pool of processes equal to the number of CPUs or CPU cores available; that pool is then used to process the feature classes.

import os
import re
import multiprocessing
import arcpy

def update_shapefiles(shapefile):
    '''Worker function'''
    # Define the projection to WGS84 -- factory code is 4326.
    arcpy.management.DefineProjection(shapefile, 4326)
    # Add a field named CITY of type TEXT.
    arcpy.management.AddField(shapefile, 'CITY', 'TEXT')
    # Calculate field 'CITY', stripping '_base' from the shapefile name.
    city_name = shapefile.split('_base')[0]
    city_name = re.sub('_', ' ', city_name)
    arcpy.management.CalculateField(shapefile, 'CITY',
                                    '"{0}"'.format(city_name.upper()),
                                    'PYTHON')
# End update_shapefiles

def main():
    # Create a pool class and run the jobs --
    # the number of jobs is equal to the number of shapefiles.
    workspace = r'C:\GISData\USA\usa'
    arcpy.env.workspace = workspace
    fcs = arcpy.ListFeatureClasses('*')
    fc_list = [os.path.join(workspace, fc) for fc in fcs]
    pool = multiprocessing.Pool()
    pool.map(update_shapefiles, fc_list)
    # Synchronize the main process with the job processes to
    # ensure proper cleanup.
    pool.close()
    pool.join()
# End main

if __name__ == '__main__':
    main()

2. Processing an individual dataset with many features and records

This second example looks at geoprocessing tools analyzing an individual dataset with many features and records. In this situation, we can benefit from multiprocessing by splitting the data into groups to be processed simultaneously. For example, finding identical features may be faster when you split a large feature class into groups based on spatial extents. The following code uses a predefined fishnet of polygons covering the extent of 1 million points (Figure 1).

Figure 1: A fishnet of polygons covering the extent of one million points.

import multiprocessing
import arcpy

def find_identical(oid):
    '''Worker function'''
    # Create a feature layer for the tile in the fishnet.
    tile = arcpy.management.MakeFeatureLayer(
        r'c:\testing\testing.gdb\fishnet',
        'layer{0}'.format(oid[0]),
        """OID = {0}""".format(oid[0]))
    # Get the extent of the feature layer and set the extent environment.
    tile_row = arcpy.SearchCursor(tile)
    geometry = tile_row.next().shape
    arcpy.env.extent = geometry.extent
    # Execute Find Identical.
    identical_table = arcpy.management.FindIdentical(
        r'c:\testing\testing.gdb\random1mil',
        r'c:\cursortesting\identical{0}.dbf'.format(oid[0]),
        'Shape')
    return identical_table.getOutput(0)
# End find_identical

def main():
    # Create a list of OIDs used to chunk the inputs.
    fishnet_rows = arcpy.SearchCursor(
        r'c:\testing\testing.gdb\fishnet', '', '', 'OID')
    oids = [[row.getValue('OID')] for row in fishnet_rows]
    # Create a pool class and run the jobs --
    # the number of jobs is equal to the length of the oids list.
    pool = multiprocessing.Pool()
    result_tables = pool.map(find_identical, oids)
    # Merge all the temporary output tables -- this is optional;
    # omitting it can increase performance.
    arcpy.management.Merge(
        result_tables,
        r'C:\cursortesting\ctesting.gdb\find_identical')
    # Synchronize the main process with the job processes to
    # ensure proper cleanup.
    pool.close()
    pool.join()
# End main

if __name__ == '__main__':
    main()

There are tools that do not require the data to be split spatially. The Generate Near Table example below shows the data processed in groups of 250,000 features, selected based on ObjectID ranges.

import multiprocessing
import arcpy

def generate_near_table(oid_range):
    '''Worker function'''
    i = oid_range[0]
    j = oid_range[1]
    # Create a feature layer holding only this range of ObjectIDs.
    lyr = arcpy.management.MakeFeatureLayer(
        r'c:\testing\testing.gdb\random1mil',
        'layer{0}'.format(i),
        """OID >= {0} AND OID <= {1}""".format(i, j))
    # Execute Generate Near Table, writing a temporary output table.
    gn_table = arcpy.analysis.GenerateNearTable(
        lyr,
        r'c:\testing\testing.gdb\random10000',
        r'c:\cursortesting\gn_{0}.dbf'.format(i))
    return gn_table.getOutput(0)
# End generate_near_table

def main():
    arcpy.env.overwriteOutput = True
    oid_ranges = [[0, 250000], [250001, 500000],
                  [500001, 750000], [750001, 1000001]]
    # Create a pool class and run the jobs.
    pool = multiprocessing.Pool()
    result_tables = pool.map(generate_near_table, oid_ranges)
    # Merging the resulting tables is optional and can add
    # overhead if it is not required.
    arcpy.management.Merge(
        result_tables,
        r'c:\cursortesting\ctesting.gdb\generate_near_table')
    # Synchronize the main process with the job processes to
    # ensure proper cleanup.
    pool.close()
    pool.join()
# End main

if __name__ == '__main__':
    main()
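The hard-coded OID ranges in the example above can also be computed from a total feature count and a chunk size. This small helper is not from the original post, just a sketch of that idea:

```python
def oid_ranges(max_oid, chunk_size):
    """Break 1..max_oid into [start, end] pairs of at most chunk_size IDs."""
    ranges = []
    start = 1
    while start <= max_oid:
        end = min(start + chunk_size - 1, max_oid)
        ranges.append([start, end])
        start = end + 1
    return ranges

# For one million features in chunks of 250,000:
# oid_ranges(1000000, 250000)
# -> [[1, 250000], [250001, 500000], [500001, 750000], [750001, 1000000]]
```

Generating the ranges this way keeps the script correct if the feature count changes, instead of relying on a hand-typed list.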


Here are some important considerations before deciding to use multiprocessing:

The scenario demonstrated in the first example will not work with feature classes in a file geodatabase (FGDB), because each update must acquire a schema lock on the workspace. A schema lock effectively prevents any other process from simultaneously updating the FGDB. The example will, however, work with shapefiles and ArcSDE geodatabase data.

For each process, there is a start-up cost for loading the arcpy library (one to three seconds). Depending on the complexity and size of the data, this can cause a multiprocessing script to take longer to run than a script without multiprocessing. In many cases, the final step in a multiprocessing workflow is to aggregate all the results together, which is an additional cost.
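As a rough, pure-Python illustration of that overhead (the worker and inputs here are placeholders, and actual timings vary by machine), a process pool can easily lose to a plain loop on a trivial workload:

```python
import multiprocessing
import time

def tiny_job(x):
    # A deliberately trivial worker; the work is far cheaper than
    # the cost of shipping it to another process.
    return x + 1

def run_serial(inputs):
    return [tiny_job(x) for x in inputs]

def run_pooled(inputs):
    pool = multiprocessing.Pool()
    try:
        return pool.map(tiny_job, inputs)
    finally:
        # Always synchronize with the workers for a clean shutdown.
        pool.close()
        pool.join()

if __name__ == '__main__':
    data = list(range(1000))
    start = time.time()
    serial = run_serial(data)
    middle = time.time()
    pooled = run_pooled(data)
    end = time.time()
    # Both produce identical results; for work this small, the pool's
    # start-up and interprocess costs usually dominate the runtime.
    print('serial: %.4fs  pooled: %.4fs' % (middle - start, end - middle))
```

The point is not the exact numbers but the pattern: multiprocessing pays off only when each job is expensive relative to the cost of starting processes and moving data between them.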

Determining whether multiprocessing is appropriate for your workflow is often a trial-and-error process. That experimentation can wipe out the gains of multiprocessing for a one-off operation; however, it may be well worth it if the final workflow will be run multiple times or applied to similar workflows using large data. For example, if you are running the Find Identical tool on a weekly basis and it runs for hours with your data, multiprocessing may be worth the effort.

Whenever possible, take advantage of the "in_memory" workspace for creating temporary data to improve performance. However, depending on the size of the data being created in memory, it may be necessary to write temporary data to disk instead. Temporary datasets cannot be created in a file geodatabase because of schema locking. Deleting in-memory datasets when you are finished with them can prevent out-of-memory errors.


These are just a few examples showing how multiprocessing can be used to increase performance and scalability when doing geoprocessing. However, it is important to remember that multiprocessing does not always mean better performance.

The multiprocessing module was introduced in Python 2.6, and the examples above will work in ArcGIS 10.0. For more information about the multiprocessing module, refer to the Python documentation.

Please provide any feedback and comments to this blog posting, and stay tuned for another posting coming soon about “Being successful processing large complex data with the geoprocessing overlay tools”.


This post contributed by Jason Pardy, a product engineer on the Analysis and Geoprocessing team

Posted in Analysis & Geoprocessing | Tagged , , | 11 Comments

Major Cities Chiefs & ArcGIS Explorer Online

The Major Cities Chiefs (MCC) is a professional organization of police executives representing the largest cities in the United States and Canada. The organization has recently published two very interesting maps on their website that were made using ArcGIS Explorer Online. Let’s take a closer look and see how they’ve leveraged Explorer Online’s capabilities.

The first map is an embedded map on their Member Cities site. The map enables visitors to navigate to an area of interest and click member city locations to view more information. Locations were obtained from a CSV file and imported onto the Explorer map. The configured pop-up includes a link to each department website:

Clicking the View Detailed Map link under the embedded map opens ArcGIS Explorer Online with a dashboard that includes several gadgets. These enable users to learn more about each department's staff composition, population, budget, and crime activity compared to national averages. Just click a location to activate the gadgets:

To learn more about how to make similar maps for use on your website, see the following help topics:

Import CSV data

About configuring pop-up windows

Link to a map (scroll down to view the embedding a map in a website section)

About the map dashboard


Posted in ArcGIS Online, Services | Tagged , | Leave a comment

And now onto the Dev Meet Up in Charlotte

On our way to Charlotte, after waving at some highway patrolmen, Jim Barry (@JimBarry) and I (@AmyNiessen) arrived at the hotel to a definitely warm welcome. Charlotte's downtown was very much what I had imagined. On the road, the crosswalks were painted in checkers as a cute way of announcing the nearby NASCAR Hall of Fame. Since we were only there for one day, we had to get right down to business. We needed to prep for the meet up in the evening and wanted to get to the venue, Black Finn (@BlackFinnCLT), beforehand to set up.

When we arrived, we were introduced to the coordinator, Courtney Maddox, who was very kind in helping us get set up for the night's event. She immediately offered us the entire upstairs bar, as opposed to the tiny room we used the last time we visited. On top of that, we were getting a microphone. Hooray for microphones (don't get me started…)! As people started to come in, I started to get a little hungry, and I was able to sneak in a few bites of the gourmet food Black Finn had in store for us for the evening. Lucky me! As people checked in, I started to see some familiar faces from our Charlotte office, such as Garima Vyas and David Crosby. We also recognized another friend (although he was disguised in plaid), Glenn Goodrich (@ruprictGeek), who was all set to perform the keynote speech.

Jim introduced the EDN Team and the sites we maintain for all of our Dev Meet Up events. The site gives users a way to network and plan for upcoming Dev Meet Ups, and it lets the EDN Team stay in touch with users and really cater each event toward what they want. As soon as Jim turned the stage over to Glenn, he kicked off the evening by demonstrating Backbone.js. He was kind enough to share a little bit of code with everyone. Thanks, Glenn!

Now onto our lightning talks. Bryan Townsend from York County, SC presented "Customize by Configuration," showing features from the Geocortex Viewer for Silverlight. Interestingly enough, random rock 'n' roll sound bites kept interjecting into Bryan's presentation. They came at good times, though, and it seemed as though he had planned it that way.

The final lightning talk presentation we had was from Shawn Carson of Rock Hill. Shawn showed off the website he built for the City using the ArcGIS API for JavaScript. At the same time, he used the City’s website to show all of us how easy it was to just pick it up and get building, even for someone like him who was really new to coding in JavaScript. The first release of his website contained about 90% copy-pasted code from the ArcGIS API for JavaScript Resource Center. There you’ll find well over a hundred sets of runnable JavaScript code that exercise pretty much all of the most common mapping and GIS functions.

Finally, we had a few trivia questions that needed to be answered in exchange for our Esri tote bag and some other very cool items. Some renamed the Esri tote bag as the official shopping bag of Colorado.

We want to thank everyone for coming out. To stay involved, please visit our Carolinas page where you can meet other developers in this area, find out more about our events, and be notified of our next visit. Ciao for now!

Posted in Developer | Tagged , | Leave a comment

Drag and Drop Files on Your Web Map


With the latest release of ArcGIS Online, you can now add shapefiles, text files (TXT and CSV), and GPX files directly to your web map. You can drag data from your computer onto your map or, with just the click of a button, add it to your map in the map viewer or ArcGIS Explorer Online. Once you've added your data, you can configure pop-up windows and change the symbols.

When you add your data to a web map, the map viewer and ArcGIS Explorer Online automatically read the location information from your file, draw features for each item, and store the information in the map.

In addition to the above-mentioned formats, you can also add Open Geospatial Consortium, Inc. (OGC), Web Map Service (WMS) layers to the map viewer and ArcGIS Explorer Online. Simply click the Add button and enter the URL to the service. The map viewer also supports the addition of KML layers.

You can share your data or saved maps in ArcGIS Online so others can find them and use them to create their own maps and mashups.

Learn more about adding features to your map, or watch a short video.
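As an aside, a CSV file needs a header row with recognizable location fields before a map viewer can place its rows. This sketch writes a minimal example; the 'Latitude'/'Longitude' field names are a common convention and the point data is made up:

```python
import csv

# Hypothetical point data; 'Latitude' and 'Longitude' are among the
# header names commonly recognized when locating CSV rows on a map.
rows = [
    {'Name': 'Office A', 'Latitude': 34.05, 'Longitude': -118.24},
    {'Name': 'Office B', 'Latitude': 40.71, 'Longitude': -74.00},
]

with open('points.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['Name', 'Latitude', 'Longitude'])
    writer.writeheader()
    writer.writerows(rows)
```

A file shaped like this can then be dragged onto the map, and each row becomes a clickable point feature.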

Posted in ArcGIS Online, Web | Tagged , , , , , , , , | 2 Comments

Make Your Own Social Media Map (in 3 easy steps)


Want to know what’s going on with Hurricane Irene and see for yourself what folks on the ground are saying? Here’s how you can quickly make your own ArcGIS Online Hurricane Irene map, add geo-located tweets, and share your map with others in three quick and easy steps.

Step 1: Get your map

You can start off with a new map and hunt for services to add, but there's no reason to do that when a number of hurricane maps have already been publicly shared. So we'll start by going to ArcGIS Online and searching for the keyword "Irene":

Continue reading

Posted in ArcGIS Online, Services | Tagged , , , | Leave a comment