Multiprocessing with ArcGIS – Approaches and Considerations (Part 1)

The multiprocessing Python module provides functionality for distributing work between multiple processes on a given machine, taking advantage of multiple CPU cores and larger amounts of available system memory. When analyzing or working with large amounts of data in ArcGIS, there are scenarios where multiprocessing can improve performance and scalability. However, there are many cases where multiprocessing can negatively affect performance, and even some instances where it should not be used.
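The basic pattern is straightforward even outside of ArcGIS. As a pure-Python illustration (no arcpy involved), a pool of worker processes can map a function over a list of inputs:

```python
import multiprocessing

def square(value):
    '''Worker function run in each child process.'''
    return value * value

if __name__ == '__main__':
    pool = multiprocessing.Pool()          # one worker per CPU core
    results = pool.map(square, [1, 2, 3, 4])
    pool.close()                           # no more work to submit
    pool.join()                            # wait for workers to exit
    print(results)                         # [1, 4, 9, 16]
```

The examples in this article follow this same shape: a worker function at module level, and a `Pool` in `main()` that maps the worker over a list of inputs.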
There are two approaches to using multiprocessing for improving performance or scalability:

  1. Processing many individual datasets.
  2. Processing datasets with many features.

The goal of this article is to share simple coding patterns for effectively performing multiprocessing for geoprocessing. The article also covers relevant considerations and limitations that are important when implementing this approach.

1. Processing large numbers of datasets

The first example covers performing a specific operation on a large number of datasets in a workspace or set of workspaces. In cases where there are large numbers of datasets, taking advantage of multiprocessing can help get the job done faster. The following code uses the multiprocessing module to define a projection, add a field, and calculate the field for a large list of shapefiles. This Python code follows a simple pattern: it creates a pool of processes equal to the number of CPUs or CPU cores available, and this pool of processes is then used to process the feature classes.

import os
import re
import multiprocessing
import arcpy

def update_shapefiles(shapefile):
    '''Worker function'''

    # Define the projection to WGS84 -- factory code is 4326.
    arcpy.management.DefineProjection(shapefile, 4326)

    # Add a field named CITY of type TEXT.
    arcpy.management.AddField(shapefile, 'CITY', 'TEXT')

    # Calculate field 'CITY', stripping '_base' from
    # the shapefile name.
    city_name = os.path.basename(shapefile).split('_base')[0]
    city_name = re.sub('_', ' ', city_name)
    arcpy.management.CalculateField(shapefile, 'CITY',
                                    '"{0}"'.format(city_name.upper()),
                                    'PYTHON')

# End update_shapefiles
def main():
    '''Create a pool class and run the jobs.'''
    # The number of jobs is equal to the number of shapefiles.
    workspace = 'C:/GISData/USA/usa'
    arcpy.env.workspace = workspace
    fcs = arcpy.ListFeatureClasses('*')
    fc_list = [os.path.join(workspace, fc) for fc in fcs]
    pool = multiprocessing.Pool()
    pool.map(update_shapefiles, fc_list)

    # Synchronize the main process with the job processes to
    # ensure proper cleanup.
    pool.close()
    pool.join()
# End main

if __name__ == '__main__':
    main()

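Note that `Pool()` sizes itself to the number of CPU cores. Because each worker loads arcpy and opens its own data, a smaller pool is sometimes better. A short sketch of capping the pool size (the `cpu_count() - 1` cap is an illustrative choice, not a rule):

```python
import multiprocessing

# Pool() defaults to one worker per CPU core; capping the count can
# help when memory or per-process arcpy start-up cost is a concern.
# Leaving one core free for the OS is just one common choice.
workers = max(1, multiprocessing.cpu_count() - 1)
pool = multiprocessing.Pool(processes=workers)
pool.close()
pool.join()
```

The worker function and input list are passed to `pool.map` exactly as in the example above.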
2. Processing an individual dataset with many features and records

This second example looks at running geoprocessing tools on an individual dataset with many features and records. In this situation, we can benefit from multiprocessing by splitting the data into groups to be processed simultaneously.

Figure 1: A fishnet of polygons covering the extent of one million points

For example, finding identical features may be faster when you split a large feature class into groups based on spatial extents.
The following code uses a pre-defined fishnet of polygons covering the extent of one million points (Figure 1).
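For readers who do not already have such a fishnet, one can be built with the Create Fishnet tool. A hedged sketch follows; the paths and the 4 x 4 tile count are illustrative choices, not values from the original workflow:

```python
import arcpy

# Get the full extent of the point feature class (path is illustrative).
extent = arcpy.Describe('C:/testing/testing.gdb/points').extent

# Create a 4 x 4 polygon fishnet covering that extent; passing 0 for
# cell width/height lets the tool derive them from the row/column counts.
arcpy.management.CreateFishnet(
    'C:/testing/testing.gdb/fishnet',
    '{0} {1}'.format(extent.XMin, extent.YMin),   # origin coordinate
    '{0} {1}'.format(extent.XMin, extent.YMax),   # y-axis coordinate
    0, 0,                                         # cell width, cell height
    4, 4,                                         # number of rows, columns
    '{0} {1}'.format(extent.XMax, extent.YMax),   # opposite corner
    'NO_LABELS', '#', 'POLYGON')
```

A finer fishnet gives more, smaller jobs; the right granularity depends on the data and the number of cores.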

import multiprocessing
import numpy
import arcpy
arcpy.env.overwriteOutput = True

def find_identical(oid):
    '''Worker function to perform Find Identical, and return
    the results as a numpy array.'''
    # Create a feature layer for the tile in the fishnet.
    tile = arcpy.management.MakeFeatureLayer(
                        'C:/testing/testing.gdb/fishnet',
                        'layer{0}'.format(oid[0]),
                        """OID = {0}""".format(oid[0]))

    # Get the extent of the feature layer and set the environment
    # to use during Find Identical.
    tile_row = arcpy.da.SearchCursor(tile, 'shape@')
    geometry = next(tile_row)[0]
    arcpy.env.extent = geometry.extent

    # Execute Find Identical on the point feature class
    # (input path is illustrative).
    identical_table = arcpy.management.FindIdentical(
                         'C:/testing/testing.gdb/points',
                         'in_memory/identical', 'Shape')

    # Convert the resulting table into a numpy array and return it.
    result_array = arcpy.da.TableToNumPyArray(identical_table, ['*'])
    return result_array
# End find_identical

def main():
    # Create a list of OIDs used to chunk the inputs.
    fishnet_rows = arcpy.SearchCursor(
              'C:/testing/testing.gdb/fishnet', '', '', 'OID')
    oids = [[row.getValue('OID')] for row in fishnet_rows]

    # Create a pool class and run the jobs--
    # the number of jobs is equal to the length of the oids list.
    pool = multiprocessing.Pool()
    result_arrays = pool.map(find_identical, oids)

    # Concatenate the resulting arrays and create an output table
    # reporting any identical records (output path is illustrative).
    result_array = numpy.concatenate(result_arrays, axis=0)
    arcpy.da.NumPyArrayToTable(result_array,
                               'C:/testing/testing.gdb/identical')

    # Synchronize the main process with the job processes to ensure
    # proper cleanup.
    pool.close()
    pool.join()
# End main

if __name__ == '__main__':
    main()

The example above splits the data using spatial extents. However, some tools do not require the data to be split spatially. The Generate Near Table example below processes the data in groups of 250,000 features, selecting them based on ObjectID ranges.

import multiprocessing
import numpy
import arcpy

def generate_near_table(ranges):
    i, j = ranges[0], ranges[1]
    # Make a feature layer holding only this ObjectID range
    # (input path is illustrative).
    lyr = arcpy.management.MakeFeatureLayer(
                  'C:/testing/testing.gdb/points',
                  'layer{0}'.format(i),
                  """OID >= {0} AND OID <= {1}""".format(i, j))
    # Generate the near table into in_memory (output name is illustrative).
    gn_table = arcpy.analysis.GenerateNearTable(
                  lyr, 'c:/testing/testing.gdb/random300',
                  'in_memory/near{0}'.format(i))
    result_array = arcpy.da.TableToNumPyArray(gn_table, ['*'])
    return result_array
# End generate_near_table function

def main():
    ranges = [[0, 250000], [250001, 500000], [500001, 750000],
              [750001, 1000001]]

    # Create a pool class and run the jobs--the number of jobs is
    # equal to the length of the ranges list.
    pool = multiprocessing.Pool()
    result_arrays = pool.map(generate_near_table, ranges)

    # Concatenate the resulting arrays and create an output table
    # reporting the near table results.
    result_array = numpy.concatenate(result_arrays, axis=0)
    arcpy.da.NumPyArrayToTable(result_array, 'c:/testing/testing.gdb/gn3')

    # Synchronize the main process with the job processes to
    # ensure proper cleanup.
    pool.close()
    pool.join()
# End main

if __name__ == '__main__':
    main()

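The hard-coded ranges above can also be computed rather than typed by hand. A small pure-Python sketch (the chunk size of 250,000 matches the example; any size works):

```python
def oid_ranges(min_oid, max_oid, chunk_size):
    '''Yield [start, end] ObjectID ranges covering min_oid..max_oid.'''
    start = min_oid
    while start <= max_oid:
        end = min(start + chunk_size - 1, max_oid)
        yield [start, end]
        start = end + 1

# One million ObjectIDs starting at 1, in chunks of 250,000:
ranges = list(oid_ranges(1, 1000000, 250000))
# [[1, 250000], [250001, 500000], [500001, 750000], [750001, 1000000]]
```

The resulting list can be passed straight to pool.map in place of the literal ranges.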

Here are some important considerations before deciding to use multiprocessing:

  1. The scenario demonstrated in the first example will not work with feature classes in a file geodatabase, because each update must acquire a schema lock on the workspace. A schema lock effectively prevents any other process from simultaneously updating the file geodatabase. The example will, however, work with shapefiles and ArcSDE geodatabase data.
  2. For each process, there is a start-up cost in loading the arcpy library (1-3 seconds). Depending on the complexity and size of the data, this can cause the multiprocessing script to take longer to run than a script without multiprocessing. In many cases, the final step in a multiprocessing workflow is to aggregate all the results together, which adds further cost.
  3. Determining whether multiprocessing is appropriate for your workflow is often a process of trial and error. This experimentation can cancel out the gains of multiprocessing for a one-off operation; however, it may be very valuable if the final workflow will be run multiple times or applied to similar workflows using large data. For example, if you are running the Find Identical tool on a weekly basis, and it runs for hours on your data, multiprocessing may be worth the effort.
  4. Whenever possible, take advantage of the "in_memory" workspace for creating temporary data; writing to memory rather than to disk can improve performance. However, depending on the size of the data being created, it may be necessary to write temporary data to disk. Temporary datasets cannot be created in a file geodatabase because of schema locking. Deleting in-memory datasets when you are finished with them can prevent out-of-memory errors.
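On the fourth point, cleanup can live right inside the worker function. A hedged sketch (the function name, paths, and table names are illustrative):

```python
import arcpy

def find_identical_tile(points, oid):
    '''Run Find Identical into in_memory, copy the result out,
    and delete the temporary table.'''
    temp_table = 'in_memory/identical_{0}'.format(oid)
    arcpy.management.FindIdentical(points, temp_table, 'Shape')
    result = arcpy.da.TableToNumPyArray(temp_table, ['*'])
    # Deleting the in-memory dataset keeps memory from accumulating
    # across many worker jobs in the pool.
    arcpy.management.Delete(temp_table)
    return result
```

Each worker process has its own in_memory workspace, so deleting there affects only that process.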


These are just a few examples showing how multiprocessing can be used to increase performance and scalability when doing geoprocessing. However, it is important to remember that multiprocessing does not always mean better performance.
The multiprocessing module was included in Python 2.6, and the examples above will work in ArcGIS 10.1. For more information about the multiprocessing module, refer to the Python documentation.
Please provide any feedback and comments on this blog post, and stay tuned for another post coming soon about “Being successful processing large complex data with the geoprocessing overlay tools”.
This post was contributed by Jason Pardy, a product engineer on the analysis and geoprocessing team.
This entry was posted in Analysis & Geoprocessing and Python. Bookmark the permalink.

Comments


  1. offermann says:

    Nice article about the considerations of using the power of multiple CPUs, especially the dependencies on the underlying workspace type (no fgdb). I sometimes have problems when it comes to geoprocessing of many features in a feature class. It would be helpful to write an article on how to separate one single feature class into many smaller feature classes, or an automatically created “fishnet” feature class like in the second example.

    • bruce_harold says:


      Here is a code snippet I use to break tables or feature classes up into chunks of arbitrary count, by creating a SQL expression usable for cursors, feature sets etc. It creates a list of expressions for ObjectID ranges defining the chunks.

      inDesc = arcpy.Describe(inTable)
      oidName = arcpy.AddFieldDelimiters(inTable, inDesc.oidFieldName)
      sql = '%s = (select min(%s) from %s)' % (oidName, oidName, os.path.basename(inTable))
      cur = arcpy.da.SearchCursor(inTable, [inDesc.oidFieldName], sql)
      minOID = next(cur)[0]
      del cur, sql
      sql = '%s = (select max(%s) from %s)' % (oidName, oidName, os.path.basename(inTable))
      cur = arcpy.da.SearchCursor(inTable, [inDesc.oidFieldName], sql)
      maxOID = next(cur)[0]
      del cur, sql
      breaks = range(minOID, maxOID)[0:-1:2000]  # 2K slices
      exprList = [oidName + ' >= ' + str(breaks[b]) + ' and ' +
                  oidName + ' < ' + str(breaks[b+1]) for b in range(len(breaks)-1)]

      • g3martin says:

        Thanks for posting this.

      • rviger says:

        Yeah, thanks for the code. Just a caveat for others cutting-and-pasting this, I needed to replace the single quotes used in setting the “sql” variables. I don’t think it’s a code error, just something funky in the text representation (not UTF-8?) on either this web site or my fairly standard configuration of Win7.

  2. harrybowman says:

    Why doesn’t this cause a conflict in creating and writing to the in_memory/identical table? Is there one per process?

  3. amarinelli says:

    Found that there were issues when multiprocessing with data in ArcSDE. The necessary functions would complete (Add Field, Calculate Field), but the pool would never join/close to complete the script (i.e., the script would hang). Clear Workspace Cache (Data Management) helped.

    # Release hold on the ArcSDE workspace created in previous steps.
    arcpy.env.workspace = ""

    # Execute the Clear Workspace Cache tool. If you do not specify a
    # connection, all ArcSDE workspaces are removed from the cache.
    arcpy.management.ClearWorkspaceCache()

  4. Curtis Price says:

    Just starting to experiment with this.

    I would expect that raster processing would have issues, as raster map algebra always writes grids to a folder – a best practice, I guess, for any of this is to have each process do its geoprocessing in its own private workspace and clean up after itself when done. Copying a shapefile would be okay as there are no workspace-dependent directory files at play (i.e., info folder, file gdb workspace directory tables).

  5. awulff says:

    Nice considerations and advice, Jason. Thank you. I was hoping to pose a question per the advice of ESRI Tech support:

    Some Background:
    We have a process to automate imagery and report changes for up to 300 fields simultaneously. As a result we want a non-serial solution to assist with getting through processing of new data as quickly as possible, hence the parallel processing.

    Our Systems: We have both ArcGIS 10.3.1 x86 and ArcGIS 10.3 x64 on the server machine where this procedure will be executed.
    We are running sde against a SQL 2012 server.

    (Note: I had already encountered the issues with multiple thread handles to fgdb’s and sde’s and have converted to a forked-process solution.)

    The problem:
    I have found consideration 1 to be false when attempting to use parallel calls through arcpy.CopyFeatures_management to an SDE location. The features that I do get to copy will periodically break on read operations as well, where I get messages like:

    WARNING: Failed to convert in_memory/RICRfr1_AUG092015_RM39_vi. ERROR 999999: Error executing function.
    DBMS deadlock victim [[Microsoft][SQL Server Native Client 11.0][SQL Server]Transaction (Process ID 570) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.] [SGS_Raster_Results.dbo.GDB_ItemRelationships]
    No spatial reference exists.
    DBMS table not found
    Failed to execute (CopyRaster).

    To this end, I would like to inquire if there are any tips that can be recommended to help ensure that the SDE “locks” are clear before writes (the arcpy.TestSchemaLock(…) seems to be hit and miss).

    Also we have been thinking about using SQL/Spatial. Thoughts on this direction?

    Thank you,