Tag: Editing

Voting on a new park location

Park analysis and design: Voting on a new park location (part 3)

In my previous blog post, I determined suitable locations for a new park by analyzing a series of datasets provided by the City of Redlands. The final output showed a number of parcels that matched the standards established in the model. The next task is to seek feedback from the public. To do this, I’ll take advantage of a web application I’ll build using ArcGIS Server.

Preparing the data
The park suitability model resulted in an output of a feature class containing many multipart features. A multipart feature, as the name suggests, is a feature with multiple parts. Think of Hawaii as one feature (state) with multiple parts (islands). To break the suitable areas for parks into separate features, I’ll use a tool called Multipart To Singlepart.

With every parcel being its own feature, I can calculate the area for each potential site by creating a field and using Calculate Geometry in the attribute table. Once I have the area in acres, I need to convert all the polygons to points using the Feature to Point tool so I can represent each park as a point location in the web application.

The final dataset contains fields for the park’s area and an identification number, which I derived by copying the OBJECTID to a field called ParkID. This number is used to link the park feature to the voting results table, which also has a field for the ID named ParkIDVoted (so I can distinguish it in the Flex code).
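
For readers who prefer to script these preparation steps rather than run the tools from the toolbox, here is a rough arcpy sketch of the same workflow; the workspace path, dataset names, and the acreage field are assumptions standing in for my actual data.

```python
# A minimal sketch of the data-preparation steps, using arcpy.
# Dataset names and the workspace path are assumptions for illustration.
import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"  # assumed file geodatabase

# Break the multipart suitability output into one feature per parcel
arcpy.MultipartToSinglepart_management("SuitableAreas", "SuitableParcels")

# Add an acreage field and calculate the area of each candidate parcel
arcpy.AddField_management("SuitableParcels", "Acres", "DOUBLE")
arcpy.CalculateField_management("SuitableParcels", "Acres",
                                "!shape.area@acres!", "PYTHON")

# Represent each candidate park as a point for the web application
arcpy.FeatureToPoint_management("SuitableParcels", "ParkPoints", "CENTROID")

# Copy OBJECTID into a ParkID field used to link to the voting table
arcpy.AddField_management("ParkPoints", "ParkID", "LONG")
arcpy.CalculateField_management("ParkPoints", "ParkID", "!OBJECTID!", "PYTHON")
```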

Building the web service and application
I’m developing my application using the ArcGIS API for Flex, so I first check whether there are any existing samples I can use as a starting point to help me collect votes. I find the Editing a related table sample, which demonstrates a similar scenario that I can modify for the needs of my own project. This sample takes a set of incidents (stored as points) and allows the user to flag an incident as important. In the code, there’s a map service that holds the points, as well as a table to hold the results. In the geodatabase, these are linked using a relationship class. These datasets need to be in an ArcSDE geodatabase with feature access enabled to allow web editing. Accordingly, I set up my data this way and publish it with ArcGIS Server, which exposes the parks and the votes table as layers in a map service.

I need to change a few things in the sample to customize it for my own application: the URL of the parks layer and the URL of the table holding the votes. Some field names are different, but other than that, the logic of casting the vote is fairly straightforward.

In terms of the interface, the sample shows how to use the pop-up window (infoWindow) when a park is clicked. I used the same thumbs-up icon and added a bit more information to the information window. Additionally, I published the park access map and the final suitable parcels layers, which can be turned off and on in the application using simple Flex components.

Submitting a vote
When users find a park they are interested in, they click the icon on the map. This sends a query to the server using the x,y location of the map click, which in turn triggers a relationship query that returns the records related to that park in the votes table. The infoWindow then displays the ID of the park that was clicked, the park size, and the current count of related records, which are votes in favor of this location.

To vote for this park, the user clicks the thumbs-up icon, which sends a message to the server (applyEdits) that puts the ID of the park, plus a value for “like” into the related table through the relationship class. The count is increased by one and the total vote count can be seen immediately.
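
Under the hood, the vote is simply a new row added to the related table through the layer’s applyEdits operation. As a rough illustration of the request the Flex application sends, here is a hedged Python sketch against the REST endpoint; the service URL, layer index, and the vote and date field names are assumptions (only ParkIDVoted comes from my schema).

```python
# A hedged sketch of casting a vote by posting a new row to the related table's
# REST applyEdits endpoint. The service URL is hypothetical; field names other
# than ParkIDVoted are placeholders for the schema described in the post.
import json
import urllib
import urllib2

votes_table_url = ("http://myserver/arcgis/rest/services/"
                   "ParkVoting/FeatureServer/1")  # assumed table layer id

new_vote = [{
    "attributes": {
        "ParkIDVoted": 12,          # ID of the park the user clicked
        "Vote": "true",             # value recorded for a "like"
        "VoteDate": 1309478400000   # date as epoch milliseconds
    }
}]

params = urllib.urlencode({"adds": json.dumps(new_vote), "f": "json"})
response = urllib2.urlopen(votes_table_url + "/applyEdits", params)
print(json.load(response))          # addResults reports success per record
```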

Counting the results
On the server, the related table collects the votes. Each record in the table is a vote, which includes the Park ID the user clicked, an attribute for the vote (“true”), and the date of the vote.

When the voting period is over, I can run a summary on the final table using the Summary Statistics tool. This counts the number of records with the same ID and creates a table, which I can then build a report on using the new reporting tools in ArcGIS 10.
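
If I wanted to script the tally instead of running the tool from the toolbox, a minimal arcpy sketch might look like this; the table names are assumptions.

```python
# A minimal sketch of tallying the votes with the Summary Statistics tool.
# Table and output names are assumptions based on the schema described above.
import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"  # assumed workspace

arcpy.Statistics_analysis(in_table="ParkVotes",
                          out_table="VoteTotals",
                          statistics_fields=[["ParkIDVoted", "COUNT"]],
                          case_field="ParkIDVoted")
```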

Now that I have a winner, the next task is to design the park using the sketching tools in ArcGIS 10. I will cover this in my next blog post.

Accessing the Data
The data, Flex source code, report template, and a few other parts of the workflow can be found here

The rest of the data and tools for this blog series can be found in the Park Analysis and Design group (make sure to filter by Show: All Content at the top of the page)

Content for the post from Matthew Baker

Update

Part 1 – Park analysis and design – Measuring access to parks

Part 2 – Park analysis and design: Locating a park through suitability analysis

Part 3 – Park analysis and design:  Voting on a new park location

Part 4 – Park analysis and design:  Sketching the design of a new park

Posted in Analysis & Geoprocessing, Editing, Web

Locating a park through suitability analysis

Park analysis and design: Locating a park through suitability analysis (part 2)

In my previous blog post, I analyzed park accessibility in the City of Redlands and discovered several areas of the city that were farther than one mile from an existing park along the walkable street network. Now, I want to determine where to best locate a new park within the areas I identified as being underserved by current parks.

To answer this question, I’ll conduct a suitability analysis to find parcels that are most appropriate for a new park.

There are two main types of suitability analysis: binary and weighted. Binary suitability analysis yields a binary final answer: 1 or 0, or in our case, suitable or unsuitable. A weighted suitability analysis allows for a range of final answers, from 1 to 10, for example, and allows certain layers to have more influence (weight) on the result of the model. For this example, I’m going to create a binary suitability analysis model.

As with our park accessibility analysis, I’ll start with several datasets from the City of Redlands, including parks, schools, roads, trails (off-road and on), existing and proposed bicycle lanes, and vacant parcels. Before I construct a model, I should know the distances the new park should be from certain features. In most cases, I’m looking to be close to certain features, but in other cases, I want to make sure I’m far enough away, such as with highways and existing parks. 

Remember that any of these values can be changed to suit any criteria. ModelBuilder allows a workflow to be created, run, and then modified to suit different ideas of how far each feature should be from a new park.

Creating a data processing workflow
My analysis should read like a flowchart: buffer the schools, trails, and bicycle lanes to make the ‘good’ areas. Buffer the existing parks and highways to make the ‘bad’ areas. Then remove the bad areas from the good areas, and find the areas that are common to the vacant parcels.
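
To make that flowchart concrete, here is a rough arcpy sketch of the same steps outside ModelBuilder; the layer names and distances are examples rather than my actual criteria, and intersecting the ‘good’ buffers reflects an assumption that every proximity criterion must be met.

```python
# A rough arcpy sketch of the binary suitability workflow described above,
# with assumed layer names and example distances.
import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"   # assumed workspace

# "Good" areas: near schools, trails, and bicycle lanes (distances are examples)
arcpy.Buffer_analysis("Schools", "Schools_buf", "0.5 Miles", dissolve_option="ALL")
arcpy.Buffer_analysis("Trails", "Trails_buf", "0.25 Miles", dissolve_option="ALL")
arcpy.Buffer_analysis("BikeLanes", "BikeLanes_buf", "0.25 Miles", dissolve_option="ALL")

# Treat every proximity criterion as required (an assumption; a Union would
# instead treat any one of them as sufficient)
arcpy.Intersect_analysis(["Schools_buf", "Trails_buf", "BikeLanes_buf"], "GoodAreas")

# "Bad" areas: too close to existing parks or highways
arcpy.Buffer_analysis("Parks", "Parks_buf", "0.5 Miles", dissolve_option="ALL")
arcpy.Buffer_analysis("Highways", "Highways_buf", "0.25 Miles", dissolve_option="ALL")
arcpy.Union_analysis(["Parks_buf", "Highways_buf"], "BadAreas")

# Remove the bad areas from the good, then keep what falls on vacant parcels
arcpy.Erase_analysis("GoodAreas", "BadAreas", "Candidates")
arcpy.Intersect_analysis(["Candidates", "VacantParcels"], "SuitableAreas")
```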

Developing a suitability model
To use the data and tools found in ArcGIS to accomplish suitability analyses, I’ll develop a model using ModelBuilder. ModelBuilder acts much like a living flowchart: data elements connect to tools, and tools create outputs, just like the processing workflow described above. A model is not only an organizational tool for data processing; its elements also store parameter values and data paths that can be changed, and the model itself can be shared and run on different data. For example, other users can change the input datasets to their own parks and street network to run the same analysis.

By definition, geoprocessing tools take one or more pieces of geographic data, run a process based on parameters I define, and create a new piece of data as the result. That first result can be fed into another tool, which produces yet another piece of data, and once the new data has been created, the earlier result can be discarded. Data like this is called intermediate data. Each piece of intermediate data should be written to a scratch workspace, which is defined in the environment settings of the map or model. Keeping intermediate data in a scratch workspace is a great way to ensure I don’t end up with random datasets all over my computer.
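
A couple of geoprocessing environment settings keep that intermediate data corralled; the snippet below is a minimal sketch assuming a hypothetical scratch geodatabase path.

```python
import arcpy

# Point intermediate results at a dedicated scratch geodatabase (assumed path)
arcpy.env.scratchWorkspace = r"C:\Data\Scratch.gdb"
arcpy.env.overwriteOutput = True   # let reruns of the model replace old results
```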

Tools for models can be found using the Search window. The Search window allows me to type in the name of a tool, dataset, or script and shows results across all types of data. To add a tool to a model, drag the tool by its name, and drop it on the model canvas. Model elements can be connected using the Connect tool from the model window. Double-clicking a tool or element opens a dialog box that allows me to ensure the settings are correct before I run the model. ModelBuilder will also check that the inputs are valid before running, and I can check them all manually by clicking the Validate Entire Model tool on the ModelBuilder toolbar. I can save the model in a toolbox, which can be stored anywhere on disk or, as I am doing, in a geodatabase.

When the model runs, a dialog box shows me the progress, notification that it is finished, and any messages, warnings, or errors that might have occurred. The Results window is the location to track the status of a model or other geoprocessing operation.

Reusing models as tools
Another nice feature of models is that they can be used in other models as tools. Since I already proved the effectiveness of measuring distances along the road network versus straight-line buffers, I can take the method I developed and use it as a tool in my park suitability model. I’ll call the tool Buffer Along Roads and use it for the schools and existing parks, which are the only datasets that require travel to be measured along the road network.

My model tool will operate as any other tool: it requires an input point dataset and will create a polygon dataset containing buffers along the roads using the distances exposed in the reclassification scheme. Once I’ve created these distance polygons, I then choose the ones that meet my criteria—in this case, those that are at least ½ mile from existing parks and within ½ mile of schools. From there, the rest of my analysis can continue using straight-line buffers from bike lanes, trails, and highways.

Determining the final location
When the model is finished, I see that there is more than one suitable location for a new park. I then have some work to do to figure out the final parcel or location. For example, perhaps I’m looking for the area that is closest to downtown. Using my park access analysis as an example, converting the final suitable polygons to points and running them through a cost distance tool would be one method to use.

However, I want to allow the citizens to provide input. In the next entry in this series, I’ll use ArcGIS Server to collect volunteered geographic information (also described as crowdsourced or user-generated content), allowing users to vote on their favorite location for a new park. This concept is now being referred to as “participatory planning”.

Accessing the data and models
The data and models for this blog post can be found here
The rest of the data and tools for this blog series can be found in the Park Analysis and Design group here (make sure to filter by Show: All Content at the top of the page)

Update

Part 1 – Park analysis and design – Measuring access to parks

Part 2 – Park analysis and design: Locating a park through suitability analysis

Part 3 – Park analysis and design:  Voting on a new park location

Part 4 – Park analysis and design:  Sketching the design of a new park

Content for the post from Matthew Baker

Posted in Analysis & Geoprocessing, Editing

Measuring access to parks

Park analysis and design – Measuring access to parks (part 1)
Have you ever wondered how far you are from a park? In this post, I’ll examine the placement of parks in Redlands, California, and determine which areas are best and worst served by a park. In future posts, I’ll discuss siting a new park using binary suitability analysis, web-based tools for evaluating and increasing park access, and the design of a new park using ArcMap and feature template-based editing.

Over the last year, I’ve attended various urban planning conferences and discussed with several urban planners the need to design healthier communities, a notion I’ve heard echoing throughout the planning community.

One concern is to figure out how well areas are served by parks. In my analysis, I want to determine which areas are within one mile of a park and visualize the results in a way that is easy to understand. I chose one mile, assuming most people can visualize how long it would take them to walk a mile, but this analysis could certainly be easily altered to measure any distance and present the results in a similar manner.

To do this, I could use a simple one-mile buffer around the parks, as the first map shows. However, a map created that way does not consider modes of travel. I want to measure pedestrian access to parks, so the best route is to travel along a road, preferably on the sidewalk.

A more accurate way to measure park access is to determine the areas around the parks that fall within a specified distance of the parks along the road network. In network analysis, this is called a service area (or drive-time) analysis, but it uses the road network only.

There are tools within the Spatial Analyst toolbox to run a cost-distance analysis: essentially a distance map calculated against a surface describing how difficult it is to travel across a landscape. This gives me the ability to rank the landscape by how easy it is to travel across, road or not.

I want to then create a map showing areas that are ¼, ½, 1 mile, and greater than 1 mile from a park along the road network and show the distances on the map as well as on a graph.

Creating a travel cost surface
For my analysis, I am first going to create a cost surface that describes ease of travel through Redlands, with areas along roads being easier (cheaper) to travel through, and areas farther from roads more difficult (expensive) to travel.

To do this, I start by creating a raster surface in which every cell holds the distance from that cell to the nearest walkable road segment; walkable, because I shouldn’t have to drive a car to get to a park and can even get exercise on the way.

First, I’ll need to map the road network. From the City of Redlands roads dataset, I can simplify all the roads into three main types: minor, major (arterial), and highway.


Since pedestrians cannot safely or legally walk on the highways, I can remove them from the analysis. The first tool in the model will be the Select tool, which extracts a subset of features using an SQL statement. In this case, I’ll use Road Type not equal to Highway to exclude the highways from the analysis and create a walkable road dataset.
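
Scripted outside the model, that selection is essentially a one-liner; the dataset names and the ROAD_TYPE field are assumptions standing in for the Redlands schema.

```python
import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"   # assumed workspace

# Keep everything except highways; ROAD_TYPE is an assumed field name
arcpy.Select_analysis("Roads", "WalkableRoads", "ROAD_TYPE <> 'Highway'")
```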

Of course, this would be a good place for a better road dataset in which each street had an attribute for whether or not it is walkable. I have heard of a few communities and organizations starting to capture this data, and it would be most useful for this application.

Once I have extracted the walkable roads, I’ll run the Euclidean Distance tool to create a surface in which each raster cell holds a value for the distance between itself and the nearest road.

The Euclidean Distance tool creates a surface where every part of the study area is broken down into a square (cell), and every square is assigned the distance to the nearest road segment. I’ve adjusted the symbology to group cells into distance bands.
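
A scripted equivalent of the distance surface step might look like the sketch below, assuming a Spatial Analyst license, placeholder paths, and an arbitrary cell size.

```python
import arcpy
from arcpy.sa import EucDistance

arcpy.env.workspace = r"C:\Data\Redlands.gdb"   # assumed workspace
arcpy.CheckOutExtension("Spatial")

# Distance from every cell to the nearest walkable road, at an assumed cell size
road_distance = EucDistance("WalkableRoads", cell_size=30)
road_distance.save(r"C:\Data\Scratch.gdb\RoadDistance")
```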

Creating a cost surface
I’ll now borrow a concept from a weighted overlay (suitability) model and reclassify the road distances onto a scale of 1 to 6, where 1 is the cheapest (easiest to travel) and 6 is the most expensive (most difficult to travel). To do this, I use the Reclassify tool, which allows me to define the number of classes into which I want to reclassify the data. The Old Values column describes the distances from the Euclidean distance raster. The New Values column holds the new value assigned to each range of old distance values.

Notice I’m going to reclassify the distances using the same distance bands I used earlier to describe how far each part of town is from the nearest road. Each cell in each distance band then gets a new value describing its cost on a scale of 1 to 6.

Here are the new reclassified distances. Notice the values become more expensive when moving away from the roads.

This now becomes the cost surface that I’ll use to measure park access.
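
Scripted, the reclassification might look like the following sketch; the break values are illustrative stand-ins (in feet) for the distance bands shown above, mapped onto the 1-to-6 cost scale.

```python
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")

# Break values are illustrative stand-ins for the distance bands (in feet)
cost_surface = Reclassify(r"C:\Data\Scratch.gdb\RoadDistance", "VALUE",
                          RemapRange([[0, 500, 1],
                                      [500, 1320, 2],
                                      [1320, 2640, 3],
                                      [2640, 5280, 4],
                                      [5280, 10560, 5],
                                      [10560, 999999, 6]]))
cost_surface.save(r"C:\Data\Scratch.gdb\CostSurface")
```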

Evaluating park data
Because the park data is stored as centroid points, it may not reflect the true access points to the parks themselves. By creating points at the corners of each park, I get more suitable locations from which to measure park access.

Borrowing again from the City of Redlands dataset, I’ll simply select the parcels that intersect the park points and run those intersecting parcels through the Feature Vertices To Point tool in the Data Management toolbox.

Depending on the geometry of some of the parcels, I might end up with a little more than just the corners, but this is a much more accurate representation of how to get into the park than just a point in the middle of the parcel.
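
A rough arcpy equivalent of these two steps, with assumed dataset names, is shown below.

```python
import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"   # assumed workspace

# Select the parcels that intersect the park centroid points...
arcpy.MakeFeatureLayer_management("Parcels", "parcels_lyr")
arcpy.SelectLayerByLocation_management("parcels_lyr", "INTERSECT", "ParkCentroids")

# ...and convert their vertices (roughly, the parcel corners) to points
arcpy.FeatureVerticesToPoints_management("parcels_lyr", "ParkAccessPoints", "ALL")
```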

Calculating cost distance
Next, I’ll run the new park points against the cost surface using the Cost Distance tool in the Spatial Analyst toolbox. Using this tool, I can create a raster surface where each cell has a distance from itself to the nearest park point along the cheapest path—in this case, the cells that are nearest to the roads as described by our cost surface.

The resultant raster gives a picture of how far every location in the city is from the nearest park, which is somewhat hard to visualize as raw values. I can then reclassify the distances into simple ¼-, ½-, and 1-mile areas.
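
For reference, a scripted version of this step might look like the sketch below; dataset names and paths are again assumptions.

```python
import arcpy
from arcpy.sa import CostDistance

arcpy.CheckOutExtension("Spatial")

# Accumulated travel cost from every cell to the nearest park access point
park_cost = CostDistance(r"C:\Data\Redlands.gdb\ParkAccessPoints",
                         r"C:\Data\Scratch.gdb\CostSurface")
park_cost.save(r"C:\Data\Scratch.gdb\ParkCostDistance")
# The result is then reclassified into the 1/4-, 1/2-, and 1-mile bands using
# the same Reclassify approach shown for the cost surface.
```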

Visualizing the results
Taking the walkable road network into consideration certainly does give a much better picture of areas served by parks—and notice the areas that now show up as underserved that the buffer didn’t expose. These areas are over a mile from a park, which meets my criterion for being underserved.

In addition to mapping, I can also create a graph that visualizes the percentages of the city that are served by parks by their respective distances.

Using the graphing tools in ArcMap, I can create a new field to hold the percentage, calculated by dividing the area of each feature in my walkability analysis by the total area of the city (stored as a variable in the model). I can create a table that stores the output values of my reclassification (1, 2, 3, 5, 9) and their respective labels (500’, ¼ Mi, ½ Mi, 1 Mi, and More than 1 Mile) and join that table to my walkability output. It’s an extra step, but one that can be repeated if my underlying data changes and I want to run the analysis again.
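
As a hedged sketch of that bookkeeping, the percentage field could be calculated as follows; the layer name, field names, and the city-area value are placeholders.

```python
import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"           # assumed workspace

city_area_sq_ft = 1.0e9   # placeholder for the city-area variable from the model

# Each band's share of the total city area, assuming the reclassified result
# has been converted to polygons with a Shape_Area field
arcpy.AddField_management("ParkAccessPolygons", "PctOfCity", "DOUBLE")
arcpy.CalculateField_management("ParkAccessPolygons", "PctOfCity",
                                "!Shape_Area! / {0} * 100".format(city_area_sq_ft),
                                "PYTHON")
```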

Now that I have identified that there are areas underserved by parks, the task of my next blog post will be to determine the best location for a new park using a simple binary suitability analysis.

Data credits
Data is provided by the City of Redlands. The data and models for this blog post can be found here

Update

Part 2 – Park analysis and design: Locating a park through suitability analysis

Part 3 – Park analysis and design:  Voting on a new park location

Part 4 – Park analysis and design:  Sketching the design of a new park

Content for the post from Matthew Baker

Posted in Analysis & Geoprocessing, Editing

New release of Infrastructure Editing Map

This week, we posted a new release of the Infrastructure Editing Map.  This release includes many enhancements you’ve requested and addresses several problems. 

You’ll find the June 20, 2011 release of the Infrastructure Network Editing template for ArcGIS 10 addresses the following:

New Functionality

1.    Redesigned isolation trace routine to incorporate new functionality

2.    Added a new Secondary trace that uses the trace point from an isolation trace, disables all valves selected, and reruns the trace

3.    Added a new configuration file tag for the angle of the dogleg: <Hook_Angle>45</Hook_Angle>

4.    Added a new configuration file tag for the operable values of a valve: <add key="TraceIsolation_Operable_Values" value="0|1" />

5.    Added a new configuration file tag for additional SQL Query for remove valve used in the trace

6.    Added a new option to list multi value class for the isolation trace

7.    Redesigned the Profile Graph and exposed labels for configuration

8.    Redesigned the Update_Intersecting_Feature so it does not stop after processing one intersecting feature; it now runs on all intersected features

9.    Added a new option to subtype rules that allows users to list multiple subtypes, or all subtypes, in a list

10.    Redesigned how rules are processed and evaluated to enhance performance

11.    Added the On_Manual flag to the Dynamic Value table to allow users to specify that a rule only runs when they click the new button. Note: The “new button” has been added to the end of the Infrastructure Editing toolbar.

12.    Added a RUNORDER field to the DynamicValue table to allow users to list a hierarchy to how the rules are processed

13.    Added a new configuration file tag to use an Envelope to do spatial searches and not the geometry

14.    Redesigned the INTERSECT_STATS and JUNCTION_ROTATION methods in the Dynamic Value table

15.    Added several new methods to the Dynamic Value table (FROM_EDGE_STATS, TO_EDGE_STATS, FROM_EDGE_MULTI_FIELD_INTERSECT, TO_EDGE_MULTI_FIELD_INTERSECT, FEATURE_STATS, MINIMUM_LENGTH, NEAREST_FEATURE_ATTRIBUTES, SPLIT_INTERSECTING_FEATURE, MULTI_FIELD_INTERSECT, and INTERSECT_STATS)

16.    Redesigned the Generate ID and Generate ID by Intersect rules to use a new GenerateID table

17.    Redesigned the installation location of the configuration file to now be installed in the user directory. Note: it is copied there the first time ArcMap is opened once the tools are installed.

18.    Enhanced the refresh rate of the map when using add laterals and traces

19.    Added an option to set search distance on Add Laterals

20.    Added an option to search the mains featureclass or the featurelayer in the add laterals

21.    Added support for multi add lateral rules defined on the same point layer

22.    Enhanced the X_COORDINATE, Y_COORDINATE, LAT, and LONG rules to allow users to choose the centroid, start, or end coordinate

23.    Enhanced the elevation layer in the profile graph to make it now optional

24.    Added a new rule “CREATE_LINKED_RECORD” to create a new record in a table and create a relationship to that table using the primary key of the edited record

25.    Added a new rule “INTERSECTING_LAYER_DETAILS” to extract the details from the intersecting layer

26.    Simplified the Network Editing map document

27.    Added the latest LocalGovernment.gdb and Data Dictionary

Resolved Problems

1.    Resolved an issue with multiple return values in a geocoder

2.    Resolved an issue with Create Laterals with a dogleg that prevented the lateral from snapping to the main

3.    Resolved an issue with Lat and Long rules to correct coordinate values returned

4.    Resolved an issue with intersecting rules to use feature geometries and handle mixed projections

5.    Resolved an issue with Intersect_Layer_Details and network path layers

6.    Resolved an issue with the CREATE_LINKED_RULE that prevented a trigger from being fired when a create event was fired

7.    Resolved an issue with the add laterals that caused issues with connection to the geometric network

8.    Resolved an issue with Generate_ID_By_Intersect rule that caused an issue when the ID field was not a string field

9.    Resolved an issue with the JUNCTION_ROTATION rules that precluded users from applying additional spin angle

10.    Resolved an issue with the construction tools that precluded them from registering properly

11.    Resolved an issue with split lines at click location and when clicking for a profile trace

12.    Resolved an issue with the fixed EXPRESSION rule

13.    Resolved an issue with the Attribute Assistant that precluded the EXPRESSION rule from being used more than once

14.    Resolved an issue that precluded a prompt for the debug log file location when opening an ArcMap document

15.    Resolved an issue that caused additional edits when opening an ArcMap document

16.    Resolved an issue with the Municipal Boundary layer in the InfrastructureEditing.mxd and associated documentation

17.    Resolved an issue that precluded the Attribute Assistant from interacting with layers in the basemap group layer

18.    Resolved an issue with the Connection Closest function that created features in the wrong order

19.    Resolved an issue with the Merge Features function that introduced features from incorrect feature classes

Known Issues

N/A

The update to the Network Editing app is a major release that includes direct feedback we have received from users. We encourage you to download these updates and tell us how they can improve the management of your water, sewer, and storm water infrastructure.

Posted in Local Government

The Road Ahead for ArcGIS

The primary theme for ArcGIS 10.1, which is expected to be released in early 2012, is sharing and collaboration. Users will find that this release makes it simpler to put mapping and geospatial analytics into the hands of more people without requiring that they be GIS experts. ArcGIS users will be able to deliver any GIS resource, such as maps, imagery, geodatabases, and tools, as a web service. The ability to access these services will be built into ArcGIS, as well as into any application built with one of the ArcGIS APIs.

With this release, cloud computing—both public and private clouds—will play an increasingly critical role in how users get their work accomplished. ArcGIS software will take advantage of the powerful, scalable, and ubiquitous nature of cloud infrastructures to store and distribute geospatial content. Users will be able to easily package their maps and layers and make that content available to staff, stakeholders, partners, or the public via online groups while maintaining complete control and ownership of their content. Additionally, users will be able to quickly deploy GIS servers in the cloud when they need them as fully functional production systems for publishing services and supporting desktop, mobile, and web applications.

At 10.1, ArcGIS for Server will run natively on 64-bit operating systems. Users will notice significant performance improvements for activities such as web editing, map caching, spatial analysis, finding addresses, and using imagery.

Imagery will also be better integrated into the core of ArcGIS. ArcGIS will not only make it simpler to use imagery but also support more imagery sources, as well as lidar and radar.

An exciting addition to ArcGIS at 10.1 will be ArcGIS Runtime, which lets developers create and deploy focused, stand-alone GIS applications for desktop users, who have been asking for a small, lightweight deployment that, in terms of capabilities, fits between ArcGIS Engine and the ArcGIS Web Mapping APIs. The new runtime is designed for both desktop and cloud development. It has a fast display and does not require installation; it can be run directly from a CD. The learning curve for the new runtime is expected to be very gentle for developers familiar with the web APIs.

In addition to these enhancements to ArcGIS, Esri has also concentrated on providing core GIS tools to help users create better maps. These tools include dynamic legends, contextual generalization, the ability to track edits, parcel editing tools, analysis tools, and a whole lot more.

Finally, as Esri moves toward ArcGIS 10.1, Python is becoming foundational to ArcGIS. It essentially bridges the gap between GIS analysts and programmers.

We will be sharing more and more about what’s coming in ArcGIS 10.1 over the next few months, so check back often.

Posted in Analysis & Geoprocessing, ArcGIS Online, Developer, Editing, Geodata, Mapping

ArcGIS Editor for OpenStreetMap

The ArcGIS Editor for OpenStreetMap is a free, open source add-on for ArcGIS for Desktop that helps you become an active member of the growing OpenStreetMap (OSM) community. OpenStreetMap is an open and freely available database of geographic data. The editor makes it easy for you to download OSM data, make changes to the dataset, and contribute those changes back to the entire OSM community.

The OSM Editor provides

  • Simple tools to upload and download OSM data
  • An OSM-compatible geodatabase schema to locally store OSM data
  • An OSM symbology template for faster editing
  • Conflict-resolution tools for reconciling data back to the OSM database

The ArcGIS Editor for OpenStreetMap, its documentation, and its source code are available on CodePlex, Microsoft’s open source repository, and released under the Microsoft Public License.

 

For more information, read the documentation and discussions on CodePlex.

Download location

Posted in Editing

Making Data and Map Production Better and Faster with Esri Production Mapping

For our first Production Mapping blog topic, we wanted to talk about what this product named “Esri Production Mapping” is and how it helps users. Let me first set the stage by talking about the common challenges that GIS organizations are facing today and then go on to describe Production Mapping’s role in solving them.

We all know that GIS organizations make up the core of the geospatial infrastructure, creating the foundational spatial information for GIS systems. These organizations are essentially the ‘authoritative content producers’ who have the responsibility of providing high quality, accurate geospatial data and maps that other organizations, and sometimes even lives, depend on.


Posted in Uncategorized

Welcome to the new Esri Production Mapping Blog!

The Production Mapping Team is pleased to announce the launch of the new Esri Production Mapping Blog! This blog is an important extension of the software and is designed to bring you the latest information about Esri Production Mapping.

This is a virtual space for communication, collaboration, and knowledge sharing. Feel free to interact and post your own tips, comments, and questions. We’ll use your input to help guide us in our own efforts to better serve you. We’ll be posting new topics continuously, so make sure to check back regularly, or feel free to subscribe to our RSS feed by simply clicking on the link to the left.

We hope you’ll find this blog dynamic, engaging and instructive. We appreciate the important work you do. For those new to Production Mapping, check out:

Posted in Uncategorized

Updating a feature's shape by sketching with the Continue Feature tool

ArcGIS 10 Service Pack 2 includes the new Continue Feature tool. Continue Feature allows me to resume digitizing an existing feature so its shape can be updated by sketching. For example, I can use Continue Feature to extend a line or add a part to a multipart feature by updating a feature’s edit sketch. These workflows were not originally available in ArcGIS 10 and required installing an Esri code sample to perform. However, they have been reincorporated and are now built into the software with the introduction of the Continue Feature tool.

Once I install ArcGIS 10 SP2, I need to add the Continue Feature tool to a toolbar from the Customize dialog box since it is not on the ArcMap interface by default. I click the Customize menu > Customize Mode, drag Continue Feature from the Commands tab, and drop it onto a toolbar. I can put it on any toolbar I want, but I like to add it to the Edit Vertices toolbar. This toolbar appears automatically when I am editing the shapes of features with the Edit tool, which is when I will be using Continue Feature.

Extending a line by sketching
One of the most common workflows I can accomplish with Continue Feature is to extend a line by adding new segments from the last vertex at the end of the existing sketch. Previously, in ArcGIS 10, I either needed to use the Esri code sample or create a new line and merge it with the existing one. Now, I can easily do this with Continue Feature.

I have a roads layer that needs to be updated to include a newly built portion. To extend the line, I double-click it with the Edit tool and click Continue Feature. When I do this, I see that the sketch is reversed from the direction I need to extend it. Right now, the last vertex in the sketch (shown in red) is actually at the location that should be the start point of the line when extended. I can right-click directly over the sketch and click Flip to change the line’s direction so I can digitize in the proper direction.

Similar to creating a new feature, I have access to any available sketch construction methods on the Feature Construction mini toolbar. The new segments do not need to be straight; I could create the line with curves or by tracing the shapes of other features. I also can use keyboard shortcuts or right-click to access a menu of commands to help place vertices in the sketch. I need to finish the sketch when I am done digitizing the additional segments. Any attributes from the original feature are maintained in the updated feature.

Adding a part to a multipart feature
Continue Feature allows me to add a part to an existing feature to create a multipart polygon, line, or point (multipoint) feature. A multipart feature contains more than one part on the map but references one row in the attribute table. I have a layer containing wildlife habitats that needs a new zone added to it. This habitat should be represented as a multipart feature since it contains polygons that are geographically separated but have the same ID and ecological characteristics. I can use Continue Feature to digitize the new part without having to create a new feature and merge it with the existing one.

As with extending lines, I use the Edit tool to double-click the habitat polygon and click Continue Feature. When I am creating polygons or adding points to a multipoint feature, I just begin digitizing the new part. To create multipart lines or add even more parts to the polygon, I can right-click, click Finish Part, then continue digitizing the parts. I can also hold down the SHIFT key and double-click the last vertex as a shortcut to finishing the part.

I can now sketch the shape of the new part using any available construction method or shortcut. In this case, the coordinates of the habitat corners were obtained from a GPS track, so I can use absolute x,y to enter the vertices. When I am done, I finish the sketch to update the feature’s shape and make it a multipart feature. The original habitat zone’s attribute values are preserved and associated with both parts of the feature.

For more information about the Continue Feature tool, see Extending a line by sketching and Adding a part to a multipart feature in the ArcGIS Desktop Help.

Content provided by Rhonda from the Editing Team

Posted in Editing

Considerations for ArcGIS Server developers: A look toward 10.1

Not long ago, we published an early deprecation notice announcing upcoming changes to ArcGIS Server in its 10.1 release. Reading between the lines, many of you realized that in the future, ArcGIS Server will become purely a server of GIS Web services. This is an architecture that has many advantages, but considering the code you may be accustomed to writing on top of local (DCOM) connections, it will bring significant changes to the way you think about developing with ArcGIS Server. In particular, the 10.1 changes affect applications built with the Web ADF that make use of ArcObjects and nonpooled services.

This post is an early discussion of the path to move to a pure Web services pattern for communicating with ArcGIS Server. We will also disclose some additional information regarding our ArcGIS 10.1 plans so you have a better context to some of the changes in the deprecation notice. The intent of this post is not only to help you evaluate the impact of ArcGIS 10.1 on your existing applications, but also to outline and promote a Web services pattern in your development that aligns with the ArcGIS Server road map.

When ArcGIS Server first hit the market, we provided two hooks into it: you could access its Web services through HTTP, and you could also use local connections over DCOM. Local connections were handy when out-of-the-box Web services did not give you all the functionality you needed. In fact, through local connections, you could execute ArcObjects code on your server that your Web application could use. While the concept of executing ArcObjects logic in the server remains a powerful concept, the way in which this happens is changing.

At ArcGIS Server 9.3, we introduced new APIs for the development of Web applications. These APIs (for JavaScript, Flex, and Silverlight) did not use local connections at all. Instead, their only hook into the server was Web services. All these APIs invoked GIS functions through stateless, RESTful Web services.

This new Web services architecture is a simpler one. It results in easier-to-maintain applications, and faster and more scalable deployments. As of today, most ArcGIS Server development is happening through these APIs.

A particular point of concern for many developers revolves around ArcObjects access. The pattern for accessing ArcObjects functionality in the Web ADFs is widely known, but how do you get to ArcObjects from the ArcGIS Web Mapping APIs? And how will ArcObjects access occur in ArcGIS 10.1 with the deprecation of local connections?

In this post we’ll describe specific scenarios where Web ADF developers have traditionally used ArcObjects programming and local connections to ArcGIS Server. For some of these scenarios, we’ll describe how you might accomplish the same thing using alternative techniques. Where appropriate, we’ll also describe how you could address these scenarios using a Web services programming pattern with ArcObjects.

Reasons people use ArcGIS Server local connections…and some alternatives

As we browse the user forums and attend Esri conferences, we hear several common reasons that developers still use local connections in their Web applications. Here are a few of those reasons, accompanied by some alternatives.

Creating layouts for printing

We’ve met some developers who have used ArcObjects to access the Layout object of the map service, specifically for putting high quality printing functionality in their Web applications. They use ArcObjects to work with ArcMap-quality maps and their surrounding elements, generate printable documents in large formats, and so on.

At ArcGIS 10.0, we recommend you use the arcpy.mapping Python module to script your layout construction and map printing. Then, expose the script through a geoprocessing service. With arcpy.mapping, you can manipulate map documents to a very fine degree: you can add layers dynamically to the map, update their symbology, and more. You can also access the layout of the map, manipulating elements such as text and images.

See this blog post for a brief introduction to the arcpy.mapping module and its use within ArcGIS Server.
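
To give a feel for the pattern, here is a minimal, hypothetical sketch of an arcpy.mapping print script of the kind described above; the map document path, the layout element name, and the output location are assumptions rather than a published sample.

```python
import arcpy

# Open a print-ready template map document (path is an assumption)
mxd = arcpy.mapping.MapDocument(r"C:\Maps\PrintTemplate.mxd")

# Update a layout text element before export; "Title" is an assumed element name
for elem in arcpy.mapping.ListLayoutElements(mxd, "TEXT_ELEMENT"):
    if elem.name == "Title":
        elem.text = "Park Access Map"

# Export a printable PDF and hand the path back as the geoprocessing task output
out_pdf = r"C:\Temp\ParkAccess.pdf"
arcpy.mapping.ExportToPDF(mxd, out_pdf)
arcpy.SetParameterAsText(0, out_pdf)
```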

As we move on to ArcGIS Server 10.1, keep in mind that we will enhance the arcpy.mapping capabilities. Specifically, we plan to make it very easy to dynamically load the contents of a Web mapping application (including map services and graphics) into a map document using arcpy.mapping.

Changing symbols and renderers

Another reason developers use local connections is to change the symbology of a particular layer in a map service. This workflow often required the use of non-pooled services, limiting the scalability of your applications. Some developers managed to do this with pooled services, although switching the state of a service back and forth to satisfy requests from multiple users often resulted in poor performance and left a great deal of responsibility in the developer to keep a healthy state on the map instance itself. Let’s discuss a couple of alternatives here.

The ArcGIS Web APIs give you an easy way to symbolize features using a client-side feature layer or graphics layer, whose rendering properties can be changed at any time. The idea is that the geometry and the attributes of visible features are all downloaded to the client, so it’s easy for the client to draw the features using any colors, widths, or class breaks defined by the application developer.

The feature layer technique is particularly effective for thematic mapping, interacting with and highlighting features, and so on; but it falls short when you are working with thousands of features or very complex polygon features. In those cases, the best approach is to request symbology changes at the service level and have the map service render the map image. This currently requires ArcObjects.

ArcGIS Server 10.1 will enhance the map service such that you can alter the contents and symbology of the map at runtime (like you could with ArcIMS). You will no longer need to use nonpooled services and fine grained ArcObjects to change symbology on your map service layers. Instead, you will be able to decide, on a per request basis, the contents or symbology to be used in the map you want to create.

This enhancement will make your applications significantly more scalable, while simplifying development and maintenance. Defining the symbology of your layers in the map service at runtime will be done by including the renderer information in your Web service request to draw the map, along with the typical information such as visibility of layers, extent, and so on. All the Web Mapping APIs will include utility classes so you can easily define content, renderers, class breaks, symbology, and so forth.

Web editing

In the early releases of ArcGIS Server, editing data over the Web had to be accomplished purely through ArcObjects custom code leveraging local connections. In version 9.2, an Editor Task was introduced in the Web ADF allowing basic editing such as creating, moving, and deleting features. Any customization of this task or the creation of editing tools from scratch still required extensive ArcObjects programming.

The REST-based APIs did not initially expose Web editing; however, with the introduction of the feature service at ArcGIS Server 10.0, editing is now possible through these APIs. Not only is editing possible through REST, it’s convenient to customize because many common editing methods such as cut, trim, extend, auto-complete polygons, and reshape are exposed through the REST implementation of the geometry service. Finally, when you edit with REST, you can use pooled services. This is a huge boon to performance.

To get an idea of how Web editing works through REST, we recommend this article on editing using the ArcGIS API for JavaScript. Most of the concepts also apply to the ArcGIS API for Flex and the ArcGIS API for Silverlight/WPF.

There is one workflow for which support will not be continued in ArcGIS 10.1: long transactions. In the Web ADF, you can leverage nonpooled services to perform edits following a long transaction model. In essence, you can start updating features and roll back at any time. With the feature service, all operations are stateless, meaning that you can’t really roll back at the database level (although you could with business logic in your application). A long transaction model for Web editing is one of the very few workflows where the new architecture of ArcGIS Server 10.1 will not offer alternatives.

It is important to highlight that lack of support for long transactions does not preclude you from implementing undo-redo operations. In fact, in the 2.2 version of the ArcGIS Web APIs, undo-redo operations are possible right from the API at the application level.

Another unique capability that nonpooled services give you is the ability to change versions while editing. This allows, for example, Web users to store their changes in different versions that can be reconciled later from ArcGIS Desktop. While feature services support editing on versioned data (as well as non-versioned), it is not possible to switch the version at runtime in ArcGIS Server 10.0. That is, the only version that Web clients can edit through a feature service is the one that was used when authoring the map service itself. For the 10.1 release, we are aiming to enhance feature services so you can efficiently target a version for your edits at runtime from any web application.

Performing business logic

Some GIS applications run a specific series of tools to perform advanced GIS business logic. These tools might predict timber yield from a forest, identify suitable sites for a restaurant, or estimate where a toxic cloud could spread. Many developers went the ArcObjects route to accommodate this.

In many cases, these processes can be expressed in ArcGIS ModelBuilder, where you can graphically “chain” tools together. These geoprocessing models can be exposed as Web services and consumed from your Web applications. Go here for a quick introduction to geoprocessing in ArcGIS Server. The benefits of this are obvious: Using a geoprocessing service can save you a lot of coding. Also, you can take advantage of the asynchronous execution of geoprocessing services, which is challenging to achieve by writing your own ArcObjects code.

Custom geoprocessing tools

Aside from the flexibility of having hundreds of out-of-the-box tools that you can combine in ModelBuilder, geoprocessing gives you the ability to develop custom tools. The simplest way is to create Python scripts that can either run on their own or in combination with other tools within a model. We described an example of this earlier with the use of arcpy.mapping to create high-quality maps over the Web. For general guidance on building Python script tools for the geoprocessing environment, go here.
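
To illustrate, a Python script tool can be as small as the hedged sketch below; the parameters and the buffer operation are placeholders for whatever business logic your tool actually performs.

```python
# A tiny sketch of a custom Python script tool; the parameter order and the
# buffer logic are illustrative, not a published sample.
import arcpy

in_features  = arcpy.GetParameterAsText(0)   # input feature class
distance     = arcpy.GetParameterAsText(1)   # e.g. "1 Miles"
out_features = arcpy.GetParameterAsText(2)   # output feature class

arcpy.Buffer_analysis(in_features, out_features, distance)
arcpy.AddMessage("Buffered {0} by {1}".format(in_features, distance))
```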

For even further control, you can go beyond Python to create a custom geoprocessing tool with C#, VB.Net or C++. This allows you to embed your own fine-grained ArcObjects logic within your models. Here is a help topic with details on how to create a custom geoprocessing tool with C#.

Whether you use Python or another language, the advantage of creating custom tools is that you can reuse them in different workflows, since they will behave just like any other out-of-the-box tool. Additionally, your ArcObjects or Python code can execute within the asynchronous execution model of geoprocessing services, which is quite handy for long-running processes.

The geometry service

Geoprocessing services are handy, but just like you should not jump into ArcObjects development when something can be easily achieved with geoprocessing, it’s wise to avoid jumping into geoprocessing if out-of-the-box services can already do the job. You may check to see if the SOAP or REST-based geometry service offers the methods that you need. A geometry service can perform basic GIS operations such as buffering, determining spatial relationships, measuring lengths and areas, and so on. Calling a series of methods on a geometry service and combining that with the query capabilities of map services and client side logic can be simpler and faster than using a geoprocessing service. In this white paper we explore a scenario comparing the use of the Geometry service versus a geoprocessing service for solving a simple selection by distance problem.
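
As a rough illustration of how lightweight a geometry service call can be, here is a hedged Python sketch of the REST buffer operation; the server URL, coordinates, and distances are placeholders.

```python
# A hedged sketch of calling the geometry service's REST buffer operation.
import json
import urllib
import urllib2

geometry_service = ("http://myserver/arcgis/rest/services/"
                    "Geometry/GeometryServer/buffer")   # hypothetical URL

params = urllib.urlencode({
    "geometries": json.dumps({
        "geometryType": "esriGeometryPoint",
        "geometries": [{"x": -117.18, "y": 34.05}]
    }),
    "inSR": 4326,
    "outSR": 4326,
    "bufferSR": 3857,            # buffer in a projected coordinate system
    "distances": 1000,           # example distance
    "unit": 9001,                # esriSRUnit_Meter
    "f": "json"
})

response = urllib2.urlopen(geometry_service, params)
print(json.load(response)["geometries"])   # buffered polygon(s)
```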

For simple problems, geoprocessing and ArcObjects are too heavy-handed a solution. You might save yourself some work by studying and understanding what is possible with map service queries, geometry service calls, and so on.

Geoprocessing models are generally appropriate for long-running and complex processes. As a developer, you can always extend this framework with Python and custom ArcObjects geoprocessing tools to tackle the most challenging problems. In the next section, we discuss yet another alternative, which may be the way to go for tasks that need to run very quickly.

Server object extensions

If you have well-defined business logic that you want to execute quickly, an alternative approach is to develop a server object extension (SOE). SOEs are an advanced developer option that allow you to expand the base functionality of ArcGIS Server services. SOEs have two great advantages:

  1. SOEs can be exposed as SOAP or REST Web services, allowing clients built with the ArcGIS Web APIs (for JavaScript, Flex, Silverlight, iOS, etc.) to easily invoke them. In fact, your SOEs will appear in the ArcGIS Services Directory and can expose the typical object types that the ArcGIS APIs understand, such as feature sets, primitive types, and so on.
  2. SOEs encapsulate ArcObjects efficiently, and provide an ideal environment to execute your calls very quickly.

You might build an SOE to extract an elevation profile from a raster, or work with dynamic segmentation to get the location of a mile marker. In these cases, SOEs expose functionality that is either not available through any other means (through out of the box geoprocessing tools or other ArcGIS Server services) or functionality that needs to be executed quickly.

Here is an example showing an SOE being used from an ArcGIS API for Flex application. This SOE uses the 3D Extension libraries in ArcGIS Server to perform a quick terrain profile. In this particular scenario, we did not find geoprocessing tools that could help us with the profile generation, so we knew we were going to write our logic in ArcObjects. Given that this is a very quick operation not requiring asynchronous execution, we decided to use an SOE.

SOE development requires knowledge of ArcObjects, .NET or Java, and Web service communication technologies such as REST and SOAP. The ArcObjects SDK has several samples that you can examine, both for Java and .NET.

If you plan on developing new SOEs, keep in mind that in ArcGIS Server 10.1 you will be required to analyze your map documents before publishing them and fix any errors that appear. Thus, one of the things you can do to get your SOEs ready for 10.1 is to make sure they work against MSDs at 10.0. Building SOEs on top of MSD-based map services with ArcGIS Server 10.0 is perfectly possible. The biggest requirement is that you avoid using MXD-specific interfaces (such as IMap, ILayer, anything to do with page layouts, and so on). Here is a blog post describing how you can use IMapServerDataAccess to get to the data sources of an MSD-based map service.

We realize that accessing layers and layouts is handy when you want to do some of the things we described earlier, like changing the symbology of layers, high quality printing, and so on. Much of the new functionality in 10.1 will help you through the places where it’s currently challenging to make an MSD-compliant SOE. By looking at some of the scenarios described in this post such as map printing through the arcpy.mapping module and changing the symbology of layers, you will begin to understand how to properly address your workflows at 10.1.

Conclusion

This post offers important direction for ArcGIS Server developers as you make plans for current and future releases. Information provided is based on the current status of our 10.1 work and is subject to adjustment during the 10.1 beta cycle. We look forward to communicating with you further on these and other topics of interest to Server developers. Please use the comments section of this blog post to leave questions.

Contributed by Ismael Chivite of Esri Product Management and Sterling Quinn of the ArcGIS Server development team

Posted in Services