Tag Archives: Analysis
The Query Analysis Add-In is available for download. At 10.1, it expands raster support to include building complex queries over multiple single-band rasters. The add-in is designed to rapidly create a query that consists … Continue reading
The multiprocessing Python module provides functionality for distributing work between multiple processes on a given machine, taking advantage of multiple CPU cores and larger amounts of available system memory. When analyzing or working with large amounts of data in ArcGIS, … Continue reading
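The pattern the post describes boils down to a worker function mapped over a process pool; a minimal sketch (the worker function and tile names are invented placeholders, not code from the post):

```python
# Minimal multiprocessing sketch: farm independent jobs out to a pool
# of worker processes. The worker and inputs are hypothetical.
import multiprocessing

def process_tile(tile_path):
    # Each worker would run its own (geo)processing on one tile here.
    return tile_path, "done"

if __name__ == "__main__":
    tiles = ["tile_1.tif", "tile_2.tif", "tile_3.tif", "tile_4.tif"]
    pool = multiprocessing.Pool()  # one worker per CPU core by default
    results = pool.map(process_tile, tiles)
    pool.close()
    pool.join()
    print(results)
```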
This blog describes the use of geoprocessing tools, including the Solar Radiation Graphics tool in the Spatial Analyst extension, for evaluating poorly performing weather stations that both recorded and were powered by solar radiation. Overview In order to stimulate economic … Continue reading
Imagery analysts frequently have to measure features and determine their heights. At ArcGIS 10.1, the Image Analysis window provides tools for taking measurements, such as building heights, directly from imagery. The process of making such measurements on imagery is referred to as mensuration. Mensuration tools apply geometric rules to find the length of lines, surface areas, or volumes using information obtained from lines and angles. Mensuration can include measuring the height and absolute location of features. Any georeferenced raster dataset can provide distance, area, point, and centroid-location measurements. Height measurements can be obtained when the sensor model is known. Sun angle information is required for measurements using shadows, while 3D measurements require a DEM.
This image shows how you could use the Base to Shadow tool to find the height of a building. The height is calculated by selecting a point at the base of the building and the corresponding point at the top of the shadow. For more information on the Mensuration tools and how to use them, see the ArcGIS Online Help.
Contributed by Natalie Campos.
Park analysis and design: Sketching the design of a new park (part 4)
In my previous blog post, I used a voting application that allowed citizens to vote on their favorite location for a park, based on choices derived from a suitability analysis. With ArcGIS Server, their choices went into a database so the candidate parks could be ranked by popularity. We have a winner, so now our task is to design the new park. Continue reading
Park analysis and design: Voting on a new park location (part 3)
In my previous blog post, I determined suitable locations for a new park by analyzing a series of datasets provided by the City of Redlands. The final output showed a number of parcels that matched the standards established in the model. The next task is to seek feedback from the public. To do this, I’ll take advantage of a web application I developed using ArcGIS Server.
Preparing the data
The park suitability model resulted in an output of a feature class containing many multipart features. A multipart feature, as the name suggests, is a feature with multiple parts. Think of Hawaii as one feature (state) with multiple parts (islands). To break the suitable areas for parks into separate features, I’ll use a tool called Multipart To Singlepart.
With every parcel being its own feature, I can calculate the area for each potential site by creating a field and using Calculate Geometry in the attribute table. Once I have the area in acres, I need to convert all the polygons to points using the Feature to Point tool so I can represent each park as a point location in the web application.
The final dataset contains fields for the park’s area and an identification number, which I derived by copying the OBJECTID to a field called ParkID. This number is used to link the park feature to the voting results table, which also has a field for the ID named ParkIDVoted (so I can distinguish it in the Flex code).
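Outside ModelBuilder, these preparation steps could also be scripted with arcpy. A rough sketch, with all paths and dataset names assumed for illustration:

```python
# Rough arcpy sketch of the data preparation steps above.
# All paths and dataset names are assumptions for illustration.
import arcpy

arcpy.env.workspace = r"C:\data\parks.gdb"  # hypothetical geodatabase

# 1. Break multipart suitable areas into individual features.
arcpy.MultipartToSinglepart_management("suitable_areas", "suitable_parcels")

# 2. Add an acreage field and calculate it from the geometry.
arcpy.AddField_management("suitable_parcels", "Acres", "DOUBLE")
arcpy.CalculateField_management("suitable_parcels", "Acres",
                                "!shape.area@acres!", "PYTHON")

# 3. Represent each candidate park as a point for the web application.
arcpy.FeatureToPoint_management("suitable_parcels", "park_points", "INSIDE")

# 4. Copy OBJECTID into a stable ParkID field for linking to the votes table.
arcpy.AddField_management("park_points", "ParkID", "LONG")
arcpy.CalculateField_management("park_points", "ParkID",
                                "!OBJECTID!", "PYTHON")
```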
Building the web service and application
I’m developing my application using the ArcGIS API for Flex, so I first check whether there are any existing samples I can use as a starting point for collecting votes. I find the Editing a related table sample, which demonstrates a similar scenario that I can modify for my own project. This sample takes a set of incidents (stored as points) and allows the user to flag an incident as important. In the code, there’s a map service that holds the points, as well as a table to hold the results. In the geodatabase, these are linked using a relationship class. These datasets need to be in an ArcSDE geodatabase with feature access enabled to allow web editing. I can set up my data this way and publish it with ArcGIS Server, which exposes the parks and the table as layers in a map service.
I need to change a few things in the sample to customize it for my own application: the URL of the parks layer and the URL of the table holding the votes. Some field names are different, but other than that, the logic of casting the vote is fairly straightforward.
In terms of the interface, the sample shows how to use the pop-up window (infoWindow) when a park is clicked. I used the same thumbs-up icon and added a bit more information to the information window. Additionally, I published the park access map and the final suitable parcels layers, which can be turned off and on in the application using simple Flex components.
Submitting a vote
When users find a park they are interested in, they click its icon on the map. This sends a query to the server using the x,y location of the map click, which also triggers a relationship query that retrieves the matching records in the related table. The infoWindow then displays the ID of the clicked park, the park size, and the current count of related records, which are votes in favor of this location.
To vote for this park, the user clicks the thumbs-up icon, which sends a message to the server (applyEdits) that writes the ID of the park, plus a value for “like,” into the related table through the relationship class. The count increases by one, and the updated vote total can be seen immediately.
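Behind the Flex code, casting a vote is a standard applyEdits request against the feature service’s REST endpoint. A hedged Python sketch of the equivalent call (the service URL, layer index, and field names are assumptions):

```python
# Hedged sketch of the applyEdits REST call made when a vote is cast.
# The service URL, layer index, and field names are assumptions.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical votes table exposed as layer 1 of a feature service.
url = ("http://example.com/arcgis/rest/services/"
       "Parks/FeatureServer/1/applyEdits")

vote = [{"attributes": {"ParkIDVoted": 12, "Vote": "true"}}]
data = urlencode({"adds": json.dumps(vote), "f": "json"}).encode()
print(json.load(urlopen(url, data)))
```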
Counting the results
On the server, the related table collects the votes. Each record in the table is a vote, which includes the Park ID the user clicked, an attribute for the vote (“true”), and the date of the vote.
When the voting period is over, I can run a summary on the final table using the Summary Statistics tool. This counts the number of records with the same ID and creates a table, which I can then build a report on using the new reporting tools in ArcGIS 10.
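That summary is a single call when scripted; a sketch assuming the table and field names used above:

```python
# Sketch of counting votes per park with the Summary Statistics tool.
# Table paths and field names are assumptions based on the post.
import arcpy

arcpy.Statistics_analysis(
    in_table=r"C:\data\parks.gdb\votes",        # related votes table
    out_table=r"C:\data\parks.gdb\vote_totals",
    statistics_fields=[["ParkIDVoted", "COUNT"]],
    case_field="ParkIDVoted")                    # one row per park
```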
Now that I have a winner, the next task is to design the park using the sketching tools in ArcGIS 10. I will cover this in my next blog post.
Accessing the Data
The data, Flex source code, report template, and a few other parts of the workflow can be found here.
The rest of the data and tools for this blog series can be found in the Park Analysis and Design group (make sure to filter by Show: All Content at the top of the page).
Content for the post from Matthew Baker
In my previous blog post, I analyzed park accessibility in the City of Redlands and discovered several areas of the city that were farther than one mile from an existing park along the walkable street network. Now, I want to determine where to best locate a new park within the areas I identified as being underserved by current parks.
To answer this question, I’ll conduct a suitability analysis to find parcels that are most appropriate for a new park.
There are two main types of suitability analysis: binary and weighted. Binary suitability analysis produces a binary final answer: 1 or 0, or in our case, suitable or unsuitable. A weighted suitability analysis allows for a range of final answers, from 1 to 10, for example, and allows certain layers to have more influence (weight) on the result of the model. For this example, I’m going to create a binary suitability analysis model.
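To make the distinction concrete, here is a tiny sketch contrasting the two approaches (the criteria, scores, and weights are invented for illustration):

```python
# Invented scores for three criteria at one candidate parcel.
near_school, near_trail, far_from_highway = 1, 1, 0  # binary criteria

# Binary: suitable only if every criterion is met (1), else unsuitable (0).
binary_result = near_school * near_trail * far_from_highway  # -> 0

# Weighted: each criterion scored 1-10 and weighted by importance.
scores = {"school": 8, "trail": 6, "highway": 3}
weights = {"school": 0.5, "trail": 0.3, "highway": 0.2}
weighted_result = sum(scores[k] * weights[k] for k in scores)  # -> 6.4
```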
As with our park accessibility analysis, I’ll start with several datasets from the City of Redlands, including parks, schools, roads, trails (off-road and on), existing and proposed bicycle lanes, and vacant parcels. Before I construct a model, I should know the distances the new park should be from certain features. In most cases, I’m looking to be close to certain features, but in other cases, I want to make sure I’m far enough away, such as with highways and existing parks.
Remember that any of these values can be changed to suit any criteria. ModelBuilder allows a workflow to be created, run, and then modified to suit different ideas of how far each feature should be from a new park.
Creating a data processing workflow
My analysis should read like a flowchart: buffer the schools, trails, and bicycle lanes to make the ‘good’ areas. Buffer the existing parks and highways to make the ‘bad’ areas. Then remove the bad areas from the good areas, and find the areas that are common to the vacant parcels.
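That flowchart maps almost one-to-one onto standard geoprocessing tools. A hedged arcpy sketch of the same sequence (dataset names and buffer distances are assumptions, and Erase requires an Advanced license):

```python
# Hedged arcpy sketch of the flowchart above. Dataset names and buffer
# distances are assumptions; Erase requires an Advanced license.
import arcpy

arcpy.env.workspace = r"C:\data\redlands.gdb"  # hypothetical

# 'Good' areas: near schools, trails, and bicycle lanes.
arcpy.Buffer_analysis("schools", "good_schools", "0.5 Miles",
                      dissolve_option="ALL")
arcpy.Buffer_analysis("trails", "good_trails", "0.25 Miles",
                      dissolve_option="ALL")
arcpy.Buffer_analysis("bike_lanes", "good_bikes", "0.25 Miles",
                      dissolve_option="ALL")
arcpy.Union_analysis(["good_schools", "good_trails", "good_bikes"],
                     "good_areas")

# 'Bad' areas: too close to existing parks or highways.
arcpy.Buffer_analysis("parks", "bad_parks", "0.5 Miles",
                      dissolve_option="ALL")
arcpy.Buffer_analysis("highways", "bad_highways", "0.25 Miles",
                      dissolve_option="ALL")
arcpy.Union_analysis(["bad_parks", "bad_highways"], "bad_areas")

# Remove the bad from the good, then keep what falls on vacant parcels.
arcpy.Erase_analysis("good_areas", "bad_areas", "candidate_areas")
arcpy.Intersect_analysis(["candidate_areas", "vacant_parcels"],
                         "suitable_areas")
```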
Developing a suitability model
To use the data and tools found in ArcGIS to accomplish suitability analyses, I’ll develop a model using ModelBuilder. ModelBuilder acts much like a living flowchart: data elements connect to tools, which create outputs, just like a flow-processing diagram. A model serves not only as an organizational tool for data processing; its elements also store parameter values and data paths that can be changed, and the model itself can be shared and run on different data. For example, other users can change the input datasets to their own parks and street network to perform the same analysis.
By definition, geoprocessing tools take one or more pieces of geographic data, run a process based on parameters I define, and create a new piece of data as the result. That first result can be fed into another tool, which produces yet another piece of data; once the new data has been created, the earlier result can be discarded. Such data is called intermediate data. Each piece of intermediate data should be written to a scratch workspace, which is defined in the environment settings of the map or model. Keeping intermediate data in a scratch workspace is a great way to ensure I don’t end up with stray datasets scattered all over my computer.
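In a script, the scratch workspace is set through the arcpy environment; a quick sketch (the path is an assumption):

```python
# Sketch: route intermediate outputs to a scratch workspace so they
# don't litter the project geodatabase. The path is an assumption.
import arcpy

arcpy.env.scratchWorkspace = r"C:\temp\scratch.gdb"
# Tools can then write intermediate results there, for example:
intermediate = arcpy.env.scratchGDB + r"\schools_buffered"
```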
Tools for models can be found using the Search window, which lets me type the name of a tool, dataset, or script and shows results across all types of data. To add a tool to a model, I drag the tool by its name and drop it on the model canvas. Model elements can be connected using the Connect tool in the model window. Double-clicking a tool or element opens a dialog box where I can make sure the settings are correct before I run the model. ModelBuilder also checks that the inputs are valid before running, and I can check them all manually by clicking the Validate Entire Model tool on the ModelBuilder toolbar. I can save the model in a toolbox, which can be stored anywhere on disk or, as I am doing, in a geodatabase.
When the model runs, a dialog box shows me the progress, notification that it is finished, and any messages, warnings, or errors that might have occurred. The Results window is the location to track the status of a model or other geoprocessing operation.
Reusing models as tools
Another nice feature of models is they can be used in other models as tools. Since I already proved the effectiveness of measuring distances along the road network versus straight-line buffers, I can take the method I developed and use it as a tool in my park suitability model. I’ll call the tool Buffer Along Roads and use it for the schools and existing parks, which are the only datasets that require travel to be measured along the road network.
My model tool will operate like any other tool: it requires an input point dataset and will create a polygon dataset containing buffers along the roads using the distances exposed in the reclassification scheme. Once I’ve created these distance polygons, I then choose the ones that meet my criteria: in this case, those at least ½ mile from existing parks and within ½ mile of schools. From there, the rest of my analysis can continue using straight-line buffers from bike lanes, trails, and highways.
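When scripted, a model tool saved in a toolbox is called like any system tool; a hedged sketch in which the toolbox path, alias, and tool signature are all assumptions:

```python
# Hedged sketch of reusing the 'Buffer Along Roads' model as a tool.
# The toolbox path, alias, and parameter list are all assumptions.
import arcpy

arcpy.ImportToolbox(r"C:\data\ParkAnalysis.tbx", "park")

# Call the model tool once for schools and once for existing parks.
arcpy.BufferAlongRoads_park("schools", r"C:\data\redlands.gdb\school_dist")
arcpy.BufferAlongRoads_park("parks", r"C:\data\redlands.gdb\park_dist")
```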
Determining the final location
When the model is finished, I see that there is more than one suitable location for a new park. I then have some work to do to figure out the final parcel or location. For example, perhaps I’m looking for the area that is closest to downtown. Using my park access analysis as an example, converting the final suitable polygons to points and running them through a cost distance tool would be one method to use.
However, I want to allow citizens to provide input. In the next entry in this series, I’ll use ArcGIS Server to collect volunteered geographic information (also called crowdsourced or user-generated content) so users can vote on their favorite location for a new park. This approach is now being referred to as “participatory planning”.
Accessing the data and models
The data and models for this blog post can be found here.
The rest of the data and tools for this blog series can be found in the Park Analysis and Design group here (make sure to filter by Show: All Content at the top of the page).
Content for the post from Matthew Baker
Park analysis and design – Measuring access to parks (part 1)
Have you ever wondered how far you are from a park? In this post, I’ll examine the placement of parks in Redlands, California, and determine which areas are best and worst served by a park. In future posts, I’ll discuss siting a new park using binary suitability analysis, web-based tools for evaluating and increasing park access, and the design of a new park using ArcMap and feature template-based editing.
Over the last year, I’ve been attending various urban planning conferences and discussing with urban planners the need to design healthier communities, a notion I have heard echoed throughout the planning community.
One concern is how well areas are served by parks. In my analysis, I want to determine which areas are within one mile of a park and visualize the results in a way that is easy to understand. I chose one mile because most people can picture how long it would take to walk that far, but the analysis could easily be altered to measure any distance and present the results the same way.
To do this, I could use a simple one-mile buffer around the parks, as the first map shows. However, a map created that way does not consider modes of travel. I want to measure pedestrian access to parks, so the best route is to travel along a road, preferably on the sidewalk.
The more accurate way to measure park access is to determine the areas around the parks that fall within a specified distance along the road network. In network analysis this is called a service area (or drive-time) analysis, but it considers the road network only.
There are tools within the Spatial Analyst toolbox to run a cost-distance analysis: essentially a distance map calculated against a surface describing how difficult it is to travel across the landscape. This gives me the ability to rank the landscape by how easy it is to traverse, road or not.
I want to then create a map showing areas that are ¼, ½, 1 mile, and greater than 1 mile from a park along the road network and show the distances on the map as well as on a graph.
Creating a travel cost surface
For my analysis, I am first going to create a cost surface that describes ease of travel through Redlands, with areas along roads being easier (cheaper) to travel through, and areas farther from roads more difficult (expensive) to travel.
To do this, I start by creating a raster surface where every cell has a value for the distance from itself to the nearest walkable road segment; that is, I want routes where I don’t have to drive a car to get to a park and can even get exercise on the way.
First, I’ll need to map the road network. From the City of Redlands roads dataset, I can simplify all the roads into three main types: minor, major (arterial), and highway.
Since pedestrians cannot safely or legally walk on highways, I can remove them from the analysis. The first tool in the model will be the Select tool, which extracts a subset of features using an SQL statement. In this case, I’ll use Road Type not equal to Highway to remove the highways and create a walkable roads dataset.
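As a script, this step is one call to the Select tool; a sketch with an assumed path and field name:

```python
# Sketch of extracting walkable roads with the Select tool.
# The dataset path and ROAD_TYPE field name are assumptions.
import arcpy

arcpy.Select_analysis(
    in_features=r"C:\data\redlands.gdb\roads",
    out_feature_class=r"C:\data\redlands.gdb\walkable_roads",
    where_clause="ROAD_TYPE <> 'Highway'")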
Of course, this would be a good place for a better road dataset in which each street had an attribute for whether or not it is walkable. I have heard of a few communities and organizations starting to capture this data, and it would be most useful for this application.
Once I have extracted the walkable roads, I’ll run the Euclidean Distance tool to create a surface in which each raster cell holds a value for the distance between itself and the nearest road.
The Euclidean Distance tool creates a surface where every part of the study area is broken down into a square (cell), and every square is assigned the distance to the nearest road segment. I’ve adjusted the symbology to group cells into distance bands.
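A sketch of this step in arcpy (paths and cell size are assumptions; the Spatial Analyst extension is required):

```python
# Sketch of the Euclidean Distance step. Requires Spatial Analyst;
# the workspace path and cell size are assumptions.
import arcpy
from arcpy.sa import EucDistance

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\redlands.gdb"

# Every cell gets the straight-line distance to the nearest walkable road.
road_dist = EucDistance("walkable_roads", cell_size=30)
road_dist.save("road_distance")
```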
Creating a cost surface
I’ll now borrow a concept from a weighted overlay (suitability) model and reclassify the road distances onto a scale of 1 to 6, where 1 is the cheapest (easiest to travel) and 6 is the most expensive (most difficult to travel). To do this, I use the Reclassify tool, which allows me to define the number of classes into which I want to reclassify the data. The Old Values column holds the distances from the Euclidean distance raster; the New Values column holds the new values assigned to those ranges of old distance values.
Notice I’m going to reclassify the distances using the same distance bands I used earlier to describe how far each part of town is from the nearest road. Each cell in each distance band then gets a new value describing its cost on a scale of 1 to 6.
Here are the new reclassified distances. Notice the values become more expensive when moving away from the roads.
This now becomes the cost surface that I’ll use to measure park access.
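In script form, the reclassification might look like the following sketch; the break values in feet are assumptions that mirror the distance bands above:

```python
# Sketch of reclassifying road distances (feet) into cost values 1-6.
# The break values are assumptions mirroring the distance bands above.
import arcpy
from arcpy.sa import Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
remap = RemapRange([[0, 500, 1],
                    [500, 1320, 2],      # ~1/4 mile
                    [1320, 2640, 3],     # ~1/2 mile
                    [2640, 5280, 4],     # ~1 mile
                    [5280, 10560, 5],
                    [10560, 999999, 6]])
cost_surface = Reclassify(r"C:\data\redlands.gdb\road_distance",
                          "VALUE", remap)
cost_surface.save(r"C:\data\redlands.gdb\travel_cost")
```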
Evaluating park data
Because the park data is stored as centroid points, those points may not reflect the true access points to the parks themselves. By creating points at the corners of each park parcel, I get more suitable locations from which to measure park access.
Borrowing again from the City of Redlands dataset, I’ll simply select the parcels that intersect the park points and run those intersecting parcels through the Feature Vertices To Points tool in the Data Management toolbox.
Depending on the geometry of some of the parcels, I might end up with a little more than just the corners, but this is a much more accurate representation of how to get into the park than just a point in the middle of the parcel.
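A hedged arcpy sketch of these two steps (dataset names are assumptions; Feature Vertices To Points requires an Advanced license):

```python
# Sketch of deriving park access points from parcel corners.
# Dataset names are assumptions for illustration.
import arcpy

arcpy.env.workspace = r"C:\data\redlands.gdb"

# Select parcels that contain a park centroid point.
arcpy.MakeFeatureLayer_management("parcels", "parcels_lyr")
arcpy.SelectLayerByLocation_management("parcels_lyr", "INTERSECT",
                                       "park_points")

# Turn the selected parcel vertices into candidate access points.
arcpy.FeatureVerticesToPoints_management("parcels_lyr",
                                         "park_access_points", "ALL")
```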
Calculating cost distance
Next, I’ll run the new park points against the cost surface using the Cost Distance tool in the Spatial Analyst toolbox. Using this tool, I can create a raster surface where each cell has a distance from itself to the nearest park point along the cheapest path—in this case, the cells that are nearest to the roads as described by our cost surface.
The resultant raster gives a picture of how far every location in the city is from the nearest park, which is somewhat hard to visualize. I can then reclassify the distances into simple ¼-, ½-, and 1-mile bands.
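A sketch of the cost-distance step and the follow-up reclassification (paths and break values are illustrative assumptions; cost-distance values are cost-weighted, so real break values would need calibration):

```python
# Sketch of the cost-distance step and reclassification into bands.
# Paths and break values are assumptions for illustration.
import arcpy
from arcpy.sa import CostDistance, Reclassify, RemapRange

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\data\redlands.gdb"

# Accumulated travel cost from every cell to the nearest park point.
park_cost = CostDistance("park_access_points", "travel_cost")
park_cost.save("park_cost_distance")

# Collapse the continuous values into simple distance bands.
bands = Reclassify("park_cost_distance", "VALUE",
                   RemapRange([[0, 1320, 1],         # within 1/4 mile
                               [1320, 2640, 2],      # within 1/2 mile
                               [2640, 5280, 3],      # within 1 mile
                               [5280, 999999, 4]]))  # more than 1 mile
bands.save("park_access_bands")
```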
Visualizing the results
Taking the walkable road network into consideration certainly gives a much better picture of the areas served by parks; notice the underserved areas that now show up that the simple buffer didn’t expose. These areas are over a mile from a park, which meets our criterion for being underserved.
In addition to the map, I can also create a graph that shows the percentage of the city served by parks at each distance band.
Using the graphing tools in ArcMap, I can create a new field to hold the percentage, calculated by dividing the area of each feature in my walkability output by the total area of the city (stored as a variable in the model). I can also create a table that stores the output values of my reclassification (1, 2, 3, 5, 9) and their respective labels (500’, ¼ Mi, ½ Mi, 1 Mi, and More than 1 Mile) and join that table to my walkability output. It’s an extra step, but one that can be repeated if my underlying data changes and I want to run the analysis again.
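A hedged sketch of that extra step in arcpy (the table, field names, and city area value are all assumptions):

```python
# Sketch of the percentage calculation and label join. Table, field
# names, and the total city area are assumptions for illustration.
import arcpy

tbl = r"C:\data\redlands.gdb\walkability_bands"
city_area = 95000000.0  # hypothetical total city area, square feet

arcpy.AddField_management(tbl, "PctOfCity", "DOUBLE")
arcpy.CalculateField_management(
    tbl, "PctOfCity",
    "!shape.area! / {0} * 100".format(city_area), "PYTHON")

# Join the label table (reclass value -> distance label) for the graph.
arcpy.JoinField_management(tbl, "gridcode",
                           r"C:\data\redlands.gdb\band_labels",
                           "ReclassValue", ["Label"])
```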
Now that I have identified that there are areas underserved by parks, the task of my next blog post will be to determine the best location for a new park using a simple binary suitability analysis.
Data is provided by the City of Redlands. The data and models for this blog post can be found here.
Content for the post from Matthew Baker
By Aileen Buckley, Mapping Center Lead
At long last, a book that I had the good fortune to help author is now available! Here is the press release for Map Use: Reading and Analysis, Sixth Edition, which Esri Press released on Feb. 14th:
Redlands, California—February 12, 2009—To unlock the wealth of information in a map, a person must know how to read one. That’s why Map Use: Reading and Analysis, Sixth Edition, will be a valuable book for people who work with, study, and appreciate maps and want to improve their map reading and analysis skills. Continue reading
By Charlie Frye, Esri Chief Cartographer
The Hot Spot Analysis poster shows the steps in the analysis of 911 call data. The data were processed using the Hot Spot Analysis tool, and the design of the poster is, we think, faithful to the underlying data.
I did the first edition of this poster almost three years ago, and since then it has been tacked up on the wall at the end of one of the hallways here in Redlands. When Jack brought his tours through the Software Products & Development area, he’d often show this poster, extolling the analytical power of GIS. The original poster was a little overly flashy, but more importantly, after we recently checked with our spatial statistics experts, we found it was symbolized in a way that was slightly misleading. Thus, we undertook an upgrade and presented it at the User Conference in our “Mapping the Results of Your Geographic Analysis” technical session. Continue reading