Monthly Archives: June 2011

Measuring access to parks

Park analysis and design – Measuring access to parks (part 1)
Have you ever wondered how far you are from a park? In this post, I’ll examine the placement of parks in Redlands, California, and determine which areas are best and worst served by a park. In future posts, I’ll discuss siting a new park using binary suitability analysis, web-based tools for evaluating and increasing park access, and the design of a new park using ArcMap and feature template-based editing.

Over the last year, I’ve attended various urban planning conferences and discussed with several urban planners the need to design healthier communities, a notion I have heard echoed throughout the planning community.

One concern is figuring out how well areas are served by parks. In my analysis, I want to determine which areas are within one mile of a park and visualize the results in a way that is easy to understand. I chose one mile because most people can picture how long it would take them to walk a mile, but the analysis could easily be adapted to measure any distance and present the results in the same manner.

To do this, I could use a simple one-mile buffer around the parks, as the first map shows. However, a map created that way does not consider how people actually travel. I want to measure pedestrian access to parks, and a pedestrian’s best route follows a road, preferably along a sidewalk.

A more accurate way to measure park access is to determine the areas that fall within a specified distance of the parks along the road network. In network analysis, this is called a service area (or drive-time) analysis, but it is limited to the road network itself.

There are tools within the Spatial Analyst toolbox to run a cost-distance analysis: essentially a distance map calculated against a surface describing how difficult it is to travel across a landscape. This gives me the ability to rank our landscape by how easy it is to travel, road or not.

I want to then create a map showing areas that are ¼, ½, 1 mile, and greater than 1 mile from a park along the road network and show the distances on the map as well as on a graph.

Creating a travel cost surface
For my analysis, I am first going to create a cost surface that describes ease of travel through Redlands, with areas along roads being easier (cheaper) to travel through, and areas farther from roads more difficult (expensive) to travel.

To do this, I start by creating a raster surface in which every cell holds the distance from itself to the nearest walkable road segment; walkable, because I shouldn’t have to drive a car to get to a park and can even get some exercise on the way.

First, I’ll need to map the road network. From the City of Redlands roads dataset, I can simplify all the roads into three main types: minor, major (arterial), and highway.


Since pedestrians cannot safely or legally walk on the highways, I can remove them from the analysis. The first tool in the model will be the Select tool, which extracts a subset of features using an SQL expression. In this case, I’ll use Road Type not equal to Highway to exclude the highways and create a walkable road dataset, as sketched below.
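
As a rough guide, here is what that Select step might look like if scripted with arcpy instead of ModelBuilder; the workspace, dataset names, and the ROAD_TYPE field are assumptions, not the actual Redlands schema.

import arcpy

arcpy.env.workspace = r"C:\Data\Redlands.gdb"  # hypothetical workspace

# Extract every road that is not a highway into a walkable roads dataset
arcpy.Select_analysis(
    in_features="Roads",
    out_feature_class="WalkableRoads",
    where_clause="ROAD_TYPE <> 'Highway'"
)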

Of course, this would be a good place for a better road dataset in which each street had an attribute for whether or not it is walkable. I have heard of a few communities and organizations starting to capture this data, and it would be most useful for this application.

Once I have extracted the walkable roads, I’ll run the Euclidean Distance tool to create a surface in which each raster cell holds a value for the distance between itself and the nearest road.

The Euclidean Distance tool creates a surface where every part of the study area is broken down into a square (cell), and every square is assigned the distance to the nearest road segment. I’ve adjusted the symbology to group cells into distance bands.
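
For readers who prefer scripting, a minimal arcpy sketch of the Euclidean Distance step might look like the following; the dataset names and the 30-foot cell size are illustrative assumptions.

import arcpy
from arcpy.sa import EucDistance

arcpy.CheckOutExtension("Spatial")
arcpy.env.workspace = r"C:\Data\Redlands.gdb"

# Each output cell holds the straight-line distance to the nearest walkable road
road_distance = EucDistance("WalkableRoads", cell_size=30)
road_distance.save("RoadDistance")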

Creating a cost surface
I’ll now borrow a concept from a weighted overlay (suitability) model and reclassify the road distances onto a scale of 1 to 6, where 1 is the cheapest (easiest to travel through) and 6 is the most expensive (most difficult to travel through). To do this, I’ll use the Reclassify tool, which lets me define the number of classes into which I want to reclassify the data. The Old Values column lists the distance ranges from the Euclidean distance raster; the New Values column assigns the new value each range receives.

Notice I’m going to reclassify the distances using the same distance bands I used earlier to describe how far each part of town is from the nearest road. Each cell in each distance band then gets a new value describing its cost on a scale of 1 to 6.
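
Scripted with arcpy, that reclassification could look roughly like this; the breakpoints (in feet) and class values are placeholders standing in for the distance bands shown in the post, not the exact model settings.

from arcpy.sa import Reclassify, RemapRange

# Reclassify road distance into a 1 (cheap) to 6 (expensive) travel cost
cost_surface = Reclassify(
    in_raster="RoadDistance",
    reclass_field="VALUE",
    remap=RemapRange([
        [0, 500, 1],        # on or beside a road: easiest to travel
        [500, 1320, 2],
        [1320, 2640, 3],
        [2640, 5280, 4],
        [5280, 10560, 5],
        [10560, 999999, 6]  # far from any road: most difficult to travel
    ])
)
cost_surface.save("TravelCost")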

Here are the new reclassified distances. Notice the values become more expensive when moving away from the roads.

This now becomes the cost surface that I’ll use to measure park access.

Evaluating park data
Because the park data is stored as centroid points, those points may not reflect the true access points to the parks themselves. By creating points at the corners of each park parcel, I get more suitable locations from which to measure park access.

Borrowing again from the City of Redlands dataset, I’ll simply select the parcels that intersect the park points and run those intersecting parcels through the Feature Vertices To Point tool in the Data Management toolbox.

Depending on the geometry of some of the parcels, I might end up with a little more than just the corners, but this is a much more accurate representation of how to get into the park than just a point in the middle of the parcel.
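
A hedged arcpy sketch of this step, assuming hypothetical Parcels and ParkPoints datasets, selects the parcels that intersect the park centroids and converts their vertices to points:

import arcpy

# Select the parcels that intersect the park centroid points
arcpy.MakeFeatureLayer_management("Parcels", "parcels_lyr")
arcpy.SelectLayerByLocation_management("parcels_lyr", "INTERSECT", "ParkPoints")

# The corner (vertex) points of those parcels become the park access points
arcpy.FeatureVerticesToPoints_management("parcels_lyr", "ParkAccessPoints", "ALL")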

Calculating cost distance
Next, I’ll run the new park points against the cost surface using the Cost Distance tool in the Spatial Analyst toolbox. Using this tool, I can create a raster surface where each cell holds the accumulated distance from itself to the nearest park point along the cheapest path—in this case, the path that stays closest to the roads, as described by our cost surface.

The resulting raster shows how far every location in the city is from the nearest park, which on its own is somewhat hard to visualize, so I then reclassify the distances into simple ¼-, ½-, and 1-mile areas.
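
In arcpy, those two steps might look like the sketch below. The remap treats the accumulated cost as an approximate distance in feet, and the output values 1, 2, 3, 5, and 9 mirror the classes used later for labeling; both are simplifying assumptions rather than the exact model settings.

from arcpy.sa import CostDistance, Reclassify, RemapRange

# Accumulated cost of reaching the nearest park access point over the cost surface
park_cost = CostDistance("ParkAccessPoints", "TravelCost")
park_cost.save("ParkCostDistance")

# Collapse the result into simple distance bands for mapping
walkability = Reclassify(
    in_raster="ParkCostDistance",
    reclass_field="VALUE",
    remap=RemapRange([
        [0, 500, 1],         # within ~500 feet
        [500, 1320, 2],      # within ~1/4 mile
        [1320, 2640, 3],     # within ~1/2 mile
        [2640, 5280, 5],     # within ~1 mile
        [5280, 9999999, 9]   # more than a mile: underserved
    ])
)
walkability.save("ParkWalkability")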

Visualizing the results
Taking the walkable road network into consideration certainly gives a much better picture of the areas served by parks—and notice the areas that now show up as underserved that the buffer didn’t expose. These areas are over a mile from a park, which meets our criterion for being underserved.

In addition to mapping, I can also create a graph that visualizes the percentages of the city that are served by parks by their respective distances.

Using the graphing tools in ArcMap, I can create a new field to hold the percentage of the city each distance band covers, calculated by dividing the area of each feature in my walkability output by the total area of the city (stored as a variable in the model). I can also create a table that maps the output values of my reclassification (1, 2, 3, 5, 9) to their respective labels (500', ¼ Mi, ½ Mi, 1 Mi, and More than 1 Mile) and join that table to my walkability output. It’s an extra step, but one that can be repeated if my underlying data changes and I want to run the analysis again.
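
If that percentage step were scripted rather than done through the graphing tools, it could look something like this; it assumes the reclassified raster has already been converted to polygons (for example, with the Raster To Polygon tool), and the dataset and field names are hypothetical.

import arcpy

# Total city area, summed from the city boundary polygons
city_area = sum(row[0] for row in
                arcpy.da.SearchCursor("CityBoundary", ["SHAPE@AREA"]))

# Add a field and fill it with each band's share of the city
arcpy.AddField_management("WalkabilityPolygons", "PCT_OF_CITY", "DOUBLE")
with arcpy.da.UpdateCursor("WalkabilityPolygons",
                           ["SHAPE@AREA", "PCT_OF_CITY"]) as cursor:
    for area, pct in cursor:
        cursor.updateRow([area, area / city_area * 100])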

Now that I have identified that there are areas underserved by parks, the task of my next blog post will be to determine the best location for a new park using a simple binary suitability analysis.

Data credits
Data is provided by the City of Redlands. The data and models for this blog post can be found here.

Update

Part 2 – Park analysis and design: Locating a park through suitability analysis

Part 3 – Park analysis and design:  Voting on a new park location

Part 4 – Park analysis and design:  Sketching the design of a new park

Content for the post from Matthew Baker

Posted in Analysis & Geoprocessing, Editing | Comments Off

Have you signed up yet for the Water Utility Data Health Check at UC?

With the User Conference just 10 days away, we’d like to remind you to take advantage of the Water Utility Data Health Check service that we are providing at the Geodatabase Management island. There are still some slots left if you’d like to sign up. Just send an email with your name, organization, contact info, and preferred time slot to datareviewer@esri.com.

Posted in Editing | Leave a comment

2011 Public Safety Activities at the Esri Homeland Security Summit and User Conference Public Safety Showcase

If you are coming to San Diego this year for the Esri Homeland Security Summit and/or the International User Conference, there are a lot of public safety-related sessions and activities to choose from. Here is an overview and the highlights for both events.

Esri Homeland Security Summit

September 2011 marks the tenth anniversary of the 9/11 terrorist attacks. Tremendous change has occurred over the last decade, yet we face many of the same challenges: tight budgets, finite resources, increased demands, and ever-changing threats. The 2011 Esri Homeland Security Summit will focus on where we have been, where we are now, and where we need to go. This summit will allow us to collaborate and discuss how geographic information system (GIS) innovation and best practices support achieving our mission demands. Some of the best minds in public safety GIS will be on hand to learn and share their successes, challenges, and visions for the future.

The Esri Homeland Security Summit is a global event. Attendees from around the world include senior-level executives such as chiefs of police and fire and rescue organizations, as well as directors of emergency management and fusion centers. As a leader in homeland security, you can help determine the road map to success for homeland security professionals by participating in this summit.

We hope you can join us for this invaluable experience to learn, collaborate, and strengthen your capabilities. View the online agenda and plan your schedule. Please register for the event by visiting esri.com/hss.

Esri Homeland Security Summit Highlights:

  • Special reflections on 9/11 from Captain Steve Pollackov, FDNY, and Joe Rozek, Former Special Assistant to the President and Senior Director for Domestic Counterterrorism
  • A great lineup of speakers, including Michael Byrne (FEMA), Professor Haruo Hayashi (Kyoto University), Cynthia Ann Wright (NGA), and Special Constable Ryan Prox (Vancouver Police Department)
  • A Social Media Panel Discussion featuring Shayne Adamski (FEMA), Bronwyn Agrios (Esri), Eric Holdeman (Eric Holdeman and Associates), and Jeff Sangster (Brisbane City Council)
  • Three tracks on Sunday afternoon: Executive, Technology, and Lightning Talks

Public Safety Showcase

Visit the Public Safety Showcase in SDCC Exhibit Hall D to see GIS solutions for all aspects of public safety. The showcase will feature law enforcement, fire, homeland security, and emergency management solutions including mobile applications, incident analysis, and simulated disaster incident mapping. These applications are deployed using the Internet, servers, PCs, and handheld devices. You can get hands-on GIS experience; meet GIS users from police, fire, and emergency management communities; and see demonstrations of their work. Our large screen demo theater gives you the perfect opportunity to sit down, see the latest innovations, and ask questions.

Returning to the Public Safety Showcase again this year is Operation SafetyNet. SafetyNet demonstrations will show you exactly how ArcGIS technology supports the four public safety workflow patterns (data management, planning and analysis, field mobility, and situational awareness). It all begins with a briefing in the demo theater on one of the four mission objectives: law enforcement, fire/EMS, fusion center, or emergency management. Here you will receive an overview flyer of what you will see in the demo areas of Operation SafetyNet. You will then be guided through the individual kiosks dedicated to each of the four workflow patterns. In the final area, you will receive a debriefing of how the four patterns functioned together to solve the mission objectives. Check the online agenda demo theater schedule for mission briefing times.

UC and UC Public Safety Showcase Highlights

Here is a handy flyer with all of the Public Safety activities, with links: http://downloads.esri.com/blogs/arcgisonline/Public_Safety_Flier_with_links.pdf.

We look forward to seeing all of you in San Diego!

Posted in Public Safety | Leave a comment

Crazed by Coupons?

By Catherine Spisszak and Brent Roderick

Have you seen people on television spending hours a day clipping coupons in order to come home with thousands of dollars of groceries for mere cents?  Do you subscribe to online sites that offer daily deals for goods and services in your area at 50% off or more?  Do you read any blogs on how to save money or visit websites to find online coupon codes? 

Couponing is all the rage. Rising gas prices and the struggling economy are forcing people to find new ways to save money, but television, social media sites, mobile couponing, and blogs are also attracting younger and more affluent shoppers to these savings. Couponing is not only cost-effective, it’s also in style!

How can businesses cash in? Continue reading

Posted in Location Analytics | Leave a comment

EA SIG 2011: Real-time Answers To Key Enterprise Issues

Based on last year’s very successful Enterprise Architecture SIG, we are keeping the round-table “Birds of a Feather” format for facilitating conversations at the Esri International User Conference EA SIG this year. We are modifying the structure slightly to allow time to summarize the key findings from each group, providing you with real-time answers. Table topics this year include:

  • Virtualization / Cloud Computing
  • Security
  • Enterprise Integration
  • Performance & Scalability
  • EA Methodologies
  • Application Development Patterns
  • Enterprise Workflows – New for 2011
  • Mobile – New for 2011

The EA SIG will be bright and early again with a full breakfast! Thanks again to IBM for sponsorship of this SIG.  See you there!

Who: Open to all UC Attendees
Date/Time: Wed, Jul 13, 7:30AM – 8:15AM
Location: Room 17A – San Diego Convention Center (The wrong room # is listed in the printed agenda)

Posted in Uncategorized | Leave a comment

Dev Meet Up – Washington, D.C.

Less than a week after the Philadelphia meet up, we were back on the East Coast in Washington, D.C. For some reason the venue thought the meet up was the next day, but thanks to Charmel from the D.C. office, they rallied round and started setting up for us.

Just as well – we had a large crowd of hungry and thirsty geonerds turn up for a superb set of speakers.

None other than Bill Dollins opened with his GIS take on the Washington Post’s “5 Myths” series. He kept the audience in rapt attention from start to finish as he talked about the place of Desktop, Maps, and The Web in GIS today. There’s a lot of discussion in the GIS community about NoGIS, moving to the web, etc., but Bill managed to provide a very grounded set of arguments for and against. He reiterated a theme I’ve heard with increasing frequency, which is that GIS is much more than just maps. His slides are available here and are well worth taking a look at.

After that, we dived straight into the lightning talks. Andrew Hargreaves talked about a pretty cool desktop-based sewer maintenance system. That’s right: I said “sewer” and “cool” in the same sentence. Remote-controlled robots make their way through D.C.’s sewer system, taking video as they go. The robot’s position relative to where it entered the sewer system is tracked and tied to a position in the recorded video, so as faults are spotted along the sewer, they can be geolocated on the map immediately. Then, anyone can jump to the relevant position in the video just by inspecting the fault record or by clicking on the map. Really very cool and very seamless. He showed how an exported attribute-only report of sewer faults (which is how engineers typically use the data) could integrate with the video in exactly the same way.

Mark Wimer talked about searching for GIS data (slides here). As a bit of an experiment, we had intended that Mark’s talk be quite interactive, but in the end he had so much great material discussing a couple of recent data hunts he’d embarked on that we didn’t have time (after all, that’s why we stick around for the social afterwards, isn’t it?). For geodata searches, he demonstrated the importance of having a date, source, thumbnail, and description at your fingertips.

Dave Smith then told us about how the EPA handles data quality and integrity across all its data sources using their Facility Registry System. This aggregator allows all data sources to be included in a single place, not just for query, but also quality control and integrity. In particular, a single geocoding and addressing system can be applied to all facilities, and data gets a steward with appropriate knowledge to do the QA. In addition, the system keeps track of an indicator of the accuracy of the location data – all very neat and incredibly valuable. You can find his slides here.

People’s glasses were emptying (mine in particular), so I called a five-minute break for top-ups and handed out some t-shirts to a few lucky trivia winners. Then we turned our attention back to the screen for Chris McClain’s talk on GIS development, past, present, and future (slides), during which he took us back through the history of our industry and conjectured on where we are all going. He touched on a number of topical issues, including the role of the developer and the cloud in the future of GIS and how location and place will work out their differences, and overall his talk complemented Bill’s opener very nicely.

Christopher Fricke closed the lightning talks out with a great run through of how ArcPy has made his job easier by automating a bunch of installation and data processing tasks (slides). He can make use of really robust Python libraries and reuse desktop code in maintaining server systems, freeing up his time for the things he really likes, like not being stuck indoors.

A common theme throughout all the talks and presentations was that the speakers had fun presenting and kept the crowd thoroughly amused. Many of the speakers did a great service to the Meet Ups by emphasizing the importance of the community getting involved, and I think the next time we return to D.C. we’ll have an even larger crowd.

Many thanks to all who attended, especially the speakers, and congratulations to our prize winners: Sara Emani, who won an EDN subscription, and Jay Boyd, whom we’ll hopefully see enjoying his pass to the Dev Summit next March.

Posted in Developer | Leave a comment

High performance web apps with big datasets as feature layers

Dealing with big datasets is a frequent topic of discussion among web mapping app developers. The prevalence of huge, crowdsourced datasets ensures that big data will continue to be a common point of discussion when you get more than two web mapping geeks in the same room. The traditional approaches of generating a raster tile cache or using a dynamic map service work, but any interactivity with the underlying data requires a server round trip. This post will discuss some of the key concepts behind building a web app that provides an extensive interactive experience while maintaining responsiveness and performance.

The main factor that makes big data a tricky topic is that browsers cannot elegantly handle more than a few thousand features. Send more than a few thousand points or a few hundred polygons to a browser, and you can watch it grind to a halt, especially if you’re running a legacy version of Internet Explorer. This is a problem because slow performance is a death knell for web apps. If someone using your web app has to wait ten seconds, twenty seconds, or longer for five megs of JSON data to be retrieved and displayed, there’s a good chance they’ll just give up and move on. What good is a web app that no one uses?

At some point, you cross a threshold where it’s not practical to send all of your data across the wire and let the client sort it out. That threshold is an ambiguous one (and was explored in a previous post), but you know when you hit it. If you’re a JavaScript developer, how should you deal with big data? If you’ve read my recent posts, this answer will not surprise you: Feature Layers.

Building on the concepts discussed in the previous feature layer posts (generalizing features and vector tiling) and client side graphics performance post, I’ve put together an application that demonstrates how to use Feature Layers to set up custom scale dependencies and follow best practices to ensure performance does not suffer when querying, retrieving and displaying large datasets in an ArcGIS API for JavaScript application: High Performance Maps with Feature Layers.

The goal was to display appropriate US boundaries (states, counties or census block groups) for the current map scale. For example, trying to display census block groups for the whole US would overwhelm the browser with features. This app makes sure that only the states appear at the small scales, counties at the medium scales, and census block groups at the large scales. This ensures that a manageable number of features are always being transferred and displayed.

The app is simple, but the concepts can be applied to just about any web mapping app that has to deal with big data. The total size of the data used by the app is around a couple of hundred megabytes, yet it stays responsive because of custom scale dependencies. Setting these up is as simple as creating a few feature layers, listening for their onLoad event, and setting their minScale and maxScale properties. Here’s a simple code sample showing how this is done:

// Create the feature layer, then set its visible scale range once it has loaded
var fl = new esri.layers.FeatureLayer(url, options);
dojo.connect(fl, 'onLoad', function() {
  fl.minScale = minScale;  // layer is hidden when the map is zoomed out beyond this scale
  fl.maxScale = maxScale;  // layer is hidden when the map is zoomed in beyond this scale
});

The app includes a couple of other bells and whistles that demonstrate some of the tools available when your features are on the client. The first is a rich Popup that displays feature attributes, along with an option to zoom to the feature when you click or touch a graphic. The second is the ability to filter features based on population using a slider. Neither of these would perform nearly as well if the features weren’t available client side.

The final point to drive home is that the app’s performance is kept snappy because only the required data is being sent to the client. For geometries, that means using maxAllowableOffset to generalize geometries on the server before they’re sent to the client (see my previous post on maxAllowableOffset and generalizing features on the fly for more info). For attributes, only the fields required by the Popup are sent across the wire.

Contributed by Derek Swingley of the ArcGIS API for JavaScript development team

Posted in Services | 2 Comments

Imagery at UC 2011

Imagery at UC is held annually at the Esri International User Conference. The series of events provides a community for those using and interested in using remotely sensed data in GIS.

The Imagery Island Expo is located in the main exhibit hall. The Focus Sessions, Moderated Paper Sessions, and SIG meeting take place in rooms 28D and 31A. The Imagery Social will take place in the CitySide corridor of the convention center.

Tuesday July 12th, 2011

  • 8:30AM–9:45AM: GIS and Industry Focus Sessions, Imagery @ UC—Keynote and User Perspectives (Room 28D)
  • 10:15AM–11:30AM: GIS and Industry Focus Sessions, Imagery @ UC—Esri Perspectives (Room 28D)
  • 9:00AM–6:00PM: Imagery Island Expo (Main exhibit hall)

Wednesday July 13th, 2011

  • 1:30PM–2:45PM: Moderated Paper Session, Data Discovery, Correction and Classification, and Workflows for Imagery (Room 31A)
  • 3:15PM–4:30PM: Moderated Paper Session, Analysis and Change Detection Using Imagery (Room 31A)
  • 5:00PM–6:00PM: Special Interest Group Meeting, Imagery Special Interest Group (Room 31A)
  • 6:00PM–8:00PM: Special Interest Group Meeting, Imagery at UC 2011 – Imagery Social (CitySide Corridor Inside)
  • 9:00AM–6:00PM: Imagery Island Expo (Main exhibit hall)

Thursday July 14th, 2011

  • 8:30AM–9:45AM: Moderated Paper Session, Earthquake Hazards, Lunar Mapping, and Streaming Raster Data: GIS and Remote Sensing at the NASA Jet Propulsion Laboratory (Room 31A)
  • 9:00AM–1:30PM: Imagery Island Expo (Main exhibit hall)

Posted in Imagery, Services | Leave a comment

GeoHMS and GeoRAS – Where to find the latest?

The Geospatial Hydrologic Modeling System (GeoHMS) and River Analysis System (GeoRAS) are geospatial toolkits developed by Esri and the Hydrologic Engineering Center (HEC), an organization within the US Army Corps of Engineers, to aid engineers and hydrologists with limited GIS experience.  

The toolkits work in ArcGIS to generate inputs for the Hydrologic Engineering Center’s Hydrologic Modeling System (HEC-HMS) and River Analysis System (HEC-RAS).  The GeoHMS and GeoRAS downloads provide a set of procedures, tools, and utilities necessary to prepare, process, and visualize your geospatial data for input into HEC-HMS and HEC-RAS. 

HEC-HMS Basin Model

In addition, GeoRAS can process the outputs of HEC-RAS, which performs one-dimensional steady and unsteady flow river hydraulics calculations, sediment transport and mobile bed modeling, and water temperature analysis.

HEC-RAS Cross Section

HEC-RAS Lateral

The latest versions of GeoHMS and GeoRAS are compatible with ArcGIS 9.3 and can be found on the US Army Corps of Engineers’ HEC-GeoHMS and HEC-GeoRAS sites.

HEC is currently working to update GeoHMS and GeoRAS for ArcGIS 10, and it has released a statement on its download site announcing the tools’ availability in “Summer 2011.” It is now officially summer 2011, which means the tools may become available at any time. So, check back with the Hydro Blog and the HEC download site often for the announcement of GeoHMS and GeoRAS for ArcGIS 10!

For more information about the HEC software products, check out the Hydrologic Engineering Center Home Page or read the HEC Newsletter.

Special thanks to Caitlin Scopel for providing this post. Questions for Caitlin: CScopel@esri.com.

Posted in Hydro | 1 Comment

GeoCollector for ArcGIS Mobile

The options for high accuracy data collection with ArcGIS Mobile just keep getting better! 

Esri’s Hardware Solutions group is offering a new packaged solution for U.S. customers called GeoCollector for ArcGIS Mobile. GeoCollector for ArcGIS Mobile bundles ArcGIS Mobile software with Trimble GeoExplorer 6000 series handheld devices.

You can now couple the COTS ArcGIS Mobile application with the GeoCollector XH receiver, leverage your local VRS network, and map infrastructure to 10-centimeter accuracy in real time! Or, using a custom solution developed with the ArcGIS Mobile SDK and Trimble Mobile GIS Developer Community resources, you can deploy applications that enable post-processing of GPS positions when VRS is not available and still meet your accuracy requirements.

The warranty and support for GeoCollector is completely managed by Esri. Esri technical support staff will take the lead for all GeoCollector questions and issues. Details on the GeoCollector for ArcGIS Mobile packaged solution can be found here.

Mobile Team

Posted in Mobile | Leave a comment