Tag: Best Practices
Why should a water, wastewater or stormwater utility adopt the Local Government Information Model?
One of the biggest benefits of a water utility adopting the Local Government Information Model is that it makes deploying the ArcGIS for Water Utilities maps and apps easier, faster, and cheaper. The further you deviate from the Local Government Information Model, and in particular its geodatabase schema, the harder it will be for you to implement the maps and apps that are part of ArcGIS for Water Utilities. It will also be difficult and time-consuming to upgrade your ArcGIS for Water Utilities implementation when we release updates.
Changes you make to the Local Government Information Model schema may necessitate extensive modifications to the map documents and to the apps (web apps, mobile apps, ArcGIS Desktop, etc.) that are part of ArcGIS for Water Utilities. So the closer you stay to the core Local Government Information Model, the easier your initial deployment will be, and the easier it will be to migrate your ArcGIS implementation to new releases or to deploy updates to the maps and apps.
It’s also important to note that when we say “adopt” the Local Government Information Model we don’t mean that you necessarily have to use it as is (or more appropriately – as downloaded). You probably will need to configure the Local Government Information Model to meet the needs of your organization. But the key thing to keep in mind is that you should only make changes to accommodate the true organizational needs of your utility. For example, instead of changing the field names to the ones you’d like to use in your organization, modify the field and map layer aliases. Bottom line: don’t reinvent the wheel; just make the changes required to meet specific business needs in your organization.
At the very least you need to change the projection to the appropriate coordinate system and set up the domains to reflect the assets in use at your utility. Small utilities or utilities that are new to GIS may choose to use the Local Government Information Model as is, while larger utilities, mature GIS implementations, or GIS implementations that are integrated with other enterprise systems will undoubtedly need to make more significant configurations or extensions to the schema to reflect their organizational needs.
Water, Sewer and Stormwater Data Modeling Best Practices
The Local Government Information Model incorporates many best practices for water utility GIS. One of the most important best practices is how to represent a water, sewer or stormwater system in GIS.
For years Esri had downloadable data models for water, wastewater, and stormwater utility networks. Those data models were the first freely available water utility GIS data models. They were stewarded by Esri but built by the user community, and they became the industry standard. Globally, thousands of water utilities have built their GIS around Esri’s free data models.
The Local Government Information Model is the next iteration of Esri’s water, sewer, and stormwater data models. In essence, we’ve modernized the data models to reflect how water utilities have been deploying GIS over the past few years, and we’ve also modified the schema to fit the requirements of the ArcGIS for Water Utilities maps and apps. As water utility GIS continues to evolve, Esri will regularly maintain the Local Government Information Model to keep introducing new best practices into the user community and new functionality into our apps.
Comprehensive Data Model
There is no doubt Esri’s water, wastewater, and stormwater data models were an incredibly valuable starting point for water utilities to get their utility networks into GIS. Since the original data models focused primarily on a data structure for the assets that comprise utility networks, we received feedback that many utilities wanted more guidance on how to model operational data (work orders, service requests, customer complaints, main breaks, capital improvement projects, etc.) and base data (road edge of pavement, road centerlines, elevation data, parcels, etc.) in their GIS. The Local Government Information Model solves this problem because it includes a complete schema for typical water utility base data and operational data.
Over the years, we’ve observed that water utilities struggle with how to model and manage schemas for datasets other than their utility networks and operational data – simply put, managing base data can be a challenge for water utilities. For example, we’ve seen many utilities struggle with managing roads, parcels, buildings, etc. in their enterprise GIS, especially when these datasets come from other organizations or departments.
This is a particular issue for water utilities that serve multiple units of local government, such as authorities, countywide utilities, statewide utilities, and private companies. A good example is a water authority whose service territory includes three counties. The water authority needs parcel data that is maintained by the counties, but County A, County B, and County C all use different schemas for their parcels. So the water authority has two choices: leave the parcels in three different data layers and use them as is – which makes analysis, map creation, and integration with other systems at the utility that need parcel data (such as a customer information system) difficult – or invest the time to extract, transform, and load (ETL) the parcels into a common schema so they can be used as a single seamless layer across the service area. The Local Government Information Model can now serve as the common schema in this example.
Easier Data Sharing
We describe the Local Government Information Model as a harmonized information model – meaning it is designed to accommodate typical GIS needs across local government. If the organizations that commonly share data all adopt the Local Government Information Model, it will greatly reduce the time and resources spent establishing a common schema and migrating data to it – thus allowing water utilities to focus on the maintenance and management of their authoritative data.
For example, a private water utility may serve two municipalities. If the water utility and both municipalities adopt the Local Government Information Model, then they can all easily exchange data. When the water utility needs road centerline and edge of pavement layers from the municipalities, the utility can simply import the new data without having to manipulate the schema, and it will have seamless layers for its service area. The same logic applies to the water utility sharing data with the municipalities – when the water utility updates the locations of its upcoming capital projects, it can share that data back with the municipalities, and the municipalities can use it without any schema manipulation.
Best Cartographic Practices for Water Utility Maps
As we’ve discussed in a previous blog, the Local Government Information Model includes the geodatabase schema, map documents, and specifications for the services necessary to deploy the ArcGIS for Water Utilities and ArcGIS for Local Government maps and apps.
The map documents highlight best practices for displaying water, wastewater, and stormwater data in the context in which each map is designed to be used. For example, the map documents included with the Mobile Map Template have best-practice cartography for displaying water utility GIS data in the field, in both daytime and nighttime use maps. The same goes for the map document included with the Infrastructure Editing Template – it is a best-practice map document for editing water utility data with ArcGIS Desktop.
Looking to the Future
The specifications for the services (map, feature, geoprocessing, etc.) necessary for the ArcGIS for Water Utilities maps and apps are also part of the Local Government Information Model. So if other local government entities in the service area of a water utility embrace the Local Government Information Model and ArcGIS for Local Government and start to publish services, then water utilities can consume those services in their maps and apps. In this scenario the water utility may no longer have to import some data into its own geodatabase and can simply consume the services directly from the organization that is the steward of the data.
We hope you’ve found this exploration of some of the benefits water, wastewater and stormwater utilities will experience when adopting the Local Government Information Model helpful. We encourage your feedback on the information in this blog, the Local Government Information Model or ArcGIS for Water Utilities.
Using the ArcGIS 10 Data Driven Pages feature, you can quickly and easily create a professional-quality map book from a single map document. This seminar teaches the workflow for using Data Driven Pages. The presenter also covers how to create an index layer from a feature layer and add dynamic text and locator maps to your map pages.
Who Should Attend
GIS professionals and cartographers working in utilities, transportation, public safety, and government mapping agencies, as well as others who need to produce map books.
The presenter discusses
- Data Driven Pages, map books, index feature extents, and geoprocessing tools.
- The process for building map books.
- Updating, printing, and exporting map books.
Parcel map requirements for line dimensions used to be hard to meet using only labels, which is why many users resorted to annotation. But maintaining annotation is labor intensive, tied to a specific scale, and prone to user error. Labels, on the other hand, are database driven, can easily be compared with the line’s geometry as part of the QA process, and require no maintenance once configured. We spent a few hours configuring the labels for parcel lines and you can see the results below, which are just as good, if not better. This result could never have been achieved without the parcel fabric’s redundancy of lines and the concept of the line-point.
This post can help you configure labels for parcel fabric lines using the standard label engine or the Maplex extension. Even if you are forced to use annotation, you can benefit from this configuration, as labels can easily be converted to annotation. Continue reading
Park analysis and design: Sketching the design of a new park (part 4)
In my previous blog post, I used a voting application allowing citizens to vote on their favorite location for a park based on choices derived from a suitability analysis. Using ArcGIS Server, their choices went into a database and allowed the parks to be ranked based on their popularity. We have a winner, so now our task is to design the new park. Continue reading
Park analysis and design: Voting on a new park location (part 3)
In my previous blog post, I determined suitable locations for a new park by analyzing a series of datasets provided by the City of Redlands. The final output showed a number of parcels that matched the standards established in the model. The next task is to seek feedback from the public. To do this, I’ll take advantage of a web application I developed using ArcGIS Server.
Preparing the data
The park suitability model resulted in an output of a feature class containing many multipart features. A multipart feature, as the name suggests, is a feature with multiple parts. Think of Hawaii as one feature (state) with multiple parts (islands). To break the suitable areas for parks into separate features, I’ll use a tool called Multipart To Singlepart.
With every parcel being its own feature, I can calculate the area for each potential site by creating a field and using Calculate Geometry in the attribute table. Once I have the area in acres, I need to convert all the polygons to points using the Feature to Point tool so I can represent each park as a point location in the web application.
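As a toy illustration of what Calculate Geometry and Feature To Point produce, here is the same logic in plain Python. The real workflow uses the ArcGIS tools, and the parcel geometry below is invented:

```python
# Simplified stand-in for Calculate Geometry and Feature To Point:
# compute a polygon's area (shoelace formula) and a representative point
# (vertex average). This just illustrates the math behind the tools.

def polygon_area(vertices):
    """Shoelace formula; vertices are (x, y) tuples in ring order."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def representative_point(vertices):
    """Average of the vertices -- adequate for convex parcels."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(polygon_area(square))          # 100.0
print(representative_point(square))  # (5.0, 5.0)
```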
The final dataset contains fields for the park’s area and an identification number, which I derived by copying the OBJECTID to a field called ParkID. This number is used to link the park feature to the voting results table, which also has a field for the ID named ParkIDVoted (so I can distinguish it in the Flex code).
Building the web service and application
I’m developing my application using the ArcGIS API for Flex, so I first check if there are any existing samples that I can use as a starting point to help me collect votes. I find the Editing a related table sample, which demonstrates a similar scenario that I can modify for the needs of my own project. This sample takes a set of incidents (stored as points) and allows the user to flag an incident as important. In the code, there’s a map service that holds the points, as well as a table to hold the results. In the geodatabase, these are linked using a relationship class. These datasets need to be in an ArcSDE geodatabase with feature access enabled to allow web editing. Accordingly, I can set up my data this way and publish it with ArcGIS Server, which makes the parks and the table become layers in a map service.
I need to change a few things in the sample to customize it for my own application: the URL of the parks layer and the URL of the table holding the votes. Some field names are different, but other than that, the logic of casting the vote is fairly straightforward.
In terms of the interface, the sample shows how to use the pop-up window (infoWindow) when a park is clicked. I used the same thumbs-up icon and added a bit more information to the information window. Additionally, I published the park access map and the final suitable parcels layers, which can be turned off and on in the application using simple Flex components.
Submitting a vote
When users find a park they are interested in, they click the icon on the map. This sends a query to the server using the x,y location of the map click, which also triggers a relationship query that gets the number of votes of the record in the related table. The infoWindow then displays the ID of the park that was clicked, the park size, and the total count of current records in the related table, which are votes in favor of this location.
To vote for this park, the user clicks the thumbs-up icon, which sends a message to the server (applyEdits) that puts the ID of the park, plus a value for “like” into the related table through the relationship class. The count is increased by one and the total vote count can be seen immediately.
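The server-side effect of that applyEdits call can be sketched in plain Python. The "related table" below is just a list of dicts; the ParkIDVoted field name follows the post, and everything else is a simplified stand-in rather than the actual feature service API:

```python
# Hypothetical sketch of what one applyEdits call records: a single
# vote row appended to the related table. The field names follow the
# post; the storage is just a list, not a real geodatabase table.
import datetime

votes_table = []  # stands in for the related table in the geodatabase

def cast_vote(park_id):
    """Append one vote record, as applyEdits would."""
    votes_table.append({
        "ParkIDVoted": park_id,
        "Vote": True,
        "VoteDate": datetime.date.today().isoformat(),
    })

cast_vote(7)
cast_vote(3)
print(len(votes_table))               # 2
print(votes_table[0]["ParkIDVoted"])  # 7
```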
Counting the results
On the server, the related table collects the votes. Each record in the table is a vote, which includes the Park ID the user clicked, an attribute for the vote (“true”), and the date of the vote.
When the voting period is over, I can run a summary on the final table using the Summary Statistics tool. This counts the number of records with the same ID and creates a table, which I can then build a report on using the new reporting tools in ArcGIS 10.
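The Summary Statistics step boils down to a group-by count. A minimal Python sketch (the vote records are invented; the field name follows the post):

```python
# Rough equivalent of the Summary Statistics step: count votes per park
# ID and rank the parks by popularity.
from collections import Counter

votes = [
    {"ParkIDVoted": 3}, {"ParkIDVoted": 7},
    {"ParkIDVoted": 7}, {"ParkIDVoted": 7},
]

summary = Counter(v["ParkIDVoted"] for v in votes)
ranked = sorted(summary.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)        # [(7, 3), (3, 1)]
print(ranked[0][0])  # winning park ID: 7
```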
Now that I have a winner, the next task is to design the park using the sketching tools in ArcGIS 10. I will cover this in my next blog post.
Accessing the Data
The data, Flex source code, report template, and a few other parts of the workflow can be found here
The rest of the data and tools for this blog series can be found in the Park Analysis and Design group (make sure to filter by Show: All Content at the top of the page)
Content for the post from Matthew Baker
In my previous blog post, I analyzed park accessibility in the City of Redlands and discovered several areas of the city that were farther than one mile from an existing park along the walkable street network. Now, I want to determine where to best locate a new park within the areas I identified as being underserved by current parks.
To answer this question, I’ll conduct a suitability analysis to find parcels that are most appropriate for a new park.
There are two main types of suitability analysis: binary and weighted. Binary suitability analysis yields a binary final answer – 1 or 0, or in our case, suitable or unsuitable. A weighted suitability analysis allows for a range of final answers, from 1 to 10 for example, and allows certain layers to have more influence (weight) on the result of the model. For this example, I’m going to create a binary suitability analysis model.
As with our park accessibility analysis, I’ll start with several datasets from the City of Redlands, including parks, schools, roads, trails (off-road and on), existing and proposed bicycle lanes, and vacant parcels. Before I construct a model, I should know the distances the new park should be from certain features. In most cases, I’m looking to be close to certain features, but in other cases, I want to make sure I’m far enough away, such as with highways and existing parks.
Remember that any of these values can be changed to suit any criteria. ModelBuilder allows a workflow to be created, run, and then modified to suit different ideas of how far each feature should be from a new park.
Creating a data processing workflow
My analysis should read like a flowchart: buffer the schools, trails, and bicycle lanes to make the ‘good’ areas. Buffer the existing parks and highways to make the ‘bad’ areas. Then remove the bad areas from the good areas, and find the areas that are common to the vacant parcels.
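That flowchart is essentially set algebra: union the 'good' buffers, union the 'bad' buffers, erase (set difference), then intersect with the vacant parcels. A toy sketch with grid cells standing in for real geometries (all coordinates invented):

```python
# The flowchart above as set operations, a toy stand-in for the Buffer,
# Erase, and Intersect geoprocessing tools. Each set holds (row, col)
# cells considered "inside" a buffer or layer.

near_schools = {(0, 0), (0, 1), (1, 1)}
near_trails  = {(1, 1), (1, 2)}
near_bike    = {(0, 1), (2, 2)}

near_parks   = {(0, 0)}   # too close to an existing park
near_highway = {(2, 2)}   # too close to a highway

vacant_parcels = {(0, 1), (1, 1), (1, 2), (2, 0)}

good = near_schools | near_trails | near_bike   # union of "good" buffers
bad  = near_parks | near_highway                # union of "bad" buffers

suitable = (good - bad) & vacant_parcels        # erase bad, keep vacant
print(sorted(suitable))  # [(0, 1), (1, 1), (1, 2)]
```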
Developing a suitability model
To use the data and tools found in ArcGIS to accomplish suitability analyses, I’ll develop a model using ModelBuilder. ModelBuilder acts much like a living flowchart, with data elements connecting to tools creating outputs just like the flow processing diagram. A model serves not only as an organizational tool for doing data processing, but the elements of the model store parameter values and data paths that can be changed, and the model itself can be shared and run on different data. For example, other users can change the input datasets to their own parks and street network to achieve the same analysis.
By definition, geoprocessing tools take one or more pieces of geographic data, run a process based on parameters I define, and create a new piece of data as the result. That first result can be fed into another tool which results in yet another piece of data. Once the new data has been created, the old result can be discarded. This data is called intermediate data. Each piece of intermediate data should be written to a scratch workspace, which is defined in the environment settings of the map or model. Keeping intermediate data in a scratch workspace is a great way to ensure I don’t end up with random datasets all over my computer.
Tools for models can be found using the Search window, which lets me type the name of a tool, dataset, or script and shows results across all types of data. To add a tool to a model, drag the tool by its name and drop it on the model canvas. Model elements can be connected using the Connect tool in the model window. Double-clicking a tool or element opens a dialog box that lets me confirm the settings are correct before I run the model. ModelBuilder will also check that the inputs are valid before running, and I can check them all manually by clicking the Validate Entire Model tool on the ModelBuilder toolbar. I can save the model in a toolbox, which can be stored anywhere on disk or, as I am doing, in a geodatabase.
When the model runs, a dialog box shows me the progress, notification that it is finished, and any messages, warnings, or errors that might have occurred. The Results window is the location to track the status of a model or other geoprocessing operation.
Reusing models as tools
Another nice feature of models is they can be used in other models as tools. Since I already proved the effectiveness of measuring distances along the road network versus straight-line buffers, I can take the method I developed and use it as a tool in my park suitability model. I’ll call the tool Buffer Along Roads and use it for the schools and existing parks, which are the only datasets that require travel to be measured along the road network.
My model tool will operate like any other tool: it requires an input point dataset and will create a polygon dataset containing buffers along the roads using the distances exposed in the reclassification scheme. Once I’ve created these distance polygons, I then choose the ones that meet my criteria—in this case, those that are at least ½ mile from existing parks and within ½ mile of schools. From there, the rest of my analysis can continue using straight-line buffers from bike lanes, trails, and highways.
Determining the final location
When the model is finished, I see that there is more than one suitable location for a new park. I then have some work to do to figure out the final parcel or location. For example, perhaps I’m looking for the area that is closest to downtown. Using my park access analysis as an example, converting the final suitable polygons to points and running them through a cost distance tool would be one method to use.
However, I want to allow citizens to provide input. In the next entry in this series, I’ll use ArcGIS Server to collect volunteered geographic information (crowd-sourced or user-generated content) to let users vote on their favorite location for a new park. This concept is now being referred to as “participatory planning.”
Accessing the data and models
The data and models for this blog post can be found here
The rest of the data and tools for this blog series can be found in the Park Analysis and Design group here (make sure to filter by Show: All Content at the top of the page)
Content for the post from Matthew Baker
Park analysis and design – Measuring access to parks (part 1)
Have you ever wondered how far you are from a park? In this post, I’ll examine the placement of parks in Redlands, California, and determine which areas are best and worst served by a park. In future posts, I’ll discuss siting a new park using binary suitability analysis, web-based tools for evaluating and increasing park access, and the design of a new park using ArcMap and feature template-based editing.
Over the last year, I’ve been attending various urban planning conferences and have discussed with several urban planners the need to design healthier communities – a notion I have heard echoing throughout the planning community.
One concern is to figure out how well areas are served by parks. In my analysis, I want to determine which areas are within one mile of a park and visualize the results in a way that is easy to understand. I chose one mile, assuming most people can visualize how long it would take them to walk a mile, but this analysis could certainly be easily altered to measure any distance and present the results in a similar manner.
To do this, I could use a simple one-mile buffer around the parks, as the first map shows. However, a map created that way does not consider modes of travel. I want to measure pedestrian access to parks, so the best route is to travel along a road, preferably on the sidewalk.
The more accurate way to measure park access is to determine the areas around the parks that fall within a specified distance of them along the road network. In network analysis this is called a service area (or drive-time) analysis, but it uses the road network only.
There are tools within the Spatial Analyst toolbox to run a cost-distance analysis: essentially a distance map calculated against a surface describing how difficult it is to travel across a landscape. This gives me the ability to rank our landscape by how easy it is to travel, road or not.
I want to then create a map showing areas that are ¼, ½, 1 mile, and greater than 1 mile from a park along the road network and show the distances on the map as well as on a graph.
Creating a travel cost surface
For my analysis, I am first going to create a cost surface that describes ease of travel through Redlands, with areas along roads being easier (cheaper) to travel through, and areas farther from roads more difficult (expensive) to travel.
To do this, I start by creating a raster surface where every cell has a value for the distance it is from itself to the nearest walkable road segment; that is, I don’t have to drive a car to get to a park and can even get exercise on the way.
First, I’ll need to map the road network. From the City of Redlands roads dataset, I can simplify all the roads into three main types: minor, major (arterial), and highway.
Since pedestrians cannot safely or legally walk on the highways, I can remove them from the analysis. The first tool in the model will be the Select tool, which allows a set of features to be removed for analysis by an SQL statement. In this case, I’ll use Road Type not equal to Highway to remove the highways from the analysis and create a walkable road dataset.
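The Select tool's filter is just an attribute query. A minimal sketch of the same logic in Python, with illustrative field and road-type names:

```python
# The Select tool's SQL filter ("RoadType" <> 'Highway'), approximated
# with a list comprehension. Field names and records are illustrative.
roads = [
    {"Name": "Olive Ave",   "RoadType": "Minor"},
    {"Name": "Lugonia Ave", "RoadType": "Major"},
    {"Name": "I-10",        "RoadType": "Highway"},
]

walkable = [r for r in roads if r["RoadType"] != "Highway"]
print([r["Name"] for r in walkable])  # ['Olive Ave', 'Lugonia Ave']
```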
Of course, this would be a good place for a better road dataset in which each street had an attribute for whether or not it is walkable. I have heard of a few communities and organizations starting to capture this data, and it would be most useful for this application.
Once I have extracted the walkable roads, I’ll run the Euclidean Distance tool to create a surface in which each raster cell holds a value for the distance between itself and the nearest road.
The Euclidean Distance tool creates a surface where every part of the study area is broken down into a square (cell), and every square is assigned the distance to the nearest road segment. I’ve adjusted the symbology to group cells into distance bands.
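Conceptually, the tool computes, for every cell, the straight-line distance to the nearest road cell. A brute-force miniature on a tiny grid (the real tool is far more efficient, but each cell value means the same thing):

```python
# Brute-force miniature of the Euclidean Distance tool: for each cell
# in a 3x3 grid, find the straight-line distance to the nearest road
# cell. Road positions are invented.
import math

road_cells = [(0, 0), (0, 1), (0, 2)]   # a road along the top row
rows, cols = 3, 3

distance = [
    [min(math.hypot(r - rr, c - rc) for rr, rc in road_cells)
     for c in range(cols)]
    for r in range(rows)
]

print(distance[0][1])  # 0.0  (cell is on the road)
print(distance[2][1])  # 2.0  (two cells straight down from the road)
```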
Creating a cost surface
I’ll now borrow a concept from a weighted overlay (suitability) model and reclassify the road distances onto a scale of 1 to 6, where 1 is the cheapest (easiest to travel) and 6 is the most expensive (most difficult to travel). To do this, I use the Reclassify tool, which allows me to define the number of classes into which I want to reclassify the data. The Old Values column holds the distances from the Euclidean distance raster; the New Values column holds the new values assigned to each range of old distance values.
Notice I’m going to reclassify the distances using the same distance bands I used earlier to describe how far each part of town is from the nearest road. Each cell in each distance band then gets a new value describing its cost on a scale of 1 to 6.
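The reclassification is a simple threshold lookup. A sketch with illustrative band breaks (the actual distances depend on the bands chosen in the Reclassify dialog):

```python
# The Reclassify step as a lookup: map a cell's distance-to-road (in
# feet; the band breaks here are invented) onto a 1-6 travel cost.
BANDS = [          # (upper bound of band, cost)
    (100, 1),      # on or beside a road: cheapest
    (250, 2),
    (500, 3),
    (1000, 4),
    (2500, 5),
]

def reclassify(distance):
    """Return the 1-6 cost for a distance value (Old Value -> New Value)."""
    for upper, cost in BANDS:
        if distance <= upper:
            return cost
    return 6       # beyond the last band: most expensive

print(reclassify(50))    # 1
print(reclassify(800))   # 4
print(reclassify(9999))  # 6
```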
Here are the new reclassified distances. Notice the values become more expensive when moving away from the roads.
This now becomes the cost surface that I’ll use to measure park access.
Evaluating park data
Because the park data is stored as centroid points, the points may not necessarily reflect the true access points to the parks themselves. By creating points at the corners of each park, I have more suitable locations from which to measure park access.
Borrowing again from the City of Redlands dataset, I’ll simply select the parcels that intersect the park points and run those intersecting parcels through the Feature Vertices To Point tool in the Data Management toolbox.
Depending on the geometry of some of the parcels, I might end up with a little more than just the corners, but this is a much more accurate representation of how to get into the park than just a point in the middle of the parcel.
Calculating cost distance
Next, I’ll run the new park points against the cost surface using the Cost Distance tool in the Spatial Analyst toolbox. Using this tool, I can create a raster surface where each cell has a distance from itself to the nearest park point along the cheapest path—in this case, the cells that are nearest to the roads as described by our cost surface.
The resultant raster gives a picture of how far each location in the entire city is from the nearest park, which is somewhat hard to visualize. I can then reclassify the distances into simple ¼-, ½-, and 1-mile areas.
Visualizing the results
Taking the walkable road network into consideration certainly does give a much better picture of areas served by parks—and notice the areas that now show up as underserved that the buffer didn’t expose. These areas are over a mile from a park, which meets our criteria of underserved.
In addition to mapping, I can also create a graph that visualizes the percentages of the city that are served by parks by their respective distances.
Using the graphing tools in ArcMap, I can create a new field to hold the percentage, calculated by dividing the area of each feature in my walkability analysis by the total area of the city (stored as a variable in the model). I can create a table that stores the output values of my reclassification (1, 2, 3, 5, 9) and their respective labels (500’, ¼ Mi, ½ Mi, 1 Mi, and More than 1 Mile) and join that table to my walkability output. It’s an extra step, but one that can be repeated if my underlying data changes and I want to run the analysis again.
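The percentage field and label join can be sketched with plain dicts. The output values and labels follow the post; the band areas are invented:

```python
# Sketch of the percentage calculation and table join feeding the graph.
# Percent = band area / total city area; the label lookup stands in for
# the attribute join.
bands = [                       # output value -> area (acres, made up)
    {"Value": 1, "Acres": 900.0},
    {"Value": 2, "Acres": 2100.0},
    {"Value": 9, "Acres": 3000.0},
]
labels = {1: "500'", 2: "1/4 Mi", 9: "More than 1 Mile"}

city_acres = sum(b["Acres"] for b in bands)
for b in bands:
    b["Percent"] = round(100 * b["Acres"] / city_acres, 1)
    b["Label"] = labels[b["Value"]]   # the table join

print(bands[0]["Percent"], bands[0]["Label"])  # 15.0 500'
```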
Now that I have identified that there are areas underserved by parks, the task of my next blog post will be to determine the best location for a new park using a simple binary suitability analysis.
Data is provided by the City of Redlands. The data and models for this blog post can be found here
Content for the post from Matthew Baker
Every year leading up to the Esri International User Conference, the water, wastewater, and stormwater GIS user community asks us which events are “can’t miss.” So we thought we would highlight some key things for water utility GIS users at the 2011 User Conference.
First, and most importantly, the User Conference is full of opportunities to learn and bring valuable information back to your organizations, so we are just highlighting a few of the many great presentations, meetings and events at the UC. No matter how you choose to spend your time at the UC, you and your organization will benefit from it. If you haven’t registered for the 2011 User Conference yet, you can register here.
Since we are only highlighting a few of the many activities at the 2011 UC, we suggest you take advantage of the online Agenda Search to make the best use of your conference time. You can query by keyword, such as “water”, “sewer” or “stormwater”, to find presentation topics, and you can also view all of the presentations, events, and meetings by date.
So, here are some of our recommendations for the 2011 User Conference:
Saturday July 9th
9:00 AM to 5:00 PM Water/Wastewater Meeting – Convention Center Room 29A
Join us for an all-day meeting focused on water, wastewater and stormwater GIS. Presentations by ArcGIS users, Esri Business Partners and Esri. For more information and to register contact Christa Campbell.
Monday July 11th
9:00 AM – Plenary Session
Not to be missed, kick off the User Conference by attending the plenary session and get energized for the week. Also see a preview of ArcGIS 10.1.
3:30 PM – Map Gallery Open
See maps from water utilities as well as many other industries.
4:30 PM – Lightning Talks – Ballroom 20 C & D
Lightning talks feature rapid-fire speakers giving 5-minute presentations about a variety of GIS topics. It’s a great format to showcase a lot of ideas and get you thinking about new ways to leverage GIS.
Tuesday July 12th
Learn about templates, maps, apps, and the Local Government Resource Center. Also learn about the Local Government Information Model, which is the data model for ArcGIS for Water Utilities.
9AM – Exhibit Hall Opens
Be sure to visit Esri’s Water Team at the Water Industry Island in the exhibit hall. We’re available to answer your questions, talk about the templates, demonstrate ArcGIS for Water Utilities and take your feedback.
At the Geodatabase Management Island in the Esri Showcase, Esri staff can perform “Health Checks” on your water utility GIS data. The Health Checks include automated checks on your data in a personal or file geodatabase so you can understand the overall quality of your data.
This service is available from 9 AM to 5:30 PM Tuesday the 12th and Wednesday the 13th, and 9 AM to 1 PM on Thursday the 14th. We expect high demand for Health Checks, so we encourage you to email firstname.lastname@example.org with your name, organization, and contact information to reserve a preferred date and time.
9AM – GIS Managers’ Open Summit – Ballroom 20 B/C
If you are a GIS manager at a water utility be sure to stop into the GIS Managers’ Open Summit and share knowledge with other GIS managers from a variety of industries. This is a great venue to learn about best practices and cutting edge advances in managing GIS within any organization.
Wednesday July 13
12:00 PM – 1:00 PM – Team Water/Wastewater & Stormwater User Group Meeting – Room 2
Come learn what the user community is up to, get updates on user community driven projects and get some key briefings from the Esri Water Team.
1:30 PM – 2:45 PM – Understanding Geometric Networks Technical Workshop – Room 3
Geometric networks are a component of the geodatabase that every water, wastewater, and stormwater utility should be benefiting from. Come learn about geometric networks and the new geometric network capabilities coming in ArcGIS 10.1.
3:15 – 4:30 PM – ArcGIS for Water Utilities – An Introduction – Room 32 B
Get an overview of ArcGIS for Water Utilities, including demonstrations. We’ll also discuss our future plans. This is a great opportunity to give us feedback and request functionality from the team behind ArcGIS for Water Utilities.
7:00 PM – 10:00 PM – Team Water/Wastewater “Pool Party” – Pool Terrace
Kick back with your peers, enjoy some food and beverages and listen to some music. This is a fantastic opportunity to network with the water, wastewater and stormwater user community and share information with peers in a social setting. It’s also one heck of a party.
Thursday July 14
10:15 – 11:30 AM ArcGIS for Water Utilities – Configuring – Room 32 B
Learn how to configure the maps and apps that are part of ArcGIS for Water Utilities. This session will also cover general best practices for configuring ArcGIS as a platform to support water, wastewater and stormwater GIS.
We look forward to seeing you at the 2011 Esri User Conference!
On May 18th we will be hosting a meeting of the Esri Mid-Atlantic Water/Wastewater Special Interest Group in our Chesterbrook, PA office. The meeting will run from 9 am to 3 pm. Lunch is provided and is graciously sponsored by Esri Business Partner GBA Master Series. Continue reading