
ESRI Data & Maps 2009 Update now available

12/08/09–ESRI Data & Maps is a collection of preconfigured data and maps for ArcGIS. It is freely available for all ArcGIS customers as a set of DVDs or layer packages you can download from ArcGIS Online.

An update to the 2009 version is now available. For details about what’s new, see the ArcGIS Data blog.

Posted in ArcGIS Online, Services | 2 Comments

Geometric networks for water utilities

Whether you are implementing GIS in your water, wastewater, or stormwater utility and creating a data model for the first time, or you are updating your existing GIS data model, you will no doubt ask yourself this question –

How should I model my utility’s assets in a geometric network, and why should I use a geometric network?  Geometric networks enable utility system tracing, error checking, and better productivity while editing.

But how should I build it?  That one simple question spawns many sub-questions.  Should I use complex edges?  What edge-junction or edge-edge rules should I implement?  What are weights?  Should I worry about cardinality?  We have these conversations all the time with utilities and partners.

Below, you will find information and some guidance to help you answer these questions.  We also recommend that you read through ESRI’s web help on geometric networks as a starting point.


First, you will need to create your network.  When creating your network, you have a few options.  The most important are choosing which layers participate in the geometric network and which layers, if any, are sources or sinks.

So which layers should you include in your geometric network?  Keep in mind that the geometric network should encapsulate how your distribution or collection system actually operates.  So include only layers that participate in the logical network; to think of it another way, the layers containing the assets that determine how your collection or distribution system functions.

These typically are mains, valves, fittings, hydrants, laterals, virtual lines, manholes, catch basins, etc.  Since the geometric network should only contain layers that affect the network, a change in geometry or attributes can affect analysis on that network.  Data like leaks or SCADA sensor locations are operational datasets; they show an incident on the network or a value or reading of the network.  So these operational layers should be included in the operations dataset and not in the geometric network.

In order to create a geometric network, you’ll need a feature dataset which contains all of the feature classes for that network.  You’ll also need at least the ArcEditor level of ArcGIS Desktop.  When prompted to either build the geometric network from existing features or create an empty network, you’ll typically choose the first option.

After you determine the layers that should be part of your geometric network, you need to think about how you are going to model flow.  When setting sources or sinks, make sure to set only one of these.  This is critically important, and a mistake that we see all too often.  Do not set one feature as a source and another as a sink.  You only need one, and having both in a geometric network for water, wastewater, or stormwater will lead to odd network behaviors.  Typically the NetworkStructure feature class, or a similar feature class containing a relatively small number of points, is used.

While creating the network you don’t specify whether a feature is a source or a sink, just that it could be one or the other.  This adds a field called AncillaryRole to any feature classes you specify.  Later, in ArcMap, you can set the value of this field for individual features to source, sink, or none.  These values can be used to establish flow direction for the non-looped portion of your network.  You could instead choose to use the digitized direction of your lines to establish flow direction; in that case, sources and sinks are not used.
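The mechanics of establishing flow from a single sink can be sketched without any ArcGIS code: given one sink and a non-looped set of edges, flow direction falls out of a simple walk away from the sink.  The junction and edge names below (MH1, E1, etc.) and the pure-Python graph are illustrative assumptions only, not the geometric network’s internal implementation:

```python
from collections import deque

# Hypothetical non-looped sewer segment: (edge_id, junction_a, junction_b),
# digitized in no particular direction.
edges = [
    ("E1", "MH1", "MH2"),
    ("E2", "MH2", "MH3"),
    ("E3", "MH4", "MH2"),
]
sink = "MH3"  # the one junction whose AncillaryRole is set to sink

# Build an undirected adjacency list, then walk outward from the sink;
# each edge is oriented so flow runs toward the sink.
adjacency = {}
for edge_id, a, b in edges:
    adjacency.setdefault(a, []).append((edge_id, b))
    adjacency.setdefault(b, []).append((edge_id, a))

flow = {}  # edge_id -> (upstream_junction, downstream_junction)
visited = {sink}
queue = deque([sink])
while queue:
    node = queue.popleft()
    for edge_id, neighbor in adjacency.get(node, []):
        if neighbor not in visited:
            visited.add(neighbor)
            flow[edge_id] = (neighbor, node)
            queue.append(neighbor)

print(flow["E1"])  # ('MH1', 'MH2'): flow on E1 runs toward the sink
```

Notice that with two sinks the walk would meet itself and some edges would get contradictory directions, which is exactly the odd behavior described above.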

After the geometric network has been created, you need to set up the core properties of the network. 

Let’s first think about complex edges versus simple edges.  This is an easy decision to make.  In a geometric network, a simple edge must be split at every junction, so every valve, manhole, fitting, etc., on an edge splits that edge.  Complex edges allow one segment to have many junctions on top of it without requiring that segment to be split.  (By split, I mean separate records in the geodatabase.)  Usually, complex edges are used only on your mains and laterals for water, sewer, and storm.  This allows you to model your segments by the method defined by your utility; there is no standard industry definition of a “pipe segment,” and we often see utilities making a conscious decision about how they want to define a pipe segment and using that definition across all of their operational and business systems.  We have seen people model laterals as simple edges.  Typically, simple edges are used here because the lateral and the connection point (meter, service connection) are a representation of the actual meter and lateral.  The representation allows you to perform tracing through the network to the meter and connect the meter ID to a billing or customer system.
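A toy illustration of the record-count difference between the two edge types; the main length and junction positions are made up:

```python
# Hypothetical 500 ft main with three valves placed along it.
junction_positions = [120.0, 250.0, 410.0]  # distance along the main, in feet
main_length = 500.0

# Simple-edge model: every junction splits the line, so one main
# becomes four geodatabase records.
breaks = [0.0] + sorted(junction_positions) + [main_length]
simple_edge_records = [
    {"from_m": breaks[i], "to_m": breaks[i + 1]}
    for i in range(len(breaks) - 1)
]

# Complex-edge model: one record; the junctions sit on top of the edge
# and the logical network tracks the splits internally.
complex_edge_record = {
    "length": main_length,
    "junctions_on_edge": junction_positions,
}

print(len(simple_edge_records))  # 4 records instead of 1
```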

Next, we need to determine whether to set connectivity rules.  Keep this in mind: if you set just one connectivity rule in your geometric network and wish to use the validation tools, then you need to set up all the rules.  Figuring out how all your assets connect in every situation can be a complex process.  Despite being complex to implement and maintain, connectivity rules certainly offer large benefits.

Connectivity rules allow you to model the logical connections of your network.  To support all connection types, you need to make sure your data model supports them.  Connectivity rules can leverage a geodatabase design element called subtypes.  Subtypes allow more complex modeling of your data: within a feature class, features are assigned to a subtype, which may have different default values, different domains, and different connectivity rules than the other subtypes within that feature class.  The example template geodatabase is simplified and doesn’t include any subtypes.  This means that for connectivity purposes a fitting is just a fitting, not a tee, bend, or cap.  Likewise, for connectivity purposes a lateral line is just a lateral line, not a hydrant lateral or service lateral.  With a more detailed design that includes subtypes, you can make more extensive use of connectivity rules.  That is, you could have a rule that says a hydrant must connect to a hydrant lateral line and that a hydrant lateral line must connect to one hydrant.  You could also specify that a hydrant feature be added by default at the free end of a hydrant lateral and that a tap fitting be placed automatically where the lateral line connects to the main.  You could set up a similar set of rules for service lines and meters.

Within connectivity rules, there is an option to set cardinality.  This lets you go beyond just how your assets can connect and define how many assets can connect to each other.  Thinking about fittings again: with subtypes for fittings, you could specify that a tee fitting must connect to 3 pipes, an end cap to 1 pipe, etc.  So to model proper cardinality, you need to design your data in a way that properly defines the number of connections each asset can have.
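As a sketch of the idea, here is how edge-junction rules plus cardinality checks might look if you wrote the validation yourself.  The subtype names and rules are hypothetical, and the real geodatabase validation engine works differently; this only illustrates what the rules express:

```python
# Hypothetical rules: (edge subtype, junction subtype) pairs allowed to
# connect, plus how many edges each junction subtype expects.
allowed = {
    ("HydrantLateral", "Hydrant"),
    ("HydrantLateral", "TapFitting"),
    ("Main", "TapFitting"),
    ("Main", "TeeFitting"),
}
cardinality = {"TeeFitting": 3, "EndCap": 1, "Hydrant": 1}

def validate(junction_subtype, connected_edge_subtypes):
    """Return a list of human-readable rule violations for one junction."""
    errors = []
    for edge_subtype in connected_edge_subtypes:
        if (edge_subtype, junction_subtype) not in allowed:
            errors.append(f"{edge_subtype} may not connect to {junction_subtype}")
    expected = cardinality.get(junction_subtype)
    if expected is not None and len(connected_edge_subtypes) != expected:
        errors.append(
            f"{junction_subtype} expects {expected} edges, "
            f"found {len(connected_edge_subtypes)}"
        )
    return errors

# A tee with only two mains attached fails the cardinality check:
print(validate("TeeFitting", ["Main", "Main"]))
```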

With a simple data model, like the one included with the Water Utility Resource Center templates, you can still set a connectivity rule if desired.  For instance, you can specify that a wLateralLine should connect to a wMain and that a fitting must be added by default.

Some of the edit tools in the Network Editing template are designed to assist with automation and basic connectivity testing without the use of geodatabase connectivity rules.  For example, the connectivity checker tool merely looks at feature types and makes sure they logically connect to each other.  So if you want to use connectivity to enhance your editing experience, you can do so without modeling every asset’s connectivity restrictions in the geodatabase.  For instance, you can model a hydrant to a lateral to a main and not worry about modeling everything else; you will just have some connectivity errors when you validate, which you can choose to ignore.

Next, you will see an option for setting weights.  Geometric network weights can be used in two ways.  Weights can act as a filter, tracing only features with matched values; this is somewhat advanced and is used primarily in telecom and electric networks.  Weights can also be used to aggregate flow.  This second usage is helpful for wastewater and stormwater networks where flow direction is known, that is, the non-looped portion of the network.  Using trace weights, we can accumulate flow upstream from a specified location.  You might add a trace weight on the length of your gravity mains and laterals in order to later obtain the total length of pipe upstream from a given location.  You might also add a field to your wastewater lateral points representing estimated gallons entering the system.  By creating a trace weight on this field, you can summarize gallons at any point in your network using the Find Upstream Accumulation trace task on the Utility Network Analyst toolbar.  If desired, the system could then store these accumulated flow values along your network in the manholes and gravity mains.  For an example of this, see the Calculate Accumulation script.
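The accumulation idea can be sketched in a few lines of plain Python.  The manhole IDs and pipe lengths are made up, and this only illustrates what an upstream-accumulation trace computes, not how the Utility Network Analyst implements it:

```python
# Hypothetical gravity network: flow[edge] = (upstream_mh, downstream_mh);
# weight[edge] = pipe length in feet (the field the trace weight is built on).
flow = {
    "E1": ("MH1", "MH2"),
    "E2": ("MH4", "MH2"),
    "E3": ("MH2", "MH3"),
}
weight = {"E1": 300.0, "E2": 150.0, "E3": 200.0}

def upstream_accumulation(start_mh):
    """Sum the trace weight over every edge upstream of start_mh."""
    total, frontier = 0.0, [start_mh]
    while frontier:
        node = frontier.pop()
        for edge, (up, down) in flow.items():
            if down == node:
                total += weight[edge]
                frontier.append(up)
    return total

print(upstream_accumulation("MH3"))  # 650.0: E3 plus everything above MH2
```

Swapping the length field for an estimated-gallons field gives the gallons-summary example from the paragraph above.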


In short, weights are typically not used by most utilities.  As you can see, they provide some advanced functions, but they are not required to model and work with a geometric network.

Lastly, if you are tackling some of these issues, we recommend taking ESRI training so you can understand all of the implications of what we’ve discussed in this post.  Proper training or consulting help with creating your data model or implementing the geometric network will undoubtedly save you a lot of time and money once your data model is in production.

Posted in Water Utilities | 1 Comment

The Evolution of the Water Distribution Capital Improvement Planning Template

As you may have seen, we released the Water Distribution Capital Improvement Planning (CIP) Template last week.  First, we wanted to say a big thank you to all of our users and business partners who helped us refine the initial geoprocessing models and the toolset and also shared their workflows for capital planning.

We’ve already had a few questions about why we chose the term Capital Improvement Planning (CIP) to describe this template, since not all utilities use that term.  When we use the term CIP, we mean the long-term plans of a utility to manage its assets and/or expand its system: what you may also call a “Capital Plan,” “Long Term Plan,” or “5-year plan.”

Personally, I think the CIP Template is a great example of how ESRI listens to our water utility customers and responds to their needs.  Over the past few years, numerous customers have told us that they want to better leverage their asset data in GIS, as well as their operational data (work orders, CIS, water quality), to support their long-term plans.  Of course, we thought that giving our customers a geographic view of all that asset and operational data was the best place for them to start.  We also heard from many of our water and wastewater customers that their long-term planning has evolved from an occasional event to a continual process because of funding issues, grant availability, coincidence with other projects that a utility could share costs with, and the desire to be quick and proactive in eliminating the risk of future critical asset failures.

Also, we are excited because the CIP template is a great example of GeoDesign.  We’ll be publishing a post shortly that explores the principles of GeoDesign and relates them back to the CIP template.

2 Parts of the CIP Workflow

As we dug into the CIP process, we observed 2 distinct but related workflows.  The first was to assemble data from many sources and analyze that data to look for where projects are needed.  This part of the process is tailor-made for the benefits of GIS: using GIS as the place where different types of data are assembled into a common view, and using the analytical capabilities of GIS to gain better insight into the aggregated data.  Because this analysis needs to be iterative (looking at multiple data layers with different weighted criteria), auditable (you have to be able to defend your findings to a PUC and your ratepayers), and automated (to save time, money, and resources), it is a perfect match for geoprocessing models in ArcGIS.

GIS Analysis for CIP Decision Making

At first we took the approach that ESRI should try to build a few geoprocessing models that all water and wastewater utilities could use to score and rate their assets by estimated remaining asset life, condition, or criticality.  We figured that we could do some research, interview some of our users, and figure out these geoprocessing models (our inner geography geek begged us to take this approach first).  What we quickly realized was that there isn’t a silver-bullet set of geoprocessing models we could build, because every utility has its own approach to long-term asset management and its own priorities (KPIs, the level of service it wants to provide, hot-button issues, fiscal condition, etc.) that drive its long-term planning.

This was also a great reminder that even though we have the ability to use technology to automate a process, the human element is still critical.  The more we talked with the engineers who are creating these CIP plans, the more we realized they need a better way to manipulate and process data so they can apply their engineering expertise to make decisions about capital projects.  We also noticed that while these engineers were somewhat aware of the analytical capabilities of GIS, they weren’t aware of the geoprocessing framework core to ArcGIS or of how to use ModelBuilder to automate analysis and create a reusable toolset.

So we decided to focus the CIP template on showing the water utility community how it could benefit from automating spatial analysis with the ArcGIS geoprocessing framework by providing some generic models.  Please keep in mind that the intent of the models we’ve provided in the CIP template is to show you how geoprocessing and ModelBuilder work within ArcGIS so you can create geoprocessing models that reflect how your utility wants to manage assets and plan for the long term.  Incidentally, if you want to learn more about GIS analysis, geoprocessing, or ModelBuilder within ArcGIS, ESRI has lots of great resources, including online training, books, and classroom instruction.

Estimating Project Costs

The second part of the CIP workflow we observed was estimating CIP project costs: estimating the cost of a project based on either replacing existing infrastructure or adding new infrastructure (main extensions, interconnections, extending service to new subdivisions, etc.).  It’s important to note that all of the functionality in this part of the CIP process is core to ArcGIS and the geodatabase; all we’ve done is customize the application to automate and simplify this part of the workflow.  This is what we decided to call the Cost Estimating Tools.

The first step in estimating project costs is to group assets together into projects.  In this part of the process you visualize the data you brought into GIS, along with the results of your analysis, then determine which assets you want to include in a project and your rehab or replacement strategy for those assets, and then save that information.  So you are literally visualizing data in GIS (most likely working with many layers, including the same feature datasets symbolized in different ways) and performing spatial and attribute queries to come up with candidate assets to include in CIP projects.

From there, assets that are in need of replacement or rehabilitation and are spatially close together are grouped into projects.  We’ve heard from many water utilities that without a spatial context it was a real challenge to group assets into appropriate projects, and also a challenge to track and manage information about candidate assets for CIP projects throughout the CIP planning process.  Water utilities were struggling to support their CIP process with paper maps, tracking the assets that were part of a project, including the costs to replace those assets, in spreadsheets.

So traditionally, this CIP process took a lot of staff time and also led to uncertainty about whether utilities were actually spending their money on the most appropriate capital projects.  We also heard that utilities struggled with how to update data when they tried to refine a large candidate list of CIP projects down to just a few to carry forward into design, and that it was next to impossible to look at multiple scenarios for the same project area (asset groupings and rehab or replacement approaches) because so much of this process was manual or spreadsheet driven.

We took the approach that if a utility has its assets (water distribution, wastewater collection, or stormwater) in GIS, it should use that GIS asset data to group assets into CIP projects and then store information about the CIP projects (such as the extent and all of the assets that are part of each project) as new data layers in GIS.  This enables a utility to create an authoritative source of data about its proposed capital projects in GIS.  This is what drove us to create the Cost Estimating Tools.

As we began to demonstrate early versions of the Cost Estimating Tools to our utility users, we got a lot of great feedback that helped us refine the tools.  We were told that to be really useful, the tools should include the ability to either rehab or replace existing assets and to extend mains, so we programmed that functionality into the tools.  Our users also told us that they needed to be able to compare the costs of different replacement strategies (open cut, trenchless, etc.) for the same set of assets, so we designed the tools to make it easy to compare the costs of using different rehab methodologies.  We also knew that the costing element of the tools needed to be flexible: individual utilities favor different pipe materials, which can be set as defaults, and unit costs are often specific to a utility, so those can be easily configured in a simple table.

So what we wanted to do with this post was explain how we arrived at version 1 of the Water Distribution CIP Template.  We are very interested in your feedback so we can incorporate more useful changes in version 2.  We’d also like to hear about any geoprocessing models that you would like to use for CIP planning.  So, please leave us feedback here – http://forums.esri.com/forums.asp?c=55&s=426#426

In the next few weeks we’ll be recording a video of the Water Distribution CIP Template in action, and we are also going to do a webcast in December that takes a deep dive into the CIP Template.

Posted in Water Utilities | 4 Comments

CIP Template Released

If you have been following us on Twitter, you already know that we released the ArcGIS Water Distribution Capital Planning template (we are calling this the CIP Template for short) yesterday.  The CIP template includes a set of models to help you understand how you can use GIS to score and rank your infrastructure, and a set of tools to provide cost estimates for rehabilitating, replacing, or building new infrastructure.


We will film a video at the end of October that shows you in detail how these tools work, and we’ll be doing a live webcast in December to explore the CIP template in depth.  In the meantime, below is a little help with the CIP Template.



We included 6 models that show different ways you can analyze your data.  To run these models, you will need to create a temporary file geodatabase and set the environment variables for each model.  The two variables you need to set are the Current Workspace (the folder that contains CapitalPlanning.gdb) and the Scratch Workspace (the folder that contains the temporary file geodatabase you just created).



The Project Cost Estimating tools use three tables in CapitalPlanning.gdb for configuration.  These tables are shipped to work with the data in Sample.gdb.  If you want to change costs or the configuration, you will need to edit these tables.

CIPDEFINITON – This table defines which feature classes to cost, the fields to look at (such as Diameter and Material), and a few other parameters.

CIPCOST – This table defines the cost for a particular asset.  When costing an asset, you define a Strategy, like Replacement or Rehabilitate, then an Action for that Strategy, like Open cut for a Replacement.  For each Strategy and Action, you then define the cost based on the fields you set up in the definition table.  So if you are looking at wMains as a layer and you are interested in the fields Diameter and Material, you would select your Strategy, the Action for that Strategy, the Material (say PVC), then the Diameter (say 12), and define the cost per foot.  By using a Strategy, an Action, and two filter fields, you can provide very detailed cost estimates.
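To make the lookup concrete, here is a minimal sketch of how a CIPCOST-style row could drive an estimate.  The rows and unit costs are made up, and the real tools read the geodatabase table rather than a Python dict:

```python
# Hypothetical CIPCOST-style rows: unit cost per foot keyed by Strategy,
# Action, and the two filter fields (Material, Diameter).
cip_cost = {
    ("Replacement", "Open cut", "PVC", 12): 95.0,
    ("Replacement", "Trenchless", "PVC", 12): 120.0,
    ("Rehabilitate", "Line", "DI", 8): 60.0,
}

def estimate_cost(strategy, action, material, diameter, length_ft):
    """Look up the configured unit cost and scale it by pipe length."""
    unit = cip_cost.get((strategy, action, material, diameter))
    if unit is None:
        raise KeyError("no unit cost configured for this combination")
    return unit * length_ft

# A 250 ft, 12-inch PVC main replaced by open cut:
print(estimate_cost("Replacement", "Open cut", "PVC", 12, 250.0))  # 23750.0
```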

CIPREPLACEMENT – This table allows you to provide lookups for replacement.  If you are going to replace a 6” DI pipe, you may have a rule saying that each 6” DI is replaced with an 8” PVC.  This table allows you to define that replacement, so that costing is performed on the 8” PVC, not the 6” DI.
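A minimal sketch of that replacement lookup, with made-up rows; existing pipes with no rule are simply costed as-is:

```python
# Hypothetical CIPREPLACEMENT-style rows: existing (material, diameter)
# mapped to what the pipe will be replaced with.
replacement = {
    ("DI", 6): ("PVC", 8),
    ("CI", 4): ("PVC", 6),
}

def pipe_to_cost(material, diameter):
    """Return the (material, diameter) the cost estimate should be based on."""
    return replacement.get((material, diameter), (material, diameter))

print(pipe_to_cost("DI", 6))    # ('PVC', 8): costed as the new pipe
print(pipe_to_cost("PVC", 12))  # unchanged when no rule applies
```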


Since this is our initial release of the CIP Template, we want your feedback.  So please post any questions or feedback to our forums under the Water Utilities Template section: http://forums.esri.com/forums.asp?c=55

Posted in Uncategorized | 7 Comments

Interfacing the mobile map with other systems

Many of you have had a chance to test the Mobile template and have provided great feedback.  One question that keeps coming up is “How do I interface (or integrate) the mobile map with my other utility systems?”  Typically, when we get asked this question, people are referring to their work order system (also called a CMMS or EAM).  Occasionally we are asked about interfacing with a LIMS, a mobile leak detection system, a customer information system (CIS), or a billing system; heck, we’ve even been asked about interfacing with a utility’s time card system.  Hopefully you notice the trend here: water and wastewater utilities can and do want to “spatially enable” their other business systems, because most of these systems contain information that has a location, but they either can’t store spatial information at all or can’t store it well.

Well, there is no one simple, easy answer, because there are so many types of systems, vendors, APIs, gateways, etc.  So I wanted to talk about a few general ways to communicate with other systems and some ideas for working with them.  First you need to decide what functions the field crew is going to need.  For instance, if you are flushing hydrants, do they need access to when the hydrant was last flushed, or to every time it was flushed?  The answer to that question is going to help us define how to work with other systems.

Let’s start by looking at where to record the inspection or flushing report in the above case.  If we are storing our field reports and inspections in the geodatabase, then this is a fairly easy process.  We can create a feature class with all the fields for the hydrant flushing, which is exactly what we did with the template.  The user can click the hydrant, copy some relevant information to the hydrant inspection record, such as the Asset ID, and populate the geometry of the report from either the hydrant or the GPS location of the field crew doing the inspection.  The crew can fill out the rest of the information, click save, and use ArcGIS Server to post that information directly back to the geodatabase.

If the user wants access to all historical inspection data, then that information can be either in the same feature class as the new inspections or in a separate one.  I would suggest that all historical information be in a separate feature class.  The reason is that the historical inspection or field report data can be very large, and you want to manage updating the devices with this information separately from the newly created field inspections.  If all the inspections, both new and old, are in one feature class, then the map may have hundreds of inspections at the same location, which can be very confusing for the field crews.  The historical inspections never really need to be displayed; crews just need a tool to click the asset and pull up all related information.  With a new inspection, once they create it, they can visually see it on the map, save the inspection, and be done working with that asset.

So in summary, if you are using the geodatabase as your system of record for assets and their related information (inspections, flushing, etc.), then create one data schema for new data and another for historical data.  This will provide the most flexibility and usability.  One more note on the above: if you are going to load all your historical inspection information into the geodatabase from another system, use a process to join your historical, non-GIS inspection data to the geometry of your assets, and load the result into your historical field inspection feature class.
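That join step can be sketched in plain Python; the field names, asset IDs, and records below are hypothetical:

```python
# Hypothetical inputs: hydrant features carrying the geometry, and
# historical inspections from a non-GIS system keyed by asset ID.
hydrants = {
    "HYD-001": {"x": 1543.2, "y": 8872.1},
    "HYD-002": {"x": 1610.7, "y": 8901.4},
}
inspections = [
    {"asset_id": "HYD-001", "date": "2008-05-14", "result": "Pass"},
    {"asset_id": "HYD-002", "date": "2008-06-02", "result": "Fail"},
    {"asset_id": "HYD-999", "date": "2008-06-09", "result": "Pass"},  # no match
]

# Join each inspection to its asset's geometry; unmatched rows are set
# aside for review rather than silently dropped.
loaded, unmatched = [], []
for row in inspections:
    geom = hydrants.get(row["asset_id"])
    if geom is None:
        unmatched.append(row)
    else:
        loaded.append({**row, **geom})

print(len(loaded), len(unmatched))  # 2 1
```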

Now, if you are storing some asset information in another system, like extended asset data in your work order system, then you have a few ways to interface that data with the mobile map.  One way is to use the geodatabase as your connection to other systems.  What I mean is: build a backend process to pull new inspections from the geodatabase and push them to whatever system you have, and vice versa, use the same process to push information from the other system into the geodatabase.  This way you can use ArcGIS Server and ArcGIS Mobile to work with the information in the field.  It is much easier to write backend database scripts to move information around than it is to build a process that pushes another system’s data out to the field in a format crews can access offline, lets them make edits or entries, and pushes it back into the office.
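One possible shape for such a backend script, sketched in plain Python.  The SYNCED flag and the push_to_cmms function are assumptions for illustration, not part of any shipped tool; in practice the push would be an insert into your work order system’s API or staging table:

```python
# Hypothetical inspections posted to the geodatabase during the day;
# a SYNCED flag records which rows the nightly job has already moved.
gdb_inspections = [
    {"id": 1, "asset_id": "HYD-001", "synced": True},
    {"id": 2, "asset_id": "HYD-002", "synced": False},
    {"id": 3, "asset_id": "HYD-003", "synced": False},
]

def push_to_cmms(row):
    # Placeholder: in practice, write the row to the work order system
    # and return whether the write succeeded.
    return True

# Nightly job: push every unsynced row, then mark it so reruns are safe.
pushed = 0
for row in gdb_inspections:
    if not row["synced"] and push_to_cmms(row):
        row["synced"] = True
        pushed += 1

print(pushed)  # 2
```

Because rows are flagged as they go, rerunning the script moves nothing twice, which is what makes this pattern robust for a scheduled job.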

If you want to connect to the other system’s information store directly, without having to move it into the geodatabase, you have two options.  You can work with a local copy or cached representation of that other system; this means that some or all of the data from that system will be loaded on the device.  The other option is to use a web services approach or an Enterprise Service Bus (ESB) to talk to those other systems directly.

If you want to work with the other systems directly on the mobile device, then you are going to need to figure out how to get that information onto the device you are using and write a module for the mobile app to talk to that data store.  ArcGIS Mobile is built on the .NET Framework, so it is very easy to get your mobile GIS application to talk to other data stores.  The biggest challenge with this method is figuring out how to get the information onto the device, keep it updated, and push changes back into the office.  Some vendors have ways to do this; some do not.  I would suggest talking to your vendor about what options they have.  You can also look at using provisioning software that can manage pushing information out to the field and pulling it back in.  If you have a homegrown system, then you will need to develop a homegrown field version of the data and a synchronization method.

If the above is technically daunting and you want to use web services to have ArcGIS Mobile talk to your other systems, then ask yourself one question: can your field personnel do their jobs if they do not have a connection to that service?  If so, this is a great way to interface mobile and office systems.  If your answer is no, then proceed down this route with caution.  Even with cell coverage getting better and better, there are always dead spots and connection issues.  What is one of the first things that happens when there is an incident?  The cell networks get overloaded.  Also think about bandwidth; this could be a chatty system.  According to Gartner, the days of unlimited data on cell networks are coming to an end (by the way, “unlimited” data is 5 GB on most carriers).  If you are OK with all of the above, or your field crews do not need access to this information to do their jobs, then web services are a great, effective, and easy way to talk to other systems.  They are easy to implement and can support many applications.  All you need to do is build a module for ArcGIS Mobile that, when the user clicks an asset, hits the appropriate web service and displays the results.  This could be as simple as a hyperlink in the attributes of a feature.
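For instance, such a module might do nothing more than build a request URL from the clicked asset’s ID.  The endpoint and parameter names below are entirely made up for illustration; they are not a real ESRI or vendor service:

```python
from urllib.parse import urlencode

# Hypothetical inspection-history service; assetId and format are assumed
# parameter names, and example.local is a placeholder host.
def history_url(asset_id, base="https://example.local/api/inspections"):
    """Build the request URL the mobile module would hit on an asset click."""
    return base + "?" + urlencode({"assetId": asset_id, "format": "json"})

print(history_url("HYD-001"))
# https://example.local/api/inspections?assetId=HYD-001&format=json
```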

In closing, there are a number of ways to interface an ArcGIS Mobile application with other utility systems, and we wanted to highlight a few of them.  The above strategies are not the only ones; there are many ways to implement communication between different systems.  If there are methods you would like to discuss further, please contact us and we can help you figure out the best approach for your utility.  You also may find that combining some of the approaches best suits you.  For example, with new inspections, you may use ArcGIS Mobile to create a new record, store it in the cache, and post it to the geodatabase using ArcGIS Server, then nightly use a backend script to move it to the proper system.  When the field user wants to look at the historical information tied to the asset they are inspecting, they can hit a web service.  And if your field crews do not have broadband coverage, at least they can still complete their work.



Posted in Water Utilities | Leave a comment

Design concepts and setting up your own dashboard

I started typing this blog entry last week.  My original idea was to share 5 simple steps to set up the dashboard or Flex sample viewer.  Since then, two great things came up: the Creating Effective Web Maps seminar and an unbelievable Flex guide from a user.  Both are discussed below, along with my original 5 steps.

I recently had the honor of presenting the Creating Effective Web Maps seminar.  If you did not get a chance to attend, I strongly suggest reviewing the seminar materials.  The seminar focused on creating web maps, just like the operational dashboard, and discussed a lot of concepts and design practices.  If you would like to review the materials, you can download the PowerPoint and handout from this website.


At the end of the seminar, we demonstrated configuring the Sample Flex Viewer for a parcel notification application.  If you are interested in doing this, take a look at the handout that was provided at the seminar; it provides a walkthrough.  You can find it on the website provided above.

I would also like to share a very detailed document that one of our users shared with us.  Tapas Das, from the Arizona Land Department, wanted to learn the Flex environment so he could upgrade their old ArcIMS-based Parcel Viewer to ArcGIS Server.  In the process, he put together a very detailed Word document that goes through setting up Adobe Flex Builder, using a Flex tutorial, debugging Flex, and much, much more.  Again, big thanks to Tapas for sharing this.  It is fantastic, and if you find it helpful, please thank Tapas.


So, back to my original idea: 5 steps to setting up the dashboard.

1. Determine your Data and Services

I like to think of the data that supports my application in three categories: dynamic, cached, and other services.  When you construct your application, you are going to use a combination of these.  By separating data into a series of cached and dynamic services, maybe with some custom overlays, you will achieve optimal performance.  Think about it: if you had all your data (orthos, mains, parcels, etc.) in one dynamic service, each pan and zoom would have to query all those layers, label and draw them, and send the image down to the browser.  But if you separate the orthos and parcels into a cached service, a pan or zoom will just grab the pre-rendered cache and only query and draw the layers that need to be overlaid, such as the mains and valves.

When you are trying to split up your data, here are some helpful ways to think about your different services:

Dynamic Map Services – Optimized or Regular

  • Real-time data
  • Frequently-changing data
  • Reporting Layers
  • Widget Results

Cached Map Services

  • Background data
  • Data that does not change often
  • Projected on the fly – what I mean by this is that if the data is not in the proper projection, set the data frame to the proper projection and the cache will be built with the data projected.

Other Services

  • Geoprocessing, Routing, Locators, Custom XML services
  • Real-time data from other systems
  • Results from tables or other systems

When configuring the Sample Viewer, you have three options for displaying your data.  Each of these options gives you slightly different control over the data.

Base Maps 

Typically these are cached maps.  They are drawn as the bottom layer in the Viewer, and only one base map layer can be displayed at a time.  The user does not have the option to toggle layers in the map on and off; the service is treated as one layer. 

Live Maps

Typically these are dynamic maps.  They are drawn on top of the basemap and can be stacked on top of each other.  Think of each map service as a group layer.  If they are dynamic services, the user can toggle the individual layers on and off.

Widget Displays

We refer to these as client-side graphics.  The information behind them comes from a dynamic map service or from what I referred to above as other services.  If you are using a dynamic map service, the supporting map document usually contains just a few layers.  Symbology and scale are not determined by the map service, so do not spend any time on the look and feel of this map document.  The widget will query the service (map service or other), filter the results, and display a graphic.

This can include a table of information.  At this moment the REST APIs do not support tables, but this should not prevent you from creating a web service that connects to an SDE table or a third-party system and sends those results to a widget as XML.
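As a rough illustration of that last idea, a back-end service could shape table rows into XML for a widget along these lines.  This is only a sketch of the XML-building step; the record structure and field names are invented, and a real service would query SDE or the third-party system directly rather than an in-memory list.

```python
# Hypothetical sketch: expose rows from an external table as XML for a widget.
# Field names (ASSETID, STATUS) are made up for illustration; a real service
# would read these rows from SDE or a third-party system.
import xml.etree.ElementTree as ET

def rows_to_xml(rows):
    """Convert a list of dict records into a simple XML payload."""
    root = ET.Element("records")
    for row in rows:
        rec = ET.SubElement(root, "record")
        for field, value in row.items():
            ET.SubElement(rec, field).text = str(value)
    return ET.tostring(root, encoding="unicode")

inspections = [
    {"ASSETID": "V-1001", "STATUS": "Open"},
    {"ASSETID": "V-1002", "STATUS": "Closed"},
]
xml_payload = rows_to_xml(inspections)
```

The widget would then parse this payload and render it in a grid or chart.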

Widgets are also a great place to show the results of a selection.  We typically call these selection sets in desktop GIS.  We can display them in the standard grid, but try to think about the results in a different manner.  Can we use a chart to display them?  Is there any way to relay the same information in a more intuitive way?

2. Set up the Flex Viewer

Set up files in IIS

  • Copy the Unzipped flex application to a folder on your Web Server.
  • Create a virtual directory for this application
  • If you need help with this section, follow the section in the templates help files
  • Apply CrossDomain.xml – to access data from a different server than the one hosting your Flex application, the remote server needs to have a cross-domain file in the root directory. For security reasons, the Web browser cannot access data that resides outside the exact Web domain where the SWF file originated. However, Adobe Flash Player can load data across domains if permission is granted from the server. This is accomplished by including a small crossdomain.xml file on the remote server that permits Flash to connect to services on that server.  http://resources.esri.com/help/9.3/arcgisserver/apis/flex/help/index.html#references/using_crossdomain_xml.htm
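For reference, a minimal crossdomain.xml looks something like the following.  The wide-open domain="*" entry is for illustration only; in production you would typically restrict it to the domains that host your Flex applications (see the Adobe documentation linked above for the full syntax).

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM
  "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- Allows Flash content from any domain; tighten this in production -->
  <allow-access-from domain="*" />
</cross-domain-policy>
```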

3. Determine Widgets

The Flex Viewer template comes with some widgets for typical web mapping functionality.  Some of them are listed below. 

  • AboutWidget.mxml
  • BookmarkWidget.mxml
  • ChartWidget.mxml
  • DrawWidget.mxml
  • GeoRSSWidget.mxml
  • IdentifyWidget.mxml
  • LiveLayerWidget.mxml
  • LiveMapsWidget.mxml
  • LocateWidget.mxml
  • OverviewMapWidget.mxml
  • PrintWidget.mxml
  • There are more widgets posted on the resource center for the Flex API. 

4. Update the Configurations File – Main Viewer

Open Config.xml in the Sample Viewer's root folder and change the following tags:

  • <Title> and <Subtitle> – This is the text in the bar at the top
  • <Menus> – Menus are the drop-downs in the banner bar.  You will add tools, widgets, and layers to these.
  • <Map initialextent…..> – Adjust these extents in the spatial reference of the viewer
  • <Basemaps menu=”menuMap”> – Add the basemap services.  These services are displayed vertically on the menu listed.
  • <LiveMaps> – Add the operational layer services.  These are displayed in the LiveMapsWidget on a floating widget.  Make sure to leave the LiveMapsWidget in the widget section.
  • <Navtools> – By default, all navigation tools are listed
  • <Widgets> – Add all the widgets you need in your application
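Putting a few of these tags together, a hypothetical Config.xml fragment might look like the sketch below.  The exact attribute names and nesting vary between Sample Viewer versions, and the service URLs, labels, and extent values are placeholders, so check the Config.xml that ships with your viewer for the authoritative syntax.

```xml
<Title>Water Utility Viewer</Title>
<Subtitle>Operations Dashboard</Subtitle>
<Map initialextent="-80.40 27.40 -80.28 27.50">
  <Basemaps menu="menuMap">
    <!-- Cached service: orthos, parcels, and other background layers -->
    <Layer label="Basemap" type="tiled"
           url="http://myserver/ArcGIS/rest/services/Basemap/MapServer"/>
  </Basemaps>
  <LiveMaps>
    <!-- Dynamic service: mains, valves, and other operational layers -->
    <Layer label="Water Network" type="dynamic"
           url="http://myserver/ArcGIS/rest/services/WaterNetwork/MapServer"/>
  </LiveMaps>
</Map>
```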

5. Update the Configurations File – Widgets

Check and see if your widgets have configuration files.  If they do, open them up and adjust the configuration for each.


Posted in Water Utilities | Leave a comment

User Template Submissions

We’ve had a few questions about whether users and ESRI business partners can submit templates to the Water Utilities Resource Center.  We wanted to tell everyone, the answer is yes!

We are encouraging our users and business partners to submit templates to the Water Utility Resource Center template gallery.

Keep in mind there are 5 items that a template must have, and all of these items must be in your template zip file:

1. An instruction document – with information on how to install and configure the template, including what software is necessary.

2. Any MXD or MXDs necessary – the MXDs are critically important to show everyone your cartography, geoprocessing tools, etc.

3. Any custom code – Including the source code.

4. Sample Data – a populated geodatabase with any necessary data for your template. This is so everyone can understand how your template works with sample data. If you can’t share your own data, then you can use the sample data we’ve provided with the Mobile, Editing, or Dashboard templates.

5. A blank geodatabase – this is an empty geodatabase with the same schema as your sample geodatabase.

Here is an example of the folder structure your template should follow:

A few ground rules for submitting templates: we will review each template to ensure that the proper items are included and that it functions as advertised.  We won’t accept any templates that include trial software applications or extensions, or that lack the source code if there was custom programming in your template.

So anyone ready to share your good work?

If you have any questions about how to create your own template and post it, email us at ArcGISTeamWater@ESRI.com

Posted in Water Utilities | Leave a comment

Building and Maintaining Water Utility Geodatabases – Part 3

 Copy and Paste

You probably want to bring in some of your datasets and merge them with the projected schema. Copy and paste is the easiest way to do this, but you may get some error messages about differences in the source/target spatial reference. For example, when you try to copy and paste in ArcCatalog you may get messages like this:

This is due to some subtle differences in the properties of the target feature dataset/spatial reference and should not be a concern. The best way to handle this is to use the Import Feature Class tool in ArcCatalog:

You can also use the equivalent Geoprocessing tool: “Feature Class To Feature Class”.


Geoprocessing and Data Interoperability

In ArcGIS, geoprocessing (GP) tools and scripts can be used to do the data manipulation and loading tasks. At a high level, the process involves using GP to massage the source data until it matches the data model of the target geodatabase, and then using commands like Append to get the features into your target geodatabase. This works well for most data loading situations, especially if you are using Python or other scripting tools for automation. Once you figure out the pattern, you can copy and paste scripts and blocks of code, and it is generally easier to manage than ModelBuilder models for data loading.


Another option is the ArcGIS Data Interoperability extension. This extension provides a visual workbench to connect source and target datasets, and has a useful set of tools called “transformers” that can be used to perform calculations between source and target (for example, LifecycleStatus should become a new field called ACTIVEFLAG, and LifecycleStatus="Active" should become ACTIVEFLAG="1"). This approach is preferred by many specialist users but does have an associated cost and learning curve.
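To make the LifecycleStatus-to-ACTIVEFLAG rule concrete, here it is expressed as plain Python rather than as a Data Interoperability transformer.  The source records and FACILITYID values are invented for illustration; in a real load this logic would run inside your ETL tool or field calculation.

```python
# Sketch of the LifecycleStatus -> ACTIVEFLAG rule described above,
# written as plain Python rather than a Data Interoperability transformer.
def to_activeflag(lifecycle_status):
    """Map the old LifecycleStatus text value to the new ACTIVEFLAG code."""
    return "1" if lifecycle_status == "Active" else "0"

# Hypothetical source records for illustration only.
source_rows = [
    {"FACILITYID": "M-1", "LifecycleStatus": "Active"},
    {"FACILITYID": "M-2", "LifecycleStatus": "Abandoned"},
]
target_rows = [
    {"FACILITYID": r["FACILITYID"],
     "ACTIVEFLAG": to_activeflag(r["LifecycleStatus"])}
    for r in source_rows
]
```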


Simple Example for wFitting, similar to the Python script for wCasing.


Portion of more complex Spatial ETL tool example for wMain


Sample Tools

Attached to this post are sample tools used to load data, create reporting layers, and create HTML inventory reports from your geodatabase.  These tools are a working example of how to use geoprocessing tools to build a water utility database, and they can be used to build part of a geodatabase that matches the Fort Pierce template data model. The tools are designed to be used by GIS specialists building GIS servers. Using this sample is easy, but implementing these tools on your project can be a large effort. This template is designed to help you get started and to show you how we loaded the Fort Pierce data.  The basic principle is to “cook” or prepare feature classes to simplify application development and improve the performance and scalability of your applications. As an initial step, we suggest you watch the online videos named How to Load Data into the Template Geodatabase and How to Build Reporting Layers found on the Water Utility Resource Center. Then, you can follow the instructions below to install and use the template on your own.


Posted in Uncategorized | 3 Comments

Building and Maintaining Water Utility Geodatabases – Part 1

Part 1: Explanation of the Sample and Template GDB

The Water Utility Templates provide sample datasets and also a template geodatabase for your use. There are 4 main parts to the template geodatabase:

  • ReferenceData – Landbase data typically acquired from other organizations
  • WaterDistribution – Water utility network assets
  • PlanningAndOperations – Long-range planning and utility administrative/engineering area boundaries
  • FieldOperations – Feature classes include redlines/markups and workorders assigned to field crews

We would like to once again thank the Fort Pierce Utilities Authority and St Lucie County, Florida for allowing us to include a sample of their data in the templates. You should be aware that we took a copy of the data at a point in time and built additional content on top of those datasets. As a result, the data in the templates has been significantly altered from the original database and does not reflect real conditions in Fort Pierce, Florida. That will be obvious to many readers of this blog, but we want to clarify it for people who are just getting started.


The Sample.gdb is quite similar to the Template.mdb, but it has more content in the ReferenceData feature dataset. The reason for the difference is that most water utilities do not control the landbase data they use, and we expect that (like Fort Pierce) most of the data loading from partners will essentially be a copy and paste into the target geodatabase. As a result, we only included a few ReferenceData feature classes in the Template.mdb, and depending on your system it is OK to remove those feature classes if you have something different to use.

Below is a detailed explanation of each feature dataset in the Sample/Template geodatabase.


WaterDistribution

This feature dataset contains the water network feature classes. ESRI has built template water network data models for about 10 years, and most water utility projects have used those designs as a starting point for implementation. This design should look familiar to most longtime users, but there are some significant design changes that are described briefly later in this document.

To load data into these feature classes, you should first drop the geometric network. Data loading tools will run faster without the network, and you will also be able to use all of the data loading tools available in ArcGIS. Once you have all of the data loaded, you can rebuild the network using the wizard in ArcCatalog.

The loading process is a bit different from the ReferenceData because you have a target database design to load into. Of course it is OK to use a different design or the design you already have, but that will mean more work if you want to use the map documents provided with the templates. You might also want to use this latest design if you have existing data, because it improves on previous examples.

Generally speaking, you will need to do more than copy and paste your water network features into a target geodatabase. You will likely need to calculate new values, combine or split datasets into new feature classes, and you should also have a plan for doing QA/QC on the results of the data loading process. This can be a time-consuming exercise that might be more practical to contract out to an organization that specializes in this type of work; even for simple examples, many weeks of work can be involved in getting existing data loaded and validated.


PlanningAndOperations

This feature dataset contains administrative and planning content for water utilities. The reason for putting these features in their own feature dataset is that they are edited on a different cycle than the asset data and they may have different permissions/editors.

The simple part of this feature dataset is the feature classes for the EngineeringGrid, map sheet boundaries, and other administrative areas for water utilities. This part of the database also contains planned replacement mains – wReplacementProject (capital improvement projects) – and proposed mains – wProposedMain (new development).

This feature dataset also contains content that will be new to most users. What we built in the Fort Pierce database was a set of reporting layers that allow us to organize and view asset inventory, consumption, asset condition, and other data in the context of operational management units like map sheets, districts, and political jurisdictions.

These feature classes were created by spatially joining/intersecting the water network features to sets of polygons like the EngineeringGrid example shown above, and then performing further calculations for leaks/mile, consumption, and inventory information.
There are 2 main benefits we see from this work:

1. The ability to provide more granular reporting. For example, consumption and number of valves by map sheet or smaller area rather than company-wide reports.
2. Applications such as the Operations Dashboard require that we prepare data so that, rather than users running general queries on the data, they can just click a few times and get the information they are looking for. As a general principle, we want to “cook” the answers to common questions to improve performance and reduce load on web servers.
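As a sketch of what "cooking" a reporting layer means in practice, the snippet below pre-aggregates leak counts and main mileage per map sheet so a dashboard can read the answers directly instead of running live queries.  The field names and sample values are invented; a real implementation would drive this from a spatial join of the network features to the grid polygons.

```python
# Hedged sketch of a pre-computed reporting layer: leak counts and
# leaks-per-mile by map sheet. SHEET/MILES values are invented examples.
from collections import defaultdict

leaks = [{"SHEET": "A1"}, {"SHEET": "A1"}, {"SHEET": "B2"}]
mains = [{"SHEET": "A1", "MILES": 4.0}, {"SHEET": "B2", "MILES": 2.0}]

# Count leaks per map sheet.
leak_count = defaultdict(int)
for leak in leaks:
    leak_count[leak["SHEET"]] += 1

# Total main mileage per map sheet.
miles = defaultdict(float)
for main in mains:
    miles[main["SHEET"]] += main["MILES"]

# One pre-computed row per map sheet, ready to publish as a reporting layer.
report = {
    sheet: {"LEAKS": leak_count[sheet],
            "LEAKSPERMILE": leak_count[sheet] / miles[sheet]}
    for sheet in miles
}
```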


FieldOperations

Last but not least, the FieldOperations feature dataset contains a number of feature classes that are used in the Mobile and Dashboard Water Utility Templates. These include leaks, redlines, and workorders. While the Mobile Map example only uses the redlines capability in the database, the design includes the ability to manage workorder and inspection information to/from the field as well.

The one thing people notice right away is that workorders and inspections in this data model are point feature classes. Yes, we know that most workorder/CMMS systems do not store point features because they are tabular systems, but part of simplifying the data/application for field users is finding the geographic location for work. In addition, ArcGIS Mobile works with simple feature classes, so this approach works well from an implementation standpoint.

The content of this feature dataset will be more dynamic and will likely only contain assigned/current work that needs to be pushed to/from the field. For example, today’s work can be loaded into the mobile feature classes and updates can be pushed to the field. Similarly, as work happens in the field, data can be pushed back into the office through a mobile data service.

Ultimately this approach requires some back-end work to get the data from one or more enterprise systems to the field application, and then from the field application back to the enterprise systems. This is not particularly difficult work, but the data loading and management activities will be different from the other feature datasets in the template geodatabase.

Data Model Notes

The data model available with this template has some changes from previous data models that reflect the evolution of implementation models for geodatabases. These changes include:

  • Uppercase, short field names to ensure that field names are not altered/truncated when moved between different database platforms (and yes, shapefiles are still being used for data exchange)
  • Longer, more meaningful alias names on fields
  • Descriptions on fields and feature classes (stored as FGDC metadata)
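The "uppercase, short field names" convention above can be checked mechanically.  Here is a small sketch of such a check; the 10-character limit comes from the shapefile format, and the field names tested are just examples.

```python
# Sketch of a schema check based on the naming convention described above:
# shapefile field names are limited to 10 characters, so keep names short
# and uppercase to avoid truncation when data moves between platforms.
def check_field_name(name):
    """Return a list of problems with a proposed field name."""
    problems = []
    if len(name) > 10:
        problems.append("longer than 10 characters (truncated in shapefiles)")
    if name != name.upper():
        problems.append("not uppercase")
    return problems

ok = check_field_name("ACTIVEFLAG")        # meets both rules
bad = check_field_name("LifecycleStatus")  # too long and mixed case
```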


Feature class aliases are plural but table names are singular. When you add a feature class to ArcMap, you will get a layer with the plural name (the convention GIS users prefer), but it also works for database people who insist on singular names for tables.



  • LifecycleStatus field replaced by ACTIVEFLAG. Previous designs had proposed, active, and abandoned features in the same network feature classes. Over time, most utilities have separated proposed and abandoned features into different feature classes so they are not accidentally included in network traces or asset inventory reports. Once we added those feature classes to this design, lifecycle status just became a flag to indicate whether the features are active or not. For example, there are temporary/seasonal services that are only used for part of the year, and there are other assets in the ground that are not active because they are still under construction or are temporarily inactive.
  • OWNEDBY/MANAGEDBY fields indicate the owner of each asset and also the maintenance responsibility. While most domains/lists of permitted values in this data model are strings, these domains are integers. The plan here is that “company owned” assets that should be counted for inventory purposes have a value > 0, and things that should not be counted have a value < 0. In many cases there are multiple companies involved in county/regional-scale systems, and this strategy will make asset reporting simpler. Of course, in those situations people should extend the list/domain of values for their specific situation.
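The OWNEDBY counting rule above boils down to a simple filter, sketched below in plain Python.  The domain codes and records are invented for illustration; your actual domain values would come from your own geodatabase design.

```python
# Sketch of the inventory rule described above: assets with OWNEDBY > 0 are
# counted for inventory purposes; values < 0 are excluded from the reports.
# FACILITYID values and domain codes are invented examples.
assets = [
    {"FACILITYID": "H-1", "OWNEDBY": 1},   # company owned: counted
    {"FACILITYID": "H-2", "OWNEDBY": 2},   # partner utility: still counted
    {"FACILITYID": "H-3", "OWNEDBY": -1},  # privately owned: not counted
]

inventory = [a for a in assets if a["OWNEDBY"] > 0]
```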



  • You may notice that we also dropped the AdministrativeArea and OperatingArea fields, because these are better managed as part of the reporting layers we built for the PlanningAndOperations data, which is described earlier in this document.

More Information

In each of the templates, you will see the sample geodatabase and the template geodatabase in the “Maps and GDBs” folder. If you look in the subfolder “Documentation”, there are HTML documents that provide reports for the geodatabase and map documents. These documents will help you understand the details of the geodatabases and maps, and also help you plan out how to load data into your geodatabase and make changes to the map documents to work with your data.

Most projects will start with a source-target matrix spreadsheet that describes the available data and the target datasets for the new geodatabase. This is a good place to start, and it will help you assess the suitability of the template design as well as the level of effort required to build your geodatabase.

Again, this can be a large part of your project, so keep in mind that ESRI and business partners are here to help you if you need us. Network with your peers to get their recommendations on who to work with, or email us at ArcGISTeamWater@esri.com and we can get you in touch with someone local to help you get started.


Posted in Water Utilities | 1 Comment

Building and Maintaining Water Utility Geodatabases

We’ll be focusing our next few blog posts on building and maintaining water utility geodatabases. Since we went live with the Water Utility Resource Center 2 weeks ago, we’ve already had numerous questions from the user community about how to use your utility data with the templates we’ve provided, so our first post will explain the template and sample geodatabases.

New or prospective ESRI customers often ask us how to approach building a geodatabase for their utility and loading data. Many of our long-term water utility customers have questions about strategies to maintain their geodatabase to ensure it continues to meet their business needs and fit their IT landscape as other enterprise systems evolve. So we’ll also share some best practices and tools for building and maintaining water utility geodatabases. If you have more specific questions about building your water utility geodatabase, data maintenance, data loading, or for that matter anything to do with GIS for water utilities, please let us know: ArcGISTeamWater@esri.com

The ArcGIS Water Team


Posted in Water Utilities | Leave a comment