Tips and tricks for scripting ArcGIS Server on Amazon EC2

 

A few months ago we posted an Introduction to Scripting with Amazon EC2. Today’s post contains some additional tips and tricks regarding scripts. It summarizes some of the items from the first scripting post and includes lots of new tips. There’s also an associated sample on the Resource Center that contains some example scripts and a template folder structure for organizing your scripts and related files.

 

Testing scripts

 

Be sure to do some testing before you jump into scripting your production instances. After setting up the AWS tools, create a few low-cost test instances (for example, a t1.micro "Getting Started on Microsoft Windows Server 2008" instance) to practice with, and try the following:

 

  1. Run the ec2-start-instances and ec2-stop-instances commands manually from a command prompt to see that things are working.
  2. Create some simple batch files and run them by double-clicking.
  3. Schedule one of your test batch files to run a few minutes in the future and confirm that it works.
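Steps 1 and 2 above can be combined into a minimal test batch file; a sketch, where the instance ID i-abcd1234 is a placeholder for one of your own test instances:

```bat
REM Minimal test script: start an instance, wait, then stop it again.
REM Replace i-abcd1234 with the ID of one of your test instances.
CALL ec2-start-instances i-abcd1234

REM Give the instance a minute to spin up before stopping it
TIMEOUT 60

CALL ec2-stop-instances i-abcd1234
```

Double-click the file to run it, then check the AWS Management Console to confirm that the instance state changed.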

 

Once you are comfortable with basic scripting, also be sure to run tests using ArcGIS Server instances. Instances running ArcGIS Server and Web applications can take longer to start, and you may need to account for this with TIMEOUT commands in your scripts.

 

Scheduling scripts

 

You can schedule a script to run from a server or an administrative desktop/laptop as long as it has Internet access. Regardless of where your scheduled task lives, it’s possible you won’t be logged in when your scripts are set to run. To ensure your script runs even if you are not logged in, you’ll need to modify the properties of your scheduled task. Double-click your task in the Task Scheduler, click the General tab > Security Settings, and allow the task to Run whether user is logged on or not. You’ll need to supply credentials for an account with permission to run the task when you change this option.
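If you prefer to create the scheduled task itself from the command line, the Windows schtasks command can do so. A sketch, assuming a hypothetical script at c:\aws\scripts\EC2_TeamA-AGS1_ON.bat and an example account name:

```bat
REM Create a weekly task that runs the startup script every Monday at 7:00 AM.
REM The task name, script path, and account are examples; substitute your own.
REM /RP * prompts for the password, which lets the task run whether or not
REM you are logged in at the time.
schtasks /Create /TN "EC2_TeamA-AGS1_ON_MonAM" ^
 /TR "c:\aws\scripts\EC2_TeamA-AGS1_ON.bat" ^
 /SC WEEKLY /D MON /ST 07:00 ^
 /RU MYDOMAIN\taskuser /RP *
```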

 

If the password for the user account that is running your scheduled tasks changes, you must update this information in Task Scheduler or else your tasks won’t run. Unless you’ve set up some kind of alert or actively check your logs, you probably won’t notice that your tasks aren’t running. According to Microsoft’s documentation, Task Scheduler stores account information only once for all tasks that use the same account, so if you update the password for one task all the other tasks will use the updated login information. Just go to the properties of one of your scheduled tasks, click the radio button Run only when user is logged on, then click back to Run whether user is logged on or not. When you click OK you’ll be prompted to update the password for the account.
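The same password update can also be made from the command line with schtasks; a sketch with an example task name and account (NewPassword is a placeholder):

```bat
REM Update the stored credentials for an existing scheduled task
REM after the account password changes.
schtasks /Change /TN "EC2_TeamA-AGS1_ON_MonAM" /RU MYDOMAIN\taskuser /RP NewPassword
```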

 

Naming scripts

 

Use naming conventions for your scripts and scheduled tasks to make them easier to manage. This helps you sort these items so you can find what you are looking for more easily, and gives you an indication of the contents of the items without having to open them. One convention I use for scripts is: “EC2_[instance name]_[workflow].bat,” and for tasks: “EC2_[instance name]_[workflow]_[time].”

 

Try to keep descriptions simple. Here are some examples:

 

  • Instance Name: “TeamA-AGS1,” “TeamB-EGDB2,” “ProjectX_AGS,” etc.
    • A few months ago, Amazon introduced tags that allow you to set aliases for your instance names, and those are what I use for “instance name” in my script/task names.
  • Workflow: “OFF” or “ON”
  • Time: “FriPM” or “MonAM”

 

For example, a task might be named: “EC2_TeamA-AGS1_ON_MonAM.”

 

Organizing scripts

 

In addition to naming conventions, organizing the files related to your Amazon accounts is also good practice. The folder structure I’ve been using is below. (If you are managing more than one Amazon account, the Resource Center sample has a suggestion for how to organize files in that case.)

 

  • …AWS
    • connections (remote desktop connection files)
    • logs
    • scripts (ready-to-go scripts)
      • test (drafts or test scripts)
    • security (.pem files, keypairs, etc.)
    • tools
      • EC2-api-tools…
      • ELB

 

Logging what happens in your scripts

 

If you want to log the activity of your scripts, you can modify your BAT file to append items such as a separator line, a timestamp, and the command output to a text file. If you create a text file called aws_log.txt (in this example, in the log folder under c:\aws\[account1]), you could write logging into the script as follows:

SET log=c:\aws\[account1]\log\aws_log.txt
 

ECHO =================== >>%log%
ECHO %Date% %Time% >>%log%
CALL ec2-start-instances i-abcd1234 >>%log%
TIMEOUT 300
ECHO %Date% %Time% >>%log%
CALL ec2-associate-address 0.0.0.0 -i i-abcd1234 >>%log%
TIMEOUT 180
CALL ec2-reboot-instances i-abcd1234 >>%log%

 

Checking the status of an instance from a script

 

Typically when you start or stop an instance, the status of your instance is reported as “Pending” or “Stopping” in your log file. These operations take time, and the command feedback is provided before the process is complete. A command that’s useful in this situation is ec2-describe-instances. You can use this to check (and optionally log) the status of an instance from a script.

 

For example, if you want to log a confirmation that your instance stopped you could add a TIMEOUT command to wait a few minutes and then run ec2-describe-instances to request the instance state name. I’ve found that two minutes is usually enough time to wait, but you may need to adjust this for your environment. Use the technique discussed in the previous tip to send the command output to your log file.
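A sketch of that pattern, reusing the log variable from the earlier example (the instance ID and two-minute wait are placeholders you would adjust for your environment):

```bat
SET log=c:\aws\[account1]\log\aws_log.txt

CALL ec2-stop-instances i-abcd1234 >>%log%

REM Wait two minutes for the instance to finish stopping
TIMEOUT 120

REM Log a timestamp and the instance state, which should now be "stopped"
ECHO %Date% %Time% >>%log%
CALL ec2-describe-instances i-abcd1234 >>%log%
```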

 

Making your script flexible with variables

 

You can use variables creatively in your scripts to make it easier to modify them for other purposes. For example, if you define a variable for the instance name and IP address, you just have to change these values in one place rather than in each of the individual commands. See below for an example of an ArcGIS Server instance startup script using variables:

SET instance=i-abcd1234
SET ip=0.0.0.0
 

CALL ec2-start-instances %instance%
TIMEOUT 300
CALL ec2-associate-address %ip% -i %instance%
TIMEOUT 180
CALL ec2-reboot-instances %instance%

 

Allowing non-administrators to run scripts

 

In the previous post I mentioned non-administrators running scripts. An easy way to enable this is to set up a simple Web page that launches the scripts. Using this type of application, individuals who are not Amazon admins can start and stop instances without having access to your account credentials and without needing to know the details of the process (for example: first start the instance, wait five minutes, associate the Elastic IP, wait two more minutes, then reboot).

 

Operating on multiple instances with the same command

 

Most of the EC2 commands enable you to operate on several instances in the same command, which can reduce repetition in your script. See the example below:

ec2-stop-instances i-abcd1234 i-efgh5678 i-ijkl9012

 

Specifying certificate file paths as command parameters

 

To manage instances in multiple Amazon accounts, you will need more than one set of certificate key files. In the previous scripting post we discussed setting environment variables for these file paths, but the tools also allow you to specify these paths as parameters when you run a command. If you work with one account more than others, I would recommend setting an environment variable for the account you work with the most (this helps when you are sending one-off commands written by hand), and then specifying the paths to .pem files in your scripts for other accounts as necessary. Below is an example of a command that provides certificate file paths. -C is the path to your certificate and -K is the path to your personal key.

ec2-stop-instances -C c:\aws\[account1]\security\cert-abcd1234.pem -K c:\aws\[account1]\security\pk-efgh5678.pem i-abcd1234

 

Final notes

 

Don’t try to script everything. The AWS Management Console is very easy to use, so it typically doesn’t pay off to keep scripts on hand for every possible task, such as stopping an arbitrary instance or creating a new one. It’s just as easy to log into the console to do these things.

 

I’ve found that it pays to script when your task fits in either of these two categories: 1) workflows that you want to schedule for set times during the week, and 2) medium-to-complex workflows that you do periodically. For example, it’s probably not worth writing a script to create a new instance from the ArcGIS Server AMI since it’s just a few clicks to do this in the Management Console; however, if you often create several new instances from a custom AMI that contains one of your deployed applications, and add those instances to an ELB, this would be a good workflow to script.
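As a sketch of that second category, such a script might look like the following. The AMI ID, key pair, security group, load balancer name, and instance IDs are all placeholders, and the elb-register-instances-with-lb command requires Amazon’s separate ELB tools:

```bat
REM Launch two instances from a custom AMI (all IDs and names are placeholders)
CALL ec2-run-instances ami-abcd1234 -n 2 -k my-keypair -g my-security-group

REM Wait for the instances to start
TIMEOUT 300

REM Register the new instances with an existing Elastic Load Balancer.
REM In practice, take the instance IDs from the ec2-run-instances output.
CALL elb-register-instances-with-lb MyLoadBalancer --instances i-abcd1234,i-efgh5678
```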

 

Amazon has shared other developer toolsets including tools to manage ELBs and AMIs, CloudWatch Monitoring, and AutoScaling. These can be downloaded from the Amazon Web Services Developer Tools page. Be sure to check for new toolsets periodically as changes are made and bugs are fixed.

 

Contributed by Owen Evans of the Esri Washington, D.C. Technology Center

 
