Archive for the ‘Hacks’ Category

Sending Alerts With Graphite Graphs From Nagios

Friday, February 24th, 2012


The way I’m doing this relies on a feature I wrote for Graphite that was only recently merged to trunk, so at the time of writing that feature isn’t in a stable release. Hopefully it’ll be in 0.9.10. Until then, you can at least test this setup using Graphite’s trunk version.

Oh yeah, the new feature is the ability to send graph images (not links) via email. I surfaced this feature through the menus that pop up when you click on a graph in Graphite, but implemented it so that it’s pretty easy to call from a script (which I also wrote – you’ll see if you read the post).

Also, note that I assume you already know Nagios, how to install new command scripts, and all that. It’s really easy to figure this stuff out in Nagios, and it’s well-documented elsewhere, so I don’t cover anything here but the configuration of this new feature.

The Idea

I’m not a huge fan of Nagios, to be honest. As far as I know, nobody really is. We all just use it because it’s there, and the alternatives are either overkill, unstable, too complex, or just don’t provide much value for all the extra overhead that comes with them (whether that’s config overhead, administrative overhead, processing overhead, or whatever depends on the specific alternative you’re looking at). So… Nagios it is.

One thing that *is* pretty nice about Nagios is that configuration is really dead simple. Another thing is that you can do pretty much whatever you want with it, and write code in any language you want to get things done. We’ll take advantage of these two features to actually do a couple of things:

  • Monitor a metric by polling Graphite for it directly
  • Tell Nagios to fire off a script that’ll go get the graph for the problematic metric, and send email with the graph embedded in it to the configured contacts.
  • Record that we sent the alert back in Graphite, so we can overlay those events on the corresponding metric graph and verify that alerts are going out when they should, that the outgoing alerts are hitting your phone without delay, etc.

The Candy

Just to be clear, we’re going to set things up so you can get alert messages from Nagios with the relevant Graphite graph embedded right in the email.

And you’ll also be able to track those alert events in Graphite, in graphs where each alert shows up as a vertical line overlaid on the metric itself.

Defining Contacts

In production, it’s possible that the proper contacts and contact groups already exist. For testing (and maybe production) you might find that you want to limit who receives graphite graphs in email notifications. To test things out, I defined:

  • A new contact template that’s configured specifically to receive the graphite graphs. Without this, no graphs.
  • A new contact that uses the template
  • A new contact group containing said contact.

For testing, you can create a test contact in templates.cfg:

define contact{
        name                            graphite-contact
        service_notification_period     24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r,f,s
        host_notification_options       d,u,r,f,s
        service_notification_commands   notify-svcgraph-by-email
        host_notification_commands      notify-host-by-email
        register                        0
        }

You’ll notice a few things here:

  • This is not a contact, only a template.
  • Any contact defined using this template will be notified of service issues with the command ‘notify-svcgraph-by-email’, which we’ll define in a moment.

In contacts.cfg, you can now define an individual contact that uses the graphite-contact template we just assembled:

define contact{
        contact_name    graphiteuser
        use             graphite-contact
        alias           Graphite User
        email           you@example.com
        }

Of course, you’ll want to change the ’email’ attribute here, even for testing.

Once done, you also want to have a contact group set up that contains this new ‘graphiteuser’, so that you can add users to the group to expand the testing, or evolve things into production. This is also done in contacts.cfg:

define contactgroup{
        contactgroup_name       graphiteadmins
        alias                   Graphite Administrators
        members                 graphiteuser
        }

Defining a Service

Also for testing, you can set up a test service. It’s necessary in this case to bypass the default settings, which try not to bombard contacts by sending an email for every single aberrant check. Since the end result of this test is to see an email, we want an email for every check whose values are in any way out of bounds. In templates.cfg, put this:

define service{
    name                        test-service
    use                         generic-service
    passive_checks_enabled      0
    contact_groups              graphiteadmins
    check_interval              20
    retry_interval              2
    notification_options        w,u,c,r,f
    notification_interval       30
    first_notification_delay    0
    flap_detection_enabled      1
    max_check_attempts          2
    register                    0
    }

Again, the key point here is to ensure that no notifications are ever silenced, deferred, or delayed by Nagios in any way, for any reason. You probably don’t want this in production. The other point is that when you set up an alert for a service that uses ‘test-service’ in its definition, the alerts will go to our previously defined ‘graphiteadmins’.

To make use of this service, I’ve defined a service in ‘localhost.cfg’ that will require further explanation, but first let’s just look at the definition:

define service{
        use                             test-service
        host_name                       localhost
        service_description             Some Important Metric
        check_command                   check_graphite_data!24!36
        notifications_enabled           1
        _GRAPHURL                       "http://your.graphite.host/render?target=some.metric.path"
        }

There are two new things we need to understand when looking at this definition:

  • What is ‘check_graphite_data’?
  • What is ‘_GRAPHURL’?

These questions are answered in the following section.

In addition, you should know that the value for _GRAPHURL is intended to come straight from the Graphite dashboard. Go to your dashboard, pick a graph of a single metric, grab the URL for the graph, and paste it in (and double-quote it).

Defining the ‘check_graphite_data’ Command

This command relies on a small script written by the folks at Etsy, which can be found on GitHub.

Here’s the commands.cfg definition for the command:

# 'check_graphite_data' command definition
define command{
        command_name    check_graphite_data
        command_line    $USER1$/check_graphite_data -u $_SERVICEGRAPHURL$ -w $ARG1$ -c $ARG2$
        }

The ‘command_line’ attribute calls the check_graphite_data script we grabbed from GitHub earlier. The ‘-u’ flag takes a URL, and here it uses the custom object variable ‘_GRAPHURL’ from our service definition. The short story on custom object variables: since we defined _GRAPHURL in a service definition, Nagios prefixes it with ‘SERVICE’ and moves the underscore to the front, giving you the macro ‘$_SERVICEGRAPHURL$’. The Nagios documentation on custom object variables has the details of how that works.

The ‘-w’ and ‘-c’ flags to check_graphite_data are the ‘warning’ and ‘critical’ thresholds, respectively, and they map to the positions of the service definition’s ‘check_command’ arguments (so check_graphite_data!24!36 becomes ‘check_graphite_data -u <url> -w 24 -c 36’).
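Under the hood, a Graphite-polling check just fetches the metric’s datapoints and compares the newest value to those thresholds. Here’s a minimal sketch of that comparison logic — illustrative only, not Etsy’s actual script; the function name is mine, and in the real check the datapoints come from fetching the graph URL in a raw/machine-readable format:

```python
# Sketch of the threshold logic inside a Graphite-polling Nagios check.
# Datapoints are (value, timestamp) pairs; None means a gap in the data.
def graphite_status(datapoints, warn, crit):
    """Map the most recent non-null value to a Nagios exit code."""
    values = [v for v, _ in datapoints if v is not None]
    if not values:
        return 3  # UNKNOWN: Graphite returned no data
    current = values[-1]
    if current >= crit:
        return 2  # CRITICAL
    if current >= warn:
        return 1  # WARNING
    return 0      # OK

# check_graphite_data!24!36 amounts to: graphite_status(points, 24, 36)
```

The exit code (0/1/2/3) is what tells Nagios whether the service is OK, WARNING, CRITICAL, or UNKNOWN.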

Defining the ‘notify-svcgraph-by-email’ Command

This command relies on a small Python script I wrote, which also lives on GitHub.

The script does two things:

  • It emails the graph that corresponds to the metric being checked by Nagios, and
  • It pings back to Graphite to record the alert itself as an event, so you can define a graph for, say, ‘Apache Load’, and if you use this script to alert on that metric, you can also overlay the alert events on top of the ‘Apache Load’ graph and verify that alerts are going out when you expect. It’s also a good test that you’re actually receiving the alerts this script sends, and that they’re not being dropped or seriously delayed.
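Conceptually, the email half of that is small: grab the graph PNG from the graph URL, embed it in a multipart message, and send it. Here’s a hedged sketch of the message-building part using only the standard library — the addresses and the ‘cid’ name are placeholders, and this is not the actual script from GitHub:

```python
# Sketch of building an alert email with the graph image embedded (not
# linked). In a real script the PNG bytes would come from fetching the
# graph URL, and the message would then be handed to smtplib to send.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

def build_alert_email(graph_png, service, state, to_addr):
    msg = MIMEMultipart('related')
    msg['Subject'] = '%s is %s' % (service, state)
    msg['From'] = 'nagios@example.com'   # placeholder sender
    msg['To'] = to_addr
    html = ('<html><body><p>%s is %s</p>'
            '<img src="cid:graph"></body></html>' % (service, state))
    msg.attach(MIMEText(html, 'html'))
    img = MIMEImage(graph_png, _subtype='png')  # the embedded graph
    img.add_header('Content-ID', '<graph>')     # referenced by cid:graph
    msg.attach(img)
    return msg
```

Sending is then one more line with smtplib, and the event ping back to Graphite is a separate small request.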

To make use of the script in Nagios, let’s define the command that actually sends the alert:

define command{
    command_name    notify-svcgraph-by-email
    command_line    /path/to/ -u "$_SERVICEGRAPHURL$" -t $CONTACTEMAIL$ -n "$SERVICEDESC$" -s $SERVICESTATE$
    }

A couple of quick notes:

  • Notice that you need to double-quote any variables in the ‘command_line’ that might contain spaces.
  • For a definition of the command line flags, see the script’s --help output.
  • Just to close the loop, note that notify-svcgraph-by-email is the ‘service_notification_commands’ value in our initial contact template (the very first listing in this post)

Fire It Up

Fire up your Nagios daemon to take it for a spin. For testing, make sure you set the check_graphite_data thresholds to numbers that are pretty much guaranteed to trigger an alert when Graphite is polled. Hope this helps! If you have questions, first make sure you’re using Graphite’s ‘trunk’ branch, and not 0.9.9, and then give me a shout in the comments.

Nose and Reporting in Hudson

Thursday, December 2nd, 2010

I like Hudson. Sure, it’s written in Java, but let’s be honest, it kinda rocks. If you’re a Java developer, it’s admittedly worlds better because it integrates with seemingly every Java development tool out there, but we can do some cool things in Python too, and I thought I’d share a really simple setup to get’s HTML reports and nose’s xUnit-style reports into your Hudson interface.

I’m going to assume that you know what these tools are and have them installed. I’m working with a local install of Hudson for this demo, but it’s worth noting that I’ve come to find a local install of Hudson pretty useful, and it doesn’t really eat up too much CPU (so far). More on that in another post. Let’s get moving.

Process Overview

As mentioned, this process is really pretty easy. I’m only documenting it because I haven’t seen it documented before, and someone else might find it handy. So here it is in a nutshell:

  • Install the HTML Publisher plugin
  • Create or alter a configuration for a “free-style software project”
  • Add a Build Step using the ‘Execute Shell’ option, and enter a ‘nosetests’ command, using its built-in support for xUnit-style test reports and
  • Check the ‘Publish HTML Report’, and enter the information required to make Hudson find the HTML report.
  • Build, and enjoy.

Install the HTML Publisher Plugin

From the dashboard, click ‘Manage Hudson’, and then on ‘Manage Plugins’. Click on the ‘Available’ tab to see the plugins available for installation. It’s a huge list, so I generally just hit ‘/’ in Firefox or cmd-F in Chrome and search for ‘HTML Publisher Plugin’. Check the box, go to the bottom, and click ‘Install’. Hudson will let you know when it’s done installing, at which time you need to restart Hudson.

Install tab

HTML Publisher Plugin: Check!

Configure a ‘free-style software project’

If you have an existing project already, click on it and then click the ‘Configure’ link in the left column. Otherwise, click on ‘New Job’, and choose ‘Build a free-style software project’ from the list of options. Give the job a name, and click ‘OK’.

Build a free-style software project.

You have to give the job a name to enable the 'ok' button 🙂

Add a Build Step

In the configuration screen for the job, which you should now be looking at, scroll down and click the button that says ‘Add build step’, and choose ‘Execute shell’ from the resulting menu.

Add Build Step

Execute shell. Mmmmm... shells.

This results in a ‘Command’ textarea appearing, which is where you type the shell command to run. In that box, type this:

/usr/local/bin/nosetests --with-xunit --with-coverage --cover-package demo --cover-html -w tests

Of course, replace ‘demo’ with the name of the package you want covered in your coverage tests, to avoid the mess of having try to seek out every module used in your entire application.

We’re telling Nose to generate an xUnit-style report, which by default will be put in the current directory in a file called ‘nosetests.xml’. We’re also asking for coverage analysis using, and requesting an HTML report of the analysis. By default, this is placed in the current directory in ‘cover/index.html’.
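If you don’t yet have tests for nose to collect, even a single hypothetical file like ‘tests/test_demo.py’ is enough to produce both reports — here ‘add’ stands in for something you’d import from your ‘demo’ package:

```python
# tests/test_demo.py -- a minimal test module nose will collect
# (nose picks it up via the test_* naming convention)
import unittest

def add(a, b):
    # stand-in for a function imported from the 'demo' package
    return a + b

class TestAdd(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

if __name__ == '__main__':
    unittest.main()
```

Running the nosetests command above against this produces a nosetests.xml entry for the test and a coverage page for the module it exercises.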

execute shell area

Now we need to set up our reports by telling Hudson we want them, and where to find them.

Enable JUnit Reports

In the ‘Post-Build Actions’ area at the bottom of the page, check ‘Publish JUnit test result report’, and point it at nose’s output by setting the report pattern to ‘**/nosetests.xml’:

The ‘**’ is part of the Ant glob syntax, and matches any leading directories. Remember that we said earlier nose will write, by default, to a file called ‘nosetests.xml’ in the current working directory.

The current working directory is going to be the Hudson ‘workspace’ for that job, linked to in the ‘workspace root’ link you see in the above image. It should mostly be a checkout of your source code. Most everything happens relative to the workspace, which is why in my nosetest command you’ll notice I pass ‘-w tests’ to tell nose to look in the ‘tests’ subdirectory of the current working directory.

You could stop right here if you don’t track coverage; just note that these reports don’t get particularly exciting until you’ve run a number of builds.

Enable Coverage Reports

Just under the JUnit reporting checkbox should be the Publish HTML Reports checkbox. The ordering of things can differ depending on the plugins you have installed, but it should at least still be in the Post-build Actions section of the page.

Check the box, and a form will appear. Point the ‘HTML directory to archive’ field at’s output directory (for this setup, ‘tests/cover’):

By default, creates a directory called ‘cover’ and puts its files in there (one for each covered package, plus an index). It puts them in the directory you pass to nose with the ‘-w’ flag. If you don’t use a ‘-w’ flag, I’d guess it puts them in the directory from which you run nose, in which case the path above would become ‘**/cover’ or just ‘cover’ if this field doesn’t use the Ant glob syntax.

Go Check It Out!

Now that you have everything put together, click on ‘Save’, and run some builds!

On the main page for your job, after you’ve run a build, you should see a ‘ Report’ link and a ‘Latest Test Result’ link. After multiple builds, you should see a test result ‘Trend’ chart on the job’s main page as well.

job page

Almost everything on the page is clickable. The trend graph isn’t too enlightening until multiple builds have run, but I find the reports a nice way to see at a glance what chunks of code need work. It’s way nicer than reading the line numbers output on the command line (though I sometimes use those too).

How ’bout you?

If you’ve found other nice tricks in working with Hudson, share! I’ve been using Hudson for a while now, but that doesn’t mean I’m doing anything super cool with it — it just means I know enough to suspect I could be doing way cooler stuff with it that I haven’t gotten around to playing with. 🙂

Per-machine Bash History

Monday, May 10th, 2010

I do work on a lot of machines no matter what environment I’m working in, and a lot of the time each machine has a specific purpose. One thing that really annoys me when I work in an environment with NFS-mounted home directories is that if I log into a machine I haven’t used in some time, none of the history specific to that machine is around anymore.

If I had a separate ~/.bash_history file on each machine, this would likely solve the problem. It’s pretty simple to do as it turns out. Just add the following lines to ~/.bashrc:

srvr=`hostname`
export HISTFILE="/home/jonesy/.bash_history_${srvr}"

Don’t be alarmed when you source ~/.bashrc and you don’t see the file appear in your home directory. Unless you’ve configured things otherwise, history is only written at the end of a bash session. So go ahead and source bashrc, run a few commands, end your session, log back in, and the file should be there.

I’m not actually sure if this is going to be a great idea for everyone. If you work in an environment where you run the same commands from machine to machine, it might be better to just leave things alone. For me, I’m running different psql/mysql connection commands and stuff like that which differ depending on the machine I’m on and the connection perms it has.

Programmers that… can’t program.

Monday, March 15th, 2010

So, I happened across this post about hiring programmers, which references two other posts about hiring programmers. There seems to be a demand for blog posts about hiring programmers, but that’s not why I’m writing this. I’m writing because there was this sort of nagging irony that I couldn’t help but stumble upon.

In a blog post, Joel Spolsky talks about the mathematical inaccuracies associated with claims of “only hiring the top 1%”. It seemed pretty obvious to me that whether or not you’re hiring the top 1% of all programmers is pretty much unknowable, and when managers say they hire “the top 1%”, I assume they’re talking about the top 1% of their applicants. Note too that I always thought it was idiotic to point this out, because, well, isn’t that what you’re SUPPOSED to do? You’re not very well going to aim for the middle & hope for the best are you?

Apparently I’ve been giving too much credit to management. There I go giving people with ties on the benefit of the doubt again.

Then, in another blog post, Jeff Atwood talks about how it’s very difficult to even get interviews with programmers who can actually program. The problem is real.

The original blog post that pointed me at the two others is one by Roberto Alsina where he talks about his own methods for weeding out the non-programmers. He’s clearly seen the issue as well.

But if you open all three of these posts in separate tabs and read them, you’re likely to come away with the same basic problem I did:

  • Who the hell are these managers who can’t figure out a dead simple statistics problem?
  • How can a person fairly inept at simple math be qualified to make a hiring decision for anything but a summer intern?

That sorta blew my mind a little. But it blew my mind a lot when Atwood started describing the tasks that interviewees *couldn’t* perform in an interview! One type of task, described by Imran, is called a ‘FizzBuzz’ question. Here’s one such question:

Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

Here’s the part that blew my mind: He says, and I quote:

Most good programmers should be able to write out on paper a program which does this in under a couple of minutes.

Want to know something scary? – the majority of comp sci graduates can’t. I’ve also seen self-proclaimed senior programmers take more than 10-15 minutes to write a solution.

That’s amazing to me. I decided to quickly pop open a Python prompt and see if I could do it:

>>> for i in range(1,101):
...     if (i % 3 == 0) and (i % 5 == 0):
...             print i,'FizzBuzz'
...     elif i % 3 == 0:
...             print i, 'Fizz'
...     elif i % 5 == 0:
...             print i, 'Buzz'
...     else:
...             print i

Note that I’ve taken the liberty of printing out the numbers in addition to the required words. I’m playing the role of interviewer and interviewee here, and wanted to be able to easily verify that things were correct, since there was no time for unit testing 🙂

Turns out it worked on the first try! That was pasted directly from my terminal screen. I didn’t time myself, but it took far less than 5 minutes. This leads to my other question, of course, which is “if you’re going to complain about CS degree holders not writing good code, maybe it’s time to open the doors to non-CS degree holders?”

PyYaml with Aliases and Anchors

Tuesday, December 22nd, 2009

I didn’t know this little tidbit until yesterday and want to get it posted so I can refer to it later.

I have this YAML config file that’s kinda long and has a lot of duplication in it. This isn’t what I’m working on, but let’s just say that you have a bunch of backup targets defined in your YAML config file, and your program rocks because each backup target can be defined to go to a different destination. Awesome, right?

Well, it might be, but it might also just make your YAML config file grotesque (and error-prone). Here’s an example:

Backups:
    Home_Jonesy:
        host: foo
        dir: /Users/jonesy
        protocol: ssh
        keyloc: ~/.ssh/
        Destination:
            host: bar
            dir: /mnt/array23/homes/jonesy
            check_space: true
            min_space: 80G
            num_archives: 4
            compress: bzip2
    Home_Molly:
        host: eggs
        dir: /Users/molly
        protocol: sftp
        keyloc: ~/.ssh/
        Destination:
            host: bar
            dir: /mnt/array23/homes/jonesy
            check_space: true
            min_space: 80G
            num_archives: 4
            compress: bzip2

Now with two backups, this isn’t so bad. But if your environment has 100 backup targets and only one destination, or…. heck — even if there are three destinations — should you have to write out the definition of those same three destinations for each of 100 backup targets? What if you need to change how one of the destinations is connected to, or the name of a destination changes, or array23 dies?

Ideally, you’d be able to reference the same definition in as many places as you need it and have things “just work”, and if something needs to change, you just change it in one place. Enter anchors and aliases.

An anchor is defined just like anything else in YAML with the exception that you get to label the definition block using “&labelname”, and then you can (de)reference it elsewhere in your config with “*labelname”. So here’s how our above configuration would look:

BackupDestination-23: &Backup_To_ARRAY23
    host: bar
    dir: /mnt/array23/homes/jonesy
    check_space: true
    min_space: 80G
    num_archives: 4
    compress: bzip2

Backups:
    Home_Jonesy:
        host: foo
        dir: /Users/jonesy
        protocol: ssh
        keyloc: ~/.ssh/
        Destination: *Backup_To_ARRAY23
    Home_Molly:
        host: eggs
        dir: /Users/molly
        protocol: sftp
        keyloc: ~/.ssh/
        Destination: *Backup_To_ARRAY23

With only two backup targets, the benefit is small, but keep trying to imagine this config file with about 100 backup targets, and only one or two destinations. This removes a lot of duplication and makes things easier to change and maintain (and read!)

The cool thing about it is that if you already have code that reads the YAML config file, you don’t have to change it at all — PyYaml expands everything for you. Here’s a quick interpreter session:

>>> import yaml
>>> from pprint import pprint
>>> stream = file('foo.yaml', 'r')
>>> cfg = yaml.load(stream)
>>> pprint(cfg)
{'BackupDestination-23': {'check_space': True,
                          'compress': 'bzip2',
                          'dir': '/mnt/array23/homes/jonesy',
                          'host': 'bar',
                          'min_space': '80G',
                          'num_archives': 4},
 'Backups': {'Home_Jonesy': {'Destination': {'check_space': True,
                                             'compress': 'bzip2',
                                             'dir': '/mnt/array23/homes/jonesy',
                                             'host': 'bar',
                                             'min_space': '80G',
                                             'num_archives': 4},
                             'dir': '/Users/jonesy',
                             'host': 'foo',
                             'keyloc': '~/.ssh/',
                             'protocol': 'ssh'},
             'Home_Molly': {'Destination': {'check_space': True,
                                            'compress': 'bzip2',
                                            'dir': '/mnt/array23/homes/jonesy',
                                            'host': 'bar',
                                            'min_space': '80G',
                                            'num_archives': 4},
                            'dir': '/Users/molly',
                            'host': 'eggs',
                            'keyloc': '~/.ssh/',
                            'protocol': 'sftp'}}}

…And notice how everything has been expanded.


Python, PostgreSQL, and psycopg2’s Dusty Corners

Tuesday, December 1st, 2009

Last time I wrote code with psycopg2 was around 2006, but I was reacquainted with it over the past couple of weeks, and I wanted to make some notes on a couple of features that are not well documented, imho. Portions of this post have been snipped from mailing list threads I was involved in.

Calling PostgreSQL Functions with psycopg2

So you need to call a function. Me too. I had to call a function called ‘myapp.new_user’. It expects a bunch of input arguments. Here’s my first shot after misreading some piece of some example code somewhere:

qdict = {'fname': self.fname, 'lname': self.lname, 'dob': self.dob, 'city': self.city, 'state': self.state, 'zip': self.zipcode}

sqlcall = """SELECT * FROM myapp.new_user( %(fname)s, %(lname)s,
%(dob)s, %(city)s, %(state)s, %(zip)s )""" % qdict


There’s no reason this should work, or that anyone should expect it to work. I just wanted to include it in case someone else made the same mistake. Sure, the proper arguments are put in their proper places in ‘sqlcall’, but they’re not quoted at all.

Of course, I foolishly tried going back and putting quotes around all of those named string formatting arguments, and of course that fails when you have something like a quoted “NULL” trying to move into a date column. It has other issues too, like being error-prone and a PITA, but hey, it was pre-coffee time.

What’s needed is a solution whereby psycopg2 takes care of the formatting for us, so that strings become strings, NULLs are passed in a way that PostgreSQL recognizes them, dates are passed in the proper format, and all that jazz.

My next attempt looked like this:

curs.execute("""SELECT * FROM myapp.new_user( %(fname)s, %(lname)s,
%(dob)s, %(city)s, %(state)s, %(zip)s )""", qdict)

This is, according to some articles, blog posts, and at least one reply on the psycopg mailing list, “the right way” to call a function using psycopg2 with PostgreSQL. I’m here to tell you that, to the best of my knowledge, it is not. The only real difference between this attempt and the last is that I’ve replaced the “%” with a comma, which turns what *was* a string formatting operation into a proper SELECT with a psycopg2-recognized parameter list. I thought this would get psycopg2 to “just work”, but no such luck: I still had some quoting issues.

I have no idea where I read this little tidbit about psycopg2 being able to convert between Python and PostgreSQL data types, but I did. Right around the same time I was thinking “it’s goofy to issue a SELECT to call a function that doesn’t really want to SELECT anything. Can’t callproc() do this?” Turns out callproc() is really the right way to do this (where “right” is defined by the DB-API which is the spec for writing a Python database module). Also turns out that psycopg2 can and will do the type conversions. Properly, even (in my experience so far).

So here’s what I got to work:

callproc_params = [self.fname, self.lname, self.dob, self.city, self.state, self.zipcode]

curs.callproc('myapp.new_user', callproc_params)

This is great! Zero manual quoting or string formatting at all! And no “SELECT”. Just call the procedure and pass the parameters. The only thing I had to change in my code was to make my ‘self.dob’ into a datetime.date object, but that’s super easy, and after that psycopg2 takes care of the type conversion from a Python date to a PostgreSQL date. Tomorrow I’m actually going to try calling callproc() with a list object inside the second argument. Wish me luck!
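That dob conversion is a one-liner. For example (the incoming string format here is an assumption; use whatever your form actually hands you):

```python
import datetime

# psycopg2 adapts a datetime.date to a PostgreSQL date automatically,
# so the only prep needed is parsing the incoming string into one
dob = datetime.datetime.strptime('1975-03-15', '%Y-%m-%d').date()
```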

A quick cursor gotcha

I made a really goofy mistake. At the root of it, what I did was share a connection *and a cursor object* among all methods of a class I created to abstract database operations out of my code. So, I did something like this (this is not the exact code, and it’s untested. Treat it like pseudocode):

class MyData(object):
   def __init__(self, dsn):
      self.conn = psycopg2.Connection(dsn)
      self.cursor = self.conn.cursor()

   def get_users_by_regdate(self, regdate, limit):
      self.cursor.arraysize = limit
      self.cursor.callproc('myapp.uid_by_regdate', regdate)
      while True:
         result = self.cursor.fetchmany()
         if not result:
            break
         yield result

   def user_is_subscribed(self, uid):
      self.cursor.callproc('myapp.uid_subscribed', uid)
      result = self.cursor.fetchone()
      val = result[0]
      return val

Now, in the code that uses this class, I want to grab all of the users registered on a given date, and see if they’re subscribed to, say, a mailing list, an RSS feed, a service, or whatever. See if you can predict the issue I had when I executed this:

    db = MyData(dsn)
    for id in db.get_users_by_regdate([joindate], 100):
        idcount += 1
        print idcount
        param = [id]
        if db.user_is_subscribed(param):
            print "User subscribed"
            continue
        else:
            skip_count += 1
            print "Not good"
            continue

Note that the above is test code. I don’t actually want to continue to the top of the loop regardless of what happens in production 🙂

So what I found happening is that, if I just commented out the portion of the code that makes a database call *inside* the for loop, I could print ‘idcount’ all the way up to thousands of results (however many results there were). But if I left it in, only 100 results made it to ‘db.user_is_subscribed’.

Hey, ‘100’ is what I’d set curs.arraysize to! Hey, I’m using the *same cursor* to make both calls! And within the for loop, that cursor is being asked to produce a second recordset while it’s still working through the first one!

Tom Roberts, on the psycopg list, states the issue concisely:

The cursor is stateful; it only contains information about the last
query that was executed. On your first call to “fetchmany”, you fetch a
block of results from the original query, and cache them. Then,
db.user_is_subscribed calls “execute” again. The cursor now throws away all
of the information about your first query, and fetches a new set of
results. Presumably, user_is_subscribed then consumes that dataset and
returns. Now, the cursor is position at end of results. The rows you
cached get returned by your iterator, then you call fetchmany again, but
there’s nothing left to fetch…

…So, the lesson is if you need a new recordset, you create a new cursor.
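The gotcha is easy to reproduce with any DB-API module. Here’s the same pattern with the stdlib’s sqlite3, so you can watch the shared cursor throw rows away (the table and helper are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE users (id INTEGER)')
conn.executemany('INSERT INTO users VALUES (?)', [(i,) for i in range(10)])

def count_users(cur):
    # stands in for user_is_subscribed(): any unrelated mid-loop query
    cur.execute('SELECT count(*) FROM users')
    return cur.fetchone()[0]

shared = conn.cursor()
shared.execute('SELECT id FROM users')
rows_seen = len(shared.fetchmany(3))   # partial fetch: 3 of 10
count_users(shared)                    # same cursor: discards the other 7
rows_seen += len(shared.fetchall())    # nothing of the first query is left
# rows_seen is now 3, not 10

fresh = conn.cursor()
fresh.execute('SELECT id FROM users')
rows_ok = len(fresh.fetchmany(3))
count_users(conn.cursor())             # separate cursor: no interference
rows_ok += len(fresh.fetchall())
# rows_ok is 10
```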

Lesson learned. I still think it’d be nice if psycopg2 had more/better docs, though.

A Couple of Python Tools for Tornado or AMQP Users

Monday, November 23rd, 2009

Hi all,

Not lots of time to blog since starting my new gig as a Senior Operations Developer (which is pretty awesome, btw), but I wanted to let anyone who’s interested know about a couple of tools I’ve released on github:

bunny is a CLI tool for doing quick tests and simple operations on an AMQP server (I use RabbitMQ for testing, but it should work with any AMQP server). It’ll do your basic create/delete of exchanges, queues, and bindings. It’s only capable of doing things that are part of AMQP, and I still need to finish the feature that tests functionality via a ’round trip’ test, where it sends a message, then gets it, then sends the body to stdout. If you find time to finish that before I do, send a pull request or patch.

curtain is a mixin for use with the Tornado web server that provides HTTP Digest Authentication. I actually started with this recipe and altered/updated it in various ways. It’s still not complete according to RFC 2617 — I need to code for a few edge cases like getting a stale nonce, but what’s there does work, so if you (like me) were wondering what the performance comparison would look like between tornado doing digest auth and having a front-end proxy do it, have at it 🙂

Python, Creating XML, Recursion, and Order

Wednesday, November 4th, 2009

I love being challenged every day. Today I ran across a challenge that has several solutions. However, most of them are hacks, and I never feel like I really solved the problem when I implement a hack. There’s also that eerie feeling that this hack is going to bite me later.

I have an API I need to talk to. It’s pure XML-over-HTTP (henceforth XHTTP). Actually, the first issue I had was just dealing with XHTTP. All of the APIs I’ve dealt with in the past were either XML-RPC or SOAP, which have ready-made libraries you can just use, or they took GET parameters and the like, which are pretty simple to deal with. I’ve never had to actually construct an HTTP request and just POST raw XML.

It’s as easy as it should be, really. I code in Python, so once you actually have an XML message ready to send, you can use urllib2 to send it in about 2 lines of code.

The more interesting part is putting the request together. I thought I had this beat. I decided to use the xml.dom.minidom module and create my Document object by going through the DOMImplementation object, because it was the only thing I found that allowed me to actually specify a DTD programmatically instead of hackishly tacking on hard-coded text. No big deal.

Now that I had a document, I needed to add elements to it. Some of these XML queries I’m sending can get too long to write a function that manually creates all the elements, then adds them to the document, then creates the text nodes, then adds them to the proper element… it’s amazingly tedious to do. I really wish Python had something like PHP’s XMLWriter, which lets you create an element and its text in one line of code.

Tedium drives me nuts, so rather than code this all out, I decided to create a dictionary that mirrored the structure of my query, with the data for the text nodes filled in by variables.

query_params = {'super_special_query': {
                    'credentials': {'username': user, 'password': password, 'data_realm': realm},
                    'result_params': {'num_results': setsize, 'order_by': order_by},
                    query_type: query_dict}}

from xml.dom.minidom import getDOMImplementation

def makeDoc():
    impl = getDOMImplementation()
    dt = impl.createDocumentType("super_special", None, 'super_special.dtd')
    doc = impl.createDocument(None, "super_special", dt)
    return doc

def makeQuery(doc, query_params, tag=None):
    """
    @doc is an xml.minidom.Document object
    @query_params is a dictionary structure that mirrors the structure of the xml.
    @tag used in recursion to keep track of the node to append things to next time through.
    """
    if tag is None:
        root = doc.documentElement
    else:
        root = tag

    for key, value in query_params.iteritems():
        tag = doc.createElement(key)
        root.appendChild(tag)
        if isinstance(value, dict):
            makeQuery(doc, value, tag)
        else:
            tag.appendChild(doc.createTextNode(str(value)))

    return doc.toxml()

doc = makeDoc()
qxml = makeQuery(doc, query_params)

This is simplistic, really. I don’t need to deal with attributes in my queries, for example. But it is generic enough that if I need to send different types of queries, all that’s required is creating another dictionary to represent it, and passing that through the same makeQuery function to create the query.

Initial testing indicated success, but that’s why you can’t rely on only simple initial tests. Switching things up immediately exposed a problem: The API server validated my query against a DTD that enforced strict ordering of the elements, and Python dictionaries do not have the same notion of “order” that you and I do.

So there’s the code. If nothing else, it’s a less-contrived example of what you might actually use recursion for. Tomorrow I have to figure out how to enforce the ordering. One idea is to keep a separate list to consult for the ordering of the elements: an extra outer loop goes through the list, gets the name of the next tag, then uses that value to ask the dictionary for any related values. It seemed like a good way to go, but I had a bit of difficulty making it work at all. Maybe with fresh eyes in the AM it’ll be more obvious; that happens a lot, and is always a nice surprise.
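For what it’s worth, the separate-list idea sketches out roughly like this. The ORDER table and the tag names are made up for illustration, not from the real API:

```python
from xml.dom.minidom import getDOMImplementation

# Hypothetical ordering table: for each parent tag, the child order the DTD demands.
ORDER = {
    'super_special_query': ['credentials', 'result_params'],
    'credentials': ['username', 'password'],
}

def makeOrderedQuery(doc, query_params, tag=None):
    root = doc.documentElement if tag is None else tag
    # Walk keys in the DTD-mandated order; fall back to dict order for
    # any parent the ORDER table doesn't mention.
    for key in ORDER.get(root.tagName, list(query_params)):
        if key not in query_params:
            continue
        value = query_params[key]
        elem = doc.createElement(key)
        root.appendChild(elem)
        if isinstance(value, dict):
            makeOrderedQuery(doc, value, elem)
        else:
            elem.appendChild(doc.createTextNode(str(value)))
    return doc.toxml()

impl = getDOMImplementation()
doc = impl.createDocument(None, "super_special", None)
params = {'super_special_query':
          {'credentials': {'password': 'secret', 'username': 'jonesy'}}}
xml = makeOrderedQuery(doc, params)
print(xml)  # <username> comes out before <password>, whatever the dict does
```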

Ideas, as always, are hereby solicited!

Python Quirks in Cmd, urllib2, and decorators

Wednesday, October 28th, 2009

So, if you haven’t been following along, Python programming now occupies the bulk of my work day.

While I really like writing code and like it more using Python, no language is without its quirks. Let me say up front that I don’t consider these quirks bugs or big hulking issues. I’m not trying to bash the language. I’m just trying to help folks who trip over some of these things that I found to be slightly less than obvious.

Python’s Cmd Module and Handling Arguments

Using the Python Cmd module lets you create a program that provides an interactive shell interface to your users. It’s really simple, too. You just create a class that inherits from cmd.Cmd, and define a bunch of methods named do_<something>, where <something> is the actual command your user will run in your custom shell.

So if you want users to be able to launch your app, be greeted with a prompt, type “hello”, and have something happen in response, you just define a method called “do_hello” and whatever code you put there will be run when a user types “hello” in your shell. Here’s what that would look like:

import cmd

class MyShell(cmd.Cmd):
   prompt = '> '

   def do_hello(self, line):
      # Cmd always passes the rest of the input line, even when it's empty
      print "Hello!"

# Kick off the shell
shell = MyShell()
shell.cmdloop()

Of course, what’s a shell without command line options and arguments? For example, I created a shell-based app using Cmd that allowed users to run a ‘connect’ command with arguments for host, port, user, and password. Within the shell, the command would look something like this:

> connect -h mybox -u jonesy -p mypass

Note that the “>” is the prompt, not part of the command.

The idea here is that you pass the arguments to the option flags, and you can set sane defaults in the application for missing args (for example, I didn’t provide a port here — I’m leaning on a default, but I did provide a host, since the default might be ‘localhost’).

Passing just one, single-word argument with Cmd is dead easy, because all of the command methods receive a string that contains *everything* on the line after the actual command. If you’re expecting such an argument, just make sure your ‘do_something’ method accepts the incoming string. So, to let users see what “hello” looks like in Spanish, we can accept “esp” as an argument to our command:

class MyShell(cmd.Cmd):
   def do_hello(self, arg):
      print "Hello! %s" % arg

The problems come when you want more than one argument, or when you want flags with arguments. For example, in the earlier “connect” example, my “do_connect” method is still only going to get one big, long string passed to it — not a list of arguments. So where in a normal program you might do something like:

class MyShell(cmd.Cmd):
   def do_connect(self, host='localhost', port='42', user='guest', password='guest'):
      #...connection code here...

In a Cmd method, you’re just going to define it like we did the do_hello method above: it takes ‘self’ and ‘args’, where ‘args’ is one long line.

A couple of quick workarounds I’ve tried:

Parse the line yourself. I created a method in my Cmd app called ‘parseargs’ that just takes the big long line and returns a dictionary. My specific application only takes ‘name=value’ arguments, so I do this:

         d = dict([arg.split('=') for arg in args.split()])

And return the dictionary to the calling method. My connect method can then check for keys in the dictionary and set things up. It’s longer and a little more arduous, but not too bad.
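Fleshed out, that name=value approach might look like the following. The shlex.split call is my addition so quoted values survive, and the connect defaults are illustrative:

```python
import cmd
import shlex

class MyShell(cmd.Cmd):
    prompt = '> '

    def parseargs(self, line):
        # 'name=value' pairs -> dict; shlex.split keeps quoted values whole
        return dict(arg.split('=', 1) for arg in shlex.split(line))

    def do_connect(self, line):
        opts = self.parseargs(line)
        host = opts.get('host', 'localhost')
        port = int(opts.get('port', 42))
        print("Connecting to %s:%d" % (host, port))
```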

Use optparse. You can instantiate a parser right inside your do_x methods. If you have a lot of methods that all need to take several flags and args, this could become cumbersome, but for one or two it’s not so bad. The key to doing this is creating a list from the Big Long Line and passing it to the parse_args() method of your parser object. Here’s what it looks like:

class MyShell(cmd.Cmd):
   def do_touch(self, line):
      parser = optparse.OptionParser()
      parser.add_option('-f', '--file', dest='fname')
      parser.add_option('-d', '--dir', dest='dir')
      (options,args) = parser.parse_args(line.split())

      print "Directory: %s" % options.dir
      print "File name: %s" % options.fname

This method is just an example, so don’t scratch your head looking for “import os” or anything 🙂

This is probably the more elegant solution, since it doesn’t require you to restrict your users to passing args in a particular way, or to invent your own CLI argument parsing scheme. One caveat: optparse calls sys.exit() when it hits a parse error, which will take your whole shell down with it, so you may want to wrap the parse_args() call in a try/except.

Using urllib2 for Pure XML Over HTTP

I wrote a web service client this week that does pure XML over HTTP to send queries to a service. I’ve written things like this before using Python, but it turns out, after looking back at my code, I was always either using XMLRPC, SOAP, or going through some wrapper that hid a lot from me in an effort to make my life easier (like the Google Data API). I’ve never had to try to send a pure XML payload over the wire to a web server.

I figured urllib2 was going to help me out here, and it did, but not before putting me through some pain, due mainly to an odd pattern in every source of documentation on the topic. I read the official docs, a couple of blogs, and did a Google search, and everything, everywhere, seems to indicate that the urllib2.Request object’s optional “data” argument expects a urlencoded string. From the urllib2 docs:

data should be a buffer in the standard application/x-www-form-urlencoded format

The examples on every site I’ve found always pass whatever ‘data’ is through urllib.urlencode() before adding it to the request. I figured urllib2 was no longer my friend, and almost started looking at implementing an HTTPSClient object. Instead I decided to try just passing my unencoded data. What’s it gonna do, detect that my data wasn’t urlencoded? Maybe I’d learn something.

I learned that all of the documentation fails to account for this particular edge case. Go ahead and pass whatever the heck you want in ‘data’. If it’s what the server on the other end expects, you’ll be fine. 🙂
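In today’s Python 3 spelling (urllib.request replaced urllib2), posting a raw XML body looks like this. The URL and payload are placeholders:

```python
from urllib.request import Request, urlopen

xml_payload = b'<?xml version="1.0"?><super_special_query/>'

# data is just bytes -- nothing forces it to be urlencoded
req = Request('http://api.example.com/query',
              data=xml_payload,
              headers={'Content-Type': 'text/xml'})

# urlopen(req) would actually send it:
# resp = urlopen(req)
# print(resp.read())
```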


Decorators

I found myself in dark, dusty corners when I had to decide how and where, inside a much larger piece of code, to implement a feature. I really wanted to use a decorator, and still think that’s what I’ll wind up doing, but then how to implement the decorator isn’t as straightforward as I’d like either.

Decorators are used to alter how a decorated function operates. They’re amazingly useful, because instead of implementing some bit of code in a bunch of methods that themselves live inside a bunch of classes across various modules, or creating an entire class or mixin to inherit from when you only need the code overhead in a couple of edge cases, you can just create a decorator and apply it only to the proper methods or functions.

The lesson I learned is to try very hard to make one solid decision up front about how your decorator will work. Will it be a class? That’s done somewhat differently than doing it with a function. Will the decorator take arguments? That’s handled differently in both implementations, and also requires changes to an existing decorator class that didn’t previously take arguments. I don’t know why I expected this to be more straightforward, but I totally did.
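To make the distinction concrete, here are the two function-based shapes side by side. This is a generic logging example of my own, not code from the project:

```python
import functools

# Shape 1: a plain decorator -- it receives the function directly.
def logged(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print("calling %s" % func.__name__)
        return func(*args, **kwargs)
    return wrapper

# Shape 2: a decorator that takes arguments -- one extra layer, because the
# outermost call receives the arguments and returns the real decorator.
def logged_as(label):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            print("calling %s" % label)
            return func(*args, **kwargs)
        return wrapper
    return decorator

@logged
def add(a, b):
    return a + b

@logged_as('the adder')
def add_labeled(a, b):
    return a + b

print(add(2, 3))          # logs "calling add", then prints 5
print(add_labeled(2, 3))  # logs "calling the adder", then prints 5
```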

If you’re new to decorators or haven’t had to dig into them too deeply, I highly recommend Bruce Eckel’s series introducing Python decorators, which walks you through all of the various ways to implement them. Part I (of 3) is here.

Linux/Unix File Copy Trick

Wednesday, June 17th, 2009

I have a need for this hack every now and then, and I *always* forget it for some reason, so I’m putting it here for safe keeping. I’ll start a “hacks” category, too, so I can locate these quickly, and so you can too 🙂 So, here’s the hack:

[jonesy@cranford testing]$ ls
bar  foo
[jonesy@cranford testing]$ ls foo
1  2  3
[jonesy@cranford testing]$ ls bar
1  2  3  4  5  6
[jonesy@cranford testing]$ yes n | cp -i bar/* foo 2>/dev/null
[jonesy@cranford testing]$ ls foo
1  2  3  4  5  6
[jonesy@cranford testing]$ ls bar
1  2  3  4  5  6

I’m piping a constant “no” to the “cp -i” command, which asks whether you want to overwrite any file that already exists in the target directory. You don’t have to send STDERR to /dev/null; there’s just some messy output if you don’t. The end result is that the copy only brings files into the destination directory if they don’t already exist there. Any file that exists in both directories gets skipped (well, technically cp isn’t skipping anything on its own; the piped “n” declines each overwrite prompt, which amounts to the same thing).

Of course, I could just forcibly overwrite the directory contents, but I don’t know if the files that exist in both directories are identical. Or I could move one sideways and the other into place, but the same issue exists.