Archive for the ‘Scripting’ Category

Python 3: Informal String Formatting Performance Comparison

Sunday, January 2nd, 2011

If you haven’t heard the news, Dave Beazley and I have officially begun work on the next edition of the Python Cookbook, which will be completely overhauled using absolutely nothing but Python 3. Yay!

Right now, I’m going through some string formatting recipes from the 2nd edition to see if they still work, and if Python 3 offers any preferred alternatives to the solutions provided. As usual, it turns out that the answer to that is often ‘it depends’. For example, you might decide on a slower solution that’s more readable. Conversely, you might need to run an operation in a loop a million times and really need the speed.

New string formatting operations like the built-in format() function (separate from the str.format method) and the format mini-language are available in 2.6, and were made nicer in 2.7. All of it is backported from the 3.x tree, to my knowledge, and I’ll be using a Python 3.2b2 interpreter session for my examples.

I want to focus specifically on string alignment here, because there are very obviously multiple ways to solve alignment needs. Here’s an example solution from the 2nd edition:

>>> print '|' , 'hej'.ljust(20) , '|' , 'hej'.rjust(20) , '|' , 'hej'.center(20) , '|'
| hej                  |                  hej |         hej          |

Note that this is of course Python 2.x syntax, but it works in Python 3.2 if you just make it a function call instead of a statement (so, just add parens and it works). The string methods used here still exist in Python 3.2, with no deprecation notices or any stated preference for newer alternatives. That said, this looks messy to me, so I wondered if I could make it more readable without losing performance, or at least without losing so much performance that the readability gain isn’t worth it.
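For reference, here’s that same line as a Python 3 function call (nothing changed but the parens):

>>> print('|', 'hej'.ljust(20), '|', 'hej'.rjust(20), '|', 'hej'.center(20), '|')
| hej                  |                  hej |         hej          |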

Single String Formatting

Here are three ways to get the same string alignment behavior in Python 3.2b2:

>>> '{:+<20s}'.format('hej')
'hej+++++++++++++++++'
>>> format('hej', '+<20s')
'hej+++++++++++++++++'
>>> 'hej'.ljust(20, '+')
'hej+++++++++++++++++'

Ok, so they all work the same. Now I’m going to wrap each one in a function and use the timeit module (I’ve done a ‘from timeit import timeit’ here) to help me get an idea of what the difference is in terms of performance.

>>> def runit():
...     format('hej', '+<20s')
...
>>> def runit2():
...     'hej'.ljust(20, '+')
...
>>> def runit3():
...     '{:+<20s}'.format('hej')
...
>>> timeit(stmt=runit3, number=1000000)
0.6168370246887207
>>> timeit(stmt=runit3, number=1000000)
0.6109819412231445
>>> timeit(stmt=runit3, number=1000000)
0.6166291236877441
>>> timeit(stmt=runit2, number=1000000)
0.49651098251342773
>>> timeit(stmt=runit2, number=1000000)
0.4870288372039795
>>> timeit(stmt=runit2, number=1000000)
0.49135899543762207
>>> timeit(stmt=runit, number=1000000)
0.7751290798187256
>>> timeit(stmt=runit, number=1000000)
0.7771239280700684
>>> timeit(stmt=runit, number=1000000)
0.7805869579315186

Turns out the old, tried-and-true str.* methods are fastest in this case, though I think in a more complex case like the recipe from the 2nd edition I’d opt for something more readable if I had the chance.

One String, Three Ways

Let’s look at a more complex case. Let’s take each of the methodologies used in runit, runit2, and runit3, and see how things pan out when we want to do something like the 2nd edition recipe. I’ll start with the bare interpreter operation to compare the output:


>>> '|' + format('hej', '+<20s') + '|' + format('hej', '+^20s') + '|' + format('hej', '+>20s') + '|'
'|hej+++++++++++++++++|++++++++hej+++++++++|+++++++++++++++++hej|'
>>> '|' + 'hej'.ljust(20, '+') + '|' + 'hej'.center(20, '+') + '|' + 'hej'.rjust(20, '+') + '|'
'|hej+++++++++++++++++|++++++++hej+++++++++|+++++++++++++++++hej|'
>>> '|{0:+<20s}|{0:+^20s}|{0:+>20s}|'.format('hej')
'|hej+++++++++++++++++|++++++++hej+++++++++|+++++++++++++++++hej|'

Unless you go through the rigamarole of creating a sequence and using ‘|’.join(myseq), the last method seems the most readable to me. I’d really just like to use the built-in print function with a sep='|' argument, but that won’t cover the pipes at the beginning and end of the string unless I’ve missed something.
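That is, the separator shows up between the fields, but not around them:

>>> print('hej'.ljust(20, '+'), 'hej'.center(20, '+'), 'hej'.rjust(20, '+'), sep='|')
hej+++++++++++++++++|++++++++hej+++++++++|+++++++++++++++++hej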

Here are the functions and timings:


>>> def threeways():
...     '|' + format('hej', '+<20s') + '|' + format('hej', '+^20s') + '|' + format('hej', '+>20s') + '|'
...

>>> def threeways2():
...     '|' + 'hej'.ljust(20, '+') + '|' + 'hej'.center(20, '+') + '|' + 'hej'.rjust(20, '+') + '|'
...

>>> def threeways3():
...     '|{0:+<20s}|{0:+^20s}|{0:+>20s}|'.format('hej')
...

>>> timeit(stmt=threeways, number=1000000)
2.4910600185394287
>>> timeit(stmt=threeways, number=1000000)
2.50291109085083
>>> timeit(stmt=threeways, number=1000000)
2.4913830757141113
>>> timeit(stmt=threeways2, number=1000000)
1.9027390480041504
>>> timeit(stmt=threeways2, number=1000000)
1.8975908756256104
>>> timeit(stmt=threeways2, number=1000000)
1.8957319259643555
>>> timeit(stmt=threeways3, number=1000000)
1.311446189880371
>>> timeit(stmt=threeways3, number=1000000)
1.3099820613861084
>>> timeit(stmt=threeways3, number=1000000)
1.3031558990478516

The threeways3 function has a bit of an advantage in not having to muck with concatenation at all, and this probably explains the difference. Changing threeways() to use a list and '|'.join() brought it from about 2.50 to about 2.30. Better. Changing threeways2() in the same way was also a small improvement from ~1.90 to ~1.77. No big wins there, and they’re not particularly readable in either case. For this one arguably trivial corner case, the new formatting mini-language wins in both performance and (IMO) readability.
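For the curious, here’s roughly what the join-based rewrite of threeways() looked like (my reconstruction; the empty strings at each end produce the outer pipes):

>>> def threeways_join():
...     '|'.join(['', format('hej', '+<20s'), format('hej', '+^20s'), format('hej', '+>20s'), ''])
...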

Big Assumptions

This of course assumes I didn’t overlook something in creating the comparison functions, and that there isn’t yet a different way to do this that blows all of my work out of the water. If you see a completely different way to do this that’s both readable and performant, or I did something bone-headed, please let me know in the comments. :)

The Makings of a Great Python Cookbook Recipe

Monday, December 20th, 2010

I’ve seen some comments on Twitter, Buzz, Reddit, and elsewhere, and we’ve gotten some suggestions for recipes already via email (thanks!), and both Dave and I thought it’d be good to present a simple-to-follow ‘meta-recipe’: a recipe for making a great recipe that has a good shot at making it into the cookbook.

So let’s get down to what makes a good recipe. These are in no particular order:

Concise

When you read a recipe for apple pie, it doesn’t include a diatribe about how to grow and pick apples. This is partly because of space constraints (the book would be huge, or the coverage would be incomplete, take your pick). It’s also partly because you can safely assume that the reader has somehow procured apples and knows they’ll need them for this recipe.

In a recipe for, say, an ETL script that hopes to illustrate Python’s useful string manipulation features, you probably don’t need to spend much time explaining what a database is, or take a lot of lines of code to deal with the database. It makes the code longer, and just distracts from the string manipulation goodness you’re trying to relay to the reader.

Short and sweet. “Everything you need, nothing you don’t”.

Illustrative

Recipes for the cookbook should be relatively narrowly focused on illustrating a particular technique, module (built-in or otherwise), or language feature, and bringing it to life. It need not be illustrative of a business process, remotely associated technology, etc.

So, if you want to write a recipe about os.walk, you don’t need to get into the semantics of the ext3 filesystem implementation, because that’s not what you’re trying to illustrate.

Practical

Above I noted that the recipe should be relatively narrowly focused on a technique or language feature. It should NOT be narrowly focused in terms of its applicability.

For example, if you wanted to illustrate the usefulness of the Python csv module, awesome! And if you want to mention that csv will attempt to make its output usable in Excel, awesome! But if you wanted to write a recipe called “Supporting Windows ’95 Excel Clients With Python” dealing only with Excel, specifically on Windows ’95, well… that’s really specific, and really a ‘niche’ recipe. It’d be better left for some future ‘Python Hacks’ book or something.

When you read a cookbook, you probably don’t seek out “How to make mulligatawny soup in a Le Creuset™ Dutch Oven Using an Induction Stove at 30,000 Feet”. Likewise, in order to be useful to a wider audience, your recipe should ideally not force so many assumptions onto readers who just want to make a good meal (so to speak).

Our devotion to the practical also means we don’t plan to include any recipes dealing with Fibonacci numbers, factorials, or the like. Leave those for some future “Python Homework Problems” book.

Well-Written

By ‘well-written’, I’m partially just lumping everything I just said all together under one title. However, in addition, I would ask that recipe authors resist the temptation to utilize unnecessary ‘cleverness’ that might make the code harder to read and understand, or be a distraction from what you’re trying to relay to the reader.

Just because you can get the job done with a nested list comprehension doesn’t mean you should. Open up the code listing to allow easy comprehension by readers at all levels. If you must use nested list comprehensions, perhaps it warrants a separate recipe?

Nested list comprehensions are just an example, of course. I’m sure you can think of others. When you’re looking at your recipe, just ask yourself if there’s a simpler construct, technique, or idiom that can be used to achieve the same goal.

Pythonic

In general, follow the ‘import this’ rules like you would with any other Python code you write. “Sparse is better than dense”, “Readability counts”, etc. In addition, bonus points are given for following PEP 8.

But I’m not just talking about the code. Long-time Python coders (or maybe even not-so-long-time ones) come to realize that the Zen of Python applies not just to code, but to the way you think about your application. When Dave and I are reading recipe descriptions, we’re thinking in that mode. “Would we do this? Why would we do this? When would we do this? Is there a more Pythonic solution to this problem? Can this solution be made more Pythonic?”

When in doubt…

If you’re not sure about any of that, your default action should be to post your recipe on the ActiveState site. The reality is that posting a recipe there will help both the community and yourself. The feedback you get on your recipe will help you become a better coder, and it’ll help people evaluating your recipe to make sound decisions and perhaps consider things they hadn’t. It’s good all over. Getting into the book is just a nice cherry on the sundae.

Also, while unit tests are fantastic, they’re not expected to show up along with the recipe. ActiveState to my knowledge doesn’t have a mechanism for easily including test code alongside (but not embedded in) the recipe code. If you want to use doctest, great. If you want to point us at a recipe you’ve posted, you can include tests in that email, or not. It’s totally unnecessary to include them, although they are appreciated.
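If doctest is new to you, it runs the examples embedded in your docstrings and checks their output. A trivial, hypothetical example:

def center_plus(s, width=20):
    """Center s in a field of '+' characters.

    >>> center_plus('hej')
    '++++++++hej+++++++++'
    """
    return s.center(width, '+')

if __name__ == '__main__':
    import doctest
    doctest.testmod()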

Questions?

If you have questions, email them to Dave and me at PythonCookbook at oreilly dot com. You can also post questions here in the comments of this post.

Good Things Come in Threes: Python Cookbook, Third Edition

Thursday, December 16th, 2010

It became official earlier today that David Beazley and I will be co-editing/co-curating the next edition (the Third Edition) of the Python Cookbook. That’s really exciting. Here’s why:

It’s Python 3, Cover to Cover

Go big or go home. The third edition will be a Python 3 Cookbook. This by itself makes this a rather large undertaking, since it means modules used in earlier editions that don’t work in Python 3 can’t be used, and so those old recipes will need to be scrapped or rewritten.

You heard it right: if a module used in the last edition of the Python Cookbook doesn’t work in Python 3, it won’t be in this edition. This includes some modules for which several recipes already exist, like dateutil and wxPython, and other modules I would’ve liked to use for illustrative purposes, like Tornado and psycopg2. I guess there’s still some time for smaller modules to be ported to Python 3 so their recipes can be left alone, but I don’t know how realistic that is for something like wxPython.

It’s going to be ok.

I can’t find any modules (yet) in the second edition for which Python 3-compatible substitutes don’t exist, and in some cases I found myself wondering why a separate module was used at all when what needs doing isn’t a whole lot of work in pure Python anyway. I guess if the module is there and stable, why not use it, eh? Fair enough.

Three Authors?

Actually, yes. David, myself, and YOU, where YOU is anyone who posts good Python 3 recipes to the ActiveState ‘Python Cookbook’ site. This is actually not new. If you look at the last edition, you’ll see separate credits for each recipe. The Python Cookbook has always been a community effort, featuring recipes by some very familiar names in the Python community.

Anyway, just in case that wasn’t direct enough:

Go to the ActiveState Python Cookbook site and post a Python 3 recipe, and if it’s solid, your name and recipe may well be in the book.

Basically, ActiveState gives us free rein over their Python Cookbook content, so it’s a convenient way to let the community contribute to the work. If it’s in there, and it’s good, we can use it. They’re cool like that.

Three Questions

Answer these in the comments, or send email to Dave and me at PythonCookbook at oreilly.

  1. What are your three favorite recipes from either the 1st or 2nd edition?
  2. What are your three least favorite recipes from either the 1st or 2nd edition?
  3. What are three things (techniques, modules, basic tasks) you’d like to see covered that weren’t covered in earlier editions?

Three Cool Things About the Third Edition

  1. Tests. When the book comes out, we’ll make the unit tests for the recipes available in some form that is yet to be determined (but they’ll be available).
  2. Porting help. We’re not going to leave module authors out in the cold. We’re going to provide some help/advice. We’ve both written code in Python 3, and dealt with the issues that arise. I’m still porting one of my projects and have another in line after that, so I’ll be dealing with it even more.
  3. Dave and I are both overwhelmed with excitement about this book, about Python 3, and about working with you on it. Come help us out by posting Python 3 recipes (tests are also nice, but not required) on ActiveState, and shoot us an email at PythonCookbook at oreilly dotcom.

There will be other cool things, too. We’ll let you know, so stay tuned to this blog, Dave’s blog, and you should definitely follow @bkjones and @dabeaz on Twitter, since we’ll be asking for opinions/resources/thoughts on things as we go.

Nose and Coverage.py Reporting in Hudson

Thursday, December 2nd, 2010

I like Hudson. Sure, it’s written in Java, but let’s be honest, it kinda rocks. If you’re a Java developer, it’s admittedly worlds better because it integrates with seemingly every Java development tool out there, but we can do some cool things in Python too, and I thought I’d share a really simple setup to get coverage.py’s HTML reports and nose’s xUnit-style reports into your Hudson interface.

I’m going to assume that you know what these tools are and have them installed. I’m working with a local install of Hudson for this demo, but it’s worth noting that I’ve come to find a local install of Hudson pretty useful, and it doesn’t really eat up too much CPU (so far). More on that in another post. Let’s get moving.

Process Overview

As mentioned, this process is really pretty easy. I’m only documenting it because I haven’t seen it documented before, and someone else might find it handy. So here it is in a nutshell:

  • Install the HTML Publisher plugin
  • Create or alter a configuration for a “free-style software project”
  • Add a Build Step using the ‘Execute Shell’ option, and enter a ‘nosetests’ command, using its built-in support for xUnit-style test reports and coverage.py
  • Check the ‘Publish HTML Report’ box, and enter the information required to make Hudson find the coverage.py HTML report.
  • Build, and enjoy.

Install the HTML Publisher Plugin

From the dashboard, click ‘Manage Hudson’, and then on ‘Manage Plugins’. Click on the ‘Available’ tab to see the plugins available for installation. It’s a huge list, so I generally just hit ‘/’ in Firefox or cmd-F in Chrome and search for ‘HTML Publisher Plugin’. Check the box, go to the bottom, and click ‘Install’. Hudson will let you know when it’s done installing, at which time you need to restart Hudson.


Configure a ‘free-style software project’

If you have an existing project already, click on it and then click the ‘Configure’ link in the left column. Otherwise, click on ‘New Job’, and choose ‘Build a free-style software project’ from the list of options. Give the job a name (you have to give it a name to enable the ‘OK’ button), and click ‘OK’.


Add a Build Step

In the configuration screen for the job, which you should now be looking at, scroll down and click the button that says ‘Add build step’, and choose ‘Execute shell’ from the resulting menu.


This results in a ‘Command’ textarea appearing, which is where you type the shell command to run. In that box, type this:

/usr/local/bin/nosetests --with-xunit --with-coverage --cover-package demo --cover-html -w tests

Of course, replace ‘demo’ with the name of the package you want covered in your coverage tests to avoid the mess of having coverage.py try to seek out every module used in your entire application.

We’re telling Nose to generate an xUnit-style report, which by default will be put in the current directory in a file called ‘nosetests.xml’. We’re also asking for coverage analysis using coverage.py, and requesting an HTML report of the analysis. By default, this is placed in the current directory in ‘cover/index.html’.


Now we need to set up our reports by telling Hudson we want them, and where to find them.

Enable JUnit Reports

In the ‘Post-Build Actions’ area at the bottom of the page, check ‘Publish JUnit test result report’, and point its ‘Test report XMLs’ field at ‘**/nosetests.xml’.

The ‘**’ is part of the Ant Glob Syntax and matches any directory depth, so Hudson will find the report wherever it lands under the workspace. Remember that we said earlier nose will write its report, by default, to a file called ‘nosetests.xml’ in the current working directory.

The current working directory is going to be the Hudson ‘workspace’ for that job (there’s a ‘workspace root’ link for it on the job’s page), and it should mostly be a checkout of your source code. Most everything happens relative to the workspace, which is why in my nosetests command you’ll notice I pass ‘-w tests’ to tell nose to look in the ‘tests’ subdirectory of the workspace.

You could stop right here if you don’t track coverage; just note that these reports don’t get particularly exciting until you’ve run a number of builds.

Enable Coverage Reports

Just under the JUnit reporting checkbox should be the Publish HTML Reports checkbox. The ordering of things can differ depending on the plugins you have installed, but it should at least still be in the Post-build Actions section of the page.

Check the box, and a form will appear. Point it at the coverage output directory (‘tests/cover’ in this setup, since we told nose to work in ‘tests’), use ‘index.html’ as the index page, and give the report a title like ‘Coverage.py Report’.

By default, coverage.py will create a directory called ‘cover’ and put its files in there (one for each covered package, and an index). It puts them in the directory you pass to nose with the ‘-w’ flag. If you don’t use a ‘-w’ flag… I dunno — I’d guess it puts it in the directory from where you run nose, in which case the above would become ‘**/cover’ or just ‘cover’ if this option doesn’t use Ant Glob Syntax.

Go Check It Out!

Now that you have everything put together, click on ‘Save’, and run some builds!

On the main page for your job, after you’ve run a build, you should see a ‘Coverage.py Report’ link and a ‘Latest Test Result’ link. After multiple builds, you should see a test result ‘Trend’ chart on the job’s main page as well.


Almost everything on the page is clickable. The trend graph isn’t too enlightening until multiple builds have run, but I find the coverage.py reports a nice way to see at-a-glance what chunks of code need work. It’s way nicer than reading the line numbers output on the command line (though I sometimes use those too).

How ’bout you?

If you’ve found other nice tricks in working with Hudson, share! I’ve been using Hudson for a while now, but that doesn’t mean I’m doing anything super cool with it — it just means I know enough to suspect I could be doing way cooler stuff with it that I haven’t gotten around to playing with. :)

Number Spiral Fun

Thursday, September 23rd, 2010

I love puzzles, and I came across the Oblong Number Spiral challenge on Code Golf over the past week and dove in. Here’s the basic idea, from the challenge at Code Golf:

This challenge involves you having to create a number spiral such as:

 1  2  3
10 11  4
 9 12  5
 8  7  6

Note how the numbers spiral in clockwise towards the centre. Here’s a larger example:

 1  2  3  4  5
18 19 20 21  6
17 28 29 22  7
16 27 30 23  8
15 26 25 24  9
14 13 12 11 10

So, the program should take input as two numbers separated by a space, which represent the width and height of the matrix you’ll output. Then you need to determine the location in the output for each number in the matrix, so when it’s sent to the screen, it looks like the examples above.

I’ve never done a spiral matrix before, but I suspected I’d probably start with a matrix full of dummy values that I’d replace using some algorithm that would understand how to turn “turn and move in a negative direction along the x axis” into placement in the matrix. I did a little reading and found I was on the right path, so I kept on trudging along and this is what I came up with:

#!/usr/bin/env python

def number_spiral(h, w):
   # total number of elements in array
   n = w * h

   # start at top left (row 0 column 0)
   row, column = 0, 0

   # first move is on the same row, to the right
   d_row, d_column = 0, 1

   # fill 2d array with dummy values we'll overwrite later
   arr = [[None] * w for z in range(h)]

   for i in xrange(1, n + 1):
      arr[row][column] = i

      # next row and column
      nrow, ncolumn = row + d_row, column + d_column

      if 0 <= nrow < h and 0 <= ncolumn < w and arr[nrow][ncolumn] is None:
         # no out of bounds accesses or overwriting already-placed elements.
         row, column = nrow, ncolumn
      else:
         # change direction
         d_row, d_column = d_column, -d_row
         row, column = row + d_row, column + d_column

   # print it out!
   for a in range(h):
      for b in range(w):
         print "%2i" % arr[a][b],
      print

if __name__ == '__main__':
   number_spiral(5, 3)
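The challenge actually calls for reading the two dimensions from input; here’s a sketch of a main that does that (sticking with Python 2, like the script above):

if __name__ == '__main__':
    # challenge input: width and height separated by a space
    w, h = map(int, raw_input().split())
    number_spiral(h, w)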

Wonky Bunny Issue “Fixed”

Monday, July 19th, 2010

For those who don’t know what the headline means:

  1. Bunny is an open source command line utility written in Python that provides a shell for talking to and testing AMQP brokers (tested on RabbitMQ).
  2. AMQP is a queuing protocol. It’s defined as a binary wire-level protocol as well as a command set. The spec also defines a good portion of the server semantics, so by that logic Bunny should work against other AMQP brokers besides RabbitMQ.
  3. RabbitMQ is written in Erlang atop OTP, so clustering is ‘free and easy’. My experience with RabbitMQ so far has been fantastic, though I’d like to see client libraries in general mature a bit further.

So, Bunny had this really odd quirk upon its first release. If you did something to cause an error that resulted in a connection being dropped, bunny wouldn’t trap the error. It would patiently wait for you to enter the next command, and fail miserably. The kicker is that I actually defined a ‘check_conn’ method to make sure that the connection was alive before doing anything else, and that really wasn’t working.

The reason is that py-amqplib implements a high-level Connection class (defined in the AMQP spec), along with a Channel class (also defined in the spec), and the Channel is what actually maps to the thing you and I as users care about: some “thing” that lets us communicate with the server, and without which we can’t talk to the server.

With py-amqplib, a Connection is actually defined as channel 0, and always channel 0. I gather that channel 0 gets some special treatment in other sections of the library code, and the object that lives at index ‘0’ in Connection.channels is actually a Connection object, whereas the others are Channel objects.

The result of all of this is that creating a channel in my code and then checking my own object’s ‘chan’ attribute is useless, because channels can be dropped on the floor inside py-amqplib, and the only way I can find to detect that is to check the connection object’s ‘channels’ dictionary. So that’s what I do now. It seems to be working well.

Not only does bunny now figure out that your connection is gone, but it’ll also attempt a reconnect using the credentials you gave it in the last ‘connect’ command. You see, bunny extends the Python built-in cmd.Cmd object, which lets me define my whole program as a single class. That means that whatever you type in, like the credentials to the ‘connect’ command, can be kept handy, since the lifetime of the instance of this class is the same as the lifetime of a bunny session.
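That pattern is easy to show in miniature. Here’s a rough sketch (not bunny’s actual code) of a cmd.Cmd shell that hangs onto the last connect arguments so a reconnect can reuse them:

import cmd

class MiniShell(cmd.Cmd):
    prompt = 'bunny> '
    last_connect = None

    def do_connect(self, line):
        """connect <host> <user> <password>"""
        self.last_connect = line  # stashed for the life of the session
        # ... real code would open the AMQP connection here ...

    def reconnect(self):
        # the Cmd instance outlives individual commands, so the old
        # credentials are still on hand when a connection drops
        if self.last_connect:
            self.do_connect(self.last_connect)

if __name__ == '__main__':
    MiniShell().cmdloop()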

So, in summary, bunny is more useful now, but it’s still not “done”. I made this fix over the weekend during an hour I unexpectedly found for myself. It’s “a” solution, but it’s not “the” solution. The real solution is to map out all of the errors that actually cause a connection to drop and give the user a bit more feedback about what happened. I also want to add more features (like support for getting some stats back from Alice to replace bunny’s really weak ‘qlist’ command).

Python Date Manipulation

Tuesday, July 6th, 2010

This post is the result of some head-scratching and note taking I did for a reporting project I undertook recently. It’s not a complete rundown of Python date manipulation, but hopefully the post (and hopefully the comments) will help you and maybe me too :)

The head-scratching is related to the fact that there are several different time-related objects, spread out over a few different time-related modules in Python, and I have found myself in plenty of instances where I needed to mix and match various methods and objects from different modules to get what I needed (which I thought was pretty simple at first glance). Here are a few nits to get started with:

  • strftime/strptime can generate the “day of week” where Sunday is 0, but there’s no way to tell any of the conversion functions like gmtime() that you want your week to start on Sunday as far as I know. I’m happy to be wrong, so leave comments if I am. It seems odd that you can do a sort of conversion like this when you output, but not within the calculation logic.
  • If you have a struct_time object in localtime format and want to convert it to an epoch date, time.mktime() works, but if your struct_time object is in UTC format, you have to use calendar.timegm() (see the quick sketch after this list) — this is lame and needs to go away. Just add timegm() to the time module (possibly renamed?).
  • time.ctime() will convert an epoch date into nicely formatted local time, but there’s no function to provide the equivalent output for UTC time.
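To make the mktime/timegm asymmetry concrete, here’s a quick sketch:

>>> import time, calendar
>>> now = int(time.time())
>>> time.mktime(time.localtime(now)) == now    # local struct_time -> epoch
True
>>> calendar.timegm(time.gmtime(now)) == now   # UTC struct_time -> epoch
True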

There are too many methods and modules for dealing with date manipulation in Python, such that performing fairly common tasks requires importing and using a few different modules, different object types and methods from each. I’d love this to be cleaned up. I’d love it more if I were qualified to do it. More learning probably needs to happen for that. Anyway, just my $.02.

Mission 1: Calculating Week Start/End Dates Where Week Starts on Sunday

My mission: Pull epoch dates from a database. They were generated on a machine whose time does not use UTC, but rather local time (GMT-4).  Given the epoch date, find the start and end of the previous week, where the first day of the week is Sunday, and the last day of the week is Saturday.

So, I need to be able to get a week start/end range, from Sunday at 00:00 through Saturday at 23:59:59. My initial plan of attack was to calculate midnight of the current day, and then base my calculations for Sunday 00:00 on that, using simple timedelta(days=x) manipulations. Then I could do something like calculate the next Sunday and subtract a second to get Saturday at 23:59:59.

Nothing but ‘time’

In this iteration, I’ll try to accomplish my mission using only the ‘time’ module and some epoch math.

Seems like you should be able to easily get the epoch value for midnight of the current epoch date, and display it easily with time.ctime(). This isn’t quite true, however. See here:

>>> etime = int(time.time())
>>> time.ctime(etime)
'Thu May 20 15:26:40 2010'
>>> etime_midnight = etime - (etime % 86400)
>>> time.ctime(etime_midnight)
'Wed May 19 20:00:00 2010'
>>>

The reason this doesn’t do what you might expect is that time.ctime() outputs local time, which for me is UTC-4 (I live near NY, USA, and we’re currently in DST; the timezone is EDT now, and EST in winter). When you do math on the raw epoch timestamp (etime), you’re working with a bare integer that has no idea about time zones, so you have to account for that yourself. Let’s try again:

>>> etime = int(time.time())
>>> etime
1274384049
>>> etime_midnight = (etime - (etime % 86400)) + time.altzone
>>> time.ctime(etime_midnight)
'Thu May 20 00:00:00 2010'
>>>

So, why is this necessary? It might be clearer if we throw in a call to gmtime() and also make the math bits more transparent:

>>> etime
1274384049
>>> time.ctime(etime)
'Thu May 20 15:34:09 2010'
>>> etime % 86400
70449
>>> (etime % 86400) / 3600
19
>>> time.gmtime(etime)
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=19, tm_min=34, tm_sec=9, tm_wday=3, tm_yday=140, tm_isdst=0)
>>> midnight = etime - (etime % 86400)
>>> time.gmtime(midnight)
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=140, tm_isdst=0)
>>> time.ctime(midnight)
'Wed May 19 20:00:00 2010'
>>> time.altzone
14400
>>> time.altzone / 3600
4
>>> midnight = (etime - (etime % 86400)) + time.altzone
>>> time.gmtime(midnight)
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=4, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=140, tm_isdst=0)
>>> time.ctime(midnight)
'Thu May 20 00:00:00 2010'
>>>

What’s that now? You want what? You want the epoch timestamp for the previous Sunday at midnight? Well, let’s see. The time module in Python doesn’t do deltas per se. You can calculate things out using the epoch bits and some math if you wish. The only bit that’s really missing is the day of the week our current epoch timestamp lives on.

>>> time.ctime(midnight)
'Thu May 20 00:00:00 2010'
>>> struct_midnight = time.localtime(midnight)
>>> struct_midnight
time.struct_time(tm_year=2010, tm_mon=5, tm_mday=20, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=3, tm_yday=140, tm_isdst=1)
>>> dow = struct_midnight.tm_wday
>>> dow
3
>>> midnight_sunday = midnight - ((dow + 1) * 86400)
>>> time.ctime(midnight_sunday)
'Sun May 16 00:00:00 2010'

You can do this going forward in time from the epoch time as well. Remember, we also want to grab 23:59:59 on the Saturday after the epoch timestamp you now have:

>>> saturday_night = midnight + ((5 - dow+1) * 86400) - 1
>>> time.ctime(saturday_night)
'Sat May 22 23:59:59 2010'
>>>

And that’s how you do date manipulation using *only* the time module. Elegant, no?

No. Not really.

Unfortunately, the alternatives also aren’t the most elegant in the world, imho. So let’s try doing this all another way, using the datetime module and timedelta objects.

Now with datetime!

The documentation for the datetime module says:

“While date and time arithmetic is supported, the focus of the implementation is on efficient member extraction for output formatting and manipulation.”

Hm. Sounds a lot like what the time module functions do. Some conversion here or there, but no real arithmetic support. We had to pretty much do it ourselves mucking about with epoch integer values. So what’s this buy us over the time module?

Let’s try to do our original task using the datetime module. We’re going to start with an epoch timestamp, and calculate the values for the previous Sunday at midnight, and the following Saturday at 23:59:59.

The first thing I had a hard time finding was a way to deal with the notion of a “week”. I thought I’d found it in ‘date.timetuple()’, which help(date.timetuple) says is “compatible with time.localtime()”. I guess they must mean that the output is the same as time.localtime(), because I can’t find any other way in which it is similar. Running time.localtime() with no arguments returns a struct_time object for the current time. date.timetuple() requires arguments or it’ll throw an error, and to make you extra frustrated, the arguments it takes aren’t in the docs or the help() output.

So maybe they mean it takes the same arguments as time.localtime(), eh? Not so much — time.localtime() takes an int representing an epoch timestamp. Trying to feed an int to date.timetuple throws an error saying it requires a ‘date’ object.

So, the definition of “compatible” is a little unclear to me in this context.

So here I’ve set about finding today, then “last saturday”, and then “the sunday before the last saturday”:

def get_last_whole_week(today=None):
    # a date object
    date_today = today or datetime.date.today()

    # day 0 is Monday. Sunday is 6.
    dow_today = date_today.weekday()

    if dow_today == 6:
        days_ago_saturday = 1
    else:
        # If day between 0-5, to get last saturday, we need to go to day 0 (Monday), then two more days.
        days_ago_saturday = dow_today + 2

    # Make a timedelta object so we can do date arithmetic.
    delta_saturday = datetime.timedelta(days=days_ago_saturday)

    # saturday is now a date object representing last saturday
    saturday = date_today - delta_saturday

    # timedelta object representing '6 days'...
    delta_prevsunday = datetime.timedelta(days=6)

    # Making a date object. Subtract the days from saturday to get "the Sunday before that".
    prev_sunday = saturday - delta_prevsunday

    return prev_sunday, saturday

This gets me date objects representing the start and end time of my reporting range… sort of. I need them in epoch format, and I need to specifically start at midnight on Sunday and end on 23:59:59 on Saturday night. Sunday at midnight is no problem: timetuple() sets time elements to 0 anyway. For Saturday night, in epoch format, I should probably just calculate a date object for two Sundays a week apart, and subtract one second from one of them to get the last second of the previous Saturday.

Here’s the above function rewritten to return a tuple containing the start and end dates of the previous week. It can optionally be returned in epoch format, but the default is to return date objects.

def get_last_whole_week(today=None, epoch=False):
    # a date object
    date_today = today or datetime.date.today()
    print "date_today: ", date_today

    # By default day 0 is Monday. Sunday is 6.
    dow_today = date_today.weekday()
    print "dow_today: ", dow_today

    if dow_today == 6:
        days_ago_saturday = 1
    else:
        # If day between 0-5, to get last saturday, we need to go to day 0 (Monday), then two more days.
        days_ago_saturday = dow_today + 2
    print "days_ago_saturday: ", days_ago_saturday
    # Make a timedelta object so we can do date arithmetic.
    delta_saturday = datetime.timedelta(days=days_ago_saturday)
    print "delta_saturday: ", delta_saturday
    # saturday is now a date object representing last saturday
    saturday = date_today - delta_saturday
    print "saturday: ", saturday
    # timedelta object representing '6 days'...
    delta_prevsunday = datetime.timedelta(days=6)
    # Making a date object. Subtract the 6 days from saturday to get "the Sunday before that".
    prev_sunday = saturday - delta_prevsunday

    # we need to return a range starting with midnight on a Sunday, and ending w/ 23:59:59 on the
    # following Saturday... optionally in epoch format.

    if epoch:
        # saturday is date obj = 'midnight saturday'. We want the last second of the day, not the first.
        saturday_epoch = time.mktime(saturday.timetuple()) + 86399
        prev_sunday_epoch = time.mktime(prev_sunday.timetuple())
        last_week = (prev_sunday_epoch, saturday_epoch)
    else:
        saturday_str = saturday.strftime('%Y-%m-%d')
        prev_sunday_str = prev_sunday.strftime('%Y-%m-%d')
        last_week = (prev_sunday_str, saturday_str)
    return last_week

It would be easier to just have some attribute for datetime objects that lets you set the first day of the week to be Sunday instead of Monday. It wouldn’t completely alleviate every conceivable issue with calculating dates, but it would help. The calendar module has a setfirstweekday() function that lets you set the first weekday to whatever you want. I gather this is mostly for formatting output of matrix calendars, but it would be useful if it could be used in date calculations as well. Perhaps I’ve missed something? Clues welcome.

Mission 2: Calculate the Prior Month’s Start and End Dates

This should be easy. What I hoped would happen is I’d be able to get today’s date, and then create a timedelta object for ‘1 month’, and subtract, having Python take care of things like changing the year when the current month is January. Calculating this yourself is a little messy: you can’t just use “30 days” or “31 days” as the length of a month, because:

  1. “January 31” – “30 days” = “January 1” — not the previous month.
  2. “March 1” – “31 days” = “January 29” — also not the previous month.

Instead, what I did was this:

  1. create a datetime object for the first day of the current month (hard coding the ‘day’ argument)
  2. used a timedelta object to subtract a day, which gives me a datetime object for the last day of the prior month (with year changed for me if needed),
  3. used that object to create a datetime object for the first day of the prior month (again hardcoding the ‘day’ argument)

Here’s some code:

today = datetime.datetime.today()
first_day_current = datetime.datetime(today.year, today.month, 1)
last_day_previous = first_day_current - datetime.timedelta(days=1)
first_day_previous = datetime.datetime(last_day_previous.year, last_day_previous.month, 1)
print 'Today: ', today
print 'First day of this month: ', first_day_current
print 'Last day of last month: ', last_day_previous
print 'First day of last month: ', first_day_previous

This outputs:

Today:  2010-07-06 09:57:33.066446
First day of this month:  2010-07-01 00:00:00
Last day of last month:  2010-06-30 00:00:00
First day of last month:  2010-06-01 00:00:00

Not nearly as onerous as the week start/end range calculations, but I kind of thought that among all of these modules, one of them would be able to find me the start and end of the previous month. The raw material for creating this is, I suspect, buried somewhere in the source code for the calendar module, which can tell you the start and end dates for a month, but can’t do any date calculations to give you the previous month. The datetime module can do calculation, but it can’t tell you the start and end dates for a month. The datetime.timedelta object’s largest granularity is ‘week’ if memory serves, so you can’t just do ‘timedelta(months=1)’, because the deltas are all converted internally to a fixed number of days, seconds, and microseconds, and a month isn’t a fixed number of any of them.
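For what it’s worth, date.replace() can shave a step off that dance; here’s a small sketch of the same logic wrapped in a function:

import datetime

def previous_month(today=None):
    # first and last days of the month before `today`
    today = today or datetime.date.today()
    last_day_previous = today.replace(day=1) - datetime.timedelta(days=1)
    return last_day_previous.replace(day=1), last_day_previous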

Converge!

While I could probably go ahead and use dateutil, which is really darn flexible, I’d rather be able to do this without a third-party module. Also, dateutil’s flexibility is not without its complexity. It’s not an insurmountable task to learn, but it’s not like you can directly transfer your experience with the built-in modules to using dateutil.

I don’t think merging all of the time-related modules in Python would be necessary or even desirable, really, but I haven’t thought deeply about it. Perhaps a single module could provide a superclass for the various time-related objects currently spread across three modules, and they could share some base level functionality. Hard to conceive of a timedelta object not floating alone in space in that context, but alas, I’m thinking out loud. Perhaps a dive into the code is in order.

What have you had trouble doing with dates and times in Python? What docs have I missed? What features are completely missing from Python in terms of time manipulation that would actually be useful enough to warrant inclusion in the collection of included batteries? Let me know your thoughts.

Brain Fried Over NoSQL

Saturday, June 26th, 2010

So, I’m working on a pet project. It’s in stealth mode. Just kidding — I don’t believe in stealth mode 😉

It’s a twitter analytics dashboard that actually does useful things with the mountains of data available from the various Twitter APIs. I’m writing it in Python using Tornado. I did the first mockup for it just two nights ago.

It’s already a lot of fun. I’ve worked with Tornado before and like it a lot. I have most of the base infrastructure questions answered, because this is a pet project and they’re mostly easy and in some sense “don’t matter”. But that’s what has me stuck.

It Doesn’t Matter

It’s true. Past a certain point, belaboring choices of what tools to use where is pointless and is probably premature optimization. I’ve been working with startups for the past few years, and I’m painfully aware of what happens when a company takes too long to react to their popularity. I want to architect around that at the start, but I’m resisting. It’s a pet project.

But if it doesn’t matter, that means I can choose tools that are going to be fun to dig into and learn about. I’ve been so busy writing code to help avoid or buffer impact to the database that I haven’t played a whole lot with the NoSQL choices out there, and there are tons of them. And they all have a different world view and a unique approach to providing solutions to what I see as somewhat different problems.

Why NoSQL?

Why not? I’ve been working with relational database systems since 1998. I worked on large data reporting projects, a couple of huge data warehousing projects, financial transaction systems, I worked for Sybase as a consulting DBA and project manager for a while, I was into MySQL and PostgreSQL by 2000, used them in production environments starting around 2001-02… I understand them fairly well. I also understand BDB and other “flat-file” databases and object stores. SQLite has become unavoidable in the past few years as well. It’s not like I don’t understand the compromises I’m making going to a NoSQL system.

There’s a good bit of talk from the RDBMS camp (seriously, why do they need their own camp?) about why NoSQL is bad. Lots of people who know me  would put me in the RDBMS camp, and I’m telling you not to cry yourself to sleep out of guilt over a desire to get to know these systems. They’re interesting, and they solve some huge issues surrounding scalability with greater ease than an RDBMS.

Like what? Well, cost for one. If I could afford Oracle I’d sooner use that than go NoSQL in all likelihood. I can’t afford it. Not even close. Oracle might as well charge me a small planet for their product. It’s great stuff, but out of reach. And what about sharding? Sharding a relational database sucks, and to try to hide the fact that it sucks requires you to pile on all kinds of other crap like query proxies, pools, and replication engines, all in an effort to make this beast do something it wasn’t meant to do: scale beyond a single box. All this stuff also attempts to mask the reality that you’ve also thrown your hands in the air with respect to at least 2 letters that make up the ACID acronym. What’s an RDBMS buying you at that point? Complexity.

And there’s another cost, by the way: no startup I know has the kind of enormous hardware that an enterprise has. They have access to commodity hardware. Pizza boxes. Don’t even get me started on storage. I’ve yet to see SSD or flash storage at a startup. I currently work at MyYearbook.com, and there are some pretty hefty database servers there, but it can hardly be called a startup anymore. Hell, they’re even profitable! 😉

Where Do I Start?

One nice thing about relationland is I know the landscape pretty well. Going to NoSQL is like dropping me in a country I’ve never heard of where I don’t really speak the language. I have some familiarity with key-value stores from dealing with BDB and Memcache, and I’ve played with MongoDB a bit (using pymongo), but that’s just the tip of the iceberg.

I heard my boss mention Tokyo Tyrant a few times, so I looked into it. It seems to be one of the more obscure solutions out there from the standpoint of adoption, community, documentation, etc., but it does appear to be very capable on a technical level. However, my application is going to be number-heavy, and I’m not going to need to own all of the data required to provide the service. I can probably get away with just incrementing counters in Memcache for some of this work. For persistence I need something that will let me do aggregation *FAST* without having to create aggregation tables, ideally. Using a key/value store for counters really just seems like a no-brainer.
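To make the counter idea concrete, here’s a sketch using the python-memcached client (the key name is invented):

import memcache

mc = memcache.Client(['127.0.0.1:11211'])
mc.add('tweet_count:bkjones', '0')  # no-op if the key already exists
mc.incr('tweet_count:bkjones')      # atomic server-side increment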

That said, I think what I’ve decided to do, since it doesn’t matter, is punt on this decision in favor of getting a working application up quickly.

MySQL

Yup. I’m going to pick one or two features of the application to implement as a ‘first cut’, and back them with a MySQL database. I know it well, Tornado has a built-in interface for it, and it’s not going to be a permanent part of the infrastructure (otherwise I’d choose PostgreSQL in all likelihood).

To be honest, I don’t think the challenges in bringing this application to life are really related to the data model or the engine/interface used to access it (though if I’m lucky that’ll be a major part of keeping it alive). No, the real problem I’m faced with is completely unrelated to these considerations…

Twitter’s API Service

Not the API itself, per se, but the service providing access to it, and the way it’s administered, is going to be a huge challenge. It’s not just the Twitter website that’s inconsistent; the API service goes right along with it. Not only that, but the type of data I really need to make this application useful isn’t immediately available from the API as far as I can tell.

Twitter maintains rate limits on the API. You can only make so many calls over so short a period of time. That alone makes providing an application like this to a lot of people a bit of a challenge. Compounding the issue is that, when there are failwhales washing up on the shores, those limits can be dynamically decreased. Ugh.

I guess it’s not a project for the faint of heart, but it’ll drive home some golden rules that are easy to neglect in other projects, like planning for failure (of both my application, and Twitter). Also, it’ll be a lot of fun.

Python IDE Frustration

Thursday, May 13th, 2010

I didn’t think I was looking for a lot in an IDE. Turns out what I want is impossibly hard to find.

In the past 6 months I’ve tried (or tried to try):

  • Komodo Edit
  • Eclipse w/ PyDev
  • PyCharm (from the first EAP build to… yesterday)
  • Wingware
  • Textmate

Wingware

First, let’s get Wingware out of the way. I’m on a Mac, and if you’re not going to develop for the Mac, I’m not going to pay you hundreds of dollars for your product. Period. I don’t even use free software that requires X11. Lemme know when you figure out that coders like Macs and I’ll try Wingware.

Komodo Edit

Well, I wanted to try the IDE but I downloaded it, launched it once for 5 minutes (maybe less), forgot about it, and now my trial is over. I’ll email sales about this tomorrow. In the meantime, I use Komodo Edit.

Komodo Edit is pretty nice. One thing I like about it is that it doesn’t really go overboard forcing its world view down my throat. If I’m working on bunny, which is a one-file Python project I keep in a git repository, I don’t have to figure out their system for managing projects. I can just “Open File” and use it as a text editor.

It has “ok” support for Vi key bindings, and it’s not a plugin: it’s built in. The support has some annoying limitations, but for about 85% of what I need it to do it’s fine. One big annoyance is that I can’t write out a file and assign it a name (e.g. ‘:w /some/filename.txt’). It’s not supported.

Komodo Edit, unless I missed it, doesn’t integrate with Git, and doesn’t offer a Python console. Its capabilities in the area of collaboration in general are weak. I don’t absolutely have to have them, but things like that are nice for keeping focused and not having to switch away from the window to do anything else, so ideally I could get an IDE that has this. I believe Komodo IDE has these things, so I’m looking forward to trying it out.

Komodo is pretty quick compared to most IDEs, and has always been rock solid stable for me on both Mac and Linux, so if I’m not in the mood to use Vim, or I need to work on lots of files at once, Komodo Edit is currently my ‘go-to’ IDE.

PyCharm

PyCharm doesn’t have an officially supported release. I’ve been using Early Adopter Previews since the first one, though. When it’s finally stable I’m definitely going to revisit it, because to be honest… it’s kinda dreamy.

Git integration is very good. I used it with GitHub without incident for some time, but these are early adopter releases, and things happen: two separate EAP releases of PyCharm made my project files completely disappear without warning, error, or any indication that anything was wrong at all. Of course, this is git, so running ‘git checkout -f’ brought things back just fine, but it’s unsettling, so now I’m just waiting for the EAP to be over with and I’ll check it out when it’s done.

I think for the most part, PyCharm nails it. This is the IDE I want to be using assuming the stability issues are worked out (and I don’t have reason to believe they won’t be). It gives me a Python console, VCS integration, a good class and project browser, some nice code analytics, and more complex syntax checking that “just works” than I’ve seen elsewhere. It’s a pretty handsome, very intuitive IDE, and it leverages an underlying platform whose plugins are available to PyCharm users as well, so my Vim keys are there (and, by the way, the IDEAVim plugin is the most advanced Vim support I’ve seen in any IDE, hands down).

Eclipse with PyDev

One thing I learned from using PyCharm and Eclipse is that where tools like this are concerned, I really prefer a specialized tool to a generic one with plugins layered on to provide the necessary functionality. Eclipse with PyDev really feels to me like a Java IDE that you have to spend time laboriously chiseling, drilling, and hammering to get it to do what you need if you’re not a Java developer. The configuration is extremely unintuitive, with a profuse array of dialogs, menus, options, options about options and menus, menus about menus and options… it never seems to end.

All told, I’ve probably spent the equivalent of 2 working days mucking with Eclipse configuration, and I’ve only been able to get it “pretty close” to where I want it. The Java-loving underpinnings of the Eclipse platform simply cannot be suppressed, while things I had to layer on with plugins don’t show up in the expected places.

Add to this Eclipse’s world-view, which reads something like “there is no filesystem tree: only projects”, and you have a really damned annoying IDE. I’ve tried on and off for over a year to make friends with Eclipse because of the good things I hear about PyDev, but it just feels like a big hacky, duct-taped mess to me, and if PyCharm has proven anything to me, it’s that building a language specific IDE on an underlying platform devoted to Java doesn’t have to be like this. When I finally got it to some kind of usable point, and after going through the “fonts and colors” maze, it turns out the syntax highlighting isn’t really all that great!

A quick word about Vi key bindings in Eclipse: it’s not a pretty picture, but the best I’ve been able to find is a free tool called Vrapper. It’s not bad. I could get by with Vrapper, but I don’t believe it’s as mature and evolved as IDEAVim plugin in PyCharm.

So, I’ll probably turn back to Eclipse for Java development (I’m planning on taking on a personal Android project), but I think I’ve given up on it for anything not Java-related.

Vim

Vim is technically ‘just an editor’, but it has some nice benefits, and with the right plugins, it can technically do all of the things a fancy IDE can. I use the taglist plugin to provide the project and class browser functionality, and the kicker here is that you can actually switch to the browser pane, type ‘/’ and the object or member you’re looking for, and jump to it in a flash. It’s also the most complete Vim key binding implementation available 😉

The big win for me in using Vim though is remote work. Though I’d rather do all of my coding locally, there are times when I really have to write code on remote machines, and I don’t want to go through the rigmarole of coding, pushing my changes, going to my terminal, pulling down the changes, testing, failing, fixing the code on my machine, pushing my changes, pulling my changes… ugh.

So why not just use Vim? I could do it. I’ve been using Vim for many years and am pretty good with it, but I just feel like separating my coding from my terminal whenever I can is a good thing. I don’t want my code to look like my terminal, nor do I want my terminal to look like my IDE theme. I’m SUPER picky about fonts and colors in my IDE, and I’m not that picky about them in my terminal. I also want the option of using my mouse while I’m coding, mostly to scroll, and getting that to work on a Mac in Terminal.app isn’t as simple as you might expect (and I’m not a fan of iTerm… and its ability to do this comes at a cost as well).

MacVim is nice, solves the separation of Terminal and IDE, and I might give it a more serious try, but let’s face it, it’s just not an IDE. Code completion is still going to be mediocre, the interface is still going to be terminal-ish… I just don’t know. One thing I really love though is the taglist plugin. I think if I could just find a way to embed a Python console along the bottom of MacVim I might be sold.

One thing I absolutely love about Vim, the thing that Vim gets right that none of the IDEs get is colorschemes: MacVim comes with like 20 or 30 colorschemes! And you can download more on the ‘net! The other IDEs must lump colorscheme information into the general preferences or something, because you can’t just download a colorscheme as far as I’ve seen. The IDE with the worst color/font configuration? Eclipse – the one all my Python brethren seem to rave about. That is so frustrating. Some day I’ll make it to PyCon and someone will show me the kool-aid I guess.

The Frustrating Conclusion

PyCharm isn’t soup yet, Wingware is all but ignoring the Mac platform, Eclipse is completely wrong for my brain and I don’t know how anyone uses it for Python development, Komodo Edit is rock solid but lacking features, and Komodo IDE is fairly pricey and a 30-day trial is always just really annoying (and I kinda doubt it beats PyCharm for Python-specific development). MacVim is a stand-in for a real IDE and it does the job, but I really want more… integration! I also don’t like maintaining the plugins and colorschemes and *rc files and ctags, and having to understand its language and all that.

I don’t cover them here, but I’ve tried a bunch of the Linux-specific Python IDEs as well, and I didn’t like a single one of them at all. At some point I’ll spend more time with those tools to see if I missed something crucial that, once learned, might make it hug my brain like a warm blanket (and make me consider running Linux on my desktop again, something I haven’t done on a regular ongoing basis in about 4 years).

So… I don’t really have an IDE yet. I *did* however just realize that the laptop I’m typing on right now has never had a Komodo IDE install, so I’m off to test it now. Wish me luck!

PyTPMOTW: PsycoPG2

Wednesday, April 21st, 2010

What is this module for?

Interacting with a PostgreSQL database in Python.

What is PostgreSQL?

PostgreSQL is an open source relational database product. It has some more advanced features, like built-in networking-related and GIS-related datatypes, the ability to script stored functions in multiple languages (including Python), etc. If you have never heard of PostgreSQL, get out from under your rock!

Making Contact

Using the psycopg2 module to connect to a PostgreSQL database couldn’t be simpler. You can call the module’s connect() function, passing in either the individual arguments required to make contact (dbname, user, etc.) or one long “DSN” string, like this:

import psycopg2
import psycopg2.extensions

# A DSN is a space-delimited string of key=value pairs.
dsn = "host=localhost port=6000 dbname=testdb user=jonesy"
conn = psycopg2.connect(dsn)
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

The DSN value is a space-delimited collection of key=value pairs, which I construct before passing it to the psycopg2.connect() function. Once we have a connection object, the very first thing I do is set the connection’s isolation level to ‘autocommit’, so that INSERT and UPDATE transactions are committed automatically without my having to call conn.commit() after each one. There are several isolation levels defined in the psycopg2.extensions package; they live in ‘extensions’ because they go beyond what is defined in the DB API 2.0 spec that is typically used as a reference in creating Python database modules.
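If autocommit isn’t what you want, just pick another level from psycopg2.extensions and commit by hand. Here’s a minimal sketch of that, assuming the same conn object and a hypothetical ‘accounts’ table:

conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_READ_COMMITTED)
curs = conn.cursor()
curs.execute("UPDATE accounts SET balance = balance - 10 WHERE id = %s", (42,))
conn.commit()  # with a transactional isolation level, nothing sticks until you commit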

Simple Queries and Type Conversion

In order to get anything out of the database, we have to know how to talk to it. Of course this means writing some SQL, but it also means sending query arguments in a format understood by the database. I’m happy to report that psycopg2 does a pretty good job of making things “just work” when it comes to converting your input into PostgreSQL types, and converting the output directly into Python types for easy manipulation in your code. That said, understanding how to properly use these features can be a bit confusing at first, so let me address the source of a lot of early confusion right away:

cur = conn.cursor()
# Parameters always go in a sequence, hence the one-element tuple.
cur.execute("""SELECT id, fname, lname, balance FROM accounts WHERE balance > %s""", (min_balance,))

Chances are, min_balance is an integer, but we’re using ‘%s’ anyway (and note that the parameters are always passed in a sequence, hence the one-element tuple). Why? Because this isn’t really you telling Python to do a string formatting operation; it’s you telling psycopg2 to convert the incoming data using its default adapters, which convert integers into the PostgreSQL INT type. So, you can use “%s” in the execute() method to properly convert integers, strings, dates, datetimes, timedeltas, lists, tuples and most other native Python types to a corresponding PostgreSQL type. There are also adapters built into psycopg2 if you need more control over the type conversion process.
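To make that concrete, here’s a quick sketch; the table and columns are hypothetical, but notice that every placeholder is a plain %s regardless of the Python type being passed:

import datetime

cur.execute(
    "INSERT INTO accounts (fname, opened, balance, tags) VALUES (%s, %s, %s, %s)",
    ('jonesy', datetime.date(2010, 4, 21), 100, ['new', 'vip'])
)
# psycopg2 binds these as TEXT, DATE, INT, and a PostgreSQL array, respectively.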

Cursors

Psycopg2 makes it pretty easy to get your results back in a format that is easy for the receiving code to deal with. For example, the projects I work on tend to use the RealDictCursor type, because the code tends to require accessing the parts of the result set rows by name rather than by index (or just via blind looping). Here’s how to set up and use a RealDictCursor:

import psycopg2.extras

curs = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
curs.execute("SELECT id, name FROM users")
rs = curs.fetchall()
for row in rs:
    print row['id'], row['name']  # each row is a real dict, keyed by column name

It’s possible you have two sections of code that’ll rip apart a result set, and one needs by-name access, and the other just wants to loop blindly or access by index number. If that’s the case, just replace ‘RealDictCursor’ with ‘DictCursor’, and you can have it both ways!
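Here’s a minimal sketch of that dual access, reusing the hypothetical ‘users’ table from above:

import psycopg2.extras

curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
curs.execute("SELECT id, name FROM users")
for row in curs.fetchall():
    print row['id'], row[1]  # DictCursor rows answer to both names and indexes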

Another nice thing about psycopg2 is the cursor.query attribute and cursor.mogrify method. Mogrify lets you see how a query will look after all input variables are bound, but before the query is sent to the server. cursor.query holds the exact query that was actually sent over the wire. I use cursor.query in my logging output all the time to catch out-of-order parameters and mismatched input types, etc. Here’s an example:

try:
    curs.callproc('myschema.myprocedure', callproc_params)
except Exception as out:
    print out         # the error psycopg2 raised
    print curs.query  # the query exactly as it went over the wire
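And here’s a quick sketch of mogrify doing the same inspection before anything is executed, using the earlier accounts query:

sql = curs.mogrify("SELECT id, fname, lname, balance FROM accounts WHERE balance > %s", (min_balance,))
print sql  # the statement with min_balance already bound, without touching the server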

Calling Stored Functions

Stored procedures, or ‘functions’ in PostgreSQL-speak, can be immensely useful in large, complex applications where you want to enforce business rules in a single place outside the domain of the main application developers. In some cases it can also be more efficient to put functionality in the database than in the main application code. In addition, if you’re hiring developers, they should develop in the standard language for your environment, not SQL: the SQL should be written by the database administrators and database developers, and exposed as needed, so all the application developers have to do is call the newly-exposed function. Here’s how to call a function using psycopg2:

callproc_params = [uname, fname, lname, uid]
cur.callproc("myschema.myproc", callproc_params)

The first argument to ‘callproc()’ is the name of the stored procedure, and the second argument is a sequence holding the input parameters to the function. The input parameters should be in the order that the stored procedure expects them, and I’ve found after quite a bit of usage that the module typically is able to convert the types perfectly well without my intervention, with one exception…
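Before we get to that exception, one detail worth noting: if the stored function returns rows, you fetch them just as you would for a normal query. A quick sketch:

cur.callproc("myschema.myproc", callproc_params)
for row in cur.fetchall():  # a function's result set reads like any other query's
    print row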

The UUID Array

PostgreSQL has built-in support for lots of interesting data types, like INET types for supporting IP addresses and CIDR network blocks, and GIS-related data types. In addition, PostgreSQL supports arrays of UUIDs. This comes in handy if you use a UUID to identify items and want to store an array of them to associate with an order, or you use UUIDs to track messages and want to store an array of them together to represent a thread or conversation. Getting a UUID array into the database is really not too difficult: if you have a list of UUID strings, you can do a quick conversion, call one registration function, and then use the array like any other input parameter:

import uuid
import psycopg2.extras

# Teach psycopg2 how to adapt uuid.UUID objects (a one-time registration).
psycopg2.extras.register_uuid()

my_uuid_arr = [uuid.UUID(i) for i in my_uuid_arr]
callproc_params = [
    myvar1,
    myvar2,
    my_uuid_arr,
]

curs.callproc('myschema.myproc', callproc_params)

Connection Status

It’s not a given that your database connection lives on from query to query, and you shouldn’t assume that just because you ran a query a fraction of a second ago, the connection is still around now. Actually, to speak more Pythonically, you *should* assume the connection is still there, but be ready for failure: check the connection’s ‘status’ attribute to diagnose and help get things back on track. Here’s one way you might do it:

    @property
    def active_dbconn(self):
        return self.conn.status in (psycopg2.extensions.STATUS_READY, psycopg2.extensions.STATUS_BEGIN)

So, I’m assuming here that you have some object holding a connection object it refers to as ‘self.conn’. This one-liner method uses Python’s built-in @property decorator, so the other methods in the class can either check the connection status before attempting a query:

if self.active_dbconn:
    try:
        curs.execute(...)
    except Exception as out:
        logging.error("Houston we have a problem")

Or you can flip that around like this:

try:
    curs.execute(...)
except Exception as out:
    if not self.active_dbconn:
        logging.error("Execution failed because your connection is dead")
    else:
        logging.error("Execution failed in spite of live connection: %s" % out)

Read On…

A database is a large, complex beast. There’s no way to cover the entirety of a database or a module that talks to it in a simple blog post, but I hope I’ve been able to show some of the more common features, and maybe one or two other items of interest. If you want to know more, I’m happy to report that, after a LONG time of being unmaintained, the project has recently sprung back to life and is pretty well-documented these days. Check it out!