On Keeping A Journal and Journaling

January 1st, 2014

I've kept a journal since I was 11. That was in the mid-'80s. I hadn't heard the word “journaling” until about a year or so ago, and lately it appears that giving advice and how-to information about journaling is a bit of a cottage industry. I had no idea so many people were in search of this information, but in a world where you can subscribe to multiple very successful monthly magazines about running, I shouldn't be so shocked.

I should also not be shocked to learn that people have heard some of the benefits people tout about journaling and want those benefits for themselves. But maybe some are skeptical and wonder “is journaling really all it's cracked up to be?” For those folks, let me say this: I've read a few of the blogs myself, and while I'd say the authors are mostly one- or two-year “veterans” who don't really get it yet, the benefits they say you can get from journaling are not only possible, they're just the tip of the iceberg.

The key to getting started, though, is probably to completely ignore everything you've read, go get a Pilot V5 pen and some kind of blank book, and just start doing as you please. Beyond encouraging words and listing the benefits, what the books and blogs are telling you to do is, quite honestly, bullshit. They're either trying to put arbitrary rules around it, or telling you about the process they themselves find useful… for them. Some things I read seemed utterly destructive to some of the benefits. Tread carefully around advice, from me or anyone else.

Journaling is ultimately a personal thing. All of it. Not just the process, but the goals or what you want to get out of it. Furthermore, most of the goals I've achieved over multiple decades of journaling (and they are both countless and priceless) happened not because I willed it to be so and structured my process or writing to meet them, but by happy accident. I was benefitting from journaling long before I even realized what journaling had brought to my life.

So the idea that you can learn to journal strikes me as pretty odd. But I do encourage you to do it. Just find some quiet time, a blank page, and a pen, and spill it. Even if it looks boring. In the meantime, if you have questions best answered by someone who has been doing this for more than a year or two, let me know and I'll be happy to help if I can.

What Geeks Could Learn From Working In Restaurants

September 4th, 2012

I spent a long time in my teens and early 20’s working in restaurants. To be honest, I loved just about every minute of it. It’s exciting and you’re almost never idle or bored: you either get immediate gratification from happy customers leaving a nice tip, or you’re on the defensive, thinking on your feet to save an experience after something has gone wrong.

In contrast, technology, I’ve found, requires more of an investment to get to the gratification of a job well done. I’ve also always felt like certain aspects of technology, like coding, are somewhat therapeutic for me, whereas restaurant work mostly wasn’t. And, occasionally, technology work can be monotonous and boring.

In spite of the quite dramatic differences between the two working environments, I sometimes find myself thinking about restaurants when I’m working on software or systems. It turns out that some of the tenets of running a restaurant, or just waiting tables or bartending, can be applied to work in the tech sector. Allow me a bit of license to riff on some of them here:

  • Treat your station as a single table. If one table needs a refill on a drink, chances are so does at least one of the other 4 tables that are all within 10 feet of it. Save yourself some running by checking with all of your tables when one asks for something. In geek terms, this maps to looking for wins across multiple projects in your environment. Does your app need user authentication or a mail transport service? Chances are other projects do too. Looking for these kinds of wins can also help distribute the load of creating the requirements, and foster a sense of shared ownership in the result.
  • Anticipate the need. Your customers shouldn’t have to ask for everything they need. Ideally they wouldn’t need to ask for anything, because you’d keep an eye on their drinks, you’d notice that someone wasn’t eating their salad and suspect something was wrong, you’d know that having kids means the first thing to the table is a huge stack of napkins, etc. The point is, there’s what’s happening now, and what is, in your professional opinion, likely to screw that all up. Your experience will usually guide you to solutions that keep the latter from happening. If not, you’re about to get more experience. Just like in technology 😉
  • Presentation counts. All restaurants fiddle with the light levels, the music rotation, how food is placed on the plate, and the garnishes added to drinks, and any restaurant you’d want to eat in is fastidious about tiny, tiny details, from sweeping the entry foyer, to replacing tablecloths, to tossing old-looking napkins, to replacing chairs with tiny pinhole tears in the leather. Why? Because a restaurant is, in the process of serving you, also constantly selling you through your experience there. Your software should do this too, of course, but you can go deeper than that: all of those ideas you have that nobody listens to, or that you can’t get approval for? You’re missing some details. You don’t have any metrics, or you’re giving a geeky experience to a business-y audience, or you’re not presenting your idea in the form of an experience, or you’re creating a vision of a present instead of a product in the mind of the person with the checkbook.
  • Presentation can’t save poor quality. You can put all kinds of shiny AJAX-y Bootstrap-y stuff all over your site. It could be the nicest UI in existence. If the service doesn’t work, you’re toast. And those ideas you’re trying to sell internally? You’d better be confident they’ll work and have a solid execution plan, because a couple of bombs and no amount of presentation goodness is going to get you to “yes” again.
  • Consistency! Food quality, presentation, service… the entire dining experience, from the moment you set foot on the property to the moment you leave, should ideally be completely 100% consistent. Completely predictable. Zero surprises. It makes the restaurant easy to get to know, because if you’ve seen one, you’ve basically seen them all. Now think about the naming of your variables. Do you abbreviate here and write it out there? Think about how you package your software. Do you sometimes provide an XML config file and sometimes INI format? Think about your code layout. Do you sometimes use inheritance and sometimes composition, when either will do? Do you sometimes test? Do you sometimes… anything? Small inconsistencies add up. I believe we’re pretty much all guilty of them sometimes, but we should all strive to minimize them. 
  • Minimize Complexity. Very little inside of a restaurant is complicated or complex. Inevitably, the friction points where small complications live are where problems arise. A huge number of issues within a restaurant are human error: mistaken orders, mismatching food to table when the tray is assembled, food under- or overcooked, drinks made wrong or sent to the wrong table, miscalculated bills, forgotten requests for drink refills or side orders, the list goes on for an eternity. Ironically, computers are employed to help with some of it, and the food service world is better for it on the whole. In technology, and I’m sure in most jobs, most of the issues are more with the people than the technology. From forgotten feature details to overlooked metrics, from miscommunicated load projections to undocumented security policies, the clumsy inner workings, and inter-workings, of human beings get us every time.
  • Mise en place! This is a French term, and I don’t speak much French, but in a kitchen it means “have your shit together and easy to reach, kid”. Chefs on a line will have a shelf or some other space that’s within arm’s reach where they put bowls and bottles of things they need all the time. The pre-meal time for a chef is spent double-checking that this small area is picture perfect, double-checking that refills and extras will be easy to grab during the shift, and making sure there’s no excuse for failure. I see teams fail at this quite a lot, actually. The mise en place for a technology worker consists of all of those things that need to be in place in order for a service to go to production. Metrics and monitoring. Hardware allocations. Infrastructure service dependencies. Security concerns. Power! I cannot count how many times a service has gotten to the week of deployment before a request to allocate storage in production was created, or the number of times hardware arrived before anyone thought about where it would be plugged in, or what that would do to the uptime provided by the UPS during an outage.
Overall, I’d say it’s been good for my progress in the technology industry to have grown and learned lessons from experiences in other industries, even if those experiences sometimes came from rather menial roles. Also, as my great grandmother used to say, all of that sweaty, smelly work was good for me: “It builds character!”

What I’ve Been Up To

August 5th, 2012

Historically, I post fairly regularly on this blog, but I haven’t been lately. It’s not for lack of anything to write about, but rather a lack of time to devote to blogging. I want to post at greater length about some of the stuff I’ve been doing, and I have several draft posts, but I wanted to list what I’ve been up to for two reasons:

  1. I use my blog as a sort of informal record of what I accomplished over the course of the year, hurdles I ran into, etc. I also sometimes use it to start a dialog about something, or to ‘think out loud’ about ideas. So it’s partially for my own reference.
  2. Someone might actually be interested in something I’m doing and want to pitch in, fork a repo, point me at an existing tool I’m reinventing, give me advice, or actually make use of something I’ve done or something I’ve learned and can share.

PyCon 2013

I’m participating for the third year on the Program Committee, and helping anywhere else I can (time permitting) with the organization of PyCon. It’s historically been a fantastic and really rewarding experience that I highly (and sometimes loudly) recommend to anyone who will listen. Some day I want to post at greater length about actual instances where being really engaged in the community has been a win in real, practical ways. For now, you’ll have to take my word for it. It’s awesome.

I also hope to submit a talk for PyCon 2013. Anyone considering doing this should know a couple of things about it:

  1. Even though I participate in the Program Committee, which is a completely volunteer committee that takes on the somewhat grueling process of selecting the talks, tutorials, and poster sessions, it’s pretty much unrelated to my chances of having my talk accepted. In other words, submitting a talk is as daunting for me as it is for anyone. Maybe more so.
  2. Giving a talk was a really rewarding experience for me, and I recommend anyone give it a shot.

I just published a really long post about submitting a talk to PyCon. It’s full of completely unsolicited advice and subjective opinions about the do’s and don’ts of talk submission, based on my experiences as both a submitter of proposals and a member of the Program Committee, which is the committee that selects the talks.

Python Cookbook

Dave Beazley and I are really rolling with the next edition of the Python Cookbook, which will cover Python 3 *only*. We had some initial drama with it, but the good news is that I feel that Dave and I have shared a common vision for the book since just about day one, and that shared vision hasn’t changed, and O’Reilly hasn’t forced our hand to change it, which means the book should be a really good reflection of that vision when it’s actually released. I should note, however, that the next edition will represent a pretty dramatic departure from the form and function of previous versions. I’m excited for everyone to see it, but that’s going to have to wait for a bit. It’s still early to talk about an exact release date – I won’t know that for sure until the fall, but I would expect it to be out by PyCon 2013.

Pyrabbit

I’ve blogged a bit about pyrabbit before: it’s a Python client for talking to RabbitMQ’s RESTful HTTP Management API. So, it’s not for building applications that do message passing with AMQP — it’d be more for monitoring, polling queue depths, etc., or if you wanted to build your own version of the browser-based management interface to RabbitMQ.

Pyrabbit is actually being used. Last I looked, kombu was actually using it, and if memory serves, kombu is used in Celery, so pyrabbit is probably installed on more machines than I’m aware of at this point. I also created a little command shell program called bunnyq that will let you poke at RabbitMQ remotely without having to write any code. You can create & destroy most resources, fire off a message, retrieve messages, etc. It’s rudimentary, but it’s fine for quick, simple tests or to validate your understanding of the system given certain binding types, etc.
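
To give a feel for the monitoring/polling use case, here’s a minimal sketch. The `summarize_depths` helper is hypothetical (not part of pyrabbit), and the stub client stands in for a live broker so the example runs without RabbitMQ; against a real server you’d construct `pyrabbit.api.Client` with your management host, port, and credentials instead.

```python
# Sketch: summarize ready-message counts per queue via a pyrabbit-style
# client. `summarize_depths` is a made-up helper for illustration.

def summarize_depths(client, vhost='/'):
    """Map queue name -> ready-message count for one vhost."""
    return {q['name']: q['messages_ready']
            for q in client.get_queues(vhost)}

# Against a real broker, it would look something like:
#   from pyrabbit.api import Client
#   client = Client('localhost:15672', 'guest', 'guest')  # port varies by version
#   print(summarize_depths(client))

class StubClient:
    """Stands in for pyrabbit.api.Client so the example is self-contained."""
    def get_queues(self, vhost):
        return [{'name': 'orders', 'messages_ready': 12},
                {'name': 'audit', 'messages_ready': 0}]

print(summarize_depths(StubClient()))  # → {'orders': 12, 'audit': 0}
```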

I have a branch wherein I port the unit tests for Pyrabbit to use a bit of a different approach, but I also need to flesh out more parts of the API and test it on more versions of RabbitMQ. If you use Pyrabbit, you should know that I also accept pull requests if they come with tests.

Stealth Mode

Well, ‘stealth’ is a strong word. I actually don’t believe much in stealth mode, so if you want to know, just ask me in person. Anyway, between 2008 and 2012 I was involved in startups (both bought out, by the way! East Coast FTW!) that were very product driven and very focused on execution. I was lucky enough to answer directly to the CEO of one of those companies (AddThis) and directly to the CTO of the other (myYearbook, now meetme.com), which gave me a lot of access and insight into the mechanics, process, and thinking behind how a product actually comes to be. It turns out I really love certain aspects of it that aren’t even necessarily technical. I find the execution phase really exciting, and the rollout phase almost overwhelmingly so.

I’ve found myself now with an idea that is really small and simple, but just won’t go away. It’s kind of gnawing at me, and the more I think about it, the more I think that, given what I’ve learned about product development, business-side product metrics, transforming some stories into an execution plan, etc., on top of my experience with software development, architecting for scalability, cloud services, tools/technologies for building distributed systems, etc., I could actually do this. It’s small and simple enough for me to get a prototype working on my own, and awesome enough to be an actual, viable product. So I’m doing it. I’m doing it too slowly, but I’m doing it.

By the way, the one thing I completely suck at is front end design/development. I can do it, but if I could bring on a technical co-founder of my own choosing, that person would be a front end developer who has pretty solid design chops. If you know someone, or are someone, get in touch – I’m @bkjones on Twitter, and bkjones at gmail. I’m jonesy on freenode. I’m not hard to find 🙂

In and Out of Love w/ NoSQL

I’ve recently added Riak to my toolbelt, next to CouchDB, MongoDB, and Redis (primarily). I was originally thinking that Riak would be a good fit for a project I’m working on, but have grown more uncomfortable with that notion as time has passed. The fact of the matter is that my data has relationships, and it turns out that relational databases are actually a really good fit in terms of the built-in feature set. The only place they really stink is on the operations side, but it also turns out that I have, like, several years of experience in doing that! Where I finally lost patience with NoSQL for this project was this huge contradiction that I never hear anyone ever talk about. You know the one — the one where the NoSQL crowd screams about how flexible everything is and how it really fits in with the “agile” mindset, and then in another doc in the same wiki strongly drives home the message that if you aren’t 100% sure what the needs of your app are, you should really make sure you have a grasp on that up front.

Uhh, excuse me, but if I’m iterating quickly on an app, testing in production, iterating on what works, failing fast, and designing in the direction of, and in direct response to, my customers, HOW THE HELL DO I KNOW WHAT MY APP’S NEEDS ARE?

So, what I’m starting with is what I know for sure: my data has relationships. I’m experienced enough with NoSQL solutions to understand my options for modeling them & enforcing the relationships, but when the relational aspect of the data is pretty much always staring you in the face and isn’t limited to a small subset of operations that rely on the relationships, it seems like a no-brainer to just use the relational database and spend my time writing code to implement actual features. If I find some aspect of the data that can benefit from a NoSQL solution later, well, then I’ll use it later!

Unit Testing Patterns

Most who know me know I’m kind of “into” unit testing. It’s almost like a craft unto itself, and one that I rather enjoy. I recently started a new job at AWeber Communications, where I’m working on a next-generation awesome platform to the stars 2.0 ++, and it’s all agile, TDD, kanban, and all that. It’s pretty cool. What I found in the unit tests they had when I got there were two main things:

First, the project used bare asserts, and used dingus in “mock it all!” mode. Combined, this led to tests that were mostly effective, but not very communicative in the event of failure, and they were somewhat difficult to reason about when you read them.

Second, they had a pretty cool pattern for structuring and naming that gave *running* the tests and viewing the output a more behavioral feel, one that looked vaguely familiar. Later I realized it was familiar because it was similar to the very early “Introducing Behavior-Driven Development” post I saw a long time ago but never did anything with. If memory serves, that early introduction did not rely on a BDD framework like the ones popping up all over GitHub over the past few years. It mostly relied on naming to relay the meaning, used standard tools “behind the curtain”, and was pretty effective.

So, long story short, those tests have mostly been ported to use the mock module and to inherit from unittest2.TestCase (so, no more bare asserts). The failure output is much more useful, and the pattern that’s evolving around the tests is unfinished but starting to look pretty cool! In the process, I also created a repository for unittest helpers. It currently contains just one thing: a context manager that you feed a list of things to patch, which patches them all and then automatically unpatches them after the code under test is run. It has helped me start to build tests in a more consistent fashion, which makes reading them more predictable too, and hopefully means we spend less time debugging them and less time introducing new developers to how they work.
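
For the curious, a minimal sketch of that kind of context manager follows; the real helper lives in the repo, and the `patched` name here is illustrative. It leans on `mock.patch` (the standalone `mock` package at the time, `unittest.mock` in the standard library these days):

```python
import os
from contextlib import contextmanager
from unittest import mock  # the standalone `mock` package back then


@contextmanager
def patched(*targets):
    """Patch each dotted name in `targets`, yield the mocks in order,
    and unpatch everything afterward, whether the test passed or not."""
    started = []
    try:
        mocks = []
        for patcher in (mock.patch(t) for t in targets):
            mocks.append(patcher.start())
            started.append(patcher)
        yield mocks
    finally:
        for patcher in reversed(started):
            patcher.stop()


# Inside the block, the names are mocks; outside, the originals are back.
with patched('os.getcwd', 'os.listdir') as (fake_cwd, fake_ls):
    fake_cwd.return_value = '/tmp/example'
    assert os.getcwd() == '/tmp/example'  # the mock answers
# leaving the block restores the real os.getcwd and os.listdir
```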

PyCon Talk Proposals: All You Need to Know And More

August 5th, 2012

Writing a talk proposal needn’t be a stressful undertaking. There are two huge factors that seem to stress people out the most about submitting a proposal, and we’re going to obliterate those right now, so here they are:

  1. It’s not always obvious how a particular section of a proposal is evaluated, so it’s not always clear how much/little should be in a given section, how detailed/not detailed it should be, etc.
  2. The evaluation and selection process is a mystery.

What Do I Put Here?

Don’t fret. Here, in detail, are all of the parts of the proposal submission form, and some insight into what’s expected to be there, and how it’s used.

Title

Pick your title with care. While the title may not be cause for the Program Committee to throw out your proposal, you should consider the marketing aspect of speaking at a conference. For example, there are plenty of conference-goers who, in a mad dash to figure out the talk they’ll attend next, will simply skip a title that requires manual parsing on their part.

So, a couple of DOs and DON’Ts:

  • DOs

    • DO ensure that your title targets the appropriate audience for your talk
    • DO keep the title as short and simple as possible
    • DO ensure that the title accurately reflects what the talk is about
  • DON’Ts

    • DON’T have a vague title
    • DON’T omit key words that are crucial to understanding who the talk is for
    • DON’T create a title that’s too long or wordy

So, in short, keep the title short and clear. There’s no hard and fast rule regarding length, of course, but consider how many best-selling books have ever had titles of more than, say, seven words. I’m sure it’s happened, but it’s probably more the exception than the rule. If you feel you need a very long title to do your talk justice, describe your proposal to some friends or coworkers, and when they say “So… how to do ‘x’ with ‘y’, basically, right?”, that’s actually your title. Be gracious and thank them.

Within your concise title, you should absolutely make certain to target your audience, if your audience is a very specific subset of the overall attendee list. For example, if you’re writing a proposal about testing Django applications, and the whole talk revolves around Django’s built-in testing facilities, your title pretty much has to say “Django” in it, for two reasons:

  1. If I’m a Django user, I really want to make sure I’ve discovered all of the Django talks before I decide what I’m doing, and if your title doesn’t say “Django” in it, lots of other ones do. A reasonable person will expect to see a key word like that in the title.
  2. If I’m *not* a Django user and show up to your “Unit Testing Web Applications” talk, only to discover 10 minutes into a 25-minute talk that I’ll get nothing out of it, I’m going to really be peeved.

Finally, unless you totally and completely know what you’re doing, DO NOT use a clever play-on-words title with not a single technology-related word in the whole thing. There are several issues with doing this, but probably the most obvious ones are:

  1. You’re not your audience: just because you get the reference and think it’s a total no-brainer, it’s almost guaranteed that 95% of the attendees will not get it when quickly glancing over a list of 100 talk titles.
  2. You’re basically forcing the reader to read the abstract to see if they’ve even heard of the technology your talk is about. Don’t waste their time!

Category

The ‘Category’ form element is a drop-down list of high-level categories like ‘Concurrency’, ‘Mobile’, and ‘Gaming’. There are lots of categories. You may pick one. So if you have an awesome talk about how you use a standard WSGI app in a distributed deployment to get better numbers from your HPC system without the fancy interconnects, you might wonder whether to put your talk into the ‘HPC’ category or the ‘Web Frameworks’ category.

In cases such as this, it can be helpful to focus on the audience to help guide your decision. What do you think your audience would look like for this talk? Well, of course there are web framework authors and users who will absolutely be interested in the talk, but there isn’t a lot of gain for them in a talk like this, is there? I mean, what are the chances that someone in the audience has always dreamed of writing web frameworks for the HPC market? On the other hand, what are the chances that an HPC administrator, developer, or site manager would love to cut costs, ease deployment, reduce maintenance overhead, etc., of their HPC system by using a totally standard non-commercial web framework? There are probably valid arguments to be made for putting it in the ‘Web Frameworks’ category, but I can’t think of any. I’d put it in the ‘HPC’ category.

One more thing to consider is the other talks at the conference, or talks that could be at the conference, or talks from past conferences. Look at last year’s program. Where does your talk fit in that mix of talks? What talks would your talk have competed with? Is there a talk from last year that is similar in scope to your proposal? What category was it listed in?

Audience Level

There’s a ton of gray area in selecting your target audience level. I’ve never liked the sort of arbitrary “Novice means less than X years experience” formulas, so I’ll do my best to lay out some rules of thumb, but ultimately, what you consider ‘Novice’, and how advanced you think your material is, is up to you. Your choices are:

  • Novice:
    • Has used, but not created, a decorator or a context manager.
    • Has possibly never used anything in the itertools, collections, operator, and/or functools modules
    • Has used but never had any issues with the datetime, email, or urllib modules.
    • Has seen list comprehensions, but takes some time to properly parse them
  • Intermediate:
    • Has created at least a decorator, and possibly a context manager.
    • Has recently had use for the operator module (and used it) and has accepted itertools as their savior.
    • Has had a big problem with at least one of: datetime, email, or urllib.
    • There’s only a slight chance they’ve ever created or knowingly made use of metaclasses.
    • Has potentially never had to use the socket module directly for anything other than hostname lookups.
    • Can write a (non-nested) list comprehension, and does so somewhat regularly.
  • Advanced:
    • Has created both a decorator and context manager using both functions and classes.
    • Has written their own daemonization module using Stevens as their reference.
    • Has been required to understand the implications of metaclasses and/or abstract base classes in a system.
    • May be philosophically opposed to list comprehensions, metaclasses, and abstract base classes
    • Has subclassed at least one of the built-in container types

Still not sure where your talk belongs? Well, hopefully you’re torn between only two of the user categories, in which case, I say “aim high”, for a few reasons:

  1. It’s generally easier to trim a talk to target a less experienced audience than you expected than it is to grow a talk to accommodate a more experienced one.
  2. Speaking purely anecdotally and with zero statistics, and from memory, there are lots more complaints about talks being more advanced than their chosen audience level than the reverse.

The Program Committee uses the audience level to ensure that, within any given topic space, there’s a good selection of talks for all levels of attendees. In cases where a talk might otherwise be tossed for being too similar to (and not better than) another, targeting a different audience level could potentially save the day.

Extreme Talks

Talk slots are relatively short. Your idea for a talk is awesome, but way too long. What if you could give that talk to an audience that doesn’t need the whole 15-minute introductory part of the talk? What if, when your time started ticking down, you immediately jumped into the meat of the topic? That’s what Extreme talks are for.

I’d recommend checking the ‘Extreme’ box on the submission form only if your talk *could potentially* be an Extreme talk. Why? Two reasons:

  1. The number of Extreme slots is limited, and
  2. If your talk is not accepted into an ‘Extreme’ slot, it may still be accepted as a regular talk.

Duration

There are 30-minute or 45-minute slots, or you can choose ‘No Preference’. I recommend modeling your proposal around the notion that it could be in either time slot: your ability to be flexible helps the Program Committee to be flexible as well. If your talk competes with another in the process and the only difference of any use that anyone can find is that your talk has a hard, 45-minute slot requirement, you probably have a good chance of losing that battle.

If you’d like to have a 45-minute slot, then it might help you out to build your outline for a 30-minute talk first, and then go back and add bullet points to it that are clearly marked “(If 45min slot)” or something. Alternatively, you can create the outline based on a 45-minute slot, and just use the ‘Additional Notes’ section of the form to explain how you’d alter the talk if the committee requested you do the talk in 30 minutes.

Description

This is the description that, if your talk is accepted, people will be reading in the conference program. It needs to:

  1. Be compelling
  2. Make a promise
  3. Be 400 characters or less

Being compelling can seem very difficult, depending on your topic space. It might help to consider that you only need to be compelling to your target audience. So, while a talk called “Writing Unit Tests” is probably not compelling to the already-testing contingent at the conference, it might be totally compelling for those who aren’t testing yet but want to start. Meanwhile, a talk called “Setting Up a Local Automated TDD Environment in Python 3 With Zero External Dependencies” is probably pretty compelling to the already-testing crowd and not so compelling to those who aren’t yet writing tests.

Making a promise to the reader means that you’re setting an expectation in their mind that you’ll make good on some deliverable at some point in the talk. Some key phrases to use in your description to call out this promise might be “By the end of this talk, you’ll have…”, or “If you want to have a totally solid grasp of…”. The key that both of those phrases have in common is that they both imply that you’re about to tell them what they can expect to get out of the talk. It answers a question in every conference-goer’s mind when reading talk descriptions, which is “What’s in it for me?”. If you don’t answer that question in the description, it may be harder for people to guess what’s in it for them, and frankly they won’t spend a lot of time trying!

Detailed Abstract

The form expects a detailed description of the talk, along with an outline describing the flow of the talk. That said, it’s not expected that the talk you outline in August is precisely the same talk you deliver the following March. However, if your talk is accepted, the outline will be made public online (it will not be printed in the conference program), so you’ll want to stick to the outline as closely as possible.

The abstract section will be used by the program committee to answer various questions about the talk, possibly including (but certainly not limited to):

  • Whether the talk’s title and description actually describe the talk as detailed in the abstract. Will attendees get pretty much what they expect if they only read the title and description of the talk?
  • Whether the talk appears to target the correct audience level. If you’re targeting a novice audience, your abstract should not go into topics that are beyond that audience level.
  • Whether the scope of the talk is realistic given the time constraints. If you asked for a 30-minute slot, your abstract should not make the committee think that it would be impossible to cover all of the material even given a whole hour. It’s not uncommon to be a little off in this regard, but being really far off could be an indicator that the proposer may not have thought this through very well.
  • Whether the talk is organized and has a logical flow that incorporates any known essential, obvious topics that should be touched on.

Additional Notes

This is a free-form text field where you can pretty much talk directly to the Program Committee to let them know in your own way why you think your talk is awesome, how you envision it coming off, and how you see the audience benefitting from it and finding value in attending the talk.

Although there are no hard requirements for this section of the submission form, you should absolutely, positively include any of the following that you can:

  • If your talk is about a new software project, a link to the project’s homepage, repository, and any other relevant articles, interviews, testimonials, etc., about the project.
  • Links to any online slides or videos from any presentations you’ve given previously.
  • Comments discussing how you’d handle moving from a 45-minute to 30-minute slot, or from an Extreme slot to a regular slot, etc. In general, it helps the committee to know you’ve thought about contingencies in your proposal.

Great, so… How do I do this?

If you’ve never written a proposal before, you’re not sure what you want to talk about, you don’t have a crystal-clear vision for a talk, you have trouble narrowing the scope of your idea, or you don’t know exactly where to start, I have a few ideas that might help you get the proposal-creation juices flowing:

  • Write down some bullet points in a plain text file that are titles or one-line summaries of talks you’d like to see. Forget about whether you’re even willing or able to actually produce these talks – the idea is to start moving things from your brain onto a page. When you’ve got 5-10 of these points, reflect:
    • Could you deliver any of these yourself?
    • Could you apply an idea contained in a point to a topic you’re more familiar with?
    • Is there a topic related to any of the points that touches on things you know well?
    • Do any of these points jog your memory and make you think of projects you’ve worked on in the past that might be a source for a talk idea?
  • Do an informal audit of what you’ve done over the past year.
    • Were there problems you faced that there’s no good solution for?
    • Did you grow in some way that was really important, and could you help others to learn those lessons, and learn why those lessons are important?
    • Did you make use of a new technology?
    • Did you change how you do your job? Your development workflow? Your project lifecycle? Automation? Task management?
  • Go through the talks on pyvideo.org. It’s such an enormous, and enormously valuable trove. You could just scan the titles and see if something comes up. If that doesn’t work, click on a few, but don’t watch the talk: skip to the Q&A at the end. Buried in the Q&A are always these gems that are only tangentially related, and it is not uncommon to hear a speaker respond with “…but that’s another whole talk…”. They’re sometimes right.

Make it happen!

Sending Alerts With Graphite Graphs From Nagios

February 24th, 2012


The way I’m doing this relies on a feature I wrote for Graphite that was only recently merged to trunk, so at the time of writing that feature isn’t in a stable release. Hopefully it’ll be in 0.9.10. Until then, you can at least test this setup using Graphite’s trunk version.

Oh yeah, the new feature is the ability to send graph images (not links) via email. I surfaced this feature in Graphite through the graph menus that pop up when you click on a graph in Graphite, but implemented it such that it’s pretty easy to call from a script (which I also wrote – you’ll see if you read the post).

Also, note that I assume you already know Nagios, how to install new command scripts, and all that. It’s really easy to figure this stuff out in Nagios, and it’s well-documented elsewhere, so I don’t cover anything here but the configuration of this new feature.

The Idea

I’m not a huge fan of Nagios, to be honest. As far as I know, nobody really is. We all just use it because it’s there, and the alternatives are either overkill, unstable, too complex, or just don’t provide much value for all the extra overhead that comes with them (whether that’s config overhead, administrative overhead, processing overhead, or whatever depends on the specific alternative you’re looking at). So… Nagios it is.

One thing that *is* pretty nice about Nagios is that configuration is dead simple. Another is that you can do pretty much whatever you want with it, and write code in any language you want to get things done. We’ll take advantage of these two features to do a few things:

  • Monitor a metric by polling Graphite for it directly
  • Tell Nagios to fire off a script that’ll go get the graph for the problematic metric, and send email with the graph embedded in it to the configured contacts.
  • Record the alert itself back in Graphite, so we can overlay those events on the corresponding metric graph and verify that alerts are going out when they should, that they’re hitting your phone without delay, etc.

The Candy

Just to be clear, we’re going to set things up so you can get alert messages from Nagios that look like this:

And you’ll also be able to track those alert events in Graphite in graphs that look like this (note the vertical lines – those are the alert events):

Defining Contacts

In production, it’s possible that the proper contacts and contact groups already exist. For testing (and maybe production) you might find that you want to limit who receives Graphite graphs in email notifications. To test things out, I defined:

  • A new contact template that’s configured specifically to receive the Graphite graphs. Without this, no graphs.
  • A new contact that uses the template.
  • A new contact group containing said contact.

For testing, you can create a test contact in templates.cfg:

define contact{
        name                            graphite-contact
        service_notification_period    24x7
        host_notification_period        24x7
        service_notification_options    w,u,c,r,f,s
        host_notification_options       d,u,r,f,s
        service_notification_commands   notify-svcgraph-by-email
        host_notification_commands      notify-host-by-email
        register                        0
        }

You’ll notice a few things here:

  • This is not a contact, only a template.
  • Any contact defined using this template will be notified of service issues with the command ‘notify-svcgraph-by-email’, which we’ll define in a moment.

In contacts.cfg, you can now define an individual contact that uses the graphite-contact template we just assembled:

define contact{
        contact_name    graphiteuser
        use             graphite-contact
        alias           Graphite User
        email           someone@example.com
        }

Of course, you’ll want to change the ‘email’ attribute here, even for testing.

Once done, you also want to have a contact group set up that contains this new ‘graphiteuser’, so that you can add users to the group to expand the testing, or evolve things into production. This is also done in contacts.cfg:

define contactgroup{
        contactgroup_name       graphiteadmins
        alias                   Graphite Administrators
        members                 graphiteuser
        }

Defining a Service

Also for testing, you can set up a test service. This is necessary here to bypass the default settings, which try not to bombard contacts with an email for every single aberrant check. Since the end result of this test is to see an email, we want an email for every check where the values are in any way out of bounds. In templates.cfg, put this:

define service{
    name                        test-service
    use                         generic-service
    passive_checks_enabled      0
    contact_groups              graphiteadmins
    check_interval              20
    retry_interval              2
    notification_options        w,u,c,r,f
    notification_interval       30
    first_notification_delay    0
    flap_detection_enabled      1
    max_check_attempts          2
    register                    0
    }

Again, the key point here is to ensure that no notifications are ever silenced, deferred, or delayed by Nagios in any way, for any reason. You probably don’t want this in production. The other point is that when you set up an alert for a service that uses ‘test-service’ in its definition, the alerts will go to our previously defined ‘graphiteadmins’.

To make use of this service, I’ve defined a service in ‘localhost.cfg’ that will require further explanation, but first let’s just look at the definition:

define service{
        use                             test-service
        host_name                       localhost
        service_description             Some Important Metric
        check_command                   check_graphite_data!24!36
        notifications_enabled           1
        _GRAPHURL                       "http://graphite.example.com/render?target=some.metric" ; paste your own graph’s URL here
        }

There are two new things we need to understand when looking at this definition:

  • What is ‘check_graphite_data’?
  • What is ‘_GRAPHURL’?

These questions are answered in the following section.

In addition, you should know that the value for _GRAPHURL is intended to come straight from the Graphite dashboard. Go to your dashboard, pick a graph of a single metric, grab the URL for the graph, and paste it in (and double-quote it).

Defining the ‘check_graphite_data’ Command

This command relies on a small script written by the folks at Etsy, which can be found on github: https://github.com/etsy/nagios_tools/blob/master/check_graphite_data

Here’s the commands.cfg definition for the command:

# 'check_graphite_data' command definition
define command{
        command_name    check_graphite_data
        command_line    $USER1$/check_graphite_data -u $_SERVICEGRAPHURL$ -w $ARG1$ -c $ARG2$
        }

The ‘command_line’ attribute calls the check_graphite_data script we got from GitHub earlier. The ‘-u’ flag is a URL, and this is actually using the custom object attribute ‘_GRAPHURL’ from our service definition. You can read more about custom object variables here: http://nagios.sourceforge.net/docs/3_0/customobjectvars.html – the short story is that, since we defined _GRAPHURL in a service definition, it gets prepended with ‘SERVICE’, and the underscore in ‘_GRAPHURL’ moves to the front, giving you ‘$_SERVICEGRAPHURL$’. More on how that works at the link provided.

The ‘-w’ and ‘-c’ flags to check_graphite_data are the ‘warning’ and ‘critical’ thresholds, respectively, and they correlate to the positions of the service definition’s ‘check_command’ arguments (so check_graphite_data!24!36 maps to ‘check_graphite_data -u <url> -w 24 -c 36’).
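The threshold comparison amounts to something like the sketch below. This is an illustration of the general idea, not the Etsy script’s actual code, and the function name is made up:

```python
def check_thresholds(value, warning, critical):
    """Map a polled metric value to a Nagios exit code: 0=OK, 1=WARNING, 2=CRITICAL."""
    if value >= critical:
        return 2  # CRITICAL
    if value >= warning:
        return 1  # WARNING
    return 0      # OK

# With check_graphite_data!24!36, a polled value of 30 is a WARNING,
# and anything at or above 36 is CRITICAL.
```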

Defining the ‘notify-svcgraph-by-email’ Command

This command relies on a script that I wrote in Python called ‘sendgraph.py’, which also lives in github: https://gist.github.com/1902478

The script does two things:

  • It emails the graph that corresponds to the metric being checked by Nagios, and
  • It pings back to graphite to record the alert itself as an event, so you can define a graph for, say, ‘Apache Load’, and if you use this script to alert on that metric, you can also overlay the alert events on top of the ‘Apache Load’ graph, and vet that alerts are going out when you expect. It’s also a good test to see that you’re actually getting the alerts this script tries to send, and that they’re not being dropped or seriously delayed.
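The “ping back” in that second bullet can be pictured with a small sketch against Graphite’s events API (sketched here in Python 3; the payload shape and helper names are my illustration, not necessarily what sendgraph.py actually does):

```python
import json
import urllib.request

def build_alert_event(service, state):
    """Build the JSON payload for a POST to Graphite's /events/ endpoint."""
    return json.dumps({
        "what": "Nagios alert: %s" % service,
        "tags": "nagios alert",
        "data": state,
    })

def record_alert_event(graphite_host, service, state):
    """POST the alert event to Graphite so it can later be overlaid on graphs."""
    req = urllib.request.Request(
        "http://%s/events/" % graphite_host,
        data=build_alert_event(service, state).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```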

To make use of the script in Nagios, let’s define the command that actually sends the alert:

define command{
    command_name    notify-svcgraph-by-email
    command_line    /path/to/sendgraph.py -u "$_SERVICEGRAPHURL$" -t $CONTACTEMAIL$ -n "$SERVICEDESC$" -s $SERVICESTATE$
    }

A couple of quick notes:

  • Notice that you need to double-quote any variables in the ‘command_line’ that might contain spaces.
  • For a definition of the command-line flags, see sendgraph.py’s --help output.
  • Just to close the loop, note that notify-svcgraph-by-email is the ‘service_notification_commands’ value in our initial contact template (the very first listing in this post).

Fire It Up

Fire up your Nagios daemon to take it for a spin. For testing, make sure you set the check_graphite_data thresholds to numbers that are pretty much guaranteed to trigger an alert when Graphite is polled. Hope this helps! If you have questions, first make sure you’re using Graphite’s ‘trunk’ branch, and not 0.9.9, and then give me a shout in the comments.

The Python User Group in Princeton (PUG-IP): 6 months in

November 2nd, 2011

In May, 2011, I started putting out feelers on Twitter and elsewhere to see if there might be some interest in having a Python user group that was not in Philadelphia or New York City. A single tweet resulted in 5 positive responses, which I took as a success, given the time-sensitivity of Twitter, my “reach” on Twitter (which I assume is far smaller than what might be the entire target audience for that tweet), etc.

Happy with the responses I received, I still wanted to take a baby step in getting the group started. Rather than set up a web site that I’d then have to maintain, a mailing list server, etc., I went to the cloud. I started a group on meetup.com, and started looking for places to hold our first meeting.


Meetup.com

Meetup.com, I’m convinced, gives you an enormous value if you’re looking to start a user group Right Now, Today™. For $12/mo., you get a place where you can announce future meetups, hold discussions, collect RSVPs so you have a head count for food or space or whatever, and vendors can also easily jump in to provide sponsorship or ‘perks’ in the form of discounts on services to user group members and the like. It’s a lot for a little, and it’s worked well enough. If we had to stick with it for another year, I’d have no real issue with that.

Google Groups

I set up a mailing list using Google Groups about 2-3 months ago now. I only waited so long because I thought meetup.com’s discussion forum might work for a while. After a few meetings, though, I noticed that there were always about five more people in attendance than had RSVP’d on meetup.com. Some people just aren’t going to be bothered with having yet another account on yet another web site I guess. If that’s the case, then I have two choices (maybe more, but these jumped to mind): force the issue by constantly trumpeting meetup.com’s service, or go where everyone already was. Most people have a Google account, and understand its services. Also, since the group is made up of technical people, they mostly like the passive nature of a mailing list as opposed to web forums.

If you’re setting up a group, I’d say that setting up a group on meetup.com and simultaneously setting up a Google group mailing list is the way to go if you want to get a fairly complete set of services for very little money and about an hour’s worth of time.

Meeting Space

Meeting space can come from a lot of different places, but I had a bit of trouble settling on a place at first. Princeton University is an awesome place with a ton of fantastic spots to meet, but almost no students are group members, and if you’re not living on campus, parking can be a bit troublesome. The University is also famous for having little or no signage, including building names, so even if you do find parking, finding where to go can be problematic. So, so far, the University is out.

The only sponsor I had that was willing to provide space was my employer, but we’re nowhere near Princeton, and don’t really have the space. Getting a sponsor for space can be a bit difficult when your group doesn’t exist yet, in part because none of them have engaged with you or your group until the first meeting, when the attendees, who all work for potential sponsors, show up.

I started looking at the web site for the Princeton Public Library. I’ve been involved in the local Linux user group for several years, and they use free meeting space made available by the public library in Lawrenceville, which borders Princeton. I wondered if the Princeton Public Library did this as well, but they don’t, actually. In fact, meeting space at that location can get pretty expensive, since they charge for the space and A/V equipment like projectors and stuff separately (or they did when I started the group – I believe it’s still the case).

I believe I tweeted my disappointment about the cost of meeting at the Princeton Public Library, and did a callout on Twitter for space sponsors and other ideas about meeting space in or near Princeton. The Princeton Public Library got in touch through their @PrincetonPL Twitter account, and we were able to work out a really awesome deal where they became a sponsor, and agreed to host our group for 6 months, free of charge. Awesome!

Now, six months in, we either had to come to some other agreement with the library, or move on to a new space. After six months, it’s way easier to find space, or sponsors who might provide space, but I felt if we could find some way to continue the relationship with the library, it’d be best not to relocate the group. We wound up finding a deal that does good things for the group, the library, the local Python user community, and the evangelism of the Python language….

Knowledge for Space

Our group got a few volunteers together to commit to providing a 5-week training course to the public, held at the Princeton Public Library. Adding public offerings like this adds value to the library, attracts potential new members (they’re a member-supported library, not a state/municipality-funded one), etc. In exchange for providing this service to the library, the library provides us with free meeting space, including the A/V equipment.

If you don’t happen to have a public library that offers courses, seminars, etc., to the general public, you might be able to cut a similar deal with a local community college, or even high school. If you know of a corporation locally that uses Python or some other technology the group can speak or train people in, you might be able to trade training for meeting space in their offices. Training is a valued perk to the employees of most corporations.

How To Get Talks (or “How we stopped caring about getting talks”)

Whether you’re running a publishing outfit, a training event, or user group, getting people to deliver content is a challenge. Some people don’t think they have any business talking to what they perceive as a roomful of geniuses about anything. Some just aren’t comfortable talking in front of audiences, but are otherwise convinced of their own genius. Our group is trying to attack this issue in various ways, and so far it seems to be working well enough, though more ideas are welcome!

Basically, the group isn’t necessarily locked into traditions like “Thou shalt provide a speaker, who shalt bequeath upon us the wisdom of the ages”. Once you’ve decided as a group that having cookie-cutter meetings isn’t necessary, you start to think of all sorts of things you could all be doing together.

Below are some ideas, some in the works, some in planning, that I hope help other would-be group starters to get the ball rolling, and keep it in motion!

Projects For the Group, By the Group

Some members of PUG-IP are working together on building the pugip.org website, which is housed in a GitHub repository under the ‘pugip’ GitHub organization. This one project will inevitably result in all kinds of home-grown presentations & events within the group. As new ideas come up and new features are implemented, people will give lightning talks about their implementation, or we’ll do a group peer review of the code, or we’ll have speakers give talks about third-party technologies we might use (so, we might have two speakers each give a 30-minute talk about two different NoSQL solutions, for example. We’ve already had a great overview of about 10 different Python micro-frameworks), etc.

We may also decide to break up into pairs, and then sprint together on a set of features, or a particularly large feature, or something like that.

As of now, we’ve made enough decisions as a group to get the ball rolling. If there’s any interest I can blog about the setup that allows the group to easily share, review, and test code, provide live demos of their work, etc. The tl;dr version is we use GitHub and free heroku accounts, but new ideas come into play all the time. Just today I was wondering if we could, as a group, make use of the cloud9 IDE (http://cloud9ide.com).

The website is a great idea, but other group projects are likely to come up.

Community Outreach

PUG-IP’s first official community outreach project will be the training we provide through the Princeton Public Library. A few of us will collaborate on delivering the training, but the rest of the group will be involved in providing feedback on various aspects of the material, etc., so it’s really a ‘whole group’ project. On top of increasing interactivity among the group members, outreach is also a great way to grow and diversify the group, and perhaps gain sponsorships as well!

There’s another area group called LUG-IP (a Linux user group) that also does some community outreach through a hardware SIG (special interest group), certification training sessions, and participating in local computing events and conferences. I’d like to see PUG-IP do this, too, maybe in collaboration with the LUG (they’re a good and passionate group of technologists).

Community outreach can also mean teaming up with various other technology groups, and one event I’m really looking forward to is a RedSnake meeting to be held next February. A RedSnake meeting is a combined meeting between PhillyPUG (the Philadelphia Python User Group) and Philly.rb (the Philadelphia Ruby Group). As a member of PhillyPUG I participated in last year’s RedSnake meeting, and it was a fantastic success. Probably 70+ people in attendance (here’s a pic at the end – some had already left by the time someone snapped this), and perhaps 10 or so lightning talks given by members of both organizations. We tried to do a ‘matching’ talk agenda at the meeting, so if someone on the Ruby side did a testing talk, we followed that with a Python testing talk, etc. It was a ton of fun, and the audience was amazing.


Socials

Socials don’t have to be dedicated events, per se. For example, PUG-IP has a sort of mini-social after every single meetup. We’re lucky to have our meetings located about a block away from a brewpub, so after each meeting, perhaps half of us make it over for a couple of beers and some great conversations. After a few of these socials, I started noticing that more talk proposals started to spring up.

Of course, socials can also be dedicated events. Maybe some day PUG-IP will…. I dunno… go bowling? Or maybe we’ll go as a group to see the next big geeky movie that comes out. Maybe we’ll have some kind of all-inclusive, bring-the-kids BBQ next summer. Who knows?

As a sort of sideshow event to the main LUG meetings, LUG-IP has a regularly-scheduled ‘coffee klatch’. Some of the members meet up one Sunday per month at (if memory serves) 8-11AM at a local Panera for coffee, pastries, and geekery. It’s completely informal, but it’s a good time.

Why Not Having Talks Will Help You Get Talks

I have a theory that is perhaps half-proven through my experiences with technology user groups: increasing engagement among and between the members of the group in a way that doesn’t shine a huge floodlight on a single individual (like a talk would) eventually breaks down whatever fears or resistance there is to proposing and giving a talk. Sometimes it’s just a comfort level thing, and working on projects, or having a beer, or sprinting on code, etc. — together — turns a “talking in front of strangers” experience into more of a “sharing with my buddies” one.

I hope that’s true, anyway. It seems to be. 🙂

Thanks For Reading

I hope someone finds this useful. It’s early on in the life of PUG-IP, but I thought it would be valuable to get these ideas out into the ether early and often before they slip from my brain. Good luck with your groups!

The Happy Idiot

November 1st, 2011

Today is November 1st, and there’s an event that takes place every November called National Novel Writing Month (NaNoWriMo). I don’t believe I have the ability to really write a novel, and have no reason to think anyone would read it if I did. But I would like to attempt to write a blog post every day this month, and today’s post is about The Happy Idiot. I hope you enjoy it and leave comments.

Who is the happy idiot? It’s the person in class who shrugs off fears of looking dopey and raises their hand. It’s the person who, in an architecture meeting, isn’t afraid to be wrong in asserting that a new bottleneck is quickly emerging in the design. It’s the person who gives presentations on topics they’re really only 75% comfortable with, and announces as much to the audience, inviting corrections and more input. It’s the person who invites feedback and asks questions that seem trivial, even if it exposes their ignorance.

We need more happy idiots.

I wholeheartedly accept this role, even though there are circumstances where it might be easier to keep my mouth shut and keep up appearances or it might seem beneficial to not put a dent in some perceived reputation or something like that. The problem I have with doing that is that appearances are, in my experience, largely bullshit. Reputation, in my experience, comes from doing, not from merely being perceived as smart, or good, or whatever. Execute. The rest comes from that.

Furthermore, once you enter the realm of keeping up appearances, you wind up in a horrible vicious cycle where eventually you always have to clam up to seem smart about everything. Purposefully keeping quiet when you have no idea what’s going on – indeed, *because* you have no idea what’s going on – is a close relative of lying, and has the same consequences. Eventually you’ll be cornered into executing and you’ll have no idea what to do. The fear that this will happen will eventually take over your waking hours, causing stress, and it’s all downhill from there.

On the other hand, being the happy idiot means filling in the cracks in your knowledge. It means you’re conscious of your own ignorance. It means you’ll be able to execute more effectively. This starts a positive cycle: you learn more, you execute more effectively, you begin to be perceived as smart, good, whatever, and it’s not completely unwarranted, because you’ve actually asked questions that took guts to ask and as a result you executed in smart ways. Eventually, your dumb questions aren’t perceived as being dumb anymore. Eventually, when you ask a seemingly trivial question, people stop reflexively thinking ‘how does he not know that’ and start thinking about what your brain is about to do with that little tidbit of data.

Further, it means people will trust you more. Think about it. Would you rather give a critical project to someone who absolutely never asks questions and “seems smart”, or the person who asks intelligent questions and executes?

So, I say be the happy idiot. Put yourself out there. If you’re perceived as being dumb for taking steps to be less dumb, then the problem isn’t yours, and you shouldn’t make it yours.


pyrabbit Makes Testing and Managing RabbitMQ Easy

October 23rd, 2011

I have a lot of hobby projects, and as a result getting any one of them to a state where I wouldn’t be completely embarrassed to share it takes forever. I started working on pyrabbit around May or June of this year, and I’m happy to say that, while it’ll never be totally ‘done’ (it is software, after all), it’s now in a state where I’m not embarrassed to say I wrote it.

What is it?

It’s a Python module to help make managing and testing RabbitMQ servers easy. RabbitMQ has, for some time, made available a RESTful interface for programmatically performing all of the operations you would otherwise perform using their browser-based management interface.

So, pyrabbit lets you write code to manipulate resources like vhosts & exchanges, publish and get messages, set permissions, and get information on the running state of the broker instance. Note that it’s *not* suitable for writing AMQP consumer or producer applications; for that you want an *AMQP* module like pika.

PyRabbit is tested with Python versions 2.6-3.2. The testing is automated using tox. In fact, PyRabbit was a project I started in part because I wanted to play with tox.
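A tox configuration for that kind of version matrix is tiny. Here’s a sketch of what such a tox.ini might look like (the env list reflects the versions mentioned above; the file is illustrative, not copied from pyrabbit, and the choice of test runner is an assumption):

```ini
[tox]
envlist = py26, py27, py31, py32

[testenv]
# runner dependency is an assumption -- substitute whatever the project uses
deps = nose
commands = nosetests
```

Running `tox` then creates a virtualenv per interpreter and runs the test suite in each.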

Here’s the example, ripped from the documentation (which is ripped right from my own terminal session):

>>> from pyrabbit.api import Client
>>> cl = Client('localhost:55672', 'guest', 'guest')
>>> cl.is_alive()
>>> cl.create_vhost('example_vhost')
>>> [i['name'] for i in cl.get_all_vhosts()]
[u'/', u'diabolica', u'example_vhost', u'testvhost']
>>> cl.get_vhost_names()
[u'/', u'diabolica', u'example_vhost', u'testvhost']
>>> cl.set_vhost_permissions('example_vhost', 'guest', '.*', '.*', '.*')
>>> cl.create_exchange('example_vhost', 'example_exchange', 'direct')
>>> cl.get_exchange('example_vhost', 'example_exchange')
{u'name': u'example_exchange', u'durable': True, u'vhost': u'example_vhost', u'internal': False, u'arguments': {}, u'type': u'direct', u'auto_delete': False}
>>> cl.create_queue('example_queue', 'example_vhost')
>>> cl.create_binding('example_vhost', 'example_exchange', 'example_queue', 'my.rtkey')
>>> cl.publish('example_vhost', 'example_exchange', 'my.rtkey', 'example message payload')
>>> cl.get_messages('example_vhost', 'example_queue')
[{u'payload': u'example message payload', u'exchange': u'example_exchange', u'routing_key': u'my.rtkey', u'payload_bytes': 23, u'message_count': 2, u'payload_encoding': u'string', u'redelivered': False, u'properties': []}]
>>> cl.delete_vhost('example_vhost')
>>> [i['name'] for i in cl.get_all_vhosts()]
[u'/', u'diabolica', u'testvhost']

Hopefully you’ll agree that this is simple enough to use in a Python interpreter to get information and do things with RabbitMQ ‘on the fly’.

How Can I Get It?

Well, there’s already a package on PyPI called ‘pyrabbit’, and it’s not mine. It’s some planning-stage project that has no actual software associated with it. I’m not sure when the project was created, but the PyPI page has a broken home page link, and what looks like a broken RST-formatted doc section. I’ve already pinged someone to see if it’s possible to take over the name, because I can’t think of a cool name to change it to.

Until that issue is cleared up, you can get downloadable packages or clone/fork the code at the pyrabbit github page (see the ‘Tags’ section for downloads), and the documentation is hosted on the (awesome) ReadTheDocs.org site.

Shhh… I’m Hunting Talks

October 13th, 2011

Well, it’s that time of year again. The PyCon 2012 Call for Proposals has ended. This means it’s time for the Program Committee to spring into action, evaluating all of the proposals, preparing to champion their favorites, and participating in the interactive meetings that eventually decide the fate of PyCon 2012’s slate of talks, tutorials, and poster sessions.

But… who is the Program Committee?

I asked that question last year. “Who are these people? Is this something I can participate in?” It turns out that *anyone* can join the Program Committee. The genius in the way it works is its simplicity: if you want to be on the Program Committee, you join a mailing list, send an intro email, and you’re added as a committee member.

My first year on the Program Committee was a fantastic experience that served to educate me about the selection process, and to further my conviction that the Python community is the most welcoming, open and inclusive group I’ve ever had the pleasure of being involved with.

So, if you haven’t already, I encourage you to join the Program Committee for this year’s round of meetings to help build what will undoubtedly become the biggest and best PyCon yet.

Thoughts on Python and Python Cookbook Recipes to Whet Your Appetite

June 9th, 2011

Dave Beazley and I are, at this point, waist-deep into producing Python Cookbook, 3rd Edition. We haven’t really taken the approach of going chapter by chapter, in order. Rather, we’ve hopped around to tackle chapters one or the other finds interesting, or in line with what either of us happens to be working with a lot currently.

For me, it’s testing (chapter 8, for those following along with the 2nd edition), and for Dave, well, I secretly think Dave touches every aspect of Python at least every two weeks whether he needs to or not. He’s just diabolical that way. He’s working on processes and threads at the moment, though (chapter 9 as luck would have it).

In both chapters (also a complete coincidence), we’ve decided to toss every scrap of content and start from scratch.

Why on Earth Would You Do That?

Consider this: when the last edition (2nd ed) of the Python Cookbook was released, it covered Python up through 2.4. Here’s a woefully incomplete list of the superamazing awesomeness that didn’t even exist when the 2nd Edition was released:

  • Modules:
    • ElementTree
    • ctypes
    • sqlite3
    • functools
    • cProfile
    • spwd
    • uuid
    • hashlib
    • wsgiref
    • json
    • multiprocessing
    • fractions
    • plistlib
    • argparse
    • importlib
    • sysconfig
  • Other Stuff
    • The ‘with’ statement and context managers*
    • The ‘any’ and ‘all’ built-in functions
    • collections.defaultdict
    • advanced string formatting (the ‘format()’ method)
    • class decorators
    • collections.OrderedDict
    • collections.Counter
    • collections.namedtuple()
    • the ability to send data *into* a generator (yield as an expression)
    • heapq.merge()
    • itertools.combinations
    • itertools.permutations
    • operator.methodcaller()

* The ‘with’ statement actually arrived in Python 2.5, where it initially required a ‘from __future__ import with_statement’; it became standard syntax in 2.6.
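
To make a few of these concrete, here’s a quick sketch (not a recipe from the book, just an illustration) of namedtuple, Counter, defaultdict, and str.format() working together:

```python
from collections import Counter, defaultdict, namedtuple

# namedtuple: lightweight record types (new in 2.6)
Point = namedtuple('Point', ['x', 'y'])
p = Point(3, 4)

# Counter: tallying in one line (new in 2.7)
tally = Counter('banana')

# defaultdict: no more setdefault() dances (new in 2.5)
groups = defaultdict(list)
for word in ['apple', 'avocado', 'banana']:
    groups[word[0]].append(word)

# str.format(): advanced string formatting (new in 2.6)
summary = 'point=({0.x}, {0.y}) top letter={1}'.format(
    p, tally.most_common(1)[0][0])
```

Every one of those would have been a loop, a helper class, or a %-formatting dance in 2.4.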

Again, woefully incomplete, and that’s only the stuff in the 2.x line! I haven’t even mentioned 3.x-only things like concurrent.futures. From this list alone, though, you can probably discern that the way we think about solving problems in Python, and what our code looks like these days, has been fundamentally altered compared to the 2.4 days.

To give a little more perspective: Python core development moved from CVS to Subversion well after the 2nd edition of the book hit the shelves. They’re now on Mercurial. We skipped the entire Subversion era of Python development.

The addition of any() and all() to the language by itself turned at least three or four recipes in chapter 1 (strings) into one-liners. I had to throw at least one recipe away, because people just don’t need three recipes on how to use any() and all(). A chapter covering processes and threads without the multiprocessing module is just weird to think about these days. The with statement, context managers, class decorators, and enhanced generators have fundamentally changed how we think about certain operations.
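
To illustrate (a sketch of my own, not book content): a 2.4-era “does this string contain a digit” recipe needed a loop, a flag, and a break, and temporary-directory cleanup needed try/finally at every call site. any() and the with statement collapse both:

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

def has_digits(s):
    # Pre-any(): a for loop, a flag variable, and a break.
    return any(c.isdigit() for c in s)

@contextmanager
def temp_dir():
    # Pre-with: try/finally boilerplate everywhere you used a temp dir.
    path = tempfile.mkdtemp()
    try:
        yield path
    finally:
        shutil.rmtree(path)

with temp_dir() as scratch:
    existed = os.path.isdir(scratch)   # True inside the block
cleaned_up = not os.path.exists(scratch)  # True after: rmtree already ran
```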

Also something to consider: I haven’t mentioned a single third-party module! Mock, tox, and nose all support Python 3. Mock and tox, at least, didn’t exist in the old days (I don’t know about nose off-hand). Virtualenv and pip didn’t exist either (both also support Python 3). So not only has our code changed; how we code, test, deploy, and generally do our jobs with Python has changed, too.

Event-based frameworks other than Twisted either didn’t exist or aren’t covered in the 2nd edition, and Twisted doesn’t support Python 3.

WSGI, and all it brought with it, hadn’t yet made it into the standard library in the 2.4 days (wsgiref didn’t land until 2.5).

We need a Mindset List for Python programmers!

So, What’s Your Point?

My point is that I suspect some people have been put off submitting recipes because they don’t program in Python 3. If you’re one of them, you need to know that there’s a lot of ground to cover between the 2nd and 3rd editions of the book. If you have a recipe written in Python 2.6 that uses features of the language that didn’t exist in Python 2.4, submit it. You don’t even have to port it to Python 3 if you don’t want to, don’t have the time, or just aren’t interested.

Are You Desperate for Recipes or Something?

Well, not really. I mean, if you all want to wait around while Dave and I crank out recipe after recipe, the book will still kick ass, but it’ll take longer, and the book’s world view will be pretty much limited to how Dave and I see things. I think everyone loses if that’s the case. Having been an editor of a couple of different technical publications, I can say that my two favorite things about tech magazines are A) The timeliness of the articles (if Python Magazine were still around, we would’ve covered tox by now), and B) The broad perspective it offers by harvesting the wisdom and experiences of a vast sea of craftspeople.

What Other Areas Are In Need?

Network programming and system administration. For whatever reason, the 2nd edition’s view of system administration is stuff like checking your Windows sound system and spawning an editor from a script. You can argue that these are tasks a sysadmin might do, but they’re just not the meat of what sysadmins do for a living. I’ll admit to being frustrated here: I spent some time searching for Python 3-compatible modules for SNMP and LDAP and came up dry. But there’s still all of that sar data sitting around that nobody ever seems to use; it’s amazing stuff, and easy to parse with Python. Terminal logging scripts would also make good material.
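
As a taste of the kind of recipe I mean, here’s a minimal sketch of pulling CPU numbers out of sar -u style output. The exact columns vary by sysstat version and locale, so the header names and sample data here are assumptions for the demo, not gospel:

```python
# Sample of what `sar -u` output typically looks like (made up numbers).
SAMPLE = """\
12:00:01 AM     CPU     %user     %nice   %system   %iowait     %idle
12:10:01 AM     all      4.21      0.00      1.12      0.30     94.37
12:20:01 AM     all     12.87      0.00      2.04      0.11     84.98
"""

def parse_sar_cpu(text):
    # Split every non-blank line into whitespace-separated fields.
    lines = [l.split() for l in text.splitlines() if l.strip()]
    # Field names follow the timestamp + AM/PM marker in the header row.
    fields = lines[0][2:]
    rows = []
    for parts in lines[1:]:
        stamp = ' '.join(parts[:2])
        rows.append(dict(zip(fields, parts[2:]), time=stamp))
    return rows

samples = parse_sar_cpu(SAMPLE)
busiest = max(samples, key=lambda r: float(r['%user']))
```

Swap SAMPLE for the contents of a real sar log file and you have the start of a capacity-trending tool.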

Web programming and fat client GUIs also need some love. The GUI recipes that don’t use tkinter mostly use wxPython, which isn’t Python 3-compatible. Web programming is CGI in the 2nd edition, along with RSS feed aggregation, Nevow, etc. I’d love to see someone write a stdlib-only recipe for posting an image to a web server, and then maybe more than one recipe on how to easily implement a server that accepts them.
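
For the curious, here’s roughly what I have in mind, sketched with nothing but the stdlib: a tiny http.server handler that accepts a POSTed image, and a urllib.request client that sends one. The path and payload are made up for the demo; a real recipe would flesh out error handling and content types:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class UploadHandler(BaseHTTPRequestHandler):
    received = {}  # class-level scratch space for the demo

    def do_POST(self):
        # Read exactly the number of bytes the client declared.
        length = int(self.headers['Content-Length'])
        UploadHandler.received[self.path] = self.rfile.read(length)
        self.send_response(201)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve from a background thread.
server = HTTPServer(('127.0.0.1', 0), UploadHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

payload = b'\x89PNG not really an image, but close enough'
req = urllib.request.Request(
    'http://127.0.0.1:%d/cat.png' % port,
    data=payload,  # a data body makes this a POST
    headers={'Content-Type': 'image/png'})
status = urllib.request.urlopen(req).status
server.shutdown()
```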

Obviously, any recipe that solves a problem others are likely to have, using any of the aforementioned modules and features that didn’t exist in the last edition, would really rock.

How Do I Submit?

  1. Post the code and an explanation of the problem it solves somewhere on the internet, or send it (or a link to it) via email to PythonCookbook@oreilly.com or to @bkjones on Twitter.
  2. That’s it.

We’ll take care of the rest. “The rest” is basically us pinging O’Reilly, who will contact you to sign something that says it’s cool if we use your code in the book. You’ll be listed in the credits for that recipe, following the same pattern as previous editions. If it goes in relatively untouched, you’ll be the only name in the credits (also following the pattern of previous editions).

What Makes a Good Recipe?

A perfect recipe that is almost sure to make it into the cookbook would ideally meet most of the criteria set out in my earlier blog post on that very topic. Keep in mind that the ability to illustrate a language feature in code takes precedence over the eloquence of any surrounding prose.

What If…

I sort of doubt this will come up, but if we’ve already covered whatever is in your recipe, we’ll weigh the competing recipes on their merits. I want to say we’ll give new authors an edge in the decision, but for an authoritative work, a meritocracy seems the only valid approach.

If you think you’re not a good writer, then write the code, and a 2-line description of the problem it solves, and a 2-line description of how it works. We’ll flesh out the text if need be.

If you just can’t think of a good recipe, grep your code tree(s) for just the import statements, or look for ideas by answering questions on Stack Overflow or the various mailing lists.
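
A rough sketch of that grep idea in Python itself, using the ast module so you catch both import forms. Feed it real file contents instead of the inline sample to use it for real:

```python
import ast

def imports_in(source):
    # Walk the syntax tree and collect top-level module names from
    # both 'import x' and 'from x import y' statements.
    found = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            found.update(alias.name.split('.')[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split('.')[0])
    return sorted(found)

code = (
    "import os\n"
    "from collections import Counter\n"
    "import xml.etree.ElementTree as ET\n"
)
```

Run imports_in() over everything in your project and the resulting module list is a pretty honest inventory of what you actually know well enough to write about.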

If you think whatever you’re doing with the language isn’t very cool, then stop thinking that a cookbook is about being cool. It’s about being practical, and showing programmers possibly less senior than yourself an approach to a problem that isn’t completely insane or covered in warts, even if the problem is relatively simple.