Archive for the ‘Productivity’ Category

The bash history command

Thursday, February 19th, 2009

Sometimes I run through the search terms people use within my site, or to get to my site, and I see some interesting stuff. Over the years, I’ve written perhaps hundreds of technical blog posts and articles at various sites and in magazines (and a book), but I have never once touched on the “history” command. In spite of that, someone searched for “what does the history command do”, and somehow landed on this site. Here’s a quick overview I give people who take my Linux From the Ground Up class (I’m currently booking on-site training for the July-September period, by the way).

The history command is built into the bash shell (and zsh, and other shells as well). This means you won’t find it sitting around in /usr/bin. You also will get funny results if you run “man history”, which will bring you to a man page about all of your bash builtins instead of one specific man page for “history”.

Typical (Simple) Usage

You have a shell session open. You’ve been doing some work, and you’ve run several commands. One of the commands was a long nasty compilation command which contained environment variable settings, lots of flags, and references to specific files. It was long, and you don’t want to type it again, but you have to because the compilation failed. You’ve fixed the issue, and want to run the same compilation command.

Instead of typing all of that stuff in all over again, you have a couple of choices (more really, but we’re starting slowly):

  1. If you’re on a machine that uses readline capabilities, you can use the up arrow to move back through your previous commands. This is inefficient.
  2. You can type “history” at the command line.

Typing “history” at a shell prompt by itself will print a numbered list of commands you have run. Let’s say your compile command is number 50 in the list. Now typing “!50” at the command prompt will run that command again without you having to type in the whole disgusting thing.
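An illustrative session (the command and history numbers here are made up):

```
$ history | tail -3
   49  cd ~/src/project
   50  CFLAGS="-O2 -Wall" make -e bigtarget
   51  history
$ !50
CFLAGS="-O2 -Wall" make -e bigtarget
```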

More Useful Usage of “history”

If you know that you’ve only run, say, “netstat” with one set of arguments, and you keep running it over and over again, you can use a shortcut by just typing “!netstat”. That will run that last command in the history that starts with “netstat”. This is prone to error if you run a command often, but with different arguments.

If you need to run a long ugly command again immediately, but you need to change just a single argument, you can also use carets to do string substitution. So, if you just ran “netstat -plant | grep :80” and you now want to check for “:22” instead of “:80”, instead of typing in the whole command again, you can just type “^80^22”. Note that if you ran, in this order:

  1. netstat -plant | grep 80
  2. netstat -plant | grep 111
  3. ^80^22

You’ll get an error. It only works on the last command you ran.

You can also search through your command history in bash using Ctrl-R. This will alter your prompt, and when you start typing the command you’re looking for, it will do an incremental search through your command history and put what it finds on the line. When the right one pops up, hit enter, and the command will run. Note that if you need to make a quick edit to the command before it runs, moving the cursor will paste the command on your prompt, which will return to normal, and you can edit it before hitting enter.

History Command Quirks

There’s one quirk of history that bites lots and lots of people, myself included. If you’re aware of it, you can avoid it biting you. You see, bash keeps its command history for the current shell in memory, and then writes out that memory to your ~/.bash_history file… when the shell exits. There are times when this can be problematic, for example if you open another shell and want to run a command you just ran in the first shell. It won’t be in your history, because the first shell is still open, so its history hasn’t been written out to disk yet.

There are two ways to get the history that’s in memory onto disk. The first is to exit the first shell. The second is to run “history -w” in the first shell. Either will write the history to the ~/.bash_history file, but be forewarned that doing this overwrites anything that was previously in the ~/.bash_history file! Maddening, isn’t it?

If you configure bash ahead of time by adding “shopt -s histappend”, you can tell bash to append, rather than overwrite the history file.
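A minimal snippet for your ~/.bashrc (the PROMPT_COMMAND line is optional; “history -a” appends each new command to the file as you go, rather than waiting for the shell to exit):

```shell
# Append to ~/.bash_history on exit instead of overwriting it
shopt -s histappend
# Optionally, write each command out to the file as soon as it runs
PROMPT_COMMAND='history -a'
```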

Another quirk of how history is set up to work by default is that it saves every single command you type. If you open a terminal and type nothing but:

cd -
ls -l

both of those commands take up space in your history. The environment variable HISTSIZE can be set to something like 10000 to account for this if you want; it’s set to 1000 by default on Red Hat systems. The other thing you can do is tell the history mechanism to conserve space by ignoring certain commands, patterns, duplicates, etc., using HISTCONTROL and HISTIGNORE.

HISTCONTROL can be set to “ignorespace”, which will not put commands starting with a space in history. I’ve never been able to train myself to type a space before ‘ls’, so I’ve never used that setting. Slightly more useful is the “ignoredups” setting, which will stop the command you just ran from getting into history if it’s a duplicate of the command run just before it. Still not what I’m looking for. Better is “erasedups”, which will delete all previous instances of that command in history before writing the current command to history. HISTCONTROL can take multiple values, separated by colons.
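For example, to combine the space trick with duplicate removal (this is just one sensible combination of the values described above):

```shell
# Ignore commands that start with a space, and remove older
# duplicates of any command that gets re-run
export HISTCONTROL=ignorespace:erasedups
```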

If you just want to make sure “ls” commands never get written to history, you can just use HISTIGNORE for that. It can be set to a colon-delimited list of patterns or commands that, if matched, disqualify the command from being entered into history. So, you can do something like this:

export HISTIGNORE="ls:ls -l:du"

Note that these are essentially command matches, and must match exactly, starting from the beginning of the line. So, “ du” (with a space in front) will still be put into the history list, and “du -h” will make it in as well, since neither is an exact match for “du”. You can use, for example, HISTIGNORE=du* if you want to catch that command and anything that follows it.

Really Hardcore History

There’s a presentation online that talks about way more nitty-gritty details of history than I have time to cover. It’s great work, though, so I recommend you check it out here.


Throw out your Perl: One-line aggregation in awk

Monday, January 19th, 2009

I ran into a student from a class I taught last summer. He’s a really sharp guy, and when I first met him, I was impressed with just how much Perl he could stuff into his brain’s cache. He would write what he called ‘one-liners’ in Perl that, in reality, took up 5-10 lines in his terminal. Still, he’d type furiously, without skipping a beat. But he told me when we met that he no longer does this, because I covered awk in my class.

His one-liners were mostly for data munging. The data he needed to munge was mostly data that was pretty predictable. It had a fixed number of fields, a consistent delimiter — in short, it was perfect for munging in awk without using any kind of esoteric awk-ness.

One thing I cover in the learning module I’ve developed on Awk is aggregation of data using pretty simple awk one-liners. For example, here’s a pretty typical /etc/passwd file (we need some data to munge):

ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
avahi:x:70:70:Avahi daemon:/:/sbin/nologin
nscd:x:28:28:NSCD Daemon:/:/sbin/nologin
vcsa:x:69:69:virtual console memory owner:/dev:/sbin/nologin
rpc:x:32:32:Portmapper RPC user:/:/sbin/nologin
rpcuser:x:29:29:RPC Service User:/var/lib/nfs:/sbin/nologin
nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin
sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin
mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash
ldap:x:55:55:LDAP User:/var/lib/ldap:/bin/false
haldaemon:x:68:68:HAL daemon:/:/sbin/nologin
xfs:x:43:43:X Font Server:/etc/X11/fs:/sbin/nologin

It’s not exotic, cool data that we’re going to infer a lot of interesting things from, but it’ll do for pedagogical purposes.

Now, let’s write a super-simple awk aggregation routine that’ll count the number of users whose UID is > 100. It’ll look something like this:

awk -F: '$3 > 100 {x+=1} END {print x}' /etc/passwd

The important thing to remember is that awk will initialize your variables to 0 for you, which cuts down on some clutter.
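To try it without touching your real /etc/passwd, you can feed awk a few of the sample lines above on stdin; only nfsnobody’s UID of 65534 is greater than 100 here:

```shell
# Count users with UID > 100 in three sample passwd lines
count=$(printf '%s\n' \
  'ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin' \
  'nfsnobody:x:65534:65534:Anonymous NFS User:/var/lib/nfs:/sbin/nologin' \
  'mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash' \
  | awk -F: '$3 > 100 {x+=1} END {print x}')
echo "$count"
```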

Let’s abuse awk a bit further. What if we want to know how many users use each shell in /etc/passwd, whatever those shells may be? Here’s a one-liner that’ll take care of this for you:

awk -F: '{x[$7]+=1} END {for(z in x) {print z, x[z]} }' /etc/passwd

While awk doesn’t technically support multi-dimensional arrays, its arrays don’t have to be numerically indexed; they’re associative. So here, we tell awk to increment x[$7]. $7 is the field that holds the shell for each user, so if $7 on the current line is /bin/bash, then we’ve told awk to increment the value in the array indexed at x["/bin/bash"]. So, if there’s only one line containing /bin/bash up to the current record, then x["/bin/bash"] is 1.
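Here’s the same one-liner run against a three-line sample, so you can see the associative counting in action (the order of the output lines is up to awk):

```shell
# Tally shells across a few sample passwd lines
tally=$(printf '%s\n' \
  'dbus:x:81:81:System message bus:/:/sbin/nologin' \
  'sshd:x:74:74:Privilege-separated SSH:/var/empty/sshd:/sbin/nologin' \
  'mysql:x:27:27:MySQL Server:/var/lib/mysql:/bin/bash' \
  | awk -F: '{x[$7]+=1} END {for(z in x) {print z, x[z]}}')
echo "$tally"
```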

There are a lot of great things you can move on to from here. You can do things right in awk that others use Excel for. If you have checkbook information in a flat file, you can add up purchases only in a given category, or, using the technique above, in every category. If you store your stock purchase price on a line with the current price, you can use simple math to get the spread on each line and tell you whether your portfolio is up or down. Let’s have a look at something like that. Here’s some completely made up, hypothetical data representing a fictitious user’s stock portfolio:

AAPL,100,95.00,120.00
GOOG,20,300.00,280.00
RHT,200,15.00,18.50
MSFT,50,25.00,22.00
Save that in a file called “stocks.txt”. The columns are stock symbol, number of shares, purchase price, and current price, in that order. This awk one-liner indexes the ‘x’ array using the stock symbol, and the value at that index is set to the amount gained or lost:

awk -F, '{x[$1]=($2*$4)-($2*$3)} END {for(z in x) {print z, x[z]}}' stocks.txt

Hm. Actually, that’s kind of inefficient. I realized while previewing this post that I can shorten it up a bit like this:

awk -F, '{x[$1]=($2*($4 - $3))} END {for(z in x) {print z, x[z]}}' stocks.txt

Glad I caught that before the nitpickers flamed me to a crisp. Always preview your posts! ;-P

Ah, but of course, that’s not enough. This spits out the gain and loss for each stock, but what about the net gain or loss across all of them? You only need to tweak a little bit:

awk -F, '{x[$1]=($2*($4-$3)); y+=x[$1]} END {for(z in x) {print z, x[z]}; print "Net: "y}' stocks.txt

We just added the assignment of the ‘y’ variable before the “END”, and then added a print statement after the “END”.
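For a concrete end-to-end run, here are some made-up rows (symbol, shares, purchase price, current price) saved as stocks.txt, followed by the one-liner:

```shell
# Four hypothetical positions: e.g. AAPL up 25/share, GOOG down 20/share
printf '%s\n' \
  'AAPL,100,95.00,120.00' \
  'GOOG,20,300.00,280.00' \
  'RHT,200,15.00,18.50' \
  'MSFT,50,25.00,22.00' > stocks.txt
out=$(awk -F, '{x[$1]=($2*($4-$3)); y+=x[$1]} END {for(z in x) {print z, x[z]}; print "Net: "y}' stocks.txt)
echo "$out"
```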

I hope this helps some folks out there. Also, if your team needs to know stuff like this, I do on-site training!

Holiday Project: Plot Google Calendar Events on Google Map

Tuesday, December 23rd, 2008

[UPDATE: 2009/08/08]: I’ve now gotten stuck on two separate projects, trying to find a bridge between Python code that generates data, and javascript code that is apparently required in order to present it. I haven’t found such a bridge. For one project, I was able to do what I needed with ReportLab (there was no webification requirement). For this project, it’s pretty useless if it’s not on the web, imho. So until I either break down and decide to deal with the monstrosity that *is* javascript, or someone else takes this code and runs with it, here it sits. There is only a slight chance I’ll decide Javascript is something I want to deal with. I’ve written some code in javascript. I’ve never enjoyed it, and I’m not very good at it.

While putting together the US Technical Conferences calendar over the past week or two, I noticed that the location of probably 80% of them (I’m guessing – it’s probably higher) is somewhere in California. I’ve always noticed that there is this trend to hold technical conferences in California, because there’s an enormous concentration of technology workers there. But c’mon! How about an OSCON East or something?

Anyway, I have more going on this holiday season than usual, but I always try to spend some of the downtime doing something interesting, and the Google Maps API is probably one of the few Google APIs I’ve never used. What better way to expose the inequity in conference locales than to plot all of my Tech Conference calendar events on a Google Map? Oh, and it’d be useful for people to be able to visually see where conferences are, and maybe color-code the markers according to Q1 2009, Q2 2009, etc.

I’ll be doing this in Python, by the way, though I’m pretty sure I’ve decided to go ahead and use javascript for the Maps portion. There *is* a Python utility that attempts to relieve you of the hellhound that is javascript, but it’s not documented at all that I’ve seen, and wrappers that attempt to generate javascript tend to be flaky (including one I wrote myself – I’m not picking here). I’m also using geopy for the geocoding, because it provides more flexibility than hard-coding calls against the Google Maps API.

By the way, this isn’t like some new fantastic idea I had. Someone else came up with a solution that works quite some time ago, but it involved several steps including Yahoo pipes and stuff. I really just wanted a script that, called with a parameter or two, would dump the appropriate .html file into a directory, or better, would just be called directly from the browser and take input (baby steps). Later, there are aspirations of plugging it into Django, an area where Google Maps has already seen some integration work, and another victim of my recent exploration/experimentation.

So, I already have some prototype code that might be useful for others who are maybe just starting out with the Calendar API. This bit of code will create an authenticated CalendarService object and dump the title and location of events on a calendar of your choosing. The assumption here is that the location is in some form that is parseable by whatever geocoding service you decide to use. For me (for now), I’m just using city and state locations – not addresses. Here goes:

#!/usr/bin/env python

try:
  from xml.etree import ElementTree # for Python 2.5 users
except ImportError:
  from elementtree import ElementTree
import gdata.calendar.service
import gdata.service
import atom.service
import gdata.calendar
import getpass
import atom
import getopt
import sys
import string
import time
import geopy

calendar_service = gdata.calendar.service.CalendarService()
calendar_service.email = 'your_google_account_email'
calendar_service.password = getpass.getpass()
calendar_service.ProgrammaticLogin()  # log in before requesting the private feed
cal_id = 'id_from_calendar_settings_page_or_email_for_your_default_cal'
feed = calendar_service.GetCalendarEventFeed('/calendar/feeds/'+cal_id+'/private/full')
geo = geopy.geocoders.Google('your_google_api_key')

print feed.title.text  # the name of the calendar we're sucking feeds out of.
for event in feed.entry:
  print event.title.text, event.where[0].value_string # The event name, and location.
  (lat,lon) = geo.geocode(event.where[0].value_string)
  print lat,lon

It’s a start. Next I need to figure out the steps between this stage and making the actual cool stuff happen. Hopefully I’ll be back with an update right around the new year.

What Ordinary Users Think About IE: Debunked

Wednesday, December 17th, 2008

Point all of your chain-mail-forwarding family and friends at this post. It’s a collection of things people have said to me, or that I’ve overheard, that reveal little tidbits about what people are thinking when they use IE.

I have to use IE – it’s my internet!

IE is not your internet. IE is what’s known as a web browser. There are lots of different web browsers. IE just happens to be the one that comes with Windows. It doesn’t make it a good browser or anything. It’s just there in the event that you have no other browser. If the only browser on your system is IE, the first thing you should do is use it to download Firefox by clicking here.

If IE is so horrible, how come everyone uses it?

They don’t, actually. There was a time not too long ago when over 90% of internet users used IE. However, with the constant flood of security issues (IE usage really should be considered dangerous at this point), IE’s horrible support of web standards (which makes it hard for web developers to create cool sites for you to use), and its inability to keep up with really cool features in modern browsers, its share of the internet usage market has been declining steadily over the last couple of years. In fact, this source puts IE usage at around 45% currently, so not even a majority of people use IE anymore, if statistics are to be believed. Accurate statistics for browser use are difficult to nail down, and are probably more useful for discerning a trend than for hard numbers. Still, the usage trend for IE is moving downward, steadily, and not particularly slowly. If you’re still using IE, you’re almost a dinosaur. Just about the entire tech-savvy world has migrated over to Firefox, with small contingents choosing Safari (Mac only) and Chrome (Windows only). Very small camps also use Opera and Konqueror.

This is also not to be trusted, but it’s my opinion based on observation of the IT field over the past 10 years: of the people still using IE, probably half of them are forced to use it in their offices because they don’t have the proper permissions on their office computers to install anything else. The other half probably just don’t realize they have any choice in the matter. You do. There are other browsers. I’ve named a few in this post. Go get one, or three, of them.

Will all of the sites I use still work?

It has always been exceedingly rare that a web site actually *requires* IE in order to work properly. Your online banking, email, video, pictures, shopping, etc., will all still work. The only time you might need IE around is to use the Microsoft Update website. In all likelihood, you’ll be much happier with your internet experience using something like Firefox than you ever were with IE. Think about it this way: I’m a complete geek. I use the internet for things ordinary users didn’t even know you could do. I bank, shop, communicate, and manage projects, calendars and email online, and I registered and run my business completely online. It’s difficult to think of a task that can be done on the internet that I don’t use the internet for, and I haven’t used IE in probably 8 years, without any issues. If you find a web site that absolutely, positively CANNOT be used UNLESS you’re viewing it with IE, please post it in the comments, and I’ll create a “hall of shame” page to list them all, along with alternative sites you can access WITHOUT IE, which probably provide a better service anyway :)

I’m not technical enough to install another browser.

Who told you that?! That’s silly. You installed Elf Bowling didn’t you? C’mon, I know you did. Or what about that crazy toolbar that’s now fuddling up your IE window? Or those icons blinking down near the clock that you forgot the purpose of? At some point, you have installed something on your computer, and it was, in all likelihood, harder to do than installing Firefox would be. It’s simple. You go here, click on the huge Firefox logo, and it presents you with super-duper easy instructions (with pictures!) and a download. It takes less than 3 minutes to install, and you DO NOT have to know what you’re doing in any way or be geeky in any way to install it. If you can tell whether your computer is turned on or not, you’re overqualified to be a professional Firefox installer.

I Like IE. I have no problems with IE.

Whether you realize it or not, you have problems with IE, believe me. I had a cousin who said he had no problems with IE too. Then he came to my house one day, knocked on my door, and when I opened it, he handed me a hard drive from his computer. He said that all of his pictures of his first-born child were on there, and his computer had contracted a virus, and he couldn’t even boot from the hard drive. So it was up to me to recover the only pics he had of his only son being born. True story. Turns out, I tracked down the virus on the hard drive, and it was contracted by IE. Also, it wasn’t the only virus he had. If you think you’re safe because you have antivirus software, you’re sadly mistaken. He had it installed too, but it hadn’t been updated in 6 months, so any viruses released since the last update weren’t recognized by the antivirus software, and were allowed to roam freely onto his hard drive.

There has never, in the history of browsers, been a worse track record with regards to security than IE. Never. I promise – but you’re free to Google around for yourself. Half of the reason antivirus software even exists is purely to protect IE users (though email viruses are a problem independent of what browser you use, admittedly).

The other reason you might say you like IE is because you’ve never used anything else. As an alternative, I strongly suggest giving Firefox a shot.

Why do you care what browser I use?

I’m a technology guy. I’m one of those people that would work with technology even if he wasn’t being paid. Some people care about cooking, or quilting, or stained glass, or candlemaking, or knitting, or sewing, or horticulture, or wine. Heck, my mom cares about every single one of those things! Me, I care about technology, and I care about the internet. I want the internet to be a better place. Browsers play a non-trivial role in making the internet a better place. Also, one reason I care about technology is that it helps people do things they might otherwise be unable to do. Browsers enable users to do great things, and it allows us developers to make great things available to you. But when countless hours are spent trying to make things work with IE, it just slows everything down, and you don’t get cool stuff on the internet nearly as fast as you could.

So, it’s less about me caring what browser you use. In fact, I don’t really care if you use Firefox or not, it just happens to be the best browser out there currently. If you want to try something completely different, I encourage that too. It’s more about me caring about technology, the internet, and your browsing experience.

Open Source Technology US Conference Calendar

Tuesday, December 16th, 2008

One of the best ways to keep up with your field and network at the same time is to attend conferences. It’s one of the things I look forward to every year. After learning that O’Reilly has decided to commit blasphemy and *not* hold OSCON in Portland, Oregon the same week as the Oregon Brewers Festival, I was inspired to look around at what other conferences I might attend in 2009. Turns out, this is a huge pain in the ass, because I can’t find a single, central place that lists all of the conferences I’m likely to be interested in.

So… I created a public Google Calendar. It’s called “US Technical Conferences”. It needs more conferences, but I’ve listed the interesting ones I found. In order to keep the calendar from getting overwhelmingly crowded, I’ve decided that conferences on the list should:

  • Deal with open source technology in some way. This is purposely broad.
  • Be at least 3 days in length

If you want something added to the calendar, I’d be delighted to know about more conferences, so leave a comment! If you want to subscribe to the calendar, it’s public – the xml feed is here, and ical is here.

How Are You Staffing Your Startup?

Monday, December 15th, 2008

I have, in the past, worked for startups of varying forms. I worked for a spinoff that ultimately failed but had the most awesome product I’ve ever seen (neural networks were involved, need I say more?). I helped a buddy very early on with his startup, which did great until angel investors crept in, destroyed his vision, and completely failed to understand the Long Tail vision my buddy was trying to achieve. And I worked for a web 2.0 startup which was pretty successful, and was subsequently purchased… by another startup!

Working in academia for 6 years also exposed me to people who are firing up businesses, or projects that accidentally become businesses, and some of those go nowhere, while others seem to be on the verge of NYSE listing now, while a year ago they were housed in the smallest office I’ve ever seen, using lawn furniture for their workstations.

Of course, I’ve also consulted for, and been interviewed by, a host of other startups – recently, even.

First, the bad news

The bad news is that most or all of these startups are headed by developers, and they have applied *only* dev-centric thinking to their startup. They’ve thought about how to solve all of the app-level issues, mapped out use cases, drawn up interfaces, hacked together prototypes, and done all kinds of app-level work. Then, they’ve hired more developers. Then more after that.

Some seem to have given almost zero consideration to the fact that their application might become successful, and its availability might become quite critical. They haven’t given much thought to things like backups or disaster recovery. They have no plan for how to deploy their application such that when it comes time to scale, it has some hope of doing so without large amounts of downtime, or huge retooling efforts.

They’ve also given very little thought to how to enable their workforce to communicate, access their applications and data remotely without huge security compromises, and generally provide the back end system services necessary to run a business effectively (though, admittedly, most startups don’t require much in the way of things like NFS, or even internally-hosted email in the very beginning).

In short, they’ve either assumed that systems folks’ jobs are so easy that it can be handled by the developers, or they think that scalability lives entirely in their code, or they’re just not thinking about system-level aspects of their application at all. And don’t even get me started about the databases I’ve seen.

I know of more than one startup, right now, months late in going live. None of them, at the time I spoke to them, had a systems person on staff, or a deployment plan that addressed things that a systems person would address. What they had was a lot of developers and a deadline. Epic fail. Yes, even if you use agile development methodologies.

The Good

The good news is that, while some companies hire no systems folks at all and flounder around system-related issues forever, others hire at least one or two good systems folks, and make them a part of a collaborative effort to solve systems problems in interesting ways, utilizing the knowledge and experience of both systems and development personnel to create truly unique solutions. When sysadmins and developers collaborate to solve these issues, I have learned that they can create things that will blow your mind (in a good way).

In fact, Tom Limoncelli wrote recently that systems administration needs more PhDs. Well, I suppose that would help, but I think we’d get really far, really fast, if we could just break down some of the walls between sysadmins and developers, give them a common goal, and let them hash it out. Sysadmins have an understanding of the system and network-related issues that developers aren’t likely to have. Developers, in most cases, can probably write better code, much more quickly, than a sysadmin. Developers and sysadmins working together, sharing their knowledge and communicating with each other, can solve systems problems in new, unique, creative, and very effective ways.

The End

In the end, issues facing startups now blur the line between development and system administration a bit more than in the past. There are problems that need solving for which there is no RPM or Deb package. These problems require some knowledge of how related or analogous problems have been solved in the past. A knowledge of the systems and development approaches that have worked, and why. Enough experience to have seen where and when things go bad, and why. It also requires creative and critical thinking. I think that good, senior systems and development people have these skills, and much more.

For whatever reason, it seems that the only time these two camps ever meet is on opposite sides of a deployment or application support problem. Perhaps this happens with enough frequency for people to think that the two camps can’t, or won’t, work together. They can, and they will. People with a love for technology will come together if the common goal involves furthering technology and its use, regardless of their background. Sure, it takes proper management like any other project, but it can be done.

If you’ve had experiences, good or bad, with dev/sysadmin collaboration, I’d love to hear your stories!

WordPress 2.7 – Ahhhhhh!

Thursday, December 11th, 2008

I guess WordPress doesn’t consider the changes they’ve made in 2.7 (released today) to be big enough to warrant a change to the major version number (which would make it 3.0). However, there are a few features now built-in that I’ve been dreaming about for so long that simply incrementing the second number seems to sell this version short. At least they named it after one of my favorite jazz musicians. This release is called “Coltrane”. Nice.

My top two feature requests: Check!

First and foremost, the number one thing on my list of desired features is now a reality: I can make bulk changes to the categories of my posts. So, when I add a category to WordPress, and then realize that lots of my old posts really belong there, I don’t have to go searching around and changing them by hand. I still might take a stab at doing back-end automation here, by scripting a tool that’ll search the content of all of my posts, and if the content has, say, 2 out of 3 terms in my search criteria, it’ll add the post to the category, using whatever database trickery is necessary. However, this solves almost all of my needs (save my need to hack things, sometimes for its own sake).

The other feature I’ve been wanting for a long time is also now a reality: replying to comments without having to go to the post page to do it. You can now moderate and reply to comments right in the dashboard.

This, for me, is huge. I’ve been waiting for these two particular features since about 2005.

More Baked-in Goodness

Some other niceties are now built-in that used to be addon modules in WordPress, which is great, because I’m always worried about third-party modules breaking and being abandoned as new WP releases come out. The nicest for me, as someone who maintains their own wp install, is the automated WP upgrade. Used to be an addon, now built in.

Another nice feature, if you *are* someone who doesn’t mind third party modules, is that now you can browse available modules, and install them, without leaving the wp interface.

Yes, another complete redesign

The admin interface has been completely overhauled, again. The last time they did this, a buddy and I discussed it, and although he felt one or two things were nicer, I felt that they had not addressed the biggest problems with the interface. Well, they fixed it by doing something I didn’t actually expect: they admitted defeat.

Instead of overhauling the interface, they’ve empowered the user to do it for themselves. Want the editor to fit the width of the browser window? No problem. Never use all of those features in the editing interface? Get rid of them. Only just noticed all those news items in the dashboard? Make them more prominent. You can do all of this by dragging and dropping things around, or collapsing them to ‘icon-only’ view.

I am writing this in 2.7, and in the editor interface, I definitely feel like more of what I need is readily available instead of buried somewhere in the countless blocks and sections and whatnot – which reminds me that there’s also a new (and quite nice) menu interface – also a part of the interface you can customize.

Check out the video and notes on the WordPress site. The tour video does a great job of giving a quick rundown of the new features I’ve mentioned here, and lots and lots of features I *didn’t* cover.

Linux on Laptop = Epic Fail

Tuesday, November 25th, 2008

I brought my MacBook Pro in for a warranty repair yesterday around noon. Since then I’ve been using a Lenovo T61 to get basic work done, and also to see if any progress has been made in the area of Linux support for my laptop. I bought this laptop specifically because a website said that it was very well supported by Linux distributions “out of the box”, including video and wireless. I was sure to make hardware choices that didn’t require special third-party drivers… I’ve been doing this for 10 years, so I have some understanding of how to buy a laptop that I plan to put Linux on. Well, this time I apparently failed.

First, I had Ubuntu installed, and I was never able to keep the wireless card working consistently. To be honest, Ubuntu is the best distro I’ve had on this thing so far. Next, I gave OpenSUSE 11 a shot, and there’s been no end to the issues. Of course, it started with the wireless card. I have an Intel 3945ABG wireless card, according to lspci and dmesg output. In fact, here’s my lspci output right here:

00:00.0 Host bridge: Intel Corporation Mobile PM965/GM965/GL960 Memory Controller Hub (rev 0c)
00:02.0 VGA compatible controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)
00:02.1 Display controller: Intel Corporation Mobile GM965/GL960 Integrated Graphics Controller (rev 0c)
00:19.0 Ethernet controller: Intel Corporation 82566MM Gigabit Network Connection (rev 03)
00:1a.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #4 (rev 03)
00:1a.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #5 (rev 03)
00:1a.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #2 (rev 03)
00:1b.0 Audio device: Intel Corporation 82801H (ICH8 Family) HD Audio Controller (rev 03)
00:1c.0 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 1 (rev 03)
00:1c.1 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 2 (rev 03)
00:1c.2 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 3 (rev 03)
00:1c.3 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 4 (rev 03)
00:1c.4 PCI bridge: Intel Corporation 82801H (ICH8 Family) PCI Express Port 5 (rev 03)
00:1d.0 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #1 (rev 03)
00:1d.1 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #2 (rev 03)
00:1d.2 USB Controller: Intel Corporation 82801H (ICH8 Family) USB UHCI Controller #3 (rev 03)
00:1d.7 USB Controller: Intel Corporation 82801H (ICH8 Family) USB2 EHCI Controller #1 (rev 03)
00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev f3)
00:1f.0 ISA bridge: Intel Corporation 82801HBM (ICH8M-E) LPC Interface Controller (rev 03)
00:1f.1 IDE interface: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) IDE Controller (rev 03)
00:1f.2 SATA controller: Intel Corporation 82801HBM/HEM (ICH8M/ICH8M-E) SATA AHCI Controller (rev 03)
00:1f.3 SMBus: Intel Corporation 82801H (ICH8 Family) SMBus Controller (rev 03)
03:00.0 Network controller: Intel Corporation PRO/Wireless 3945ABG Network Connection (rev 02)
15:00.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba)
15:00.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04)
15:00.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21)
15:00.3 System peripheral: Ricoh Co Ltd R5C843 MMC Host Controller (rev ff)
15:00.4 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 11)
15:00.5 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 11)

I’m running the KDE4 desktop, and tried using the default NetworkManager icon in the systray to get things working. From what I saw there, it appeared that my card wasn’t scanning. I put in my network details manually, tried to connect, and it failed with no errors. The NetworkManager log had lots of output, but nothing particularly useful. It just said the association took too long and that it was now marking the connection as ‘invalid’. Great. So here I am, trying to use Linux on the desktop, and only 5 minutes after the very first system boot I’m tailing log files and debugging, basically playing sysadmin, which is exactly what I don’t want to be doing on my desktop system. Restart NetworkManager, see what dhclient is doing, reboot, check /etc/modprobe.d, lsmod…. fail. Now what?

Well, I opened kwifimanager, and it said that I had indeed associated with an access point. So… I *am* scanning? Hmm. I had no IP address, so I figured I had probably fat-fingered my WEP settings somewhere. Tailing /var/log/messages agrees, saying WEP decryption is failing. So I double-check everything, all looks normal and correct to me, I try again, and No Bueno. *sigh*.

Finally, I reverted to command-line tactics, and ran this little line:

iwconfig wlan0 essid <myssid> key <mykey>

Magically, it works, where all of the GUI nonsense had failed. Now here’s a question: how the hell do you get this to “just work” at boot time? Well, I had about 10 emails to send to clients, so I put that question off and fired up a browser and…. fail. WTF?
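(For the record, the usual answer to the boot-time question on SUSE-family distros of that era was an ifcfg file under /etc/sysconfig/network/. A sketch; the key names are my best recollection of the sysconfig scheme, so check them against the ifcfg templates on your own system:)

```
# /etc/sysconfig/network/ifcfg-wlan0 -- key names from memory, verify locally
STARTMODE='auto'            # bring the interface up at boot
BOOTPROTO='dhcp'            # get IP, DNS, and gateway from the router
WIRELESS_ESSID='<myssid>'
WIRELESS_KEY_0='<mykey>'
WIRELESS_AUTH_MODE='open'
```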

I had an IP address, pinged my router, pinged another host on the network, all good. Pinged an external IP I know by heart, fail. Ugh. Ran ‘cat /etc/resolv.conf’ — empty. Apparently, dhclient never wrote the DNS information it got from my router into resolv.conf. It also didn’t update when I set the domain in NetworkManager to ‘home’: the file still said ‘search site’. I added the proper lines in there, and tried again in the browser… fail. Now what?!?
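(The “proper lines” amount to the standard two-entry resolv.conf. The address below is a made-up example, not my actual router:)

```
# /etc/resolv.conf -- the lines dhclient should have written
# 192.168.1.1 is an example address; use your own router/DNS server
search home
nameserver 192.168.1.1
```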

Ran ‘netstat -rn’. I don’t have a default gateway. *sigh*…

route add default gw

And I finally have internet access.
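Looking back, the whole outage came down to two missing pieces: a default route and a populated resolv.conf. As a sketch (my own throwaway script, not a real tool; the variable names are made up), the triage fits in a few lines of shell:

```shell
#!/bin/sh
# Tiny triage script mirroring the steps above: is there a default
# route, and does resolv.conf have a nameserver? ROUTE_TABLE and
# RESOLV_CONF are overridable so the checks can be dry-run on sample data.
ROUTE_TABLE="${ROUTE_TABLE:-$(netstat -rn 2>/dev/null)}"
RESOLV_CONF="${RESOLV_CONF:-/etc/resolv.conf}"

check_gateway() {
  # On Linux, netstat -rn lists the default route as destination 0.0.0.0
  printf '%s\n' "$ROUTE_TABLE" | grep -Eq '^(0\.0\.0\.0|default)' \
    && echo "gateway: ok" \
    || echo "gateway: missing"
}

check_dns() {
  grep -q '^nameserver' "$RESOLV_CONF" 2>/dev/null \
    && echo "dns: ok" \
    || echo "dns: missing"
}

check_gateway
check_dns
```

Overriding ROUTE_TABLE or RESOLV_CONF lets you exercise the checks without touching the live system.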

Of course, I can’t work 24 hours a day, so I went to bed, and left my laptop running so I could get right back to work in the morning. Or not.

I had foolishly chosen to use an OpenGL screensaver. Overnight, it completely locked up the machine, rendering it useless without forcibly rebooting it. So much for getting right back to work.

Well, let’s see if I can get some of these issues fixed by updating the software, since I’m now at least connected to the internet (of course, after the forced reboot, I had to do the iwconfig->route add routine again). Ran the updater, picked some extra repositories, and it went off to set things up. Unfortunately, it also prompted me to import probably 50 or so GPG keys. Annoying. More annoying: after all of that, it failed to update any of my software, even though it told me there were updates available. Why, you ask? Here’s what I got…

Failed to mount cd:///?devices=/dev/sr0 on /var/adm/mount/AP_0x00000001: No medium found (mount: No medium found)

Click ok. Get same error again. Click ok. Get slightly different error…

Unexpected exception. Failed to mount cd:///?devices=/dev/sr0 on /var/adm/mount/AP_0x00000001: No medium found (mount: No medium found)

Click Ok, get another message…

Please file a bug report about this. See for instructions.

I go there, and the URL isn’t valid. I find the Troubleshooting page on my own, and there’s a bunch of generic troubleshooting information there, more command-line sysadmin-ish stuff. Just the kind of thing I don’t need to be spending otherwise billable time on. I give up and decide that I’ll just deal with it in its broken-ass state for the next 10 hours or so, until I can get my beloved MacBook Pro back.

On Remote Workers and Working Remotely

Tuesday, November 18th, 2008

I’ve been on both sides of the remote worker relationship. On the manager side, I’ve managed some good-sized projects using an all-remote work force. Indeed, I’ve hired, managed, fired, and promoted workers without ever knowing what they look like. On the worker side, I do most of my work remotely, and I have for some time now. Judging by the amount of repeat business I get, I’d say that I’m more than acceptably productive working remotely.

In dealing with various clients, recruiters, prospective employers, business owners, and talking to friends who manage people for a living, I’ve heard pretty much every excuse/reason there is for not wanting to deal with a remote work force. I’ve heard and experienced successes with remote workers as well, and they all have a few key things in common, which are missing from the stories of failure. I’ll talk about them in a minute.

I first want to just say that I’m not some kind of fanboy who thinks remote workers are the answer to every problem. There are valid reasons for not having remote workers. For example, it’d be hard to build cars with a remote work force. Some things (some!) just require a physical presence. Whoever maintains the printers at your company really has to be around to change out ink cartridges and stuff like that.

There are certain classes of jobs, though, that are well-suited to working remotely. There are even classes of jobs that are necessarily performed remotely to some degree (field sales and support technicians for example), that could be made 100% remote with the proper tools and processes in place.

So what makes a remote worker success story different from a story of failure?

Always be prepared…

The number one difference I’ve seen between success and failure in managing a remote work force is that successful managers spent the time to prepare the managers, the team, the department, the organization, and the remote workers themselves to work remotely.

If you don’t prepare for a remote work force, you will fail miserably. As a result, I’m a big advocate of treating “Let’s go remote!” as an internal project with goals and milestones just like any other project. Preparing an organization to manage a remote work force takes a good deal of forethought, with a focus on communication and collaboration tools, reporting, accountability, scheduling, etc. In addition, you have to prepare the remote workers themselves, to ensure they know what’s expected of them in terms of reporting their status, scheduling, communication, etc. They also need to know *about*, and *how to use*, the tools they’ll be expected to use from home.

You have to plan this. You have to prepare, or you’re going to be like the HR manager who told me their company no longer allows for remote workers because “we tried it once and the guy made a complete mess of things”. When I asked the HR manager why he attributed that to the geographic location of the worker, he said “good point, he could just as well have made a mess here in the office”. You need good workers no matter where they’re going to work. The workers need expectations and goals from the manager, and the manager needs feedback and communication (and results!) from the worker. Tools help to facilitate these things. This is already a long post, so I’ll probably make a tools list in another post.

Communicate, and set expectations

Before the tools come other higher-level decisions and communication. For example, one problem I’ve heard more than once about remote workers is “we can’t hire a remote worker full-time, because then everyone will want to work from home”. As if they didn’t already all want to work from home! Everyone would love to have the option! Even if they didn’t take advantage of it, they’d consider it a really cool perk! They’d tell all of their friends about it, because it would make them jealous, and guess who their friends will contact first when they start to look for other opportunities?

You have to start somewhere, and you can’t just swing the barn doors open and let everyone go their own way on day 1. If you have an existing corporate structure in place with assets and services and regular meetings and the like, then you have to decide who can benefit most from a remote situation the soonest, make them the pilot group, and manage the expectations of the rest of the organization while the pilot group prepares to move to a remote workspace.

1, 10, 100, 1000

A common software application rollout strategy is to make it accessible to 1 user, then 10, then 100, then 1000, then… move up from there. In preparing your organization or department, you might consider a similar strategy.

I work for a client right now where I’m the “1”. If I can work effectively with the rest of the team (in the office), if I can produce results, remain accessible as needed during working hours, manage the expectations of my team with regard to my presence (appointments happen), and overall be an asset to the team, then the management may decide that it can work on some larger scale – even if ‘larger’ means 2 instead of 1. It might also be useful to do a ‘remote rotation’ so that glitches can be caught early, before making a physical presence in the office optional.

Success, of course, means getting together with the team and figuring out what tools will be used to best emulate an office working environment. We use IRC for 99% of our communication, falling back to email when we need to cc managers; we have a wiki for documentation and status updates; we have a trouble ticket system; everyone has everyone else’s phone number, BlackBerry PIN, or whatever. We’re a technical group doing system administration. It’s working wonderfully.

“But if the sysadmins work from home, the developers will want to work from home!” Maybe so. That’s where you have to manage expectations, and communicate with your workers to let them know that the company’s ‘office optional’ project is in an early alpha stage, that it’s being tested on the group most familiar with the technologies involved, and most capable of exploiting those technologies successfully to produce results. Once the geeks work out the shortcomings, and management is able to evaluate the effectiveness of the plan, the tests will become more widespread.

Really, it’s not a whole lot different from doing anything else that affects the whole company: changing payroll providers, healthcare options, software and desktop hardware upgrades and replacements… it just takes communication. The process has to be managed, just like every other process.

There’s more than one way to do it!

There’s no one solution out there. When I joined php|architect Magazine in 2003, it was run by Marco Tabini, and I was a remote editor. A couple of months after joining, I became editor in chief, and was in charge of remotely managing the magazine. I did it differently from Marco, but he still remained involved and engaged through good communication.

Python Magazine was created and managed by me, and for the entire lifespan of the magazine, I have not seen anyone else involved in its production in person. Ever. Design, production, web site admin, executive administration, tech editors, authors, accountants… timelines, budgets and planning documents… all remote, and mostly delegated. I started the magazine with the thought that at some point someone more engaged in the community and with Python should take charge — I was just a “temp” to get the vision off the ground. Sure enough, when I handed the magazine over to Doug Hellmann, he did things differently from me, and it’s working out wonderfully for him as well!

Everyone has their own management style. Don’t think that just because your management style is a little unique you can’t handle remote workers. Good managers are creative, and aren’t afraid to execute on creative solutions.

I’m a Top 25 Geek Blogger… for some value of “Top”

Monday, November 10th, 2008

I’m not someone who wakes up every day and looks at how my blog is ranked by all of the various services. I check out my WordPress stats, but that’s really about it. However, someone went and did some of the work for me, and they’ve decided that, of the blogs that they read or that were suggested to them, this blog ranks #20 in a listing of 25.

I’m really flattered, but wonder if it’s an indicator that this is a quality blog, or that they should aim higher in their blog reading ;-P  Either way, listing 25 bloggers in a flattering way is a fantastic marketing technique, because most of us are probably egomaniacal enough to say “Hey! Look!” and link back to the list on *your* blog, resulting in lots of traffic. Kudos, and thanks Mobile Maven!