I’m continuing to take on the items in Dave Thomas’s Code Kata collection. It’s a nice way to spend a Sunday night, and it’s a good way to get my brain going again before work on Monday morning. It’s also fun and educational 🙂
CodeKata 4 is called “Data Munging”. It’s not very difficult data munging, really. I think the more interesting bit of the Kata is how to deal with files in different languages, what tools are well suited to the task, and trying to minimize code duplication.
Description
Code Kata 4 instructs us to download weather.dat, a listing of weather information for each day of June 2002 in Morristown, NJ. We’re to write a program that identifies the day with the smallest gap between min and max temperatures.
Then, it says to download football.dat, a listing of season statistics for soccer teams. We’re to write a program that identifies the team with the smallest gap between goals scored for and goals scored against it.
Once those are done, we’re asked to try to factor out as much duplicate code as possible between the two programs, and then we’re asked a few questions to help us think more deeply about what just transpired.
My Attack Plan
The first thing that jumped into my brain when I saw the data files was “Awk would be perfect for this”. I fiddled with that for a little too long (my awk is a little rusty), and came up with this (for weather.dat):
>$ awk 'BEGIN {min=100000}; $1 ~ /^[1-3]/ {x[$1]=$2-$3; if (x[$1]<min){ min=x[$1]; winner=$1}} END {print winner, min}' weather.dat
14 2
It works, and awk, though it gets ugly to some, reads in a nice, linear way to me. You give it a filter expression, and then statements to act on the matching lines (in braces). What could be simpler?
After proving to myself that I hadn’t completely lost my awk-fu, I went about writing a Python script to deal with the problem. I had read ahead in the problem description, though, so my single script contains separate blocks for the two data sets:
#!/usr/bin/env python
import sys
import string

data = open(sys.argv[1], 'r').readlines()
data.sort()

if 'weather' in sys.argv[1]:
    winner = 1000000
    winnerday = None
    for line in data:
        # filter out lines that aren't numbered.
        if line.strip().startswith(tuple(string.digits)):
            # we only need the first three fields to do our work
            l = line.split()[:3]
            # some temps have asterisks attached to them.
            maxt = l[1].strip(string.punctuation)
            mint = l[2].strip(string.punctuation)
            diff = int(maxt) - int(mint)
            if diff < winner:
                winner = diff
                winnerday = l[0]
    print "On day %s, the temp difference was only %d degrees!" % (winnerday, winner)

if 'football' in sys.argv[1]:
    winner = 1000000
    winnerteam = None
    for line in data:
        if line.strip().startswith(tuple(string.digits)):
            l = line.split()
            team, f, a = l[1], int(l[6]), int(l[8])
            diff = abs(f - a)
            if diff < winner:
                winner = diff
                winnerteam = team
    print "Team %s had a for/against gap of only %d points!" % (winnerteam, winner)
Really, the logic employed is not much different from the awk solution:
- Set a default for ‘winner’ that’s unlikely to be rivaled by the data 🙂
- Set the default for the winning team or day to ‘None’
- Filter out unwanted lines in the dataset.
- Grab bits of each line that are useful.
- Assign each useful bit to a variable.
- Do math.
- Do comparisons.
- Present results.
Refactoring
Part 2 of the Kata says to factor out as much duplicate code as possible. I was able to factor out almost all of it on the first shot at refactoring, leaving only the line of code (per file) to identify the relevant columns of each data set:
#!/usr/bin/env python
import sys
import string

data = open(sys.argv[1], 'r').readlines()
data.sort()

winner_val = 1000000
winner_id = None
for line in data:
    if line.strip().startswith(tuple(string.digits)):
        l = line.split()
        if 'weather' in sys.argv[1]:
            identifier = l[0]
            minuend = int(l[1].strip(string.punctuation))
            subtrahend = int(l[2].strip(string.punctuation))
        elif 'football' in sys.argv[1]:
            identifier, minuend, subtrahend = l[1], int(l[6]), int(l[8])
        diff = abs(minuend - subtrahend)
        if diff < winner_val:
            winner_val = diff
            winner_id = identifier
print winner_id, winner_val
Not too bad. I could’ve done some other things to make it more general: for example, I could’ve let the user feed in the column header names of the ‘identifier’, ‘minuend’, and ‘subtrahend’ columns in each data set. Then, instead of filtering out the header line, I could use it to identify the list index positions of the bits I need from each line. That would require more effort from the user, but it would make things “just work” for just about any file with numbered lines of columnar data.
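A rough sketch of what that header-driven variant could look like (the function name, arguments, and sample rows below are my own illustration, not code from the Kata):

```python
import string

def min_diff(lines, id_col, a_col, b_col):
    """Find the row with the smallest abs(a - b) gap, locating the
    columns by their names in the file's header line."""
    header = lines[0].split()
    idx = {name: header.index(name) for name in (id_col, a_col, b_col)}
    best_id, best_val = None, None
    for line in lines[1:]:
        fields = line.split()
        # skip separator and footer lines that don't start with a digit
        if not fields or not fields[0][0].isdigit():
            continue
        # strip stray punctuation, like the asterisks in weather.dat
        a = int(fields[idx[a_col]].strip(string.punctuation))
        b = int(fields[idx[b_col]].strip(string.punctuation))
        diff = abs(a - b)
        if best_val is None or diff < best_val:
            best_id, best_val = fields[idx[id_col]], diff
    return best_id, best_val

# Made-up rows in the weather.dat layout: day, max temp, min temp.
sample = [
    "Dy MxT MnT",
    " 1 88  59",
    " 2 79  63*",
    " 3 77  55",
]
print(min_diff(sample, "Dy", "MxT", "MnT"))  # → ('2', 16)
```

This only works cleanly when the header's column positions line up with the data rows', which happens to hold for these files but isn't guaranteed in general.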
I have to admit that the minute I see columnar data like this, awk is the first thing I reach for, so I’m sure this affected my Python solution. The good news there is that my thinking toward columnar data is consistent, and so I treated both files pretty much the same way, making refactoring a 5-minute process.
In all, I enjoyed this Kata. Though I didn’t take it as far as I could have, it did make me think about how it could be improved and made more generic. Those improvements could incur a cost in terms of readability I suppose, but I think for this example it wouldn’t be a problem. I’m working on a larger project now where I have precisely this issue of flexibility vs. readability though.
I’m reengineering a rather gangly application to enable things like pluggable… everything. It talks to networked queues, so the protocol is pluggable. It talks to databases, so the database back end is pluggable, in addition to the actual data processing routines. Enabling this level of flexibility introduces some complexity, and really requires good documentation if we reasonably expect people to work with our code (the project should be released as open source in the coming weeks). Without the documentation, I’m sure I’d have trouble maintaining the code myself!
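As a minimal sketch of the pluggable-back-end idea (every name here is illustrative, not the actual project's code), a registry mapping names to classes lets configuration pick the implementation while the core logic stays ignorant of it:

```python
# Hypothetical plugin registry: back ends register themselves under a
# name, and callers look them up by that name at runtime.
BACKENDS = {}

def register(name):
    def decorator(cls):
        BACKENDS[name] = cls
        return cls
    return decorator

@register("sqlite")
class SQLiteBackend(object):
    def save(self, record):
        return "sqlite:" + record  # stand-in for a real write

@register("postgres")
class PostgresBackend(object):
    def save(self, record):
        return "postgres:" + record

def get_backend(name):
    # the choice of class comes from config, not from the core code
    return BACKENDS[name]()

print(get_backend("sqlite").save("row1"))  # → sqlite:row1
```

The complexity cost shows up exactly where the post says it does: nothing in the core tells a reader what back ends exist or what interface they must implement, so the documentation has to carry that weight.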