Archive for the ‘Uncategorized’ Category

Brain Fried Over NoSQL

Saturday, June 26th, 2010

So, I’m working on a pet project. It’s in stealth mode. Just kidding — I don’t believe in stealth mode 😉

It’s a twitter analytics dashboard that actually does useful things with the mountains of data available from the various Twitter APIs. I’m writing it in Python using Tornado. Here’s the first mockup I ever did for it, just like 2 nights ago:

It’s already a lot of fun. I’ve worked with Tornado before and like it a lot. I have most of the base infrastructure questions answered, because this is a pet project and they’re mostly easy and in some sense “don’t matter”. But that’s what has me stuck.

It Doesn’t Matter

It’s true. Past a certain point, belaboring choices of what tools to use where is pointless and is probably premature optimization. I’ve been working with startups for the past few years, and I’m painfully aware of what happens when a company takes too long to react to their popularity. I want to architect around that at the start, but I’m resisting. It’s a pet project.

But if it doesn’t matter, that means I can choose tools that are going to be fun to dig into and learn about. I’ve been so busy writing code to help avoid or buffer impact to the database that I haven’t played a whole lot with the NoSQL choices out there, and there are tons of them. And they all have a different world view and a unique approach to providing solutions to what I see as somewhat different problems.

Why NoSQL?

Why not? I’ve been working with relational database systems since 1998. I’ve worked on large data reporting projects, a couple of huge data warehousing projects, and financial transaction systems; I worked for Sybase as a consulting DBA and project manager for a while; I was into MySQL and PostgreSQL by 2000 and used them in production environments starting around 2001-02… I understand them fairly well. I also understand BDB and other “flat-file” databases and object stores. SQLite has become unavoidable in the past few years as well. It’s not like I don’t understand the compromises I’m making going to a NoSQL system.

There’s a good bit of talk from the RDBMS camp (seriously, why do they need their own camp?) about why NoSQL is bad. Lots of people who know me would put me in the RDBMS camp, and I’m telling you not to cry yourself to sleep out of guilt over a desire to get to know these systems. They’re interesting, and they solve some huge issues surrounding scalability with greater ease than an RDBMS.

Like what? Well, cost for one. If I could afford Oracle I’d sooner use that than go NoSQL in all likelihood. I can’t afford it. Not even close. Oracle might as well charge me a small planet for their product. It’s great stuff, but out of reach. And what about sharding? Sharding a relational database sucks, and to try to hide the fact that it sucks requires you to pile on all kinds of other crap like query proxies, pools, and replication engines, all in an effort to make this beast do something it wasn’t meant to do: scale beyond a single box. All this stuff also attempts to mask the reality that you’ve also thrown your hands in the air with respect to at least 2 letters that make up the ACID acronym. What’s an RDBMS buying you at that point? Complexity.

And there’s another cost, by the way: no startup I know has the kind of enormous hardware that an enterprise has. They have access to commodity hardware. Pizza boxes. Don’t even get me started on storage. I’ve yet to see SSD or flash storage at a startup. I currently work at MyYearbook.com, and there are some pretty hefty database servers there, but it can hardly be called a startup anymore. Hell, they’re even profitable! 😉

Where Do I Start?

One nice thing about relationland is I know the landscape pretty well. Going to NoSQL is like dropping me in a country I’ve never heard of where I don’t really speak the language. I have some familiarity with key-value stores from dealing with BDB and Memcache, and I’ve played with MongoDB a bit (using pymongo), but that’s just the tip of the iceberg.

I heard my boss mention Tokyo Tyrant a few times, so I looked into it. It seems to be one of the more obscure solutions out there from the standpoint of adoption, community, documentation, etc., but it does appear to be very capable on a technical level. However, my application is going to be number-heavy, and I’m not going to need to own all of the data required to provide the service. I can probably get away with just incrementing counters in Memcache for some of this work. For persistence I need something that will let me do aggregation *FAST* without having to create aggregation tables, ideally. Using a key/value store for counters really just seems like a no-brainer.
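To make that concrete, here’s roughly all I have in mind for the counter bit: a minimal sketch using the python-memcached client (the client choice and the key scheme are just illustrative, not a settled design):

import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def bump_counter(metric, user_id):
    """Increment a per-user counter, e.g. 'mentions:jonesy'."""
    key = '%s:%s' % (metric, user_id)
    mc.add(key, 0)      # no-op if the key already exists
    return mc.incr(key)

bump_counter('mentions', 'jonesy')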

That said, I think what I’ve decided to do, since it doesn’t matter, is punt on this decision in favor of getting a working application up quickly.

MySQL

Yup. I’m going to pick one or two features of the application to implement as a ‘first cut’, and back them with a MySQL database. I know it well, Tornado has a built-in interface for it, and it’s not going to be a permanent part of the infrastructure (otherwise I’d choose PostgreSQL in all likelihood).
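For the curious, the built-in interface I mean is the tornado.database module, and a first cut really is just a few lines. A rough sketch (the database name, table, and credentials are made up):

import tornado.database

db = tornado.database.Connection(
    host='127.0.0.1:3306', database='twitalytics',
    user='jonesy', password='secret')

# rows come back as dict-like objects with attribute access
for row in db.query('SELECT screen_name, mention_count FROM mentions '
                    'WHERE mention_count > %s', 10):
    print row.screen_name, row.mention_count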

To be honest, I don’t think the challenges in bringing this application to life are really related to the data model or the engine/interface used to access it (though if I’m lucky that’ll be a major part of keeping it alive). No, the real problem I’m faced with is completely unrelated to these considerations…

Twitter’s API Service

Not the API itself, per se, but the service providing access to it, and the way it’s administered, is going to be a huge challenge. It’s not just the Twitter website that’s inconsistent; the API service goes right along with it. Not only that, but the type of data I really need to make this application useful isn’t immediately available from the API as far as I can tell.

Twitter maintains rate limits on the API. You can only make so many calls over so short a period of time. That alone makes providing an application like this to a lot of people a bit of a challenge. Compounding the issue is that, when there are failwhales washing up on the shores, those limits can be dynamically decreased. Ugh.

I guess it’s not a project for the faint of heart, but it’ll drive home some golden rules that are easy to neglect in other projects, like planning for failure (of both my application, and Twitter). Also, it’ll be a lot of fun.

Python IDE Frustration

Thursday, May 13th, 2010

I didn’t think I was looking for a lot in an IDE. Turns out what I want is impossibly hard to find.

In the past 6 months I’ve tried (or tried to try):

  • Komodo Edit
  • Eclipse w/ PyDev
  • PyCharm (from the first EAP build to… yesterday)
  • Wingware
  • Textmate

Wingware

First, let’s get Wingware out of the way. I’m on a Mac, and if you’re not going to develop for the Mac, I’m not going to pay you hundreds of dollars for your product. Period. I don’t even use free software that requires X11. Lemme know when you figure out that coders like Macs and I’ll try Wingware.

Komodo Edit

Well, I wanted to try Komodo IDE, but I downloaded it, launched it once for 5 minutes (maybe less), forgot about it, and now my trial is over. I’ll email sales about this tomorrow. In the meantime, I use Komodo Edit.

Komodo Edit is pretty nice. One thing I like about it is that it doesn’t really go overboard forcing its world view down my throat. If I’m working on bunny, which is a one-file Python project I keep in a git repository, I don’t have to figure out their system for managing projects. I can just “Open File” and use it as a text editor.

It has “ok” support for Vi key bindings, and it’s not a plugin: it’s built in. The support has some annoying limitations, but for about 85% of what I need it to do it’s fine. One big annoyance is that I can’t write out a file and assign it a name (e.g. ‘:w /some/filename.txt’). It’s not supported.

Komodo Edit, unless I missed it, doesn’t integrate with Git, and doesn’t offer a Python console. Its capabilities in the area of collaboration in general are weak. I don’t absolutely have to have them, but things like that are nice for keeping focused and not having to switch away from the window to do anything else, so ideally I could get an IDE that has this. I believe Komodo IDE has these things, so I’m looking forward to trying it out.

Komodo is pretty quick compared to most IDEs, and has always been rock solid stable for me on both Mac and Linux, so if I’m not in the mood to use Vim, or I need to work on lots of files at once, Komodo Edit is currently my ‘go-to’ IDE.

PyCharm

PyCharm doesn’t have an officially supported release. I’ve been using Early Adopter Previews since the first one, though. When it’s finally stable I’m definitely going to revisit it, because to be honest… it’s kinda dreamy.

Git integration is very good. I used it with GitHub without incident for some time, but these are early adopter releases, and things happen: two separate EAP releases of PyCharm made my project files completely disappear without warning, error, or any indication that anything was wrong at all. Of course, this is git, so running ‘git checkout -f’ brought things back just fine, but it’s unsettling, so now I’m just waiting for the EAP to be over with and I’ll check it out when it’s done.

I think for the most part, PyCharm nails it. This is the IDE I want to be using assuming the stability issues are worked out (and I don’t have reason to believe they won’t be). It gives me a Python console, VCS integration, a good class and project browser, some nice code analytics, and more complex syntax checking that “just works” than I’ve seen elsewhere. It’s a pretty handsome, very intuitive IDE, and it leverages an underlying platform whose plugins are available to PyCharm users as well, so my Vim keys are there (and, by the way, the IDEAVim plugin is the most advanced Vim support I’ve seen in any IDE, hands down).

Eclipse with PyDev

One thing I learned from using PyCharm and Eclipse is that where tools like this are concerned, I really prefer a specialized tool to a generic one with plugins layered on to provide the necessary functionality. Eclipse with PyDev really feels to me like a Java IDE that you have to spend time laboriously chiseling, drilling, and hammering to get it to do what you need if you’re not a Java developer. The configuration is extremely unintuitive, with a profuse array of dialogs, menus, options, options about options and menus, menus about menus and options… it never seems to end.

All told, I’ve probably spent the equivalent of 2 working days mucking with Eclipse configuration, and I’ve only been able to get it “pretty close” to where I want it. The Java-loving underpinnings of the Eclipse platform simply cannot be suppressed, while things I had to layer on with plugins don’t show up in the expected places.

Add to this Eclipse’s world-view, which reads something like “there is no filesystem tree: only projects”, and you have a really damned annoying IDE. I’ve tried on and off for over a year to make friends with Eclipse because of the good things I hear about PyDev, but it just feels like a big hacky, duct-taped mess to me, and if PyCharm has proven anything to me, it’s that building a language specific IDE on an underlying platform devoted to Java doesn’t have to be like this. When I finally got it to some kind of usable point, and after going through the “fonts and colors” maze, it turns out the syntax highlighting isn’t really all that great!

A quick word about Vi key bindings in Eclipse: it’s not a pretty picture, but the best I’ve been able to find is a free tool called Vrapper. It’s not bad. I could get by with Vrapper, but I don’t believe it’s as mature and evolved as IDEAVim plugin in PyCharm.

So, I’ll probably turn back to Eclipse for Java development (I’m planning on taking on a personal Android project), but I think I’ve given up on it for anything not Java-related.

Vim

Vim is technically ‘just an editor’, but it has some nice benefits, and with the right plugins, it can technically do all of the things a fancy IDE can. I use the taglist plugin to provide the project and class browser functionality, and the kicker here is that you can actually switch to the browser pane, type ‘/’ and the object or member you’re looking for, and jump to it in a flash. It’s also the most complete Vim key binding implementation available 😉

The big win for me in using Vim though is remote work. Though I’d rather do all of my coding locally, there are times when I really have to write code on remote machines, and I don’t want to go through the rigmarole of coding, pushing my changes, going to my terminal, pulling down the changes, testing, failing, fixing the code on my machine, pushing my changes, pulling my changes… ugh.

So why not just use Vim? I could do it. I’ve been using Vim for many years and am pretty good with it, but I just feel like separating my coding from my terminal whenever I can is a good thing. I don’t want my code to look like my terminal, nor do I want my terminal to look like my IDE theme. I’m SUPER picky about fonts and colors in my IDE, and I’m not that picky about them in my terminal. I also want the option of using my mouse while I’m coding, mostly to scroll, and getting that to work on a Mac in Terminal.app isn’t as simple as you might expect (and I’m not a fan of iTerm… and its ability to do this comes at a cost as well).

MacVim is nice, solves the separation of Terminal and IDE, and I might give it a more serious try, but let’s face it, it’s just not an IDE. Code completion is still going to be mediocre, the interface is still going to be terminal-ish… I just don’t know. One thing I really love though is the taglist plugin. I think if I could just find a way to embed a Python console along the bottom of MacVim I might be sold.

One thing I absolutely love about Vim, the thing that Vim gets right that none of the IDEs get is colorschemes: MacVim comes with like 20 or 30 colorschemes! And you can download more on the ‘net! The other IDEs must lump colorscheme information into the general preferences or something, because you can’t just download a colorscheme as far as I’ve seen. The IDE with the worst color/font configuration? Eclipse – the one all my Python brethren seem to rave about. That is so frustrating. Some day I’ll make it to PyCon and someone will show me the kool-aid I guess.

The Frustrating Conclusion

PyCharm isn’t soup yet, Wingware is all but ignoring the Mac platform, Eclipse is completely wrong for my brain and I don’t know how anyone uses it for Python development, Komodo Edit is rock solid but lacking features, and Komodo IDE is fairly pricey and a 30-day trial is always just really annoying (and I kinda doubt it beats PyCharm for Python-specific development). MacVim is a stand-in for a real IDE and it does the job, but I really want more… integration! I also don’t like maintaining the plugins and colorschemes and *rc files and ctags, and having to understand its language and all that.

I don’t cover them here, but I’ve tried a bunch of the Linux-specific Python IDEs as well, and I didn’t like a single one of them at all. At some point I’ll spend more time with those tools to see if I missed something crucial that, once learned, might make it hug my brain like a warm blanket (and make me consider running Linux on my desktop again, something I haven’t done on a regular ongoing basis in about 4 years).

So… I don’t really have an IDE yet. I *did* however just realize that the laptop I’m typing on right now has never had a Komodo IDE install, so I’m off to test it now. Wish me luck!

PyTPMOTW: PsycoPG2

Wednesday, April 21st, 2010

What is this module for?

Interacting with a PostgreSQL database in Python.

What is PostgreSQL?

PostgreSQL is an open source relational database product. It has some more advanced features, like built-in networking-related and GIS-related datatypes, the ability to script stored functions in multiple languages (including Python), etc. If you have never heard of PostgreSQL, get out from under your rock!

Making Contact

Using the psycopg2 module to connect to a PostgreSQL database couldn’t be simpler. You can use the module’s connect() function, passing in either the individual arguments required to make contact (dbname, user, etc.), or one long “DSN” string, like this:

import psycopg2
import psycopg2.extensions

dsn = "host=localhost port=6000 dbname=testdb user=jonesy"
conn = psycopg2.connect(dsn)
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

The DSN value is a space-delimited collection of key=value pairs, which I construct before sending the dsn to the psycopg2.connect() method. Once we have a connection object, the very first thing I do is set the connection’s isolation level to ‘autocommit’, so that INSERT and UPDATE transactions are committed automatically without my having to call conn.commit() after each transaction. There are several isolation levels defined in the psycopg2.extensions package, and they’re defined in ‘extensions’ because they go beyond what is defined in the DB API 2.0 spec that is typically used as a reference in creating Python database modules.
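To see why I bother with autocommit at all, here’s what the default behavior looks like: nothing is persisted until you call commit() yourself (a minimal sketch using the accounts table from the examples below):

conn = psycopg2.connect(dsn)
cur = conn.cursor()
cur.execute("INSERT INTO accounts (fname, lname) VALUES (%s, %s)",
            ('jonesy', 'example'))
conn.commit()   # without autocommit, the INSERT isn't committed until this call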

Simple Queries and Type Conversion

In order to get anything out of the database, we have to know how to talk to it. Of course this means writing some SQL, but it also means sending query arguments in a format understood by the database. I’m happy to report that psycopg2 does a pretty good job of making things “just work” when it comes to converting your input into PostgreSQL types, and converting the output directly into Python types for easy manipulation in your code. That said, understanding how to properly use these features can be a bit confusing at first, so let me address the source of a lot of early confusion right away:

cur = conn.cursor()
# note that the parameters are passed as a sequence -- a one-element tuple here
cur.execute("""SELECT id, fname, lname, balance FROM accounts WHERE balance > %s""", (min_balance,))

Chances are, min_balance is an integer, but we’re using ‘%s’ anyway. Why? Because this isn’t really you telling Python to do a string formatting operation, it’s you telling psycopg2 to convert the incoming data using the default psycopg2 method, which converts integers into the PostgreSQL INT type. So, you can use “%s” in the ‘execute()’ method to properly convert integers, strings, dates, datetimes, timedeltas, lists, tuples and most other native Python types to a corresponding PostgreSQL type. There are adapters built into psycopg2 as well if you need more control over the type conversion process.
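To illustrate, here’s a quick sketch of letting psycopg2 adapt several different Python types in one call (the table and columns are made up, and note that the parameters go in as a tuple):

import datetime

cur.execute("""INSERT INTO accounts (fname, lname, balance, created, tags)
               VALUES (%s, %s, %s, %s, %s)""",
            ('jonesy', 'example', 250, datetime.date.today(), ['vip', 'beta']))

The integer becomes an INT, the date becomes a DATE, and the Python list is adapted to a PostgreSQL array, all without any explicit casting on my part.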

Cursors

Psycopg2 makes it pretty easy to get your results back in a format that is easy for the receiving code to deal with. For example, the projects I work on tend to use the RealDictCursor type, because the code tends to require accessing the parts of the resultset rows by name rather than by index (or just via blind looping). Here’s how to set up and use a RealDictCursor:

import psycopg2.extras

curs = conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)
curs.execute("SELECT id, name FROM users")
rs = curs.fetchall()
for row in rs:
    print row['id'], row['name']

It’s possible you have two sections of code that’ll rip apart a result set, and one needs by-name access, and the other just wants to loop blindly or access by index number. If that’s the case, just replace ‘RealDictCursor’ with ‘DictCursor’, and you can have it both ways!
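A quick sketch of having it both ways with DictCursor; the rows support both by-name and by-index access:

import psycopg2.extras

curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
curs.execute("SELECT id, name FROM users")
for row in curs.fetchall():
    print row['name']   # by-name access for the picky code...
    print row[0]        # ...and by-index access for the blind loopers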

Another nice thing about psycopg2 is the cursor.query attribute and the cursor.mogrify() method. Mogrify lets you see how a query will look after all input variables are bound, but before the query is sent to the server. Cursor.query holds the exact query that was actually sent over the wire. I use cursor.query in my logging output all the time to catch out-of-order parameters, mismatched input types, and the like. Here’s an example:

try:
    curs.callproc('myschema.myprocedure', callproc_params)
except Exception as out:
    print out
    print curs.query
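And here’s mogrify() in action; it hands back the query with the parameters bound without ever touching the server (using the query from the earlier example):

print curs.mogrify(
    "SELECT id, fname, lname, balance FROM accounts WHERE balance > %s",
    (min_balance,))
# prints something like:
# SELECT id, fname, lname, balance FROM accounts WHERE balance > 500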

Calling Stored Functions

Stored procedures, or ‘functions’ in PostgreSQL-speak, can be immensely useful in large, complex applications where you want to enforce business rules in a single place outside the domain of the main application developers. It can also in some cases be more efficient to put functionality in the database than in the main application code. In addition, if you’re hiring developers, they should develop in the standard language for your environment, not SQL: SQL should be written by the database administrators and database developers, and exposed as functions, so all the application developers have to do is call the newly-exposed function. Here’s how to call a function using psycopg2:

callproc_params = [uname, fname, lname, uid]
cur.callproc("myschema.myproc", callproc_params)

The first argument to ‘callproc()’ is the name of the stored procedure, and the second argument is a sequence holding the input parameters to the function. The input parameters should be in the order that the stored procedure expects them, and I’ve found after quite a bit of usage that the module typically is able to convert the types perfectly well without my intervention, with one exception…

The UUID Array

PostgreSQL has built-in support for lots of interesting data types, like INET types for supporting IP addresses and CIDR network blocks, and GIS-related data types. In addition, PostgreSQL supports arrays of UUIDs. This comes in handy if you use a UUID to identify items and want to store an array of them to associate with an order, or you use UUIDs to track messages and want to store an array of them together to represent a message thread or conversation. Getting a UUID array into the database is really not too difficult. If you have a list of UUID strings, you can do a quick conversion, call one function, and then use the array like any other input parameter:

import uuid
import psycopg2.extras

# convert the list of UUID strings into uuid.UUID objects
my_uuid_arr = [uuid.UUID(i) for i in my_uuid_arr]
# teach psycopg2 how to adapt UUID objects to the PostgreSQL UUID type
psycopg2.extras.register_uuid()

callproc_params = [
    myvar1,
    myvar2,
    my_uuid_arr
]

curs.callproc('myschema.myproc', callproc_params)

Connection Status

It’s not a given that your database connection lives on from query to query, and you shouldn’t really just assume that because you did a query a fraction of a second ago that it’s still around now. Actually, to speak about things more Pythonically, you *should* assume the connection is still there, but be ready for failure, and check the connection status to diagnose and help get things back on track. You can check the ‘status’ attribute of your connection object. Here’s one way you might do it:

    @property
    def active_dbconn(self):
        # True if the connection is idle and ready, or inside a transaction
        return self.conn.status in (psycopg2.extensions.STATUS_READY,
                                    psycopg2.extensions.STATUS_BEGIN)

So, I’m assuming here that you have some object with a connection object that it refers to as ‘self.conn’. This one-liner property uses the built-in @property decorator, so the other methods in the class can either check the connection status before attempting a query:

if self.active_dbconn:
    try:
        curs.execute(...)
    except Exception as out:
        logging.error("Houston we have a problem")

Or you can flip that around like this:

try:
    curs.execute(...)
except Exception as out:
    if not self.active_dbconn:
        logging.error("Execution failed because your connection is dead")
    else:
        logging.error("Execution failed in spite of live connection: %s" % out)
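And if you want to actually get things back on track rather than just log the failure, a rough sketch of a reconnect method might look like this (it assumes the object also kept the original DSN string around as self.dsn, which my earlier examples didn’t show):

    def reconnect(self):
        if not self.active_dbconn:
            self.conn = psycopg2.connect(self.dsn)
            self.conn.set_isolation_level(
                psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)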

Read On…

A database is a large, complex beast. There’s no way to cover the entirety of a database or a module that talks to it in a simple blog post, but I hope I’ve been able to show some of the more common features, and maybe one or two other items of interest. If you want to know more, I’m happy to report that, after a LONG time of being unmaintained, the project has recently sprung back to life and is pretty well-documented these days. Check it out!

Way Better Python Indentation In Vim

Friday, November 13th, 2009

Just wanted to share this tip I just stumbled across:

If you code in Python, and in your ~/.vimrc file you have ‘set smartindent’, change that to ‘set nosmartindent’ and add the following two lines (I’m told it can be combined into a one-liner, but I use two lines, so…)

filetype plugin on
filetype indent on

The indentation support for Python is much better. What I need to do now is write a little vim-fu that will tell vim to decrease the line indent if I enter two newlines or something, since there are no block delimiters (like braces or brackets) to tell vim about the block level.

Python, Creating XML, Recursion, and Order

Wednesday, November 4th, 2009

I love being challenged every day. Today I ran across a challenge that has several solutions. However, most of them are hacks, and I never feel like I really solved the problem when I implement a hack. There’s also that eerie feeling that this hack is going to bite me later.

I have an API I need to talk to. It’s pure XML-over-HTTP (henceforth XHTTP). Actually, the first issue I had was just dealing with XHTTP. All of the APIs I’ve dealt with in the past were either XMLRPC or SOAP, both of which have ready-made libraries ready to use, or they used GET parameters and the like, which are pretty simple to deal with. I’ve never had to actually construct an HTTP request and just POST raw XML.

It’s as easy as it should be, really. I code in Python, so once you actually have an XML message ready to send, you can use urllib2 to send it in about 2 lines of code.
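For the record, the POST really is about this much code with urllib2 (the URL is made up, and qxml is the XML string produced further down in this post):

import urllib2

request = urllib2.Request('http://api.example.com/xhttp',
                          data=qxml,
                          headers={'Content-Type': 'text/xml'})
response = urllib2.urlopen(request)
print response.read()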

The more interesting part is putting the request together. I thought I had this beat. I decided to use the xml.dom.minidom module and create my Document object by going through the DOMImplementation object, because it was the only thing I found that allowed me to actually specify a DTD programmatically instead of hackishly tacking on hard-coded text. No big deal.

Now that I had a document, I needed to add elements to it. Some of the XML queries I’m sending get long enough that writing a function to manually create every element, add it to the document, create the text node, and add that to the proper element is amazingly tedious. I really wish Python had something like PHP’s XMLWriter, which lets you create an element and its text in one line of code.

Tedium drives me nuts, so rather than code this all out, I decided to create a dictionary that mirrored the structure of my query, with the data for the text nodes filled in by variables.

query_params = {
    'super_special_query': {
        'credentials': {'username': user, 'password': password, 'data_realm': realm},
        'result_params': {'num_results': setsize, 'order_by': order_by},
        query_type: query_dict
    }
}

from xml.dom.minidom import getDOMImplementation

def makeDoc():
    impl = getDOMImplementation()
    dt = impl.createDocumentType("super_special", None, 'super_special.dtd')
    doc = impl.createDocument(None, "super_special", dt)
    return doc

def makeQuery(doc, query_params, tag=None):
    """
        @doc is an xml.dom.minidom.Document object
        @query_params is a dictionary structure that mirrors the structure of the xml.
        @tag is used in recursion to keep track of the node to append things to next time through.
    """

    if tag is None:
        root = doc.documentElement
    else:
        root = tag

    for key, value in query_params.iteritems():
        tag = doc.createElement(key)
        root.appendChild(tag)
        if isinstance(value, dict):
            # recurse, appending the next level of elements to this one
            makeQuery(doc, value, tag)
        else:
            tag_txt = doc.createTextNode(value)
            tag.appendChild(tag_txt)

    return doc.toxml()

doc = makeDoc()
qxml = makeQuery(doc, query_params)

This is simplistic, really. I don’t need to deal with attributes in my queries, for example. But it is generic enough that if I need to send different types of queries, all that’s required is creating another dictionary to represent it, and passing that through the same makeQuery function to create the query.

Initial testing indicated success, but that’s why you can’t rely on only simple initial tests. Switching things up immediately exposed a problem: The API server validated my query against a DTD that enforced strict ordering of the elements, and Python dictionaries do not have the same notion of “order” that you and I do.

So there’s the code. If nothing else, it’s a less-contrived example of what you might actually use recursion for. Tomorrow I have to figure out how to enforce the ordering. One idea is to have a separate list to consult for the ordering of the elements. It requires an extra outer loop to go through the list, get the name of the next tag, then use that value to ask the dictionary for any related values. Seemed like a good way to go, but I had a bit of difficulty figuring out how to make that work at all. Maybe with fresh eyes in the AM it’ll be more obvious — that happens a lot, and is always a nice surprise.
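To make the idea concrete, here’s roughly what I’m picturing: an ordering list consulted by the outer loop (very much an untested sketch, with made-up element names, and it punts on ordering the nested levels):

element_order = ['credentials', 'result_params', 'search_query']

def makeOrderedQuery(doc, query_params, order, tag=None):
    root = doc.documentElement if tag is None else tag
    for key in order:
        if key not in query_params:
            continue
        value = query_params[key]
        child = doc.createElement(key)
        root.appendChild(child)
        if isinstance(value, dict):
            # a real version would need an ordering list for each nested level, too
            makeOrderedQuery(doc, value, list(value), child)
        else:
            child.appendChild(doc.createTextNode(value))
    return doc.toxml()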

Ideas, as always, are hereby solicited!

Shell Scripting: Bash Arrays

Thursday, January 15th, 2009

I’m actually not a huge fan of shell scripting, in spite of the fact that I’ve been doing it for years, and am fairly adept at it. I guess because the shell wasn’t really intended to be used for programming per se, it has evolved into something that sorta kinda looks like a programming language from a distance, but gets to be really ugly and full of inconsistencies and spooky weirdness when viewed up close. This is why I now recode in Python where appropriate and practical, and just about all new code I write is in Python as well.

One of my least favorite things about Bash scripting is arrays, so here are a few notes for those who are forced to deal with them in bash. 

First, to declare an array variable, you can assign directly to a variable name, like this: 

myarr=('foo' 'bar' 'baz')

Or, you can use the ‘declare’ bash built-in: 

declare -a myarr=('foo' 'bar' 'baz')

The ‘-a’ flag says you want to declare an array. Notice that when you assign elements to an array like this, you separate the elements with spaces, not commas. 

Arrays in bash are zero-indexed, so to echo the value of the first element of myarr, we do this: 

echo ${myarr[0]}

Now that you have an array, and it has values, at some point you’ll want to loop over it and do something with each value in the array. Almost anyone who utilizes an array will at some point want to do this. There’s a little bit of confusion for the uninitiated in this area. For whatever reason, there is more than one way to list out all of the elements in an array. What’s more, the two different ways act differently if they are used inside of double quotes (wtf?). To illustrate, cut-n-paste this to a script, and then run the script:

#!/bin/bash
myarr=('foo' 'bar' 'baz')
echo ${myarr[*]}
echo ${myarr[@]}
echo "${myarr[*]}"
echo "${myarr[@]}" # looks just like the previous line's output
for i in "${myarr[*]}"; do # echoes one line containing all three elements
   echo $i
done
for i in "${myarr[@]}"; do  # echoes one line for each element of the array.
   echo $i
done

Odd but true. The “@” expands each element of the array to its own “word”, while the “*” expands the entire set of elements to a single word. 

Another oddity — to get just a count of the elements in the array, you do this: 

echo ${#myarr[*]} 

Of course, this also works: 

echo ${#myarr[@]}

And the funny thing here is, these two methods do not appear to produce different results when inside of double quotes. I’d be hard pressed, of course, to figure out a use for counting the entire set of array elements as “1”, but it still seems a little inconsistent. 

Also note that you don’t have to count the elements in the array – you can count the length of any element in the array, too: 

echo ${#myarr[0]} 

That’ll return 3 for the array we defined above. 

Have fun!

WordPress 2.7 – Ahhhhhh!

Thursday, December 11th, 2008

I guess WordPress doesn’t consider the changes they’ve made in 2.7 (released today) to be big enough to warrant a change to the major version number (which would make it 3.0). However, there are a few features now built-in that I’ve been dreaming about for so long that simply incrementing the second number seems to sell this version short. At least they named it after one of my favorite jazz musicians. This release is called “Coltrane”. Nice.

My top two feature requests: Check!

First and foremost, the number one thing on my list of desired features is now a reality: I can make bulk changes to the categories of my posts. So, when I add a category to WordPress, and then realize that lots of my old posts really belong there, I don’t have to go searching around and changing them by hand. I still might take a stab at doing back-end automation here, by scripting a tool that’ll search the content of all of my posts, and if the content has, say, 2 out of 3 terms in my search criteria, it’ll add the post to the category, using whatever database trickery is necessary. However, this solves almost all of my needs (save my need to hack things, sometimes for its own sake).

The other feature I’ve been wanting for a long time is also now a reality: replying to comments without having to go to the post page to do it. You can now moderate and reply to comments right in the dashboard.

This, for me, is huge. I’ve been waiting for these two particular features since about 2005.

More Baked-in Goodness

Some other niceties are now built-in that used to be addon modules in WordPress, which is great, because I’m always worried about third-party modules breaking and being abandoned as new WP releases come out. The nicest for me, as someone who maintains their own wp install, is the automated WP upgrade. Used to be an addon, now built in.

Another nice feature, if you *are* someone who doesn’t mind third party modules, is that now you can browse available modules, and install them, without leaving the wp interface.

Yes, another complete redesign

The admin interface has been completely overhauled, again. The last time they did this, a buddy and I discussed it, and although he felt one or two things were nicer, I felt that they had not addressed the biggest problems with the interface. Well, they fixed it by doing something I didn’t actually expect: they admitted defeat.

Instead of overhauling the interface, they’ve empowered the user to do it for themselves. Want the editor to fit the width of the browser window? No problem. Never use all of those features in the editing interface? Get rid of them. Only just noticed all those news items in the dashboard? Make them more prominent. You can do all of this by dragging and dropping things around, or collapsing them to ‘icon-only’ view.

I am writing this in 2.7, and in the editor interface, I definitely feel like more of what I need is readily available instead of buried somewhere in the countless blocks and sections and whatnot – which reminds me that there’s also a new (and quite nice) menu interface – also a part of the interface you can customize.

Check out the video and notes on the WordPress site. The tour video does a great job of giving a quick rundown of the new features I’ve mentioned here, and lots and lots of features I *didn’t* cover.

MySQL Problem and Solution Posts: r0ck.

Tuesday, November 18th, 2008

Taming MySQL is… challenging. Especially in very large, fast-growth, ‘always-on’ environments. It’s one of those things where you seemingly can never know all there is to know about it. That’s why I really like coming across posts like this one from FreshBooks that describes a very real problem that was affecting their users, how they dealt with it, why *that* failed, and what the final fix was. Post a link to your favorite MySQL Problem and Solution post in the comments (oh yeah, and “subscribe to comments” should be working now!)

Microsoft Makes Progress, but Still Misses the Point

Monday, July 28th, 2008

I’m still digesting certain parts of OSCON 2008, but I think I’ve finally settled on a conclusion for my thoughts on Sam Ramji’s presentation of Microsoft’s contributions to open source projects, and their progress toward rethinking how they do business to be more friendly toward competing platforms and technologies, so that other platforms can integrate with Microsoft products more easily.

I think the talk should’ve been entitled “What Microsoft Has Done for You Lately”. To Microsoft’s credit, they have, in fact, done things that would’ve been unimaginable just a few years ago. However, I think they sort of still miss the point — or they didn’t and just don’t know how to do anything about it.

One must admit that Microsoft has made strides in making various technologies run in a Windows environment. PHP can be run on IIS, and it’s a supported language in their development tools. They’ve opened up the SMB/CIFS spec so that the Samba team can create more robust tools to integrate Windows and UNIX environments, and they’ve become a major sponsor of the Apache Software Foundation. Things like this are nice, but I can’t really accept them as any kind of long-term, reliable solution. Sure, they’ve opened up the SMB spec, but what happens when Microsoft decides to change the spec, or they decide to deprecate SMB altogether?

The answer is that we’re left in the dust, and that leads to my point. The major difference between opening up a spec and opening up the source code is that if you open the spec and then decide to go in another direction, we’re left in the lurch. If you open the source code and decide to change directions, the community building tools around that functionality can decide for themselves to either change directions or fork and exec, so to speak. The real point here is that Microsoft is under no obligation to involve any of the communities with the Open Source world in discussions about their direction with regards to any of the technologies they use. They’re a closed-source, proprietary company, with a fiduciary responsibility to look out for their investors and their bottom line. If they see an opportunity to make more money because more customers want to see them do something that requires something other than SMB, they’re free to do that, and if I were a shareholder, I think I’d want them to do it, because I’d want my stock price to go higher.

I have yet to see a corporate entity involve the community in their direction with regards to technology. I have seen lots of hand-waving, a lot of lip service, and a lot of people speaking at conferences, mostly trying to hide really big elephants behind a well-worded (but otherwise empty) speech, at least vetted, if not written by, marketing and PR folks. Talks like this are pretty transparent.

I think the issue is a hard one. How do you involve a community mostly unconcerned about financial matters (or, at least, mostly not involved with them directly on a regular basis), in a decision-making process that necessarily involves coming to a solution that is “profitable”? Well, actually, I think the issue starts at a higher level than that. The real trouble is a situation which was beautifully illustrated in a talk by Robert “r0ml” Lefkowitz at this year’s OSCON: the corporations are viewing these interactions as taking place between “thinkers” and “doers”, instead of two “thinker” parties which happen to have different philosophies about how things should be done. Given the numerous cultural differences between the groups, one could forgive the corporate types for making the error (by the way, if anyone finds the “Praxis/Techne” talk by r0ml online in video format – post the link! There *was* a video camera there!)

Of course, specific to Microsoft, there are other issues besides simple matters of tech-level software interaction. There’s the issue of software patents, and Microsoft’s “promise” not to sue. There’s also the issue of standards, and Microsoft’s attempts to either own them, or destroy those which it can’t own. These are deep, cultural issues at Microsoft that no single person giving a talk at OSCON is in a position to solve (or to convince me they’ve solved). If Microsoft really does want to interact with the community and interoperate with our software, they should probably be prepared to spend years and dollars just to earn some level of trust. I sincerely hope that they are sincerely trying to open up, but I can’t help but have my doubts.

OSCON Day 1: The BoF Board, for your perusal

Monday, July 21st, 2008

I’ve posted a picture of the BoF board for day 1. Click on it to see bigger sizes. The full size image (maybe smaller) is perfectly suitable for reading at your leisure. I’ll update this if/when I see significant changes to it:

[Image: IMG_4477.JPG (the day 1 BoF board)]