Well, I got some feedback on my last post, and I had some time on my hands tonight, and Python is pretty darn easy to use.
As a result, loghetti is making great strides toward becoming a faster log filter. To test the impact of the actual changes I’ve made, I’m asking loghetti to filter only on HTTP response code, and I’m only asking for a count of matching lines. I chose the response code because I happen to know it causes loghetti to skip a lot of processing that once was done per line on every run, but is now done lazily, on an as-requested basis. So, for example, there’s no reason to go parsing out dates and query strings (two costly operations when you’re dealing with large log files) if the user just wants to query on response codes.
Put another way: “Hey, I only want response codes, so why should I have to wait around while you process dates and query strings?”
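For the curious, here’s a rough sketch of what I mean by lazy, as-requested parsing. This is not loghetti’s actual code, and it assumes a combined-format log where the status code is the ninth whitespace-separated field; the point is just that the expensive stuff only happens the first time somebody asks for it:

    import datetime

    class LogLine(object):
        """One Apache combined-format log line, parsed lazily.

        Cheap fields (like the response code) are split out up front;
        expensive ones (timestamp, query string) are only parsed when
        somebody actually asks for them.
        """

        def __init__(self, raw):
            fields = raw.split()
            self.ip = fields[0]
            self.code = fields[8]              # response status: cheap to grab
            self._raw_time = fields[3].lstrip('[')
            self._raw_request = fields[6]
            self._time = None
            self._query = None

        @property
        def time(self):
            # Parse the timestamp only on first access, then cache it.
            if self._time is None:
                self._time = datetime.datetime.strptime(
                    self._raw_time, '%d/%b/%Y:%H:%M:%S')
            return self._time

        @property
        def query(self):
            # Same idea for the query string: only split it if asked.
            if self._query is None:
                _, _, qs = self._raw_request.partition('?')
                self._query = dict(
                    pair.split('=', 1) for pair in qs.split('&') if '=' in pair)
            return self._query

If all the user ever touches is line.code, the strptime call and the query-string splitting never run at all, which is exactly the work I’m trying to avoid.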
So, here’s where I was when this little solo-sprint started:
strudel:loghetti jonesy$ time ./loghetti.py --code=404 --count 1MM.log
Matching lines: 10096
real 5m52.103s
user 5m35.196s
sys 0m3.214s
Almost 6 minutes to process one million lines. For reference, that “1MM.log” file is 246MB in size.
Here’s where I wound up as of about 5 minutes ago:
strudel:loghetti jonesy$ time ./loghetti.py --code=404 --count 1MM.log
Matching lines: 10096
real 3m53.350s
user 3m50.498s
sys 0m1.641s
Hey, looky there! I even got the same result back. Nice!
Ok, so it’s not what you’d call a ‘speed demon’, especially on larger log files. But testing against a 25MB log with 100k lines in it, using the same arguments, took 25 seconds, and at that point it’s at least usable. I’m actually going to be using it for offline processing and reporting, on a machine beefier than my first-generation Intel MacBook Pro, and for that type of thing this works just fine. It’s also easier to run this than to sit around thinking about regular expressions and shell scripts all day.
I’m still not pleased with the performance, especially for simple cases like the one I tested with. I just ran a quick ‘grep | wc -l’ on the file to get the exact same result, and it finished in about half a second! Sure, I don’t mind trading off *some* performance for the flexibility this gives me, but I still think it can be better.
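For a sense of what that half-second grep is really doing, a bare-bones Python version of the same count looks something like this (again assuming combined log format with the status code as the ninth field); no date parsing, no query strings, just a split and a compare:

    def count_status(path, code='404'):
        """Count lines whose response status matches `code`.

        The dumb-but-fast baseline: split each line, look at one field,
        and skip everything else loghetti normally does.
        """
        matches = 0
        with open(path) as logfile:
            for line in logfile:
                fields = line.split()
                if len(fields) > 8 and fields[8] == code:
                    matches += 1
        return matches

    print(count_status('1MM.log', '404'))

That’s roughly the ceiling I’m chasing for the simple cases.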
For now, though, I think I’d rather add s’more features, like comparison operators other than “=”, or ranges of dates and times.
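Nothing is implemented yet, but if anyone’s curious how I’m picturing the comparison-operator bit, here’s a back-of-the-napkin sketch (the names are made up, not loghetti’s API):

    import operator

    # Map operator tokens from the command line to comparison functions.
    OPS = {
        '=':  operator.eq,
        '!=': operator.ne,
        '<':  operator.lt,
        '<=': operator.le,
        '>':  operator.gt,
        '>=': operator.ge,
    }

    def make_filter(field, op_token, value):
        """Return a callable that tests one parsed log line."""
        op = OPS[op_token]
        def test(line):
            return op(getattr(line, field), value)
        return test

    # e.g. something like --code '>=400' could become:
    is_error = make_filter('code', '>=', '400')

The nice part is that a filter like that plugs straight into the lazy line objects above: it only touches the one attribute it compares against.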