I have a small EC2 instance running with a 25GB EBS volume attached. It has a database on it that I need to manipulate by doing things like dropping indexes and creating new ones, on rather large tables (multi-GB, millions of rows). After one DROP INDEX operation ran all day without finishing, I killed it and tried to see what was going on. Here are the results of the first 10 minutes of testing:
-bash-3.2# dd if=/dev/zero of=/vol/128.txt bs=128k count=1000
1000+0 records in
1000+0 records out
131072000 bytes (131 MB) copied, 0.818328 seconds, 160 MB/s
This looks great. I’d love to get 160MB/s all the time. But wait! There’s more!
-bash-3.2# dd if=/dev/zero of=/vol/128.txt bs=128k count=100000
dd: writing `/vol/128.txt': No space left on device
86729+0 records in
86728+0 records out
11367641088 bytes (11 GB) copied, 268.191 seconds, 42.4 MB/s
Ok, well… that’s completely miserable. Let’s try something in between.
-bash-3.2# dd if=/dev/zero of=/vol/128.txt bs=128k count=10000
10000+0 records in
10000+0 records out
1310720000 bytes (1.3 GB) copied, 15.4684 seconds, 84.7 MB/s
So throughput gets cut roughly in half when the number of 128k blocks is increased 10x, and cut in half again by the time the volume fills up. This kinda sucks. I'll keep plugging along, but if anyone has hints or clues, let me know. If this is the way it's going to be, then this is no place to run a production, IO-intensive database (100,000s and maybe millions of writes per day, on top of reads).
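If anyone wants to reproduce this, here's a rough sketch of the sweep as a loop. It assumes bash and GNU coreutils dd; the conv=fdatasync flag is my own addition so dd flushes to disk before reporting its timing and the short runs aren't skewed by caching. Adjust VOL to wherever your EBS volume is mounted, and note the biggest run may fill the volume just like mine did.

#!/bin/bash
# Sweep sequential write sizes against the EBS mount and report
# the throughput line dd prints for each run.
VOL=/vol        # mount point of the EBS volume (adjust as needed)
BS=128k
for COUNT in 1000 10000 100000; do
    OUT="$VOL/dd-test-$COUNT.bin"
    echo "=== bs=$BS count=$COUNT ==="
    # dd writes its summary to stderr; grab just the final timing line.
    dd if=/dev/zero of="$OUT" bs=$BS count=$COUNT conv=fdatasync 2>&1 | tail -n 1
    rm -f "$OUT"
done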