I’ve actually needed to perform this calculation in the past, but never knew it had a proper name! The value produced by this simple calculation will tell you how much of your network pipe you can actually fill at any given point in time. To figure this out, you need two values: the available bandwidth, and the latency (or delay) between the two communicating hosts.
In this example, I’ll figure out my BDP between my server in the basement, and this blog.
First, I’ll use ‘ping’ to figure out the “delay” value:
jonesy@canvas:~$ ping www.protocolostomy.com
PING www.protocolostomy.com (74.53.92.66) 56(84) bytes of data
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=1 ttl=252 time=57.2 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=2 ttl=252 time=57.4 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=3 ttl=252 time=57.3 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=4 ttl=252 time=57.3 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=5 ttl=252 time=57.2 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=6 ttl=252 time=57.1 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=7 ttl=252 time=57.0 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=8 ttl=252 time=57.0 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=9 ttl=252 time=56.9 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=10 ttl=252 time=56.8 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=11 ttl=252 time=56.7 ms
64 bytes from web36.webfaction.com (74.53.92.66): icmp_seq=12 ttl=252 time=56.7 ms
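As a side note, if you’d rather not eyeball the output, you can pull the average straight out of ping’s summary line. This assumes the Linux iputils ping, whose last line of output looks like “rtt min/avg/max/mdev = …”; the field position may differ on other versions:

# Print just the average RTT in milliseconds (iputils ping output format assumed)
ping -c 10 www.protocolostomy.com | tail -1 | awk -F '/' '{print $5}'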
This is a nice set of values, all hovering around 57ms, so I’ll use 57ms as my RTT. For my available bandwidth, I guess I could just use my ISP’s advertised speed of 6Mbps upload and 3Mbps download. Let’s see what my BDP looks like in that scenario.
(6,000,000 b/s * 0.057s) = ~342,000 bits (342 kb)
342 kb / 8 = ~42,750 bytes, or roughly 42KiB
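The arithmetic is easy enough to do by hand, but if you want the shell to handle the unit conversions, a quick awk one-liner does the trick (the bandwidth and RTT here are just the numbers from this post; swap in your own):

# BDP = bandwidth (bits/sec) * RTT (seconds); divide by 8 for bytes, by 1024 for KiB
awk -v bw=6000000 -v rtt_ms=57 'BEGIN { bdp = bw * (rtt_ms / 1000); printf "BDP: %.0f bits = %.1f KiB\n", bdp, bdp / 8 / 1024 }'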
So, if there were no other components involved, I’d need roughly 42KiB of data in flight at any given moment to keep that pipe full. However, there are other components involved, most notably the send and receive buffers in the Linux kernel. In kernels prior to 2.6.7, these buffers were configured by default to strike a balance between good performance for local network connections and low system overhead (e.g. CPU and memory used to process connections and packets); they were not optimized for moving large data sets over long-haul paths. More recent kernels, however, automatically tune the values, so you should see excellent performance on machines with over 1GB of RAM and a BDP of under 4MB. I only just learned that this is the case. I had been searching around for my notes from 2001 about echoing values into files under /proc/sys/net and using the sysctl variables… no more!
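For anyone still stuck on an older kernel, the manual approach amounts to raising the buffer limits with sysctl. Something along these lines should work, though the values below are purely illustrative (sized for a BDP of a few megabytes), not a recommendation for any particular setup:

# Old-school manual tuning for pre-autotuning kernels; values are illustrative only
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"   # min default max (bytes)
sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"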
If you want to see whether your system has autotuning enabled, check that /proc/sys/net/ipv4/tcp_moderate_rcvbuf is set to “1” by cat’ing the file. You don’t need to worry about a corresponding ‘sndbuf’ setting, since sender-side autotuning has been enabled since the early 2.4 kernels.
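For example, something like this shows both the autotuning flag and the min/default/max limits it works within:

cat /proc/sys/net/ipv4/tcp_moderate_rcvbuf   # 1 = receive-side autotuning is on
cat /proc/sys/net/ipv4/tcp_rmem              # min, default, max receive buffer sizes (bytes)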
Of course, I don’t actually see this kind of performance. This was a quick and dirty test using the worst possible tool for the job: scp. Tools like ssh and scp add a lot of overhead at various levels, both from the protocol itself and (especially) from encryption. Given that, my performance really isn’t bad. I’ll see how high I can get it and post my results next week, just for giggles. 🙂