the_simple_computer

Adventures in Linux TCP Tuning

Updated August 21, 2014.

This site is no longer being maintained so anything below could still be accurate, or very outdated.


Recently I took a deep dive into adjusting Ubuntu’s TCP stack to squeeze out any extra performance. When the dust settled, it proved a worthwhile effort. I gained an increase in download speed on my home network, but there’s a lot more to the story than that.

I should say that I am not a network administrator. Much of this article is the culmination of several weeks of my own reading, testing and figuring out what worked best on the particular networks I had access to. Maybe it will inspire others to do great things with their lives, or maybe it will not. All of this comes with the obvious disclaimer that your results will vary. You can’t break anything with what we’ll be doing here, and if you find the changes hurt more than help, they’re easy to reverse.

Your Connection is Slower than You Think

Let’s begin with some clarifications so we’re all on the same page. People think they have a 1.5 megabyte per second or a 10 megabyte per second connection, or whatever number their internet service provider (ISP) advertises. However, there is a subtle but important difference between the abbreviations Mbps and MBps.

Transfer speeds over a network are usually measured in megabits per second, not megabytes, and that is what your ISP measures your internet connection with. Contrast that with your web browser and most other applications, which measure downloads in bytes or kilobytes, and it’s easy to see how things get confusing.



Eight megabits equals one megabyte and here is a handy conversion calculator for doing such work. The bottom line is that if your ISP says you have a connection speed of 10 Mbps, this means 10 megabits per second. This translates into a more widely understood max download rate of 1.25 megabytes per second (MBps). That lowercase b makes a significant difference.
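
If you’d rather have the terminal do the division instead of a calculator, a one-liner with bc shows the same conversion:

  # 10 megabits per second divided by 8 gives 1.25 megabytes per second
  echo "scale=2; 10 / 8" | bc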

Most home internet connections are characterized by low(er) latency, low bandwidth and low cost. They are usually asymmetrical, meaning your upload speed is often a tiny fraction of your download speed. The number of connections your home computer sends and receives, both over the internet and the local area network, is extremely small compared to, say, a web or database server, so such a large speed discrepancy between directions is usually not a problem.

Yet despite the asymmetry, residential connections can be considered elephants too: long fat networks (LFNs) as defined by RFC 1072, meaning a bandwidth delay product greater than 100,000 bits. What the corporate admins recommend for their TCP needs is actually not too different from what we’ll do to max out our ability to binge on memes and Epic Rap Battles.


Bandwidth Delay Product

BDP is a calculation of how much data your network can have in transit between two points (client & server, two peers, etc.). It’s based on your connection’s latency and available bandwidth, but BDP can also indicate the ideal advertised TCP window size. To find your BDP in bytes, multiply the bandwidth (in bits per second) by the round trip time (latency, in seconds), then divide the product by 8 to convert bits to bytes. For example, figuring the BDP for my 1.5 Mbps internet connection would look like this:
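
  (1,536,000 bits per second × 0.110 seconds) ÷ 8 = 21,120 bytes ≈ 21.12 kilobytes
  (taking 1.5 Mbps as 1,536 kilobits per second)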

Using a latency of 110 milliseconds to some servers I pinged in the U.S., I end up with a 21.12 kilobyte BDP, which does actually fall into LFN territory. But wait…how does calculating with a fixed latency accurately represent the round trip time (RTT) to websites and services all over the world? It doesn’t.

I get a 47 ms RTT from speedtest.net in the United States, but if I ping thesimplecomputer.info in Stockholm, that round trip jumps to about 200 ms; if I ping The Japan Times in Tokyo, I get 175 ms. A higher latency means a higher BDP and warrants a higher default receive window size, but what latency should I use: 200 ms, 47 ms, or the middle road of 110 ms used above?
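
If you want to check your own round trip times, a few pings to a host of your choosing will do (the host below is just the one from this article):

  # send 5 echo requests; the rtt min/avg/max/mdev summary line gives the average RTT in ms
  ping -c 5 thesimplecomputer.info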

For this reason, we’re not going to stress over BDP with internet connections. If we were focusing exclusively on local or private network tuning, an exact BDP would be more important because variations in latency would be much smaller. Right now, just think of knowing the BDP for a given RTT as one point in getting to know your network.


The Testing Ingredients

This laptop packs an Intel Sandy Bridge i5-2467M CPU, 8 GB of RAM, a Realtek RTL8111/8168B gigabit ethernet controller and an Intel Centrino 6230 b/g/n wireless card. The operating system is Ubuntu 12.10 with the distro-supplied kernel 3.5. Three separate networks were used to see how download speeds would respond to the TCP stack changes. Two were consumer internet connections: a DSL line rated at 1.5 Mbps and an 8.5 Mbps cable connection. The third network belonged to a university.

University staff told me the campus’s internet service came from several providers and that the student WiFi network was rated at up to 1 Gbps. I let the ISPs handle DNS resolution, and for the two home connections I used a recently orphaned Netgear WNDR 3400 router with all default settings.

I used cURL to download Ubuntu Mini (30 MB), a Windows 7 theme (11.5 MB) and a 10.0 MB empty file on thesimplecomputer.info that I made with dd. They were downloaded in succession, not simultaneously, and cURL was run 3 times for each system configuration on each network.
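
If you want to reproduce something similar, the commands look roughly like this; the file name and URL are placeholders, not the exact ones used here:

  # create a 10 MB file of zeros to serve as a test download (run on the web server)
  dd if=/dev/zero of=10MB.test bs=1M count=10

  # fetch the file, throw it away, and print the total time and average download speed
  curl -o /dev/null -w '%{time_total} s  %{speed_download} bytes/s\n' http://example.com/10MB.test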



Five configurations were tested:

  1. Ubuntu's default sysctl settings.
  2. Default settings, other than the congestion avoidance algorithm set to Vegas.
  3. Default settings but with Westwood congestion avoidance algorithm.
  4. The default receive window size raised to 82 kilobytes, with the maximum receive and send buffer sizes raised to 16 MB (see the sketch after this list).
  5. The last run added some other sysctl strings. This is the Final Settings category in the results sheet but more on that later.
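
For context, changes like those in items 2 through 4 are made through sysctl. The values below are an illustrative sketch with stand-in numbers, not the exact settings used for these tests (the full sysctl.conf is linked on the next page):

  # load the Westwood congestion control module and select it
  sudo modprobe tcp_westwood
  sudo sysctl -w net.ipv4.tcp_congestion_control=westwood

  # raise the maximum socket receive and send buffers to 16 MB
  sudo sysctl -w net.core.rmem_max=16777216
  sudo sysctl -w net.core.wmem_max=16777216

  # TCP receive/send buffers as min, default and max bytes; the middle
  # tcp_rmem value is where a larger default receive window would go
  sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

To make the changes permanent, the same keys (without sysctl -w) go in /etc/sysctl.conf and are applied with sudo sysctl -p.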

Except for the university line, which I only had wireless access to, each configuration was run over both WiFi and ethernet. All downloads were done after 11 PM in hopes of avoiding as much competing network traffic as possible.


The Results

All the raw numbers and averages are available in this ODF spreadsheet. On the next page you’ll also find a link to the entire sysctl.conf file I used for the Final Settings results.

Home Ethernet

On average I gained a solid 2 Kb/s or so on the 1.5 Mbps line. The biggest change was in throughput, which was much more consistent than before. Cubic and Westwood were essentially tied on ethernet but Cubic was just barely ahead of Westwood on WiFi. Vegas was only slightly behind on both connections. The final download times for each congestion algorithm were all so close for this line that there would be no perceivable difference between them. If I were setting up a stationary computer specifically for this network, I would continue using the default Cubic just to have one less kernel module to load on boot.

Apartment WiFi

The 8.5 Mbps line was in an apartment building with about 10 other SSIDs visible to Network Manager, a nightmare of surrounding wireless activity. The averages are a headache of regressions and the individual times look like random numbers. The only hint at a pattern was a speed increase across the board when going from Ubuntu’s default settings to the larger buffers, but still using Cubic. Vegas performed well when it was the only change made to the TCP stack but the final settings generally had less consistent download speeds and were often worse than Ubuntu’s default configuration.

This connection especially didn’t like the tSc and Ubuntu Mini downloads. A traceroute revealed I was connecting to thesimplecomputer.info through 24 network hops, whereas the 1.5 Mbps line (about 40 miles away from the apartment) followed only 13. The route to the Ubuntu download was slightly different here too, and spent more time bouncing around the ISP’s system than it did anywhere else.
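
The hop counts came from plain traceroute runs; a TCP-based trace is sometimes useful where the default probes are filtered (which comes up again on the university network):

  # standard traceroute
  traceroute thesimplecomputer.info

  # TCP SYN probes to port 80, which sometimes get through where ICMP/UDP probes are dropped (needs root)
  sudo traceroute -T -p 80 thesimplecomputer.info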

The average results under Final Settings indicate that Westwood may be the way to go over Cubic, but if this were my home connection I’d want more download times from different server locations to see more of the network’s characteristics. The difference would still be small, possibly not even worth the effort, and this may just be one WiFi connection I’d have to take a hit on. They won’t all be perfect, and there’s no optimizing your way out of enormous RF congestion and an inefficient ISP. One thing I didn’t think to do until afterwards was to use the router’s 5 GHz wireless band instead of 2.4 GHz, or to try different WiFi channels, but that still wouldn’t have changed anything over ethernet.

University WiFi

The university’s network was erratic as expected, but surprisingly busy for it being nearly midnight on a Friday. Here the averages don’t tell the whole story because the individual speeds are all over the place. One very good download was often offset by a bad one, and the huge inconsistency of all three congestion algorithms across all three download locations was lost in the averaging.

It’s hard to draw a strong conclusion from only 3 runs. Cubic was only mediocre under Default but showed better signs of consistency. Westwood had the most individual high download speeds on this network, though its speeds were the least uniform. When Westwood was the only stack change, it did considerably better than Vegas and even Cubic at times.

Vegas did exceptionally well with the final settings, giving a 57% increase in download speed (going by the averages) on the Windows theme alone, while the other two file downloads also benefited. Cubic ended with a strong result too, but with the inconsistency of each algorithm you wouldn’t notice any difference between them.

In the end, Westwood was the better choice if changing only the congestion algorithm and window buffer sizes. I wanted to see how the routes to the tSc and Ubuntu Mini files differed from the other two lines, but the university blocks both ICMP and TCP tracerouting.

We know WiFi is much more flaky than ethernet, so it will be far more difficult to pin down ideal settings for a plethora of different wireless networks than for a stationary desktop computer. The only real choices are to either stick with the default settings or use what works best only some of the time.

Clearly more testing would be needed to resolve anything further, but the results sufficiently show that there is indeed benefit in optimizing Ubuntu’s TCP stack. A performance increase can be measured and reproduced across various networks but again, your exact results will vary. Now let’s get to the good stuff.

Page II: Setting TCP for Fun and Profit