Why are websites slow?

Published 19 October, 2014

This article is about some optimizations to improve website loading times. I'm not a JavaScript or HTML expert, so it is more about the technical side of network communication and HTTP.

Motivation

Several big companies like Amazon and Yahoo have run experiments on improving their page loading times. Amazon's revenue increased by 1% for every 100 ms of improvement, Yahoo's traffic increased by 9% for every 400 ms, and Mozilla got around 60 million more Firefox downloads by reducing their page loading time by 2.2 seconds. I don't know how reliable those statistics are, but it seems plausible that people close websites when the loading time is too high.

If you compare the speed of a natively running user interface like WPF with a web page, you see a big gap. When I click a link on Wikipedia, it takes at least one second until the page is completely loaded. We ignore that time because we know that websites are slow; we expect a website to be slow. Now imagine it took one second to enter a digit in your Windows Calculator. We only accept those loading times on websites because we are used to them. Yes, I know it's not fair to compare the loading time of a website with a button in a calculator, but I don't want to be fair, I want a better user experience on websites.

Most websites have a lot of different external assets, for example JavaScript files, CSS files and images. Every web developer knows that external assets are evil because they slow down the loading time. For some reason nearly every frontend developer uses CSS sprites to keep the asset count low, but many stop at that point and don't care about further optimizations.

Why is it so expensive to have multiple small assets?

With HTTP the browser has to request every single file with a new HTTP request and (assuming you don't support keep-alive, I will get to that later) a new TCP connection. Let's take a side trip into physics. There is no way (yet) to propagate information faster than the speed of light (c ≅ 300000 km/s). In reality the speed of a signal in copper or fiber optic cable is lower; the factor is called the velocity of propagation (VoP). The VoP of fiber and copper is mostly around 75% (±20), which means the maximum speed of data on the internet is around 225000 km/s. And that's only if you ignore the delay of routers and other network hardware, and of course the fact that there is no direct, straight line from source to target.
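Here is a quick back-of-the-envelope calculation in JavaScript with the numbers from above, just to get a feeling for the theoretical minimum round-trip time (compare it with the measured values in the table below):

// Back-of-the-envelope: theoretical minimum round-trip time over fiber/copper.
var SPEED_OF_LIGHT = 300000;          // km/s
var VELOCITY_OF_PROPAGATION = 0.75;   // ~75% of c

function minRoundTripMs(distanceKm) {
  var signalSpeed = SPEED_OF_LIGHT * VELOCITY_OF_PROPAGATION; // ~225000 km/s
  return (2 * distanceKm / signalSpeed) * 1000;
}

console.log(minRoundTripMs(9080).toFixed(1) + ' ms'); // California -> Berlin and back: ~80.7 ms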

I ran some ping tests against my server. Of course the latency of long-distance connections (e.g. California to Berlin) was higher than that of short-distance connections. But the latency didn't grow linearly with distance; the relationship was more like logarithmic, because long routes mostly run over straight undersea cables without "intelligent" network hardware that could add delays. Here are some results:

Source         Target   Distance   Duration (round-trip)   Speed
California     Berlin    9080 km   171 ms                  106198 km/s
Toronto        Berlin    6470 km   108 ms                  119815 km/s
Singapore      Berlin    9910 km   262 ms                   75648 km/s
Chennai        Berlin    7300 km   150 ms                   97333 km/s
Johannesburg   Berlin    8870 km   207 ms                   85700 km/s
Düsseldorf     Berlin     480 km    33 ms                   29090 km/s

I didn't run these tests under scientific conditions. The source machines used different network hardware to access the internet, and the results also depend on the time of day, special events and so on. But I hope it gives you a rough idea of how slow the internet is, and of why you shouldn't play high-performance games on servers on a different continent :-)

This is very important, because it is a big bottleneck of digital communication. 20 years ago people had slow dial-up internet connections. Today almost everyone has a high-speed broadband connection, because it was comparatively easy to invent better technology and to build new lines and backbones. So the transfer rate isn't the problem anymore.

Today the bottleneck is, in many cases, the latency. Compared to the transfer-rate problem, the latency problem isn't easy to fix: as far as I know quantum teleportation doesn't work yet, so we're bound to the speed of light for the next few years.

Therefore the bigger win is to avoid round trips in network communication. Let's have a look at how TCP and HTTP work. To establish a connection, TCP uses a three-way handshake. Say a browser in Toronto wants to download 10 KB from a server in Berlin, and the round-trip latency is 100 ms (50 ms each way). I'll ignore things like DNS lookups here.

+0.000 sec   Client sends a TCP SYN packet to the server.
+0.050 sec   Server receives the SYN packet and responds with a SYN-ACK packet.
+0.100 sec   Client receives the SYN-ACK packet; the connection is now established. The client sends an ACK packet together with the first payload packet: the HTTP request header.
+0.150 sec   Server receives the ACK packet and the HTTP request header, acknowledges the request header, and the HTTP daemon processes the request.
+0.160 sec   After 10 ms of processing time, the server sends the HTTP response header and the payload. The payload is too big for a single packet, so it is split into several packets.
+0.211 sec   Client has received all data and sent ACK packets for it. This step takes a bit longer because the TCP payload is larger, but 10 KB on a 10 Mbit/s connection isn't much; I assumed about 1 ms for the 10 KB.

It took 211 ms to download 10 KB from an HTTP server, and most of that delay is caused by the round trips over a high-latency connection. If you downloaded 20 KB instead of 10 KB over the same connection, it would only take about 1 ms longer (≈212 ms). With HTTPS it is even slower, because you additionally need a TLS handshake.
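If you want to see how much of a request is pure handshake time on your own connection, here is a minimal Node.js sketch that compares the TCP connect time with the time for a full HTTP response (example.com is just a placeholder host):

// Minimal sketch: TCP handshake time vs. full HTTP response time.
var net = require('net');
var http = require('http');

var host = 'example.com'; // placeholder: use your own server here

var tcpStart = Date.now();
var socket = net.connect(80, host, function () {
  console.log('TCP handshake: ' + (Date.now() - tcpStart) + ' ms');
  socket.end();
});

var httpStart = Date.now();
http.get({ host: host, path: '/' }, function (res) {
  res.on('data', function () {}); // drain the body
  res.on('end', function () {
    console.log('Full HTTP request: ' + (Date.now() - httpStart) + ' ms');
  });
});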

HTTP keep-alive

HTTP keep-alive is a very good way to reduce these delays. With keep-alive you tell the server not to close the TCP connection after the transfer is done, so the browser can reuse the connection for the next request. Of course, you still have to send the HTTP requests themselves, so you only save around 100 ms per request on a connection with 100 ms latency. There are still a lot of web servers that don't support (or have disabled) HTTP keep-alive, even though there is only one good reason not to use it: being sure there will only ever be one request per page view. Not using keep-alive is a big performance killer.
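Browsers handle keep-alive for you, but if you talk to your server from Node.js (for monitoring, crawling, whatever) you have to opt in yourself. A minimal sketch using the built-in http.Agent (the host name is a placeholder):

var http = require('http');

// Reuse TCP connections across requests instead of opening a new one each time.
var agent = new http.Agent({ keepAlive: true, maxSockets: 6 });

function get(path, callback) {
  http.get({ host: 'example.com', path: path, agent: agent }, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { callback(body); });
  });
}

// The second request reuses the TCP connection opened by the first one.
get('/style.css', function () {
  get('/script.js', function () {
    console.log('both assets loaded over the same connection');
  });
});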

Most browsers use more than one TCP connection per server: Chrome and Firefox open up to 6, IE10 up to 8, even though RFC 2616 (the HTTP/1.1 standard) says a client shouldn't use more than 2 connections per server. This allows the browser to download multiple responses at the same time instead of waiting for them one after another.

HTTP cache

The HTTP cache doesn't solve the problem in most cases if you don't know how to use it. The HTTP server has to tell the browser when a file expires. But in most cases the server (and often not even the web developer) doesn't know when a file will change, so it sends something like 60 seconds, or 0 seconds. You can configure a default expiry time, but a static expiry date won't help you much: as long as a file hasn't expired, the browser won't ask the server for it again. So you can't set the expiry to one year, because when you update the file, most of your visitors would only see the change months later. And if you set the expiry to a minute, you only speed up the next one or two clicks on your website.

But there is the HTTP status code 304 "Not Modified" for content that has already expired on the browser side but is still valid on the server side. The server can tell the browser so, and the browser doesn't have to download the file again.

That helps, but it's nearly useless for small files: as you can see in the timeline above, the browser still has to send a request for the file and wait for the response, while the transfer itself is almost instant. So the HTTP cache in this form only pays off for bigger files.
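To make the mechanics concrete, here is a minimal Node.js server sketch that sends an expiry header and answers conditional requests with 304 based on the file's modification time (the file path and port are placeholders):

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  var file = 'assets/style.css';            // placeholder path
  var lastModified = fs.statSync(file).mtime.toUTCString();

  // Conditional request: the browser already has a copy and asks if it changed.
  if (req.headers['if-modified-since'] === lastModified) {
    res.writeHead(304);                     // "Not Modified": no body, no transfer
    return res.end();
  }

  res.writeHead(200, {
    'Content-Type': 'text/css',
    'Last-Modified': lastModified,
    'Cache-Control': 'max-age=60'           // browser won't even ask for 60 seconds
  });
  fs.createReadStream(file).pipe(res);
}).listen(8080);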

Optimization 1: Avoid assets

Yes, this is very lame. But every time I analyze a website I see 10 different JavaScript files (jQuery, six jQuery plugins and three for that specific website), 10 CSS files (a reset file, a main.css, six for the jQuery plugins), and a lot of small images that aren't combined into CSS sprites.

If you don't have a CSS sprite and your image is very small, you should consider using a data URL.
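For example, a small icon can be inlined as a Base64 data URL directly in your CSS or HTML. A tiny Node.js sketch to generate one (the file name is just a placeholder):

var fs = require('fs');

// Turn a small image into a data URL that can be inlined in CSS or HTML.
var base64 = fs.readFileSync('icon.png').toString('base64');
var dataUrl = 'data:image/png;base64,' + base64;

console.log('background-image: url("' + dataUrl + '");');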

Combine all your JS and CSS files into one file per type. Ask yourself whether you really need something like jQuery for a small website; jQuery is a beautiful library, but it is also quite big. This page shows how to do basic jQuery tasks with native JavaScript. Also make sure your server supports gzip compression and uses it on dynamic responses too, not just on static files.
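A minimal Node.js build sketch that concatenates a few scripts into one bundle and pre-compresses it with gzip (the file names are placeholders; in practice you would probably let a build tool do this):

var fs = require('fs');
var zlib = require('zlib');

// Concatenate several scripts into a single bundle (placeholder file names).
var files = ['jquery.js', 'plugin.js', 'app.js'];
var bundle = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join('\n;\n');

fs.writeFileSync('bundle.js', bundle);

// Pre-compress the bundle so the server can serve it with Content-Encoding: gzip.
fs.writeFileSync('bundle.js.gz', zlib.gzipSync(bundle));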

Optimization 2: Change URL on every file update

Fortunately it is still possible to use the HTTP cache very efficiently, even for small files. To be honest, I don't know whether this technique has a name. Usually you give your files URLs like /assets/style.css. Just add a virtual directory to the path, which you ignore when dispatching the request:

/assets/abc/style.css
/assets/asdf/style.css
/assets/b61d8cda8f20b204e980c998ecf8427f/style.css
/assets/43caa2b2aa7e3b5b3b5aadb9a88290a0/style.css

You have to dispatch all these URLs to the same file (e.g. /assets/style.css).

Now the important part: on every request for that file, respond with a very high expiry time, e.g. 10 years. The browser will never ask your server for that file again; it won't even try to establish a connection as long as the file is present in the browser cache and not expired yet. Once you want to update your asset, just replace all URLs pointing to it with a new virtual directory name.

The name of the virtual directory is irrelevant, as long as you change it every time you change the asset. A good practice is to use the file hash or the file's modification time. I assume it's pretty easy to replace all URLs to the file, because today most HTML is generated by a framework or CMS anyway. Of course you cannot use this technique on your HTML files themselves. In the best case the browser will only send one single request per page view (when the cache is warm).
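A minimal sketch of how this could look in Node.js: a helper that builds the versioned URL from the file's hash, and a dispatch rule that strips the virtual directory again (the paths, the route pattern and the hard-coded content type are just assumptions; there are no security checks here):

var fs = require('fs');
var crypto = require('crypto');
var http = require('http');

// Build a versioned URL like /assets/b61d8cda.../style.css from the file content.
function assetUrl(file) {
  var hash = crypto.createHash('md5').update(fs.readFileSync(file)).digest('hex');
  return '/assets/' + hash + '/' + file;
}

http.createServer(function (req, res) {
  // Strip the virtual directory: /assets/<anything>/style.css -> style.css
  var match = req.url.match(/^\/assets\/[^\/]+\/(.+)$/);
  if (match) {
    res.writeHead(200, {
      'Content-Type': 'text/css',            // a real server would pick this per file type
      'Cache-Control': 'max-age=31536000'    // far-future expiry: never asked for again
    });
    return fs.createReadStream(match[1]).pipe(res);
  }
  res.writeHead(404);
  res.end();
}).listen(8080);

// In your templates, generate the link via assetUrl('style.css').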

Optimization 3: Partial content downloads

I hate that you have to generate and download the full HTML structure for every single page even though only a small part of the content changes. Most of the navigation and so on stays the same.

A good solution is to download just the content that changes. To do so, register a JavaScript click listener on all your internal links, check whether the target page supports these partial downloads, and fire an Ajax call.

Now your backend only has to render what is different on the requested page. After the browser has updated the DOM, you have the problem that you are on a different page but the address in the browser's address bar is still the old one. Have a look at the JavaScript method history.pushState(), which is part of HTML5: it lets you change the current page address without changing the DOM/JS context.

If the browser doesn't support pushState or Ajax calls, just don't prevent the default behaviour of the links; your website will still work in old browsers, only a bit slower and less efficiently.

To make sure this doesn't falsify your visitor analytics (e.g. Google Analytics), tell the analytics API that you are now on a new page. With Google Analytics you can do it like this: ga('send', 'pageview', newUrl);
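Putting optimization #3 together, a minimal browser-side sketch could look like this (the /partial endpoint, its response format and the #content element are assumptions about how your backend and markup might look):

// Minimal sketch of partial page loads.
// Assumed endpoint: /partial?page=... returning the HTML fragment for #content.
document.addEventListener('click', function (event) {
  var link = event.target;
  while (link && link.tagName !== 'A') link = link.parentElement;

  // Fall back to normal navigation for external links and old browsers.
  if (!link || link.host !== location.host || !history.pushState) return;

  event.preventDefault();

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/partial?page=' + encodeURIComponent(link.pathname));
  xhr.onload = function () {
    document.getElementById('content').innerHTML = xhr.responseText;
    history.pushState(null, '', link.href);                  // update the address bar
    if (window.ga) ga('send', 'pageview', link.pathname);    // keep analytics correct
  };
  xhr.send();
});

// Handling the back button (the popstate event) is left out here to keep the sketch short.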

Optimization 4: Prefetch content

If optimization #3 works, you can also prefetch the content of all pages the user could request next. If your page has a lot of links, you could use a smarter strategy: for example, only prefetch the five links the user is most likely to click, or start prefetching when the mouse hovers over a link (there are usually at least 100 ms between the mouseover event and the click).

Once you have collected some likely next page views, send your prefetch Ajax call, and please make sure you bundle all of them into one single request.
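A minimal sketch of hover-based prefetching with bundled requests (the /prefetch endpoint and its JSON response format are assumptions):

// Collect candidate pages on hover and fetch them in one bundled call.
// Assumed endpoint: /prefetch?pages=/a,/b returning { "/a": "<html>", "/b": "<html>" }.
var prefetched = {};
var pending = [];
var timer = null;

document.addEventListener('mouseover', function (event) {
  var link = event.target;
  while (link && link.tagName !== 'A') link = link.parentElement;
  if (!link || link.host !== location.host || prefetched[link.pathname]) return;

  pending.push(link.pathname);
  clearTimeout(timer);
  timer = setTimeout(flush, 50); // bundle everything collected within 50 ms
});

function flush() {
  var pages = pending;
  pending = [];
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/prefetch?pages=' + encodeURIComponent(pages.join(',')));
  xhr.onload = function () {
    var result = JSON.parse(xhr.responseText);
    for (var page in result) prefetched[page] = result[page];
  };
  xhr.send();
}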

To avoid downloading the same content over and over again (for example your imprint, which is linked on every single page), you should cache the results. To keep a warm cache even after the browser tab is closed and reopened, use Local Storage. Local Storage is also an HTML5 feature (or rather, it was part of HTML5 and has since moved to the Web Storage specification) that lets you store larger amounts of data (usually around 5 MB per origin) on the browser side for a long time.

To avoid the same staleness problem you have with the HTTP cache, use a token (like the file hash in optimization #2) as your current content version and store it together with your prefetch cache. Every full HTML page and every prefetch Ajax response should carry the current content token, so your frontend script knows when the prefetch cache is outdated.
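A minimal sketch of such a token-aware prefetch cache in Local Storage (the meta tag name and the cache keys are assumptions):

// Persist the prefetch cache in localStorage and invalidate it whenever the
// content token changes. Assumed markup: <meta name="content-token" content="...">.
var currentToken = document.querySelector('meta[name="content-token"]').content;

if (localStorage.getItem('contentToken') !== currentToken) {
  localStorage.removeItem('prefetchCache');            // token changed: cache is stale
  localStorage.setItem('contentToken', currentToken);
}

function cachePage(path, html) {
  var cache = JSON.parse(localStorage.getItem('prefetchCache') || '{}');
  cache[path] = html;
  localStorage.setItem('prefetchCache', JSON.stringify(cache));
}

function getCachedPage(path) {
  var cache = JSON.parse(localStorage.getItem('prefetchCache') || '{}');
  return cache[path] || null;
}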

But the user will never see most of the prefetched content. That's a waste of traffic!

Yes, your traffic will be a bit higher. But today, the traffic for a few HTML files and even assets shouldn't be a problem anymore, even with a few thousand visitors per day.

Hopefully this article has given you an idea of how to implement such (slightly exotic) optimizations and why they matter. By the way: I'm using all of the optimizations mentioned here on my blog, and in most browsers a page now appears nearly instantly after clicking an internal link :-)

If you have any questions or remarks, feel free to leave a comment. :-)