Not as SPDY as You Thought

SPDY is awesome. It’s the first real upgrade to HTTP in 10+ years, it tackles the performance issues of high-latency mobile networks, and it makes the web more secure. SPDY differs from HTTP in many ways, but its primary value comes from being able to multiplex many requests/responses from client to server over a single (or a few) TCP connections.

Previous benchmarks tout great benefits, ranging from making pages load 2x faster to making mobile sites 23% faster using SPDY and HTTPS than over clear HTTP. However, when testing real world sites I did not see any such gains. In fact, my tests showed SPDY is only marginally faster than HTTPS and is slower than HTTP.

Why? Simply put, SPDY makes HTTP better, but for most websites, HTTP is not the bottleneck.

The Bottom Line

If you don’t have time to read the full details, here’s the quick summary.

I tested the top 500 websites in the US (per Alexa), measuring their load time over HTTPS with and without SPDY, as well as over HTTP. I used a Chrome browser as a client, and proxied the sites through Cotendo to control whether SPDY is used. Note that all the tests – whether HTTP, HTTPS or SPDY – were proxied through Cotendo, to ensure we’re comparing apples to apples.

The results show SPDY, on average, is only about 4.5% faster than plain HTTPS, and is in fact about 3.4% slower than unencrypted HTTP. This means SPDY doesn’t make a material difference for page load times, and more specifically does not offset the price of switching to SSL.

I started this test because I found previous tests to be bad representations of the real world. This test is therefore different in several ways:

  • I only enabled SPDY for 1st party content.
    Website owners don’t control 3rd party domains and how they’re delivered.
  • I combined 1st party domains, but not 3rd party domains.
    Most previous tests flattened the page into a single domain by creating static copies of pages, which is an artificial environment where SPDY thrives.
  • I did not use a client-side proxy, but rather reverse-proxied the website.
    Using a client side proxy again creates one client/proxy connection where all requests are multiplexed, which is beneficial to SPDY but not realistic.
  • I tested real world websites, with all their warts.
    This includes many domains on the page, unoptimized pages, inefficient backends, etc. Most other data I’m aware of is either from the highly optimized Google websites or from static copies of websites, which eliminates many real world bottlenecks.

I’ll let you decide if these differences make it a better test or a worse test, but it helps understand why the results are different.

There could be many reasons why SPDY does not help, but the two that stand out are:

  1. Web pages use many different domains, and SPDY works per domain. This means SPDY can’t reduce connections or multiplex requests across the different domains (with some exceptions), and its value gets diminished.
  2. Web pages have other bottlenecks, which SPDY does not address. For example, SPDY doesn’t prevent scripts from blocking downloads of other resources, nor does it make CSS not block rendering. SPDY is better than HTTP, but for most pages, HTTP is not the bottleneck.

The Test

For this experiment, I needed a set of websites to test, a client that supports SPDY, and a reverse proxy that supports SPDY.

For the websites, I chose the top 500 websites in the US, as defined by Alexa. The percentage of porn sites on that list is a bit alarming, but it’s a good representation of websites users browse often.

For a proxy, I used the Cotendo CDN (recently acquired by Akamai). Cotendo was one of the early adopters of SPDY, has production-grade SPDY support and high performance servers. Cotendo was used in three modes – HTTP, HTTPS and SPDY (meaning HTTPS+SPDY).

For a client, I used WebPageTest’s Chrome agent (with Pat Meenan’s help). WebPageTest automates a real Chrome browser (version 18 at the time of my tests), and through that supports SPDY. Note that Chrome randomly disables SPDY on 5% of browser runs, but WebPageTest disables this sampling. I measured each page 5 times, over 4 different network speeds: Cable, DSL, low-latency mobile and high-latency mobile.
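
For readers who want to script a similar setup, here is a minimal sketch of driving such runs through the WebPageTest REST API. The API key, test location and connectivity labels are placeholders for illustration, not the exact configuration used in this study.

```python
# Minimal sketch: submit one URL to WebPageTest over several connectivity
# profiles, 5 first-view runs each, roughly mirroring the setup above.
# The API key and location/connectivity labels are placeholders.
import requests

WPT_SERVER = "https://www.webpagetest.org"
API_KEY = "YOUR_API_KEY"                         # hypothetical key
CONNECTIVITY = ["Cable", "DSL", "3G", "3GSlow"]  # illustrative profile names

def submit_test(url, connectivity, runs=5):
    """Queue a first-view-only test and return its test id."""
    params = {
        "url": url,
        "k": API_KEY,
        "runs": runs,
        "fvonly": 1,
        "location": f"Dulles:Chrome.{connectivity}",  # placeholder location
        "f": "json",
    }
    resp = requests.get(f"{WPT_SERVER}/runtest.php", params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["testId"]

for profile in CONNECTIVITY:
    print(profile, submit_test("http://www.example.com/", profile))
```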

Since some websites use multiple 1st party domains, I also used some Akamai rewriting capabilities to try and consolidate those domains. Roughly speaking, most resources statically referenced in the HTML were served through the page’s domain. This helped enable SPDY for those resources and consolidate some domains.
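
To make the consolidation step concrete, here is a hypothetical sketch of the kind of rewrite involved: resources statically referenced on known first-party shard domains are rewritten to be served through the page’s own domain, while third-party references are left alone. The host names and path convention are invented for the example; the actual Akamai/Cotendo rewriting rules are more sophisticated.

```python
# Hypothetical sketch: rewrite statically referenced resources on known
# first-party shard domains so they are served through the page's domain
# (and can therefore share its SPDY connection). Host names and the
# /shard-prefix/ path convention are illustrative only.
import re

PAGE_HOST = "www.foo.com"
FIRST_PARTY_SHARDS = {"shard.foo.com", "static.foo.com"}  # assumed 1st-party hosts

def consolidate_domains(html: str) -> str:
    def rewrite(match: re.Match) -> str:
        attr, host, path = match.group(1), match.group(2), match.group(3)
        if host in FIRST_PARTY_SHARDS:
            # e.g. http://shard.foo.com/img.png -> http://www.foo.com/shard/img.png
            prefix = host.split(".")[0]
            return f'{attr}="http://{PAGE_HOST}/{prefix}{path}"'
        return match.group(0)  # leave 3rd-party references untouched

    return re.sub(r'(src|href)="https?://([^/"]+)(/[^"]*)"', rewrite, html)
```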

Lastly, since time of day and Internet events can skew results, I repeated the test 3 times, twice during the day and once overnight. In total I ran 90,000 individual page loads, or 30,000 per mode, more than enough for statistical accuracy.

The Results

The main result was that SPDY didn’t make the websites faster. Many different views of the data repeated this result:

  • SPDY was only 4.5% faster than HTTPS on average
  • SPDY was 3.4% slower than HTTP (without SSL) on average
  • The median SPDY acceleration over HTTPS was 1.9%
  • SPDY was faster than HTTPS in only 59% of the tests
  • SPDY is only 2.1% faster than HTTPS when comparing the average load time of each URL/scheme, across batches and network speeds
  • SPDY’s acceleration over HTTPS was 4.3%, 6.3% and 2.8% in each of the three test batches
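
To clarify how aggregates like these can be derived, here is a small sketch that computes the mean and median SPDY speedup over HTTPS, and the fraction of tests where SPDY won, from paired per-run load times. The record format is assumed for illustration and is not the study’s actual raw data format.

```python
# Sketch: derive summary figures (mean/median speedup, fraction of wins)
# from per-run load times. Each record is assumed to look like:
#   {"url": ..., "scheme": "https"|"spdy", "network": ..., "batch": ...,
#    "load_time_ms": ...}
from statistics import mean, median

def spdy_vs_https(records):
    # Average load time per (url, network, batch, scheme).
    sums = {}
    for r in records:
        key = (r["url"], r["network"], r["batch"], r["scheme"])
        total, count = sums.get(key, (0.0, 0))
        sums[key] = (total + r["load_time_ms"], count + 1)
    avg = {key: total / count for key, (total, count) in sums.items()}

    # Pair each HTTPS measurement with its SPDY counterpart.
    speedups = []
    for (url, network, batch, scheme), https_time in avg.items():
        if scheme != "https":
            continue
        spdy_time = avg.get((url, network, batch, "spdy"))
        if spdy_time:
            speedups.append((https_time - spdy_time) / https_time)

    return {
        "mean_speedup": mean(speedups),
        "median_speedup": median(speedups),
        "fraction_spdy_faster": sum(s > 0 for s in speedups) / len(speedups),
    }
```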

When looking at individual network speeds, the numbers changed a bit but the conclusions did not. The following table summarizes SPDY’s impact compared to HTTP and HTTPS per network speed:

Network Speed (Down/Up Kbps, Latency ms)    SPDY vs HTTPS       SPDY vs HTTP
Cable (5,000/1,000, 28)                     SPDY 6.7% faster    SPDY 4.3% slower
DSL (1,500/384, 50)                         SPDY 4.4% faster    SPDY 0.7% slower
Low-Latency Mobile (780/330, 50)            SPDY 3% faster      SPDY 3.4% slower
High-Latency Mobile (780/330, 200)          SPDY 3.7% faster    SPDY 4.8% slower

The exact numbers are not important; what matters is that they’re all small. No matter how you look at it, the conclusion is that SPDY doesn’t make a big difference.

If SPDY is better, why isn’t it faster?

The short answer is that it doesn’t fix the current bottlenecks. While digging through the data, I built up a couple of more detailed theories as to why it didn’t speed things up.

Too Many Domains

SPDY optimizes on a per-domain basis. In an extreme case where every resource is hosted on a different domain, SPDY doesn’t help. Web pages today use many different domains (most of them 3rd party domains), and thus keep SPDY from providing value.

The average web page (in this test) required resources from 18 different domains. Fewer than half of all resources were served from the same domain the HTML was fetched from. SPDY’s value comes primarily from reducing the number of connections by multiplexing requests, and the large number of domains on a page keeps that value from manifesting.

Looking at the individual tests, we can see SPDY cut the average number of connections to the page’s domain from 6.2 to 2.6* (compared to HTTPS) – a dramatic reduction. However, the total number of connections (including all domains) averaged 34.9 for HTTPS and 30.5 for SPDY, a similar reduction in absolute numbers but far less significant relative to the total.

Even if all domains used SPDY, the results would be unlikely to change. On average, 9 domains (out of 18) were used for only one request, and 4 additional domains served only 2 resources each. Each such domain still requires its own TCP connection, and would gain little from SPDY.

* Chrome seems to use one connection for the page, and a second for the resources. In some cases a late resource led to a third connection, or a random resource fetched from the non-SSL version of the site, raising the average to 2.6.
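
As a rough illustration of how connection counts like these can be extracted, the sketch below tallies distinct connections per domain from a HAR file exported by WebPageTest or Chrome. It assumes the HAR entries carry Chrome’s optional "connection" id field, which is not guaranteed for every entry.

```python
# Sketch: count distinct TCP connections per domain from a HAR file.
# Assumes Chrome-style HAR entries with an optional "connection" id.
import json
from collections import defaultdict
from urllib.parse import urlparse

def connections_per_domain(har_path: str) -> dict:
    with open(har_path) as f:
        har = json.load(f)
    conns = defaultdict(set)
    for entry in har["log"]["entries"]:
        host = urlparse(entry["request"]["url"]).hostname
        conn_id = entry.get("connection")
        if host and conn_id is not None:
            conns[host].add(conn_id)
    return {host: len(ids) for host, ids in conns.items()}

# Example: print(connections_per_domain("results.har"))
```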

Blocking resources

Loading a page is not as simple as downloading all resources in parallel. For example, while loading a page, browsers usually don’t download any images until JavaScript and CSS files are fetched and processed. CSS files may import other CSS files, which the browser can’t know about in advance. Some scripts generate new resources for the browser to fetch.

These delays are not addressed by SPDY, and from all I can tell the related browser behavior has not changed. For many (if not most) pages, these delays are the true bottleneck, leaving little room for SPDY improvements.

Some Takeaways

There are two sets of takeaways you can draw from this study.

If you’re a website owner, the first thing you should do is adjust your expectations. Switching your site to SPDY will move you forward, but it will not make your site much faster. To get the most out of SPDY, you should work to reduce the number of domains on your page, and to address other front-end bottlenecks. Doing so is a good move anyway, so you wouldn’t be wasting your time.

If you’re a browser maker, or a participant in the SPDY community, you should put more effort into tackling these problems. For instance, Chrome already attempts to reduce connections by sharing the same connection across hosts that share the same IP and certificate. Certain changes to SPDY & SSL can further expand such reuse, thus accelerating pages and reducing server load. Another path is to build better SPDY awareness in the browser in an attempt to mitigate other bottlenecks. For instance, a more aggressive look-ahead behavior can download more resources up-front, and use request priorities to avoid congestion. To my knowledge, some of those concepts have been discussed, but none have been pushed forward yet.
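
To make the reuse condition above concrete, here is a simplified sketch that checks whether two hostnames resolve to the same IP and whether the certificate served for one also covers the other. Real browser connection-coalescing rules are more involved (and this ignores multiple A records, SNI quirks, etc.), so treat it purely as an illustration.

```python
# Simplified sketch of the connection-reuse condition described above:
# same IP address, and a certificate for host_a that also covers host_b.
import socket
import ssl

def cert_dns_names(host: str, port: int = 443) -> set:
    """Fetch the certificate for `host` and return its DNS subjectAltNames."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return {value for key, value in cert.get("subjectAltName", ()) if key == "DNS"}

def could_share_connection(host_a: str, host_b: str) -> bool:
    same_ip = socket.gethostbyname(host_a) == socket.gethostbyname(host_b)
    covers_b = any(
        name == host_b or (name.startswith("*.") and host_b.endswith(name[1:]))
        for name in cert_dns_names(host_a)
    )
    return same_ip and covers_b
```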

I believe SPDY is still awesome, and is a step in the right direction. This study and its conclusions do not mean we should stop working on SPDY, but rather that we should work on it more, and make it even faster.

Posted on June 12th, 2012

  • http://stevesouders.com/ Steve Souders

    Great post, Guypo! I think SPDY improves web performance (speed) so while I’m confident your results are correct there are a few counterpoints I’d like to mention.

    First is that the top sites are more optimized than the rest of the Web. Looking at the HTTP Archive the average Page Speed score is 90 for the Top 100 sites, but drops to 83 for the Top 1000, and then 74 for the Top 200K. SPDY is more likely to improve sites that are less optimized, so if you stepped outside of the top 500 sites it’s likely you’d get better results for SPDY. It’d be interesting to re-run the experiment with a more diverse selection of sites.

    Second is that I’ve evangelized the use of domain sharding for the past three years. Using more domains is worse for SPDY as you mention, but in the absence of SPDY this pattern has been fairly widely adopted – at least among the top sites. For example, 25 of the top 50 sites use domain sharding for the content they own. (I determined this based on a list of unique domains for top sites from HTTP Archive.) If SPDY was more widely adopted then website owners wouldn’t have to do domain sharding and all sites would be faster.

    It’s great to keep poking at SPDY and other optimizations to find out where they hold true and what needs to be improved. I’m still a fan of SPDY for the larger Web.

    • Guypo

      Thanks – and just to be clear, I’m also a fan of SPDY, it just needs to get better.

      Domain sharding would indeed get in the way, and I tried to mitigate that problem by moving all resources statically referenced on the page to be served from the page’s domain (using some of the FEO technology). Even if my mitigation didn’t work, I still expect at most 2-3 of the average 18 domains on a page to be 1st party, so my gut is the results would not change.

      I also agree with Billy’s comment, and actually don’t expect the results to change much when going to the lower-end sites. I couldn’t figure out a way to bulk-measure this, but anecdotal checks almost always show high-end websites have a ton of blocking scripts, poor coding practices, basic cacheability problems, etc. They’re probably better than the low-end sites, but I think for both sets of sites, HTTP is simply not the bottleneck, and SPDY is therefore less helpful.

      IMO the real solution is to keep expanding SPDY to support multiplexing across domains, and to make browsers more aggressive when SPDY is present to overcome other bottlenecks.

      • SilentLennie

        SPDY can help with those blocking scripts, because with SPDY you can push resources to the browser even before it knows it needs them.

  • http://twitter.com/zoompf Zoompf Inc.

    I disagree with Steve. SPDY’s advantages over HTTP should only be truly significant on sites that are already highly optimized.

    SPDY isn’t magic fairy dust that makes your website faster. SPDY fixes some issues with HTTP that can make the *transmission* of content faster. The benefits of SPDY’s better transmission are only visible on sites that are already highly optimized. If the content is already optimized (minified, compressed, images losslessly optimized) and its organization/structure is optimized (CSS combined, scripts deferred, images sprited), then this more efficient transmission can be seen and the site is faster. I fail to see how sites in the Top 1000 or 10,000, with their bloated images, tons of domains/3rd party widgets, blocking scripts, and poorly structured pages, will somehow better show the benefits of SPDY.

    • Mark S.

      Are you going to notice any difference (apart from synthetic tests) if the pages are already highly optimized? I doubt it. And even then, you won’t see anything more than a marginal difference in performance.
      In addition, the top 1000 or 10,000 pages ARE a good measure – why? Because that is what people visit around the world. The world is not a lab, it’s not a perfect site, it’s not going to use one single domain; there will be ads, scripts, widgets, etc. pulled from many different places, because that is how the socially infused internet works these days.

  • http://blog.yoav.ws Yoav Weiss

    Just a crazy thought.
    Let’s assume:
    * 3rd party content domains are the cause for slowing down SPDY
    * 3rd party content doesn’t rely on cookies

    We could rewrite 3rd party content URLs to be routed through the 1st party site’s server, and have a server-side module that sends the request to the real destination and sends back the response.
    Example:
    http://3rd.party.com/bla.js
    will be turned into
    http://1stparty.site.com/ext/3rd.party.com/bla.js

    Kinda like we used to do 3rd party AJAX before CORS.
    That would increase connection reuse, avoid opening new connections for a single resource, and would let SPDY do its work on 3rd party domains.

    I’m not sure it is practical (because of the “don’t rely on cookies” assumption), but it would make an interesting test.

    • http://www.callum-macdonald.com/ Callum Macdonald

      For anyone wanting to experiment with this, mod_pagespeed offers it in the form of its MapProxyDomain[1] directive. I haven’t tested it personally, but combined with SPDY, it has the potential to offer real speedup.

      One downside of implementing SPDY on our own site is that all our 3rd party assets now get loaded over SSL, which dramatically increases the number of OCSP requests. It’s probably why our tests show SPDY is significantly slower than HTTP.

      https://developers.google.com/speed/pagespeed/module/domains#MapProxyDomain

  • Darrick Wiebe

    Isn’t it considered best practice to spread your resources across multiple domains to allow additional simultaneous HTTP requests? If that’s the case, then it seems like a false benchmark to test pages that are optimized for HTTP under SPDY without first performing the equivalent optimizations to make them SPDY-friendly.

    • Guypo

      Domain sharding is indeed a best practice for HTTP. To mitigate that problem, since I couldn’t easily separate 1st party domains from 3rd party domains, I used our Front-End Optimization engine to move most of the resources statically linked on the page to the page’s domain.

      For example, if http://www.foo.com served the page, and had an image served from http://shard.foo.com/img.png, the page was modified to reference the image as http://www.foo.com/shard/img.png. 

      This created a slight bias in favor of SPDY, since I was reasonably likely to move resources from 3rd party domains to the page’s domain. However, my experience shows most of the resources the page statically references (as simple HTML tags) are 1st party resources, and most 3rd party resources (at least those that most affect performance) are included via scripts, iframes, etc.

      Clearly this isn’t perfect, but I did a lot of spot-checking across the test data, and it seemed to hold pretty well.

  • Davide

    These tests show:
    – that SPDY has a big potential compared to HTTPS
    – that most sites are optimized for HTTP or ill-optimized

    A nice discussion would be: how to leverage SPDY potential in real world sites? What are the best practices for SPDY and how better would a site perform if carefully optimized for SPDY?

    If SPDY improves over HTTP all we need to do is make good use of the improvements. It’s not a crazy expensive, complicated technology we’re talking about.

  • twiseen

    >For example, while loading a page, browsers usually don’t download any images until JavaScript and CSS files are fetched and processed. CSS files may import other CSS files, which the browser can’t know about in advance. Some scripts generate new resources for the browser to fetch.

    I thought this was addressed by server push: you push stuff to the browser from the server before the browser requests it, and then the browser just “picks it up” on request.

    • Guypo

      Servers can definitely push content, or hint at it, to get the browser to download it ahead of time. Some tests discussed on the spdy-dev mailing lists estimated an extra 8% improvement from doing so, which is significant.

      However, building a framework that can anticipate the resources on a page before the page gets there isn’t simple – so I would not expect many websites to throw that on. I think the browsers themselves can consider alleviating some of the constraints they set around which resources block others, letting priorities handle that instead. I’m not aware of any data indicating they tried that and saw bad results; I’d be happy to see such data if it exists.

  • http://twitter.com/dritans Drit Suljoti

    Great post with very interesting insights.

    I was wondering, did you test a couple of the sites outside of webpagetest and on a local consumer connection? Just to ensure that there was no adverse impact by dummynet, the client side network emulator WPT relies on to emulate the bandwidth speeds. It shouldn’t impact it, but you never know.

    • Guypo

      I didn’t run any performance measurements with other tools, as I didn’t have any other tool that supported SPDY at my disposal (I know Catchpoint tests it, it just wasn’t as handy for me to use), but I did confirm some of the behaviors Chrome showed with a local installation. 
      I’m not sure how dummynet would come into play, though – it just throttles the network, and while it distributes packets in an overly-organized manner, it shouldn’t affect the results…

      That said, it’s definitely worth repeating this test with a different browser/measurement tool to see if the results are different.

      • http://twitter.com/dritans Drit Suljoti

        Thanks for the info!
        Although I work for one of those tools/services, I always recommend confirming eye-opening findings in the actual browser (just to rule out the extra layers introduced by the tools). Luckily all the browsers have some sort of built-in network monitor, which would help (although they at times can impact the results too).

        • Guypo

          I agree, which is why I used Chrome to see that the behavior is consistent – the problem is that performance measurements vary so widely, you have to have some automated tool to get any sort of accuracy.

          FWIW, the network behavior in Chrome looked the same as it did through WebPageTest, and I couldn’t find any noticeable speedup when loading SPDY vs non-SPDY pages in my local browser (though it would have to be a very big speedup to show in such a non-scientific manual test).

          • http://twitter.com/dritans Drit Suljoti

            If I’m not mistaken, WPT relies on the API Chrome provides to capture the network data. So the only difference would have been dummynet.

            So it all boils down to what you stated – domain sharding is already helping most of the sites. Also, you are relying on Cotendo – which serves the content from servers close to the users – hence TCP connections are fast. The two together might make the value of multiplexing not worth it.

          • Patrick Meenan

            WebPagetest uses the Chrome network APIs for the request data for SSL (SPDY), but the page-level timings are based on the load event firing and are independent of the request data (i.e. a waterfall will look similar, but the aggregate metrics used for the test wouldn’t depend on them).

  • http://profiles.google.com/dukejeffrie Tiago Silveira

    Wait, there’s something I don’t see explained in detail. How did you subtract the latency between Cotendo and the origin server? A proxy can’t be faster than the originating source, right?

    • Guypo

      All the tests were proxied through Cotendo – including HTTP, HTTPS and SPDY. So in all cases, the extra latency of browsing through a proxy was included. So while it’s possible it affected the results a bit (no simulation is perfect), it affected it equally for all tests.

      • http://profiles.google.com/dukejeffrie Tiago Silveira

        So all the requests turn into a sequence of HTTP requests? That doesn’t seem a fair comparison; the parallelism of SPDY is nullified by the second-hop HTTP requests.

        Maybe a more accurate test would be to cache everything and not make any outbound requests?

        • Guypo

          The requests turn into HTTP requests, but the CDN opens up many connections to the origin, so they’re not put into any sort of sequence. Those connections are also running on the backbone of the web (between the CDN and the server) and thus are very fast. 

          Caching everything will create a different bias, but I’m not sure it’ll be a better one… If the responses are all equally fast, the value of multiplexing is also diminished, since none of the requests would have “hogged” the connection…

          Anyways, I agree it’s an interesting alternate take. I’ll try to accumulate other variants and consider another test. 

          • EpaL

            As they say, “the devil is in the details”, and I think in this case the details of your test methodology show some critical flaws.

            In short, hitting the top 500 sites from a CDN in New York is hardly an accurate test. All this CDN is doing is turning your nice, fast SPDY requests back into regular old ‘slow’ HTTP requests. You’ve done absolutely nothing to improve access to the site itself (since those sites don’t support SPDY yet).

            For example, let’s say one of the sites is Hewlett-Packard and their hosting is in California. Your CDN might be ‘on the backbone of the web’ and have (for the purposes of this test) unlimited bandwidth, but you’ve just introduced ~60ms of latency to the process! And that 60ms of latency is where it truly hurts: HTTP.

            You’ve got to remember here: SPDY is designed to speed things up in two ways: compressing content and parallelising requests to avoid the latency. All you’ve done is remove the latency from you to the CDN, but the CDN still has to make lots of HTTP requests (including all the TCP setups/tear-downs) across that 60ms of latency.

            No wonder it’s still slow.

          • Guypo

            First of all, I share the details of how I tested intentionally so you can draw your own conclusions. 
            That said, I disagree with your statements for various reasons, primarily these three:

            1) CDNs don’t suffer from HTTP the same way browsers do, since they open many connections to the origin and often keep those connections alive. Therefore, while there is indeed extra latency, the CDN will still multiplex many more requests to the website at once through many open connections (far more than what the browser opens).
            2) SPDY is primarily designed to address the pain points of the last mile, between the CDN and the browser, so it was applied to the area that matters the most.
            3) For the vast majority of websites, the way SPDY will be truly implemented is through a CDN, since it’s the front line of your website. Therefore, this test will represent what website owners are likely to actually do.

            Performance testing is a hairy beast, and you can’t possibly capture all the different permutations in one test. This is a view that I believe reflects best how real websites are going to use SPDY, which is why I found it important to test it and share the findings.

          • SilentLennie

            I’m not sure about 3.

            You are probably right in the short term, but in the long term that very much depends on IPv6 deployment, TLS SNI extension deployment (HTTP virtual host support for SSL/HTTPS) and easy availability of SSL certs (and DNSSEC/DANE could solve that).

            What I do wonder about is this: if you take the top 500 sites, you’re actually talking to a lot of CDNs on the backend to download the resources over HTTP. A CDN-to-CDN connection is probably not a very good representation for such a test.

            As Steve mentioned above, you should probably try some sites much, much lower on the Alexa list and compare those. They probably won’t be using a CDN (although a lot of WordPress sites do use wp.com / wordpress.com for certain resources, which I believe is also a CDN?).

          • Guypo

            I agree there are complexities with SSL and CDNs, though I still think most high end sites would always use a CDN, as it provides very significant acceleration and offloading, regardless of SPDY. 

            The point about CDN to CDN is very true, but at least we’re comparing apples to apples since all schemes were tested with that factor. However, it may indeed be interesting to run the same test on sites that don’t use a CDN, and see if the results differ.

  • Marcelo Fernandez

    Great article. I have two questions:

    – Can we reproduce the test entirely? I mean, is the Cotendo CDN available for free/research/testing URLs, like Coral CDN?
    – From what location did you run WPT? Have you tried from a location distant from the Cotendo CDN?

    Regards

    • Guypo

      I’m afraid Cotendo is not a free CDN. It’s been acquired by Akamai, and you can definitely contact Akamai about trying it out (best to do that through this form: http://www.akamai.com/html/forms/sales_form.html)

      The tests were run from EC2’s east-coast location, and routed through Cotendo proxies in New York. I did not try a proxy that’s further away…

  • Matt Welsh

    Hi Guy, nice post! I like the study a lot. One thing I want to point out is that your “mobile” measurements are not taken on a mobile device: They appear to be using a desktop browser with “mobile-like” traffic shaping – correct? I would be curious to see the results duplicated with a real phone, since the performance characteristics on mobile devices are vastly different than with desktop browsers.

    That said – I appreciate that our original results showing the 23% speedup for SPDY on mobile may not hold in a situation with many domains on the page. We were measuring in the context of a forward proxy where everything goes through SPDY – my hope is that over time, all major websites support SPDY so we need to evaluate the performance in that situation. Skate to where the puck is going to be, so to speak.

    • Guypo

      You’re right – the tests were done using the desktop Chrome, running on EC2 instances, not real devices. The tools I had available made it easy to simulate mobile network speeds, but nothing more.

      I completely agree mobile has many additional performance characteristics, and it’ll be very interesting to run the same test (with the differences it had from your previous test) on real mobile devices. Maybe once we enhance Mobitest a bit more :)

      I see a lot of value in testing SPDY through a forward proxy to understand the value the protocol has as a whole, but IMO the problem of 3rd party domains is only going to get worse. We should probably both look at how SPDY & browsers can better handle the current situation, alongside coaching users on best practices when using SPDY.

  • http://techwhack.com operamaniac

    I guess the biggest bottlenecks today are third party widgets like the tweet/like/share buttons! 

  • WangoTang

    Dude is making a whole lot of sense dude. Wow.
    Anony-Net.tk

  • http://rendion.myopenid.com/ render

    Look, people, it’s pretty obvious SPDY isn’t going to do anything for you. Ever heard of software racing hardware? What do you think this is, AOL on dialup? This kind of optimization is a complete waste of time. Gzip your shit and be done with it.

    Google needs to fuck off with pretending they are reinventing the web, it’s bullshit.

    And yes, I’m a troll. The smartest fucking troll you will ever meet. Stop jerking off with SPDY.

  • Mark S.

    Why not simply forget these experimental protocols and focus on HTTP 2.0 instead, and create a STANDARD?

    • Riventree

      Ever wonder whether the lack of replies means you had a really great point or a really bad one? Many of the readers of this article will have taken the time to read the Wikipedia entry and found that SPDY is indeed one of the IETF’s candidates for an HTTP 2.0 framework.

  • http://www.3pmobile.com/ Peter Cranstone

    Great post. Echoes what I’ve been saying as well. Site admins will NOT add SPDY for a loss of speed or even a minor speed bump. We know because we upgraded Mod_Gzip to support BZ2 (25% more compression) and all we heard was a collective “yawn”.

    There is really only one way to make the web go faster and that’s real time context. If you know more about the user, device and location in real time BEFORE you have to respond then you can optimize accordingly.

    Of course getting more data from a stateless protocol is quite challenging, but it has been done.

    • aggieben

      If we could shave 25% off of our page load times with a reasonable amount of effort, we would absolutely jump on it. Even for 10% we’d probably take a stab at it.

  • Testman

    You did not test SPDY at all, but TLS+SPDY!

    In such a condition, all your test shows is that SSL adds overhead to bare TCP. Not big news, IMHO.

    “Real sites” are built with “real HTTP limitation workarounds”; if you keep those workarounds in place, so that SPDY is not at ease, then it is also easy to say that SPDY cannot gain an advantage over HTTP/1.x ;-)

    I think this is an interesting point of view, but I am not sure the goal that made you do this is fulfilled by your test.

    IMHO you should have several categories:
     – real-life sites
     – new-life sites (real-life ones with the many extra hosts, added to augment bandwidth, removed)
     – “cathedral-like” sites (sites that are architected to show the benefit of SPDY)

    Mix all of them and you will get a good benchmark.

    Please update your article title, content and conclusions to reflect the fact that you did not benchmark SPDY, but SPDY+TLS.

    • Guypo

      I disagree with those statements. 

      First of all, SPDY requires HTTPS today, meaning the performance of TLS naturally affects it. The SPDY protocol itself doesn’t require TLS, but there’s no way to use it in the real world without it (since gateways and other middleware software will block/break it). Just to clarify, the connection between the CDN and the origin server went over clear HTTP (no SSL), since SSL was not required there.

      I agree it’s important to separate the boost you get from SPDY from the slow-down you get from TLS, which is why I compared SPDY to both HTTPS and HTTP delivery. That showed me that SPDY was not much faster than HTTPS, which is comparing apples to apples.

      The comparison to HTTP showed that the acceleration from SPDY was less than the slow-down from TLS *for my agents*. My test agents were virtual EC2 instances, which have computation power comparable to low-powered laptops/desktops, but are likely more powerful than practically any smartphone/tablet.

      As for testing modified websites, my point is that the previous tests have effectively done that… I don’t doubt Google indeed got the numbers they published; I’m just saying they’re not reflective of what real websites would get.

      Getting rid of 3rd parties on your page is a great practice regardless of SPDY, and yet websites just keep adding them on for reasons other than performance. I would expect that trend to strengthen, not shrink, and so I think SPDY (and HTTP/2.0) should be designed to optimize real websites, as opposed to expecting websites to change.

      • max

        > not reflective of what real websites would get
        And the fact that you could drop all the complexity behind the workarounds that make original HTTP work close to SPDY’s speed is not important to you?

      • Abhisshek Das

        We disagree with your whole propaganda article, with its fake test and fake results.

      • TestMan

        You have not tested SPDY, but TLS+SPDY; AFAIK your whole article is misleading.

        Plus, you have not indicated any caching/pooling scenarios, which means that behind the scenes your RP is only doing “good ole” HTTP/1.1 with its usual bottlenecks.

        As a consequence, all your test shows is the small “on the wire” optimisation difference, without any of the real-life impact people would get from migrating a server to SPDY.

        I would rather see stats from Twitter or Google on load average, page load time, etc.

        • Jonas

          How would you run SPDY without TLS? Have you even read the standard?

  • Dave Taht

    I would appreciate tests with fq_codel in the mix.

  • http://twitter.com/ironfroggy Calvin Spealman

    This study is very flawed. Talking to a proxy over SPDY doesn’t magically make the connection between that proxy and the original site use the SPDY protocol; everything was still going through HTTP at some point for the majority of these sites. Further, the exclusion of 3rd party content fails to consider how much of it would be 1st party in a think-SPDY-first architecture, where you know you’ll reduce round trips, so putting this content on your own domain altogether would be better anyway.

    In short, while we might not see huge speed increases on day one from SPDY, it has a lot more promise and value than this flawed study suggests.

  • Abhisshek Das

    Copy-pasting a few comments from the Slashdot page http://tech.slashdot.org/story/12/06/17/1351251/spdy-not-as-speedy-as-hyped

    SPDY does not depend at all on CPUs or your “internet speed”. It does depend on the browser (with both Firefox and Chrome supporting SPDY now) and, critically, the server. That last is also why the article author did not see much of a speedup – most content providers don’t support SPDY yet. Going to non-SPDY servers and believing that it will evaluate SPDY for you is absolutely ridiculous.
    I came here to say something like “read the article, the guy is from Akamai and would know to only use servers that serve SPDY – such as many of the Google properties”. But then, I read the fine article (blog) and realized the guy doesn’t know enough to do that and just used “the top 500 sites” – which means a very large chunk of them don’t know what SPDY is and he only used it between himself and his proxy. Great test that was. So your point is well taken. Bogus test means bogus results. 

  • Abhisshek Das

    1. HTTP pipelining support proved very difficult to implement reliably; so much so that Opera was the only major browser to turn it on. It can be enabled in Chrome and Firefox, but expect glitches. By all accounts SPDY’s framing structure is far easier to implement reliably.
    2. With SPDY, it’s not just the content that’s compressed but the HTTP headers themselves. When you look at the size of a lot of URLs and cookies that get passed back and forth, that’s not an insignificant amount of data. And since it’s text, it compresses quite well.
    3. SSL is required for SPDY because the capability is negotiated in a TLS extension. Many people would argue that if this gets more sites to use SSL by default, that’s a Good Thing.
    4. If you’re running SPDY, the practice of “spreading” site content across multiple hostnames, which improves performance with normal HTTP sites, actually works against you, since the browser still has to open a new TCP connection for each hostname. This is an implementation issue more than an issue with the protocol itself; I expect web developers to adjust their sites accordingly once client adoption rates increase.
    5. The biggest gains you can get from SPDY, which few have implemented, come from the server push and hint capabilities; these allow the server to send an inline resource to a browser before the client knows it needs it (i.e. before the HTML or CSS is processed by the browser).
    But as someone else has pointed out, the author’s test isn’t really valid, as he didn’t test directly against sites that support SPDY natively; he went through a proxy.
    The website I work for is supporting SPDY, and the gains we’ve seen are pretty close to the ~20-25% benchmarks reported by others. As many have pointed out, this author’s methodology is way broken. I’d recommend testing against sites that are known to support SPDY (the best-known are Google and Twitter), with the capability enabled and then disabled (you can set this in Firefox’s about:config; Chrome requires a command-line launch with --use-spdy=false in order to do this, though).

    • SilentLennie

      1. A lot of mobile browsers do have HTTP pipelining enabled by default.

  • Larry Masinter

    I’d just like to see some agreement on what “faster” means for HTTP/2.0. It’s hard to discuss improvements if you don’t agree about what you’re trying to improve. http://lists.w3.org/Archives/Public/ietf-http-wg/2012JulSep/0840.html.

  • NoBigGovDuh

    This seems insane. Why not just package pages into an LZMA-compressed file with a small block size that stays on the client’s hard drive? Then the browser can send the hash of that file in the initial request along with the specific page variables, and the server can respond with a new LZMA file if the site has changed, or an update if there are minor changes. If nothing has changed, you send back the compressed data in JSON format. LZMA decompresses very quickly and can really shrink text. Images can be separated out into store-only archives, downloaded in a predefined order to allow loading of needed images first.

  • http://twitter.com/DenisTRUFFAUT Denis TRUFFAUT

    SPDY (2) is slower than HTTP on highly optimized sites – Waiting for SPDY (3) and its push ability to give a final word – http://research.microsoft.com/pubs/170059/A%20comparison%20of%20SPDY%20and%20HTTP%20performance.pdf

  • Martin Marcher

    Isn’t that benchmark flawed because you still fall back to having the same old http(s) connections from your proxy to the origin servers?

  • edwardpro

    We couldn’t say whether it’s faster or slower than another in one browser. Based on SPDY, we may build a faster API web server to serve mobile applications; thus the advantage of SPDY is spread out.

  • http://twitter.com/michaelmd michaelmd

    do you need a CA certificate to use it?

    (none of the articles I’ve seen mention anything about this)

    if it needs that, it’s not for me ..

    I don’t buy into the idea of buying trust …

    or relying on unknown third parties to tell me whether I should trust something
    (are people gullible or what?)

  • peterbooth

    Guy,

    Thanks for taking the time to do this work and present it to the technical community in an open-minded fashion. I think that Akamai are in a very special position in terms of access to data, and it’s great to see contributions for the greater technical good.

    I was a little surprised by the depth of negative reaction. Performance testing is astoundingly difficult and time-consuming, with countless opportunities to break things, and requires much more considered thought than blurting out a response to an online discussion. I’ve done my share of criticizing test results, and it’s also much harder, yet more useful, to try and replicate test results.

    I like the focus on real world sites. I know from experience that many large real world sites are a chaotic agglomeration of years of work by different generations of people using different toolsets and frameworks that are magically working together to present the illusion of a coherent website. These sites won’t get rewritten for SPDY any more than they were rewritten for CSS, HTML5, IE9, Chrome, etc. It will be an interesting piecemeal, step by step process.

    Again, thanks for taking the time to do this experiment.

  • Joe Donahue

    Can you make the list of websites used in the study available? 18 redirections seems like an awful lot compared to the sample of websites I have looked at.