Introducing LQIP – Low Quality Image Placeholders

For web pages today, images are a real challenge.

On one hand, images account for over 60% of page weight, which means they play a major role in overall page load time and motivates dev teams to make images as small (byte-wise) as possible. On the other hand, new devices boast retina displays and higher resolutions, and designers are eager to leverage these screens to provide beautiful, rich graphics. This trend, along with others, led to roughly 30% growth in the average image kilobytes per page in the last year alone.

This conflict is partly due to what I think of as “Situational Performance”. If you’re on a fiber connection – like most designers – the high quality images won’t slow you down much, and will give you a richer experience. If you’re on a cellular connection, you’ll likely prefer a lower quality image to a painfully slow page.

Fortunately, not all hope is lost.
A few months ago we created a new optimization called Low Quality Image Placeholders, or LQIP (pronounced el-kip) for short. This optimization proved useful in bridging the gap between fast and slow connections, and between designers and IT, so I figured it’s worth sharing.

Core Concept

LQIP’s logic is simple. In a sense, this is like loading progressive JPEGs, except it’s page wide. There are more implementation details below, but it boils down to two main steps:

  • Initially load the page with low quality images
  • Once the page has loaded (e.g. in the onload event), replace them with the full quality images

LQIP gives us the best of both worlds. On a slow connection, the user will load a fully usable page much faster, giving them a significantly better user experience. On a fast connection, the extra download of the low quality images – or the delay of the high quality ones – does not lead to a substantial delay. In fact, even on the fast connection the user will get a usable page faster, and they’ll get the full rich visuals a moment later.

Real World Example

Here’s an example of the Etsy homepage loaded with and without LQIP. Note that Etsy isn’t actually applying this optimization – I made a static copy of their homepage and applied the optimization to it using Akamai FEO.

On a DSL connection, LQIP made the page visually complete about 500ms (~20%) sooner, while on FIOS it was only 100ms (~10%) faster. This acceleration came from the fact that the overall page weight before onload dropped from ~480KB to ~400KB thanks to the lower quality images. All in all, not bad numbers for a single optimization – especially on a highly optimized web page like Etsy’s home page.

DSL Connection, Without LQIP (above) vs. With LQIP (below)

[Image: lqip-etsy-dsl]

FIOS Connection, Without LQIP (above) vs. With LQIP (below)

[Image: lqip-etsy-fios]

The faster visual isn’t the whole story, though. The LQIP page you see actually uses lower quality images than the other. While the LQIP page weighs 80KB less before onload, it weighs 40KB more once the full quality images have been downloaded. However, the page is definitely usable with the low quality images, keeping the user from idly waiting for the bigger download. You can see an example of a regular and low quality image below – I didn’t turn quality down too far.

Image Before LQIP (15.6 KB): [lqip-sample-before]

Image After LQIP (5.2 KB): [lqip-sample-after]

It’s just intelligent delivery

LQIP also helps on the political front, by bridging the gap between IT/Dev and the designers.

The designers are happy because their full quality images are shown to the user, unmodified. Sure, the images are a bit delayed, but the end result usually shows up within a few seconds, and their handiwork remains untouched.

IT is happy because they deliver a fast page to their users, even on slow connections. The low quality images may just be placeholders, but (assuming quality wasn’t too drastically reduced) the page is fully usable long before the full images arrive.

Implementation Tips

LQIP implementation includes three parts:

  1. Prepare the low quality images (server-side)
  2. Load the low quality images (client-side)
  3. Load the high quality images (client-side)

Step #1 varies greatly by system. You can create the low quality images in your CMS, generate them in your build system, or adjust quality in real time using tools like Akamai Edge Image Manipulation.
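
If your build system is an option, here’s a minimal sketch of what step #1 could look like – a small Node.js script shelling out to ImageMagick’s convert (which you’d need installed). The "-low" suffix convention, the images/ folder and the quality value of 15 are all arbitrary choices for illustration, not part of any standard:

    // Build-time sketch: write a low quality "-low" variant next to each original.
    // Requires ImageMagick's `convert` on the PATH; suffix and quality are arbitrary.
    var execFile = require('child_process').execFile;
    var fs = require('fs');

    fs.readdirSync('images')
      .filter(function (f) { return /\.jpe?g$/i.test(f) && !/-low\.jpe?g$/i.test(f); })
      .forEach(function (f) {
        var src = 'images/' + f;
        var dst = src.replace(/\.(jpe?g)$/i, '-low.$1');
        execFile('convert', [src, '-quality', '15', dst], function (err) {
          if (err) { console.error('Failed to convert ' + src + ': ' + err.message); }
        });
      });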

Step #2 is simple – just load the images. You can do so with plain img tags, CSS, or your favorite scripted image loader. If the images are small enough, you can even inline them (in line with Ilya’s recommendations). In Akamai, we use LQIP in combination with loading images on-demand, reducing the number of requests as well.
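
To make the later examples concrete, the markup for such a low quality image could look something like this – the data-full-src attribute is just a name I’m making up for this sketch, used to record where the full quality version lives:

    <!-- The page initially references only the low quality file;
         data-full-src (a made-up attribute name) points at the full quality URL -->
    <img src="/images/hero-low.jpg" data-full-src="/images/hero.jpg" alt="Hero banner">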

Step #3 is where a new script probably comes in. A simple flow would be (a minimal sketch follows the list):

  1. Create a JS function that iterates the IMG tags on the page, and for each:
    1. Determines the full quality image URL (using a naming convention or an extra attribute on the IMG tag)
    2. Modifies the SRC attribute to point to this full URL (which will reload the image)
  2. Call your JS function in the onload event
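
Here’s a minimal sketch of that flow, assuming the data-full-src attribute from the markup example above:

    // Step #3 sketch: swap every low quality image for its full quality version
    function upgradeImages() {
      var imgs = document.getElementsByTagName('img');
      for (var i = 0; i < imgs.length; i++) {
        var fullSrc = imgs[i].getAttribute('data-full-src');
        if (fullSrc) {
          imgs[i].src = fullSrc; // triggers the full quality download
        }
      }
    }

    // Kick off the swap only once the page (and its low quality images) has loaded
    if (window.addEventListener) {
      window.addEventListener('load', upgradeImages, false);
    } else if (window.attachEvent) {
      window.attachEvent('onload', upgradeImages); // IE8 and older
    }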

If you want to get fancy, you can load the high quality image into a hidden IMG tag, and then swap it in for the low quality image once the hidden image’s onload event fires. This prevents the low quality image from disappearing before the full quality image is fully downloaded, which can hurt the user experience.
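
A sketch of that fancier variant, again assuming the data-full-src attribute – the visible image’s src is only touched once the off-screen loader has finished downloading:

    // "Fancy" variant: keep the placeholder in place until the full quality
    // image has fully downloaded, then swap - the swap itself is instant
    // because the image is already in the browser cache.
    function upgradeImage(img) {
      var fullSrc = img.getAttribute('data-full-src');
      if (!fullSrc) { return; }
      var loader = new Image(); // off-DOM, effectively a hidden IMG tag
      loader.onload = function () {
        img.src = fullSrc;
      };
      loader.src = fullSrc;
    }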

Lastly, if you use CSS to load your images, you can also swap the low quality images for the high quality ones by loading and applying a new CSS file.
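
For example, a minimal sketch that appends a second stylesheet at onload – hq-images.css is a hypothetical file whose rules override the same selectors with the full quality image URLs:

    // CSS variant sketch: apply a "high quality" stylesheet once the page has loaded
    function loadHighQualityCss() {
      var link = document.createElement('link');
      link.rel = 'stylesheet';
      link.href = '/css/hq-images.css'; // hypothetical override stylesheet
      document.getElementsByTagName('head')[0].appendChild(link);
    }

    if (window.addEventListener) {
      window.addEventListener('load', loadHighQualityCss, false);
    } else if (window.attachEvent) {
      window.attachEvent('onload', loadHighQualityCss);
    }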

Summary

I’m pretty excited about LQIP.

It helps bridge the gap between two conflicting and growing needs, works on old and new browsers alike, and is (relatively) easy to implement. It’s a “perceived performance” optimization, which is how we should all be thinking – and I believe it’s an optimization everybody should apply.

Posted on April 2nd, 2013
  • http://twitter.com/the_real_mkb mkb

    Sounds like the old LOWSRC attribute on IMG tags. I was about to say “hey don’t browsers do this?” but I didn’t realize they had all dropped support for it!

    • Guypo

      Yep, it’s lowsrc all over again, and in fact lowsrc could have been a way to implement it… if modern browsers supported it.

      Still, one notable difference is the recommendation to wait until onload before downloading the full quality images. AFAIK, lowsrc would download the high quality version of the image as soon as the low quality one arrived, if not in parallel.

  • http://twitter.com/bjankord Brett Jankord

    I like this solution a lot. I played around with a similar concept when I was looking at a better way to load hi res images. Moving the img swap to the onload makes perfect sense, improves the perceived perf.

  • Steve Souders

    Great post and great optimization. The other thing I like about this is it works well with browser preloaders – the preload scanner sees the normal IMG SRC and starts downloading the low res images earlier (in parallel with scripts etc). Many responsive image solutions, otoh, don’t benefit from browser preload scanners.

    I would upgrade the suggestion to swap the high res image *after* it’s done downloading from “fancy” to “critical”. But do you know how many browsers clear the low quality image before the full quality image is fully downloaded? Here’s a test page: http://stevesouders.com/tests/image-swap.php . Chrome 26 and Firefox 19 both leave the old image in place until the new one is fully downloaded.

    • tnorthcutt

      Safari (6.0.3) also leaves the old image in place until the new one is ready.

    • Guypo

      I believe the issue was not with images that are slow to return, but rather with big images. For example, if you used a low quality version of a big “baseline” JPEG, and then loaded the original, the low quality image would disappear as soon as the first bytes of the full JPEG arrived, painting that JPEG from the top.

      • Steve Souders

        ahhh – that’s likely – In which case I would definitely recommend the “fancy” hack. ;-)

  • http://twitter.com/penibelst Anatol Broder

    This is a great technique I used in my latest small responsive project, Geisterstunden. Server side there is a PHP script (Timthumb) for resizing the original image to the desired size just by changing the URL’s parameters.

    BTW, my javascript replaces the image URLs in the resize event too.

    The website feels pretty fast. The visitor sees the low quality file (300×100 px, 16 KB) immediately and can start to interact with the site: scroll, read, etc. After the higher resolution file (e.g. 1166×389 px, 181 KB) is loaded, all the visitor may notice is the blurry image getting sharp.

    The small overhead of loading two images instead of one is negligible. In return you get a low latency user experience. My decision to use this technique was influenced by the article Cheating Or Good Design?

    • http://twitter.com/penibelst Anatol Broder

      I have published my JS + PHP solution called Imadaem on Github. Feel free to try it.

  • http://www.facebook.com/nikolay.matsievsky Nikolay Matsievsky

    Why not progressive JPEG?

    • Guypo

      Progressive JPEGs still delay the rest of the page. For example, if you have 7 images (even if they’re all progressive jpegs), the first 6 would hog all the connections, and the 7th would not start until one of the 6 completes.

      LQIP is similar in concept, but applies page wide – first download the low-quality images across the page, and only when all of that is done (plus the other page elements), sharpen the images.

      It’s possible you can accomplish this with SPDY and progressive JPEGs, by sending down only the first bytes of each progressive image as higher priority than the rest. However, doing so would be pretty complicated, and will still delay the page load time.

      • http://www.facebook.com/nikolay.matsievsky Nikolay Matsievsky

        Not very useful if load time is huge – you need to wait 10-15s to see the high quality images. Also not very useful when load time is small :) The most useful case is the intermediate one, when load time is 4-10s – that’s where LQIP can play.

        • Guypo

          I would argue it’s super useful when load time is huge, since the page is completely usable with the low quality images.

          And even a fast page may be made slow in bad conditions…

          I think it’s the least useful for a page with no images, or with only small images, which can be fetched in one round trip anyway. But such pages are not common.

  • Tim Vereecke

    Hi, very interesting technique which I’m investigating to add to my site.

    Would you recommend this technique as well on pages with a large number of smaller content images? I’m thinking of the following use cases:

    - User avatars +/- 60x60px? On some articles or news feeds you can quickly have 50-60 different avatars.

    - Or product thumbnails (e.g. some product searches return 100+ thumbnails of 120x90px, each around 4-5KB)

    (I’m talking about content images and not styling images which would be covered by CSS sprites)

    One concern I have related to this is the caching benefit being reduced: when visiting another page, a large part of the avatars might already be in the cache (low-res + high-res). But I guess you would still see the low-res versions until the “onload” event?

    Thank you for your feedback,

    Tim

    • Guypo

      A decent guess is that if a high quality avatar image is in cache, the low quality version would be in cache too. So the low quality image would still load fast, but the high quality would load a bit late. You can also trigger the high quality image download in the onload event of the image itself – that will delay onload, but get the full images loaded more quickly.

      Note that if you have a lot of small images, you should consider combining them using Data URIs or Spriting, at least into groups. It hurts caching, but will save a lot of HTTP overhead for non-cached resources, so consider (and measure) the tradeoff.

      • Tim Vereecke

        Thank you Guy, I will do some tests/measurements

  • Andy

    Hi!

    I’ve been trying this technique out, and although I get the benefits of the visual boost, I’m concerned about the doubling of requests and the (sort of) increase in the total weight of the downloaded images.

    In my tests, the file size before LQIP was 328KB (one 900px JPEG with quality set to 60, 1 request in total, any screen size).

    The file size after LQIP was applied was 487KB (widescreen) & 173KB (small screen) – one 900px/320px JPEG with quality set to 60, plus one 900px/320px fallback image with quality set to 15, 2 requests in total.

    So the benefits of the visual boost and decreased image weight on a small screen are actually reversed on a wide screen, where the file size increased dramatically, plus the extra low quality image being requested.

    Are my concerns valid, or am I doing it wrong?

    • Guypo

      I would argue the visual boost is the main goal, so if you’ve achieved a page that is perceived to be faster, you’re good. It’s true you’ll download some excess bytes, but since those happen in the background and only when needed, that extra weight doesn’t seem worrisome to me.

      In addition, it looks like you chose to deliver a bigger “final” image when using LQIP than the one you chose as “good enough” before. If you think the bigger screen indeed benefited from the bigger image, it means you also got a richer final visual when using LQIP, without sacrificing visual load time – another win. If you don’t think it matters, then I suggest you revert your high-res image to the 328KB version you originally used.