
Google Hackathon - and how to win one

On January 16th, 50 people from The Reference went to the GooglePlex in Brussels for the first (and, for now, only) European Google Mobile Speed Hackathon: a whole day of learning and hands-on efforts to improve the speed of some of our mobile projects. At least 15 teams would be competing for the best performance boost on our Sitecore, Umbraco and Drupal projects.

 

[Image: acceptance speech by Jeroen]

 

To be honest, since I'm familiar with PageSpeed Insights, I didn't expect to see any amazing new insights about speed improvement. Most of our projects already scored in the 80 range, so promises of high boost factors seemed unlikely.

Common practices such as bundling and minifying scripts/CSS, using HTTPS, applying efficient cache policies and minimizing redirects should be no-brainers. These are all things we already do automatically.

Introduction

The day started with a presentation by Antoine Brossault, Mobile UX Manager @ Google, about the issues that cause a low page speed. He then introduced techniques showing us how to resolve those issues and improve mobile page speed. There was a second presentation later in the day on AMP and PWA, but I'll admit that by then I was already solely focused on optimizing our project.

The most important part of the optimization project for us was the load time of the first page and - specifically - everything above the fold. During the Google Hackathon, we used Lighthouse to measure the page speed of our project. The result for our project?

Ugh, not so good ... Turns out Google recently changed the scoring system and is now expecting a lot more from us.

Lighthouse speed insights

Lighthouse immediately makes a bunch of suggestions on how to improve your score. I won't go into too much detail here - just try it out on your own site. Below, I'll document the techniques we used to improve our score.

Images

Lighthouse suggests

  • lazy loading your images (to prevent bandwidth saturation in those first crucial seconds)
  • compressing your images as much as possible (makes sense, but still)
  • using WebP for all images (Serve images in next-gen formats)

The last one - unfortunately - is only supported in Chrome, but you could provide a fallback list via the <picture> element, so Chrome finds the .webp and other browsers (which do not support it) get a .png. I did not explore this option due to lack of time.
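For the record, such a fallback could look like the sketch below, using the standard <picture> element (the file names are made up). Browsers that understand WebP pick the first source; the rest fall back to the plain img.

```html
<picture>
  <source srcset="/media/hero.webp" type="image/webp">
  <!-- fallback for browsers without WebP support -->
  <img src="/media/hero.png" alt="Hero image">
</picture>
```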

The reason this one is not so simple is that all our projects use a CMS (Sitecore or Umbraco), and you can't easily expect content editors to upload properly sized images in 2 different formats. (You can EXPECT it, but it won't happen :-)) The solution would be to add a module that does the conversion for you. Compression, yes: Dianoga (Sitecore) and Tinifier (Umbraco). Converting to WebP? I'll let you know.

Compress images

Lighthouse-suggestion : Properly size images, Efficiently encode images

Compressing ALL images in your media library is easier said than done when you only have a few hours at a hackathon to show your new and improved site.

Luckily, I had prior experience with Tinifier (an Umbraco plugin that uses the APIs of https://tinypng.com). I quickly installed it and let it run in the background for about an hour while I worked on other tasks.

Lazy load images

Lighthouse-suggestion : Defer offscreen images

While I was installing and configuring this, Koen - my frontend teammate - installed LazySizes, a JavaScript library for lazy loading your images. It postpones loading images until they are needed, i.e. images that only become visible once the user starts to scroll down. All you need to do is add class="lazyload", use data-attributes, and load the lazysizes script, of course.
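In practice that looks something like this (paths are made up; see the lazysizes documentation for the details):

```html
<script src="/scripts/lazysizes.min.js" async></script>

<!-- data-src instead of src: lazysizes swaps it in
     once the image approaches the viewport -->
<img data-src="/media/photo.jpg" class="lazyload" alt="A lazily loaded photo">
```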

[Image: lazy load images markup]

Defer render-blocking resources

Render-blocking scripts

Lighthouse-suggestion : Eliminate render-blocking resources

When your browser encounters a script tag, it pauses rendering the page, downloads the script, parses it and executes it, and only then continues rendering the page (hence "render-blocking"). You'll want to avoid this as much as possible. If your script does something crucial that needs to happen as soon as possible, consider using async (e.g. Google Tag Manager). If it's less crucial and only runs after the document has loaded anyway, defer it to the end.
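As a sketch (URLs and file names are made up), the difference looks like this:

```html
<!-- async: fetched in parallel, executed the moment it arrives (order not guaranteed) -->
<script async src="https://www.googletagmanager.com/gtm.js?id=GTM-XXXXXX"></script>

<!-- defer: fetched in parallel, executed in document order after parsing finishes -->
<script defer src="/scripts/bundle.min.js"></script>
```

Note that async and defer only apply to external scripts; inline script tags always block.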

We are used to having 2 JavaScript bundles: one at the top of the page, one at the bottom. Using defer yielded much better results, but it took some time to split the top bundle into a critical part and a deferrable part.

[Image: render-blocking resources]

Always load GTM at least async. Don't let the marketing guys tell you otherwise - have them modify their scripts to work with this, not the other way around.

Unused CSS

Lighthouse-suggestion : Defer unused CSS

This one is a little more tricky, and we didn't manage it before hitting the deadline, but we implemented it afterwards. The trick is to:

  • identify the bare minimum CSS that is necessary to properly style the page above the fold
  • put this INLINE
  • lazy load the rest of your CSS (using e.g. LoadCSS)
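Put together, the pattern looks roughly like this (file names are made up; the onload trick is the loadCSS "preload" pattern from Filament Group):

```html
<head>
  <!-- 1. Bare-minimum CSS for above-the-fold content, inlined -->
  <style>
    /* critical styles here */
  </style>

  <!-- 2. The rest of the CSS, loaded without blocking render -->
  <link rel="preload" href="/css/site.css" as="style"
        onload="this.onload=null;this.rel='stylesheet'">
  <noscript><link rel="stylesheet" href="/css/site.css"></noscript>
</head>
```

Browsers that don't support rel=preload need the small cssrelpreload.js polyfill that ships with loadCSS.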

Every (milli)second counts

By now, the suggested improvements no longer promised speed boosts of multiple seconds, but rather a few hundred milliseconds at a time. Lighthouse-suggestions : Preload key requests, Prefetch & DNS-prefetch, ...

Preload fonts

By preloading fonts, you tell the browser "Hey, I'll be needing this shortly. If you have time, download it now. If not, just do it later."

Don't forget the crossorigin attribute!
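A minimal sketch (the font path is made up):

```html
<!-- crossorigin is required when preloading fonts, even same-origin ones;
     omit it and the browser ends up fetching the font twice -->
<link rel="preload" href="/fonts/opensans.woff2" as="font" type="font/woff2" crossorigin>
```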

[Image: crossorigin attribute markup]

DNS-prefetch

By prefetching the DNS for things like Facebook, Google Tag Manager, ... IF there are resources to spare, you shave another fraction off the top by the time those scripts/fonts/... need to be loaded.
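For example (the hosts here are just typical third parties, not necessarily the ones your site uses):

```html
<link rel="dns-prefetch" href="//www.googletagmanager.com">
<link rel="dns-prefetch" href="//connect.facebook.net">
<link rel="dns-prefetch" href="//fonts.gstatic.com">
```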

[Image: DNS prefetch markup]

The result?

The resulting score, right before the deadline and with no time left to inline critical CSS, was very satisfying: we now scored 90% on performance.

The 100% score on best practices (2nd score from the right) should be mandatory for all projects. The reason we started the day at 93% was a 404 on a fallback image that wasn't even visible except in older browsers. So for these small oversights too, Lighthouse is a very handy tool.

After we presented this result to the other teams and discussed what we did, all the teams voted on the winner of the day, and our team was awarded that honour.

[Image: Lighthouse result after optimization]

A little perspective

Some factors worked in our favour compared to some of the other projects:

  • Our project was in Umbraco. I rebooted it at least 50 times trying stuff out - not something you easily do with e.g. Sitecore (sorry guys)
  • Our project was not yet in production (unlike projects that have been running for years). This means GTM wasn't yet bloated with heavy scripts, which often contain slowing factors and huge redirect chains. --> Have your marketing guys clean this up once in a while.
  • Our project was only 6 months old, so there were no legacy scripts - the kind that increase load times by loading megabytes of script, most of which you don't even know is still needed.

Bonus - applying what we've learned

After this, I was itching to put my newfound knowledge to good use, so I revisited my last project (Skyn) and checked its initial score and suggestions.

[Image: Skyn's initial Lighthouse results]

Wait, 17%? AND I forgot to check minification of JavaScript AND CSS?

Alright, to the batmobile!

  • Minify scripts and CSS
  • Compress and lazyload all images
  • Check text compression (sad story, read on)
  • Defer scripts

With less than 2 hours of effort, the result is below. The site also feels snappier than it used to.

[Image: Skyn's Lighthouse results after optimization]

Planned efforts for when there's more time: split the scripts into a critical and a non-critical bundle and defer the latter; split the CSS into critical inline CSS and loadCSS the rest.

Dynamic compression

Regarding text compression: it WAS enabled, just not working all the time. Some of you may know that dynamic compression on IIS is disabled when CPU usage goes above 90% and only re-enabled when it drops below 50% again. Skyn is an Umbraco Cloud project that never goes higher than 10%, but it runs on a shared server with other projects (not ours, just unknown other Cloud projects). Compression is only active about 60% of the time and never during peak hours, so this is an issue. I'm in contact with Umbraco to fix it. I'd suggest raising the lower threshold from 50% to 75% or so.
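For reference, those thresholds live in the httpCompression section of IIS configuration. The attribute names below are the real IIS ones; the values show the suggested tweak (the defaults are 90 and 50). Note that this section is normally locked at the server level (applicationHost.config), so on shared hosting like Umbraco Cloud you can't change it yourself:

```xml
<system.webServer>
  <!-- compression switches off above 90% CPU and, with this tweak,
       back on as soon as CPU drops below 75% (default: 50%) -->
  <httpCompression dynamicCompressionDisableCpuUsage="90"
                   dynamicCompressionEnableCpuUsage="75" />
</system.webServer>
```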

Play with it!

How about you? Let us know whether a few simple tricks and a couple of well-spent hours can improve your page speed.

Good luck!
