Perspectives on Client-Side Rendering

Client-side rendering, a technique in which a website is loaded from the server just once and subsequent views are rendered via JavaScript, is often associated with rich web applications that offer interactive experiences. It generally allows fast transitions and efficient loading of additional data.

Facing criticism

Lately, this technique has increasingly been used on more “regular” websites, with the help of frameworks such as React, Angular, and Ember. It has often been criticized as unfriendly to Search Engine Optimization (SEO) and dismissed as “over-engineering”. Such websites needed to load and execute hundreds of kilobytes of JavaScript just to render content on the screen, and the time to render could be many times slower, especially on older Android devices. This problem has been largely solved by prerendering the application on the server and serving ready-made HTML alongside the JavaScript for each request. That way, time to render can be roughly the same as on standard websites. But time to render is not the only obstacle: these sites still require much more JavaScript to function, which takes longer to load and longer to execute. This delays the time to interactivity.

Our WordPress theme Scribe is an example of a website that uses this technique. It prerenders on the server via PHP and WordPress, and on the client side it then becomes an Ember.js web application.

We believe 2017 has the potential to deal with this last issue once and for all, so that such web apps will have not only fast transitions but also a fast initial load. A lot of work is currently going into code-splitting, dead code elimination, and other techniques that could dramatically reduce the size of JavaScript bundles. It is also easier than ever to incorporate a Service Worker, which allows efficient caching of assets so that the web application boots much faster on later visits. Web browsers keep getting better at JavaScript optimization, and people slowly get more and more powerful devices.
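As a rough illustration of the Service Worker caching mentioned above, a cache-first strategy for static assets might look like the sketch below. The cache name, asset list, and helper name are our illustrative assumptions, not Scribe's actual code.

```javascript
// Illustrative asset list and cache name (assumptions, not Scribe's real config).
const CACHE_NAME = 'scribe-assets-v1';
const PRECACHED_ASSETS = ['/assets/app.js', '/assets/app.css'];

// Pure helper: should this request be answered cache-first?
function isPrecachedAsset(pathname) {
  return PRECACHED_ASSETS.includes(pathname);
}

// Inside the Service Worker itself, the handlers would look roughly like:
//
// self.addEventListener('install', (event) => {
//   event.waitUntil(
//     caches.open(CACHE_NAME).then((cache) => cache.addAll(PRECACHED_ASSETS))
//   );
// });
//
// self.addEventListener('fetch', (event) => {
//   const url = new URL(event.request.url);
//   if (isPrecachedAsset(url.pathname)) {
//     event.respondWith(
//       caches.match(event.request).then((hit) => hit || fetch(event.request))
//     );
//   }
// });
```

On repeat visits, such a worker serves the application shell from the cache immediately, which is what makes later boots so much faster.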

Exploring the possibilities

Faster transitions are not the only benefit of client-side rendering and single-page applications. These web apps can also efficiently load content in advance, allowing you to keep browsing even if the connection drops. The offline-first approach can make this efficient: you can visit a blog or news website no matter how fast your internet connection is. At worst, you get the old content loaded on a previous visit before the new articles come in. All the while, the web app can tell you what is happening: whether you have the latest data, whether you are offline or online, and when the data was last synchronized.
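The status line described above could be driven by a small helper like the following sketch; the function name and message wording are our assumptions, not an existing API.

```javascript
// Hypothetical helper: given the browser's connectivity flag and the time
// of the last successful data sync, produce a message the app can show.
function syncStatusMessage(isOnline, lastSyncIso) {
  if (!isOnline) {
    return lastSyncIso
      ? 'Offline - showing content synced at ' + lastSyncIso
      : 'Offline - no cached content yet';
  }
  return 'Online - content is up to date';
}

// In the browser this would be wired to connectivity events, roughly:
// window.addEventListener('online',  () => show(syncStatusMessage(true, lastSync)));
// window.addEventListener('offline', () => show(syncStatusMessage(false, lastSync)));
```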

Additionally, such a web app has more control over how the website behaves. One example is efficient loading of images. Currently in Scribe, when navigating from an article list to an article detail, the app looks for previously loaded versions of the image and displays the one with the highest resolution while it loads the larger one in the background and swaps it in place. Similarly, when transitioning from an article detail back to the article list, why load a small version of an image when a bigger version has already been loaded? Displaying many high-resolution images at once could cause performance issues, but the algorithm can at least find a good middle ground.
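The selection step of that idea can be sketched as a pure function: from the set of image widths already loaded, pick the best candidate for a slot of a given width. The function name and the exact heuristic are our illustration, not Scribe's internal implementation.

```javascript
// Pick the best already-loaded image width for a slot of targetWidth pixels.
function bestLoadedWidth(loadedWidths, targetWidth) {
  const sorted = [...loadedWidths].sort((a, b) => a - b);
  // Prefer the largest loaded version that does not exceed the target...
  const fitting = sorted.filter((w) => w <= targetWidth);
  if (fitting.length > 0) {
    return fitting[fitting.length - 1];
  }
  // ...otherwise reuse the smallest loaded version above the target:
  // it is already in memory, so reusing it beats fetching a new small file.
  return sorted[0];
}
```

The app would display this candidate immediately, then fetch the properly sized version in the background and swap it in once it finishes loading.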

We can also experiment with detecting connection speed and device performance. Potentially, lower-resolution images could be served over slow connections. This approach has proven unreliable in the past, but it is good to know we can revisit the idea later and see whether Service Workers and perhaps some new browser APIs can help.
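One of those newer browser APIs is the Network Information API, which is only partially supported across browsers. A hedged sketch of the decision it could drive might look like this; the threshold and function name are our assumptions.

```javascript
// Decide whether to request smaller image variants based on the
// connection's reported effective type ('slow-2g' | '2g' | '3g' | '4g'
// in browsers that support the Network Information API).
function preferSmallImages(effectiveType) {
  return effectiveType === 'slow-2g' || effectiveType === '2g';
}

// Browser usage would need to be guarded, since navigator.connection
// is not universally available:
// const conn = navigator.connection;
// const useSmall = conn ? preferSmallImages(conn.effectiveType) : false;
```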

Current state of things

Currently, our theme Scribe still has a longer initial load time than well-optimized static sites. The time to render is fast, usually under 0.8 s in Chrome and 0.5 s in Edge.

[Benchmark chart: Time to First Byte, Time to Render, and Time to Interactivity for Scribe compared to the WP Median]

  • WP Median – Median of the top 10 best-selling themes on themeforest.com for 2017
  • TTI – Time to Interactivity
  • TTR – Time to Render
  • TTFB – Time to First Byte

(* Time to first byte is subtracted from the other metrics so that differences in server response time do not skew the results; in the real world, both time to render and time to interactivity would generally be 0.3–1 s longer.)

(** EDIT 18 March 2017: Time to Interactivity of Scribe has since improved, but this is not evident from the Moto G1 numbers. We have deferred quite a lot of non-essential logic via requestIdleCallback, but most benchmarks still register this logic as blocking interactivity. Benchmarking TTI remains problematic.)
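Deferring work via requestIdleCallback, as the edit note describes, is typically wrapped in a small helper with a fallback for browsers that lack the API; the helper name and fallback delay below are our illustrative choices.

```javascript
// Run a non-essential task when the browser is idle, falling back to
// setTimeout where requestIdleCallback is unavailable (e.g. Safari at
// the time of writing, or non-browser environments).
function deferNonEssential(task) {
  if (typeof requestIdleCallback === 'function') {
    requestIdleCallback(task);
  } else {
    setTimeout(task, 1);
  }
}
```

Because the task runs only after the main thread is free, it no longer blocks interactivity in practice, even if some benchmarks still count it against TTI.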

In terms of time to render, Scribe does not fall behind static websites. Time to interactivity is still very good on desktop, but the difference is noticeable on older Android phones. Newer Android phones and iOS devices usually have a time to interactivity very similar to laptops and desktops.

The first load is definitely very important. If the website freezes, is not interactive, or takes a long time to render, the user may give up, or be discouraged from navigating to additional pages on the assumption that they would load just as slowly. Recent studies directly link initial loading time to a website's conversion rate and revenue.

On the other hand, there is currently no data on the benefit of fast or instant page-to-page transitions. But I personally believe the potential is huge. Being able to browse a site quickly, without any delay, feels very powerful. Whenever I visit a website with client-side rendering, I feel encouraged to look around, click links, and browse.

Hopefully, we will soon be able to show with data that client-side rendering leads to longer browsing sessions and higher conversions (such as email subscriptions, in the case of Scribe).

It is important to consider which visitors are truly valuable and for which of them we should optimize our technology. A lot of visitors generally leave within a matter of seconds. Some of them might stay if the site loaded faster, but many would leave regardless. It may actually be worth focusing on the significant minority of visitors who stay longer, browse around, and return later. Scribe strives to do exactly that: build an audience.

We believe this is a valid approach for websites of various kinds. In the case of Scribe, that means blogs with a well-defined topic. The only requirement is that the articles follow a certain pattern, so the visitor is encouraged to read more and return later. A good use case for Scribe could also be blogs that focus on media content, be it images, video, or audio.

The future of Scribe

Currently, Scribe realizes only a fraction of the possibilities outlined above. We have yet to find a good way to integrate a Service Worker with Ember and WordPress. We also plan to make further use of client-side rendering, such as support for audio and video posts where the media stays pinned to the top of the screen while you navigate the site. We are also working on better integration with other WordPress optimizations, such as caching. There is definitely a lot of space to explore. Feel free to subscribe below to follow our progress.


The beta version of Scribe is currently available at meetserenity.com/scribe. If you have any feedback, let us know!