Dust.js: Client-Side Templating

On Hacker News today there was a mention of how LinkedIn is using the Dust.js templating system. Dust.js is doing what XSLT always promised to do but never delivered on.

In the post, they make a pretty compelling argument for client-side templating. Their problems are similar to ones I’ve seen countless times, anywhere a services-based architecture is employed, like at TV.com.

At LinkedIn they have a services-based architecture, with applications written in a slew of frameworks across multiple languages. Because of this, it’s hard to reuse visual components. Requiring that all components be written in the same language reduces developer productivity and hinders rapid prototyping of new features, but reimplementing the templates in each language makes maintenance a chore. What LinkedIn settled on was to require only that applications produce JSON responses (lightweight and efficient to generate) and to rely on client-side (browser) rendering to combine that data with layout templates in JavaScript. This shifts the expensive cost of rendering to the browser (totally scalable), and allows LinkedIn to cache the layout templates on CDNs to accelerate their delivery (in addition to the CSS, images, JS, etc.)
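The flow can be sketched with a toy substitution function. To be clear, this is not Dust.js itself (Dust compiles templates and supports sections, partials, filters, and asynchronous rendering); it only illustrates the division of labor between a cacheable template and a JSON response:

```javascript
// Toy client-side templating sketch, NOT the real Dust.js engine.
// A static template (cacheable on a CDN) plus a JSON response from a
// backend service are combined into HTML in the browser.

// Template as it might be served from a CDN (Dust-style {placeholders}).
const template = '<li class="person">{firstName} {lastName}</li>';

// JSON payload as an application service might return it (invented data).
const response = { firstName: "Ada", lastName: "Lovelace" };

// Minimal render pass: substitute each {key} with the matching JSON value.
function render(tmpl, data) {
  return tmpl.replace(/\{(\w+)\}/g, (_, key) =>
    key in data ? String(data[key]) : ""
  );
}

console.log(render(template, response));
// -> <li class="person">Ada Lovelace</li>
```

The template never changes per-request, so it can be fetched once and cached; only the small JSON payload travels on each request.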

Some might argue that well-written HTML+CSS is essentially the same thing. It’s not. First, HTML is verbose; the amount of data needed to express even a simple document far exceeds that of JSON because it includes layout information. Then, generating the HTML is an expensive operation on the server, involving parsing source templates (ERB, JSP, Jinja, Smarty, etc.) and assembling them into a string with lots of concatenation. Lastly, no matter how “light” you make the HTML, you’re still mixing layout with data, so you can’t statically cache the layout on a CDN without using ESI (fancy XSLT).
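A quick, contrived illustration of the size argument. The markup and field names below are invented for the comparison; real pages amplify the gap considerably:

```javascript
// The same record expressed two ways: server-rendered HTML (data + layout
// baked together) versus bare JSON (data only). Example data is made up.
const html =
  '<div class="profile"><h2 class="name">Ada Lovelace</h2>' +
  '<p class="title">Analyst, Analytical Engines</p></div>';

const json = JSON.stringify({
  name: "Ada Lovelace",
  title: "Analyst, Analytical Engines"
});

// The HTML repeats its layout markup on every response; with JSON the
// layout lives in a template that is fetched once and cached on a CDN.
console.log("html bytes:", html.length, "json bytes:", json.length);
```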

Compare this to simply requiring apps to produce JSON responses. JSON is lightweight and efficient to generate. Most scripting languages have native bindings to libjson, so serialization is done in C.

Lastly, if the thought of templating done entirely in the browser is not appealing (perhaps for SEO reasons), there’s always Node.js, which can sit in between; however, this negates the CDN effect for template caching and puts the computational onus back into your datacenter / cloud.

I’m not a “frontend” guy, so maybe that’s why this immediately appeals to me. I’m curious: what do frontend developers think about this approach?


  1. This is a good approach to further separate the view from the data. As modern browsers’ JavaScript engines become faster and more efficient, this approach becomes more interesting.

    However, as you state in your article, content generated with JavaScript in the browser cannot be indexed by search engine crawlers. This has a direct impact on SEO and content indexing.

    So, what I prefer over browser-only rendering is a hybrid rendering technique, which allows rendering certain parts of a webpage with browser-generated content while rendering other parts, the ones most important for indexing, with plain old server-side rendering.

    This way, one can optimize a website’s performance, bandwidth footprint, and server load by rendering server-side only what is needed for indexing and leaving the rest to the browser.

  2. Nathan Hanna says:

    @Nicolas – depending on the server-side technology you have at your disposal, what you are suggesting is possible. In fact, Java 6+ ships with Rhino (https://developer.mozilla.org/en-US/docs/Rhino) integrated into its runtime via the JSR-223 scripting API. Thus, you are able to execute JavaScript on the server without too much work.

    Other server-side languages (PHP, .NET, Python, etc.) may require a bit more work to integrate with Rhino or Node.js (http://nodejs.org/) to run JavaScript server-side. Just know it is ultimately possible to retain SEO and leverage the power of Dust.js.

    • Stan says:

      @Nathan, can you explain how it’s possible to retain SEO and leverage Dust.js? The IT team at the ecommerce company I work at wants to use Dust.js for our site. I’ve worked really hard to build up our SEO and am worried we are going to hurt our company.

      • e says:

        You raise a very legitimate concern. The solution proposed is best suited for sites (like Facebook or LinkedIn) where much of the content is not indexed. Also, it’s worth pointing out that LinkedIn only does this for portions of their content, not the entire site — probably for the concern you raise.

        The only way I see to entirely eliminate this concern is forgoing the ability to offload the templates to the edge (i.e. client side) and having them rendered by Node.js by proxying your requests through it. This preserves the advantage of a common templating language, but still puts the burden of page generation on the server side.

        The alternative solution, which is in the gray area of SEO, is a hybrid approach whereby you use user-agent detection to pass robots through Node.js and send everyone else directly to the Dust.js version. The risk is that you run afoul of some anti-cloaking algorithms, albeit legitimately. I say this is a gray area because it has (arguably) become acceptable practice to do this for mobile sites. Also, Google is capable of rendering JS, but other crawlers might not be as advanced.

        Hope you find an acceptable workaround.
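        A minimal sketch of that user-agent split (the bot patterns below are illustrative, not an exhaustive list):

        ```javascript
        // Route crawlers to the server-rendered pages, everyone else to the
        // client-rendered Dust.js version. Patterns are examples only.
        const BOT_PATTERN = /googlebot|bingbot|slurp|duckduckbot/i;

        function isCrawler(userAgent) {
          return BOT_PATTERN.test(userAgent || "");
        }

        // e.g. the routing decision a Node.js front proxy would make:
        function chooseBackend(userAgent) {
          return isCrawler(userAgent) ? "server-rendered" : "client-rendered";
        }

        console.log(chooseBackend("Mozilla/5.0 (compatible; Googlebot/2.1)"));
        // -> server-rendered
        ```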

        • @Nicolas and @Stan, you are correct that using client-side technologies is a handicap for SEO, but this architecture is not meant for public websites. This architecture is for management applications, where rendered pages are in private zones, so SEO is not important.

          If you want to use the power of a JS template library like Dust.js, you can always use the library on the server side, using Node.js or a proxy implementation in Java/JavaEE (for example, a simple servlet with Rhino or JSR-223).
