Architecting a JSS app for the cloud

Sitecore MVP and Principal Developer
Valtech

June 13, 2019

A trend we see with a lot of our projects nowadays is that building features for your application is only part of the puzzle. What about actually getting your application live?

Personally, I think bringing DevOps practices into your dev team is a very positive thing. The developers who build the app can design the architecture and infrastructure with a lot more freedom if they can control the deployment footprint.

This post ties into the series: An intro to going live with JSS.

Designing your JSS architecture

By choosing JSS, your application architecture will be based around a few key concepts. Content flows from the CMS into your client-side app via a headless service. *

* This assumes you run a Node proxy and render the app via SSR mode

This architecture requires a few key entities. You have a production Sitecore instance providing the Layout Service, which is consumed by your client-side app running in Node.

Locally you can run all of this in connected mode: jss start:connected. Things like hot module replacement mean the developer experience can be very efficient.
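
For reference, connected mode reads the scjssconfig.json generated by jss setup to find the Layout Service. A minimal sketch with placeholder values (your host name and API key will differ):

    {
      "sitecore": {
        "layoutServiceHost": "https://my-sitecore-instance.local",
        "apiKey": "{YOUR-SSC-API-KEY-GUID}"
      }
    }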

How to use the proxy?

This was one of the areas that went through the most design iterations - in summary, our final design was to proxy everything: Sitecore images, the Layout Service, legacy Razor Sitecore pages, Sitecore assets and more.

Why proxy everything?

It meant our deployments became a lot easier. We could blue/green in one place, at the DNS layer in front of the proxy. It does mean our JSS deployment is coupled to a specific colour of our Sitecore deployment; however, that's not a concern - it forces version compatibility between deployments.

Configuring the proxy

If you do take this approach of proxying everything, you need to either configure pathRewriteExcludeRoutes in config.js or introduce the pathRewriteExcludePredicate function. We opted for the function as it gave us better control over different URL rules - in particular, how we handled language variations of legacy Sitecore pages.
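
To make that concrete, here's a rough sketch of the kind of predicate we mean - the excluded prefixes and the language-stripping rule below are illustrative, not our production rules:

    // config.js (headless SSR proxy) - illustrative only; the excluded prefixes
    // and the language-prefix stripping are examples, not our production rules.
    const excludedPrefixes = ['/sitecore', '/-/media', '/-/jssmedia', '/layouts', '/legacy'];

    const config = {
      // ...other proxy settings (apiHost, layoutServiceRoute, etc.)...
      pathRewriteExcludePredicate: (originalUrl) => {
        // Strip a leading language segment (e.g. /en-gb/legacy/page -> /legacy/page)
        // so legacy Sitecore pages are excluded in every language variation.
        const path = originalUrl.toLowerCase().replace(/^\/[a-z]{2}(-[a-z]{2})?(?=\/)/, '');
        // Returning true excludes the URL from SSR rewriting, so it is proxied
        // straight through to Sitecore.
        return excludedPrefixes.some((prefix) => path.startsWith(prefix));
      },
    };

    module.exports = config;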

How to approach caching?

Caching plays an absolutely key role in any Sitecore deployment, and with the addition of the Layout Service even more options are now available. The approach we took wouldn't suit every deployment: due to external design considerations, we chose to edge-cache the output of the Layout Service.

This does mean Sitecore personalization won't work out of the box - instead we inject personalization logic into the SSR directly.
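
To give a flavour of what that injection can look like, here's a very rough sketch - the cookie check, the jss-main placeholder name and the params.segment convention are assumptions for illustration, not something JSS provides out of the box:

    // Runs in the SSR after the (edge-cached) Layout Service JSON is fetched
    // and before the app is rendered. Everything segment-related here is a
    // hypothetical convention for illustration only.
    function personalizeLayout(layoutData, req) {
      const segment = (req.headers.cookie || '').includes('returning-visitor=true')
        ? 'returning'
        : 'new';

      const route = layoutData.sitecore && layoutData.sitecore.route;
      if (!route || !route.placeholders) {
        return layoutData;
      }

      // Example rule: keep a component only if it isn't tagged for a segment,
      // or is tagged for the current visitor's segment.
      const main = route.placeholders['jss-main'] || [];
      route.placeholders['jss-main'] = main.filter(
        (component) => !component.params || !component.params.segment || component.params.segment === segment
      );

      return layoutData;
    }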

Shaping the content

We touched on this in the previous post: our preference throughout the application design was to favour content resolvers and context extensions over GraphQL queries.

One downside of this approach is that we need to deploy Sitecore as well as JSS when we release.

Improving Node performance

Much like a classic ASP.NET deployment, where you'd want to remove debug mode and tune your configuration, Node has similar properties available. A key one to set is NODE_ENV=production.

By default, Node will run on a single core, so however high-spec a machine you use, you'll never get the most out of your app. This can be solved in quite a few ways: either via external tools to bootstrap the application loading, or via clustering.

We opted for clustering - it requires a small amount of code and can be built into the application code base.
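
For reference, a minimal clustering sketch using Node's built-in cluster module - ./server is a hypothetical entry point that exports a function starting the SSR proxy:

    // cluster.js - fork one SSR worker per core and keep them alive.
    const cluster = require('cluster');
    const os = require('os');
    const startServer = require('./server'); // hypothetical: exports a function that starts the SSR proxy

    if (cluster.isMaster) {
      // Fork one worker per CPU core so the SSR can use the whole machine.
      os.cpus().forEach(() => cluster.fork());

      // Replace any worker that dies so capacity recovers without manual intervention.
      cluster.on('exit', (worker, code) => {
        console.warn(`Worker ${worker.process.pid} exited with code ${code}; forking a replacement`);
        cluster.fork();
      });
    } else {
      startServer();
    }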

Monitoring the system(s)

JSS introduces more moving parts - not only do you need to monitor the Sitecore application serving the Layout Service, but also the Node application.

We introduced logging specifically targeting the calls into the Layout Service from the SSR - capturing which Layout Service URL was called, which site was resolved, which language was used and how long each call took.
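
Roughly speaking, that instrumentation looks something like the sketch below - fetchRouteData comes from the JSS SDK, while the logger and the site/language plumbing stand in for your own setup:

    // A sketch of timing and logging the SSR's Layout Service calls.
    const { dataApi } = require('@sitecore-jss/sitecore-jss');

    function fetchRouteDataWithTiming(route, fetchOptions, { site, language, logger }) {
      const started = Date.now();
      return dataApi.fetchRouteData(route, fetchOptions).then(
        (result) => {
          logger.info({
            event: 'layoutService.fetch',
            route,                            // which Layout Service route was requested
            site,                             // which site was resolved
            language,                         // which language was used
            durationMs: Date.now() - started, // how long the call took
          });
          return result;
        },
        (err) => {
          logger.error({ event: 'layoutService.fetch.failed', route, site, language, durationMs: Date.now() - started });
          throw err;
        }
      );
    }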

These stats all feed into dashboards so we can see the performance over time of the different moving parts.

Designing your JSS infrastructure

Due to constraints imposed by the client, our infrastructure is split between Azure for Sitecore and AWS for the SSR. Not ideal, but thanks to the headless design it causes us no functional problems.

We never expose our origins (i.e. the JSS app and Sitecore app) directly to the internet; instead we sit behind an external proxy and WAF. This means we need to be careful about how we handle URLs - relying heavily on relative URLs and the targetHostName in the config.

Planning your urls

It took a while to arrive at a consistent structure - the goal was to have the same URL patterns in dev, QA and all the way through to prod. Getting this right up front will definitely make things easier to debug!

In our content delivery setup, the different layers that need urls are:

  • Direct to SSR
    • With and without colour, and region
  • Direct to Sitecore
    • With and without colour, and region

Examples being:

  • SSR
    • local-uk.jss.url-name.co.uk
    • local-uk-green.jss.url-name.co.uk
    • local-uk-green-euwest1.jss.url-name.co.uk
  • Sitecore
    • local-uk.sitecore.url-name.co.uk
    • local-uk-green.sitecore.url-name.co.uk
    • local-uk-green-northeurope.sitecore.url-name.co.uk

Hosting the applications

Here your options are pretty much endless. All of this could run on physical tin, VMs, containers or even out of serverless functions *

* Saying that, I'm not sure Sitecore itself could run out of a Lambda!

We opted for running Sitecore out of Web Apps in Azure, and Node out of VMs in AWS, orchestrated via Elastic Beanstalk.

Summary

Adding JSS to your application does bring extra complexity, but it also opens up a lot of opportunities to get creative with the design and deployment.

With some careful planning around your end-goal design, you can easily set up an architecture and infrastructure that's easy to work with and equally simple to deploy into.

Next up, how to deploy JSS.
