Building High-Performance Digital Experience Platforms Is The Ultimate Team Sport
Dec 08, 2022
There are plenty of headless and headful CMS solutions that you can use to build a content-driven website or app. But using a CMS to build a scalable digital experience platform that can handle millions of page views, thousands of concurrent users, and spikes in traffic (think e-commerce on Black Friday) is an entirely different task. This is where a product like dotCMS differentiates itself from the rest of the CMS market: dotCMS has been designed for high-scale, high-performance content delivery.

However, it's essential to understand that dotCMS operates within a larger ecosystem of adjacent platforms and infrastructure. The theory of constraints applies here: your site or app will only be as fast as the slowest component it uses. Every platform and technology that goes into a successful and scalable solution contributes to the overall performance, and to maximize that performance, your architecture needs to be carefully designed, built, and tested before going live. Like any team sport, these steps need to be carefully orchestrated: getting to and playing in the Super Bowl requires everybody on the team to do their best job every single time. Building and operating a high-performing digital experience platform is no different.
Let me share some stats from dotCMS-powered digital experience platforms to illustrate what I'm talking about. Here are performance numbers from several dotCMS customer installations:
As you can see, these are not your ordinary mom-and-pop websites; they process serious traffic, and performing at these levels takes work. From a CMS perspective, there are five critical steps when building a high-performance digital experience platform. Roughly, these are:

- Design for Performance
- Build Following Best Practices
- Review with Your Vendor
- Load Test and Refine
- Go Live
A high-performance mindset starts with the design of any platform that needs to scale and process high-volume traffic. There are three angles to look at performance from a dotCMS perspective: Front-end, Back-end, and load on the repository.
Let’s dig a little deeper into each of them.
Of course, you are minifying your style sheets and compressing your images; that's a given. The front end is where it usually gets busy and potentially nasty. People focus on limited data points like page views, traffic, sessions, and unique visitors. Those are great starting points, but in the end, with an API-first platform like dotCMS, it comes down to API calls per second and the peaks in that load pattern. The good news with dotCMS is that – unlike vendors such as Contentful, Contentstack, and Episerver – we do not throttle API calls or impose hard API rate limits. Throttling is like driving a Porsche 911 Turbo on the German Autobahn in second gear only. Not cool.
The first thing to consider is a Content Delivery Network (CDN). DotCMS works with any CDN, and a CDN can offload a lot of the front-end load. This makes even more sense if most requests hit static files, and you have tons of them. It's cumbersome and expensive to have those requests go to the application server in your runtime – or worse, to the repository and underlying database. That would require unnecessary and expensive cloud computing power, while a CDN is both faster and more cost-effective.
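For the CDN to do that offloading, your origin needs to tell it what is safe to cache and for how long. Here's a minimal sketch of per-path `Cache-Control` logic; the path patterns and TTL values are illustrative assumptions, not dotCMS defaults:

```typescript
// Decide a Cache-Control header per request path so the CDN can serve
// static assets from the edge instead of hitting the origin.
// Patterns and max-age values below are illustrative assumptions.
const STATIC_ASSET = /\.(css|js|png|jpe?g|webp|svg|woff2?)$/i;

function cacheControlFor(path: string): string {
  if (STATIC_ASSET.test(path)) {
    // Fingerprinted static assets can be cached at the edge for a year.
    return "public, max-age=31536000, immutable";
  }
  if (path.startsWith("/api/")) {
    // Let the CDN cache API responses briefly; tune per endpoint.
    return "public, s-maxage=60, stale-while-revalidate=30";
  }
  // HTML pages: short edge TTL so content updates show up quickly.
  return "public, s-maxage=30";
}

console.log(cacheControlFor("/styles/main.css"));
// prints "public, max-age=31536000, immutable"
```

The exact values are a trade-off between edge hit ratio and content freshness, and are worth revisiting during load testing.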
A little caveat with CDNs is that some of them don't cache GraphQL APIs, which is a bummer because GraphQL is a very efficient protocol compared to REST and, in my opinion, will outlive REST in the long run. This is also where dotCMS brings additional built-in caching power. It's currently an enterprise plugin, but it will find its way into the core of dotCMS shortly, and we have seen dramatic improvements, bringing API responses down from 58ms to 4ms (close to a 15x improvement). Let's say you have 91M visits per day (a real example from the numbers above), and each visit translates into 10 GraphQL API calls that your CDN doesn't cache… you can do the math.
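Doing that math: 91M visits × 10 calls is 910M GraphQL calls per day, over 10,000 per second on average before peaks. To illustrate the idea behind caching those responses (a sketch of the concept, not the plugin's actual internals), here is a minimal TTL cache keyed on the normalized query plus its sorted variables:

```typescript
// Minimal sketch of a GraphQL response cache: identical query + variables
// within the TTL window are served from memory instead of re-resolved.
type CacheEntry = { value: unknown; expiresAt: number };

class GraphQLResponseCache {
  private store = new Map<string, CacheEntry>();
  constructor(private ttlMs: number) {}

  private key(query: string, variables: Record<string, unknown>): string {
    // Stable key: whitespace-normalized query plus sorted variables.
    const vars = Object.keys(variables)
      .sort()
      .map((k) => `${k}=${JSON.stringify(variables[k])}`)
      .join("&");
    return `${query.replace(/\s+/g, " ").trim()}|${vars}`;
  }

  get(query: string, variables: Record<string, unknown>): unknown | undefined {
    const entry = this.store.get(this.key(query, variables));
    if (!entry || entry.expiresAt < Date.now()) return undefined;
    return entry.value;
  }

  set(query: string, variables: Record<string, unknown>, value: unknown): void {
    this.store.set(this.key(query, variables), {
      value,
      expiresAt: Date.now() + this.ttlMs,
    });
  }
}
```

Even a short TTL absorbs an enormous share of identical reads at this scale; invalidation on publish is the hard part any real implementation has to solve.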
Next to the CDN and the GraphQL caching, there are more caching opportunities in dotCMS. Block caching, particularly when running hybrid dotCMS, is very powerful and one of your many performance pals.
Being an API-first CMS with scripting API tooling is great and allows for custom-built endpoints. However, be mindful of your API flow design. We have seen implementations where customers made 100 API calls per request when the same result could have been achieved with just one. That does not make for a performant platform. Think 91M visits per day again, and you get the point.
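The difference can be sketched like this; `fetchOne` and `fetchMany` are hypothetical stand-ins for your endpoints, and the point is the round-trip count, not the API shape:

```typescript
// Contrast a chatty flow (one API call per content item) with a single
// batched call. The fetcher is injected so the round trips can be counted.
type Fetcher = (ids: string[]) => Promise<Record<string, string>>;

async function chattyFlow(ids: string[], fetchOne: Fetcher): Promise<string[]> {
  const out: string[] = [];
  for (const id of ids) {
    const res = await fetchOne([id]); // one round trip per item
    out.push(res[id]);
  }
  return out;
}

async function batchedFlow(ids: string[], fetchMany: Fetcher): Promise<string[]> {
  const res = await fetchMany(ids); // one round trip total
  return ids.map((id) => res[id]);
}
```

Same result, but the chatty version multiplies latency and server load by the number of items; GraphQL and custom scripted endpoints both give you the tools to collapse these flows.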
Nothing to see here? I don't think so. A large enterprise with 200-400 concurrent CMS editors in the platform is not uncommon for dotCMS. This is where it makes sense to have a dedicated authoring environment for just that purpose, keeping that load off production/live and any other environments in your publishing architecture. And because dotCMS is a decoupled platform, that dedicated authoring environment can sit behind your firewall, so your security chief is happy too! Depending on the availability requirements, this authoring environment can run on a single node or a cluster.
Promoting content through the publishing architecture.
This is where push publishing helps to support your team. Note, however, that push publishing is designed to promote deltas between environments and not complete sites or applications. That is asking for unnecessary trouble.
In transactional platforms with product-oriented content, we often see batch processes in many shapes and forms. You can do a lot of damage there, in any system. Proper design and implementation are key here.
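One common safeguard is bounding how much of a batch is in flight at once, so a bulk import cannot saturate the repository. A minimal sketch, with the chunk size and worker function as illustrative assumptions:

```typescript
// Process a large batch in bounded chunks: at most `chunkSize` operations
// run concurrently, so a bulk job cannot overwhelm the backing repository.
async function processInChunks<T, R>(
  items: T[],
  chunkSize: number,
  worker: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    const chunk = items.slice(i, i + chunkSize);
    // Wait for the whole chunk before starting the next one.
    results.push(...(await Promise.all(chunk.map(worker))));
  }
  return results;
}
```

In practice you would also add retries, error collection, and a pause between chunks tuned to what your database and cache layer can absorb.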
It may sound like stating the obvious, but skipping this step can get you into trouble. Take the time to work with your CMS vendor to review what you're about to build and ensure you follow best practices for your chosen product. You're building a foundation for years to come.
Your CMS vendor has seen hundreds, if not thousands, of use cases built with their product. Leverage that knowledge throughout the build of your platform and avoid frustration and risk later in the process. Remember, a digital experience platform typically runs for 5-10 years, so get it right from the start.
Plan your project carefully so you can run representative load tests, and give your team at least a month to fine-tune the platform for performance. I guarantee that your first test results will scare the &*^% out of you. Testing a week before go-live is not a good idea and will put everybody in escalation mode immediately. Remember, the CMS is only one of many components in your platform, so there are many potential points of failure. Test. Refine. Repeat.
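Whatever load-testing tool you use, the numbers worth fine-tuning against are latency percentiles, not averages: one slow outlier barely moves the mean but dominates p95. A minimal sketch of a nearest-rank percentile summary over recorded latencies (the sample values are made up):

```typescript
// Summarize load-test latencies with nearest-rank percentiles: the
// smallest sample with at least p% of all samples at or below it.
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error("no samples");
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// Made-up response times in milliseconds; note the single slow outlier.
const latenciesMs = [12, 15, 11, 240, 18, 14, 13, 16, 19, 17];
console.log(`p50=${percentile(latenciesMs, 50)}ms p95=${percentile(latenciesMs, 95)}ms`);
// prints "p50=15ms p95=240ms"
```

Tracking p95/p99 per endpoint across test runs makes the "Test. Refine. Repeat." loop measurable instead of anecdotal.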
Go-live with outstanding performance
Following the previous steps, the go-live should be pretty boring: it works, and nothing breaks. From an operations management point of view, make sure you have maxed out monitoring at the right levels (the API for sure, but maybe the page level too?), and leverage all DevOps capabilities to drive resilience and availability in your underlying infrastructure.