At Progress, we use Sitefinity CMS for our company websites. As a result, many of the features released in Sitefinity over the years were developed in response to problems we faced when using the product ourselves.
This article aims to give an inside look at the environment setup, development practices, and release automation that are the result of years of collaboration between marketing, DevOps, Sitefinity engineering, and the web team.
Let’s look at some details around our websites first, to better picture the challenges we’re facing. Our websites are subject to hundreds of content edits and hundreds of thousands of page views every single day. In addition, online purchases form a large part of our business in developer tools and data connectivity, to give two examples. This makes high availability and great performance crucial. We have 21 developers and QAs constantly delivering new features and bug fixes to production – there are around 15 fully automated deployments per month. In addition, there are ~100 active content editors using the CMS.
You can find some more statistics below:
Monthly page views (all sites): 16,000,000+
Uptime per website: 99.97%, 100%, >99.99% and >99.99%
How did we accomplish high uptime with so many page views, content edits, and deployments happening at the same time? We didn’t at first – here’s where we started.
All deployments for our websites used to be done manually by an IT administrator. At the end of every sprint, someone from the team would prepare an upload package with the changes for that iteration. Of course, changes were tested on pre-production environments before going live, but the upload package was assembled manually, so there was no real guarantee that what went live was identical to what had been tested.
On the “Day of the Upload,” the more experienced team members would gather early in the morning, review every change in the package and then the IT administrator would proceed to upload the new package on all website nodes. In case of issues, however, the changes from the deployed package could not always be reverted – for example due to complex DB changes. The team would then quickly try to upload a fix, sometimes more than one.
Cases like that led to ever-increasing rules and processes that everyone needed to follow to guarantee higher-quality uploads. As you can imagine, this only increased the frustration with the upload process over time.
As the business continued to grow, so did the need to make changes to our production websites quickly, with higher quality and in a sustainable way. Simply automating the deployment process would not cut it; we needed something different altogether. We defined our needs for website development as follows (not too different from what you might want in your typical organization):
We found the answer to our needs in the form of Continuous Delivery – it gave us the ability to get changes into production safely, quickly, and sustainably. This meant two important changes in terms of how we approached feature development and the infrastructure to support it.
The setup of environments used for every company website can be visualized as a pipeline. Each environment in the pipeline serves a purpose. When a change is committed to the source control, an automatic build is triggered. The build produces a deployment package for the given website that is then automatically deployed to the first environment in the pipeline. The package then gets promoted from environment to environment until it reaches production. Below you can find a graphic of the process and a description of the purpose of each environment.
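To make the promotion model concrete, here is a minimal sketch in Python; the environment names and the `deploy-tool` command are placeholders, not our actual tooling:

```python
import subprocess

# Illustrative environment names; the real pipeline's stages may differ.
ENVIRONMENTS = ["integration", "qa", "staging", "production"]

def deploy(package: str, environment: str) -> bool:
    """Deploy one package to one environment; True means its checks passed."""
    # "deploy-tool" stands in for the internal deployment tool / VSTS release step.
    result = subprocess.run(["deploy-tool", package, "--env", environment])
    return result.returncode == 0

def promote(package: str) -> None:
    """Promote the package from environment to environment until production."""
    for env in ENVIRONMENTS:
        if not deploy(package, env):
            raise RuntimeError(f"Quality gate failed on {env}; stopping promotion")
```

The key property is that the same package moves through every gate unchanged – exactly what the old manual process could not guarantee.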
When you create or modify Dynamic Modules, Sitefinity updates the DB schema. You then need to deploy those changes to all environments one by one. This is where “Export for Deployment” comes into play – the feature was developed specifically for this use case.
When a developer makes the changes on a local dev environment, they can then use the “Export for Deployment” feature (Administration -> Import/Export) to export the changes to files that can be committed to the source control system. When a new package is created, it will include these files. When the package is deployed to a given environment, Sitefinity detects the files on startup, compares the DB schema with the information the files contain, and applies all necessary changes automatically.
Note that “Export for Deployment” does not export content, only DB schema changes. Content is created on production, code is created on developer environments. We don’t need developer test data on LIVE and we don’t want content editors to create or modify modules on LIVE. That is why only code is deployed from DEV to LIVE and only content (DB backups) is restored from LIVE to DEV.
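Conceptually, the startup comparison works like the sketch below. Sitefinity performs this internally; the file format, table layout, and use of SQLite here are purely illustrative:

```python
import json
import sqlite3  # stand-in for the real database

def apply_schema_changes(db: sqlite3.Connection, export_file: str) -> None:
    """Compare an exported module schema with the live DB and create what's missing."""
    with open(export_file) as f:
        desired = json.load(f)  # e.g. {"tables": {"events": ["id", "title", "date"]}}
    existing = {row[0] for row in db.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")}
    for table, columns in desired["tables"].items():
        if table not in existing:
            cols = ", ".join(f"{c} TEXT" for c in columns)
            db.execute(f"CREATE TABLE {table} ({cols})")  # simplified: no real types/constraints
```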
Here’s an actual screenshot from our deployment tool (developed internally).
We’re currently in the process of migrating to VSTS CI/CD (https://www.visualstudio.com/team-services/continuous-integration/). If you’d like to see another blog post with the technical details around that setup, let us know in the comments.
These are the three important prerequisites for a successful deployment:
A package gets deployed to an environment using the following automated steps. These steps are executed node by node, until the deployment is complete.
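While the exact steps are specific to our internal tool, a node-by-node (rolling) deployment of this kind can be sketched as follows; the node names and the load balancer/copy helpers are placeholders:

```python
import time
import urllib.request

NODES = ["web01", "web02", "web03", "web04"]  # hypothetical node names

def set_in_rotation(node: str, enabled: bool) -> None:
    """Placeholder: tell the load balancer to include or exclude the node."""
    print(("enabling" if enabled else "draining"), node)

def copy_package_and_restart(package: str, node: str) -> None:
    """Placeholder: push the new build to the node and recycle IIS."""
    print(f"deploying {package} to {node}")

def healthy(node: str) -> bool:
    """Request the home page directly on the node and expect HTTP 200."""
    try:
        with urllib.request.urlopen(f"http://{node}/", timeout=30) as resp:
            return resp.status == 200
    except OSError:
        return False

def deploy_rolling(package: str) -> None:
    for node in NODES:
        set_in_rotation(node, False)             # take the node out of traffic
        copy_package_and_restart(package, node)
        time.sleep(60)                           # give the warm-up time to prime caches
        if not healthy(node):
            raise RuntimeError(f"{node} failed its health check; aborting")
        set_in_rotation(node, True)              # put the node back into rotation
```

Deploying node by node keeps the site available throughout: at any moment, the remaining nodes continue to serve traffic.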
Below you can find all the tools that we currently use to monitor our websites and troubleshoot issues.
Heartbeat tests are created with Test Studio and executed as part of automated builds in Jenkins. These tests run frequently to guarantee that all critical workflows are operational. If any of the tests fail, a notification is sent to every person on the web team.
We use Pingdom to make automated checks for important pages. These checks do not have any complex steps like the UI heartbeat tests. They only request the page and expect a 200 OK response. If any of the checks fail, a notification is sent to every person on the web team.
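A minimal sketch of such a check, with illustrative URLs and a placeholder notification step:

```python
import urllib.request

PAGES = ["https://www.telerik.com/", "https://www.progress.com/"]  # illustrative list

def notify_web_team(message: str) -> None:
    print("ALERT:", message)  # placeholder: in practice this sends email/chat alerts

def page_ok(url: str) -> bool:
    """Request the page and expect a 200 OK response, just like the Pingdom check."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

for url in PAGES:
    if not page_ok(url):
        notify_web_team(f"Check failed for {url}")
```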
We track user interaction with the website in real time using Pingdom RUM (Real User Monitoring). This tool helps us see what requests are being made to the website, what the average response time is, what parts of the world the users are from, and the types of browsers and devices they use to open the website.
We use Zabbix to monitor the processor time, available memory, network traffic and disk usage on all nodes. Below is a screenshot, showing the data for the four nodes (columns) of TelerikCom.
We use the Elastic Stack to store and visualize data from Sitefinity logs, Event logs, and IIS logs. We use Kibana dashboards to plot the data into histograms and charts. This is very helpful to detect issues on production and troubleshoot them.
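As an example of the kind of question this setup answers, the sketch below asks Elasticsearch for the number of HTTP 500 responses per hour; the index pattern and field names are assumptions about a typical IIS log mapping, not our actual one:

```python
import json
import urllib.request

# Count HTTP 500 responses per hour from indexed IIS logs.
query = {
    "size": 0,
    "query": {"term": {"sc-status": 500}},
    "aggs": {"per_hour": {"date_histogram": {"field": "@timestamp", "interval": "1h"}}},
}
req = urllib.request.Request(
    "http://elasticsearch:9200/iis-logs-*/_search",  # assumed host and index pattern
    data=json.dumps(query).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    for bucket in json.load(resp)["aggregations"]["per_hour"]["buckets"]:
        print(bucket["key_as_string"], bucket["doc_count"])
```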
These are some of the most successful performance improvements that we implemented over the years.
Sitefinity has a built-in feature that supports the use of a CDN for content in the Libraries module. This is very handy for optimizing request time when serving images, documents, or videos around the world.
We use a separate deployment for CSS and JS – a pipeline of environments like the one for each website, but dedicated to CSS and JS assets only. This allows us to deploy styling changes much faster, without needing to deploy the whole website. In addition, these types of assets are hosted on a separate domain. This is done because the sites share a lot of assets, so it makes sense to serve them from one place – this way, for example, if a user browses www.telerik.com and then goes to www.progress.com, a lot of the CSS and JS files will already be cached in their browser.
In addition to this setup, we use CloudFront as a CDN front for the web assets domain. This way JS and CSS files for our websites are served as fast as possible around the world.
The Sitefinity “Warm Up” feature is very useful, especially after the deployment or restart of a website on a given node. On start-up, the website requests pages matching configured criteria and caches them on all levels, including the IIS output cache.
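The feature itself is configured in Sitefinity, but the idea fits in a few lines: after a restart, request a list of key pages so that every cache layer gets primed. The host and paths below are illustrative:

```python
import urllib.request

WARM_UP_PATHS = ["/", "/products", "/blogs"]  # illustrative: the highest-traffic pages

def warm_up(host: str) -> None:
    for path in WARM_UP_PATHS:
        with urllib.request.urlopen(f"https://{host}{path}", timeout=60) as resp:
            resp.read()  # fully read the response so output caching kicks in
            print(path, resp.status)

warm_up("www.example.com")  # hypothetical host
```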
Since content editors do not always pay attention to image size, in some cases we had blog posts with tons of images, several megabytes each. This weighs heavily on page performance, regardless of the use of a CDN. To tackle this challenge, we implemented a custom Sitefinity module that uses Kraken.io to automatically optimize all images after they get uploaded. The productization of this module is planned for future releases, so stay tuned.
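Our module itself isn’t public yet, but its core idea is a small call to Kraken.io’s REST API after each upload. The sketch below follows Kraken’s documented request shape as we understand it, with placeholder credentials:

```python
import json
import urllib.request

def optimize_image(image_url: str) -> str:
    """Send a freshly uploaded image to Kraken.io and return the optimized URL."""
    payload = {
        "auth": {"api_key": "YOUR_KEY", "api_secret": "YOUR_SECRET"},  # placeholders
        "url": image_url,  # publicly reachable URL of the uploaded image
        "wait": True,      # block until optimization finishes
        "lossy": True,     # allow lossy compression for bigger savings
    }
    req = urllib.request.Request(
        "https://api.kraken.io/v1/url",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["kraked_url"]  # URL of the optimized image to store back
```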
Making changes to our production websites quickly, with high quality and in a sustainable way, is crucial given the number of page views, purchases, content edits, and deployments happening at the same time. We found the answer to our needs in the form of Continuous Delivery. With the support of key Sitefinity features like “NLB”, “Export for Deployment”, and “Warm Up”, we managed to create a pipeline of environments that serve as quality gates and an automated deployment process, both of which allow us to get changes into production safely, quickly, and sustainably. Adding a set of sophisticated monitoring and troubleshooting tools on top, we achieved 100% uptime over a year.
Altogether, these changes have delivered a boost to the business while making the lives of members across teams easier. We hope this guide helps you do the same in your organization too.
Yasen Yankov currently leads one of the Sitefinity engineering teams. Prior to that, he spent 5 years developing and maintaining Sitefinity applications as part of the Progress web team. He has worked on complex web projects like telerik.com, progress.com, sitefinity.com and nativescript.org.