At Progress, we use Sitefinity CMS for our company websites. As a result, many of the features released in Sitefinity over the years were developed in response to problems we faced when using the product ourselves.
This article gives an inside look at the environment setup, development practices and release automation that are the result of years of collaboration between marketing, DevOps, Sitefinity engineering and the web team.
Progress Websites
Let’s first look at some details about our websites to better picture the challenges we face. Our websites are subject to hundreds of content edits and hundreds of thousands of page views every single day. In addition, online purchases form a large part of our business in developer tools and data connectivity, to give two examples. This makes high availability and great performance crucial. We have 21 developers and QAs constantly delivering new features and bugfixes to production – there are around 15 fully automated deployments per month. In addition, there are ~100 active content editors who use the CMS.
You can find some more statistics below:
- Monthly page views (all sites): 16,000,000+
- Uptime for the past year, per website: 99.97%, 100%, >99.99% and >99.99%
How did we accomplish high uptime with so many page views, content edits, and deployments happening at the same time? We didn’t at first – here’s where we started.
A long time ago…
All deployments for our websites used to be done manually by an IT administrator. At the end of every sprint, someone from the team would prepare an upload package with the changes for that iteration. Of course, changes were tested on pre-production environments before going live, but the upload package was assembled manually, so there was no real guarantee that what went live was identical to what had been tested.
On the “Day of the Upload,” the more experienced team members would gather early in the morning to review every change in the package, and then the IT administrator would upload the new package to all website nodes. In case of issues, however, the changes from the deployed package could not always be reverted – for example, due to complex DB changes. The team would then scramble to upload a fix, sometimes more than one.
Cases like that led to an ever-growing set of rules and processes that everyone needed to follow to guarantee higher-quality uploads. As you can imagine, this only increased the frustration with the upload process over time.
Continuous Delivery
We defined our needs for website development as follows (not too different from what you might want in your typical organization):
- Low risk releases
- Faster time to market
- Higher quality
- Lower costs
- Happier teams
We found the answer to our needs in the form of Continuous Delivery – it gave us the ability to get changes into production safely, quickly and sustainably. This meant two important changes in how we approached feature development and the infrastructure to support it.
- Our code had to be ready for production at all times – in practice, this meant embracing feature toggling and quality gates (a minimal sketch of the toggle idea follows this list).
- We had to start working in small batches, delivering to production as frequently as possible.
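To illustrate what feature toggling means in practice, here is a minimal sketch of the idea. The flag names and configuration store are purely illustrative and are not our actual implementation:

```python
# Minimal feature-toggle sketch: unfinished features ship to production "dark"
# and are switched on per environment via configuration, not via a new deployment.
# The flag store below is a plain dict for illustration only.
FEATURE_FLAGS = {
    "new-checkout-flow": {"DEV": True, "SIT": True, "UAT": True, "PRO": False},
}

def is_enabled(feature: str, environment: str) -> bool:
    """Return True if the feature is switched on for the given environment."""
    return FEATURE_FLAGS.get(feature, {}).get(environment, False)

# The same package is safe on every environment because code paths branch on the toggle.
if is_enabled("new-checkout-flow", "PRO"):
    print("render new checkout")
else:
    print("render existing checkout")
```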
Environments
The setup of environments used for every company website can be visualized as a pipeline, where each environment serves a purpose. When a change is committed to source control, an automatic build is triggered. The build produces a deployment package for the given website, which is then automatically deployed to the first environment in the pipeline. The package then gets promoted from environment to environment until it reaches production. Below you can find a graphic of the process, a description of the purpose of each environment and a simplified sketch of the promotion logic.
- DEV – the local machine setup that every web team member has. It is used for development, manual testing and writing automation tests.
- SIT – short for System Integration Testing, this is the first environment that mimics the production setup. On this environment, the given website is integrated with all internal and external services that it uses. In addition, the website is hosted on two nodes behind a load balancer. This environment is used for manual testing of features, as well as for running UI automation tests. There are nightly runs that execute all UI tests, and additional runs covering the most important functionality each time a package gets deployed. This acts as a quality gate that ensures each deployed package meets the quality requirements before moving to the next environment.
- UAT – short for User Acceptance Testing, this environment is used by stakeholders to approve a feature before it goes live. In addition, a set of automated tests is executed each time a package gets deployed. The tests need to pass and the package needs to be approved before it can be promoted to production.
- PRO – short for Production, this is the actual LIVE environment of the given website. We use load balancing to ensure high uptime and run a suite of heartbeat tests every 5 minutes to verify that all critical workflows are operational (e.g. product downloads, purchases, registration).
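To make the promotion flow concrete, below is a simplified sketch of the pipeline logic. The deploy and quality-gate functions are placeholders for the real automation described above; on UAT the gate also includes manual stakeholder approval.

```python
# Simplified sketch of package promotion through the environment pipeline.
# deploy() and run_quality_gate() are placeholders for the real deployment
# tooling and automated test runs; they are not the actual implementation.
PIPELINE = ["SIT", "UAT", "PRO"]

def deploy(package: str, environment: str) -> None:
    print(f"Deploying {package} to {environment}")

def run_quality_gate(environment: str) -> bool:
    # SIT: UI automation covering the most important functionality.
    # UAT: automated tests plus stakeholder approval.
    # PRO: heartbeat tests keep running after the deployment.
    return True

def promote(package: str) -> None:
    for environment in PIPELINE:
        deploy(package, environment)
        if not run_quality_gate(environment):
            raise RuntimeError(f"Quality gate failed on {environment} – promotion stopped")

promote("website-build-1234.zip")
```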
What about deploying DB changes?
When creating or modifying Dynamic Modules, Sitefinity updates the DB schema. You then need to deploy those changes to all environments one by one. This is where “Export for Deployment” comes into play. This feature was developed specifically for this use case.
After making the changes on a local dev environment, a developer can use the “Export for Deployment” feature (Administration -> Import/Export) to export them as files that can be committed to source control. Every new package then includes these files. When the package is deployed to a given environment, Sitefinity detects the files on startup, compares the DB schema with the information they contain, and applies all necessary changes automatically.
Note that “Export for Deployment” does not export content, only DB schema changes. Content is created on production; code is created on developer environments. We don’t need developer test data on LIVE, and we don’t want content editors to create or modify modules on LIVE. That is why only code is deployed from DEV to PRO, and only content (DB backups) is restored from PRO to DEV.
- Deployed code is identical on all environments. Config files are modified using transformations.
- Production DB is restored regularly on all environments. This ensures consistency during new feature development.
Here’s an actual screenshot from our deployment tool (developed internally).
We’re currently in the process of migrating to VSTS CI/CD (https://www.visualstudio.com/team-services/continuous-integration/). If you’d like to see another blog post with the technical details around that setup, let us know in the comments.
Deployment Process
These are the three important prerequisites for a successful deployment:
- No downtime during deployment – we guarantee this by using a load balancing setup. Sitefinity has a built-in NLB feature that supports this.
- Great performance after deployment – we guarantee this by warming up the website once the new package is in place. Sitefinity has a built-in feature called “Warm Up” that automates this process: the website self-requests all important pages, so they get cached on all levels, including the IIS output cache (a simplified illustration follows this list).
- Automatically transfer DB changes – we use the “Export for Deployment” feature of Sitefinity to transfer DB changes between environments and apply them automatically on start-up.
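The warm-up idea itself is simple to illustrate: after a node starts, it requests its own most important pages so that caches are populated before real traffic arrives. Here is a minimal sketch; the URLs are placeholders, and in practice Sitefinity’s built-in “Warm Up” feature does this for us:

```python
# Sketch of the warm-up idea: self-request the important pages after start-up
# so that page output and data caches (including IIS output cache) are populated.
# The host and URL list are illustrative placeholders.
import urllib.request

IMPORTANT_PAGES = [
    "https://www.example.com/",
    "https://www.example.com/products",
    "https://www.example.com/downloads",
]

def warm_up(urls: list[str]) -> None:
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=30) as response:
                print(f"{url} -> {response.status}")
        except OSError as error:
            print(f"{url} failed: {error}")

warm_up(IMPORTANT_PAGES)
```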
A package gets deployed to an environment using the following automated steps, executed node by node until the deployment is complete. A simplified sketch of the loop follows the list.
- The traffic to the node is cut off – it stops serving requests.
- The package is uploaded, replacing the existing web app files.
- Sitefinity startup is triggered by a request hitting the website on the node.
- Sitefinity checks for changes from “Export for Deployment” and applies them.
- Sitefinity “Warm up” is triggered.
- Traffic to the node is resumed – it starts serving requests again.
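Put together, the node-by-node flow looks roughly like the sketch below. The load balancer and file-copy calls are placeholders for our internal deployment tooling; Sitefinity itself applies the exported DB changes and runs its warm-up on start-up.

```python
# Simplified sketch of the rolling, node-by-node deployment.
# drain(), upload_package() and restore_traffic() are placeholders for the
# internal tooling – they are not a real load balancer or deployment API.
import urllib.request

NODES = ["node1.internal", "node2.internal"]  # placeholder node names

def drain(node: str) -> None:
    print(f"Removing {node} from the load balancer")  # node stops serving requests

def upload_package(node: str, package: str) -> None:
    print(f"Replacing web app files on {node} with {package}")

def trigger_startup(node: str) -> None:
    # A single request starts the site; Sitefinity then detects the
    # "Export for Deployment" files, applies DB changes and warms up pages.
    try:
        urllib.request.urlopen(f"http://{node}/", timeout=300)
    except OSError as error:
        print(f"Start-up request to {node} failed: {error}")

def restore_traffic(node: str) -> None:
    print(f"Adding {node} back to the load balancer")  # node serves requests again

def deploy(package: str) -> None:
    for node in NODES:
        drain(node)
        upload_package(node, package)
        trigger_startup(node)
        restore_traffic(node)

deploy("website-build-1234.zip")
```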
Monitoring & Troubleshooting
Below you can find all the tools that we currently use to monitor our websites and troubleshoot issues.
Heartbeat tests
Heartbeat tests are created with Test Studio and executed as part of automated builds in Jenkins. These tests are executed frequently to guarantee that all critical workflows are operational. If any of the tests fail, a notification is sent to every person on the web team.
Uptime Checks
We use Pingdom to make automated checks for important pages. These checks do not have any complex steps like the UI heartbeat tests. They only request the page and expect a 200 OK response. If any of the checks fail, a notification is sent to every person on the web team.
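The idea behind these checks is easy to reproduce. A minimal sketch follows; Pingdom handles this for us as a hosted service, and the URLs and notification step are illustrative only:

```python
# Minimal uptime-check sketch: request each important page and expect 200 OK,
# otherwise notify the web team. The URLs and the notification are placeholders;
# in practice Pingdom runs these checks for us.
import urllib.request

PAGES = [
    "https://www.example.com/",
    "https://www.example.com/downloads",
]

def notify_web_team(message: str) -> None:
    print(f"ALERT: {message}")  # in practice an email/chat notification

for url in PAGES:
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            if response.status != 200:
                notify_web_team(f"{url} returned {response.status}")
    except OSError as error:
        notify_web_team(f"{url} check failed: {error}")
```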
Real User Monitoring
We track user interaction with the website in real time using Pingdom RUM (Real User Monitoring). This tool shows us what requests are being made to the website, the average response time, which parts of the world users come from, and the types of browsers and devices they use to open the website.
Node Performance Monitoring
We use Zabbix to monitor processor time, available memory, network traffic and disk usage on all nodes. Below is a screenshot showing the data for the four nodes (columns) of TelerikCom.
Logs
We use the Elastic Stack to store and visualize data from Sitefinity logs, Event logs, and IIS logs. We use Kibana dashboards to plot the data into histograms and charts, which is very helpful for detecting and troubleshooting issues on production.
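As an example of the kind of question the logs answer, a troubleshooting query might count how many IIS requests returned a server error in the last hour. The sketch below uses the standard Elasticsearch _count API; the host, index pattern and field names are hypothetical, and in practice we look at the same data through Kibana dashboards:

```python
# Hypothetical troubleshooting query: count IIS log entries with a 5xx status
# in the last hour via the Elasticsearch _count API. Host, index pattern and
# field names are assumptions for illustration.
import json
import urllib.request

ELASTICSEARCH = "http://elasticsearch.internal:9200"  # placeholder host

query = {
    "query": {
        "bool": {
            "filter": [
                {"range": {"@timestamp": {"gte": "now-1h"}}},
                {"range": {"http_status": {"gte": 500}}},
            ]
        }
    }
}

request = urllib.request.Request(
    f"{ELASTICSEARCH}/iis-logs-*/_count",
    data=json.dumps(query).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print(json.loads(response.read())["count"])
except OSError as error:
    print(f"Query failed: {error}")
```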
Performance Optimizations
These are some of the most successful performance improvements that we implemented over the years.
Using CDN for libraries
Sitefinity has a built-in feature that supports the use of a CDN for content in the Libraries module. This is very handy for optimizing request times when serving images, documents, or videos around the world.
Using CDN for CSS and JS
We use a separate deployment for CSS and JS – a pipeline of environments like the one for each website, but dedicated to CSS and JS assets only. This allows us to deploy styling changes much faster, without needing to deploy the whole website. In addition, these assets are hosted on a separate domain. This is done because the sites share a lot of assets, so it makes sense to serve them from one place – this way, for example, if a user browses www.telerik.com and then goes to www.progress.com, many of the CSS and JS files will already be cached in their browser.
In addition to this setup, we use CloudFront as a CDN front for the web assets domain. This way JS and CSS files for our websites are served as fast as possible around the world.
Pages warm-up after upload or restart
The Sitefinity “Warm Up” feature is very useful, especially after the deployment or restart of a website on a given node. On start-up, the website requests pages matching given criteria and caches them on all levels, including the IIS output cache.
Automatic image optimization
Since content editors do not always pay attention to image size, in some cases we had blog posts with tons of images, several megabytes each. This weighs heavily on page performance, regardless of the use of a CDN. To tackle this challenge, we built a custom Sitefinity module that uses Kraken.io to automatically optimize all images after they are uploaded. The productization of this module is planned for future releases, so stay tuned.
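For illustration, the flow is roughly the following: an upload hook passes the image to an optimization service and stores the optimized bytes back in the library. The endpoint and hook below are hypothetical placeholders; the real implementation is a Sitefinity custom module calling the Kraken.io API.

```python
# Simplified, hypothetical sketch of the automatic image optimization flow.
# The endpoint and the upload hook are placeholders – the real implementation
# is a Sitefinity custom module that sends uploaded images to Kraken.io.
import urllib.request

OPTIMIZER_ENDPOINT = "https://optimizer.example.com/optimize"  # hypothetical

def optimize_image(image_bytes: bytes) -> bytes:
    """Send the original image to the optimization service and return the result."""
    request = urllib.request.Request(
        OPTIMIZER_ENDPOINT,
        data=image_bytes,
        headers={"Content-Type": "application/octet-stream"},
    )
    with urllib.request.urlopen(request, timeout=60) as response:
        return response.read()

def on_image_uploaded(image_bytes: bytes) -> bytes:
    # Called after an image is uploaded; the optimized version replaces the original.
    optimized = optimize_image(image_bytes)
    print(f"Optimized image from {len(image_bytes)} to {len(optimized)} bytes")
    return optimized
```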
In Conclusion
Making changes to our production websites quickly, with high quality and in a sustainable way is crucial, given the number of page views, purchases, content edits and deployments happening at the same time. We found the answer to our needs in Continuous Delivery. With the support of key Sitefinity features like NLB, “Export for Deployment” and “Warm Up”, we created a pipeline of environments that serve as quality gates and an automated deployment process, both of which allow us to get changes into production safely, quickly and sustainably. With a set of sophisticated monitoring and troubleshooting tools on top, we achieved up to 100% uptime over the past year.
Altogether, these changes have delivered a boost to the business while making life easier for members across teams. We hope this guide helps you do the same in your organization, too.
Yasen Yankov
Yasen Yankov currently leads one of the Sitefinity engineering teams. Prior to that, he spent 5 years developing and maintaining Sitefinity applications as part of the Progress web team. He has worked on complex web projects like telerik.com, progress.com, sitefinity.com and nativescript.org.