Applications are at the center of business. The IT systems that underpin the modern economic landscape are used via applications that staff and customers expect to be available 24x7. Keeping these applications online and responsive is vital to all organizations' financial, reputational and operational health. Maximizing application availability is foundational to delivering a good user experience for staff and customers.
Applications can go offline for several reasons, such as software issues, hardware failure, human error, or cyberattacks. Irrespective of the reason for application downtime, the effects can be catastrophic. Some of the impacts of unplanned downtime include:
If staff cannot perform their job function because applications are offline, there is an obvious hit on productivity. This downtime often has additional knock-on effects for others in the organization, or for business partners, when workflows get blocked by an offline application. In extreme cases, application downtime can lead to shutdowns on the factory floor in manufacturing businesses. Incidents like this are very costly, both in direct lost production and in the additional costs of restarting a production line.
Downtime to an application that clients or business partners rely on can result in significant reputational damage. People using applications expect a seamless experience, and any interruption to services can (and will) drive them to other services and competitors. This is especially true if outages are frequent. Reputational damage is compounded if the downtime is due to a cyberattack that also results in a data breach.
When people don't get the experience they expect, they move to other services, and for an online shopping application that means lost sales. Research shows that 50% of customers will abandon their online shopping carts if web pages don't load within six seconds. An application doesn't have to be completely offline to have an adverse impact: if it is slow to respond, sales and revenue are lost just the same. From the user's perspective, a slow application is equivalent to a down application.
In addition to direct loss of sales, an offline or slow application with a poor user experience will lead to missed business opportunities. Potential clients using your web application as part of their product evaluation process will move on to competitors if their user experience is not top-notch. The same is true for potential B2B partners using your online applications to see if they could work with your organization.
On top of lost sales, many other costs accrue when there is unplanned application downtime. One cost that is often overlooked is the IT team's time and resources spent analyzing, troubleshooting, and fixing the issue. When responding to an emergency, they are not working on the planned activities and projects designed to improve the business. Downtime often forces staff to work manually with paper records for the duration of the outage, and then to re-enter everything they recorded on paper once the applications are available again. The cost of a downtime incident can range from low five figures up to high six-figure sums for larger organizations if the outage is severe. For Fortune 1000 companies, downtime costs can even reach millions of dollars per hour.
The deployment of high availability can mitigate the issues outlined above. But what is high availability, and how is that achieved?
IT teams can deliver high availability in various ways. Before the use of virtual machines became the norm for on-premise infrastructure deployments, it was common to deploy backend servers in clusters of two or more machines. These clusters operated in configurations where services continued to be available even if a fault took a node in the cluster offline. After adopting virtualization, IT teams delivered the same availability via multiple virtual machine instances running on resilient hardware platforms, with features like automatic migration of virtual machines between hosts as required.
In the modern application landscape, the use of load balancers to distribute requests to multiple backend application servers in a pool is the norm. It doesn't matter if the backend application servers are physical, virtual, or even running as microservices in containers. Load balancers (multiple instances to provide redundancy) sit in front of the backend application servers, take incoming access requests from clients, and distribute them across the backend server pool.
The load balancing software uses intelligent algorithms and network traffic monitoring to share incoming client requests across the server pool. Load balancers also monitor the health and status of each server in the pool to ensure that client requests don't get sent to servers that are too busy to handle them. Unplanned server downtime is detected as well, and access requests are rerouted away from offline servers.
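To make this concrete, here is a minimal illustrative sketch in Python of the two ideas just described: round-robin distribution and health checking. The backend addresses, port, and helper names are hypothetical, and real load balancers such as LoadMaster use far more sophisticated algorithms, connection handling, and monitoring than this.

```python
import itertools
import socket

# Hypothetical backend pool; substitute your own application servers.
BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080), ("10.0.0.13", 8080)]

_rotation = itertools.count()

def is_healthy(host, port, timeout=1.0):
    """Basic TCP health check: can the backend currently accept a connection?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_backend():
    """Round-robin across servers that pass the health check right now."""
    pool = [server for server in BACKENDS if is_healthy(*server)]
    if not pool:
        return None  # every backend is down; return an error to the client
    return pool[next(_rotation) % len(pool)]

# Each incoming client request would be forwarded to the server returned here.
print(pick_backend())
```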
We have used a store checkout analogy in previous articles to illustrate what load balancers do at a network traffic level. We'll use it again. When the number of shoppers in a store is low, having a few checkouts open is fine. However, as the evening rush hour arrives and people visit the store to shop for items they need, the number of people at the checkouts goes up, and if only a few are operating, long queues build up. Opening additional checkouts adds capacity and allows for an efficient flow of people through the store. They get spread out over the multiple checkouts available in the same way a load balancer spreads out client requests over the available servers.
We can extend the checkout analogy to highlight how load balancing also deals with a server issue that would otherwise cause application downtime. If someone drops a bottle of tomato juice at a checkout, that single checkout needs to be closed, and customers get routed to other checkouts until the cleaning finishes and the checkout reopens. Similarly, the load balancers redirect client traffic to other servers if a server or service is unavailable.
By deploying Progress Kemp LoadMaster load balancers to manage access to your application servers, you can essentially eliminate the risk of unplanned downtime for on-premise applications. Also, by deploying Global Server Load Balancing (GSLB), you can load balance applications across geographically spread data centers to mitigate the risk of a disaster taking a complete location offline.
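As a rough illustration of the GSLB concept (not LoadMaster's actual implementation), a DNS-based GSLB answers client name lookups with the address of a data center that is currently healthy, so a whole-site failure simply steers new traffic to a surviving location. The site names, addresses, and helper functions in the sketch below are hypothetical.

```python
import socket

# Hypothetical data centers hosting the same application (documentation IPs).
SITES = {
    "us-east": ("203.0.113.10", 443),
    "eu-west": ("198.51.100.10", 443),
}
PREFERENCE = ["us-east", "eu-west"]  # primary site first, then failover

def site_is_up(host, port, timeout=1.0):
    """Basic reachability check against a site's public endpoint."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def resolve_application():
    """Return the address of the first healthy site in preference order.
    A DNS-based GSLB would hand this address back as the answer to the
    application's hostname, so clients follow the surviving data center."""
    for name in PREFERENCE:
        host, port = SITES[name]
        if site_is_up(host, port):
            return name, host
    return None, None  # no site reachable; a real GSLB would alert operators

print(resolve_application())
```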
Third-party examples of organizations benefitting from a particular technology are always helpful. LoadMaster has been deployed in over 200,000 locations worldwide. You can read multiple examples of LoadMaster helping organizations deliver enhanced application user experiences on our Case Studies and Customer Success Stories page.
I’ll highlight two here.
Contact us to learn more about Kemp LoadMaster.
Doug Barney was the founding editor of Redmond Magazine, Redmond Channel Partner, Redmond Developer News and Virtualization Review. Doug also served as Executive Editor of Network World, Editor in Chief of AmigaWorld, and Editor in Chief of Network Computing.