
Core Infrastructure Considerations for High Application Availability

24 September 2015

Business processes are increasingly reliant on software, so keeping your software available is crucial to business continuity. The cost of unscheduled downtime for your important applications can be substantial, for both your business’s finances and reputation.

So how do you select an infrastructure that supports the reliability of your application?

There are three factors that determine the hosting strategy you implement and thus the availability of your application: your budget, the level of downtime risk you can accept, and the way your application is built.

The budget-downtime risk trade-off is a continuous influence on your application’s infrastructure, but the way your application is built is instrumental too.

Data centre infrastructure

Fundamentally, your applications should be hosted in at least a Tier 2 data centre – but preferably Tier 3 or 4. The Tier system is a guide for the infrastructure design of data centres. As you move up each Tier you can expect more redundancy:

  • Tier 2 data centres: redundant capacity components
  • Tier 3 data centres: meet or exceed Tier 2 requirements; multiple independent distribution paths that serve IT equipment; hardware is dual powered
  • Tier 4 data centres: meet or exceed Tier 3 requirements; facility is fault-tolerant through electrical, storage and distribution networks; cooling equipment is dual powered.

The Tier system can be used as a rough indicator of how much downtime you can expect from your application’s data centre. As Tier level corresponds to cost, you need to decide what level of redundancy you can afford and potentially how much application downtime you can accept. Moving from Tier 3 to Tier 4, for example, will increase your hosting costs significantly.
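As a rough illustration, the availability figures commonly cited alongside the Tier classification (an assumption here, not figures from this article – real numbers vary by facility) can be converted into expected downtime per year:

```python
# Commonly cited availability percentages per Uptime Institute Tier.
# These are illustrative assumptions; actual availability depends on the facility.
TIER_AVAILABILITY = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

HOURS_PER_YEAR = 24 * 365


def annual_downtime_hours(availability_pct: float) -> float:
    """Convert an availability percentage into expected hours of downtime per year."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)


for tier, pct in TIER_AVAILABILITY.items():
    print(f"{tier}: ~{annual_downtime_hours(pct):.1f} hours of downtime per year")
```

Even the step from Tier 3 to Tier 4 cuts expected downtime by roughly a factor of three on these illustrative figures, which is why each step up the Tiers commands a significant price premium.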

Server infrastructure

The next infrastructure consideration is your hosting strategy – colocation, managed servers, public cloud, private cloud etc.

As your application matures and becomes increasingly critical to your business processes, you’ll tend to transition ‘up the hosting ladder’ – from perhaps a basic colocation solution or a single managed server, to public cloud solutions such as AWS and Azure, and on to private cloud services that offer guaranteed high redundancy.

Many businesses believe putting their application in the cloud should be the logical first port of call to achieve cost-effective availability, but in reality this isn’t always the best option. First, although it often delivers efficiencies, the cloud isn’t always cost-effective. If demand for your application is consistent over time, a dedicated server environment can often offer cheaper computing power.

Also, the architecture of your application comes into play – it can be problematic to reconfigure your application to run in a cloud environment. In contrast to traditional environments, applications in the cloud need to be built for elasticity, erratic loads and potentially an enormous number of machines. Taking your application from a physical infrastructure to the cloud can be a costly process, with the risk of downtime.

It’s important to develop a plan for how you will transition your server infrastructure as your application grows in size and importance to your business. This includes planning the changes you’ll need to make to your application’s configuration and operational arrangements during the migration period.

Disaster recovery infrastructure

As your application becomes more important to your business you should look at adding further protective levels to reduce the risks of downtime and data loss. Even with a highly redundant technological infrastructure, incidents of flooding, fire and theft can cause major disruption to your application.

Disaster Recovery planning enables you to maintain application availability in such an event. This is achieved by duplicating your data and storing it at a geographically separate site. As with the previous infrastructure decisions, you need to consider how much downtime you’re willing to risk – your Recovery Time Objective (RTO) – and how much data you can afford to lose – your Recovery Point Objective (RPO).

Your RTO and RPO can be reduced by increasing the frequency of backups, using replication to maintain another copy of your live data set, and expanding how much of your application’s infrastructure is duplicated off-site.
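The relationship between backup frequency and RPO can be sketched with a simple worked example (the RPO targets below are hypothetical): the worst-case data loss is the time elapsed since the last backup, so a target RPO dictates a minimum backup frequency.

```python
import math


def backups_per_day(target_rpo_minutes: float) -> int:
    """Minimum number of evenly spaced daily backups such that the
    worst-case data loss (time since the last backup) stays within
    the target RPO."""
    return math.ceil(24 * 60 / target_rpo_minutes)


# Hypothetical RPO targets: daily, hourly and 15-minute objectives.
# Tightening the RPO multiplies the backup workload (and its cost).
for rpo_minutes in (24 * 60, 60, 15):
    print(f"RPO of {rpo_minutes} min -> at least "
          f"{backups_per_day(rpo_minutes)} backups per day")
```

This is why very tight RPOs are usually met with continuous replication rather than scheduled backups – the required backup frequency quickly becomes impractical.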

Of course, the cost of operating a copy of your infrastructure can be significant, so again the budget-downtime risk trade-off determines your disaster recovery strategy.

It’s important that you plan for the future – as your application develops in its value to your business, your budget and the level of downtime you’re willing to accept will change too. Get your hosting provider to help you understand your infrastructure options to ensure you can rely on your software into the future.

By Paul K Jeffrey, Technical Account Director, iomart

