Why owning your own data centre is no help in time of crisis

What a devastating but essentially predictable natural event tells us about business continuity – and what can be done about it

Hurricane Sandy has thankfully now passed, leaving behind it some red faces in the world of the web. Several well-known New York sites were downed, including Gawker and Gizmodo, with the Huffington Post suffering the humiliation of having its reporters post their photos and updates via Twitter and Facebook.

It’s puzzling, though, that so many went offline given how frequently storms batter the US eastern and southern coasts at this time of year. Autumn data centre outages and downed websites are becoming something of an annual ritual. And it’s not just that these storms happen every year; when they do, there is usually plenty of notice (some might say too much, given the hysterical and apocalyptic TV coverage), which might have prompted the Huffington Post or Gawker to think about a plan B a week or two before the hurricane hit Manhattan. In an era when a virtual machine can be spun up in fifteen minutes or so, it seems puzzling that the sites were unable to move production somewhere safer.

Now, I can only speculate about what was going on at Gawker and the other sites in the lead-up to the hurricane, but what I do know for a fact is that one week is far too little notice to stand up a failover site in another data centre. Ordering new hardware, provisioning circuits and signing contracts with a new colocation provider takes months. True, the cycle with a dedicated hosting provider is quicker, but a typical build still takes weeks. It seems that all the affected sites owned their own equipment, which hobbled them at the moment of crisis.


Another issue the sites may have run into is a lack of redundancy in their topology. Data centre and network providers are often loose with the truth when it comes to the resilience of their infrastructure. Two power feeds from two different substations sounds great; but what if, as is the case with one well-known London carrier hotel, they both come in via the same conduit? Similarly with networks: a well-known City law firm suffered a network outage in London a few years ago because the two circuits it had ordered from two different telcos actually ran over the same fibre, leased from a third provider. It takes the skills of Sherlock Holmes and the tenacity of the Spanish Inquisition to find the truth behind those glossy leaflets.

All the same, it’s remarkable how few hosted deployments run a business continuity environment – probably fewer than ten per cent of the estate out there. It’s all the more puzzling when you consider that after an outage the most a provider will give its customers is a couple of days’ worth of credits, which is little consolation if the business has lost tens of millions in revenue. How can enterprises be taking such risks with their core functions? The answer is cost: few CFOs will stomach the money and man-hours it takes to run a duplicate environment.

This is where the cloud can add real benefit, and not just in the well-understood sense of being able to stand up instances quickly and across a wide geography, thus lowering the probability of a blackout. More and more end users are combining DNS management and load balancing with cloud infrastructure to create a business continuity environment.

As with a content delivery network, traffic can be failed over almost instantly to another location far from the hurricane. Better still, the load-balancing side of DNS management lets two sites run active/active, and spreading the servers across locations makes the solution considerably cheaper than the traditional 2x set-up. 451 Research’s InfoPro survey shows that business continuity is one of the cloud’s fastest-growing use cases, and it’s not hard to see why it is turning into a trusted business tool.
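To make the mechanics concrete, here is a minimal Python sketch of the health-check logic behind DNS-based active/active failover. The site names, IP addresses and health-check URLs are hypothetical, invented for illustration; a managed DNS service would run an equivalent loop on its own infrastructure rather than this exact code.

```python
# Minimal sketch of DNS-based active/active failover (Python 3.9+).
# The two "sites" below are hypothetical: the IPs are from documentation
# ranges and the /healthz endpoint is an assumed convention, not a real API.

import urllib.request

# Hypothetical production sites in two geographies.
SITES = {
    "nyc": {"ip": "192.0.2.10", "health_url": "http://192.0.2.10/healthz"},
    "ord": {"ip": "198.51.100.20", "health_url": "http://198.51.100.20/healthz"},
}


def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Treat any HTTP 200 within the timeout as a passing health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, HTTPError and socket timeouts
        return False


def answer_a_records() -> list[str]:
    """Return the A records a DNS load balancer would serve right now.

    Active/active: while both sites pass their checks, both IPs are
    returned and resolvers spread traffic between them. If one site drops
    out (say, a hurricane takes the New York data centre offline), its IP
    simply stops appearing in answers, and traffic fails over to the other
    site within the DNS record's TTL.
    """
    healthy = [s["ip"] for s in SITES.values() if is_healthy(s["health_url"])]
    # If every probe fails, serve all records rather than none ("fail open"):
    # a total probe failure usually means the monitoring is broken, not
    # both data centres at once.
    return healthy or [s["ip"] for s in SITES.values()]


if __name__ == "__main__":
    print("A records to serve:", answer_a_records())
```

The low TTL is the part that matters in practice: failover can only be as fast as the time it takes resolvers to drop their cached answer, which is why these set-ups typically pair health checks with TTLs of a minute or less.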

Daniel Beazer, Director of Strategy, FireHost


Cloud Industry Forum


The Cloud Industry Forum provides transparency through certification to a Code of Practice for credible online cloud service providers, and helps end users identify the core information they need in order to adopt these services. Every month, a CIF member comments on a hot cloud topic.