Coping with traffic: how to scale your cloud effectively
One of the advantages of cloud computing is the ability to scale for peaks and troughs - but there are some rules to follow
One of the biggest reasons for moving to the cloud is its ability to scale up and scale out. The trouble is that scaling of any variety is much easier to promise in a soundbite than it is to deliver in practice.
For many organisations, scaling means achieving massive economies from managing large pools of compute power in the most cost-effective and efficient manner. These economies should make compute resources available to the end-user organisation at a far lower cost than could be achieved with traditional infrastructure.
There are two ways that cloud computing infrastructures can scale (incidentally, this is not fundamentally different from how traditional infrastructure scales). Scaling out means adding more nodes (virtual machines or instances) to a distributed system. Scaling up means adding more memory or processors to an existing virtual machine or instance.
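The two directions can be sketched in code. This is a minimal illustration only: `CloudClient`, `add_instances` and `resize_instance` are invented names for the purpose of the example, not any real provider's API.

```python
# Hypothetical sketch of the two scaling directions: scale OUT adds
# nodes to the pool, scale UP grows one node in place.
class CloudClient:
    def __init__(self):
        self.instances = {"web-1": {"vcpus": 2, "ram_gb": 4}}

    def add_instances(self, count, vcpus=2, ram_gb=4):
        """Scale out: add more nodes to the distributed system."""
        start = len(self.instances) + 1
        for i in range(start, start + count):
            self.instances[f"web-{i}"] = {"vcpus": vcpus, "ram_gb": ram_gb}

    def resize_instance(self, name, vcpus, ram_gb):
        """Scale up: give an existing node more processors and memory."""
        self.instances[name] = {"vcpus": vcpus, "ram_gb": ram_gb}

client = CloudClient()
client.add_instances(2)                              # scale out to 3 nodes
client.resize_instance("web-1", vcpus=8, ram_gb=16)  # scale up web-1
print(len(client.instances))  # 3
```

Real provider SDKs expose the same two levers, though the call names and instance shapes differ from vendor to vendor.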
A good cloud provider should have a management console or APIs available to ensure this can be done quickly and easily. It should also allow capacity to be decreased quickly, to avoid being charged for redundant cloud infrastructure.
These consoles or APIs should also enable the organisation to quickly scale up an instance by adding more memory or processors to handle increased workloads within a single server. If you have a single application server running in the cloud that can cope with 100 concurrent sessions, a traffic burst from a product launch or media coverage may mean adding more servers behind a load balancer to distribute the load.
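The arithmetic behind that scale-out decision is simple. A minimal sketch, using the article's figure of 100 sessions per server (the function name and the 100-session ceiling are illustrative assumptions):

```python
import math

def servers_needed(concurrent_sessions, sessions_per_server=100):
    # Each server copes with a fixed session ceiling (100 in the
    # article's example); round up so a partial load still gets a
    # server, and keep at least one server running.
    return max(1, math.ceil(concurrent_sessions / sessions_per_server))

print(servers_needed(100))  # 1 server suffices
print(servers_needed(450))  # 5 servers behind the load balancer
```

The load balancer then spreads the concurrent sessions across however many servers the calculation yields.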
The major difference between provisioning physical hardware and cloud compute resources is time. Cloud is a lot quicker (minutes and hours rather than weeks and months), but the catch is knowing when you need the resources and when you no longer need them, and planning for both accordingly.
Any cloud provider should be able to scale your cloud service up as and when you need it, according to Charles Barratt, solutions development manager at IT provider Equanet. But he says it is important that this is controlled by the IT organisation.
“Orchestration and automation of systems is exceptionally important and where possible this should be bound to an SLA of the workload so that service utilisation such as seasonal peaks do not impact end user experience,” he says. “A key requirement of automation is the monitoring of usage so there is no dispute when billing occurs.”
Companies scaling up their clouds can still make the mistake of provisioning servers and resources for peak demand and then paying the price for under-utilisation when workloads are low. This is much like traditional infrastructure, where, faced with peaks in demand, organisations over-provisioned servers to cope with spikes and were then left with hardware idling until the next time it was needed, often months away, wasting time and money, and hardly the greenest of options either.
“Cloud provides the ability to match workload demand with capacity supply in a ‘pay for what you consume’ model,” says Randy Clarke, chief marketing officer at automation software provider UC4. Automation is needed to ensure the capacity curve tracks closely to the workload demand curve, effectively eliminating the costs associated with idle assets between workload bursts.
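The idea of the capacity curve tracking the demand curve can be shown with a toy autoscaling loop. This is a hedged sketch, not UC4's product or any real autoscaler: the demand figures and the 100-sessions-per-node ceiling are invented for illustration.

```python
import math

def track_demand(demand_curve, sessions_per_node=100):
    """For each monitoring interval, set capacity to the smallest
    node count that covers demand: scale out on bursts, scale back
    in when the burst passes, never dropping below one node."""
    history = []
    for demand in demand_curve:
        nodes = max(1, math.ceil(demand / sessions_per_node))
        history.append(nodes)
    return history

# Demand rises to a burst of 600 sessions, then falls away again.
print(track_demand([80, 250, 600, 300, 50]))  # [1, 3, 6, 3, 1]
```

Because capacity falls back to one node after the burst, nothing sits idle between spikes, which is exactly the cost Clarke says automation eliminates.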
When scaling in the cloud, the meter starts running as soon as services are provisioned and deployed – and continues to tick over even if resources are no longer required for the purpose for which they were created.
"As such, companies need to be careful only to use what they need and remember to ‘turn off’ or ‘de-provision’ resources when not required,” says Ryan Rubin, director at global risk consultancy Protiviti.
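The cost of forgetting to de-provision is easy to quantify. A minimal sketch, assuming an illustrative rate of $0.10 per hour (not a real price) and a billing model that charges for every provisioned hour, used or not:

```python
def idle_cost(provisioned_hours, used_hours, rate_per_hour):
    # The meter runs from provisioning to de-provisioning, so the
    # waste is simply the hours the resource sat idle times the rate.
    return (provisioned_hours - used_hours) * rate_per_hour

# An instance needed for 200 hours of work in a 720-hour month:
print(idle_cost(720, 200, 0.10))  # 52.0 wasted if left running all month
print(idle_cost(200, 200, 0.10))  # 0.0 if de-provisioned when the work ends
```

The comparison makes Rubin's point concrete: turning resources off when they are no longer required is where the pay-as-you-go saving actually comes from.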