
Too big to fail: How organisations can navigate hybrid storm-clouds

The cloud and its suppliers have simply become too big to fail and we can’t afford for that to happen.

The hyperscale public cloud and its suppliers have, in a way, followed the evolution of the banking system. We rely on banks in ways we often take for granted: they store what we find valuable, withdrawing those valuables is as easy as the click of a button, and we rarely see any people when dealing with our bank. With most interactions done via an app or browser, the days of visiting a branch are all but gone.

And much like the banking system, the cloud and its suppliers have simply become too big to fail – we can’t afford for that to happen. In the event of a cloud outage or full-blown crash, our systems would be crippled and we would find it near impossible to go about much of our daily lives. Accountability in the cloud game is therefore an integral selling point.

However, occasional failure is not out of the question, even for the big guns in the business. Although large-scale outages are rare, the possibility of one happening – to any of the hyperscale cloud services – cannot be dismissed.

Simply deciding to put a service into the cloud is just the beginning of the story. Organisations – and their service providers – need to review cloud architecture, service availability and automation, among other critical considerations, and assess how each fits their bespoke needs. Planning is crucial.

Service availability

Service level agreements are all well and good, but they only come into effect after an outage has occurred. Outages are simply a fact of life for those who rely on the cloud, in much the same way outages can and do occur with electricity. It’s imperative that organisations select a provider based not only on the guarantees within its SLA, but on its history of responding to outages, its average downtime (if any), what redundancies are built in and more. Boasting “we offer availability 99.999 per cent of the time” may sound impressive – but more important is how they respond during the 0.001 per cent when they’re not.
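To put those nines in perspective, a quick back-of-the-envelope calculation shows how much downtime each guarantee still permits in a year. This is a minimal sketch; the availability tiers shown are illustrative, not any particular provider’s figures:

```python
# Back-of-the-envelope: how much downtime per year each availability
# guarantee still permits. The tiers are illustrative examples only.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (99.9, 99.99, 99.999):
    allowed = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% uptime -> up to {allowed:.1f} minutes down per year")
```

Even “five nines” leaves roughly five minutes of potential downtime a year – and an SLA credit arrives only after those minutes have already cost the business.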

Cloud architecture

Cloud is not simply plug-and-play. Any organisation will have bespoke needs that can only be met with bespoke architectures. It’s why storing absolutely everything in the cloud is considered a no-no, and hybrid has become the go-to architecture for most organisations.

Automation

We all like to think that automation is the panacea for workflow issues, access management and more. Take human error out of mundane day-to-day processes and, in theory, you not only increase the likelihood of those processes being error-free, you also free up staff to handle the less mundane, more critical tasks within the business.

When it comes to the cloud, automation should come into its own, but we often overlook the sheer breadth of what is required to get the most out of a cloud deployment. It’s not simply about basic deploy-to-cloud scripts: security, monitoring, auto-scaling and legacy asset migration are just as important, if not more so.
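As a rough illustration of that breadth, the sketch below shows a pipeline in which deployment is only the first step, followed by health verification, monitoring and scaling policy registration. Every function name here is a hypothetical placeholder, not any provider’s real API:

```python
# Illustrative sketch: cloud automation is more than a deploy script.
# All functions below are hypothetical stand-ins, not a real provider API.

import time

def deploy(artifact: str) -> str:
    """Stand-in for a deploy-to-cloud step; returns a service endpoint."""
    print(f"deploying {artifact} ...")
    return "https://service.example.internal/health"

def healthy(endpoint: str) -> bool:
    """Stand-in for a health probe; a real check would issue an HTTP request."""
    print(f"probing {endpoint} ...")
    return True

def pipeline(artifact: str) -> None:
    endpoint = deploy(artifact)

    # Verify the rollout before declaring success, retrying with backoff
    # rather than trusting the deploy step's exit code alone.
    for attempt in range(5):
        if healthy(endpoint):
            break
        time.sleep(2 ** attempt)
    else:
        raise RuntimeError("rollout failed health checks; roll back")

    # Deployment is only the first step: monitoring and scaling policies
    # belong in the same automated pipeline, not as afterthoughts.
    print("registering monitors and auto-scaling policy ...")

pipeline("billing-service-1.4.2")
```

The point is structural: the health checks, monitors and scaling policy live in the same automated pipeline as the deploy step, so the outage response is not improvised by hand.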

Equally, organisations are increasingly seeking ways to automate IT functions without getting locked into tools that may limit their options five years down the track. Cloud providers that can offer a more flexible approach to IT functions will go a long way towards delivering the ideal outcome for their clients’ cloud deployments.

The IT landscape is littered with cloud shifts that haven’t gone to script, and that’s because no two businesses have the same needs. Equally, cloud outages are going to happen – no-one offers 100 per cent SLAs for a reason.

It’s therefore imperative that organisations, including cloud service providers, do their due diligence ahead of time to best set the business up to weather the 0.001 per cent of the time when an outage hits.

David Hanrahan is general manager of Dimension Data’s Cloud Services Business.