An increasingly prevalent hybrid cloud strategy comes up when I speak to CIOs across the country: blending public and private cloud services from more than one provider.
Sometimes this is a deliberate move to avoid vendor lock-in, but more often it is simply driven by the desire or need to consume a particular service – what is available from provider A may not be available from provider B.
While the vendor lock-in concern can be quite valid, one result of this multi-provider strategy playing out in the market is that organisations now have applications, and even data, spanning two, three or sometimes many more clouds.
This creates untold complexity: multiple applications to administer, multiple APIs, differing security and configuration management regimes, and complex networking requirements. Yet many don’t consider this complexity during the planning phase because it hides behind the guise of a quasi-hybrid cloud “strategy”.
Simply having multiple clouds does not mean you have an effective hybrid cloud strategy. And expecting multi-cloud management tools or cloud management platforms (CMPs) to eliminate this complexity is optimistic at best.
Organisations which have gone down this path quickly discover that while these tools provide some relief for underlying native cloud services, the cost to implement often outweighs any value gained.
Yes, from an operations perspective, automated workflows are a wise investment. Whether presented through a CMP or triggered from your tool of choice, they can minimise errors, manage performance and allow the use of lower-skilled resources to execute what was previously a complex set of management tasks.
A simple example in the infrastructure space: a service desk or project team may be able to spin up or modify the resources they need using an automated workflow – perhaps triggered directly from the approval of a business process in a tool like ServiceNow, or via a CMP that automatically deploys or modifies the requested servers, changes configurations, turns on load balancing and adds public IP addresses.
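To make the shape of such a workflow concrete, here is a minimal sketch in Python. Everything in it is illustrative: the `provision_from_approval` function, the `FakeProvider` stand-in and the ticket fields are assumptions for the sake of the example, not any CMP’s or ITSM tool’s real API.

```python
class FakeProvider:
    """In-memory stand-in for a cloud provider's API (illustration only)."""

    def __init__(self):
        self.actions = []  # record of what the workflow did, for auditing

    def create_server(self, size):
        self.actions.append(("create", size))
        return {"id": "srv-1", "size": size}

    def apply_config(self, server, config):
        self.actions.append(("config", config))

    def add_to_pool(self, server):
        self.actions.append(("load_balance", server["id"]))

    def assign_public_ip(self, server):
        server["public_ip"] = "203.0.113.10"  # documentation-range address
        self.actions.append(("public_ip", server["public_ip"]))


def provision_from_approval(ticket, provider):
    """Run the post-approval steps: deploy, configure, load-balance, expose.

    The workflow refuses to run unless the business process was approved,
    which is what keeps lower-skilled operators on the safe path.
    """
    if ticket["status"] != "approved":
        raise ValueError("workflow runs only on approved requests")
    server = provider.create_server(ticket["size"])
    provider.apply_config(server, ticket["config"])
    if ticket.get("load_balanced"):
        provider.add_to_pool(server)
    if ticket.get("public"):
        provider.assign_public_ip(server)
    return server
```

The point of the sketch is the gating and sequencing, not the provider calls: the approval check and the fixed order of steps are what remove the error-prone judgement calls from routine requests.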
On the other hand, the person responsible for designing and deploying the initial underlying network and security domains is likely to use native tools to deploy networks, set up the standard security rules, and configure load-balancing pools and other complex elements where the functionality between cloud providers can differ markedly.
Further, I have seen IT admins and DevOps teams bypass automated processes to work directly with native clouds more often than anyone expects. The tools these teams choose also seem to change regularly based on the skills or preferences of the individual – unless the organisation clearly defines the standards for the tools to be used.
Beyond CMPs, there are a multitude of “cloud broker services” making similar promises, but again the cost to implement often outweighs the value gained, and the tool’s functionality may not keep up with the rate of change in cloud vendors’ products.
I’d also be concerned about building out a multi-platform strategy around a tool only to see it ‘acquired and retired’ – something that has occurred often in recent times.
In a multi-cloud environment, a business can choose which applications run in which cloud, and this flexibility makes a multi-cloud strategy an appealing option.
But as we have seen, the management tasks multiply, the security risks expand and end-to-end service availability may suffer. An effective hybrid cloud strategy must be much more comprehensive than simply saying “we are multi-cloud” or we risk creating so much complexity that it becomes unmanageable.