
Cloud essentials: Migrate in haste, repent at leisure

Go ahead and migrate that behemoth analytics application to the public cloud, but get ready for a financial wake-up, shake-up or take-down.

There are many fine examples of how a well-executed cloud computing strategy has helped businesses rapidly deploy new systems and accelerate innovative digital transformation initiatives.

On the other hand, and before basking in their glory, we should also be mindful of an indisputable truth: not every workload (or workforce) is designed or ready to exploit the cloud to best effect.

But it’s hard not to get drawn into the spin. With the promise of massively cost-effective compute grunt and an unlimited supply of virtualised goodness, why not take the plunge?

Why persist in building and maintaining your own high-performance applications when specialised or hyper-scale providers have the savvy and scale to do it better, faster and cheaper? Simple economics, right?

Well, perhaps that’s true for cloud-native applications built from the ground up and architected to exploit cool stuff like auto-scaling, DevOps and continuous delivery.

However, and as many organisations can attest, the promise of cloud services as a quick fix for decades of accumulated legacies is also extremely tempting. Especially for those monolithic application “skeletons” we’d just love to house in someone else’s closet.

But however well-intentioned our cloud motives, a badly planned and uninformed strategy can have dire consequences for the business. Ironically, it’s the very nature of cloud models that bites the hardest when scant attention is given to whether the cloud is actually the right approach for certain workloads and use cases.

Watch out for after-sale ‘sticker shock’

Go ahead and migrate that behemoth analytics application to the public cloud, but get ready for a financial wake-up, shake-up or take-down. Memory-intensive applications like these demand serious horsepower, meaning the organisation could end up paying for the largest and most expensive virtualised instances.

In such cases it might be more cost-effective to acquire and maintain on-premises commodity kit, understanding of course (especially as cloud prices continue to fall) that eventually there will be a public cloud financial “sweet spot”.
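To make that “sweet spot” idea concrete, a back-of-the-envelope comparison like the one below can help. It is only an illustrative sketch: every figure in it (hardware price, amortisation period, instance cost and the assumed rate of cloud price decline) is a made-up placeholder, to be replaced with your own quotes and assumptions.

```python
# A minimal back-of-the-envelope sketch of the on-premises vs cloud "sweet spot".
# All figures are hypothetical placeholders; substitute real quotes and your own
# amortisation assumptions before drawing any conclusions.

ONPREM_CAPEX = 120_000          # purchase price of memory-rich commodity kit (hypothetical)
ONPREM_MONTHLY_OPEX = 1_500     # power, cooling and support share (hypothetical)
AMORTISATION_MONTHS = 48        # how long the kit is expected to stay in service

CLOUD_MONTHLY_START = 5_200     # today's price of the large memory-optimised instance (hypothetical)
CLOUD_ANNUAL_PRICE_DROP = 0.10  # assumed 10% year-on-year price decline

def onprem_monthly_cost() -> float:
    """Amortised hardware cost plus running costs per month."""
    return ONPREM_CAPEX / AMORTISATION_MONTHS + ONPREM_MONTHLY_OPEX

def cloud_monthly_cost(month: int) -> float:
    """Projected instance price, assuming a steady annual decline."""
    return CLOUD_MONTHLY_START * (1 - CLOUD_ANNUAL_PRICE_DROP) ** (month / 12)

def breakeven_month(horizon: int = 120) -> int | None:
    """First month at which the cloud instance undercuts the amortised on-prem cost."""
    for month in range(horizon):
        if cloud_monthly_cost(month) < onprem_monthly_cost():
            return month
    return None

if __name__ == "__main__":
    print(f"On-prem monthly cost: ${onprem_monthly_cost():,.0f}")
    print(f"Cloud sweet spot reached at month: {breakeven_month()}")
```

With these invented numbers the crossover lands around month 30; the point is not the answer but that the calculation should be redone as prices, discounts and workload profiles change.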

This, however, isn’t a one-off exercise. As good as organisations need to be at determining which cloud model is fit for purpose, more attention should be given to continuously exploiting financial nuances and opportunities as they arise across multiple providers – perhaps using brokerage and arbitrage models.

Don’t let the dogs out

If you think you’ll get giddy sub-second response times by migrating every system to the cloud, think again. Application performance problems can be the death knell of many a cloud service, especially those that demand low latency and high throughput.

While housing that old but trusted SQL database and transactional workhorse in a new-fangled cloud-hosted container might seem like a good idea, it could actually run like a dog in the cloud – especially if careful consideration isn’t given to the performance implications of geographic location and network connectivity.
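One way to avoid that surprise is to measure before migrating. The sketch below simply times TCP connections from where the application’s consumers sit to a handful of candidate locations; the hostnames and ports are hypothetical placeholders, and a real assessment would also need to cover sustained throughput and query latency, not just connection time.

```python
# A rough pre-migration latency check: time TCP connections from where the
# application's users (or dependent systems) sit to each candidate location.
# The hostnames below are hypothetical placeholders, not real endpoints.
import socket
import statistics
import time

CANDIDATE_ENDPOINTS = {
    "on-premises DC": ("db.onprem.example.internal", 5432),
    "cloud region A": ("db.region-a.cloud.example.com", 5432),
    "cloud region B": ("db.region-b.cloud.example.com", 5432),
}

def connect_time_ms(host: str, port: int, samples: int = 10) -> float:
    """Median TCP connect time in milliseconds over several samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2.0):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.median(timings)

if __name__ == "__main__":
    for label, (host, port) in CANDIDATE_ENDPOINTS.items():
        try:
            print(f"{label}: {connect_time_ms(host, port):.1f} ms")
        except OSError as exc:
            print(f"{label}: unreachable ({exc})")
```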

Remember too, monolithic applications almost certainly require far more rigour around redundancy and failover – all potentially adding extra cost, complexity and management overhead.

Avoid building architectural slums

Without doubt, cloud-based application architectures constitute the essential future-proofing fabric of a modern digital business, but housing old problems in new wrappers is criminal.

Analogous to the town-planning mistakes of the past, many organisations create their own equivalent of high-rise “housing slums”, taking every legacy application problem and migrating it en masse to the cloud. Some even mandate that everything be virtualised, containerised and, now, serverless. But this only exacerbates the problems.

Older systems will probably need significant overhaul and service decoupling to fully exploit aspects like auto-scaling and container immutability, while runtime independence and isolation can unwittingly extend the life of a problem system – the “application slums” we should have cleared years ago.

Design for failure or prepare to fail

Despite the undoubted scale and elasticity benefits of cloud, it will fail – sometimes spectacularly. Consider for example the Amazon Web Services outage in Australia in 2016.

Torrential rains in Sydney took out an availability zone, which together with API call issues led to unreliable failover. Naturally, some businesses suffered, while others (ahem) weathered the storm.

They were the ones that had designed their systems with a multi-availability-zone failover approach and made savvy use of hybrid cloud. Once again this illustrates the importance of well-designed cloud architecture and engineering smarts – aspects many businesses fail to appreciate.
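What that design discipline looks like at the application level can be sketched very simply: the system knows about more than one replica of a critical dependency and fails over, or degrades gracefully, when the primary availability zone disappears. The endpoints below are hypothetical placeholders and the health-check pattern shown is just one illustrative approach, not a prescription.

```python
# A minimal sketch of "design for failure": the application knows about replicas
# of a service in more than one availability zone, plus an on-premises fallback
# for the hybrid-cloud case, and routes to the first healthy one.
# The URLs are hypothetical placeholders.
from urllib.request import urlopen
from urllib.error import URLError

SERVICE_REPLICAS = [
    "https://api.az-a.cloud.example.com/health",      # primary availability zone
    "https://api.az-b.cloud.example.com/health",      # second availability zone
    "https://api.dc.onprem.example.internal/health",  # hybrid-cloud fallback
]

def first_healthy_endpoint(timeout: float = 1.5) -> str | None:
    """Return the first replica that answers its health check, or None if all fail."""
    for url in SERVICE_REPLICAS:
        try:
            with urlopen(url, timeout=timeout) as response:
                if response.status == 200:
                    return url
        except (URLError, OSError):
            continue  # this zone is down or unreachable; try the next one
    return None

if __name__ == "__main__":
    endpoint = first_healthy_endpoint()
    print(f"Routing traffic to: {endpoint or 'no healthy replica; degrade gracefully'}")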

Yes, cloud can deliver business muscle, but over time that muscle can atrophy without constant workouts and pain.

All of which suggests that any cloud strategy should carefully consider the requisite improvements in supporting elements that don’t necessarily come cheap or easy, such as changes to automation and change-management practices, thorny cloud service interoperability issues, new tooling requirements and, last but not least, workforce capability.

With all the hype and spin around cloud it’s tempting to jump in feet first. But take heed: no cloud investment should be an all-or-nothing affair. Smart organisations recognise that benefits only accrue through a carefully considered approach. Those that adopt, shape and align a variety of cloud models according to business goals and outcomes will succeed.

Miriam Waterhouse heads up the strategy function in the Chief Information Officer group in the Australian Federal government. Her professional experience includes time as a Commonwealth Chief Information Officer, a Victorian government Information Technology Strategist and numerous other technologist roles.