How DevOps Can Accelerate the Cloud Application Lifecycle

As I wrote in my last column, it's clear that cloud computing enables, and imposes, enormous change in applications. In that article, I focused on the technical changes cloud computing is forcing on application architecture - all designed to support the increased scale and load variability, higher performance expectations and changed pricing that cloud computing imposes.

What I didn't address was another traditional application assumption that cloud computing is upending: The application lifecycle. Specifically, the cloud requires a significantly faster rhythm of application management, which will impose change on IT organizations.

On the face of it, it may not be obvious why the technical capability of cloud computing would transform IT organizations and their processes. However, automation - the critical foundation of the technical capability that cloud computing offers - also demands an accelerated application lifecycle.

Cloud Computing Puts Onus on Organization, Not Its Infrastructure

In the past - that is, before cloud computing - a leisurely pace of application feature creation and rollout wasn't a big issue. There was tremendous friction in the underlying resource infrastructure process, such that it was impossible to imagine quickly improving application functionality and making frequent updates to production environments. It took so long to obtain, install and configure infrastructure that slow-moving application development and deployment wasn't perceived as the biggest problem IT faced. Put another way, infrastructure friction outweighed application friction.

Cloud computing's automation removes all that infrastructure friction. Today it's trivial to obtain new computing resources in minutes - enormously faster than the weeks (or months) it used to take to make new infrastructure resources available. It's now clear that lengthy infrastructure provisioning timeframes were imposed by organizational process, not by the underlying infrastructure resource itself.
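
To make the point concrete, here is a minimal sketch (in Python, using the AWS boto3 SDK) of the kind of self-service call a developer can make to get a running server. The region, machine image and instance type are illustrative placeholders, not recommendations; any infrastructure-as-a-service API of this shape tells the same story.

    # Hypothetical illustration: request a virtual machine from AWS EC2.
    # The region, AMI ID and instance type below are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Requested instance {instance_id}; it is typically running within minutes.")

A few lines of code and a few minutes of waiting replace the weeks of procurement, racking and configuration the old process required.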

At the same time, the increasing integration of digital information into mainstream business offerings, such as the Internet-connected Nest thermostat, means that the operating parts of the business demand that application functionality be available faster. With infrastructure friction removed, the primary impediment is the application lifecycle itself. Consequently, we can expect the next dislocation caused by cloud computing to be within the application development and deployment process. Simply put, IT must make the application lifecycle faster.


That's where DevOps comes in. A portmanteau of "development" and "operations," DevOps symbolizes the combination of formerly separate organizations and processes. The vision behind the word is one of streamlined, integrated organizations that speed application updates through a joined-up process, enabling changes to be rolled out in hours or days instead of weeks (or months).

But how does that magic DevOps stuff happen?

Culture Change Necessary, But It's Not Enough

Culture change is one solution that's often bruited. In this view, getting developers and system administrators working together in joint teams will improve cooperation, thereby accelerating the application lifecycle.

I'm not much of a fan of this approach. Yes, there's often friction between development and operations, and being part of the same team would undoubtedly improve personal relationships. This approach might incrementally improve the current state by fostering cooperation and less finger-pointing. However, it's not clear that better personal interaction would significantly quicken application functionality release cycles.

In any case, incremental improvement is insufficient. Businesses can't succeed in the digital business environment they confront with functionality rollouts that are merely 20 percent faster. That only trims 21-day projects down to 18 days (or, more commonly, six-month deployments to five months). Today's businesses require 200 percent more speed, if not more.

The culture argument is misdirected, then. It focuses on people, not process - and it's the process that's broken. The faster process, moreover, must extend throughout the application lifecycle.

That said, we've made much progress in software engineering. Development practices - spurred by agile methods, frequent check-ins and automated build and test - have accelerated dramatically over the past 10 years. This revolution, called "continuous integration," has resulted in faster development, fewer project disasters and better predictability of project progress.
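
For readers who want a picture of what that looks like in practice, here is a hypothetical sketch of the script a continuous-integration server might run on every check-in. The build and test commands ("make build", "pytest") are stand-ins for whatever tools a given project actually uses.

    # Hypothetical CI hook: on every check-in, build the code and run the tests.
    # The commands below are placeholders for a project's real build/test tools.
    import subprocess
    import sys

    STEPS = [
        ["make", "build"],   # compile and package the application
        ["pytest", "-q"],    # run the automated test suite
    ]

    def run_ci() -> int:
        for step in STEPS:
            print("CI step:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                print("Check-in rejected: step failed.")
                return 1
        print("Check-in accepted: build and tests passed.")
        return 0

    if __name__ == "__main__":
        sys.exit(run_ci())

The value isn't in any particular line; it's that the same checks run automatically, the same way, on every single change.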


For too many IT organizations, however, the rapid pace of the application lifecycle grinds to a halt when it comes time to place new code into production. On the operations side, work all too often shifts back to manual processes, performed in multiple, redundant efforts as each downstream group accomplishes its task with its own chosen tools.

Until IT organizations achieve continuous deployment, in which code changes move through a fully automated application lifecycle, business requirements in the form of rapid response to unsettled business conditions and frequent functionality rollouts will go unaddressed. Worse still, business units will see all the pride in continuous integration as yet another example of IT self-involvement with little relevance to the needs of the application sponsor.

I've heard many arguments against continuous deployment. They center on the consequences of releasing bad code into production and the supposedly increased likelihood that bad code will be released if end-to-end automation is implemented. At bottom, the argument asserts that, without human intervention, things will go wrong.

In my experience, manual intervention is just as likely to cause problems as to avoid them, so I don't buy the argument. In any case, any justification for lengthy manual processes will be overridden by user demands, and clinging to established approaches will be fruitless. Just ask the VMware sysadmin being bypassed by developers who, fed up with the weeks-long request cycle for in-house resources, go straight to Amazon Web Services to obtain VMs.

Continuous Deployment Needs Automation, Integration, Aligned Incentives

To achieve continuous deployment, four things are necessary.

A streamlined, automated process. If the process includes milestone reviews by a committee, process halts while someone approves further progress, or any kind of manual oversight for routine releases, then nothing is going to make the system move faster. Asserting that occasional changes shouldn't go through automatically, whether due to complexity or some other reason, isn't the same as proving that every change needs manual intervention. It just means you need to implement a process that supports intervention as required while enabling automatic pass-through for routine changes.
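
A hypothetical sketch of such a gate, in Python, makes the distinction plain: routine changes pass straight through to deployment, while only changes explicitly flagged as exceptions wait for a human.

    # Hypothetical release gate: routine changes deploy automatically;
    # only declared exceptions pause for human approval.
    from dataclasses import dataclass

    @dataclass
    class Change:
        identifier: str
        routine: bool                 # True for ordinary releases
        reason_for_review: str = ""

    def deploy(change: Change) -> None:
        print(f"Deploying {change.identifier} to production.")

    def release(change: Change, approved_by_human: bool = False) -> None:
        if change.routine or approved_by_human:
            deploy(change)
        else:
            # Only the exceptions wait for a person; automatic is the default.
            print(f"Holding {change.identifier} for review: {change.reason_for_review}")

    release(Change("feature-123", routine=True))
    release(Change("schema-migration-7", routine=False,
                   reason_for_review="database change"))

The point is that manual oversight becomes the exception you design for, not the toll every release must pay.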

An integrated, end-to-end toolchain. Obviously, unless there are supporting tools underlying the process, the automated workflow is pointless. Essentially, the output of the tools in one phase has to be handed off to tools in the next phase, which means the tools have to work together. Today, your choices for a DevOps toolchain tend to be either expensive proprietary offerings from large vendors or homegrown, stitched-together collections of open source components. Each approach has its strengths as well as drawbacks. With the interest in the area, and its increasing importance, one can be sure additional flexible, affordable solutions will come to the marketplace shortly.
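
As an illustration of what "work together" means, here is a hypothetical sketch in which each phase of the pipeline consumes the previous phase's output directly, so nothing is reconstructed by hand between build, test and deploy. The phase functions are stand-ins for real tools.

    # Hypothetical end-to-end toolchain: each phase hands its output to the next.
    def build(commit: str) -> dict:
        return {"commit": commit, "artifact": f"app-{commit}.tar.gz"}

    def test(build_output: dict) -> dict:
        return {**build_output, "tests_passed": True}

    def deploy(test_output: dict) -> str:
        assert test_output["tests_passed"], "never deploy an untested artifact"
        return f"{test_output['artifact']} deployed to production"

    result = "abc1234"                     # the check-in that starts the pipeline
    for phase in (build, test, deploy):
        result = phase(result)
    print(result)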


Shared application artifacts. Those joined-up tools have to pass along artifacts that are common to all groups using them. Re-creating executables and configurations, even from an automated runbook, creates additional work, presents the opportunity for mistakes to creep in and is bound to impede rapid functionality availability. It's far better to use a single set of artifacts and pass them among groups, adding and subtracting permissions to enforce organizational partitioning as appropriate.
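
A hypothetical sketch of that idea: the artifact is built once, identified by its checksum, and only its environment and permissions change as it is promoted from group to group.

    # Hypothetical shared artifact: the same bits move from dev to production,
    # with only the environment and write permissions changing along the way.
    from dataclasses import dataclass, field

    @dataclass
    class Artifact:
        name: str
        checksum: str                       # identifies the one true build
        environment: str = "dev"
        writers: set = field(default_factory=lambda: {"development"})

    def promote(artifact: Artifact, environment: str, writers: set) -> Artifact:
        # Same name and checksum; only where it runs and who may change it differ.
        return Artifact(artifact.name, artifact.checksum, environment, writers)

    app = Artifact("billing-service-1.4.2.jar", checksum="sha256:9f2c1ab7")
    staged = promote(app, "staging", writers={"qa"})
    production = promote(staged, "production", writers={"operations"})
    print(production.environment, production.writers, production.checksum)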

Aligned metrics and incentives across the organization. With incentives, we return to the topic of culture. As indicated, I don't really believe that putting different groups together, in the hope that personal relationships will develop, makes friction and mistakes vanish. For groups to work together productively, they must share a single set of metrics and incentives. If one group is measured by how frequently it releases updated functionality and another group is measured by operational stability, there will be conflict over what gets done.

You need to implement a single set of measures so that everyone can pull together. Don't think that this can be accomplished by creating the union of all previous metrics, as combining a requirement for frequent updates with the need for operational stability will just create conflict within one group rather than across multiple groups.

The obvious objection to these recommendations is that they're hard and will cause a great deal of disruption. That's absolutely true - and if we were living in the IT world of a decade ago, there would be no need and no point in implementing such disruptive measures.

We don't live in that world, though. We live in today's world, where the accelerating pace of business change necessitates a complete rework of how IT has operated since its earliest days. Manufacturing went through a painful 30-year reengineering phase that upended century-old practices in the pursuit of efficiency and lower costs. Things are moving faster now, and IT won't have anywhere near 30 years to accomplish the same task - but accomplish the task it must.

Bernard Golden is senior director of the Cloud Computing Enterprise Solutions group at Dell. Prior to that, he was vice president of Enterprise Solutions for Enstratius Networks, a cloud management software company, which Dell acquired in May 2013. He is the author of three books on virtualization and cloud computing, including Virtualization for Dummies. Follow Bernard Golden on Twitter @bernardgolden.
