
Cloud Computing: It's the Economics, Stupid

The question I have is whether agility trumps cost permanently

The question of costs associated with cloud computing continues to be controversial. You may recognize in this blog's title an homage to the motto of Bill Clinton's 1992 Presidential campaign: "It's the Economy, Stupid." The motto referred to the Clinton campaign's decision to focus relentlessly on how the U.S. economy was doing in 1992, sidestepping other issues and always, always circling back to the economic outlook for the U.S. I was reminded of this by some recent Twitter discussions about the importance of economics in cloud adoption.

This question of cloud economics arises especially in the context of the endless discussions about private vs. public clouds (private usually being understood as a cloud environment inside a company's own data center). Some people assert that private clouds obviously must be less expensive, because one owns the equipment and is not paying what is, in effect, a rental fee. The obvious analogy is buying a car vs. renting one. If one uses a car every day, it's clearly less expensive to own it than to pay a daily rental fee to, say, Hertz. Sometimes this argument is bolstered by noting that public cloud providers are also profit-seeking enterprises, so an extra tranche of end-user cost is present, representing the profit margin of the public offering.
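To make the rent-vs.-buy argument concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an invented assumption for illustration, not any real vendor's pricing; the point is the shape of the comparison: a fixed annual cost of ownership against a per-hour meter.

    # Own vs. rent, back-of-the-envelope. All figures are illustrative
    # assumptions, not real pricing.
    server_capex = 6000.0      # purchase price of one server, USD
    amortization_years = 3     # straight-line depreciation period
    annual_opex = 1200.0       # power, cooling, space, admin labor per year
    rental_rate = 0.50         # public cloud price per server-hour, USD

    hours_per_year = 24 * 365
    annual_cost_to_own = server_capex / amortization_years + annual_opex

    # Utilization at which a year of rented hours costs the same as owning
    break_even = annual_cost_to_own / (rental_rate * hours_per_year)
    print(f"Owning costs ${annual_cost_to_own:,.0f} per year")
    print(f"Break-even utilization: {break_even:.0%}")
    # Above this utilization, owning wins; below it, renting wins.

Under these made-up numbers the break-even lands around 73% utilization: the car you drive every day is cheaper to own, while the car you drive twice a month is cheaper to rent.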

The proponents of public cloud computing's cost advantages point to the economies of scale large providers realize. At a recent "AWS in the Enterprise" event, Werner Vogels, CTO of Amazon, noted that Amazon buys "10s of racks of servers at a time" and gets big discounts because of this. AWS also buys custom-designed equipment that leaves out unneeded, power-consuming features like USB ports. Moreover, the public cloud providers automate operations to an extreme degree, driving down the labor cost component of their clouds.


There is yet a third approach to cloud economics that calls for a blend of private and public (sometimes referred to as hybrid), which marries the putative financial advantages of self-owned private clouds with the resource availability of highly elastic public clouds; this can be summarized as "own the base and rent the peak."
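The same kind of hedged, invented-numbers sketch shows why the hybrid approach can undercut both pure strategies: the owned servers stay busy covering steady demand, while the public cloud's meter runs only during bursts.

    # "Own the base and rent the peak," illustrative figures only.
    base_demand = 10              # servers needed around the clock
    peak_demand = 25              # servers needed during bursts
    peak_hours_per_year = 500     # hours per year spent at peak

    cost_to_own = 3200.0          # per server-year (amortized capex + opex)
    rental_rate = 0.50            # public cloud, per server-hour
    hours = 24 * 365

    burst = peak_demand - base_demand
    all_private = peak_demand * cost_to_own   # must be sized for the peak
    all_public = (base_demand * hours + burst * peak_hours_per_year) * rental_rate
    hybrid = base_demand * cost_to_own + burst * peak_hours_per_year * rental_rate

    print(f"All private (sized for peak): ${all_private:,.0f}/yr")
    print(f"All public:                   ${all_public:,.0f}/yr")
    print(f"Hybrid (own base, rent peak): ${hybrid:,.0f}/yr")

With these assumptions the hybrid comes in well under either pure option, because an all-private cloud must be sized for a peak it rarely hits, while an all-public approach pays the rental premium even on the always-on base load.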

What is interesting about the Twitter discussion around this topic is that people point to surveys of private cloud interest indicating that the real motive behind the move to cloud computing is agility, i.e., the ability to obtain computing resources very quickly and in an on-demand fashion. Cost savings were considered secondary to rapid resource availability.

I'm of two minds about this orientation toward agility rather than cost savings.

For sure, agility is magic, particularly when compared to the drawn-out procurement and provisioning cycles common to many organizations. Once you've seen that compute resources can be available in a matter of minutes, the old way seems antediluvian. We got a sense of this thirst for agility in the recent Ponemon Institute survey, which notes that 73% of respondents said cloud computing allows business units to circumvent IT; moreover, business units control access to IT resources in 37% of the respondents' companies (the title of the article, by the way, is "Cloud computing makes IT access governance messier," which gives a feel for the findings!). Those decisions to circumvent IT are born of the need to respond immediately to changed business conditions or opportunities: "If I can't get it done the official way, I'll do it myself." I have heard a number of anecdotes in which business units have put an app up on Amazon and, when called to task, have overridden the demand that the app be moved back inside by pointing to the financial results associated with it. Profit bests policy.

And it's clear that the agility of public cloud providers has provided an example of how it can be done, so internal IT groups recognize the need to meet the new benchmark for resource availability. It's not tenable to maintain that everything should take months when someone else can do it in minutes. The question then becomes "why can't you do it in minutes?" With the overhanging threat of business units choosing to go elsewhere for resources, implementing a private cloud that provides the benefits available from public providers becomes paramount. Of course, a private cloud also avoids the alleged shortcomings of public clouds in terms of security, SLAs, etc.

The question I have is whether agility trumps cost permanently. In other words, once the agility benchmark is met, will the cost of the internal cloud versus public providers be immaterial, or will it become the next benchmark? The question is somewhat reminiscent of performance tuning, where a common experience is that once one bottleneck is removed, another manifests and becomes the problem.

I have to believe that cost effectiveness will become the new benchmark after the agility requirement is met. Once responding quickly to changing business conditions is possible, optimization along the "how much does it cost me to respond" dimension will be the next issue.

Put another way, if the agility issue arose because public cloud providers demonstrated that much better responsiveness is possible, why wouldn't cost become an issue as well? If public provider ABC can deliver a computing resource for X cents per hour, the obvious next question to internal IT is: what can you deliver it for? It's hard to believe that the competitive pressure will end at "respond to compute resource requests as quickly as Amazon." I know if I were running a business unit, measured on my overall margin, I'd be looking for ways to reduce my costs, and I would definitely use external comparisons to demand similar costs from my captive supplier.
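When that benchmarking conversation happens, the captive supplier's quote will reduce to something like the sketch below: an effective hourly rate dominated by utilization (the figures, as before, are invented for illustration).

    # Effective hourly cost of a captive (internal) server, illustrative only.
    cost_to_own = 3200.0    # amortized capex + opex, per server-year
    public_rate = 0.50      # benchmark public cloud price per hour
    hours = 24 * 365

    for utilization in (0.25, 0.50, 0.75, 1.00):
        effective_rate = cost_to_own / (hours * utilization)
        verdict = "beats" if effective_rate < public_rate else "loses to"
        print(f"{utilization:.0%} utilized: ${effective_rate:.2f}/hr "
              f"({verdict} the ${public_rate:.2f}/hr benchmark)")

Under these assumptions the internal cloud only beats the public benchmark above the same roughly 73% utilization break-even as the first sketch, and high utilization is precisely the lever that the public providers' scale lets them pull.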

We are in the midst of a transition in how computing services are delivered, moving from a custom, manually intensive, expensive approach to a standardized, automated, inexpensive one. It's unrealistic to think that the expectations placed upon internal computing service providers will change along only one dimension, speed of response, while leaving all the other existing expectations unaffected. Change doesn't work like that. Anyone who has read James Burke's blog entry about newspapers should note his discussion of the analogous rise of the printing press (search for the section that discusses Elizabeth Eisenstein). His point is that, retrospectively, the transition to the printing press looks inevitable and even fairly orderly; at the time, however, it seemed chaotic and confused, and few people then would have believed that printing would become the basis of how knowledge is transmitted. Today, newspapers are in the midst of a similar whirlwind as the basis for their existence shifts: the expensive printing press gives way to cheap electrons.

We're in the middle of that kind of transition with regard to what the future of computing is going to look like. The one thing I can confidently predict is that, 10 or 20 years in the future, we'll look back on the way things are typically done today the way we look at an old black and white movie in which someone wants to make a long distance telephone call and picks up the receiver and talks to an operator.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.

