Why private cloud will make IT think like Wal-Mart

Expert analysis and advice on server virtualization technologies, deployments and management.

I think this self-service provisioning model is going to pose some challenges to IT operations and, by extension, to IT organizations that want to implement a private cloud.

Here are a few of the interesting challenges this resources-in-a-jiffy model poses:

Reduced demand signal insight

In today's processes, an application group's need for compute resources is signaled well ahead of time, allowing the Operations group sufficient time to procure and provision resources. While manual and slow, these processes enable overall demand to be assessed, prioritized, compared against available funding and implemented. (And, by the way, did you see that Gartner and Forrester noted that IT spend was being further revised downward for 2009? Bet there's going to be plenty of angst about this in provisioning meetings!)

By contrast, the "submit and get" cloud provisioning model removes those processes entirely, allowing applications to make immediate demands upon Operations infrastructure and, crucially, removing the insight into likely demand patterns previously discerned via the manual processes. Wal-Mart forecasts demand via real-time checkout data, married to insights gleaned from examining historical consumption patterns: e.g., it sells a lot of candy around October 15, year in and year out. IT operations has less historical data, and demand for compute resources is likely to be more unpredictable, especially over the next few years as initial private clouds are implemented.
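To make the Wal-Mart analogy concrete, here's a minimal sketch (mine, not anything from a vendor product) of the kind of forecast Operations could run once it does accumulate some history: averaging the same calendar month across prior years to project next month's demand. The numbers and names are purely illustrative assumptions.

```python
# Illustrative sketch: forecast next month's VM demand from historical requests.
# All figures are invented; the point is the technique, not the values.
from collections import defaultdict
from statistics import mean

# (year, month) -> number of VMs requested that month (hypothetical history)
history = {
    (2007, 10): 120, (2007, 11): 95, (2007, 12): 80,
    (2008, 10): 150, (2008, 11): 110, (2008, 12): 90,
}

def forecast_for_month(history, month):
    """Average demand seen in the same calendar month across prior years --
    the 'candy around October 15' effect, applied to compute resources."""
    by_month = defaultdict(list)
    for (_, m), vms in history.items():
        by_month[m].append(vms)
    return mean(by_month[month]) if by_month[month] else None

print(forecast_for_month(history, 10))   # -> 135 (average of the Octobers on record)
```

Wal-Mart has decades of checkout data behind this kind of calculation; a newly built private cloud has a few months at best, which is exactly the problem.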

Increased demand

As resource provisioning becomes easier, more resources will be demanded. Classical economics focuses on price and asserts that lower prices are associated with increased demand (price elasticity). Cloud computing is often characterized as less expensive than traditional provisioning models, although there is some controversy about that. No controversy exists, however, about the fact that cloud computing makes it easier to obtain compute resources. The manual processes outlined earlier introduce friction into the provisioning process; with cloud computing the process is much smoother and easier. When it becomes easier to do something, people generally do more of it. This translates into increased demand for compute resources in private clouds, over and above the consumption patterns already established in the manual domain.
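For readers who like the economics spelled out, here's a back-of-the-envelope illustration (my numbers, purely hypothetical) of elasticity applied to provisioning: if the effective "price" of getting a server, measured in effort and wait time as much as dollars, drops sharply, even a modest elasticity implies a sizable jump in requests.

```python
# Hypothetical illustration of price elasticity applied to compute requests.
# elasticity = (% change in quantity demanded) / (% change in price)

def projected_demand(current_demand, price_change_pct, elasticity):
    """Project new demand given a percentage price change and an assumed elasticity."""
    quantity_change_pct = elasticity * price_change_pct
    return current_demand * (1 + quantity_change_pct / 100)

# Assume self-service cuts the effective cost of a request by 50% and demand
# elasticity is -0.8; both numbers are invented for illustration.
print(projected_demand(current_demand=200, price_change_pct=-50, elasticity=-0.8))
# -> 280.0 requests: a 40% increase over the 200 seen under the manual process
```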

Need for a rationing mechanism

Given increased demand and a fixed amount of resources, how will those resources be allocated among competing demands? This verges on the concept of chargeback, which is rather controversial within cloud computing. Some assert that how cloud computing is paid for is a mere detail; others assert that the concept of charging for resource use is fundamental. If one accepts that demand will increase and that capacity is fixed (at least in the short term), it's likely demand will outstrip supply, and some mechanism is needed to mediate demand. Those who maintain that actual monetary chargeback is not necessary tend to support usage reports, indicating how much compute capacity has been consumed. My belief is that there will be a lot of pressure to move to actual chargeback, not least because the public cloud providers offer it; application groups will maintain that the internal cloud should offer as much transparency and immediate feedback as a public cloud. A different tack would be to place limits within the provisioning process, preventing people from requesting too high a level of resources or forcing them to get a signoff, but doesn't that sort of negate the whole purpose of cloud computing?
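As a thought experiment (a sketch of one possible approach, not a description of any product), that last option can be as light-touch as a quota check in the provisioning workflow: requests within a group's remaining allocation go straight through, and only requests beyond it fall back to a signoff. The group names, quotas and the signoff rule below are all hypothetical.

```python
# Sketch of a simple quota-based rationing check in a provisioning workflow.
# Group names, quotas and usage figures are hypothetical.

quotas = {"web-apps": 50, "analytics": 30}      # VMs per month per group
usage = {"web-apps": 42, "analytics": 5}        # VMs already provisioned this month

def handle_request(group, vms_requested):
    remaining = quotas[group] - usage[group]
    if vms_requested <= remaining:
        usage[group] += vms_requested
        return "provisioned immediately"
    # Over quota: don't refuse outright, just fall back to the old manual step.
    return "signoff required before provisioning"

print(handle_request("web-apps", 5))    # fits within quota -> provisioned immediately
print(handle_request("web-apps", 10))   # exceeds remaining quota -> signoff required
```

It preserves most of the self-service experience, but the moment a signoff appears, some of the "jiffy" is gone, which is exactly the tension described above.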

Pressure on IT Operations capacity planning

Nobody wants a "404" when they press the "submit" button requesting resources. This pressure is the logical companion to reduced demand signaling and increased overall demand: Operations is going to have to manage the underlying compute infrastructure so that resources are always available when requested. Even if a rationing mechanism is in place, the consumption signals it sends (an invoice or a usage report) are retrospective (i.e., they report on the previous month's use) and don't help at the moment of truth when someone requests resources. IT Operations will come under significant pressure to always have sufficient compute resources available. This doesn't get discussed much, but expect it to be a hot topic in the future regarding private clouds. Of course, this is a challenge for all cloud providers, as they all promise what the UC Berkeley RAD Lab report calls "the illusion of infinite resources." Amazon is characterized as having trouble with this issue, so it's not unique to private clouds; however, the problem may be more acute for private clouds, since this capacity planning skill is, today, not well developed.
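One way Operations might stay ahead of that "illusion of infinite resources" is a standing headroom check: compare forecast peak demand against installed capacity and kick off procurement while hardware lead times still allow it. This is my own illustration; the thresholds and figures are invented.

```python
# Sketch of a capacity-headroom check; all figures are illustrative assumptions.

installed_capacity_vms = 500          # what the current infrastructure can host
forecast_peak_vms = 430               # e.g., output of the forecast sketch above
target_headroom = 0.20                # keep 20% spare so "submit" never returns a 404

def needs_procurement(capacity, forecast, headroom):
    """True if forecast demand eats into the spare capacity we want to preserve."""
    return forecast > capacity * (1 - headroom)

if needs_procurement(installed_capacity_vms, forecast_peak_vms, target_headroom):
    print("Forecast exceeds headroom threshold: start the procurement cycle now.")
else:
    print("Sufficient headroom for the forecast period.")
```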

Changed budget practices

So where is the CAPEX? Much of the enthusiasm for cloud computing focuses on the shift of payment structures for cloud users. Instead of having to pay for a capital asset (so-called CAPEX), users pay only for the compute services they consume (so-called OPEX). This is great, but it neglects to recognize that someone, somewhere, has to spend the CAPEX so that cloud users can enjoy paying via OPEX. In the case of internal clouds, that someone is the IT organization, which will have to make the CAPEX investment. One difficulty is that, today, common budget practice has application projects funding capital expenditure; when application groups need only pay for actual usage, capital will be neither allocated to apps nor transferred to Operations to fund equipment acquisition. This means that capital investment will need to be directed away from applications and toward IT Operations. Obviously, this represents changed practices, and the next couple of years should be interesting as the implications of the budget shifts are explored.
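To illustrate the budget shift (again, with invented numbers), here is the arithmetic an Operations group would face: it fronts the capital, then has to set a usage rate that recovers that investment from the application groups' OPEX over the equipment's useful life.

```python
# Back-of-the-envelope chargeback rate needed to recover a capital investment.
# Every figure here is a made-up example.

capex = 1_000_000                        # servers, storage, network bought by IT Operations
useful_life_years = 3
expected_vm_hours_per_year = 2_000_000   # forecast consumption across all app groups

hourly_rate = capex / (useful_life_years * expected_vm_hours_per_year)
print(f"Required chargeback rate: ${hourly_rate:.3f} per VM-hour")
# -> roughly $0.167 per VM-hour just to recover the hardware CAPEX,
#    before power, staff, software licenses or any margin.
```

If the expected consumption doesn't materialize, Operations is left holding the capital cost, which is why the forecasting and capacity-planning challenges above matter so much to the finance side as well.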

Overall, one might say that the functionality of a cloud environment will need to be matched by changed processes. This means that not only are products (e.g., vSphere) involved, but so is the dread word: re-engineering. It makes sense, because every innovation that is disruptive, as cloud computing is so often characterized, well, disrupts things. And, as I noted in a recent posting, disruption is a messy thing with lots of organizational angst. The disruption is inevitable and necessary; usually, the only question is whether one embraces it as a so-called early adopter or impedes it as a laggard. In the end, however, disruption arrives, bidden or unbidden. The IT supply chain is under pressure to adapt, and adapt it will.

Bernard Golden is CEO of consulting firm HyperStratus, which specializes in virtualization, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualization to date.
