Facing strong concerns about control and security, the cloud-computing trend has drifted somewhat--away from the notion that all computing resources can be had from outside, and toward a vision of a data center transformed for easy connections to internal and external IT resources.
Sales of cloud-related technology are growing at 26 percent per year--six times the rate of IT spending overall, though they made up only about 5 percent of total IT revenue this year, according to IDC's Cloud Services Overview report. Defining what constitutes cloud-related spending is difficult, the report acknowledges, though it estimates global spending of $17.5 billion on cloud technologies in 2009 will grow to $44.2 billion by 2013.
Hybrid or internal clouds will be the rule, however; even in 2013, only about 10 percent of that spending will go specifically to public clouds, IDC predicts.
[For expert advice on proving the value of your cloud plan to the business, see CIO.com's recent article 8 Ways to Measure Cloud ROI.]
Hybrid cloud infrastructure isn't radically different from existing data-center best practices, except that all the pieces are supposed to fit neatly together using Internet-age interoperability standards rather than homegrown kludges, according to Chris Wolf, analyst at The Burton Group.
As you prepare spending plans that line up with a move to the cloud, consider these four items as key for your list.
1. Application Integration
Surprise: Software integration isn't the first thing most companies think about when building a cloud, but it's the most important one, according to Bernard Golden, CEO at cloud consulting firm HyperStratus and a CIO.com blogger.
Integration means more than just the mainframe-era pattern of batch-processing chunks of data traded between applications once or twice a day, according to Tom Fisher, vice president of cloud computing at SuccessFactors.com, a business-application SaaS provider in San Mateo, Calif.
Being able to provision and manage user identities from a single location across a range of applications is critical, especially for companies that have never been in the software-providing business before and don't view their IT as a primary product, he says.
"What you're looking for is [to] take your schema and map it to PeopleSoft or another application so you can get more functional integration," Fisher says. "You're passing messages back and forth to each other with proper error-handling agreement so you can be more responsive. It's still not real-time integration, but in most cases you don't really need that."
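The message-passing style Fisher describes can be sketched minimally: each inbound message is validated on receipt, and failures come back as structured errors the sender can act on, rather than vanishing into a nightly batch. The field names and JSON format below are illustrative assumptions, not anything from SuccessFactors.

```python
import json

def handle_message(raw):
    """Parse an inbound integration message; acknowledge or reject it.

    Hypothetical sketch of message-based application integration with an
    error-handling agreement: malformed input produces a structured
    rejection the sending system can retry on or alert about.
    """
    try:
        msg = json.loads(raw)
        user = msg["user_id"]      # field names are illustrative only
        action = msg["action"]
    except (json.JSONDecodeError, KeyError) as exc:
        # Report the failure back instead of silently dropping the record.
        return {"status": "error", "reason": str(exc)}
    # ... apply the change to the local schema/application here ...
    return {"status": "ok", "user_id": user, "action": action}
```

As Fisher notes, this still isn't real-time integration, but the prompt, structured error responses make the two systems far more responsive to each other than a twice-daily batch exchange.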
2. Security and Federation
The second critical factor in building a useful cloud is the ability to federate--securely connect without completely merging--two networks, Golden says.
That requires layers of security, including multifactor authentication, identity brokers, access management and, in some cases, an external service provider who can deliver that level of administrative control, according to Nico Popp, VP of product development at Verisign, which is considering adding a cloud-based security service.
What it really requires is technology that doesn't yet exist, according to Wolf: an Information Authority that can act as a central repository for security data and control of applications, data and platforms within the cloud. Today, it's possible to assemble that function out of some of the pieces Popp mentions, but there is no single technology able to span all the platforms necessary to provide real control of even an internally housed cloud environment, Wolf says.
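At its core, federation lets one network accept an assertion about a user without ever seeing the other network's directory. A minimal sketch of that idea, assuming a shared secret distributed out of band, is a signed assertion that either side can verify; real deployments would use SAML or OAuth, PKI and the multifactor layers Popp describes rather than a bare shared key.

```python
import hashlib
import hmac

# Assumption: the two federated networks (or their identity broker) have
# exchanged this secret out of band. Example value only -- never hardcode
# real keys.
SHARED_SECRET = b"example-only-not-a-real-key"

def sign_assertion(user_id: str) -> str:
    """Sign an identity assertion so a partner network can trust it."""
    return hmac.new(SHARED_SECRET, user_id.encode(), hashlib.sha256).hexdigest()

def verify_assertion(user_id: str, signature: str) -> bool:
    """Verify a partner's assertion without access to its user directory."""
    expected = sign_assertion(user_id)
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature)
```

The point of the sketch is the trust boundary: the verifying side never merges or queries the other directory; it only checks that the assertion was produced by someone holding the shared credential.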
3. Virtual I/O
Having to squeeze data for a dozen VMs through a couple of NICs will keep you from scaling your VM cluster to cloud proportions, according to Bill Welty, manager of IT enterprise architecture and Unix operations at a large digital mapping firm (which sells high-quality satellite Earth images to customers including Google, NASA and the National Geospatial-Intelligence Agency, as well as to the oil, real-estate and other industries).
"When you're in the dev/test stage, having eight or 10 [Gigabit Ethernet] cables per box is an incredible labeling issue; beyond that, forget it," Welty says. "Moving to virtual I/O is a concept shift--you can't touch most of the connections anymore--but you're moving stuff across a high-bandwidth backplane and you can reconfigure the SAN connections or the LANs without having to change cables."
Virtual I/O servers, such as the Xsigo I/O Director servers Welty uses, can run 20Gbit/sec through a single cord and support as many as 64 cords to a single server, connecting to a backplane with a total of 1,560Gbit/sec of bandwidth.
Concentrating so much bandwidth in one device saves space, power and cabling, Welty says, keeps network performance high and ultimately saves money on network gear.
"It becomes cost effective pretty quickly," Welty says of the Xsigo servers, which start around $28,000 through resellers such as Dell. "You end up getting three, four times the bandwidth at a quarter the price."
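A quick back-of-the-envelope check shows how those numbers fit together. The bandwidth figures come from the article; the Gigabit Ethernet comparison is illustrative.

```python
# Virtual I/O arithmetic using the figures quoted above.
link_gbps = 20              # one virtual I/O cord (20 Gbit/sec)
cords_per_server_max = 64   # maximum cords to a single server
backplane_gbps = 1560       # total backplane bandwidth

# A fully cabled server: 64 cords x 20 Gbit/sec = 1,280 Gbit/sec,
# which still fits within the 1,560 Gbit/sec backplane.
server_max_gbps = link_gbps * cords_per_server_max

# Each 20 Gbit/sec cord carries what 20 one-gigabit Ethernet cables would,
# which is where the cabling (and labeling) savings Welty describes come from.
gige_cables_replaced = link_gbps // 1
```

In other words, a single virtual I/O cord replaces a couple of dozen of the GigE cables that made Welty's dev/test racks "an incredible labeling issue."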
4. Storage
We mentioned it before, but storage continues to be the weak point--the hole into which one pours money--of both the virtualization and cloud-building worlds.
"We've got about 2.5 petabytes of spinning disk right now, and plan to bring it to three in the short term," Welty says. "That's mostly because our business rides on data, but VMs take up a lot of space."
"Storage is going to continue to be one of the big costs of virtualization," Golden says. "Even if you turn 90 percent of your servers into images, you still have to store them somewhere."
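Golden's point is easy to see with a rough sizing sketch. Every figure below is an assumption for illustration; only the 90-percent virtualization ratio comes from his quote.

```python
# Illustrative sizing: even servers converted to images still consume storage.
physical_servers = 500        # assumed fleet size (not from the article)
virtualized_fraction = 0.90   # "turn 90 percent of your servers into images"
avg_image_gb = 40             # assumed average VM image size

vm_images = int(physical_servers * virtualized_fraction)
image_storage_tb = vm_images * avg_image_gb / 1000
```

Under these assumptions, 450 VM images consume 18 TB before a single byte of application data is written, which is why storage stays one of the big line items in a virtualization budget.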