
Cloud Mea Culpa: Nick Carr Was Right and I Was Wrong

While it is tempting to forecast The End of Computing, it's unlikely that IT development will stop at Amazon-hosted (or Microsoft- or Google-hosted, for that matter) centralised computing

Nick Carr was right and I was wrong. Sort of, anyway.

I confess to having roughed up Nick Carr quite a bit (in print, of course) over the past couple of years. I thought he was plain wrong in his book Does IT Matter? His argument there could be summed up as follows: the spread of IT throughout the economy makes it impossible for any company to achieve competitive advantage through IT; therefore, companies should settle for a commodity IT perspective, spending no more than the minimum necessary to deliver basic functionality.

I feel that his characterisation of competitive advantage is a straw man; he seems to imply that competitive advantage is a permanent advantage, immune to attack by other market participants. Information technology, just like every other form of competitive advantage, is subject to erosion over time, whether through mimicry by competitors or through being supplanted by newer, better forms of competitive advantage. Moreover, with the spread of IT throughout the economy, IT is more important to companies, not less, since it infuses whatever a company puts forth as its competitive advantage.

To take an example, certain high-end hotel chains emphasise their personal service: remembering that you like early-morning wakeup calls, that you always want to use a particular limousine service, and so on. Those chains don't depend upon individual staff to remember those details; they use IT to support the service offering. Behind every smiling staff member who seems to know what you want before you do stands a sophisticated database system that stores information on every guest, carefully graded by frequency of visit, typical spend patterns, and the like. So one could argue that IT isn't their competitive advantage; on the other hand, one could argue that IT is fundamental to their competitive advantage.

Therefore, I felt Carr was well off the mark with that book.

His more recent book, The Big Switch, however, is a very different proposition. In that book, a discussion of the emergence of cloud computing, he outlines a very real trend and trenchantly observes that it is transforming the way IT is done. One of Carr's real strengths as a thinker and writer is his ability to examine historical developments with a fresh eye and discern important lessons germane to the topic at hand. So, for example, in Does IT Matter?, he discussed the rise and role of railroads, observing that they initially gave huge competitive advantage, but once they were built out, there was little competitive advantage available from railroad availability; nevertheless, despite the reduced advantage, enormous sums continued to be poured into railroad development, destroying everyone's margins as a result. Likewise, he concluded, IT was suffering the same fate: initial advantage eroded by wide availability, with excess investment nevertheless being wasted by shortsighted corporations.

In The Big Switch, Carr reaches to another 19th century development to help understand today's IT: electric power. In the early days of industrial electricity use, companies had to generate their own power. Each company built its own power plant, arranging for fuel delivery, generating capacity design, and so on. Eventually, electrical utilities were formed, making it possible to gain the benefits of electricity without needing to self-generate. Because the utilities had access to more capital, could manage load factors across a larger pool of users, and could bring more expertise to bear, company reliance on self-generation dwindled, replaced by public grid power.

From this historical trend, Carr extrapolates the inevitable end of corporate data centers. Whereas it was necessary for companies to run their own computing infrastructures in the early days of computing, today utility computing companies like Amazon and Google have sprung up. Because these providers have access to cheap capital, can manage load across multitudes of customers, and bring specialised expertise to data center design, Carr forecasts that company-owned data centers will fade away, replaced by access to a public computing grid.

I was dazzled by Carr's historical analogy (and, in fact, was enthralled by his description of the rise of electricity). However, on my original reading of his book, I was put off by what struck me as his over-glib description of how cloud computing works. He described how a friend of his put together a website, snapping together individual applications for blogging, photos, and so on. From this, he concluded that corporations would move to the cloud, getting rid of all those pesky developers and extra IT headcount. I felt (and feel) that Carr does not grasp the complexity of typical corporate computing infrastructures and production applications. Moreover, I felt (and feel) that he underestimates the rapid pace of technology change in IT, all thanks to Moore's Law (and also thanks, perhaps more directly, to Moore's company).

The topology of application delivery has changed several times throughout the (roughly) 50-year history of computing. Originally, everything was centralised in an on-premise mainframe. Then timesharing, supported by primitive public networking, came along (in fact, nearly every time I speak on cloud computing, someone asks, "Isn't this just like timesharing in the old days?"). Then, as the cost of computers dropped, came centralised minicomputers. Distributed PC-based client/server followed that. Then came remote computing via the Internet. Now comes cloud computing.


The point of these changes is that they were not merely fashionable initiatives. Each represented a significant improvement in computing capability, driven by the advance of technology. And each resulted in a massive change in infrastructure, protocols, connectivity, and hardware. This is the point, I think, where Carr's analogy of the electricity grid breaks down. Unlike computing, the fundamentals of electricity settled down quite early and have remained pretty stable ever since. Otherwise, how could Livermore's famous 100-year lightbulb remain burning to this very day? If electricity were like computing, the City of Livermore would have been forced to discard the lightbulb due to changes in power frequencies, socket obsolescence, incompatible hardware, or the like.

While it is tempting, to borrow a phrase, to forecast The End of Computing, it's unlikely that IT development will stop at Amazon-hosted (or Microsoft- or Google-hosted, for that matter) centralised computing. To extrapolate one current trend, look at the explosion of computing-capable portable devices like smartphones and music players. What will happen to computing when, thanks to Moore's Law, these become as powerful as today's most powerful desktop computers, with just as much storage? Surely computing infrastructures will evolve to integrate that new distributed computing capacity.

Data centers will evolve just the same. I doubt that today's best practices in data centers, as exemplified by cloud providers, will remain static.

Carr's more general point, however, is well taken. What is the role of corporate data centers in this still-developing IT landscape? Notwithstanding the continuing rapid evolution of infrastructure, does it make sense for companies to run their own data centers going forward? Putting the question another way, is computing so changeable that there is still the potential for competitive advantage in running one's own infrastructure? As a corollary, is the variety of corporate applications so profound, and are the differences in their architecture, hardware requirements, and so on so important, that they preclude moving to a standardised environment, a la Microsoft's San Antonio data center?

It's clear that the way data centers are run is moving away from manual processes toward automation. As the earlier piece discussing cloud data center cost structures noted, the automation of these centers is a key reason their cost structure is so much lower than that of standard data centers. For an internal data center to remain competitive from a cost benchmark perspective, it must attain those same automation capabilities. Much of the discussion about "internal clouds" posits that companies can match the agility, automation, and economies of scale that external cloud providers achieve.
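To make "automation capabilities" a little more concrete: in an automated data center, provisioning a server stops being a ticket-and-approval workflow measured in days and becomes a programmatic request measured in minutes. The Python sketch below shows the shape of that interaction against a hypothetical internal provisioning service; the endpoint, field names, and functions (provision_vm, wait_until_running) are illustrative placeholders, not any particular vendor's API.

# A minimal sketch of provisioning-as-an-API-call rather than a manual workflow.
# Everything here is hypothetical: the endpoint, payload fields, and response
# format stand in for whatever an internal provisioning service might expose.
import json
import time
import urllib.request

PROVISIONING_API = "https://provisioning.example.internal/api/v1"  # hypothetical endpoint

def provision_vm(profile, cpus, memory_gb):
    """Request a virtual machine and return its ID -- a single HTTP call."""
    payload = json.dumps({"profile": profile, "cpus": cpus, "memory_gb": memory_gb}).encode()
    request = urllib.request.Request(
        PROVISIONING_API + "/vms",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["vm_id"]

def wait_until_running(vm_id, poll_seconds=10):
    """Poll the service until the virtual machine reports that it is running."""
    while True:
        with urllib.request.urlopen(PROVISIONING_API + "/vms/" + vm_id) as response:
            if json.load(response)["state"] == "running":
                return
        time.sleep(poll_seconds)

vm_id = provision_vm(profile="web-tier", cpus=2, memory_gb=4)
wait_until_running(vm_id)
print("VM", vm_id, "is running")

The point is not the particular calls but the operating model: when capacity can be requested, tracked, and reclaimed through software, the labour cost per server falls dramatically, which is precisely the advantage the cloud providers enjoy today.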

I sometimes hear advocates of internal clouds say that they want to provide a way for companies to leverage their existing infrastructure, but make it cloud-capable by layering some additional hardware and software on top. It's an attractive argument, but is it attainable, or does it merely attempt to justify remaining committed to the sunk cost represented by current data centers? More specifically, can real cloud capability be achieved with the current infrastructure, representing as it does the variety of hardware and software purchased to implement specific application initiatives? My sense is that much of the existing infrastructure cannot be moved into an automated environment, and a significant part of it will need to be scrapped or replaced with new automation-friendly technology.

To draw another historical analogy, a complement to the one I drew at the end of the cloud data center cost posting alluded to above, consider how mass production evolved. When Henry Ford finally realised he needed to replace the Model T with a newer design, he found that his highly automated factory lacked flexibility: it was designed for one thing, making Model Ts. Before he could begin manufacture of the Model A, it took an enormous redesign (lasting 18 months) and required replacing 40% of the machine tools in the factory, which were unable to manufacture anything but the T. One might say he was "locked in" to manufacturing Model Ts. Ever since then, auto factories have been designed to be much more flexible, enabling general automation no matter what specific type of car is being built.

Today's CIOs are likely to confront the same issue: if they want to move to a fully agile, fully flexible infrastructure, they'll need to undertake a general redesign of systems and processes. Put bluntly, it will require significant investment to achieve "internal cloud" automation. Should they do so, that might put them on a level playing field, cost-wise, with commercial cloud providers. But it still won't answer the question of whether "self-generation" of IT infrastructure is worthwhile when public options are available. Or, given the ever-increasing cost and complexity of infrastructure, does it make better sense to focus on applications (which, remember, is where IT capability offers support for competitive advantage) and let someone else deal with the plumbing?

Bernard Golden is CEO of consulting firm HyperStratus, which specialises in virtualisation, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualisation to date.