Cloud Computing: IT Operations Changes Are Mandatory

Structured data storage requirements will double while unstructured data storage requirements will increase seven-fold

Just before the holidays I had a really interesting conversation with my friend Bill Takacs, who works at Gear6, a company that offers memcached appliances. These are used in applications whose data loads are so high that a database cannot serve as the primary means of data access. He shared with me a common pattern he sees in companies that are heavy users of memcached, which, after some thought, I concluded offers a vision of the future of cloud computing operations.

What he said is that they are seeing companies put together applications that appear to be standard web apps but are in fact something far more complex. Rather than a web page being built by accessing a data source and then displayed, these companies' applications construct web pages on the fly from a number of different mini-applications--widgets, if you will--custom-assembled per user based upon the user's history, immediate interactions, and common patterns of usage discerned by analyzing aggregated user interactions. Most of these widgets are, in themselves, heavily loaded, memcached-enabled applications.

In other words, a web page is built from a portfolio of high-volume applications, some proportion of which are assembled to create that individual web page. Bill uses the phrase "composed apps" to describe these constructed-on-the-fly applications. As you can imagine, constructing and operating these applications is complex, but they will represent an increasingly large percentage of future "enterprise" applications.
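To make the pattern concrete, here is a minimal sketch of page composition backed by memcached, assuming a local memcached instance and the pymemcache Python client. The widget names and the render stub are hypothetical illustrations, not anything Gear6 or Bill described.

```python
# A minimal sketch of a "composed app": the page is assembled per user from
# independently cached widget fragments rather than from a single database query.
# Widget names, cache keys, and render_widget are hypothetical stand-ins.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # memcached (or a memcached appliance) endpoint

WIDGETS = ["recent_activity", "recommendations", "trending_in_network"]

def render_widget(user_id, widget):
    # Stand-in for the real per-widget computation (user history, immediate
    # interactions, aggregate usage patterns, and so on).
    return f"<div class='{widget}'>widget content for user {user_id}</div>"

def fetch_widget(user_id, widget):
    """Return a cached HTML fragment for one widget, recomputing on a miss."""
    key = f"{widget}:{user_id}"
    fragment = cache.get(key)
    if fragment is None:
        # Each widget is itself a heavily loaded mini-application; on a cache
        # miss we recompute and repopulate memcached.
        fragment = render_widget(user_id, widget).encode("utf-8")
        cache.set(key, fragment, expire=60)
    return fragment.decode("utf-8")

def compose_page(user_id):
    """Build the page on the fly from whichever widgets apply to this user."""
    fragments = [fetch_widget(user_id, w) for w in WIDGETS]
    return "<html><body>" + "\n".join(fragments) + "</body></html>"

if __name__ == "__main__":
    print(compose_page(user_id=42))
```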

Huge Data is the Future

One of the things we discuss in our cloud computing presentations is that the scale of data is exploding. According to a study IDC did last year on enterprise storage needs, structured data (the traditional row-and-column information contained in relational databases) will grow at a 20+% compounded rate over the next five years.

However, unstructured data will grow at a 60% compounded rate during the same timeframe. This results in structured data storage requirements doubling, while unstructured data storage requirements will increase seven-fold. Seven-fold! In other words, application scale is increasing--dramatically so.

At large scale, variations in system load that, in traditional, smaller applications, would have been absorbed by the unutilized capacity of a single server become major swings in resource needs--to the point where load variation can require dynamically adding (and subtracting!) virtual machines.

Moreover, this variability is going to be common--even the norm--in the future, so the ability to respond to dynamic app load by rapidly altering application topology will be a fundamental IT skill. More to the point, the demand for dynamic scaling will outstrip the established practices of most IT organizations, based as they are on stable application environments and occasional topology modification through manual intervention by sys admins.
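As an illustration of what responding to dynamic app load can look like, here is a minimal sketch of a threshold-based autoscaling loop. The CloudProvider class, its methods, and the load metric are hypothetical stand-ins for a real provisioning API and monitoring feed, not any particular vendor's interface.

```python
# A minimal sketch of threshold-based dynamic scaling: capacity is added and
# released automatically as load moves, rather than through occasional manual
# intervention by sys admins. All names here are illustrative.
import random
import time

class CloudProvider:
    """Hypothetical provisioning interface (a stand-in for a real cloud API)."""
    def __init__(self):
        self.instances = 2  # start with a small, fixed topology

    def add_instances(self, n):
        self.instances += n
        print(f"provisioned {n} instance(s); now running {self.instances}")

    def remove_instances(self, n):
        self.instances = max(1, self.instances - n)
        print(f"released {n} instance(s); now running {self.instances}")

def current_load_per_instance(provider):
    # Stand-in for a real metric feed (requests/sec, CPU utilization, queue depth).
    return random.uniform(0.2, 1.4)

def autoscale(provider, high=0.8, low=0.3, step=2, interval=1, cycles=5):
    """Add capacity when per-instance load is high, release it when load falls."""
    for _ in range(cycles):
        load = current_load_per_instance(provider)
        if load > high:
            provider.add_instances(step)
        elif load < low and provider.instances > 1:
            provider.remove_instances(step)
        time.sleep(interval)

if __name__ == "__main__":
    autoscale(CloudProvider())
```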

A different way to put this is that, as scale grows, the standard deviation of application workload relative to typical resource allocations increases dramatically. As an analogy, if the local restaurant experiences a short-term 10% growth in demand, it can typically respond by ordering a few more foodstuffs from the local restaurant supply company.

If McDonald's experiences a short-term bump in demand, accommodating it has repercussions throughout an extended supply chain. At large scale, a change in demand can't be met by throwing a little more memory into a machine or sticking another server in the rack. It requires adding tens or hundreds of systems and terabytes of storage. And when demand shifts to the other side of the average load, the large standard deviation necessitates releasing just as many servers or just as much storage.

Dynamic vs. Orchestrated

Obviously, the scenario I've just laid out is what cloud computing is designed for. The UC Berkeley RAD Lab Cloud Computing report identifies "illusion of infinite scalability" and "no long-term commitment" as key characteristics of cloud computing, which address the challenges outlined in the previous section of this post.

However, there is a difference between having a characteristic and being able to efficiently take advantage of that characteristic.

One of the capabilities many vendors tout with regard to their cloud management offerings is "orchestration." By this, they mean the ability to define a desired set of compute capacity in a single transaction, with the underlying infrastructure (i.e., the orchestration software) obtaining the necessary individual resources that, combined, make up that capacity. And there's no question that orchestration is useful, even necessary, as far as it goes. However, it goes only part of the way to addressing the future application management needs of cloud computing.
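To make the distinction concrete, here is a minimal sketch of orchestration in the sense described above: a desired set of compute capacity declared once and acquired as a unit. The spec format and the provision() helper are hypothetical; they simply illustrate the single-transaction shape of the request, and--unlike the autoscaling loop sketched earlier--nothing here responds to load after the initial acquisition.

```python
# A minimal sketch of "orchestration": the desired capacity is declared in a
# single request, and the orchestration layer obtains the individual resources
# that, combined, make up that capacity. The spec and provision() are illustrative.

DESIRED_CAPACITY = {
    "web_tier":   {"instance_type": "small",  "count": 8},
    "cache_tier": {"instance_type": "medium", "count": 4},
    "db_tier":    {"instance_type": "large",  "count": 2},
}

def provision(tier, instance_type, count):
    # Stand-in for the underlying infrastructure calls the orchestrator makes.
    return [f"{tier}-{instance_type}-{i}" for i in range(count)]

def orchestrate(spec):
    """Turn one declarative request into the full set of concrete resources."""
    resources = {}
    for tier, req in spec.items():
        resources[tier] = provision(tier, req["instance_type"], req["count"])
    return resources

if __name__ == "__main__":
    for tier, instances in orchestrate(DESIRED_CAPACITY).items():
        print(tier, instances)
```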
