How to track cost allocation for cloud apps

Some organisations fail to grasp the implications of this costing model

One of the most interesting aspects of cloud computing is the way it changes cost allocation over the lifetime of an application. Many people understand that pay-as-you-go is an attractive cost model, but fail to understand the implications the new model imposes on IT organisations.

The pay-as-you-go model addresses several obvious and painful limitations of the previous model, which was based on asset purchase: prior to application deployment, a significant capital investment had to be made to purchase computing equipment (servers, switches, storage, and so on).

The shortcomings of the asset purchase approach

  • It requires a large capital investment, which displaces other investment that the organisation might make (that is, it forces a tradeoff between this application and other, potentially useful capital investments like new offices, factories, and so on).

  • The capital investment must be made before it is clear how much computing resource the application will actually need in operation: the application may see far more use than forecast, leaving too little equipment, or it may be used far less than forecast, wasting some or much of the investment.

  • Requiring a large investment upfront makes organisations more conservative, not wanting to invest in applications that may not be adopted; this has the inevitable effect of hindering innovation, as innovative applications are by definition difficult to forecast and therefore more likely to result in poor adoption.

However, there is one big advantage of this approach: once the investment is made, the financial decision is over. Assuming the application obtains the necessary capital, no further financial commitment will be needed.

Of course, this led to utilisation issues, as applications commonly used only single-digit percentages of the computing resource assigned to them, but there were no ongoing bills or invoices for the application resources.

Many people are excited about cloud computing because it uses a different cost allocation model over the lifetime of an application. Instead of a large upfront payment, you pay throughout the lifetime of the application; moreover, you have to pay only for actual resource used, thereby avoiding the underutilised capital investment situation typical of the previous approach.
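
A rough back-of-the-envelope comparison shows why this matters financially. The sketch below uses entirely invented figures (purchase price, capacity, utilisation, hourly rate) purely to illustrate the mechanics:

    # A back-of-the-envelope comparison of the two cost models.
    # All figures here are hypothetical, chosen only to illustrate the point.
    CAPEX = 120_000            # upfront hardware purchase (US$)
    LIFETIME_MONTHS = 36       # depreciation period for the equipment
    CAPACITY_HOURS = 20_000    # instance-hours the hardware can deliver per month
    UTILISATION = 0.10         # share of that capacity the application actually uses

    PAYG_RATE = 0.50           # assumed pay-as-you-go price per instance-hour (US$)

    used_hours = CAPACITY_HOURS * UTILISATION
    capex_monthly = CAPEX / LIFETIME_MONTHS
    capex_per_used_hour = capex_monthly / used_hours   # what each useful hour really costs
    payg_monthly = PAYG_RATE * used_hours              # pay only for the hours consumed

    print(f"Asset purchase: ${capex_monthly:,.0f}/month, ${capex_per_used_hour:.2f} per used hour")
    print(f"Pay-as-you-go:  ${payg_monthly:,.0f}/month, ${PAYG_RATE:.2f} per used hour")

At these invented numbers, the purchased hardware costs more than three times as much per hour of useful work, because the idle 90 percent of capacity has already been paid for.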

The advantages of the cloud computing approach

  • Little investment is required upfront. This means that cloud-based applications can be pursued without worrying about whether other, useful capital investment will be displaced by the decision.

  • This approach fosters innovation. Because little investment is at risk, innovative applications can be rolled out with less concern about predicting outcomes. If the application is successful, more resources can be easily added without requiring more investment; if the application is poorly adopted, it can be terminated and the resources returned to the cloud provider, with no ongoing payment needed.

  • It can enhance agility, because no lengthy capital investment decision processes are needed prior to beginning work. The cliché is that all that's necessary to get started is a credit card, and within 10 minutes you're up and running. Anyone who has suffered through a capital investment decision process knows how long and miserable an experience it can be. Certainly the 10-minute approach is extremely attractive.

The challenges of the pay-as-you-go approach

The first one is obvious: instead of a one-time payment, users receive a monthly invoice or credit card charge. Every month there is a reminder that there is a cost associated with running the application. The meter is always running.

The costs are unpredictable. I was talking to the CIO of a large media company and he said that his organisation loved the ease of access to resources that Amazon Web Services (AWS) makes possible, but one of his projects experienced this: the first month of working on the application was great -- immediate access and only US$400 of cost; the second month, however, the fee came to US$10,000. He noted that his firm could afford the US$10,000, but wanted to understand what caused such a dramatic change in cost.

Low resource utilisation imposes ongoing wasted costs. While poor utilisation in the previous model was evidence of inefficiency, at least there was no ongoing outflow of money. When cloud resources are not actually used but continue running, a bill arrives every month for little productive work. It's like running your air conditioner with the front door wide open. And the poor habits of previous computing regimes continue in the new world of cloud computing.

At the recent Cloud Connect conference, a new cost tracking service called Cloudyn noted that its research shows AWS resources commonly run at just 17 percent utilisation (the conference presentation is available on SlideShare).

That's a lot of wasted money. Add in the likelihood that these resources are not being tracked -- instances are often spun up and then forgotten about -- and organisations could easily experience months or years of extra costs.
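
The scale of that waste is easy to put in dollars. A minimal sketch, assuming a hypothetical US$10,000 monthly bill and the 17 percent utilisation figure above:

    # Hypothetical monthly bill; the utilisation figure is from the Cloudyn research above.
    monthly_bill = 10_000    # US$ paid for running instances each month
    utilisation = 0.17       # average share of paid-for capacity doing useful work

    useful_spend = monthly_bill * utilisation
    idle_spend = monthly_bill - useful_spend
    print(f"Useful work: ${useful_spend:,.0f}/month; idle capacity: ${idle_spend:,.0f}/month")

At that rate, $8,300 of every $10,000 pays for capacity that sits idle, month after month.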

What is the right approach?

So what is the right approach for IT organisations to realise the benefits of cloud computing, but avoid the unfortunate cost effects outlined above?

The five critical items you need to pay attention to are:

1. Design: Your application must be designed so that the appropriate level of resources can be assigned and used. Think of this as a "just-in-time computing resource." This implies that the application must be designed as a collection of small, finely grained resources that can be added or subtracted as application load dictates.

Instead of one very large instance, the right design approach is to use multiple smaller instances that can grow or shrink in number as appropriate. Of course, this requires the application be facile with respect to adding or subtracting resources while in operation.

There are implications for state and session management, load balancing, and application monitoring and management, and these must be taken into account to ensure the application can respond to changing workloads.
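
As a minimal sketch of this design principle -- and only a sketch, with invented capacity figures and thresholds -- the sizing logic might look like the following; a real deployment would typically delegate this decision to the cloud provider's auto-scaling service:

    import math

    # "Just-in-time computing resource": many small instances whose count tracks load.
    # Every constant here is an assumption chosen for illustration.
    CAPACITY_PER_INSTANCE = 250   # requests/sec one small instance can serve (assumed)
    MIN_INSTANCES = 2             # floor, kept for redundancy
    MAX_INSTANCES = 40            # ceiling, to cap spend during a traffic spike
    HEADROOM = 0.25               # run with 25% spare capacity to absorb bursts

    def desired_instance_count(load_rps: float) -> int:
        """Return how many small instances the current load justifies."""
        needed = load_rps * (1 + HEADROOM) / CAPACITY_PER_INSTANCE
        return max(MIN_INSTANCES, min(MAX_INSTANCES, math.ceil(needed)))

    # As load moves, so does the fleet -- and so does the bill:
    for load in (100, 1_500, 6_000, 400):
        print(f"{load:>6} req/s -> {desired_instance_count(load)} instances")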

2. Operations: Monitor utilisation and terminate unneeded resources. As previously mentioned, many bad habits from the previous, upfront-investment approach persist among cloud computing users. Probably the worst is starting resources and never shutting them off -- or, indeed, never monitoring them to determine whether they're being used at all.

In the pay-as-you-go world, every unused or underused resource is a hole down which you're pouring money. Your financial tracking needs to be married to operational tracking in which developers and system administrators constantly monitor resources to evaluate usage levels and potential design optimisations that reduce cost while maintaining operational efficiency and required performance levels.
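
As one illustration of what that operational tracking can look like, here is a sketch using the AWS SDK for Python (boto3) that flags running EC2 instances whose average CPU over the past week suggests they're idle. The 5 percent threshold and the region are assumptions for the example, and the decision to stop or terminate anything should of course stay with a human:

    import boto3
    from datetime import datetime, timedelta, timezone

    # Flag running EC2 instances whose average CPU over the past week is suspiciously low.
    # Threshold and region are assumptions for this example, not recommendations.
    REGION = "us-east-1"
    IDLE_CPU_THRESHOLD = 5.0        # average CPU % below which an instance looks idle
    LOOKBACK = timedelta(days=7)

    ec2 = boto3.client("ec2", region_name=REGION)
    cloudwatch = boto3.client("cloudwatch", region_name=REGION)
    now = datetime.now(timezone.utc)

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=now - LOOKBACK,
                EndTime=now,
                Period=3600,            # hourly datapoints
                Statistics=["Average"],
            )["Datapoints"]
            if not datapoints:
                continue
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < IDLE_CPU_THRESHOLD:
                print(f"{instance_id}: {avg_cpu:.1f}% average CPU -- candidate to stop or terminate")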

3. Finance: Evaluate total spend. I've worked with a number of companies that have many, many AWS accounts and don't realise that by centralising the spend, they would achieve greater discounts. While in some companies that decentralised approach is deliberate (aka "shadow IT"), everyone benefits from lower prices, so it makes sense to move to a collective bill.
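
A toy example makes the discount mechanics concrete. Assume, purely hypothetically, a volume-tiered price schedule: three accounts billed separately each sit in the expensive tiers, while a single consolidated bill reaches the cheaper ones. The tier boundaries and rates below are invented, not AWS prices:

    # Hypothetical volume tiers illustrating why pooling accounts lowers the bill.
    TIERS = [                      # (usage up to this ceiling, price per unit in US$)
        (10_000, 0.10),
        (50_000, 0.08),
        (float("inf"), 0.05),
    ]

    def tiered_cost(units: float) -> float:
        """Price a usage volume across the tiers, cheaper rates applying last."""
        cost, floor = 0.0, 0.0
        for ceiling, rate in TIERS:
            band = min(units, ceiling) - floor
            if band <= 0:
                break
            cost += band * rate
            floor = ceiling
        return cost

    accounts = [8_000, 12_000, 30_000]   # monthly usage of three separate accounts
    separate = sum(tiered_cost(u) for u in accounts)
    combined = tiered_cost(sum(accounts))
    print(f"Billed separately: ${separate:,.0f}; as one consolidated bill: ${combined:,.0f}")

With these invented tiers, the same total usage costs $4,560 across separate accounts but $4,200 on one consolidated bill.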

4. Procurement: Negotiate pricing. While AWS posts its prices, if there is sufficient spend, it will demonstrate flexibility. Certainly every other cloud service provider (CSP) out there is very flexible on pricing, especially in a situation in which the account would be moving from AWS. Of course, it's critical to ensure other critical elements of the application, like availability and security, can be achieved in another cloud environment.

Also, the application design must be such that it can be transferred from one cloud environment to another. Even if one is "locked in" via application requirements or design, it's amazing just what pricing flexibility can be generated by even the threat of potential provider switching.

5. Management: Recognise that cloud computing is a new operation mode and cost tracking and application utilisation monitoring are critical IT skills. Set up a group that examines ongoing financial performance to ensure maximum cost/benefit outcomes.

Don't staff the group with only finance people, either. Technical skills are required as well to enable a full 360-degree evaluation of application financial and technical performance. Above all, realise that IT is now in the service provider business, and service providers pay attention to operational costs all the time.

Bernard Golden is CEO of consulting firm HyperStratus, which specialises in virtualisation, cloud computing and related issues. He is also the author of "Virtualization for Dummies," the best-selling book on virtualisation to date.

Follow Bernard Golden on Twitter @bernardgolden. Follow everything from CIO.com on Twitter @CIOonline
