Are We Happy Yet?

Even when an outsourcing project goes as planned, many CIOs still feel like they're not getting everything they paid for.

Everyone has an outsourcing horror story. And whether it is your own or that of a friend of a friend of a colleague, such stories make you wonder whether the course of any outsourcing project ever runs smoothly. You would be right to wonder.

Opinion varies, but estimates suggest that anywhere from 30 to 90 percent of outsourcing relationships are problematic. As for the issues, you have probably heard the war stories: constant downtime, unavailable help desks, soaring add-on costs and so on, ad nauseam.

But what happens when things seem to go as planned or, at least, as contracted? Are we happy then? The answer is still no.

Despite compliance with the most stringent service level agreements (SLAs) and hard-nosed key performance indicators, there often remains the nagging doubt that you're not getting everything you should - or at least wanted - out of the outsourcing relationship.

A PA Consulting survey last year revealed what it described as a paradox: "companies are satisfied with their outsourcing suppliers' performance, yet there is a feeling that outsourcing is failing to deliver anticipated business benefits".

Jim Longwood, a research director at Gartner, predicts that "50 percent of CXOs will feel that their outsourcing deal hasn't achieved the success/performance they feel it should, while 60 percent of CIOs are happy".

You can dispute the figures, but the fact that PA and Gartner can measure such "feelings" is a sign of things to come.

All of the industry analysts agree that outsourcing is broadening its range from the nuts and bolts of utility or infrastructure practice to more innovation and transformation activities. (The fact that this is where the suppliers' greatest profit margins lie might be purely coincidental. Suggest to most CIOs that utility deals are on the way out, however, and you might hear otherwise.)

Presuming this shift is real, the relationships, the outcomes and the metrics used to judge success by necessity become increasingly intangible. How do you measure innovation? By the number of new products or processes implemented, or by shifts in market share? How do you measure transformation? By the number of people replaced, or new markets entered?

Welcome to the age of measuring the unmeasurable.

Actually that is a bit unfair.

While measuring intangibles - such as feelings, happiness and comfort levels - might be akin to juggling jelly, there are valiant efforts to do just that. In the age of transformational relationships, this becomes even more pressing, especially when these measures impact on supplier remuneration.

Tony Clasquin, CIO of Colonial First State, expresses a long-term interest in the subject of intangible measures, but admits he's unaware of any formal industry process. "Most of my SLAs are focused on behaviour rather than performance," he says. "The way you set an SLA is to first define what is the behaviour you want. Once you have the target behaviour clear in your head, then - and only then - do you set the SLA with a view to drive the behaviour. The SLA also needs to try and motivate the service provider to perform, rather than punishing non-performance.

"You can achieve significant improvement in performance by setting competing targets - better, faster, cheaper. These look mutually exclusive, but if you put pressure on all three at the same time, you encourage breakthrough innovation," Clasquin says. "I also find that there is an extraordinary power in just asking for reports. These might not be attached to any SLA, but few people give you a report that highlights a problem without doing something about it themselves. Again, this drives the right behaviour."

Another CIO listed "proactive value add, communication, trust, commitment to work with us to change, and probably 10 or a dozen sort of phrases and words that are values of a working relationship - a really good working relationship" as behavioural intangibles that could be measured.

Andrew Switala of Hewlett-Packard Managed Services provides a hit list of possible intangible measures that can be broken up into management satisfaction and end-user satisfaction measures (see the sidebar "Measuring the Warm Fuzzies"). "Generally, management is focused on the bottom line and anything that will impact it. End users are focused on how the delivery of services will hinder or enhance their ability to do their day-to-day jobs. Interestingly, 'wow factors' seem to be of greater interest to 'tech heads' in vendor organizations than customers," he says.

Satisfaction Surveys

Notwithstanding the doubts some CIOs expressed about the existence of satisfaction metrics, one supplier that has surveyed intangibles for several years, and claims to have developed a working and effective model, is EDS.

"We instituted our service excellence client dashboard in 1999," says Catherine O'Gorman, EDS's Asia Pacific service excellence director.

The dashboard, a sort of balanced scorecard, comprises an online client survey using seven key and 13 more specific questions, which are designed to elicit responses along the lines of excellent, good, average, fair or poor. EDS regards average as a negative response. "If you're sitting on the fence," O'Gorman says, "you're not doing well."

Any number of participants can take part in the survey, and surveys can be instigated by clients at any time, although EDS recommends at least once a year. The minimum number of responses is three, although some clients have as many as 20 or 25 respondents. EDS asks that at least one respondent be a CXO. O'Gorman says that, allowing for individual respondents' characters, there is generally little disparity in the responses within a client survey.

The survey questions, which are common to all clients, are reviewed every year. The current seven key questions cover: overall response, references, renewability, value, competitive advantage (that is, EDS versus its competitors), innovation and thought leadership. A typical question runs: "Innovation is the successful introduction of new ideas. The extent to which EDS provides innovation that adds value to your business . . . "

While EDS is loath to give too much detail about its proprietary system, the other questions cover such areas as: "doing things in a timely manner, do we listen to them [clients], do we act upon their input, are we collaborative, do we collaborate with other suppliers, EDS as a long-term partner, and so on".

Negative responses raise alarm bells, prompting follow-up interviews and discussions. They also affect EDS employees' bonuses: as the at-risk component is about 20 percent of total remuneration, this can amount to a significant punitive measure. The loss also hits the entire EDS client team, encouraging a sort of EVA-style team effort in ensuring good results.
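
To make the mechanics concrete, here is a minimal sketch of how such a dashboard scorer might work. The five-point scale, the "average counts as negative" rule and the 20 percent at-risk figure come from the article; the function names, data shapes and aggregation logic are illustrative assumptions, not EDS's actual system.

```python
# Hypothetical sketch of a dashboard-style survey scorer. The five-point
# scale and the "average is negative" rule come from the article; the
# names, data shapes and aggregation are illustrative assumptions.

NEGATIVE = {"poor", "fair", "average"}  # sitting on the fence counts against you

def negative_rate(responses):
    """Fraction of responses the dashboard would treat as negative."""
    flagged = [r for r in responses if r in NEGATIVE]
    return len(flagged) / len(responses)

def pay_at_risk(total_remuneration, at_risk_share=0.20):
    """At-risk component of pay tied to survey outcomes (about 20 percent)."""
    return total_remuneration * at_risk_share

# Example: one client survey with five respondents (the minimum is three).
survey = ["good", "excellent", "average", "good", "fair"]
print(f"Negative responses: {negative_rate(survey):.0%}")  # 40% -> follow-up
print(f"Pay at risk: ${pay_at_risk(100_000):,.0f}")        # $20,000 on $100,000
```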

EDS is looking at revising its remuneration scheme - "making it more sophisticated", O'Gorman says - as well as the operations of the dashboard itself. While the latter is largely a look-and-feel exercise that will leave the operating principles largely intact, when launched (it was slated for June) it will provide greater detail and some improved functionality, especially for internal EDS use.

So Who Is Happy?

The big question is: if the metrics exist and they are valid, who is using them?

Let's be frank. Many of the CIOs we spoke to for this article had either never heard of satisfaction metrics, or were not even outsourcing at the moment, and therefore ruled themselves out of any comment. Even the vendors had trouble coming up with working case studies.

The CIO of a large state-based public sector agency said that satisfaction metrics were considered during deliberations over renegotiated outsourcing contracts, with an eye to their playing a part in supplier remuneration. However, the agency had trouble finding a working model, and the idea was ultimately scuttled.

EDS claims to have 1500 clients across the world using its dashboard as standard practice. But whatever the number, the question remains whether intangible "happiness" metrics are a decisive measure of outsourcing relationships.

Glenn Hickey, CIO of Zurich Financial Services, says "intangible measures are vital when looking at outsourcing". He says that protecting his company's intellectual capital is paramount to the organization, so trust and reliability are important measures of a potential outsourcing partner. But then he adds a twist. "We don't do a lot of outsourcing, primarily because we have difficulty finding a cost-effective solution . . . The cost benefit is hard to find."

Hickey suggests that outsourcers are finding the market very difficult at the moment. "Most IT departments are run as a very tight ship, and there's little room for profit for outsourcers."

Which might be why vendors emphasize the intangible benefits of outsourcing, the value-add, the relationship-builders. Gartner suggests that cost is not even in the top five reasons for outsourcing.

Hickey disagrees. "People are outsourcing based on cost," he says.

Getting a Handle on Intangibles

There is a view, often held by CFOs and others used to dealing with numbers and calculations on a regular basis, that if it cannot be measured it is not worth considering. And if it is intangible, then by nature it cannot be measured. This means that whatever the system and its heritage, any measure that smacks of warm and fuzzy qualities is akin to guesswork, and should be ignored.

Jack Keen, founder and president of IT consultancy The Deciding Factor, in a column published in this magazine last November ("Wanted: Metric Sceptics"), criticizes this very attitude and "the misuse of numbers . . . when management refuses to accept guesstimates (informed estimates) as legitimate metrics.

"This problem occurs whenever someone says: 'If you can't prove it, we won't use it.' In this mistaken world view, only hard metrics from factual situations are valid. The clear message is that guesses don't count.

"The reality is that informed guesses are what makes the world work. If business investment strategies were 100 percent fact-based, we wouldn't need high-price executives to guide the company. Computers could probably do the job via fancy calculations. The whole discipline of risk management is ultimately grounded in probabilities, which themselves are estimates. The medical profession begins and ends with the reliability of doctor diagnoses, none of which are 100 percent correct. But they get enough right that we don't ban their profession. A similar situation exists with our economy's reliance on meteorologists. They don't hit the mark 100 percent of the time. But their track record is good enough for planes to fly safely through troubling storms, not to mention properly attiring our families for school and work.

"The key to accepting informed guesses as valid metrics is to make sure they: (a) come from knowledgeable, experienced and dependable people; and (b) are accompanied by clear explanations of the guesstimate's premises, assumptions and logic."

In an article published last year on the Technology Evaluation Web site ("If software is a commodity . . . then what?"), Olin Thompson, a principal of US-based Process ERP Partners, talks about the criteria involved in selecting a software partner when most software is commoditized, and his views apply as much to outsourcers in an often commoditized environment as they do to software.

"[In a mature market and less so in a newer market] we find that most products yield the same results and the differences are in the 'How' they do it instead of the 'What' they do. The relatively small percent of features that are different (usually less than 20 percent) are often in the 'nice to have' category, or are not features that most end users need."

Although function is still important because that is what develops the value, Thompson says that "Function becomes less important in a commodity category . . . The non-product issues become the decision criteria".

This leads him through a discussion of services offered, followed by tangible differentiators such as speed of implementation, internal effort required, TCO, flexibility, scalability, ease and speed of modification, and availability of other modules that may be needed in the future. He then moves on to the intangibles, which include people, trust, vendor stability, vendor credibility, vendor focus, vendor knowledge, risk and end-user company politics.

Some might suggest that even his tangibles are verging on the warm and fuzzy. Flexibility might very well be in the eye of the beholder.

Longwood says that, in the current outsourcing evolution, "it is important for both parties to invest in the relationship", and while admitting that there is a lot of hype about relationships ("90 percent codswallop"), he says they are critical to a collaborative working environment.

That being the case, gauging the success of the relationship via some meaningful and understandable metrics becomes more important, and not just to the IT department.

"Customer satisfaction is the best measure of happiness. How happy is the sponsor indicates the business value of IT; how happy is the CFO indicates value for money," Longwood says.

"Best practice is to do customer satisfaction surveys on a regular basis, covering both the operations and the management level," he says. And this means undertaking such surveys before the outsourcing begins, as well as during, to give a working benchmark on the success or failure of the relationship.

SIDEBAR: Measuring the Warm Fuzzies

If you're looking for some intangible satisfaction measures, these are worth a try.

Management Satisfaction

Invoice accuracy: Customers want to have confidence in their supplier. Many contracts are based on variable volume pricing and the customer relies on the supplier to maintain good records on the variations to volumes of service consumed and to accurately reflect those in the invoices as they are presented. If there is a history of getting this wrong, it places a huge impost on the customer to check accuracy. The ability of the customer to check such accuracy is limited and they begrudge having to check in detail.

Satisfaction Indicator: A measure is the number of disputed amounts in a reporting period.
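
As a hypothetical illustration of tracking that indicator, the sketch below counts disputed amounts per reporting period; the data model and field names are invented for the example.

```python
# Hypothetical tracker for the invoice-accuracy indicator: count disputed
# amounts per reporting period. Field names and periods are invented.
from collections import Counter
from dataclasses import dataclass

@dataclass
class InvoiceLine:
    period: str       # reporting period, e.g. "Q1"
    amount: float     # invoiced amount
    disputed: bool    # flagged by the customer

def disputes_per_period(lines):
    """Number of disputed amounts in each reporting period."""
    return dict(Counter(line.period for line in lines if line.disputed))

lines = [
    InvoiceLine("Q1", 12_000.0, False),
    InvoiceLine("Q1", 3_500.0, True),
    InvoiceLine("Q2", 9_800.0, True),
]
print(disputes_per_period(lines))  # {'Q1': 1, 'Q2': 1}
```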

Year-on-year price performance (read "reduction"): Real and tangible decreases in costs for like services and service volumes are pretty much standard these days. This is achieved through a range of measures including reduced frequency of performing tasks, consolidation of technologies to reduce hardware populations and their associated costs of ownership, labour saving through better integration of technologies or automation of tasks, and price performance improvement for hardware and software.

Satisfaction Indicator: A measure is the number of initiatives proposed by the vendor that result in actual cost reductions.
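
On the price-performance side, the year-on-year check itself is simple arithmetic. The sketch below compares like-for-like unit prices across two years; the services and prices are invented for illustration.

```python
# Hypothetical year-on-year price-performance check for like services and
# volumes. Service names and unit prices are invented for illustration.

def yoy_change(prev_price, curr_price):
    """Percentage change in unit price; negative means a reduction."""
    return (curr_price - prev_price) / prev_price * 100

# (last year's unit price, this year's unit price)
unit_prices = {
    "desktop support": (95.0, 88.0),
    "server management": (420.0, 399.0),
}
for service, (prev, curr) in unit_prices.items():
    print(f"{service}: {yoy_change(prev, curr):+.1f}%")
# desktop support: -7.4%
# server management: -5.0%
```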

Cost predictability: This refers to the level of detail and transparency in pricing models embedded within outsourcing contracts. Customers get very grumpy when they cannot work out how costs to them are related to the activities they perceive they have engaged in over an invoicing cycle. They also get grumpy when they are unable to assess the likely cost impact of business decisions such as acquisitions or divestitures, new projects arising from business change, and so on.

Satisfaction Indicator: This is a hard one to measure. Typically it is covered by the number of disputes/queries that are raised on invoices or the number of times the customer feels compelled to go to market to seek alternative quotes.

Thought leadership: This is a fuzzy one. Often in the passion of bidding, large claims are made about providing thought leadership. One demonstration of this leadership is the level of engagement the vendor has with the customer, particularly in strategy and standards setting. This obviously depends on the customer providing opportunities to engage (usually at no additional cost to the customer). Other areas include exclusive or special access to new developments within the vendor's research and development activities, tours of facilities, invitation or introduction to specialist forums focused on areas of interest to the customer, and sharing of business improvement strategies adopted by the vendor itself.

Satisfaction Indicator: Hard to quantify since benefits are in the eye of the beholder.

Technology innovation alignment with customer business objectives: This is all about making sure that the technology innovations implemented actually deliver improved business outcomes rather than just making the tech heads feel good about being at the cutting edge.

Satisfaction Indicator: This is also a difficult one to measure as most technology innovation relies on business practices/behaviours to deliver ultimate benefits to the bottom line. These are usually measured through agreed review processes up front and also in annual review processes as part of service planning for the following year.

Quality, comprehensiveness, comprehensibility and timeliness of reporting: This is about providing information that is useful to the customer on how well the services were delivered. By nature, SLAs measure technical performance (system availability, disk utilization, number of server outages, total calls answered within 30 seconds, and so on). Customers are looking to be able to make sense of those numbers in terms of impact on their business.

Satisfaction Indicator: Technical measures are translated into business impacts: an outage on a server supporting the system that manages dispatch of delivery trucks, for example, can be expressed as the average delay customers experience in receiving their deliveries. This is clearly a complex measure, but it gives the customer a real focus on service priorities and an opportunity to direct investment where the greatest benefit can be derived.
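
As a toy sketch of such a translation, the code below maps server outage minutes to an estimated delivery impact; the delivery rate, delay factor and linear backlog assumption are all invented for illustration.

```python
# Hypothetical translation of a technical measure (server outage minutes)
# into a business impact (deliveries affected, average delay). The delivery
# rate, delay factor and linear backlog model are invented assumptions.

def delivery_impact(outage_minutes, deliveries_per_hour=40, delay_factor=1.5):
    """Return (deliveries affected, average delay in minutes)."""
    affected = round(deliveries_per_hour * outage_minutes / 60)
    avg_delay = outage_minutes * delay_factor / 2  # backlog clears linearly
    return affected, avg_delay

affected, delay = delivery_impact(30)
print(f"{affected} deliveries delayed by ~{delay:.0f} minutes on average")
# 20 deliveries delayed by ~22 minutes on average
```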

End-User Satisfaction

Help desk call responsiveness: This is about how easy and fruitful an experience calling a help desk is for an end user. It includes such things as how long it takes for the call to be answered and whether or not the initial interaction allows the end user to get to the nub of their problem quickly (good systems will flash up the user's details by cross-referencing the incoming call number with a customer database). If IVR is used, how many steps must the caller go through before talking to a real person? And do the help desk systems allow for effective information pass-on, or does the end user have to explain their problem multiple times to multiple support staff as it is handed off?

Satisfaction Indicator: This is usually measured by questionnaires on user satisfaction that are randomly presented to users as they log on to systems.

Help desk knowledge (of customer business and customer technology platform): This is about whether or not support staff are sufficiently well briefed to be able to answer the majority of questions that might arise.

Satisfaction Indicator: Typically this is measured by the number of calls that are resolved at the first call.
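
A minimal sketch of that first-call-resolution measure, using invented call records, might look like this.

```python
# Hypothetical first-call-resolution (FCR) calculator for the help desk
# knowledge indicator. Call records and field names are invented.

def first_call_resolution(calls):
    """Fraction of calls resolved on the first contact."""
    resolved_first = sum(1 for c in calls if c["contacts"] == 1 and c["resolved"])
    return resolved_first / len(calls)

calls = [
    {"contacts": 1, "resolved": True},
    {"contacts": 3, "resolved": True},   # handed off twice
    {"contacts": 1, "resolved": True},
    {"contacts": 2, "resolved": False},
]
print(f"FCR: {first_call_resolution(calls):.0%}")  # FCR: 50%
```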

Speed to repair: This covers both physical repairs and problem solving. The correlation between the severity of the problem (in the customer's eyes) and the speed to repair is key. If the majority of problems are resolved in a short time frame, but the problems resolved include a handful of high-profile ones that were not fixed in a timely fashion, then the overall impression will be that performance was not up to scratch, even though the empirical measures met benchmarks. This requires good communication with end users and the ability to set expectations effectively.

Satisfaction Indicator: This measure is usually covered in customer satisfaction surveys.

Going the extra yard: This refers to doing the extraordinary and delighting the end user. However, last year's delight is this year's routine.

Satisfaction Indicator: Mechanisms for identifying and advertising these actions are key to demonstrating to customers that the value they obtain is over and above the strict obligations embodied in an outsourcing agreement.

Demeanour (how phones are answered, and the like): Front-line vendor staff cannot have bad days. Customers want to see that vendors establish work environments that are conducive to happy employees. While, as a rule, they are not prepared to pay a premium, they want vendors to share their strategies and plans for establishing good working environments for staff.

Satisfaction Indicator: This is usually assessed at annual reviews between the customer and the vendor.

Changes work the first time: This is related to the level of confidence and comfort that the customer has in the vendor. The knowledge that their IT is in safe hands is reinforced or threatened based on the number of times that work has to be repeated or repaired.

Satisfaction Indicator: This is usually measured by the number of instances in which these events occur. Other measures include delays experienced and the amount of variation in time/budget/scope that arises during the course of implementing a change.

Follow up: This relates to the timing, frequency and quality of follow-up with end users who interact with the vendor's front-line support staff.

Satisfaction Indicator: This is usually measured through customer satisfaction surveys.

Familiarity with and continuity of front-line support people: With outsourcing services increasingly being delivered using remote services models, end users place great value on being able to put a face to a name. Customers are seeking to institutionalize into contracts obligations for front-line staff (especially help desk staff) to visit customer sites and meet and mingle with end users. Vendors also see this as an advantage, but it represents a real cost in terms of the time lost to the activity.

Satisfaction Indicator: This is measured by counting the number of individual visits that may occur during a reporting period.

Source: Andrew Switala, Hewlett-Packard Managed Services