No demand for Windows 8 tablets? Or don't trust predictive surveys

Analyst firms often use surveys to make predictions sound more believable. However, there is generally no connection whatsoever between what a survey predicts and the eventual outcome. For example, according to the surveys the U.S. President should be Hillary Clinton, and the people most likely to win the Republican nomination were a bunch of folks who aren't even in the race anymore.

Steve Jobs wouldn't use focus groups because he believed, rightly, that people have no idea what they are likely to do in some future situation. Surveys can be good at explaining why someone did something, but they aren't so hot at predicting what folks will do.

The most recent annoying survey is the one predicting that Windows 8 tablets will fail because there is no demand for the product. I'm sure a similar survey taken a year before the iPhone or iPad launched would likely have concluded the same thing about those products.

Let's start out this year by talking about why you can't trust surveys.

My Experience With Surveys

When I first left IBM to join Dataquest, my goal was to create the most accurate predictive operating system forecast ever, drawing on my graduate work in marketing research, a number of years as a competitive analyst, and my time leading the spinout of the IBM software business. Over the years I'd purchased millions of dollars' worth of predictive research, and virtually all of it had been unreliable. This was because the typical projection was based largely on historical data or, for a new offering, on how much the key vendor was contributing to the related research service.

If something had been growing at 10 percent, it likely would be projected to continue growing at 10 percent and, not surprisingly, the most generous vendors would be predicted to succeed while the least generous were predicted to fail.
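To see why that kind of pure trend extrapolation misleads, consider a minimal sketch (Python, with entirely hypothetical numbers): the model happily projects smooth growth forever, no matter what disruption is about to hit the market.

    # Naive trend extrapolation -- the flaw behind many "predictive" forecasts.
    # All figures are hypothetical and for illustration only.
    def extrapolate(last_value, growth_rate, years):
        """Project future values by assuming the historical growth rate holds."""
        return [round(last_value * (1 + growth_rate) ** y, 1)
                for y in range(1, years + 1)]

    # A market that grew 10 percent last year is projected to grow
    # 10 percent forever, even if a competing product is about to
    # erase the category entirely.
    print(extrapolate(100.0, 0.10, 5))  # [110.0, 121.0, 133.1, 146.4, 161.1]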

Granted, some of the folks doing this had a lot of influence, so the predictions became self-fulfilling at times. But the real problem was that upcoming problems were often not reported, even when known, because no one wanted to make a paying vendor angry.

So in 1995 I published what turned out to be my most accurate five-year forecast; it also became the reason I eventually stopped trying to do highly accurate forecasts. It was built on a complex model that looked at historic trends, factored in known future events, and even took environmental changes into account, and, modestly speaking, it became the benchmark for accuracy, with one exception.
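The actual 1995 model isn't public, so purely as a sketch of that kind of layered structure (every number and factor below is an invented assumption), a forecast like this multiplies a baseline trend by adjustments for known events and environmental shifts:

    # Illustrative only: a trend forecast adjusted for known future events
    # and broad environmental change. All factors are hypothetical.
    def adjusted_forecast(base, trend, event_factors, environment_factor):
        """Baseline trend scaled per year by an event factor (e.g., a known
        competitive launch) and an environmental factor (e.g., shifting
        buying behavior)."""
        forecast, value = [], base
        for event_factor in event_factors:
            value *= (1 + trend) * event_factor * environment_factor
            forecast.append(round(value, 1))
        return forecast

    # A flat-trend platform facing a known competitive launch in year 2
    # (factor 0.8) and a steady environmental headwind (factor 0.95).
    print(adjusted_forecast(100.0, 0.0, [1.0, 0.8, 1.0, 1.0, 1.0], 0.95))
    # [95.0, 72.2, 68.6, 65.2, 61.9] -- and any assumption the model
    # lacks shows up later as a miss.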

It accurately predicted the decline of UNIX, MacOS, OS/2 and NetWare, but it was overly optimistic about Windows 95. That last prediction was due to a bad assumption; we'll get to that in a moment. On the other declines, though, it was not only accurate, it showcased the downside of negative predictions: suddenly a lot of executives seemed focused on making me an ex-analyst, and the Dataquest CEO wasn't exactly my best friend. (In fact, that entire first year I seemed to be constantly on the bleeding edge of getting fired; the only thing that saved me was that the forecast was getting a lot of press.)

One of the funniest episodes involved Novell, whose then-CMO told my boss that unless I changed the forecast she would cancel her subscription to our service. Upon being told that Novell had never subscribed, she sent over a check for $75K to subscribe, and I never did change the forecast.

As I mentioned above, where I missed, however, was on Windows 95 sales projections, and I missed by a lot. After Windows 95 shipped, Microsoft stopped demand generation, something we had not anticipated, and negative news about problems with the product filled the marketing gap; folks stopped buying. Sold-out stores with lines out the door became piles of unsold Windows 95 boxes.

So even in a case where we tried to predict every element of a future world (and this took weeks of work), getting one assumption wrong (granted, none of us thought any company would be that stupid) resulted in a miss. Had we been aware of this variable, we could have predicted the outcome more accurately, and Microsoft might have avoided the mistake.

However, this brings up the second problem with surveys.

Predictive Surveys Can Change the Outcome

Remember Y2K? A lot of folks, including me, surveyed the world months before the year 2000 and said systems weren't ready and that the result would be a disaster. One critical test at a nuclear power plant indicated that unprepared plants like it would immediately shut down (granted, far better than going critical). We even found instances where folks reported readiness only because they knew they would be retired by the time the event occurred and things went south.

We scared the crap out of everyone, evidently so well that things actually got fixed and, as a result, there was little real breakage. That caused folks to conclude we were full of crap; subsequent warnings about inadequate security were ignored, and the first part of the last decade was partially defined by an increasing number of security breaches.

People act on predictions, and their actions can change the outcome. Analysts aren't oracles, but even if they were, being told you would die on a particular drive would likely keep you from driving, which would invalidate the prediction.

Windows 8 Survey or a Coin Toss

The Windows 8 survey is interesting, but it has about as much chance of being accurate as a coin flip. That doesn't mean Windows 8 tablets will succeed, either; it just means the survey is a guess. It does serve as a warning to Microsoft that it will need to generate demand for its offerings the way Apple did for the iPod, iPhone and iPad. Microsoft is rumored to have a $750M launch budget for the product, so it's apparently not planning to repeat the Windows 95 mistake.

Once Windows 8 hits beta we should all have a much better idea of how well it will do, and once we see the marketing behind it we should be able to make a more accurate prediction. But my primary point is that surveys are only as good as their source: since people don't yet really know what choices they'll have at some future point, they can't accurately predict their own behavior, and simply aggregating those inaccuracies doesn't make them more predictive. So take predictive surveys on technology or politics as potential early warnings, but value them more for their entertainment than for their accuracy.

Rob is president and principal analyst of the Enderle Group. Previously, he was a Senior Research Fellow for Forrester Research and the Giga Information Group. Prior to that he worked for IBM, holding positions in internal audit, competitive analysis, marketing, finance, and security. Currently, Rob writes on emerging technology, security, and Linux for a wide variety of publications and appears on national news TV shows including CNBC, FOX, Bloomberg and NPR. Rob also does a semi-weekly radio spot on consumer technology for Wall Street Journal Radio.