CIO

How transparent is your AI? And are 'black box' systems better?

Dr Rob Walker from Pega on the grey area of AI’s ‘black box’

According to a growing number of researchers and concerned citizens, artificial intelligence systems are fast becoming black boxes that cannot be scrutinised by even the most talented computer scientists.

Given that such systems are already helping to make life-altering decisions, from medical diagnoses to loan limits, it is increasingly important that we understand how they reach their conclusions.

And with the introduction of the European Union's General Data Protection Regulation (GDPR), which requires organisations to explain how automated decisions with legal significance are reached, there is now a business imperative to make AI more transparent.

Dr Rob Walker, vice president of decision management and analytics at Pegasystems, speaks to CIO Australia about the distinction between transparent and opaque AI, and the dangers of the black box.

What is the difference between opaque and transparent AI?

Opaque AI uses algorithms that are not explainable to humans, not just because, in some cases, it's like ‘alien’ thinking, but also because of the sheer complexity of the resulting prediction, classification, or decision models.

Examples of opaque models are the multi-layered neural networks used for deep learning, which roughly mimic the inner workings of the human brain, and some of the results of genetic algorithms, a technique that evolves solutions to problems through a simulated survival-of-the-fittest process.

In contrast, transparent AI relies on techniques that can be successfully explained. Examples are modestly sized scorecards or decision trees that explicitly show how they use data to come to a prediction, classification, or decision.
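To make the contrast concrete, here is a minimal sketch of a transparent scorecard in Python. The weights, thresholds, and criteria are invented for illustration and are not any vendor's actual model; the point is that every contribution to the score is visible, so the decision can be explained line by line.

```python
# A hypothetical credit scorecard (invented weights and thresholds).
# Each rule's contribution is recorded, making the decision explainable.

def credit_scorecard(income: float, years_employed: int, has_defaults: bool) -> dict:
    score = 0
    reasons = []

    if income >= 45_000:
        score += 30
        reasons.append("income >= $45,000: +30")
    if years_employed >= 2:
        score += 20
        reasons.append("employed >= 2 years: +20")
    if has_defaults:
        score -= 40
        reasons.append("prior defaults: -40")

    return {"score": score, "approved": score >= 40, "reasons": reasons}

print(credit_scorecard(income=52_000, years_employed=3, has_defaults=False))
# {'score': 50, 'approved': True,
#  'reasons': ['income >= $45,000: +30', 'employed >= 2 years: +20']}
```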

How far are AI systems a ‘black box’ today?

That depends in part on the algorithm, but also on the data. Even theoretically transparent techniques like decision trees or scorecards can become opaque through sheer complexity.

One rule stating that applicants with an income of less than $45,000 will not be approved for a particular credit card is transparent (even if likely naïve); but hundreds of thousands of interdependent rules are as opaque as a multi-layered neural network.
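That tipping point is easy to demonstrate. The sketch below (assuming scikit-learn and a synthetic dataset, both chosen purely for illustration) trains the same nominally transparent technique at two scales: a shallow decision tree a human can read end to end, and an unconstrained one whose hundreds of interdependent rules are readable only in theory.

```python
# How a "transparent" technique becomes opaque with scale.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)

shallow = DecisionTreeClassifier(max_depth=2).fit(X, y)
deep = DecisionTreeClassifier(max_depth=None).fit(X, y)

# A handful of rules: a human can read this end to end.
print(export_text(shallow))

# Hundreds of interdependent rules: technically inspectable,
# practically as opaque as a neural network.
print("shallow tree leaves:", shallow.get_n_leaves())
print("deep tree leaves:   ", deep.get_n_leaves())
```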

I’d say that AI systems are already more black box than many would think, but at the same time humans are more black box than most would appreciate as well. Companies are no strangers to opacity, and they know how to deal with it through review processes and supervision. The challenge is that the need for checks and balances on AI is less well understood than it is for human decision-making.

The key requirements are, one, a consistent metric to determine opacity; and two, controls that allow opaque algorithms in some areas of the business while insisting on transparency in others.
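As a purely hypothetical sketch of what those two requirements could look like in practice (the area names, the opacity proxy, and the thresholds are all invented for illustration, not a description of any product):

```python
# Hypothetical governance check: an opacity "metric" (here, simply the
# number of rules or parameters) enforced against per-area policies.

OPACITY_LIMITS = {
    "credit_decisions": 50,        # must stay explainable
    "image_classification": None,  # opaque models allowed
}

def opacity(model_size: int) -> int:
    # Crude proxy: more rules or parameters means harder to explain.
    return model_size

def approve_model(area: str, model_size: int) -> bool:
    limit = OPACITY_LIMITS.get(area)
    return limit is None or opacity(model_size) <= limit

print(approve_model("credit_decisions", 20))         # True
print(approve_model("credit_decisions", 200_000))    # False
print(approve_model("image_classification", 10**7))  # True
```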

Why don’t we just make all AI systems transparent?

Insisting on transparency is a severe constraint on the algorithms used and, as a consequence, opaque AI can be expected to be more powerful; for instance, to make better predictions.

The need for transparency limits the degrees of freedom. For instance, the formats in which insights are expressed (regression models, neural networks, and so on) are limited to those that humans understand, or that can be translated into formats humans understand.

In addition, transparency is also a function of complexity. A spreadsheet, for instance, is a format designed to be understood by humans. But a spreadsheet with millions of formulas, each referencing many other cells, becomes intractable as well.

Not all opaque algorithms beat every transparent algorithm, but opaque algorithms have more freedom to find better solutions. In terms of speed, there's no meaningful distinction (especially with dedicated neural processors that execute some families of opaque algorithms extremely fast).

So which should companies use?

Opaque versus transparent is not a delineation between good and bad (after all, human thinking is often quite opaque as well); it's a matter of organisations choosing where understanding trumps performance and where it's the other way around.

Do companies have many transparent AI options?

Many. Even though opaque algorithms like genetic algorithms and neural networks have been around for decades, AI categories like predictive analytics and machine learning have historically tended to be more transparent.

Are AI providers just making the systems opaque for competitive reasons?

Although vendors and providers may like to protect their IP as well, the main challenge here is the inherent complexity of the models generated by modern AI. Not even Google knows exactly, perhaps not even approximately, how its AlphaGo Zero plays Go.

Are consumers bothered either way?

They will be when their mortgage application gets rejected and the only explanation is “because our AI said so.”   

Or when their self-driving car veers off-road to chase pedestrians with a certain profile. Or when they are forced to keep listening to hold music while the voice response system keeps them at the bottom of the queue because their accent puts them in the wrong likelihood bucket.

But in all seriousness, consumers will care about corporate ethics as much or as little as they do now. The difference is that at the moment there's at least some form of accountability.

What will the effect of the GDPR be on businesses using opaque AI?

Among many other things, GDPR requires an explanation for any automated decision that has ‘legal significance’. Lawsuits will soon define the legal boundaries of ‘significance’ but it’s a safe bet that under GDPR, opaque algorithms are a liability where such decisions are concerned.

And yes, it looks like GDPR will force organisations to be able to explain their AI insofar as it is being used for material customer decisions. Opaque AI will be fine for classifying images, translating speech or text, or other automation tasks.

When should we expect to welcome our new AI overlords?

There’s still nothing close to Artificial General Intelligence or Artificial Sentience. Ask me again when AI is smarter than us, regardless of the task at hand.

Currently, AlphaGo, the AI that beat the Go world champion, couldn't play chess if its life depended on it, nor could it iron a shirt or drive a car.

I can do all of these things, with varying levels of success, and many other things besides. AlphaGo won't walk away from someone approaching its off-switch and, more importantly, wouldn't know to be afraid. If that changes, the game changes. For now, I call it too soon to tell.