Artificial intelligence can go wrong – but how will we know?

You needn't worry about our robot overlords just yet, but AI can get you into a world of trouble unless we observe some best practices.

Explaining yourself

In future, Gray says machine learning systems may need to show how the data was prepared and why a particular machine learning model was chosen. “You’ll have to explain the performance of the model and its predictive accuracy in specific situations.”

That might mean compromises between how transparent a model is and how powerful it is. “It's not always the case that the more powerful methods are less transparent but we do see those trade-offs,” says Horvitz. “If you push very hard to get transparency, you will typically weaken the system.”

As well as the option of making systems more explainable, it’s also possible to use one machine learning system to explain another. That’s the basis of a system Horvitz worked on called Ask MSR. “When it generated an answer, it could say here’s the probability it’s correct,” he says – and it’s a trick Watson uses too. “At a meta-level, you’re doing machine learning about a complex process you can’t see directly, to characterize how well it’s going to do.”
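
In code, that meta-level idea looks something like the sketch below: train a second, simple model to estimate the probability that the first model’s answer is correct. This is a minimal sketch assuming scikit-learn; the dataset, the hand-picked meta-features and both model choices are illustrative, not the AskMSR or Watson implementation.

```python
# Sketch: use one model to estimate how likely another model's answer is correct.
# Dataset, meta-features and model choices are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_meta, X_test, y_meta, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Primary system: produces the answers (here, digit classification).
primary = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def meta_features(model, X):
    """Features describing how confident and decisive the primary model was."""
    proba = model.predict_proba(X)
    top2 = np.sort(proba, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]                        # best vs. runner-up
    entropy = -(proba * np.log(proba + 1e-12)).sum(axis=1)
    return np.column_stack([top2[:, 1], margin, entropy])

# Secondary system: learns to predict whether the primary answer is correct.
correct = (primary.predict(X_meta) == y_meta).astype(int)
confidence_model = LogisticRegression().fit(meta_features(primary, X_meta), correct)

# At answer time, report the answer plus an estimated probability it is right.
p_correct = confidence_model.predict_proba(meta_features(primary, X_test))[:, 1]
print("mean estimated P(correct):", round(float(p_correct.mean()), 3))
print("actual accuracy:", round(float((primary.predict(X_test) == y_test).mean()), 3))
```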

Ryan Caplan, CEO of ColdLight, which builds AI-based predictive analytics, suggests systems may ask how much they will need to explain before they give you an answer. “Put the human being in control by asking ‘do you need to legally explain the model or do you need the best result?’ Sometimes it’s more important to have accuracy over explainability. If I’m setting the temperature in different areas of an airport, maybe I don’t need to explain how I decide. But in many industries, like finance, where a human has to be able to explain a decision, that system may have to be curtailed to certain algorithms.”

Accessibility not fragility

Hector Yee, who worked on AI projects at Google before moving to AirBnB, insists that “machine learning should involve humans in the loop somewhere.” When he started work on AirBnB’s predictive systems, he asked colleagues if they wanted a simple model they could understand or a stronger model they wouldn’t. “We made the trade-off early on to go with human-interpretable models,” he says, because it makes dealing with bugs and outliers in the data far easier.

“Even the most perfect neural net doesn’t know what it doesn’t know. We have a feedback loop between humans and machine learning; we can look at what the machine has done and what we need to do to add features that improve the model. We know what data we have available. We can make an informed decision what to do next. When you do that, suddenly your weaker model becomes stronger.”
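
A toy sketch of that feedback loop, assuming scikit-learn: a person who can read the model spots a missing interaction in its mistakes and adds it as a feature, and the “weaker” linear model gets stronger. The booking-style data and the added feature are invented for illustration, not AirBnB’s actual features.

```python
# Sketch of a human-in-the-loop pass over an interpretable model.
# The data and the added feature are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
nights = rng.integers(1, 15, n)        # length of stay
lead_time = rng.integers(0, 90, n)     # days booked in advance
# The true signal depends on an interaction the base features miss.
y = ((nights * lead_time) > 300).astype(int)

X_v1 = np.column_stack([nights, lead_time])
print("base features:",
      cross_val_score(LogisticRegression(max_iter=1000), X_v1, y, cv=5).mean())

# A human reading the model's coefficients and mistakes adds the missing feature.
X_v2 = np.column_stack([nights, lead_time, nights * lead_time])
print("with added feature:",
      cross_val_score(LogisticRegression(max_iter=1000), X_v2, y, cv=5).mean())
```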

Patrice Simard of Microsoft Research is convinced that applies beyond today’s PhD-level machine learning experts. His goal is “to democratise machine learning and make it so easy to use my mother could build a classifier with no prior knowledge of machine learning.”

Given the limited number of machine learning experts, he says the best way to improve machine learning systems is to make them easier to develop. “You can build a super smart system that understands everything, or you can break it down into a lot of smaller tasks, and if each of these tasks can be done in an hour by a person of normal expertise, we can talk about scaling the number of contributors instead of making one particular algorithm smarter.”

When he was running Bing Ad Center, he abandoned a complex but powerful algorithm for something far simpler. “It took a week to train 500 million parameters using 20 machines and every time something went wrong people pointed to the algorithm and we had to prove it was computing the right thing – and then a week later, the same thing would happen again. I replaced it with a very simple algorithm that was similar in performance but could train in a matter of minutes or hours.” It was easier to understand, easier to develop and there were no more time-wasting arguments about whether the algorithm was wrong.
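
The trade-off is easy to see in miniature, assuming a recent scikit-learn: a heavier model and a simple linear one can land at similar accuracy, while the simple one retrains in a fraction of the time. The synthetic data and the two models below are stand-ins, not the algorithms Bing Ad Center actually ran.

```python
# Sketch: a simpler model with similar accuracy but far faster retraining.
# Synthetic data and both models are stand-ins for illustration.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20_000, n_features=100, n_informative=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for name, model in [
    ("complex (gradient boosting)", GradientBoostingClassifier(random_state=0)),
    ("simple (linear, SGD)", SGDClassifier(loss="log_loss", random_state=0)),
]:
    start = time.perf_counter()
    model.fit(X_train, y_train)
    print(f"{name}: trained in {time.perf_counter() - start:.1f}s, "
          f"accuracy {model.score(X_test, y_test):.3f}")
```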

Being able to retrain quickly is key to keeping machine learning systems current, because the data feeding into them will change over time, which affects the accuracy of their predictions. With too complex a system, Simard warns, “You’ll be stuck with an algorithm you don’t understand. You won’t know if you can keep the system if no-one has the expertise to tell you whether it still works. Or you might have one system that depends on another and one of those systems gets retrained. Can you still rely on it?”
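
A minimal sketch of that retraining loop, assuming scikit-learn: score each new batch of data with the deployed model and retrain when accuracy slips below an agreed threshold. The drift mechanism and the 0.95 threshold here are invented for illustration.

```python
# Sketch: monitor a deployed model for drift and retrain when accuracy drops.
# The drift mechanism and the threshold are illustrative choices.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def batch(shift=0.0, n=1000):
    X = rng.normal(size=(n, 5)) + shift           # features drift over time
    y = (X[:, 0] + X[:, 1] > shift).astype(int)   # the relationship moves too
    return X, y

X0, y0 = batch()
model = LogisticRegression().fit(X0, y0)

for month, shift in enumerate([0.0, 0.5, 1.0, 1.5]):
    X, y = batch(shift)
    acc = model.score(X, y)
    print(f"month {month}: accuracy {acc:.2f}")
    if acc < 0.95:                                # agreed quality threshold
        model = LogisticRegression().fit(X, y)    # retrain on recent data
        X_new, y_new = batch(shift)
        print(f"  retrained; accuracy on fresh data {model.score(X_new, y_new):.2f}")
```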

And if AI is really effective, it’s going to change our world enough that it will have to evolve to keep up, Horvitz points out. A system that identifies patients at risk of hospital readmission and keeps them out of the emergency room will change the mix of patients it has to assess.

On the one hand, AI systems need to know their limitations. “When you take a system and put it out in the real open world, there are typically many unforeseen circumstances that come up. How do you design systems that explicitly understand they’re in an open world and explicitly know that the world is bigger than their information?”
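
One concrete way a system can behave as if the world is bigger than its data is to pair the predictive model with a novelty detector and abstain on inputs unlike anything it was trained on. The sketch below assumes scikit-learn; the detector, data and abstention rule are illustrative choices, not a method Horvitz describes.

```python
# Sketch: abstain on inputs that fall outside the training distribution.
# The novelty detector, data and abstention rule are illustrative choices.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] > 0).astype(int)

classifier = LogisticRegression().fit(X_train, y_train)
novelty = IsolationForest(random_state=0).fit(X_train)   # models the "known world"

def predict_or_abstain(x):
    # Refuse to answer when the input looks nothing like the training data.
    if novelty.predict(x.reshape(1, -1))[0] == -1:
        return "abstain: unfamiliar input"
    return int(classifier.predict(x.reshape(1, -1))[0])

print(predict_or_abstain(np.array([0.2, -0.1, 0.3, 0.0])))  # familiar case
print(predict_or_abstain(np.full(4, 8.0)))                  # far outside the data
```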

But, on the other hand, they also need to know their own impact. “The AI systems themselves as we build them have to understand the influences they make in the world over time, and somehow track them. They have to perform well, even though they’re changing the world they're acting in.”
