Artificial intelligence can go wrong – but how will we know?

You needn't worry about our robot overlords just yet, but AI can get you into a world of trouble unless we observe some best practices.

Every time we hear that “artificial intelligence” was behind something we thought was uniquely human – creating images, inventing recipes, writing a description of a photo – someone worries aloud about the dangers of AI either making humans redundant or deciding to do away with us altogether. But the real danger isn’t a true artificial intelligence that threatens humanity, because despite all our advances, it isn't likely we’ll create that.

What we need to worry about is creating badly designed AI and relying on it without question, so that we end up trusting “smart” computer systems we don't understand and haven't built to be accountable, or even to explain themselves.

Self-taught expert systems

Most of the smart systems you read about use machine learning. It’s just one area of artificial intelligence – but it's what you hear about most, because it's where we're making a lot of progress. That’s thanks to an Internet full of information with metadata; services like Mechanical Turk where you can cheaply employ people to add more metadata and check your results; hardware that's really good at dealing with lots of chunks of data at high speed (your graphics card); cloud computing and storage; and a lot of smart people who've noticed there is money to be made taking their research out of the university and into the marketplace.

Machine learning is ideal for finding patterns and using those to either recognize, categorize or predict things. It's already powering shopping recommendations, financial fraud analysis, predictive analytics, voice recognition and machine translation, weather forecasting and at least parts of dozens of other services you already use.
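To make that pattern-finding concrete, here is a minimal sketch – not from the article, with an illustrative dataset and model chosen purely as assumptions – of the learn-from-labelled-examples, then-predict loop that underlies those services, using the scikit-learn library.

```python
# A minimal, illustrative sketch of supervised machine learning:
# learn patterns from labelled examples, then predict categories for new data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()  # images of handwritten digits, with the correct label for each
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=5000)  # a simple, relatively interpretable classifier
model.fit(X_train, y_train)                # find the patterns in the training data

print("accuracy on unseen examples:", model.score(X_test, y_test))
```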

Outside the lab, machine learning systems don’t teach themselves; there are human designers telling them what to learn. And despite the impressive results from research projects, machine learning is still just one piece of how computer systems are put together. But it's far more of a black box than most algorithms, even to developers – especially when you’re using multi-layered neural networks such as convolutional networks, the systems commonly known as “deep learning.”
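As a small illustration of that black-box quality (my own sketch, not the article's example), compare a classic decision tree – which can print its rules – with a layered neural network, whose learned behaviour is spread across thousands of numbers with no direct human reading:

```python
# Illustrative comparison: an interpretable model versus a layered network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree))               # human-readable if/else rules

net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000).fit(X, y)
print([w.shape for w in net.coefs_])   # just weight matrices: (4, 50), (50, 50), (50, 3)
```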

[Related: Are robots really going to steal your job?]

“Deep learning produces rich, multi-layered representations that their developers may not clearly understand,” says Microsoft Distinguished Scientist Eric Horvitz, who is sponsoring a 100-year study at Stanford of how AI will influence people and society. It looks at why we aren't already getting more benefits from AI, as well as concerns that AI may be difficult to control.

The power of deep learning produces “inscrutable” systems that can’t explain why they made decisions, either to the user working with the system or to someone auditing the decision later. It’s also hard to know how to improve them. “Backing up from a poor result to ‘what’s causing the problem, where do I put my effort, where do I make my system better, what really failed, how do I do blame assignments,’ is not a trivial problem,” Horvitz explains; one of his many projects at Microsoft Research is looking at this.

In some ways, this is nothing new. “Since the start of the industrial revolution, automated systems have been built where there is an embedded, hard-to-understand reason things are being done,” Horvitz says. “There have always been embedded utility functions, embedded design decisions that have tradeoffs.”

With AI, these can be more explicit. “We can have modules that represent utility functions, so there’s a statement that someone has made a tradeoff about how fast a car should go or when it should slow down or when it should warn you with an alert. Here is my design decision: You can review it and question it.” He envisages self-driving cars warning you about those trade-offs, or letting you change them – as long as you accept liability.
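Here is a hypothetical sketch of what such an explicit, reviewable utility function might look like; every name and number in it is an invented assumption, not any real self-driving system's code.

```python
# Hypothetical example: the speed-versus-caution tradeoff as a named, documented
# parameter a user could review or adjust, rather than a value buried in a model.
from dataclasses import dataclass

@dataclass
class DrivingUtility:
    time_weight: float = 1.0   # how much getting there quickly is worth
    risk_weight: float = 5.0   # how heavily estimated collision risk is penalised

    def score(self, speed_kmh: float, estimated_risk: float) -> float:
        """Higher is better: reward speed, penalise risk (risk given as a fraction)."""
        return self.time_weight * speed_kmh - self.risk_weight * (estimated_risk * 100)

    def explain(self) -> str:
        return (f"Design decision: 1 km/h of speed is worth {self.time_weight} points; "
                f"each 1% of estimated risk costs {self.risk_weight} points.")

cautious = DrivingUtility(risk_weight=10.0)  # a user-chosen, more conservative tradeoff
print(cautious.explain())
print("60 km/h at 2% estimated risk scores:", cautious.score(60, 0.02))
```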

Building systems that are easier to understand, or that can explain themselves, is going to be key to reaping the benefits of AI.

Discrimination and regulation

It’s naïve to expect machines to automatically make more equitable decisions. The decision-making algorithms are designed by humans, and bias can be built in. When the algorithm for a dating site matches men only with women who are shorter than they are, it perpetuates opinions and expectations about relationships. With machine learning and big data, you can end up automatically repeating the historical bias in the data you’re learning from.
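A toy sketch of how that kind of built-in bias looks in code (purely illustrative, not any real dating site's logic): the designer's opinion about height becomes a hard rule that every match reinforces.

```python
# Illustrative only: an explicit design decision that quietly encodes bias.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    height_cm: int

def candidate_matches(man: Profile, women: list[Profile]) -> list[Profile]:
    # The designer's assumption, hard-coded: only suggest women shorter than the man.
    return [w for w in women if w.height_cm < man.height_cm]

women = [Profile("A", 180), Profile("B", 165), Profile("C", 172)]
print([w.name for w in candidate_matches(Profile("M", 175), women)])  # ['B', 'C']
```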

When a CMU study found that ad-targeting algorithms show ads for high-paying jobs to men more often than to women, it might have been economics rather than assumptions: if more advertisers bid to reach women, car companies or beauty brands could simply out-bid recruiters for female audiences. But unless the system can explain why, it looks like discrimination.

The ACLU has already raised questions about whether online ad tracking breaks the rules of the Equal Credit Opportunity Act and the Fair Housing Act. And Horvitz points out that machine learning could sidestep the privacy protections for medical information in the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act – which prevent that information being used in decisions about employment, credit or housing – because it can make “category-jumping inferences about medical conditions from nonmedical data.”

It’s even more of an issue in Europe, he says. “One thread of EU law is that when it comes to automated decisions and automation regarding people, people need to be able to understand decisions and algorithms need to explain themselves. Algorithms need to be transparent.” There are currently exemptions for purely automatic processing, but the forthcoming EU data privacy regulation might require businesses to disclose the logic used for that processing.

The finance industry has already had to start dealing with these issues, says Alex Gray, CTO of machine learning service SkyTree, because it’s been using machine learning for years, especially for credit cards and insurance.

“They've got to the point where it affects human lives, for example by denying someone credit. There are regulations that force credit card companies to explain to the credit applicant why they were denied. So, by law, machine learning has to be explainable to the everyman. The regulation only exists for the financial industry but our prediction is you will see that everywhere, as machine learning inevitably and quickly makes its way into every critical problem of human society.”
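One way such explanations can work – a hedged sketch with invented features, weights and thresholds, not any lender's actual system – is to use a model simple enough that each input's contribution to the decision can be reported back to the applicant as a reason code:

```python
# Illustrative "reason codes" from a simple linear scoring model.
import numpy as np

features = ["income", "debt_ratio", "missed_payments", "years_of_history"]
weights = np.array([0.8, -1.5, -2.0, 0.5])  # assumed learned coefficients
threshold = 0.0                              # approve if the score reaches the threshold

def decide_and_explain(applicant: np.ndarray):
    contributions = weights * applicant      # each feature's pull on the score
    score = contributions.sum()
    decision = "approved" if score >= threshold else "denied"
    # If denied, report the two features that hurt the score most as the reasons.
    reasons = [features[i] for i in np.argsort(contributions)[:2]] if decision == "denied" else []
    return decision, reasons

print(decide_and_explain(np.array([0.4, 0.9, 1.0, 0.2])))  # ('denied', ['missed_payments', 'debt_ratio'])
```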

Explanations are obviously critical in medicine. IBM Watson CTO Rob High points out: “It’s very important we be transparent about the rationale of our reasoning. When we provide answers to a question, we provide supporting evidence for a treatment suggestion, and it’s very important for the human who receives those answers to be able to challenge the system to reveal why it believed in the treatment choices it suggested.”

But he believes it’s important to show the original data the system learned from, rather than the specific model it used to make the decision. “The average human being is not well-equipped to understand the nuance of why different algorithms are more or less relevant,” he says, “but they can test them quickly by what they produce. We have to explain in a form the person who is an expert in that field will recognise, not show that it’s justified by the mathematics in the system.”

Medical experts often won’t accept systems that don’t make sense to them. Horvitz found this with a system that advised pathologists on which tests to run. The system could be more efficient if it wasn’t constrained to the hierarchies we use to categorise disease, but the users disliked it until it was changed to work in a more explicable way. “It wouldn’t be as powerful, it would ask more questions and do more tests, but the doctor would say ‘I get it, I can understand this and it can really explain what it’s doing.’”

[Related: Instead of robots taking jobs, A.I. may help humans do their jobs better]

Self-driving cars will also bring more regulation to AI, says Gray. “Today, a bunch of that [self-driving system] is neural networks and it’s not explainable. Eventually, when a car hits somebody and there's an investigation, that issue will come up. The same will be true of everywhere that’s high value, which affects people or their businesses; there's going to have to be that kind of explainability.”
