The hidden risk of blind trust in AI’s ‘black box’

Companies intent on weaving AI more tightly into the fabric of their businesses are seeking to better explain how algorithms make their decisions, especially where risk and regulations are involved.

Artificial intelligence is gaining traction in enterprises, with many large organizations exploring algorithms to automate business processes or building bots to field customer inquiries. But while some CIOs see self-learning software as a boon for achieving greater efficiencies, others are leery about entrusting too much of their operations to AI because it remains difficult to ascertain how the algorithms arrive at their conclusions.

CIOs in regulated industries in particular, such as financial services and any sector exploring autonomous vehicles, are grappling with this so-called "black box problem." If a self-driving rig suddenly swerves off the road during testing, the engineers had darn well better figure out how and why. Similarly, financial services firms looking to use AI to vet clients for credit risk need to proceed with caution to avoid introducing biases into their qualification scoring. Because of these and similar risks, companies are increasingly seeking ways to vet, or even explain, the predictions their AI tools render.

Most software developed today that automates business processes is codified with programmable logic. If it works as intended, it does what its programmers told it to do. But in this second wave of automation, software capable of teaching itself is king. Without a clear understanding of how this software detects patterns and observes outcomes, companies with risk and regulations on the line are left to wonder how much they can trust the machines.

Big data fueling AI, and its challenges

AI spans a wide range of cognitive technologies that enable situational reasoning, planning and learning, aping the natural intelligence that humans and other animal species possess. It has long lived in labs as a tantalizing possibility, but the growth in computing power, the increasing sophistication of algorithms and AI models, and the billions of gigabytes of data spewing daily from connected devices have unleashed a Cambrian explosion in self-directing technologies. Self-driving cars can navigate tricky terrain, bots can mimic human speech, and businesses have stepped up their investments accordingly. Corporate adoption of cognitive systems and AI will drive worldwide revenues from nearly $8 billion in 2016 to more than $47 billion in 2020, according to IDC.

There's no question the technologies and their aptitude to learn are growing rapidly, but so is their complexity. At the heart of machine learning and deep learning, the two subsets of AI that most businesses employ, are neural networks: interconnected nodes modeled after the network of neurons in the human brain. As these technologies grow more powerful, the sheer volume of connections firing within the neural networks of self-learning systems becomes nearly impossible to track, let alone parse. That raises the question: Can we trust an algorithm to tell us whether a would-be homeowner can repay a 30-year mortgage without running afoul of fair lending rules?
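
For a concrete sense of that scale, consider a toy model. The sketch below uses scikit-learn on synthetic data; the features, labels and network size are illustrative assumptions, not any lender's actual system. It simply counts how many individual weights even a small neural network learns.

```python
# Illustrative only: a small neural network on synthetic "credit-like" data,
# showing how quickly the number of learned weights outgrows human inspection.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Hypothetical data: 20 numeric applicant features, binary repay/default label.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# A modest network by modern standards: two hidden layers of 64 nodes each.
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X, y)

# Count every individual weight and bias the model has learned.
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"Learned parameters: {n_params}")  # several thousand, even for this toy setup
```

Every one of those numbers plays some part in every prediction, which is why pointing to a particular weighted sum rarely counts as an explanation on its own.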

Bruce Lee, senior vice president and head of operations and technology, Fannie Mae

It's a question that has mortgage companies such as Fannie Mae searching for AI whose decisions they can explain more precisely to regulators, says Bruce Lee, the company's head of operations and technology. It might be logical to infer that a homeowner who manages their electricity bills using products such as the Nest thermostat might have more free cash flow to repay their mortgage. But enabling an AI to incorporate such a qualification is problematic in the eyes of regulators, Lee says.

"Could you start offering people with Nest better mortgage rates before you start getting into fair lending issues about how you’re biasing the sample set?" Lee tells CIO.com. "AI in things like credit decisions, which might seem like an obvious area, is actually fraught with a lot of regulatory hurdles to clear. So a lot of what we do has to be thoroughly back-tested to make sure that we’re not introducing bias that’s inappropriate and that it is a net benefit to the housing infrastructure. The AI has to be particularly explainable."

"Explainable AI," as the phrase states, is essentially AI whose decision-making, conclusions and predictions can be qualified in a reasonable way. Lee pointed to software from ThoughtSpot that details broad explanations such as kind of charting and analysis used to represent data right down to how specific words put into queries may inform results. Such capabilities are a requirement for AI in financial services, Lee says. "People need to explain it in the same way that people need to explain how you train people to avoid racial bias and decision-making and how to avoid other bias in human systems," Lee says.

Much ado about explainable AI

Academics are torn over the need for explainable AI. MIT principal research scientist Andrew McAfee, who recently co-authored a book on how machine learning systems are driving the new wave of automation, isn't among those calling for it.

McAfee, speaking on an AI panel at the MIT Sloan CIO Symposium in May, answered a question about the inability to qualify AI's conclusions this way: "A lot of people are freaking out about that, but I push back on that because human beings will very quickly offer you an explanation for why they made a decision or prediction that they did, and that explanation is usually wrong," McAfee says. "We do not have access to our own knowledge, so I’m a little bit less worried about the black box of the computation than other people." He said that regulatory roadblocks requiring AI to be explained could "retard progress" in the market economy.

Scott Blandford, chief digital officer, TIAA

However, MIT Sloan professor Erik Brynjolfsson, McAfee's co-author on the new book Machine, Platform, Crowd: Harnessing Our Digital Future, acknowledged that it is harder for humans and machines to work together if the machine can’t explain how it arrived at its conclusions. "There’s still lots of areas where we want to have that kind of leverage," Brynjolfsson said. "But in a machine that's making billions of connections, it’s very hard to say this particular weighted sum drove the decision."

Other IT executives, speaking on the sidelines of the MIT event, expressed caution about implementing AI technologies, though they acknowledged AI’s importance to their businesses.

Scott Blandford, chief digital officer of TIAA, said companies have to worry about AI's black box problem because if "you're making decisions that impact people's lives, you'd better make sure that everything is 100 percent." He said that while TIAA could use AI to enhance an analytics system it has built to monitor how its digital business operates, he isn't ready to travel that road without further testing and validation. "We're sticking with things for now that are provable," Blandford says.

Jim Fowler, CIO, General Electric

General Electric CIO Jim Fowler says explaining AI depends largely on the context in which the technology is being used. For example, self-learning software that helps process accounts receivable more efficiently may not require explanation, but GE would need to fully understand how a better algorithm for firing a jet engine works before implementing it.

"You have to have the context of the purpose of how AI is being used and that's going to set how much you care about how to explain it and understand it," Fowler says. "There's a lot of processes that have to be testable, that you've got to be able to show a regulator and to show yourself that you've tested it and proven it. If you've got an algorithm that is constantly changing on something that is related to life and limb it is going to be harder to just trust the black box."
