Twenty years after Deep Blue, what can AI do for us?

IBM built Deep Blue to win at chess -- but since then has taken a collaborative, rather than competitive, approach to artificial intelligence

On May 11, 1997, a computer showed that it could outclass a human in that most human of pursuits: playing a game. The human was World Chess Champion Garry Kasparov, and the computer was IBM's Deep Blue, which had begun life at Carnegie Mellon University as a system called ChipTest.

One of Deep Blue's creators, Murray Campbell, talked to the IDG News Service about the other things computers have learned to do as well as, or better than, humans, and what that means for our future. What follows is an edited version of that conversation.

IDGNS: Is it true that you and Deep Blue joined IBM at the same time?

Murray Campbell: Not exactly true. A group of us, including myself, joined IBM from Carnegie Mellon University in Pittsburgh in 1989, but we didn't come up with the name Deep Blue until about a year later.

IDGNS: And was building Deep Blue a full-time job?

MC: From when I joined until the 1997 match with the world champion, that was my job: to build and improve Deep Blue.

IDGNS: It's said that one of the reasons Deep Blue beat Kasparov was that it had a bug that caused it to play a strange move that somehow psyched him out.

MC: I'm not sure if that's a valid theory or not. What happened was that, at the end of the first game of the match, Deep Blue was destined to lose. It had a losing position but the game could have gone on for quite some time. Kasparov would have had to prove that he knew how to win the position, which of course, I'm sure he was capable of doing. But Deep Blue, due to a bug, played a random move, and the random move was a particularly bad move, and so as soon as Kasparov responded we resigned for Deep Blue.

There was some speculation at some point that this caused Kasparov to not have a good picture of what Deep Blue could and could not do in the game of chess. I think it's just speculation.

IDGNS: How did the bug come about? Were you able to figure out what caused it?

MC: Yes, we did figure it out and fixed it, although we didn't fix it until after the second game, so it was in there for game two as well. Of course, it didn't happen then: It was very rare.

It only appeared under certain circumstances: Deep Blue was given an allotment of time to calculate a move, and if it ran out of time in a certain way, it could end up playing a random move. We had seen that bug a few months earlier and thought we'd fixed it. I think we'd fixed four of the five ways it could happen, but we missed one of them. So, of course, on a world stage, it popped up again.
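
To make that failure mode concrete, here is a minimal Python sketch of this class of bug, assuming an iterative-deepening search under a time budget. It is purely illustrative; Deep Blue's actual implementation ran on custom chess hardware, and its bug had five distinct trigger paths rather than this single one.

```python
import random
import time

def search_to_depth(legal_moves, depth, deadline, evaluate):
    """Stand-in for a real game-tree search: scores every move at the
    given depth, or returns None if the time allotment expires before
    the iteration completes."""
    best_move, best_score = None, float("-inf")
    for move in legal_moves:
        if time.monotonic() >= deadline:
            return None  # this iteration did not finish in time
        score = evaluate(move, depth)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

def choose_move(legal_moves, budget_seconds, evaluate):
    deadline = time.monotonic() + budget_seconds
    best_move, depth = None, 1
    while depth <= 64 and time.monotonic() < deadline:
        result = search_to_depth(legal_moves, depth, deadline, evaluate)
        if result is None:
            break
        best_move, depth = result, depth + 1
    if best_move is None:
        # The buggy corner case: time ran out before even one iteration
        # completed, so the engine falls back to a random legal move.
        return random.choice(legal_moves)
    return best_move
```

The fallback fires only when time expires before a single search iteration finishes, which is one reason a bug like this can stay rare and hard to reproduce.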

IDGNS: Since that match, we've seen DeepMind's AlphaGo take on some of the world's strongest Go players and we've seen IBM's Watson take on the Jeopardy champions. What do you think are the next big challenges that AI is ready for?

MC: Board games have served AI very well, both chess and Go, but I think board games have more or less had their day, and it's time to move on to more real-world problems, problems that have more complexity to them. Games like chess are very well-defined: Everything is there right in front of you, you've got all the information, you know exactly what moves are possible, you know what checkmate looks like and so on.

The real world isn't like that: There's complexity every which way you turn. I think we should add some additional complexity to the challenges and problems that we look at.

There are still interesting challenges in computer games. For example, I saw just recently that a program had beaten a group of human professionals at poker, and that's interesting because it adds what we call imperfect information: hidden information, where your opponents know their cards but you don't. So that's one way of adding complexity. There are others.

But in the long run, we don't just want to tackle problems where we're trying to create a system that can do as well as or better than people. What we really want are systems that complement people in interesting ways and help them make decisions.

In chess, obviously, our goal, at least initially with Deep Blue, was to prove that it was possible to build a system that could play as well as the best players in the world. Along the way, we built this system that played chess in a completely different way than the human way of playing chess. It was apparent that the human approach had its strengths and weaknesses, and the computer approach that we used had its strengths and weaknesses. Combining the two, in fact, was shown fairly quickly to produce a player that could be better than either a human alone or a computer alone.

Twenty years later that's still true, so I think that lesson we learned is applicable to practically every real-world problem we can think about.

For example, in health care, a physician can look at a patient and make a diagnosis and come up with a treatment. But what if they have an assistant that thinks about the problem differently than they do, has different skills, can look at all the recent medical literature and all the ongoing drug trials, and produce alternative diagnoses or alternative treatments that the human expert, the physician, can consider and accept or reject? It allows them to broaden their thinking and with that, get a higher level of performance than with either one alone.

IDGNS: Is one advantage of this augmented intelligence system, where it's ultimately the physician making the decision, that it makes it clear for legal purposes where the responsibility lies?

MC: Sometimes the problems are not life or death. If you install a system that recommends a movie or a book to somebody and it makes a mistake, it's not the end of the world, whereas some kinds of decisions are really important. For many decades to come, I would suggest, we have to have humans with the final word on those decisions. But the more informed they are about reasonable alternatives and the advantages and disadvantages of those alternatives, I think the better off everybody will be.

IDGNS: You found the bug in Deep Blue, but the latest generation of AIs seem a lot more inscrutable and harder to audit than Deep Blue: You can't look back over the search tree of moves they have considered and figure out whether they're giving us the right answer, particularly to these real-world problems you were talking about.

MC: It's perhaps one of the most critical problems in AI today. We've seen some of the successes based on deep learning, large neural networks that are trained on problems, and they're incredibly useful, but they are large black boxes. They don't explain themselves in any useful way at the moment.

There are research projects trying to change that, but even Deep Blue, which was based not on a neural network but on a large search through billions of possibilities, had no useful way of describing exactly why it made the moves that it did. If you were given a Deep Blue recommendation, you would have to do a lot of work to figure out why it had been recommended. And I think that's true for modern AI systems too. In the group I work with, one of the key research problems is the interpretability of AI: allowing systems to explain their reasoning to the human decision-maker, so that the augmented intelligence systems I talked about can be more effective.

IDGNS: What approaches are you taking to that?

MC: There are approaches that use machine learning to help machine learning describe itself. You have one system that makes decisions or gives you a prediction, and then, maybe at the cost of a lot of work, for each of those decisions you figure out the reasoning: a human-understandable reason why that's a good decision or why that decision was made. Then you can build a system that, given a bunch of examples of decisions and explanations, can learn to come up with explanations that are useful. That's one approach.
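
As a rough illustration of that first approach, here is a hypothetical Python sketch: a handful of decisions annotated with human-written reasons, and an "explainer" that learns to attach a reason to new decisions. The data, names, and nearest-neighbour stand-in are all invented for illustration; a real system would train a proper model on many annotated examples.

```python
# Hypothetical training data: (input features, model decision,
# human-authored explanation). All values are invented.
annotated = [
    ({"age": 67, "bp": 152}, "refer", "blood pressure well above guideline threshold"),
    ({"age": 34, "bp": 118}, "clear", "all measurements within normal range"),
    ({"age": 59, "bp": 141}, "refer", "borderline hypertension plus age risk factor"),
]

def train_explainer(examples):
    """Learn a mapping from (features, decision) to an explanation.
    Sketched as nearest-neighbour lookup; in practice this would be a
    model trained on a large set of annotated decisions."""
    def explain(features, decision):
        def dist(example):
            f, d, _ = example
            gap = sum(abs(f[k] - features.get(k, 0)) for k in f)
            return gap + (0 if d == decision else 1_000)
        return min(examples, key=dist)[2]
    return explain

explain = train_explainer(annotated)
print(explain({"age": 70, "bp": 160}, "refer"))
# -> "blood pressure well above guideline threshold"
```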

There are also standard machine learning approaches that produce more interpretable systems based on rules, because rules are very intuitive for people. They can work with rules: If the temperature is greater than this and the humidity is such-and-such, you're going to be uncomfortable.
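
That temperature-and-humidity rule translates almost directly into code. In this toy Python sketch the thresholds are invented, but the point stands: every prediction traces back to a rule a person can read.

```python
# A toy rule-based classifier; thresholds are made up for illustration,
# but each verdict is traceable to a single human-readable rule.
RULES = [
    (lambda t, h: t > 30 and h > 70, "uncomfortable: hot and humid"),
    (lambda t, h: t < 5, "uncomfortable: too cold"),
]

def classify_comfort(temperature_c, humidity_pct):
    for condition, verdict in RULES:
        if condition(temperature_c, humidity_pct):
            return verdict
    return "comfortable"

print(classify_comfort(33, 80))  # -> "uncomfortable: hot and humid"
```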

There are limits, though. Some of those real-world problems are complex enough that a simple set of rules is not sufficient, and I think that's why this is such an important research topic with lots of people looking at it. The challenge is to come up with explanations that are true but also useful. That trade-off is what we're looking at.

IDGNS: We alluded to the idea of legal responsibility earlier. Is there any work being done on making these explanations sufficiently reliable and linked to the reasoning of the AI system that they could be relied on in a legal process?

MC: That's a question that's probably beyond my knowledge because it involves systems in the legal framework, but my thinking there is that, for decisions that matter, we will ultimately need people to be responsible for the coming decades. Recommendations from a computer, along with explanations of those recommendations, are useful tools, but they're only tools, and in the end a human decision-maker needs to take responsibility.

IDGNS: Tell us about the approaches you're working on for helping computers work with people, and the domains they can be used in?

MC: We talked a little about health care. Let me give you a specific example. Using deep learning approaches, we at IBM have developed an image recognition system for skin cancer: Given a photograph of, say, a lesion on the skin, it can classify or identify that lesion with very high accuracy, in fact often higher accuracy than human experts.

But it doesn't understand the full context of the patient, and so that's why this is just one piece of information that needs to be provided to the physician. They see the patient in person, they see their history, and they see the recommendation, say, "This lesion has an 85 percent chance of being cancer; it should be biopsied." That's the recommendation, but the doctor can say, "Oh, well in this case I know it's not a problem because…." That's one example.
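
The decision-support pattern described here is straightforward to sketch in Python: the model contributes a probability, the output is phrased as a reviewable recommendation, and the physician keeps the final call. The threshold below is an assumption for illustration, not IBM's actual system.

```python
BIOPSY_THRESHOLD = 0.5  # assumed cutoff for flagging a lesion for review

def lesion_recommendation(malignancy_prob):
    """Turn a classifier's probability into a reviewable recommendation;
    the physician, not the system, makes the final call."""
    if malignancy_prob >= BIOPSY_THRESHOLD:
        return (f"This lesion has a {malignancy_prob:.0%} chance of being "
                "cancer; biopsy recommended.")
    return (f"Estimated risk is low ({malignancy_prob:.0%}); "
            "routine monitoring suggested.")

print(lesion_recommendation(0.85))
```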

Another I'm quite interested in is what we call "aging in place." You equip the homes of an aging population with sensors and internet-of-things technologies, and then have AI-based systems monitor those sensors, looking for warning signs of anomalous behavior that may indicate a problem and bringing that to the attention of caregivers.
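
Here is a minimal Python sketch of that kind of monitoring, assuming a per-resident activity baseline and a simple z-score test; a deployed system would use far richer models and many sensor streams.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, new_readings, z_threshold=3.0):
    """Flag readings that drift far from a resident's own baseline,
    e.g. daily counts of kitchen activity from motion sensors."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return []
    return [r for r in new_readings if abs(r - mu) / sigma > z_threshold]

# Two weeks of typical activity, then a sudden drop worth a caregiver alert.
history = [12, 14, 11, 13, 12, 15, 13, 12, 14, 13, 12, 13, 14, 12]
print(flag_anomalies(history, [13, 2]))  # -> [2]
```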

One of the biggest deficits in the world today is that we don't know where to focus our attention. There is just so much information that if we had a system that could help us focus on the things that are important, that would be a great way to build a collaborative system.

IDGNS: What's the thing that most excites you about the field of AI at the moment that you think will have the biggest effect on your life?

MC: I hope by the time I'm retired the aging in place system has been worked out in great detail and I can take advantage of it.

As AI advances, some of the systems that exist right now in a limited form are going to become much more useful in the real world.

One of the big problems I see with the whole deep-learning explosion is that these systems tend to be focused on problems that you could say are more perceptual: You get an image or an audio clip and you classify the image or produce the text that comes from that audio clip.

But real multi-step reasoning, planning, and strategic thinking aren't currently a great strength of these AI systems, and that's where people come in. I was saying before how AI systems can help focus people on what matters; I think people can focus AI systems on looking at problems, or looking in directions, where there's an intuition that there is something useful there.

That's where I see these systems developing in the next decade, humans bringing their skills to the mix, the machines bringing their skills and working together. I see that happening in practically every field, health care, education, aging in place, you name it.

IDGNS: You're building AI systems at work, but do you use AI systems to help you in your work?

MC: As a researcher, I use tools, obviously; I use web searches, and I have tools that help me look at the technical papers that come out and identify the ones I should be focusing on. But when it comes to asking the interesting questions, what should I be working on, and identifying the most promising directions for my research, I haven't seen a tool yet that can really help me. Obviously, it would be a great project, one that might make me much more efficient as a researcher, to develop AI tools that would help me do my job better.
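
A crude version of the paper-triage tool mentioned here is easy to sketch in Python; the interest keywords and the counting heuristic are assumptions, and a real tool would use learned relevance rather than keyword overlap.

```python
# Rank incoming paper abstracts by overlap with a researcher's interests.
INTERESTS = {"interpretability", "explanation", "planning", "reasoning"}  # assumed

def rank_papers(papers):
    """papers: list of (title, abstract); returns highest-overlap first."""
    def score(paper):
        _, abstract = paper
        return len(set(abstract.lower().split()) & INTERESTS)
    return sorted(papers, key=score, reverse=True)

inbox = [
    ("Paper A", "a new benchmark for image classification"),
    ("Paper B", "towards interpretability via learned explanation generation"),
]
for title, _ in rank_papers(inbox):
    print(title)  # Paper B prints first
```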

IDGNS: Perhaps that will be the next big challenge?

MC: That's certainly one of them, perhaps not the one that everybody out there thinks of, because there are so many real-world problems that affect millions or even billions of people every day. But the community that works in AI is inundated with information just like everybody else. The number of technical papers coming out every day is quite amazing even compared to five years ago, and we need help too.