AI researcher says amoral robots pose a danger to humanity

Rensselaer Polytechnic professor wants robots to know good from evil

With robots becoming increasingly powerful, intelligent and autonomous, a scientist at Rensselaer Polytechnic Institute says it's time to start making sure they know the difference between good and evil.

"I'm worried about both whether it's people making machines do evil things or the machines doing evil things on their own," said Selmer Bringsjord, professor of cognitive science, computer science and logic and philosophy at RPI in Troy, N.Y. "The more powerful the robot is, the higher the stakes are. If robots in the future have autonomy..., that's a recipe for disaster.

"If we were to totally ignore this, we would cease to exist," he added.

Bringsjord has been studying artificial intelligence, or AI, since he was a grad student in 1985, and he has worked hand-in-hand with robots for the past 17 years. Now he's trying to figure out how to code morality into a machine.

That effort is, on many levels, daunting.

Robots are only now beginning to act autonomously. A DARPA robotics challenge late last year showed just how much human control robots -- especially humanoid robots -- still need. The same is true of weaponized autonomous robots, which the U.S. military has said need human controllers for big, and potentially lethal, decisions.

But what happens in 10 or 20 years, when robots have advanced dramatically and are working in homes as human aides and caregivers? What happens when robots are fully at work in the military or law enforcement, or have control of a nation's missile defense system?

It will be critical that these machines know the difference between a good action and one that is harmful or deadly.

Bringsjord said it may be impossible to give a robot the right answer on how to act in every situation it encounters because there are too many variables. Complicating matters is the question of who will ultimately decide what is right and wrong in a world with so many shades of gray.

Giving robots a sense of good and bad could come down to basic principles. As author, professor and visionary Isaac Asimov noted in writing The Three Laws of Robotics, a robot would have to be encoded with at least three basic rules (a toy sketch of what such an encoding might look like follows the list).

  1. A robot may not hurt a human being or, through inaction, allow a human being to be hurt.
  2. A robot must obey the orders a human gives it unless those orders would result in a human being harmed.
  3. A robot must protect its own existence as long as doing so does not conflict with the first two laws.
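
Rules like these sound crisp, but even a toy encoding exposes how much they leave open. Here is a minimal sketch in Python, built around a hypothetical Action record whose fields no real robot could fill in so cleanly:

    from dataclasses import dataclass

    @dataclass
    class Action:
        # Hypothetical fields: judging real-world harm is the hard part.
        harms_human: bool          # would this action hurt a person?
        prevents_human_harm: bool  # would NOT acting let a person be hurt?
        ordered_by_human: bool     # was this action commanded by a person?
        risks_self: bool           # does the action endanger the robot?

    def permitted(action: Action) -> bool:
        """Check an action against Asimov-style laws, in priority order."""
        # First Law: never harm a human, and do not stand by while one is harmed.
        if action.harms_human:
            return False
        if action.prevents_human_harm:
            return True  # the inaction clause outranks the laws below
        # Second Law: obey human orders (harmful orders were rejected above).
        if action.ordered_by_human:
            return True
        # Third Law: self-preservation, subordinate to the first two laws.
        return not action.risks_self

Every Boolean in that record hides an open research problem: deciding whether an action "harms a human" requires exactly the kind of perception, prediction and context that roboticists are still struggling with.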

"We'd have to agree on the ethical theories that we'd base any rules on," said Bringsjord. "I'm concerned that we're not anticipating these simple ethical decisions that humans have to handle every day. My concern is that there's no work on anticipating these kinds of decisions. We're just going ahead with the technology without thinking about ethical reasoning."

[Photo: Autonomous robots at a recent demonstration at Fort Benning, Ga. The U.S. Army is looking at how robots can help soldiers in the field.]

Even when those needs are anticipated, any rules about right and wrong would have to be built into the machine's operating system, so it would be more difficult for a user or hacker to override them and put the robot to ill use.
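
What "built into the operating system" could mean in practice is an architectural question. One possible sketch, using illustrative names rather than any real robot API, is a gatekeeper that mediates every actuator command so that application code cannot bypass the rules:

    class ActuatorGate:
        """Hypothetical enforcement layer: every motor command passes through here.

        In a real system this would sit below the application layer (in
        firmware, a kernel module, or a trusted hypervisor) so that user
        code -- and most attackers -- cannot drive the motors directly.
        """

        def __init__(self, policy):
            self._policy = policy  # e.g., the permitted() checker sketched above

        def execute(self, action):
            if not self._policy(action):
                raise PermissionError("action rejected by the ethics layer")
            self._send_to_motors(action)

        def _send_to_motors(self, action):
            # Stub standing in for the privileged hardware interface.
            print(f"executing: {action}")

The placement is the point: a check living in application code can be patched out by anyone who controls that code, while a check living below it, in firmware or a trusted kernel layer, can only be subverted by compromising the platform itself.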

Mark Bunger, a research director at Lux Research, said it's not crazy to think that robots without a sense of morality could cause a lot of trouble.

"This is a very immature field," said Bunger. "The whole field of ethics spends a lot of time on the conundrums, the trade-offs. Do you save your mother or a drowning girl? There's hundreds of years of philosophy looking at these questions.... We don't even know how to do it. Is there a way to do this in the operating system? Even getting robots to understand the context they're in, not to mention making a decision about it, is very difficult. How do we give a robot an understanding about what it's doing?"

Dan Olds, an analyst with The Gabriel Consulting Group, noted that robots will be most useful to us when they can act on their own. However, the more autonomous they are, the more they need a set of rules to guide their actions.

Part of the problem is that robots are advancing far faster than the thought being given to their guiding principles.

"We want robots that can act on their own," said Olds. "As robots become part of our daily lives, they will have plenty of opportunities to crush and shred us. This may sound like some far off future event, but it's not as distant as some might think.

"We can't build an infant machine and let it grow up in a human environment so it can learn like a human child would learn," said Bringsjord. "We have to figure out the ethics and then figure out how to turn ethics into logical mathematical terms."

Bringsjord also noted that robots need to be able to make decisions about what they should and shouldn't do -- and make those decisions quickly.

As he put it: "You don't want a robot that never washes the darned dishes because it's standing there wondering if there's an ethical decision to be made."
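
One common engineering answer to that speed worry, assuming that most household actions are ethically routine (an assumption the article does not make explicit), is to precompute verdicts for the common cases and reserve slow deliberation for the genuinely novel ones:

    # Precomputed verdicts for ethically routine actions -- assumed
    # values for illustration, not a real policy table.
    ROUTINE_VERDICTS = {"wash_dishes": True, "vacuum_floor": True}

    def decide(action: str, deliberate) -> bool:
        # Fast path: routine actions get a cached verdict, so the robot
        # never stands at the sink wondering whether dishes are ethical.
        if action in ROUTINE_VERDICTS:
            return ROUTINE_VERDICTS[action]
        # Slow path: novel actions go to full ethical reasoning; a deployed
        # system would time-box this call and refuse if it overruns.
        return deliberate(action)

The cached table keeps the dishes getting washed; the hard, unsolved part remains what the slow path should actually do.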

Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld. Follow Sharon on Twitter at @sgaudin, on Google+ or subscribe to Sharon's RSS feed. Her email address is sgaudin@computerworld.com.
