Imagine a robot stacking boxes in a warehouse when it suddenly sees that one box is in the wrong stack. It goes back and puts the container in the right place. How did the machine know it had made a mistake?
The robot's human boss didn't punch any codes into a computer to have the robot correct its mistake. The boss didn't say a word. She simply looked over, noticed the robot's error, and the robot picked up on her brain's error signal and corrected its mistake.
This may sound like a scene out of a sci-fi movie, but it's close to reality.
Scientists from Boston University and MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have joined efforts to develop brain-controlled robots.
The goal is to make robots a more natural extension of humans, so that the human/robot relationship becomes more of a partnership.
"The process brings us one step closer to seamless interaction between robots and humans," said Frank Guenther, a BU professor of neuroscience. "I think it will allow humans to easily interact with multiple robots."
The project, which has been in the works for about two years, is intended to enable a robot to detect a signal in a human brain that recognizes an error.
The simple act of a human noticing a mistake leads to a fairly robust brain signal that can be detected outside the person's scalp, according to Guenther.
In the MIT/BU project, Guenther said a person in the human/robot partnership wears a skullcap with electroencephalography (EEG) sensors that monitor and record the user's brain activity.
The researchers' machine-learning algorithms enable the "feedback system" to classify brain waves, such as signals created when the person notices a mistake, in the space of 10 to 30 milliseconds.
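The article doesn't describe the researchers' actual algorithms, but the idea of classifying an error-related brain signal from a short EEG window can be sketched as follows. This is a hypothetical illustration, not the MIT/BU system: the template, sampling rate, window length, and threshold are all invented for the example, and a real classifier would be trained on labeled EEG recordings rather than matched against a fixed waveform.

```python
import numpy as np

# Hypothetical sketch: decide whether a short EEG window contains an
# error-related potential (ErrP). Real systems learn this from labeled
# recordings; here a fixed template correlation stands in for a trained
# linear classifier.

FS = 256          # assumed sampling rate in Hz
WINDOW_MS = 300   # assumed analysis window after the robot's action

def errp_template(fs=FS, window_ms=WINDOW_MS):
    """Idealized ErrP shape: a damped oscillation after the event."""
    t = np.arange(int(fs * window_ms / 1000)) / fs
    return -np.sin(2 * np.pi * 5 * t) * np.exp(-t * 10)

def classify_window(eeg_window, threshold=0.5):
    """Return True if the window correlates strongly with the template."""
    template = errp_template()
    n = min(len(eeg_window), len(template))
    score = np.corrcoef(eeg_window[:n], template[:n])[0, 1]
    return bool(score > threshold)

# Simulated data: one window with an ErrP-like deflection buried in noise,
# one window of noise only.
rng = np.random.default_rng(0)
error_window = errp_template() + rng.normal(0, 0.2, len(errp_template()))
clean_window = rng.normal(0, 0.2, len(errp_template()))

print(classify_window(error_window))  # the deflection is detected: True
print(classify_window(clean_window))
```

The speed Guenther's team reports (10 to 30 milliseconds) is plausible for this kind of pipeline because, once trained, classifying a window is just a handful of vector operations.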
The robot then picks up on that brain signal and corrects its actions accordingly.
"It turns out that just the act of noticing the mistake leads to a fairly robust brain signal that we can pick up outside the scalp with these EEG electrodes," said Guenther. "This provides an example of reading the mind of a human, using perfectly safe, non-invasive technology."
In the experiment, the robot is told to pick up one of two cups. A human observer is told which cup the robot is supposed to pick up. If the robot does not pick up the correct cup, it will recognize the human's brain signal indicating that the robot has made a mistake. The robot will then stop, correct itself and reach for the other cup.
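The experiment's control flow can be sketched as a simple feedback loop. Everything below is a stand-in for illustration: `detect_error_signal` plays the role of the EEG classifier (which in the real system would run on live brainwave data), and the function and cup names are invented.

```python
# Hypothetical sketch of the two-cup experiment's feedback loop.

def detect_error_signal(chosen, correct):
    """Stand-in for the human observer's brain: an error-related
    signal 'fires' only when the robot reaches for the wrong cup."""
    return chosen != correct

def pick_cup(first_choice, correct_cup, cups=("left", "right")):
    chosen = first_choice
    if detect_error_signal(chosen, correct_cup):
        # Error signal detected: stop and reach for the other cup.
        chosen = next(c for c in cups if c != chosen)
    return chosen

print(pick_cup("left", "right"))   # wrong first reach, corrected to "right"
print(pick_cup("right", "right"))  # no error signal, stays "right"
```

The key design point the article describes is that the human does nothing deliberate: the "input" to the loop is an involuntary brain response, not a command.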
"These signals can dramatically improve accuracy, creating a continuous dialogue between human and robot in communicating their choices," said CSAIL research scientist Stephanie Gil, in a statement.
Such an ability could one day be transferred to robots working in the home or in the enterprise.
"With a system that a worker could wear, we can use the information from the user's brain without them having to do anything," Guenther said. "It's an effortless process for the user, and they're simply watching for something to go awry."
Guenther acknowledged that robots being able to read humans' brains might make some people nervous, but he said that's not an issue at this point.
"This system requires a very sophisticated and expensive set of equipment to read these signals," Guenther explained. "We're nowhere near a system that can read the mind of someone who is not a willing participant. The technology isn't about reading random people's minds, but just for people who are wearing equipment that will let that happen."
He added that the researchers are working to strengthen their algorithms and make the system more efficient.