New interfaces challenge touch

Innovative interfaces offer flexibility in device interaction

Touchscreens could be extinct if researchers pioneering new human-computer interfaces have anything to say about it. From brain-controlled machines to gesture-driven devices, there's a range of technologies in development that may find their way into everyday electronic devices.

Several conferences this year have offered a glimpse of these innovative interfaces and what the future may hold.

Touchscreens are somewhat limited in giving feedback to a user. The screen may vibrate when tapped, but that's just about all it can do. At this year's Computer Human Interaction (CHI) conference in Vancouver in May, a researcher from the University of British Columbia showed a way to completely change the feeling of a screen, at times making it slippery and other times making it sticky.

The prototype screen has four actuators that vibrate it.

"This is actually the same technology used in many cell phones or other devices, but it runs at a higher frequency so you don't feel the vibration itself," said Vincent Levesque, who is a post-doctoral fellow. "It pushes your finger away from the piece of glass, a bit like an air hockey table."

Levesque's team had a demonstration set up with basic file folders on screen. When a folder is selected, the screen becomes slippery; when it is dragged over another folder or the trash, the screen becomes sticky.
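
In rough terms, the demo boils down to mapping what the finger is doing on screen to a friction level that drives the actuators. The short Python sketch below illustrates that mapping; the state names and friction values are assumptions for illustration only, not the UBC team's actual code.

# Hypothetical sketch of the folder demo's logic: pick a friction level from the
# current drag state and feed it to the actuators. Values are illustrative.
FRICTION_LEVELS = {
    "idle": 0.5,         # ordinary glass feel
    "dragging": 0.1,     # slippery: high-frequency vibration pushes the finger off the glass
    "over_target": 0.9,  # sticky: vibration backs off so the finger grips
}

def friction_for(selected_folder, finger_over):
    """Return the friction level for the current interaction state."""
    if selected_folder is None:
        state = "idle"
    elif finger_over in ("folder", "trash"):
        state = "over_target"
    else:
        state = "dragging"
    return FRICTION_LEVELS[state]

# Dragging a selected folder across empty screen feels slippery...
print(friction_for("report.txt", None))     # 0.1
# ...until it passes over the trash, where the screen turns sticky.
print(friction_for("report.txt", "trash"))  # 0.9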

The prototype occupied a sizeable section of the table on which it sat. Wires protruded and circuit boards were visible, making it too bulky to integrate into any mobile devices. The system uses lasers to determine the position of the finger. As the team continues work on the project, it hopes to reduce the system's size and replace the lasers with a capacitive touchscreen.

At the CHI conference, university students and research groups dreamt up most of the projects on display and shared them with potential employers who could license the technology and invest in developing it.

Texas A&M University's Interface Ecology Lab favored gestures over touch, creating a gesture-controlled system called ZeroTouch. It looks like an empty picture frame and the edges are lined with a total of 256 infrared sensors pointing toward the center. The frame is connected to a computer and the computer to a digital projector.

"I like to consider it an optical force field," said Jonathan Moeller, a research assistant in the lab.

When the spiderweb of light created by the sensors is broken, the computer interprets the size and depth of the break and displays it as a brushstroke. If just a pencil breaks the beam, the brushstroke will be thin. If an entire arm or head breaks the beam, the stroke will be thick.
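Conceptually, the brushstroke logic is simple: count how many of the frame's beams are blocked and scale the stroke width accordingly. The Python sketch below shows that idea; only the 256-sensor count comes from the description above, and the rest is an illustrative assumption rather than the lab's implementation.

# Hypothetical sketch of ZeroTouch's stroke sizing: the wider the break in the
# web of infrared beams, the thicker the brushstroke. Numbers are illustrative.
NUM_SENSORS = 256          # sensors lining the frame, pointing toward the center
MAX_STROKE_WIDTH_PX = 120  # assumed maximum brush width

def stroke_width(blocked):
    """blocked: one boolean per sensor, True if that beam is broken."""
    broken = sum(blocked)
    if broken == 0:
        return 0  # nothing in the frame, nothing to draw
    # A pencil blocks a few beams -> thin line; an arm blocks many -> thick line.
    return max(1, round(MAX_STROKE_WIDTH_PX * broken / NUM_SENSORS))

pencil = [False] * NUM_SENSORS
pencil[100:103] = [True] * 3   # a pencil interrupts only a few beams
arm = [False] * NUM_SENSORS
arm[60:140] = [True] * 80      # an arm interrupts a wide swath

print(stroke_width(pencil))  # 1  (thin)
print(stroke_width(arm))     # 38 (thick)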

While painting on the digital canvas, users hold an iPhone on which they can select the color of the brush.

Drawing in the air is just a proof of concept. When ZeroTouch is placed over a traditional computer screen it becomes a touchscreen. Instead of creating brushstrokes, the system moves a cursor.

Moeller started working on the project in 2009. It was born out of research that used a projection screen and a camera. He said he thought the system was bulky and wanted to reduce its size.

He considers two-dimensional interaction just the beginning.

"You can stack layers [of ZeroTouch] together to get depth sensing," he said.

The system could then sense objects in a 3D space, but also allow users to hover over objects. Typically, hovering isn't available with touch systems because a finger would occlude what it's hovering over, he said.

If ZeroTouch becomes the new technology to create 3D objects, the Snowglobe project could provide a way to view and interact with them.

Snowglobe is a large acrylic ball with an image projected onto its inner surface through a hole in the bottom. Two Microsoft Kinect sensors are pointed at users, and when they approach and move around the ball, the object inside follows them. If they stretch out their hands, their gestures can control the orientation and size of the object inside the globe. The image is cast by a 3D projector, so wearing 3D glasses adds another dimension to the experience.

John Bolton, of the Human Media Lab at Queen's University, came up with the idea, which was on show at CHI 2011, and had been working on it for two years.

"If we nest an object inside we can present all 360 degrees of that object if somebody walks around the display," Bolton explained. "So opposed to just sitting there with a mouse you can walk around and you're presented with the correct view as your position changes."

Bolton said that, as is true for many of the projects at CHI, there were no immediate plans for commercialization.

After showing a project that let users control a music player by moving their eyes, Japanese mobile phone operator NTT DoCoMo said there were no plans to add it to any products. Shown at Ceatec 2009 near Tokyo and again at Mobile World Congress in Barcelona in 2010, it was a crowd pleaser, but in an August 2011 email NTT DoCoMo spokesman Yoshifumi Kuroda said: "The research is ongoing, but there are currently no plans to use this technology in any products."

The prototype includes earbuds that measure changes in electrical potential as the user's eyes move. Those impulses could then be translated into actions like skipping to the next track or turning up the volume.
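
However the eye movements are detected, the final step is a straightforward mapping from recognized gestures to player commands, along the lines of the Python sketch below; the gesture labels and command set are assumptions, not DoCoMo's design.

# Hypothetical sketch of translating detected eye gestures into music-player
# actions. The gesture names and commands are illustrative only.
COMMANDS = {
    "look_right": "next_track",
    "look_left":  "previous_track",
    "look_up":    "volume_up",
    "look_down":  "volume_down",
}

def handle_eye_gesture(gesture, player):
    command = COMMANDS.get(gesture)
    if command is None:
        return  # blinks, drift or noise: ignore
    getattr(player, command)()

class DemoPlayer:
    def next_track(self): print("skipping to next track")
    def previous_track(self): print("previous track")
    def volume_up(self): print("turning volume up")
    def volume_down(self): print("turning volume down")

handle_eye_gesture("look_right", DemoPlayer())  # -> skipping to next track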

Germany's Hasso Plattner Institute took a different approach to gesture interaction. Led by Patrick Baudisch, the Berlin-based group has developed what it calls imaginary interfaces that allow users to interact with mobile devices when they're not in front of them. Imagine hearing your phone ring in your pocket, but instead of taking it out, you hold up your palm and swipe your finger across it to ignore the call.

The prototype system won't be portable anytime soon. It uses depth-sensing cameras mounted above the users, or sometimes on the users' shoulder, to locate where their fingers are and what they're touching.

Baudisch credited Apple with replacing styluses with touchscreens, but he and his team wanted to take it one step further.

"Why don't we leave this [stylus] out and retrieve no devices at all for these tiny interactions such as turning off an alarm or picking up a phone call or sending to voicebox," he said during CHI 2011. "People will interact directly on the palm of their hand."

The system could work because users can remember where about 70 percent to 80 percent of the roughly 20 icons on their home screen are located, he said.

Just as there is an acclimation period when switching from a mobile device with a keyboard to one with only a touchscreen, Baudisch imagined that there would be a similar adjustment to using a device users can't see.

Touchscreens have been around for decades and they won't be replaced anytime soon, according to Gartner analyst Ken Dulaney. The real power behind touchscreens is the software with which users can interact, he said.

"Pointing to something is human nature," said Dulaney in an interview. Speech recognition isn't perfect and if a word or two is missed the entire context could be changed, he said.

In the short term, Dulaney said, improving the accuracy of the interfaces and reducing fingerprints will be on the minds of developers. However, he imagines that transparent displays might become popular in the future. Users could simply hold their phones up and content could be overlaid, similar to how today's augmented reality applications use a phone's camera, he said.

At Ceatec 2010 in Japan, TDK showed off transparent screens and according to a May 2011 press release, the company has begun mass production of them. Called electroluminescent displays by TDK, the screens have a resolution of 320-by-240 pixels and are "mainly intended for use as the main display panel in mobile phones and other mobile devices."

Brain control interfaces abandon touch and gesture control and rely solely on the power of thought. Researchers at Riken, Japan's government-run research body, have developed a brain machine interface (BMI) that lets users control a wheelchair using thought. The thought patterns are picked up by electroencephalography (EEG) sensors mounted on the user's head. The data is then relayed to a laptop, which interprets it and sends the control signals to the wheelchair.

The system needs about three hours of training per day for a week to achieve a 95-percent accuracy rate, according to Riken.

Plans to use the technology in rehabilitation and therapy are already under way, according to Andrzej Cichocki, head of the Laboratory for Advanced Brain Signal Processing at Riken.

Based on the same principles, a BMI shown by Guger Technologies at Cebit 2011 lets users type by just concentrating on the letters they want to use. The company's intendiX system consists of a skullcap with electrodes, a pocket-sized brainwave amplifier and a Windows application that analyzes and decodes the brain waves.

To enter a letter, the user must stare at that letter on a virtual keyboard. The software flashes the columns and rows of the keyboard and the system tries to detect a response in the brain when the desired letter is flashed. The system looks for brainwaves that are triggered 300 milliseconds after a stimulus.

"The signal is called P300, it is just a usual signal," said Markus Bruckner, with the company. "For example, when you drive behind a car and it steps on its brakes and the red light flashes you have the same response."

It takes quite a bit of concentration and time to type out just a few letters, but for someone who has no other way of typing, it could bring new opportunities to communicate.

The company hopes to improve the response time and said it's down to one second in the lab.

Many of the prototypes from universities and research groups will never make it into commercial products.

"My job is to prove the concept, not to bring it to market," said Santiago Alfaro, an MIT Media Lab researcher who created Surround Vision.

His project extends the traditional television screen onto an iPad or similar mobile device. When users move the device around, additional content will be displayed. He imagines it being used for sporting events or concerts where there are multiple camera angles.

While Alfaro said there is no link between the two, Nintendo employs similar technology in its Wii U game system.

He said there's a benefit when large companies commercialize new interface technology. "This proves that people are thinking the same way," he said. "People will become much more comfortable with the new technology."

While it might take a large corporation to commercialize a new technology, mainstream adoption will ultimately rely on whether consumers can become comfortable with it.

(Martyn Williams and Jay Alabaster in Tokyo contributed to this report.)

Nick Barber covers general technology news in both text and video for IDG News Service. E-mail him at Nick_Barber@idg.com and follow him on Twitter at @nickjb.
