CIO

Microsoft's Mundie describes computing shift

The executive says that computers will do more to anticipate human needs

In the future, computers will do more work automatically for people, rather than reacting to human input, Microsoft's head of research and strategy said on Monday.

"I've lately taken to talking about computing more as going from a world where today they work at our command to where they work on our behalf," said Craig Mundie, chief research and strategy officer for Microsoft.

"As powerful as computers are... they're still a tool. If you haven't done an apprenticeship and you don't know how to master the tool, you don't get as much out of it as you might," Mundie said.

Mundie addressed a group of university professors and government officials at the company's annual Faculty Summit, held at Microsoft's headquarters in Redmond, Washington.

This subtle shift at Microsoft comes after 10 or 15 years of work on enhancing the user interface for computers. That work has included handwriting, gesture, voice and touch interaction, but those modes have largely been used in the context of the existing graphical user interface.

About a year ago that work shifted, in anticipation of technology improvements that would allow researchers to apply these different ways of interacting with computers in a new way, beyond simply replacing the keyboard and the mouse.

"The question is, can't we change the way in which people interact with machines such that they are much better to anticipate what you want to do and provide a richer form of interaction," Mundie said.

He compared this shift to a historical one that Nathan Myhrvold, his former boss, once pointed out. Myhrvold noted that video cameras were first used to record plays. Not until a few years later did people realize that they could create something new and glue together pieces of film to make a movie. "That's kind of what we're going through with computing," Mundie said.

As an example of what he envisions, Mundie showed off the latest version of a digital personal assistant. The company demonstrated the first version about a year ago: an application that let Microsoft employees speak to an image of a person on a computer screen to schedule a shuttle bus on campus.

The latest version, which Mundie demonstrated in a prerecorded demo, shows a monitor placed outside the door of an office. When someone walks up to the office, the face on the screen wakes up, greets the person and asks if he'd like to talk to Eric, who works in the office. The assistant informs the visitor that Eric is in a meeting and offers to schedule some time for him to meet with Eric. After the visitor swipes his badge, the assistant compares his and Eric's schedules and finds a time for them to meet.
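The calendar comparison the assistant performs can be pictured as finding the first gap that is free in both schedules. The following is a minimal sketch of that idea only; the function, data and date below are illustrative assumptions, not Microsoft's implementation.

```python
# Minimal sketch: given two people's busy intervals (hypothetical data),
# find the start of the first slot that is free for both of them.
from datetime import datetime, timedelta

def first_common_slot(busy_a, busy_b, day_start, day_end, length=timedelta(minutes=30)):
    """Return the start of the first gap of `length` free in both calendars, or None."""
    busy = sorted(busy_a + busy_b)              # merge both busy lists by start time
    cursor = day_start
    for start, end in busy:
        if start - cursor >= length:            # a gap before this meeting fits
            return cursor
        cursor = max(cursor, end)               # otherwise skip past the busy block
    return cursor if day_end - cursor >= length else None

# Hypothetical calendars for the visitor and for Eric.
d = datetime(2009, 7, 13)
visitor_busy = [(d.replace(hour=9), d.replace(hour=10))]
eric_busy = [(d.replace(hour=9, minute=30), d.replace(hour=11))]
slot = first_common_slot(visitor_busy, eric_busy, d.replace(hour=9), d.replace(hour=17))
print("first common slot:", slot)   # 11:00, once both busy blocks have passed
```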

Microsoft has learned some things about what such an application would require if it were offered commercially. Even when idle, the application uses 40 percent of the machine's compute power, because it is constantly aware of its context. "That makes it so clear to me that this will have to be built on a hybridized client plus cloud architecture," Mundie said.

Microsoft often talks about combining local computing with Internet-based computing. The concept, which suits Microsoft because of its business model based on software sales, is slightly different from Google's vision, which relies more on remotely hosted services.

But running an application like the assistant remotely would produce an unusable service, Mundie said. The assistant must respond to people relatively quickly. "That's not likely to be computed in real time if you interpose the latency of a wide area network in the middle of it," he said.
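The latency argument can be made concrete with a back-of-the-envelope budget. The sketch below is illustrative only; the response target and latency figures are assumptions chosen to show the shape of the reasoning, not measurements from Microsoft's prototype.

```python
# Illustrative latency budget for an always-on assistant: compare keeping the
# interactive loop on the local client with routing it over a wide-area network.
# All numbers are hypothetical assumptions.

RESPONSE_BUDGET_MS = 200      # assumed target for a conversational-feeling reply
LOCAL_PROCESSING_MS = 80      # assumed on-device recognition and rendering time
WAN_ROUND_TRIP_MS = 150       # assumed wide-area network round-trip latency
CLOUD_PROCESSING_MS = 80      # assumed server-side processing time

def within_budget(total_ms: float) -> bool:
    return total_ms <= RESPONSE_BUDGET_MS

local_only = LOCAL_PROCESSING_MS
cloud_only = WAN_ROUND_TRIP_MS + CLOUD_PROCESSING_MS

print(f"local only: {local_only} ms, within budget: {within_budget(local_only)}")
print(f"cloud only: {cloud_only} ms, within budget: {within_budget(cloud_only)}")

# A hybrid design keeps the latency-sensitive loop (speech, face tracking,
# on-screen response) on the client and pushes heavier, non-interactive work
# (scheduling, long-term context) to the cloud.
```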

While the digital assistant demo was based on real technology, Mundie showed another demo of what might become possible by applying his vision of computers that anticipate users' needs.

The demo showed an office of the future. In the center is a desk with a large screen, similar to the Surface device, set against two walls that display projected images. Mundie used gestures to move documents and files around the wall surfaces and typed on a virtual keyboard on the desk's screen.

One wall acted as a digital whiteboard, whose contents Mundie could save after a meeting. He held up a page of paper with information printed on it and, with a tap on the wall, copied the document onto it. He also used gestures to drag a document from his phone to the wall.

Mundie also pulled up an image of an architectural model that stretched across both walls. As he walked from one end of the wall to the other, the image moved, as if he were changing his perspective on the image in three dimensions. "Because cameras are tracking my position as I move, it computes my eyepoint to be what it would look like from that location," he said.
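The effect Mundie describes amounts to re-rendering the scene from the viewer's tracked eye position. Below is a minimal sketch of that step, assuming a camera supplies head coordinates; the positions and numbers are made up for illustration and say nothing about how Microsoft's prototype is built.

```python
# Illustrative head-tracked perspective: rebuild the view from the viewer's
# tracked eye position so a wall-sized image appears three-dimensional as
# the viewer walks past it. All positions are hypothetical.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix from an eye position and a target point."""
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye
    f = f / np.linalg.norm(f)           # forward direction
    s = np.cross(f, up)
    s = s / np.linalg.norm(s)           # right direction
    u = np.cross(s, f)                  # corrected up direction
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = s, u, -f
    view[:3, 3] = -view[:3, :3] @ eye   # translate the world so the eye sits at the origin
    return view

# Hypothetical camera-tracked head positions as the viewer walks along the wall.
model_center = (0.0, 1.5, -3.0)         # where the architectural model sits
for x in (-2.0, 0.0, 2.0):              # viewer moves from left to right
    head = (x, 1.7, 0.0)
    view = look_at(head, model_center)
    print(f"head at x={x:+.1f} -> forward {np.round(-view[2, :3], 2)}")
```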

He called the demo "half smoke and mirrors and part real." Some of the touch and gesture interactions were live technology, but his interactions with a digital assistant and with a person on a video call were prerecorded videos. But all the features are possible, he said. "If we don't know how we're going to make it work, we won't include it" in such a demo, he said.

The rest of the Faculty Summit will include presentations by Microsoft executives and partner researchers about collaborative projects.