With machine learning, you can finally make politicians say what you want them to

Researchers have found a way to create (and manipulate) 3D digital models of well-photographed people

Anyone who's watched a political debate has probably wished they could influence the words coming out of a candidate's mouth. Now, machine learning is making that possible -- at least to some extent.

Researchers at the University of Washington have found a way to create fully interactive, 3D digital personas from photo albums and videos of famous people such as Tom Hanks, Barack Obama, Hillary Clinton and George W. Bush. Equipped with those 3D models, they could then impose another person's voice, expressions and sentiments on them, essentially rendering the models as 3D digital puppets.

“Imagine being able to have a conversation with anyone you can’t actually get to meet in person -- LeBron James, Barack Obama, Charlie Chaplin -- and interact with them,” said Steve Seitz, a UW professor of computer science and engineering.

To construct such personas, the team used machine learning algorithms to mine roughly 200 Internet images of a particular person, taken over time in various scenarios and poses. They then developed techniques to capture expression-dependent textures -- small differences that occur when a person smiles or looks puzzled or moves his or her mouth, for example.
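The paper's actual pipeline is far more involved, but the core idea of expression-dependent textures can be sketched in a few lines of Python. In this hypothetical illustration (the function name, inputs and labels are assumptions, not the researchers' code), photos are assumed to already be cropped and warped to a common reference pose, then simply averaged within each expression bucket:

```python
import numpy as np

# Hypothetical inputs: `photos` is a list of face crops already warped to a
# common reference pose (H x W x 3 float arrays), and `labels` holds an
# expression tag per photo (e.g. "neutral", "smile"). The UW system is far
# more sophisticated; this only illustrates the bucketing-and-averaging idea.
def expression_textures(photos, labels):
    """Build one average texture per expression label."""
    buckets = {}
    for img, label in zip(photos, labels):
        buckets.setdefault(label, []).append(img)
    # Averaging within a bucket suppresses lighting variation across the
    # collection while keeping the detail specific to that expression.
    return {label: np.mean(imgs, axis=0) for label, imgs in buckets.items()}

# Usage with synthetic stand-in data:
photos = [np.random.rand(64, 64, 3) for _ in range(6)]
labels = ["neutral", "smile", "neutral", "smile", "smile", "neutral"]
textures = expression_textures(photos, labels)
print({k: v.shape for k, v in textures.items()})
```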

By manipulating the lighting conditions across different photographs, they developed a new approach to densely map the differences from one person’s features and expressions onto another person’s face, making it possible to “control” the digital model with a video of another person.
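The puppeteering step can likewise be illustrated, very loosely, with a toy delta-transfer scheme. The sketch below uses hypothetical names and flat 2D landmarks (where the actual work performs a dense, texture-aware 3D mapping), and it copies only the driver's motion rather than the driver's absolute face shape, which is why the target's identity survives:

```python
import numpy as np

# Hypothetical landmark-delta transfer, a rough stand-in for the paper's
# dense mapping. All arguments are (N, 2) arrays of corresponding landmarks.
def transfer_expression(driver_neutral, driver_frame, target_neutral):
    """Apply the driver's expression offsets to the target's neutral face.

    Because only the offsets are copied, the result keeps the target's
    identity (its neutral geometry) while taking on the driver's expression.
    """
    deltas = driver_frame - driver_neutral   # how the driver's features moved
    return target_neutral + deltas           # same motion applied to the target

# Usage with synthetic stand-in landmarks:
rng = np.random.default_rng(0)
driver_neutral = rng.random((68, 2))
driver_smiling = driver_neutral + rng.normal(0, 0.01, (68, 2))
target_neutral = rng.random((68, 2))
puppet = transfer_expression(driver_neutral, driver_smiling, target_neutral)
print(puppet.shape)  # (68, 2)
```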

“How do you map one person’s performance onto someone else’s face without losing their identity?” said Seitz. “That’s one of the more interesting aspects of this work. We’ve shown you can have George Bush’s expressions and mouth and movements, but it still looks like George Clooney.”

The technology relies on advances in 3D face reconstruction, tracking, alignment, multi-texture modeling and puppeteering that have been developed over the last five years by a research group led by UW assistant professor of computer science and engineering Ira Kemelmacher-Shlizerman. The results will be presented next week in a paper at the International Conference on Computer Vision in Chile.

The research was funded by Samsung, Google, Intel and the University of Washington.
