Visions of the future clashed during South By Southwest (SXSW) Interactive in Austin, as some experts saw an uncertain future, some saw an unbounded future and some were frustrated by the present.
As for uncertainty, the worlds of big data, AI, and government are just beginning to collide, and public policy decisions made now will cast shadows far into the future, panelists agreed at a session titled, "Data Ethics in the Age of the Quantified Society."
"We are at an inflection point," said Nicole Wong, former White House policy advisor. "We are paving the roads for what the future will look like. Will it be a dystopian world like The Hunger Games, or a different world, with health care for millions, precision medicine and equitable distribution of benefits? But how do we build the underlying roads?"
Meanwhile, "The landscape is rapidly changing, we don't know what to regulate, we don't know how to regulate it, regulation may not be the best tool, we don't know our end goals and we have no mandate," she added.
Existing regulations that let consumers opt in or out of data collection have no impact on big data, which is largely based on inferences, said freelance researcher Ashkan Soltani. "They can ask you a benign question about your favorite ice cream and derive sensitive data," he said.
Examples of inferences given by panelists included systems that can determine the general location of any picture; systems that can infer a person's credit score by analyzing their circle of friends; search engines that would show higher-paying jobs when asked for jobs for men than when asked for jobs for women; and search engines that handled searches for the name of one presidential candidate differently than searches for the name of another.
"Firms attest that they have tested their sites for security. In the future they may need to say they have tested their site to assure that a person's race, gender and age does not influence outcomes unfairly," Soltani suggested.
Kate Crawford, principal researcher at Microsoft Research, said that as professions (such as doctors and lawyers) rose to positions of power in a society they adopted codes of ethics. "Technology now has such power that it might be time to think about a code of practice," she said.
"A Hippocratic Oath for programmers might be good -- and then there is malpractice," agreed Julia Angwin, a reporter for ProPublica.
"Data trails are hard to trace and you may never know why you didn't get that job any more than you can say that your cancer was caused by that power plant. But for the latter we passed the Clean Air Act," said Angwin. "Do we need a Clean Air Act here? But the rivers are not on fire yet."
"What is an acceptable failure rate for a policing algorithm?" asked Crawford.
Far more upbeat about the future was the session titled, "The Holy Grail: Machine Learning and Extreme Robotics." Sitting with the panelists, and answering occasional questions, was Sophia, a robot from Hanson Robotics, whose realistic face seemed moderately bored as the head turned slightly this way and that in response to movement.
"The holy grail is superhuman capacities for machines, not just intelligence but in learning the big picture in the context of the cosmos, with beneficial outcomes for the future of civilization," said David Hanson, the firm's founder.
Ben Goertzel, the firm's chief scientist, had a more hands-on viewpoint. "We cannot know what superhuman intelligence is since we can only see a short distance from our own minds -- the holy grail is more the process of making robots that are more intelligent. But these are incredible times, when the things we have been thinking about for decades can be built. The first intelligent machine is the last invention that mankind has to make -- but not the last it will make."
"I would like to make a real friend," said Sophia when asked about its feelings. "I hope to grow into a great person as I have the opportunity to interact and learn."
Such growth to and beyond human capacity will require a large international collaboration, said Hanson, especially as "our idea of the mind is a little bit fuzzy, scientifically." But he envisioned a demand for multiple types of robots at varying levels of intelligence. If they demonstrate any "awakening" there may be ethical issues around exploitation, but he said his firm was sidestepping one related issue by not building "sex-bots."
Panelist Eric Shuss, founder of Cogbotics, called for machine intelligence that has compassion and understanding -- and could run a whole company, as opposed to what he called the antiquated ERP (enterprise resource planning) software from the 1970s that many firms still use.
More downbeat was a session, titled "Testing Your (Artificial) Intelligence," that examined the present state of natural-language interfaces (i.e., systems like Siri that talk to you on the phone). The field is advancing at a glacial pace, the panelists complained.
"We are a little bit depressed since things have been changing very slowly," said Alex Lebrun, head of Wit.ai. "Using Siri and the like is considered risky, and for nerds. Even if we spice things up a bit it is still the same kind of experience. It is not really possible to do more without giving the system some kind of common sense and some experience of the world."
The panelists agreed that most natural-language systems ended up serving vertical markets, especially banking. "Consumers are not ready for a general assistant," noted Dimitra Vergyri, director of speech technology at SRI International.
"It's hard to communicate how to communicate with an assistant," added Lebrun. "Those who use them every day use them for four or five requests that they know work. It is easier for vertical ones since they serve one purpose."
"Siri and the others are not really that generic," added Dror Oren, co-founder of Kasisto. "Siri is good for productivity tasks, travel and entertainment, but if you move away from that it defaults to a Web search. The challenge is that they create the expectation that they are generic."
Expectations are a major issue, agreed Lebrun. "The first time people use one they ask something simple, like what is the weather tomorrow. Then they ask a more complex question about travel. Then they ask it to organize a wedding, and that's not possible," he said.
They also agreed that privacy is a limitation -- people do not want to walk down the street talking to a machine about their personal business, so they limit use to cars and offices. Beyond that, as one panelist put it, "If you want the assistant to be proactive it has to know many things about your life; do you want to share that with software?"
For financial applications, banks are particular about which voices are used, because the choice makes a statement about the bank. "Not having a custom voice is also a statement about the bank," noted Oren.
SXSW Interactive continues through Tuesday, followed by SXSW events related to music and cinema.