Intelligent machines part 3: Big data, machine learning – where’s it all heading?

The move towards unsupervised learning and addressing AI concerns

For decades we’ve been teaching machines to classify and make judgements about things in the world, giving them many examples to draw from. But what if they could teach themselves?

Machine learning, a branch of artificial intelligence, has long been dominated by supervised learning, in which machines are given labelled data or examples to learn from when carrying out tasks such as classifying an object or predicting future outcomes.

A step beyond that is unsupervised learning, where there are no labels and the machine has to observe the data, make sense of it and produce an outcome on its own. It’s this kind of learning that interests many data scientists and AI researchers – mostly because in the real world, data doesn’t usually come in neat, clearly labelled packages.
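The difference is easy to see in code. Here is a minimal sketch, using the scikit-learn library and made-up toy data (an illustration only, not anything from the researchers quoted in this series): the supervised model needs a label for every point, while the unsupervised one is handed the same points with no labels at all.

```python
# Minimal sketch of supervised vs unsupervised learning, using
# scikit-learn and made-up toy data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two blobs of 2-D points, one centred at (0, 0), one at (5, 5).
points = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

# Supervised: every point arrives with a human-provided label.
labels = np.array([0] * 50 + [1] * 50)
clf = LogisticRegression().fit(points, labels)
print(clf.predict([[4.5, 5.2]]))   # -> [1]

# Unsupervised: same points, no labels; the algorithm has to find
# the structure (here, two clusters) on its own.
km = KMeans(n_clusters=2, n_init=10).fit(points)
print(km.labels_[:5])              # cluster ids it invented itself
```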

“To really get to the next level of performance in different applications like language and so on, it’s going to be very challenging to use these annotations. Sometimes we just don’t have the luxury of having users label things for us,” says Adam Coates, director of Baidu Silicon Valley AI Lab.

“For example, if I ask you to do something very complicated for me in a very natural language request, it’s often not the case that we can get someone to read that request and then tell us what it means in a simpler way – it’s just too complicated and expensive.

“So one hope in deep learning research is that for some of these situations where we can’t get tags or we can’t get labels, that it will be able to learn unsupervised – what we think humans spend a lot of time doing. The deep learning algorithm can look at a bunch of text or audio or images and learn to understand and to make sense of them without necessarily being told what is the right answer.”

Deep learning uses artificial neural networks made up of many hidden layers between the input and output. Currently, it can do unsupervised learning, but only to the extent of pre-training – detecting good features to assist in subsequent supervised learning.
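To make that limited role concrete, here is a minimal sketch of the pre-training pattern – assuming TensorFlow/Keras and stand-in random data, not code from any of the labs quoted here: an autoencoder first learns features without labels, and a small supervised classifier then reuses its encoder.

```python
# A minimal sketch of unsupervised pre-training, assuming TensorFlow/Keras
# and stand-in random data (illustrative only).
import numpy as np
from tensorflow import keras

x_unlabelled = np.random.rand(1000, 784).astype("float32")  # no labels here

# Stage 1 (unsupervised): an autoencoder learns to squeeze each input
# through a 32-unit bottleneck and reconstruct it, so the encoder is
# forced to discover useful features without ever seeing a label.
encoder = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
])
decoder = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(784, activation="sigmoid"),
])
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_unlabelled, x_unlabelled, epochs=5, verbose=0)

# Stage 2 (supervised): a much smaller labelled set trains a classifier
# on top of the frozen features the encoder found on its own.
encoder.trainable = False
x_labelled = x_unlabelled[:100]
y_labelled = np.random.randint(0, 10, size=100)  # stand-in labels
classifier = keras.Sequential([encoder, keras.layers.Dense(10, activation="softmax")])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
classifier.fit(x_labelled, y_labelled, epochs=5, verbose=0)
```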

“What machines are currently not really able to do properly is unsupervised learning. A lot of us have been working on this for a long time, but I don’t think we have the solution for it yet,” says Facebook’s director of AI research, Yann LeCun.

“The problem with unsupervised learning is that we don’t even have good principles to base it on. There are a lot of proposals for different ways, theoretical principles when it comes to the underlying mathematics to base unsupervised learning on, but I believe we don’t have the answer yet.”

Google’s open source Word2Vec tool, however, is one example of unsupervised learning that works, LeCun says. Developed by Tomas Mikolov, who is now a research scientist at Facebook, and others, the tool doesn’t need labelled data to learn the vector representation of a word for predicting other words in a sentence.

The reason unsupervised learning works in that case, LeCun says, is that only a small number of words are likely to follow any given word. With images and video, the number of possibilities is much greater, making it difficult to apply unsupervised learning.
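That property is easy to see in practice. Below is a minimal sketch, assuming the gensim library’s implementation of Word2Vec (4.x API) and a made-up toy corpus – nothing here comes from Google’s or Facebook’s own code: the model is handed raw, unlabelled sentences and learns a vector per word by predicting its neighbours.

```python
# A minimal sketch using gensim's Word2Vec (4.x API assumed); the toy
# corpus is made up for illustration. The model sees only raw sentences
# -- no labels -- and still learns a vector for each word.
from gensim.models import Word2Vec

sentences = [
    ["machine", "learning", "needs", "data"],
    ["deep", "learning", "needs", "more", "data"],
    ["unsupervised", "learning", "needs", "no", "labels"],
]

# sg=1 selects the skip-gram variant: predict nearby words from a word.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)

print(model.wv["learning"])               # the learned 50-dim vector
print(model.wv.most_similar("learning"))  # words with similar contexts
```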

“What if I take a movie and I show you a couple seconds of that movie and ask you a second from now what is it going to look like? An infinite number of things could happen in a second that you can’t predict. So it’s very difficult to train a system to predict what the next frame is going to be.

“Right now if you ask it to do this what it’ll do is put some sort of average picture of all the things that could happen and that doesn’t look very good. So that’s why I’m talking about the principles.

“Some people claim – and I kind of agree with this – that the kind of principle on which unsupervised learning should be based is prediction. So as humans, we are good prediction machines. You know what the world is going to look like if you move your head a little to the left. You’ve got a good model of the world where you’ve seen this many times and you know what it would look like.

“You’ve trained yourself by observing the world, and by doing that you’ve extracted some sort of underlying structure of it. We would like to develop machines that can do this, just observe and learn a lot about the world.”
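LeCun’s “average picture” problem has a simple mathematical reading (a standard observation, not an equation from the article): if a system is trained to minimise the expected squared error between its prediction and whatever frame actually comes next, the best single answer it can give is the mean of all plausible next frames – a washed-out blur.

```latex
% With squared-error loss, the optimal single prediction \hat{x}
% for an uncertain next frame x is the mean over possible frames:
\hat{x} \;=\; \arg\min_{y}\, \mathbb{E}_{x}\!\left[\lVert x - y \rVert^{2}\right] \;=\; \mathbb{E}[x]
```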

The AI research team at Facebook published a paper last month that takes a step towards unsupervised learning in image recognition. The team built a deep learning system, trained in an unsupervised manner, that can generate its own images of objects and animals after learning from photographs. The generated images looked realistic to human evaluators 40 per cent of the time.

Baidu’s Coates says more progress will be made on unsupervised learning in the next few years, producing better results and further advancing artificial intelligence.

“We don’t have really big success stories of this technology yet, but I am hopeful in the next few years we will be able to make progress on that. It’s an open research area,” he says.

“But I think for the next couple of years there is still more room to run with the current deep learning technology. I think we can still make a tonne of impact and make some really amazing technology based on supervised learning over the next couple of years,” he adds.

Risk of AI turning into a sci-fi nightmare?

The short answer: Many scientists and researchers in the field say it’s possible but unlikely.

Even though AI has advanced remarkably over the years and has stunned us with what it can do, it’s going to take a long, long time and a lot of effort to develop sentient machines capable of playing out some kind of Terminator-style Skynet freak show.

And if we do eventually develop the ability to fully understand the human brain, consciousness, emotions and so on, the majority of those working in the field today have publicly pledged to develop the technology responsibly, with humanity top of mind.

“I think this is much more science fiction than science reality. We don’t know how the human brain works, we don’t know how consciousness works. So I don’t think that there’s any chance right now that we need to be worried about deep learning taking over the world,” says Coates.

LeCun says there will always be a limit on energy and resources when trying to produce advanced AI in future, meaning a technological Singularity is unlikely.

“It can’t just go infinitely fast,” he says.

Toby Walsh, AI researcher at National ICT Australia, says it is unlikely we are going to achieve fully sentient machines in the next 10 years, but that doesn’t mean it’s off the table.

“I certainly believe we will [eventually] have this, I don’t see any insurmountable reasons why we wouldn’t,” he says. “It is an issue we have to think about, but it’s not an issue we have to worry about tonight.

“Technically, what they talk about in films like RoboCop is possible, but I’m sure society as a whole can work out where we want to end up. You have to start thinking about what those consequences might be because society has to change.”

LeCun says this could become an issue if we develop machines in future that can not only master a wide spectrum of domains or tasks but are also fully autonomous in the sense that they are motivated by something. Today’s AI machines are still narrow in intelligence and are not programmed to be emotionally driven.

“We are driven by low level needs and instincts, and that’s what makes us do either good things or bad things. What would be the equivalent for a machine? Why would a machine wake up every day and decide to do something?

“Survival isn’t [a driver] unless we build it into it [the machine]. Social interaction could be, but we would have to build that into it as well. Or maximising pleasure and avoiding pain wouldn’t be a motivation unless we build that into the machine.

“So for machines to be dangerous, we would have to build into them some motivations that make them do stuff that’s counterproductive to us,” he says.

“But if we can do that, then we can also build them to have motivations that make them beneficial to us,” he adds.

Walsh says some philosophers argue that we won’t get true intelligence without emotion, and that some companies are already thinking about programming customer service AI to empathise with people.

“It’s, ‘I can understand you are getting a bit upset with processing your insurance claim, maybe we should escalate this.’ So you are going to want them to understand emotion. Emotions are going to be a part of the equation but we are not even baby steps towards giving computers emotions yet.

“An interesting question is: Why do we have emotions? What purpose do they have? They must have some value from a Darwinian/natural selection purpose. They must have value, otherwise why else do we have them and why do they govern our lives? They certainly seem to help [be a driver] for survival, right? So maybe they are also useful to give to computers.”

Another issue to think about is jobs, Walsh says. Many roles will have to evolve, or workers will need to transition into new fields that demand a higher level of knowledge. Going to school and just passing, getting a low-to-medium-skilled job and then living comfortably may not be an option in the near future.

“It’s a revolution: just as the industrial revolution changed the nature of work, this is another revolution that will surely change the nature of work,” says Walsh.

“The problem is that computers are cheap, reliable, they don’t need holidays, they don’t get sick, they don’t demand pay rises, and they get faster and cheaper every year. What human worker is like that?

“Unfortunately we are struggling to improve our productivity. It’s not clear, certainly in Australia, that we’ve lifted the game well enough. We’ve rested on being able to dig dirt out of the ground and send it to China. And the rest of the world is a cheaper place; we have very expensive employees.”

Walsh adds that economists have discussed the inequalities in wealth this technological change could bring, with wealth concentrated in the hands of the few who have the advanced AI know-how or ownership of the technology.

“It’s a question for society to address on how we are going to deal with this. Technology can be used for good and bad, like anything such as nuclear power.

“It [AI] is also so inevitable. If we don’t work on it, the Chinese will, or the Koreans, or the Germans, etc. It’s going to happen. And if Australia is going to compete on the world stage we’re going to have to be part of this.”

Whatever comes out of AI in future, one thing is for sure: we need to always keep humans in the loop, says Alex Zelinsky, chief defence scientist at the Defence Science and Technology Organisation (DSTO). Just as humans have organised themselves into a hierarchical governing system, the same should apply when dealing with advanced AI machines.

“Even when you look at a human being, no human is totally autonomous. You are in the sense that you can get up in the morning, have a shower and do things yourself. But at the end of the day there are boundaries for you – you can’t break the law, in a company you have always got someone you report to.

“Our unmanned systems are also operating in the same way. The machines may do low level automation tasks, but at the end of the day there’s a human supervisor. Machines will have to fit into a general hierarchy just like we do in society, business and government,” he says.