For many people, hearing the words ‘artificial intelligence’ conjures up images of a future world not too dissimilar to the one depicted in the 2002 science fiction film Minority Report. In the film, police use psychic technology to catch and convict murderers before they commit their crimes.
Nobody has psychic abilities, but the film does give us a taste of what our future may look like when artificial intelligence (AI) technologies infiltrate every part of our lives.
Biometrics technologies such as facial, emotion and fingerprint recognition, predictive analytics, invariant analysis, textual entailment recognition, rapid machine learning, and autonomous and adaptive control are all part of an AI fabric that will completely transform industries.
IT chiefs gathered in Perth recently for a luncheon to discuss how these AI technologies can significantly change the way their businesses operate and how they interact with internal and external customers.
The luncheon was sponsored by NEC and included presentations by Dr Nobuhiro Endo, chairman of the board at NEC Corporation; Mike Barber, chief operating officer at NEC Australia; and Gordon Gay, general manager, R&D at NEC Australia.
Dr Endo, Barber and Gay then joined CIO Australia’s editor-in-chief Byron Connolly for an interactive panel discussion about the potential of AI and the ethical and moral challenges faced by organisations implementing these technologies. They also discussed which AI solutions will have the biggest impact on business and society as a whole in coming years.
To kick off proceedings, the panellists were asked about their views on a recent facial biometric matching agreement signed by the Council of Australian Governments. The agreement gives federal and state police immediate access to individuals’ visa, passport, citizenship and driver’s licence images to support criminal investigations, including identifying terror suspects. Some Australians fear their privacy is being eroded and that the increasing use of biometrics technology will not make us safer.
“Artificial intelligence introduces some moral and ethical issues and you often hear the words ‘Big Brother’, so how far do you go?” said Barber.
“Given the events [terrorist attacks] that have been taking place over the past few years I think people in general would feel safer if they knew that the data that is being shared is being used for the right reasons.”
“But one of the challenges from a political point of view is the ability to change legislation across various jurisdictions within a particular country and across the globe,” he said.
Dr Endo added that there is a need to decide as a society if and how we want to apply biometrics and other AI technologies. He said communicating the importance and value of these technologies in these scenarios as well as how to address potential privacy issues was vital.
AI’s future impact
The eventual societal impact of the ubiquitous use of AI, automation and robotics technologies in the coming years is anyone’s guess, but what is certain is that they will fundamentally change the way humans live and work.
The panellists were asked if it is the responsibility of technology companies to think about the ethical consequences of automation in a future economy where we no longer need warehouse packers, farmers, and in some cases, even certain types of doctors.
The theoretical answer, according to some philosophers, is that if we all have an equal input into the decision making process, then humans can adapt to make decisions about these potential consequences as a collective, said Gay.
“It’s a question around where the power is concentrated,” he said. “If the power is concentrated in the hands of a few individuals, are we as a society making decisions or is it Google or the other big companies making those decisions for us?”
“You hear this discussion around philosophy groups at universities – that we will adapt because we have been doing this for years. But the question is, ‘is this power to adapt shared equally, or is that adaptation then skewed because it’s in the hands of a few?’ That’s the challenge for us – having an equal share of power in decision making,” he said.
What is certain, added Gay, is that if we stop sharing information, we will create an unequal society.
“For example, I was talking to the former CTO of SpaceX, who asked how we can, as a collective across all companies, start sharing our cyber data rather than keeping it siloed and using it only for our own economic benefit. He is trying to get people to agree to start sharing cyber data – so people are thinking about these things.”
On the question of potential job losses from automation, Barber was adamant that despite the threat it presents to traditional roles, there is a willingness throughout society to push forward into new areas.
“Many years ago, when computing came on the scene, we all thought we were going to lose our jobs and be a paperless society but we still use printers. Jobs in the future will be completely different but will they [the machines] take over? They may in some areas but the complex things will be for humans to decide.”
“I believe we need to keep the strategic thinking to ourselves and not let that go to the machine,” he said.
Dr Endo added that technology companies like NEC shouldn’t hesitate to develop new AI technologies but how they are applied in certain areas will ultimately be decided by society.
“We need to improve technology but how to apply it will be decided by humans and some consensus will be required. We need to communicate with each other and decide how technology is best used,” he said.
At the same time, service providers need to ensure that they use data about individuals in the most ethical way possible, he added.
AI access for everyone
As organisations such as Google and Microsoft start democratising access to artificial intelligence technologies, will they ultimately become a human right?
Gay believes that, like smartphones, AI technologies will become more affordable and accessible to everyone in the coming years.
“Think about where we are today with the smartphone and how much data we have access to today compared to the year 2000,” he said, citing the availability of mobile devices to developing communities across countries like India.
“People are working in India on building very cheap smartphones as a way to help people access technology. So the reality is that by the forces of capitalism, AI technologies will become more accessible.”
Barber added that when it comes to the availability of new technologies, including AI, ‘no-one really cares’ about the underlying platform driving a particular service.
“As long as you are getting the service that you want, then you’re going to be happy with that. Technology is becoming an essential utility … and no-one worries about IT until it stops and in some cases, IT still doesn’t have the respect that it should have within companies, particularly at board level.”
He added that there is a divide across the community in relation to technology literacy and access.
“A lot of people are still unconnected and there’s a long way to go around that when you start talking about different countries that are at different stages of technology adoption.”