Public fear of AI job disruption outstrips expert opinion
- 18 June, 2018 16:16
The expectation that artificial intelligence and automation will replace a significant number of human workers in the next decade is undisputed. What kinds of jobs and how soon, however, is still being worked out.
In 2015, a CEDA report suggested 40 per cent of jobs in Australia were highly “susceptible to computerisation” in the next 15 years. Last year consultancy AlphaBeta said three million Australian jobs (around a third of all jobs) were at risk by 2030.
The issue is a global one. In 2013 the Oxford Martin School predicted that 47 per cent of jobs in the US were under threat. The figure for developing nations was even higher.
“Ironically, the study used machine learning to predict occupations at risk. Even the occupation of predicting occupations at risk from automation has been partially automated,” says UNSW Professor of AI Toby Walsh.
Many jobs have been cited as ‘most at risk’. An Adzuna Australia study in March pointed the red marker pen at those in the healthcare industry, butchers, labourers, drivers, cashiers and machine operators.
But is AI really that good? Is ‘high-level machine intelligence’ (HLMI) – the point when a computer can carry out most professions at least as well as a typical human – that close?
The answer depends on who you ask, and more to the point, how much they know about AI.
More knowledge, less fear
Walsh’s latest study Expert and Non-expert Opinion About Technological Unemployment, published this month in the International Journal of Automation and Computing, found that an individual’s prediction about the number of occupations at risk of automation and how soon they could be automated depends on how much they know about AI.
At the beginning of last year Walsh and colleagues surveyed 200 authors from two leading AI conferences, and 101 from a leading robotics conference.
They also collected responses from 548 readers of an article in The Conversation about poker-playing AI.
Each respondent was asked which of 70 listed jobs they thought were most and least at risk from automation, and to estimate when HLMI would arrive.
The AI and robotics boffins predicted that 31 and 29 occupations respectively were at risk of automation, while the non-experts put that number at 37.
When it came to the arrival of HLMI, the non-experts believed it would arrive sooner than the experts, by several decades.
“For a 90 per cent probability of HLMI, the median prediction of the experts in robotics was 2118, and 2109 for the experts in AI. By comparison, the median prediction of the non-experts for a 90 per cent probability of HLMI was just 2060, around half a century earlier,” Walsh writes.
It’s the economist, stupid
The largest difference in opinion on jobs most at risk between expert and non-experts was for the occupation of economist. Only 12 per cent of experts predicted the job would be automated in the next couple of decades, compared with 39 per cent of non-experts.
“Even if some parts of an economist’s job can be automated in the next two decades, we doubt that economists should be too worried about their own technological unemployment,” Walsh writes.
The next biggest differences in expert versus non-expert opinion were for electrical engineer, technical writer, civil engineer, law clerk, market research analyst, marketing specialist and lawyer.
However, Walsh adds, “despite being more cautious, both groups of experts still predicted a large fraction of occupations were at risk of automation in the next couple of decades”.
Nevertheless, more should be done to manage the public’s expectation about the rate of progress in AI and robotics, and allay their fears about displacement.
“Even in occupations where humans look set to be displaced, our survey holds out some hope. Whilst the potential disruptions may be large, there could be more time to adapt to them than the public fear,” Walsh writes.
On the flipside, when AI technologies previously failed to live up to expectations, investors were inevitably disappointed and scaled back research funding – a so-called ‘AI winter’.
“We should be careful to avoid this in the future,” Walsh concludes.