
Why artificial intelligence is a human right

And how these rights drive breakthroughs in innovation through co-design

2018 is a monumental year for human rights: it is the 70th anniversary of the Universal Declaration of Human Rights, a remarkable document to be read in conjunction with the UN Convention on the Rights of Persons with Disabilities. This year also marks the Convention’s 10th anniversary.

As the artificial intelligence (AI) era inexorably unfolds across every dimension of our lives, the principles enshrined in these two human rights documents can steer this great innovation in a direction that will benefit all humanity.

It is essential that society reflects upon these documents and the opportunities – and some possible challenges – that AI presents to human rights, dignity and the advancement of society.

“The Convention (on the Rights of Persons with Disabilities)… takes to a new height the movement from viewing persons with disabilities as ‘objects’ of charity, medical treatment and social protection towards viewing persons with disabilities as ‘subjects’ with rights, who are capable of claiming those rights and making decisions for their lives based on their free and informed consent as well as being active members of society…”

The impact of technology innovation on inclusion and accessibility is well known: humans have always sought to augment their own capabilities. The UN Convention on the Rights of Persons with Disabilities is a remarkable and deeply perceptive piece of drafting, because it pushes innovation into the realm of each person’s individual expression of our shared humanity.

The Convention calls out the right to freedom of expression and opinion, and access to information – including by accepting and facilitating “augmentative and alternative communication” so that people with disability can “receive and impart information and ideas on an equal basis”.

When closely read, the Declaration and the Convention open the mind to AI as an inherently exponential innovation that both shapes, and is shaped by, humanity. It is this visceral connection to our humanity that establishes AI as a human right: without it, one’s opportunities, and the potential for advancement, are diminished.

For many people, accessing government and commercial services, including of course healthcare, can range from just plain difficult to frightening and isolating. As a mother and grandmother of family members with disability and chronic health problems, I know this first hand.

As a technologist, I also know that AI is not a ‘tool’, an ‘enabler’ or a ‘platform’ – it is neither ‘IT’ nor ‘UX’. Every day we ask, explain, analyse, understand and create. AI’s role is to help everyone, regardless of capability, to perform these basic communication and cognitive functions with dignity and on an equal basis.

The structured systems and rigid processes of past decades created barriers for most people: the website, forms and call centre paradigm significantly disadvantages people with different needs and abilities. And this affects us all as we age.

Governments and many other organisations send letters and forms to people who physically cannot open them, and to people who cannot comprehend the bureaucratic language. Letters, forms and brochures point to complex websites and over-burdened call centres that cannot meet the needs of people who are non-verbal or have cognitive impairment.

It is a self-feeding maze of complexity. Regardless of social status, education or ability, when we are vulnerable our humanity yearns for empathy and conversation.

Yet most organisations, in the health sector and government alike, have told us that we can no longer afford conversations. Driven by budget and rationing philosophies, the first two decades of the ‘online century’ simply pasted an electronic veneer over the existing byzantine structures.

This forced people to interact through a maze of complex websites, portals, understaffed call centres and thousands of online forms, none of them accessible. Only the wealthy had the means to avoid these barriers.

And even with the many billions of dollars invested in technology and systems, the experience of people with disability is traumatic to the point of systemic discrimination.

No amount of fiddling with website structures, apps, so-called ‘digital standards’ or outsourced call centres changes that experience. This situation will not improve simply by using new technologies to repeat the same patterns of service delivery.

Nor will it improve by reverting to the manual patterns of the past, simply creating more opportunities for direct face-to-face communication in the belief that this will give disadvantaged people a better service experience.

From a practical perspective, there is no way that the millions of people involved in everyday service delivery can be trained to deal with the myriad communication difficulties they might encounter each day.

As an example, modern pharmacy in most western countries already provides such an ‘enriched’ manual experience: skilled and empathetic professionals who understand medical and psychological conditions provide face-to-face interaction, yet still struggle to give medications assistance to just one patient with, say, dementia, on a busy morning.

The insight is that this formerly intractable problem can be solved by combining the two approaches: using new technologies to provide a human-to-human style of empathetic experience, but at a pace that enhances communication and in a location that ensures dignity.

A question of human rights, first

Early in 2015, I led a small but highly capable team that began to investigate what it would take to achieve what the Convention describes as ‘augmentative and alternative communication’, and to enable people to ‘receive and impart information and ideas on an equal basis’.

Their work eventually led to the creation of Nadia, the first digital human for service delivery, co-created with people with disability. Few people outside the team are aware that Nadia’s origins and its very purpose lay in the Convention: it did not start as a set of technologies looking for a problem to solve. Of course, we had research into cognitive systems, virtual reality, Second Life, omni-channel and avatars – including public discussions with the community – but what would people with disability actually imagine and want? Only through co-design could this imagination be unlocked and made real.

How could people with disabilities, including those with an intellectual disability, receive and impart information in their own context, and independently?

Had anyone ever asked or involved them? Had anyone ever acknowledged that the unique insights, skills and experience of people with disability could be embedded as determinants of design? And that these new design determinants could quickly become mainstream universal design and benefit everyone?

The Convention’s drafting is remarkable: it calls out the shift from the paternalistic view of people with disabilities as “objects of charity” to “subjects with rights” acting on their free and informed consent. The realisation of augmentative and alternative communication could only be achieved through the imagination and co-design of people with disability, as demonstrated by the following images.

The image on the left is a co-designed sketch of what people with disability imagined; drawn on paper and coloured with crayons well before any of the technologies were brought together.

The experience depicted in this sketch was that people did not want to deal with confusing websites or call centres; they simply wanted to have a face-to-face conversation, and not necessarily with another human, who might be impatient, judgmental or unavailable. This was many months before the Nadia face was identified: the face in the sketch was a composite whose features were chosen through co-design.

Next to the sketched image is the final co-designed and tested Nadia interface. This human rights inspired co-design process established the blueprint through which the component technologies – including the AI system and expressive digital human – were brought to life.

This co-design blueprint encompassed personality, gestures, the conversational model, knowledge and market research. University psychology faculty were deeply involved in supporting the co-design with people with intellectual disability, so that the words, expressions and conversational tempo were empathetic and natural.

Importantly, this supported co-design process ensured that information conveyed through the conversation was understood by people with intellectual disability in their context.

For the first time, instead of people having to adapt to systems, this was a vision of systems adapting to people, going some way towards achieving the objectives of the Convention.

What this human rights inspired co-design process further envisaged was conversational input and output in non-spoken and non-typed formats: a digital human that could converse in any language, including sign language, and display additional information as video, pictures and text.

Soon, haptics will enable communication with people who are deafblind, and will allow the conversation to occur in parallel formats – such as haptic and spoken – so that a deafblind person can interact with the digital human in a multi-party conversation that includes their sighted and hearing family members.

Co-design also envisaged that a person who is ‘locked in’ and can communicate only through brain activity via a NeuroSwitch could have that input transmitted and decoded, with the digital human replying in a natural, empathetic spoken voice.
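
To make this multi-channel idea concrete, here is a minimal, purely illustrative sketch in Python. The profile fields, channel names and function are hypothetical placeholders, not the Nadia implementation; the point is only that one reply can be rendered in parallel formats matched to a person’s communication needs.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical communication profile: which channels a person prefers or needs.
@dataclass
class CommunicationProfile:
    spoken: bool = True          # hear and/or speak
    sign_language: bool = False  # e.g. a signed video rendering
    captions: bool = True        # on-screen text
    haptic: bool = False         # tactile output for deafblind users

# One conversational turn, expressed in parallel formats rather than a single channel.
@dataclass
class MultiModalTurn:
    plain_text: str
    channels: List[str] = field(default_factory=list)

def render_turn(reply: str, profile: CommunicationProfile) -> MultiModalTurn:
    """Render the same reply in every channel the person uses, in parallel."""
    turn = MultiModalTurn(plain_text=reply)
    if profile.spoken:
        turn.channels.append(f"speech: {reply}")
    if profile.sign_language:
        turn.channels.append(f"sign-video: <signed rendering of '{reply}'>")
    if profile.captions:
        turn.channels.append(f"captions: {reply}")
    if profile.haptic:
        turn.channels.append(f"haptic: <tactile encoding of '{reply}'>")
    return turn

if __name__ == "__main__":
    profile = CommunicationProfile(spoken=False, sign_language=True, haptic=True)
    print(render_turn("Your payment arrives on Thursday.", profile).channels)
```

The design choice the sketch illustrates is that the content is produced once and then delivered in whichever formats the person needs, so the system adapts to the person rather than the other way around.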

And who hasn’t experienced the nightmare of filling out government forms, or endless requests for condition and medications information on paper forms at medical specialist and clinician practices? Just imagine how confronting or impossible this is for people with disability.

Through co-design, people with disability imagined a different experience: a conversation with a digital human to replace the whole concept of forms.
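
The ‘conversation instead of forms’ idea can be sketched very simply. The snippet below is again hypothetical, with invented field names and stand-in dialogue functions rather than the actual Nadia dialogue engine: a form becomes one plain-language question at a time, each answer confirmed before moving on, so pace and comprehension are set by the person, not the document.

```python
# Hypothetical: a paper form reduced to a sequence of plain-language questions.
FORM_AS_CONVERSATION = [
    ("full_name", "What is your name?"),
    ("medications", "Which medicines do you take at the moment?"),
    ("support_person", "Is there someone who helps you that we can also talk to?"),
]

def conversational_form(ask, confirm):
    """Collect the same information a form would, one question at a time.

    `ask` and `confirm` stand in for the digital human's dialogue turns;
    here they are simple callables so the sketch can run anywhere.
    """
    answers = {}
    for field_name, question in FORM_AS_CONVERSATION:
        answer = ask(question)
        if confirm(f"You said: {answer}. Is that right?"):
            answers[field_name] = answer
    return answers

if __name__ == "__main__":
    # Trivial console stand-ins for the conversational interface.
    demo = conversational_form(
        ask=input,
        confirm=lambda msg: input(msg + " (y/n) ").strip().lower() == "y",
    )
    print(demo)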

And this significant innovation challenge – building an adaptive and expressive cognitive system that is inclusive of people with an intellectual disability – could only be met through co-design.

What we had started was a change in the way all people would interact with the systems of society and the systems of service.
