Machines are getting better at writing. They can finish our sentences. They can reply to our emails. They can write news reports and even novels. But just because they can doesn’t mean they should.
Artificial intelligence (AI) is launching a technology revolution that will transform business over the next decade. The most powerful use for AI may be in the area of decision support, where algorithms feed us streams of knowledge and advice as we go about our work. Gartner says AI augmentation alone will create $2.9 trillion (with a “t”) in business value in 2021.
AI evolution is necessary for enterprise security, too, if only because the cybercriminals will use it to build better malware. We’ll benefit from AI in manufacturing, design, transportation and in countless other areas.
In short, AI will prove to be a huge boost to business.
But as we embark on this partnership with artificial intelligence, it’s important that we safeguard human intelligence. And the biggest threat to human intelligence is software that writes.
The creeping takeover of business writing
The mainstreaming of AI business writing began with Google Smart Reply four years ago. Google Inbox users were offered a few colorless options for a reply to most emails. The feature still exists in Gmail, and with a single click you can respond with “Thanks!” or “I’ll send it to you” or “Let’s do Friday!”
Last year Google added Smart Compose, which finishes the sentences you start. You can choose Google’s words by pressing the tab key.
Using Smart Reply and Smart Compose saves time, but it makes replies dull. They’re dull because Google keeps the suggestions generic, designed never to annoy or offend anyone (Google’s AI, for example, avoids gendered pronouns like “he” or “she”), and because millions of other Gmail users are choosing exactly the same wording. We all sound the same.
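Under the hood, this kind of completion is just next-word prediction. Here’s a deliberately tiny sketch in Python — a bigram counter, nothing like the scale of Google’s actual models, and the sample corpus is invented for illustration — that shows why such suggestions converge on the most common, most generic phrasing:

```python
from collections import Counter, defaultdict

# A made-up mini-corpus of past replies (purely illustrative).
corpus = (
    "thanks for the update "
    "thanks for the invite "
    "let us do friday "
    "i will send it to you "
    "thanks for the reminder"
)

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
words = corpus.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def suggest(word):
    """Return the word most often seen after `word`, or None."""
    counts = next_word_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(suggest("thanks"))  # "for" — the single most frequent continuation
```

Because the model always surfaces the statistically safest continuation, every user typing “thanks” gets steered toward the same bland phrase — which is exactly the homogenizing effect described above.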
Google is not alone. Lightkey makes a Windows application that works like Google’s Smart Compose.
Quillbot is a cloud-based tool that can rephrase what you write (or what you copy and paste from others’ writing). It typically produces awkward prose. Machines have no ear for language.
StoryAI is a tool based on OpenAI that will write a whole story if you write the beginning. I tried it by pasting in the opening paragraph of this column. You can read the column StoryAI wrote and decide who did a better job.
We find ourselves in the tragicomic place where AI writes financial news stories mainly for human consumption, but other AI also reads those stories to provide input for automated trading systems. AI does the writing. AI does the reading. And at some point AI is just going to cut humans out of the trade and keep all the money.
Automated writing will not only get better; it will also be built into more and more of the tools we use to write. The temptation to just let the machines do the writing will only grow. What’s wrong with that?
Here’s what’s wrong with that
The main problem with letting AI write for us is that writing isn’t just writing. Writing is one component of literacy, which includes reading, writing and thinking.
Writing involves revision, which clarifies thinking. We think. We write what we think. Then by reading what we write we realize the errors in our thinking, or at least in the way we have expressed our thinking. We rewrite until our thoughts are clearly and accurately and fully expressed. This practice is at the core of our ability to analyze, create, make good decisions and make progress in our lives and in our work.
Literacy and thinking are connected. This was the point of George Orwell’s Newspeak idea in the novel 1984. The totalitarian government in that book used restrictions on language to make complex thought impossible. Its purpose was “to diminish the range of thought” in order to pacify and enervate the public.
Writing, even writing business emails, forces us to confront our own thoughts in black and white. And this makes us cultivate our ability to think clearly.
It’s also the foundation of our ability to talk with logic and coherence. You’ll notice that good writers tend to speak well.
And writing aids memory. Just letting AI communicate for us, even if we choose from a menu of options, makes it easier to forget what “we” said.
More to the point, writing ability is a use-it-or-lose-it proposition, and AI systems that write for us could make us gradually lose it.
When we let writing tools do the writing for us, our literacy fades, and we begin to base our decisions on superficial impressions rather than on critical or analytical thinking.
The critical faculty is already under siege with conventions like emoji. By using cartoons in place of words, we communicate vague impressions rather than specific thoughts. As such, it’s not necessary to think specific thoughts in the first place. Textspeak, SMS abbreviations, autocorrect, emoji: we’re slouching toward idiocracy. AI that writes our business communication for us is the professional version of all that.
Our relationship with literacy exists on a spectrum.
At one end, we’re fully realized humans with language and literacy to think and communicate better. At the other end we’re less human. We’re “echoborgs” — meat puppets who mindlessly regurgitate the words fed to us by AI.
We should be trying to move toward the good end of that spectrum, not the bad one.
Fearmongering over AI is common nowadays — AI, we’re told, will take our jobs and ultimately have no use for us, other than to keep us as pets. This AI technopanic is based on the knowledge that the machines will just keep getting smarter. We should be more worried that AI will make us all dumber.
The most efficient way for AI to make us dumber is to take the task of writing away from us. Our critical and creative faculties will atrophy. Our minds will become dull. And we’ll all become so boring that the machines may not even want us around as pets.
If you’re concerned about AI making us all redundant, you can do something about it today and every day: Don’t let AI put words in your mouth. Reject automated writing in all its forms. Do your own writing. Think for yourself.
The risk isn’t that machines will get smarter. It’s that humans will get dumber.