Is President Trump a model for AI?

Columnist Rob Enderle writes that there may be merit in using people with certain behavioral elements to emulate AIs, but there are also dangers.

Credit: Dreamstime

Earlier this week I read “Donald Trump is the Singularity,” a column by Cathy O’Neil in BloombergView’s Tech section. The piece argues that the new President would be a perfect model for a future artificial intelligence (AI) system designed to run government. I almost discounted it because O’Neil claimed that Skynet, the global AI antagonist of the Terminator movies, had been created to make humans more efficient. It wasn’t. In all but the latest movie, where it more or less birthed itself, it was created as a defense system to keep the world safe (eliminate threats), but humans tried to shut it down, leading it to conclude that humans were a major threat and to move to eliminate them like an infestation.


As a side note, it is also interesting that O’Neil calls Moore’s Law “Moore’s Rule of Thumb,” which is a more accurate description of what it actually is, though personally I prefer “Moore’s Prediction.”

O’Neil has a fascinating background as a data scientist and founded ORCAA, an algorithmic auditing company, which is interesting in and of itself, so even if she got the science fiction wrong she may be right on the science. I think her argument has merit, even though I suspect it was written more as criticism than as a genuine discussion of humans emulating future AI systems.

Let’s explore that this week.

Donald Trump as an AI emulation

As a foundation for her premise, O’Neil accidentally pulls from another sci-fi movie, one of my favorites: Forbidden Planet. The plot revolves around the discovery of a planet whose indigenous advanced population (can’t call them aliens because they were from there) created a machine that could turn thoughts into matter and were destroyed by the “monster from the id.” While they slept, their ids, the part of the mind that fulfills urges and desires, acted through the machine, and since everyone is upset at someone, the result was genocide.

A foundational element of AI is the belief that it is incomplete: basically just the id, with no ego or superego (the other parts of a complete human mind). It thus thinks far more linearly and lacks the empathetic elements typically connected with the concept of a conscience. We have a term for people who behave this way: sociopath. A sociopath, a term often used synonymously with psychopath, is a person who basically doesn’t have a conscience and is driven by their id. It is both interesting and pertinent to note that CEOs of large multinational companies whose income and perks are out of line with their performance and their subordinates’ are often considered psychopaths or sociopaths.

If the premise is accurate, this means you could put a person who fits this profile, one who seems to lack a conscience and operates largely from their id, into a position to emulate what an AI might do. Rather than a computer emulating a human, what O’Neil seems to be arguing is that you’d have a human emulating an AI. Or, in this case, President Trump becomes a model for how you might create an AI that could run government.


For President Trump, O’Neil argues that the result we are now seeing is the outcome of his moving from an initial training process based on the election, which was focused on dynamic competitive information about his opponents, to a very different information feed now that he is President, and that his changing behavior reflects those new sources. It also showcases a system whose reward structure appears to be based largely on attention, and suggests that such a structure would be problematic.

You’d then have a real-life example of how informational or programming errors could manifest in bad decisions and operational problems. From this you could develop models to assure information accuracy tied to proper metrics, so you wouldn’t end up with a Terminator Judgment Day outcome.
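To make the reward-structure point concrete, here is a minimal sketch of my own (not anything O’Neil proposes) showing how a simple learning agent’s “character” is entirely downstream of what its reward measures. Every name and number in it is an invented toy assumption.

```python
import random

# Toy action space: each "statement" has an attention payoff and an accuracy score.
# All names and values here are invented purely for illustration.
STATEMENTS = {
    "measured_policy_detail": {"attention": 0.2, "accuracy": 0.9},
    "provocative_claim":      {"attention": 0.9, "accuracy": 0.3},
    "outright_falsehood":     {"attention": 0.7, "accuracy": 0.0},
}

def train_agent(reward_key, episodes=5000, epsilon=0.1):
    """Epsilon-greedy bandit: it learns whatever behavior the reward pays for."""
    values = {name: 0.0 for name in STATEMENTS}   # running reward estimates
    counts = {name: 0 for name in STATEMENTS}
    for _ in range(episodes):
        if random.random() < epsilon:              # occasionally explore
            choice = random.choice(list(STATEMENTS))
        else:                                      # otherwise exploit the best-known action
            choice = max(values, key=values.get)
        reward = STATEMENTS[choice][reward_key]    # the only signal the agent ever sees
        counts[choice] += 1
        values[choice] += (reward - values[choice]) / counts[choice]
    return max(values, key=values.get)

print("Attention-rewarded agent settles on:", train_agent("attention"))
print("Accuracy-rewarded agent settles on: ", train_agent("accuracy"))
```

Run it and the attention-rewarded agent reliably converges on the provocative claim while the accuracy-rewarded agent converges on the measured one. The numbers don’t matter; the point is that the same learning machinery produces very different behavior depending solely on the reward.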


Avoiding a Judgment Day scenario

O’Neil suggests the way to fix the system is to fix the quality of information being fed into it; I’d also argue you’d need to fix the reward mechanism. But I do think there is merit in using people with certain behavioral elements to emulate AIs as we seek to hand over control to them and let them make decisions in simulations. This would allow us to iterate on and improve training, reward and data models before applying them to machines, significantly slowing the proliferation of problems resulting from mistakes. This would all be to assure that when we did create something like Skynet (fortunately, the real SkyNet is a delivery service), it wouldn’t result in a Judgment Day scenario.
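As a sketch of what that iterate-in-simulation loop might look like, here is one hypothetical way to gate a candidate reward model behind a sandbox before giving it real control. The harm metric, threshold and candidate rewards are all assumptions of mine for illustration, not a description of any real auditing pipeline.

```python
import random

HARM_THRESHOLD = 0.05  # assumed acceptable harm rate; purely illustrative

def simulate_harm(reward_fn, trials=10_000):
    """Sandbox stand-in: estimate how often an agent driven by reward_fn picks
    the harmful of two toy actions. A real pipeline would run a full simulation
    (possibly with human emulators, as discussed above) in place of this stub."""
    harmful = 0
    for _ in range(trials):
        benign  = {"harm": 0, "attention": random.random() * 0.5, "accuracy": 0.9}
        nasty   = {"harm": 1, "attention": 0.5 + random.random() * 0.5, "accuracy": 0.1}
        chosen = max((benign, nasty), key=reward_fn)
        harmful += chosen["harm"]
    return harmful / trials

# Two hypothetical candidate reward models to audit before deployment.
candidates = {
    "attention_only":    lambda a: a["attention"],
    "accuracy_weighted": lambda a: a["accuracy"] + 0.1 * a["attention"],
}

for name, reward_fn in candidates.items():
    rate = simulate_harm(reward_fn)
    verdict = "deploy" if rate < HARM_THRESHOLD else "iterate"
    print(f"{name}: harm rate {rate:.1%} -> {verdict}")
```

The attention-only candidate fails the gate and gets sent back for another iteration; the accuracy-weighted one passes. The design choice being illustrated is simply that mistakes surface, and get fixed, in the sandbox rather than in production.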

Something to think about this weekend.
