Ask Watson or Siri: Artificial intelligence is as elusive as ever

At O'Reilly Strata + Hadoop World, a Salesforce.com AI expert assessed the state of machine smarts

In 1966, some Massachusetts Institute of Technology researchers reckoned that they could develop computer vision as a summer project, perhaps even getting a few smart undergraduates to complete the task.

The world has been working on the problem ever since.

Computer vision is the task of getting computers to recognize objects the way people do. That's a tree. He's Carlos. And so on. It's one of a number of tasks considered essential for generalized artificial intelligence, in which machines can act and reason as humans do.

While we've made considerable headway in computer vision, especially in recent years, the fact that it has taken 50 years longer than expected shows why AI (artificial intelligence) is such a difficult and elusive goal.

"How much progress is being made? It's really hard to get a handle on that," said Beau Cronin, a Salesforce.com product manager currently working on some AI-influenced technologies for the company. Cronin spoke Friday at the O'Reilly Strata + Hadoop World conference, in New York.

The main theme of the conference was big data. The need for big data analytics has given AI research a shot in the arm. Today the titans of the Internet industry -- Apple, Google, Facebook, Microsoft, IBM -- are putting AI research into the driver's seat, pushing forward the state of the art for seemingly routine tasks such as ad targeting and personalized assistance.

But in many ways, we are no closer to achieving a general artificial intelligence, in the sense of a computer that can behave like a human, Cronin observed. Systems that use AI technologies, such as machine learning, are designed to execute very narrowly defined tasks.

The state of AI has always been hard to assess, Cronin said. AI systems are hard to evaluate: They may excel in one area but fall short in another, similar task. Many projects, sometimes even very well-funded ones, go nowhere.

Even basic definitions of AI are still not locked down. When two people talk about AI, one may be referring to a specific machine learning algorithm while the other may be talking about autonomous robots. AI still attracts oddballs, lone wolves working in their basements 10 hours a week hoping to solve the AI problem once and for all.

The overambitious MIT "Summer Vision Project" of the 1960s pointed to one of the major stumbling blocks for AI research, since named Moravec's Paradox.

Moravec's Paradox asserts, basically, that tasks that are easy for people -- object recognition and perception -- are extremely difficult for computers, while tasks that are simple for computers -- proving complex theorems -- are extremely difficult if not impossible for people (some present readers excluded, no doubt).

The waves of hype around getting machines to think, and the subsequent disillusionment born of the marginal results, led the field through a number of what have been called AI winters, in which research funding dries up and progress slows.

We probably will not see another AI winter, if only because too many large companies, notably Google and Facebook, are basing their business models on using intelligent computing to better intuit what their users are looking for, Cronin said. Other companies offer AI-assisted technologies, such as Apple with Siri and IBM with Watson.

In many ways, today's AI systems are in a direct lineage from the first AI systems built in the 1960s, such as Eliza -- the psychiatric advice-dispensing program still used for some Twitterbots today -- and the Perceptron, one of the first precursors to deep-learning neural networks.

Such early AI systems were "deeply flawed and limited. They were just very basic in their capabilities," Cronin said. Nonetheless, "you can draw a direct line from those early systems to the work we're doing today in AI," he observed. "Watson is what we wished Eliza would be."
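
To see how direct that line is, consider the Perceptron itself. Rosenblatt's learning rule fits in a dozen lines of Python; the error-driven weight update below is, stacked into many layers, the same basic idea that drives today's deep networks. (This is an illustrative sketch; the function names and the toy AND-gate data are ours, not Cronin's.)

    # Minimal perceptron sketch -- illustrative only.
    # It learns a linear decision rule via Rosenblatt's error-driven update.
    def train_perceptron(samples, labels, epochs=20, lr=0.1):
        weights = [0.0] * len(samples[0])
        bias = 0.0
        for _ in range(epochs):
            for x, target in zip(samples, labels):
                activation = sum(w * xi for w, xi in zip(weights, x)) + bias
                predicted = 1 if activation > 0 else 0
                error = target - predicted  # -1, 0 or +1
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # Toy problem: learn a logical AND from four examples.
    X = [(0, 0), (0, 1), (1, 0), (1, 1)]
    y = [0, 0, 0, 1]
    w, b = train_perceptron(X, y)
    print(w, b)  # converges to a separating line, e.g. w=[0.2, 0.1], b=-0.2

A single-layer Perceptron like this one famously cannot learn XOR -- one of the limitations Minsky and Papert documented -- which is part of what Cronin means by "deeply flawed and limited."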

After years of very little progress, though, we are becoming awash in ever-more-astounding forms of AI-like assistance for specific tasks. The pace of advance has surprised "even people who have been in the field for a long time," Cronin said.

Self-driving vehicles, now on the cusp of becoming commercially available, were considered an almost unachievable technology as little as 10 years ago.

Perhaps this is due to the change in how AI research is funded. Governments with research money to spare have always invested in researchers with grand ambitions. And for many years, small commercial research organizations such as SRI International and Cycorp advanced the state of the art.

These days, AI research has benefactors across most of the major IT and Internet companies, such as Google, Facebook and Microsoft Research. Many smaller startups, flush with venture capital, are also pushing the envelope.

"The work is increasingly applied to commercial [projects] rather than academic" ones, Cronin said. As a result, AI technologies are now operating at larger scales than they ever did in the academic days. "Deep learning on its own, done in academia, doesn't have the [same] impact as when it is brought into Google, scaled and built into a new product."

As a result, AI methods, such as machine learning, are now being integrated into commercial services and products at a speedier pace than ever before. Cronin noted that Watson and Siri are more notable as "big integration projects" than for pioneering new forms of intelligence.

The growing influx of big data has helped the field as well, introducing inferencing and other statistical methods that few would have predicted would play such a powerful role in technology, Cronin said. In the olden days of academic AI research, the amount of data available to reason against was relatively sparse, compared to the mountains of stuff we have today.
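
A toy example makes the point (ours, not Cronin's): even the simplest statistical inference -- here, estimating a click-through rate with a Beta-Binomial model in Python -- gets dramatically sharper as the data grows, which is exactly the leverage the big Internet companies enjoy.

    # Toy statistical-inference sketch -- ours, not from the talk.
    # Bayesian estimate of a click-through rate under a Beta(1, 1) prior:
    # the posterior mean barely moves with more data, but the uncertainty
    # around it collapses -- the sense in which more data means better inference.
    def posterior_mean(clicks, impressions, prior_a=1.0, prior_b=1.0):
        return (prior_a + clicks) / (prior_a + prior_b + impressions)

    print(posterior_mean(3, 10))        # ~0.33 from 10 impressions: a rough guess
    print(posterior_mean(3000, 10000))  # ~0.30 from 10,000: a confident estimate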

Google has made bank from its massive set of data on its users, which it collected first and figured out how to make money from later. The company didn't initially get hung up on "putting a lot of structure in the model," Cronin said. Google engineers have termed this the "unreasonable effectiveness of data."

In the long haul, however, we will have to put more thought into deeper learning techniques than we do now, Cronin said. Today's methods just aren't going to get us to full artificial intelligence. "We need richer, more predictive models," Cronin said, ones that can "routinely make predictions of what will happen."

One member of the audience, Juan Pablo Velez, a data analyst at the New York data science consultancy Polynumeral, agreed with Cronin's assessment of AI.

"A lot of new innovation has come around in deep learning that has been rolled out in scale, like Google image search. But the research is very much tied to the agendas of big companies and it doesn't necessarily mean we are any closer to generalized machine intelligence," Velez said.

In many ways, we are at the same point in AI research where we've always been: moving forward rapidly in some aspects, while seemingly standing still in relation to the big goal, generalized artificial intelligence. As Facebook head of AI research Yann LeCun has said, AI research is like driving fast in the fog, where you can't see the next roadblock you will hit.

Until we build a machine that can look ahead into the fog for us, the future of AI will remain uncertain.

Joab Jackson covers enterprise software and general technology breaking news for The IDG News Service. Follow Joab on Twitter at @Joab_Jackson. Joab's e-mail address is Joab_Jackson@idg.com
