Last November, when Google announced that machine learning research luminary Fei-Fei Li, Ph.D. would join Google’s Cloud Platform group, a lot was known about her academic work.
But Google revealed little about why she was joining the company, except that she would lead machine learning for the Google Cloud business.
After five months of suspense, Li revealed the focus of her new role during her keynote address at Google’s cloud developer conference, Cloud Next 2017. She will apply her experience to democratize machine learning for the enterprise.
Her task: Study the problems that machine learning could solve in a wide variety of industries and enable enterprises to adopt machine learning.
It sounds more like a job for an enterprise salesperson than for a Stanford research professor with over a hundred published papers in the field, but that would be the wrong conclusion.
Machine learning has produced amazing results, but its application has so far been narrow: it is applied in university research and by long-term investors in machine learning research and applications, such as Google, Facebook, IBM, and Microsoft, to solve their own domain-specific problems.
Some of this work is extensible to other industries. Li mentioned medical imaging during her keynote: models have matched doctors’ diagnostic accuracy for skin cancer and for diabetic retinopathy, the leading cause of blindness.
But she is looking for new greenfield applications that enterprises can use.
4 ways Google will enable enterprises to adopt machine learning and AI
Li made four points on the topic of democratizing AI. She began by saying, “Machine learning can deliver, but this remains a field of high barriers. It requires rare expertise and resources that few companies can afford.”
She proposed that Google’s cloud, technology and services serve as an AI and machine learning on-ramp for enterprises.
1. Machine learning computing in Google Cloud
Because a deep learning algorithm can have tens of millions of parameters, training these machine learning models requires enormous computational resources.
Here Li announced that the Cloud Machine Learning Engine had graduated from beta. The service is designed for companies whose data scientists and machine learning experts build their own unique machine learning models with libraries such as TensorFlow.
Training big models is computationally intensive and often requires expensive special purpose hardware. Training is iterative, requiring multiple learning cycles to optimize the performance and accuracy of the model.
Slow hardware means model developers must wait days, weeks, or even longer for a single training run before they can iterate to improve the model’s accuracy and performance.
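The iteration Li describes can be sketched with a toy example. This is plain Python, not the Cloud Machine Learning Engine API, and the model is deliberately tiny; real deep learning repeats the same loop over tens of millions of parameters, which is why fast hardware shortens the develop-train-evaluate cycle so dramatically.

```python
# Toy illustration of iterative model training: fit y = w * x by
# gradient descent. The model has one parameter; a deep network has
# tens of millions, trained by the same repeated passes over the data.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs; the true w is 2.0

w = 0.0              # model parameter, initialized to zero
learning_rate = 0.05

for epoch in range(200):               # each epoch is one pass over the data
    for x, y in data:
        error = w * x - y              # prediction error on this example
        w -= learning_rate * error * x # gradient step for squared error

print(round(w, 3))  # → 2.0, after many iterations
```

Each training run is one execution of this loop at vastly larger scale; a developer typically repeats whole runs many times while tuning the model, so cutting one run from weeks to hours changes what is practical to build.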
A machine learning team’s demand for training resources is bursty, unlike that of operational systems, so capital invested in on-premises hardware often sits underutilized.
Li proposed Google’s infrastructure as the solution to speed training times and improve the return on investment. Google’s cloud offers specialized hardware, including GPUs and its custom TPU ASICs, to accelerate training, and on-demand utilization of these resources improves ROI.
After the model is trained, it can be deployed to a range of platforms, from on-premises servers to mobile devices.
2. Algorithms and pretrained machine learning models
Today, most enterprises do not have the technical capabilities to build and train the custom machine learning models that would utilize the Machine Learning Engine.
These companies can apply machine learning with Google’s pre-trained models, using APIs to add capabilities such as understanding images and natural language to their applications.
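Using a pre-trained model through one of these APIs amounts to sending data and reading back the model's predictions. The sketch below builds a request body in the style of Google's Cloud Vision API (`images:annotate`); the endpoint and field names follow the public v1 API as of this writing, but check the current documentation before relying on them. No network call is made here, and the image bytes are placeholders.

```python
import base64
import json

# Sketch: asking a hosted pre-trained model to label an image, in the
# style of the Cloud Vision API. We only construct the JSON body an
# application would POST (with its API key) to the service endpoint.

ENDPOINT = "https://vision.googleapis.com/v1/images:annotate"

def build_label_request(image_bytes, max_results=5):
    """Return the JSON body requesting label detection for an image."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "LABEL_DETECTION", "maxResults": max_results}],
        }]
    }

body = build_label_request(b"...image bytes would go here...")
print(json.dumps(body, indent=2))
```

The point Li is making is visible in the shape of this code: the enterprise writes no model, no training loop, and owns no GPUs; it sends bytes and receives predictions.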
An API beta for understanding videos was also announced. It tags the content in a video via a timeline. Li referred to the videos as the dark matter of the internet because they are not indexed and require a serial search to find a specific element of content within the video.
This 3-minute video of the demonstration of the Cloud Video Intelligence beta quickly explains the capability.
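Timeline tags are what turn Li's "dark matter" into something searchable. The snippet below illustrates the idea with an invented annotation format (it is not the actual Cloud Video Intelligence response schema): once labels carry the time segments where they appear, finding a moment in a video is an index lookup rather than a serial scan.

```python
# Illustration of why timeline tags make video searchable. Assume an
# annotation service returns each label with the time segments (in
# seconds) where it appears; this data structure is invented for
# illustration, not the actual API response format.

annotations = [
    ("dog",   [(0.0, 4.5), (30.0, 41.2)]),
    ("beach", [(0.0, 12.0)]),
    ("car",   [(55.0, 60.0)]),
]

def find_label(label, annotations):
    """Return the time segments where a label appears in the video."""
    for name, segments in annotations:
        if name == label:
            return segments
    return []

# Instead of scrubbing through the whole video (a serial search),
# a lookup jumps straight to the relevant moments.
print(find_label("dog", annotations))  # → [(0.0, 4.5), (30.0, 41.2)]
```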
Li also mentioned that Google would apply its formidable investment in AI and machine learning research to create new products.
3. Google acquires Kaggle for data
Data is the raw material of AI and a steep barrier on an enterprise’s machine learning on-ramp. Li drew on her experience building the open-source ImageNet data set of over 15 million labeled images, which enabled advances in deep learning research.
ImageNet is an important resource, but many other machine learning challenges need different data sets.
Google acquired Kaggle for its data sets and talent. Kaggle, founded in 2010, is a community of 850,000 data scientists from around the world; it hosts competitions to build the most accurate predictive models and serves as a source of public data sets in a variety of fields.
4. The Advanced Solutions Lab
Li introduced the Advanced Solutions Lab for customers with ambitious goals to develop machine learning that solves complex problems. She cited the insurance company USAA as an example of a partner of the Advanced Solutions Lab.
A team of USAA engineers came to Google to learn from Google’s engineers and build a broad skill base specific to their insurance needs.
The Advanced Solutions Lab transfers skills to the enterprises most able to apply them. But it is also an opportunity for Li and her team to research the greenfield machine learning opportunities structured around specific industries.
Within the portfolio of Google’s parent company, Alphabet, the life sciences research company Verily engages its scientific and engineering teams to solve novel and difficult problems brought to it by other companies and institutions. It is unclear whether Li’s team will take this further step.
Notable enterprise applications of AI
Earlier in her keynote, Li began to describe some of the enterprise applications that interested her by saying: “So much more is waiting to be done.”
- Retail: Google’s AdSense can be extended by retailers to serve the best ad to the individual consumer.
- Supply chain: Optimize routes and inventory, predict changes in demand, drone and autonomous vehicle deliveries.
- News content: Individually personalize news and, presumably, screen fake news.
- Financial services: Predict credit card risk, manage an individual’s finances, flag criminal activity like money laundering and fraud, and automate processes, such as replacing call centers and processing insurance claims with trained AI agents.
- Healthcare: Li said the implications of AI in healthcare were profound—automated visual diagnosis, reduced overhead, fewer errors, extending healthcare to the underserved, augmented surgical practices, and improved administration in areas such as scribing electronic medical records (EMR) during doctor’s visits and management of chronic conditions.
It appears from Li’s keynote that Google is moving aggressively to apply its long and extensive experience in AI and machine learning to differentiate its cloud business and lead this segment of the market.
This article originally appeared on Network World.