IDF 2016: Growing the Power of Intel’s Deep Learning Platforms

By Charles King, Pund-IT, Inc.  August 17, 2016

I’m at the Intel Developer Forum (IDF) in San Francisco today. While I plan to discuss the overall IDF experience in next week’s Review, here are a few thoughts on Intel’s efforts and strategy regarding artificial intelligence (AI), a subject that’s been featured or addressed in virtually all of the conference keynotes and executive sessions I’ve attended.

AI and related technologies, including advanced analytics, computer vision, language processing, machine learning and deep learning, have a deservedly high profile in the tech industry. All are contributing to what many people consider leading-edge technologies and next-generation services and solutions, like self-driving cars, predictive healthcare and precision medical treatments. But how does the larger IT industry get from here to there?

As has been the case in past technological leaps forward, innovative specialists produce encouraging, breakthrough results but typically lack the funding and personnel to sustain viable solutions. Established companies often spread their attention across numerous commercial projects and markets, achieving commercial momentum but failing to deliver “aha” insights or innovations.

That’s the point where major vendors can step in to provide the investments and commitment necessary to bring energy and order to the innovation process. Intel’s acquisition of Nervana Systems (announced just prior to IDF) is one such effort, aimed at significantly expanding the company’s skills, solutions and services related to deep learning and AI.

What is deep learning?

So what is deep learning and what is its role related to machine learning and AI? At a fundamental level, deep learning is a primary building block of modern AI services.

To begin, what we call AI is the ability of computers to perform human tasks and exhibit humanlike traits. AI has deep footprints in science fiction and popular entertainment; think the “HAL” computer in 2001: A Space Odyssey and the gun-happy cyborgs of the Terminator films. While those capabilities are far beyond current technologies, more targeted versions of AI are increasingly common in services like machine-enabled voice recognition, facial recognition and file/image classification.

Machine learning broadly describes the algorithms and processes used to develop those AI capabilities. In short, machine learning leverages algorithms to collect, parse and classify data, thus enabling the systems driving AI to “learn” from the data they consume and use that knowledge to draw conclusions, make predictions and support other automated decision processes. But like any kind of learned skill or process, the success of AI depends on repetition and associating newly parsed information with past lessons learned.
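To make that concrete, here is a minimal sketch of the learn-from-labeled-data idea described above. The “learning” shown, a nearest-neighbor classifier, is one of the simplest possible approaches, and the data and labels are invented purely for illustration; they are not drawn from any Intel or Nervana product.

```python
# Minimal sketch of the machine-learning loop: collect labeled data,
# "learn" from it, then use that knowledge to classify new inputs.
# Data and labels below are hypothetical, for illustration only.

def train(examples):
    """Here, 'learning' is simply memorizing labeled examples."""
    return list(examples)

def predict(model, point):
    """Classify a new point by the label of its nearest remembered example."""
    nearest = min(model, key=lambda ex: abs(ex[0] - point))
    return nearest[1]

# Labeled training data: (measurement, label) pairs
data = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
model = train(data)

print(predict(model, 1.5))  # prints "small"
print(predict(model, 8.5))  # prints "large"
```

Real machine learning systems replace the memorization step with statistical models, but the shape of the process — ingest data, build a model, apply it to new inputs — is the same.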

As a comparison, consider the current Olympic Games, and the performance of premier athletes like Simone Biles and Michael Phelps. Both dedicated themselves to tens of thousands of hours of rigorous, often repetitious exercise and practice to achieve gold medal-quality performance. That parallels the role that deep learning plays in AI development.

Deep learning technologies are used to train AI systems with computationally intensive neural networks. Those networks are designed to serve up thousands, tens of thousands or even millions of “lessons” to AI systems, assess the answers, then tune the systems to increase the accuracy and relevance of their responses. In other words, deep learning complements and amplifies the machine learning processes fundamental to AI.
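The serve-a-lesson, assess-the-answer, tune-the-system loop described above can be sketched in a few lines. This toy example trains a single linear unit by gradient descent; the lesson data, learning rate and target relationship (target = 2 × input) are all invented for illustration. Real deep learning stacks many thousands of such units into large neural networks, which is what makes the training so computationally intensive.

```python
# Hedged sketch of the deep learning training loop: serve a lesson,
# assess the system's answer, then tune the system to reduce its error.
# All values here are hypothetical, chosen for illustration only.

# "Lessons": input/target pairs the system should learn (target = 2 * x)
lessons = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

weight = 0.0            # the model's single tunable parameter
learning_rate = 0.05

for epoch in range(200):                   # repetition: many passes over the lessons
    for x, target in lessons:
        answer = weight * x                # serve a lesson; get the system's answer
        error = answer - target           # assess the answer against the target
        weight -= learning_rate * error * x  # tune the system to reduce the error

print(round(weight, 2))  # converges toward 2.0, the relationship in the lessons
```

The repetition the article emphasizes is visible directly in the code: each pass over the lessons nudges the parameter closer to the value that produces correct answers.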

Mutual deep learning commitments

So what exactly is Nervana Systems, and why is it important to Intel? Founded in 2014 and headquartered in San Diego, California, the company has developed a fully optimized software and hardware stack for deep learning. Intel believes that Nervana’s IP and expertise in accelerating deep learning will expand its AI capabilities.

Nervana’s algorithms, software skills and Neon deep learning framework complement Intel’s Math Kernel Library and will facilitate integration into industry-standard frameworks. The Nervana Engine (a custom ASIC optimized for deep learning) and the company’s silicon expertise will advance Intel’s AI portfolio and enhance the deep learning performance and TCO of Intel’s Xeon and Xeon Phi processors.

Final analysis

What does all this mean to Intel, its developer partners and their mutual customers? First and foremost, the company believes that AI and its supporting technologies represent an enormous yet still nascent commercial market. At IDF 2016, Diane Bryant, EVP and GM of Intel’s Data Center Group, noted that while just 7 percent of servers worldwide are currently supporting machine learning processes, Intel silicon is running in 97 percent of those systems.

That represents a huge opportunity, but Intel is anything but alone in pursuing it. Other notable companies are in the AI hunt, including enterprise vendors like IBM, cloud players including Google and Amazon, and other silicon vendors, such as NVIDIA. To stay ahead of the curve, Intel is committing sizable financial and human capital to its commercial AI efforts.

From the looks of things, Nervana Systems will play a critical role in those efforts, and represents Intel’s willingness to bet big and sizably expand its search for leading technologies and a competitive edge. Competitors may believe they can overtake and overcome the company. But time and again, Intel has demonstrated that it has what it takes to go for and win the gold.

© 2016 Pund-IT, Inc. All rights reserved.