By Charles King, Pund-IT, Inc. February 1, 2017
IBM’s recent announcement that it will support TensorFlow 0.12 in its PowerAI distribution may have confused some readers, especially those whose views of the company focus on its enterprise business roots. The fact is that the TensorFlow announcement details both where IBM is today and where it aims to be in the future, and just as importantly it demonstrates some practical steps the company will take to get from here to there. That’s worth further consideration.
Machine learning, deep learning and PowerAI
If you haven’t been keeping up with trends in artificial intelligence (AI), the announcement may seem especially confounding, but the core technologies involved are fairly straightforward. To develop solutions that support AI-related functions or tasks, like computer vision, speech and pattern recognition, and text analytics, underlying systems are trained with machine learning and deep learning frameworks.
The difference between the two is that while machine learning broadly encompasses the algorithms and processes that enable systems to learn from data, deep learning is the subset focused on training or tuning multi-layer “neural networks” to perform AI-related tasks. In virtually every case, that requires underlying systems to ingest massive datasets with millions or even billions of elements, and to perform training “exercises” millions or tens of millions of times.
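At its core, each of those training “exercises” is the same simple repetition: make a prediction, measure the error, and nudge the model’s parameters to shrink it. A minimal sketch in plain Python, using a toy two-parameter model rather than IBM’s PowerAI stack or TensorFlow itself, illustrates the loop that deep learning frameworks run at vastly larger scale:

```python
import random

# Toy illustration of "training as repetition": fit y = w*x + b
# by gradient descent. Real deep learning runs this same loop over
# millions of examples and millions of parameters.
random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [3.0 * x + 0.5 for x in xs]  # the "true" relationship to learn

w = b = 0.0
lr = 0.1  # learning rate: how far each repetition nudges the model
for _ in range(2000):  # thousands of repeated training "exercises"
    # gradients of mean squared error with respect to w and b
    gw = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    gb = sum((w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * gw
    b -= lr * gb

print(w, b)  # converges toward w ≈ 3.0, b ≈ 0.5
```

The point of GPU-accelerated systems like those behind PowerAI is precisely that each pass of this loop involves enormous matrix arithmetic, so speeding up the repetitions pays off millions of times over.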
This recalls the “10,000 hours rule” to attain expertise that Malcolm Gladwell mentioned in his 2008 book, Outliers. The difference here is that by using the right deep learning framework and robust foundational systems, AI projects can benefit from performing endless training exercise repetitions far more quickly and efficiently than they would using conventional servers.
That’s where IBM’s PowerAI distribution comes in, leveraging the company’s Power Systems S822LC for HPC servers and POWER8 processors, along with partner NVIDIA’s NVLink interconnect technology and Pascal-based Tesla P100 GPU accelerators.
Riding the TensorFlow wave
What makes TensorFlow so special? That’s a good question that isn’t easy to answer. It certainly isn’t the market’s only deep learning framework; in fact, IBM also supports Caffe, Chainer, Theano, Torch and NVIDIA DIGITS, along with several others. But TensorFlow has attained remarkable popularity and momentum since Google released it in November 2015.
How popular? In a blog post last July, Delip Rao, founder of Joostware, reported adoption data he derived from Stack Overflow showing unusually strong and growing interest in TensorFlow. He also noted engagement statistics from GitHub showing TensorFlow to be the most forked project of 2015, despite launching in November of that year. Other sources demonstrate similar trends (the one at RedMonk is particularly valuable), along with discussions on Quora and elsewhere online.
Why is TensorFlow adoption so enthusiastic? That depends on whom you talk with, but the arguments range from its depth and ease of use to Google’s marketing muscle to the framework’s likely longevity (thanks to Google’s resources and commitment). In IBM’s case, add the fact that Google was one of the five founding members of the OpenPOWER Foundation, the 300+ member industry consortium developing new data center solutions based on IBM’s openly licensed POWER architecture.
Add in the fact that IBM plans to deliver software support for the PowerAI stack through its Technology Support Services, and deep learning design and development offerings through Global Business Services, and you can see how the company should be able to capture solid practical, strategic, tactical and economic benefits by supporting TensorFlow 0.12.
Virtually every IT vendor balances its investments between building the solutions customers require immediately and developing those they are likely to need in the months and years ahead. IBM’s considerable investments in advanced analytics have allowed it to establish a leadership position with its Watson cognitive platform.
But those same investments, particularly in expanding the capabilities of its muscular POWER silicon and Power Systems solutions, are creating expansive AI market opportunities for IBM. Not every machine learning and deep learning effort will utilize TensorFlow, but the accelerating popularity and momentum behind Google’s framework suggest it is likely to become an AI platform of choice for many enterprises.
If that should come to pass, the decision to support TensorFlow will be looked back on as a prescient bet with substantial payoffs for IBM, its partners and customers.
© 2017 Pund-IT, Inc. All rights reserved.