GTC 2016: The Growing Value of Data Center Innovation

By Charles King, Pund-IT, Inc.  April 13, 2016

For many in Silicon Valley, NVIDIA’s annual GPU Technology Conference (GTC) is one of the industry’s “must-attend” events. In large part, that’s because GPU/graphics technologies are at the leading edge of some of the IT industry’s coolest and most mind-boggling advancements.

That was certainly clear at this year’s event, where the GTC Expo floor was crowded with vendors promoting their latest efforts in virtual reality (VR), 3D hardware and software, gaming and other graphics-intensive products. But you could also argue that GTC’s popularity is due to NVIDIA’s continuing, evolutionary application of GPU technologies in areas extending far beyond traditional PC- and workstation-based solutions.

Beginning the journey

That process began over half a decade ago when the company’s then-new Kepler K20x GPUs were picked (along with AMD Opteron CPUs) to drive the next-generation Titan supercomputer at the Oak Ridge National Laboratory. NVIDIA’s Kepler was chosen because, like other graphics accelerators, its high thread count supported more robust parallelism than traditional CPUs could deliver. When Titan went online in late 2012, it immediately topped the twice-yearly Top500.org list of global supercomputers, and was ranked #3 in terms of energy efficiency.
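
To make the parallelism point concrete, here is a minimal, hypothetical CUDA sketch (not code from Titan or any Oak Ridge workload) showing how a single kernel launch spreads a simple operation across roughly a million GPU threads, work a conventional CPU core would typically step through a handful of elements at a time.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Each GPU thread scales exactly one array element; the launch below
    // creates roughly a million threads, so the entire array is covered
    // in one massively parallel pass.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;                      // ~1 million elements
        float *d_data = NULL;
        cudaMalloc((void **)&d_data, n * sizeof(float));
        cudaMemset(d_data, 0, n * sizeof(float));

        const int threadsPerBlock = 256;
        const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);
        cudaDeviceSynchronize();

        printf("launched %d blocks of %d threads\n", blocks, threadsPerBlock);
        cudaFree(d_data);
        return 0;
    }

Production supercomputing codes distribute that same pattern across thousands of GPUs, but the underlying appeal is identical: far more concurrent threads per processor than any conventional CPU can offer.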

Titan was displaced the following June by China’s Tianhe-2, and the pair has retained its 1-2 ranking on the Top500.org list since then. But more importantly, Titan’s success helped move the concept of leveraging graphics acceleration solutions, like NVIDIA’s Kepler K20x and Fermi 2050 and Intel’s Xeon Phi, into the mainstream. In the most current Top500.org list (November 2015), 70 systems utilize accelerators (52 with Kepler K20x, 14 with Fermi 2050 and 4 with Xeon Phi), including two in the top 10 and six in the top 20.

At the same time, the company targeted traditional data center solutions. In November 2014, it launched NVIDIA GRID, a virtualized, GPU-based system that enables advanced graphics capabilities to be streamed as a commercial service, beginning with an online gaming platform. The following year, the company was onstage at VMworld promoting NVIDIA GRID as a platform for streaming sophisticated graphics content to endpoints in virtual desktop infrastructures (VDI), including thin clients and tablets that lack conventional graphics cards.

At GTC 2016, NVIDIA focused on impressive new and continuing efforts in advanced supercomputing installations, artificial intelligence (AI), autonomous (self-driving) cars and deep learning. But the conference also provided concrete examples of how a widening range of NVIDIA partners and customers in business and vertical industries are putting its advanced GPU-based solutions to work. As a result, this year’s conference was the most impressive GTC event to date.

Keynotes and the Expo

Major NVIDIA events typically kick off with a keynote by company co-founder and CEO Jen-Hsun Huang, and GTC 2016 was no exception. Huang’s “A New Computing Model” theme focused on evolutionary changes in computing—toward gigantic scale, high stakes and great impact—that require “supercharged GPU-Accelerated Computing.”

Huang’s comments focused on a number of new solutions NVIDIA is offering to developers and business customers, including:

  • software developer kits (SDKs) for photorealistic graphics,
  • a preview of features coming in CUDA 8, NVIDIA’s GPU computing programming architecture,
  • interactive/photorealistic VR (Iray VR) capabilities,
  • new Tesla P100 GPUs for hyperscale data centers,
  • NVIDIA DGX-1, which Huang called “the world’s first deep learning supercomputer,” starting at $129,000, and
  • NVIDIA DRIVE PX 2, a new deep-learning-enabled system for cars.

If this sounds ambitious on paper, it was even more impressive on stage where Huang was joined by guests and co-presenters, including legendary Apple co-founder Steve Wozniak (by video feed) and technical leaders from Google and Baidu.

The GTC audience ate it all up and hit the conference Expo for extra helpings. A general note about conference expos: many, if not most, are designed to generate business leads for attending vendors, so they tend to mix product information with overt sales pitches. There was certainly some of that at GTC, but it was mixed heavily with “gee whiz” presentations, including NVIDIA’s own immersive VR demonstration booth, which consistently took the prize for the Expo’s longest lines.

The sense that GTC offers a “glimpse of tomorrow, today” probably contributed to the air of patient expectation I witnessed among conference attendees. The crowd was heavily weighted toward computing professionals, which isn’t surprising since attendance leans toward developers and others looking to glean information from the conference’s 500+ technical sessions. But along with the explicit seriousness of the material was an implicit sense of fun.

Yes, the abundance of information on hand was enough to keep even the most serious gearheads busy. But that didn’t mean they couldn’t enjoy themselves.

OpenPOWER Redux

As I mentioned in last week’s Review, my main reason for being at GTC 2016 was to attend the parallel OpenPOWER Summit, and it did not disappoint. Guest analyst Joe Clabby (in this same issue of the Review) provides a terrific overview of the Summit, so I’ll offer thoughts on a couple of specific issues.

First, the continuing growth of both Foundation members (from the 130 announced last year to over 200 this year) and commercial solutions (adding 50 more solutions to the 30 introduced in 2015) suggests that the Foundation has real staying power (no pun intended). That’s partly due to the robustness of IBM’s POWER architecture, the leading force in RISC-based systems, as well as the growth of Linux-based solutions in on-premises and cloud computing.

But it also says something about the value of open source collaboration in initiating new business models and standards. Like most IT industry analysts, I’ve seen dozens of strategic partnerships and alliances try and mostly fail to drive forward from the starting line. In contrast, the OpenPOWER Foundation seems to be firing on all cylinders and accelerating with little, if any, resistance. The Foundation also seems to be lending momentum to IBM’s Power Systems organization, which noted a 4% increase in revenues during Q4 2015.

That was a significant improvement over recent quarters, but Power Systems’ performance also outshone that of most x86-based server vendors. That good news can’t all be attributed to the OpenPOWER Foundation’s success, but it seems reasonable to infer that some synergies exist. In particular, by nurturing new vendors, use cases and markets for POWER-based technologies, the Foundation is also emphasizing the continuing vitality of the POWER architecture. That’s all to the good for IBM.

Google and POWER

There were numerous news events and product announcements during the Summit, but the most impactful were those made by Foundation co-founder Google, which noted that it is developing (with Rackspace) a new server form factor based on IBM’s upcoming POWER9 processors. That was big news for two connected reasons: 1) the sheer size of Google’s market presence and hyperscale IT infrastructure, and 2) the company’s longstanding reluctance to discuss its data center efforts and plans.

That Google is publicly discussing its current use of POWER-based systems was surprising enough, but even more so was this comment by Maire Mahoney, a Google engineering manager and a director on the OpenPOWER Foundation board: “We (Google) have ported our infrastructure onto the Power architecture. And most importantly, what that means is that our toolchain supports Power. So for our Google developers, enabling Power for their software applications is simply a matter of modifying a config file and off they go.”

In other words, Google has used software innovation to make shifting from one hardware platform to another an essentially trivial event. That should allow the company to easily leverage whichever processor/system best complements a specific application, workload or use case. It also significantly lowers the risk of testing and adopting new silicon and system designs, and makes it easier to quickly abandon those that fail to deliver the goods. Certainly immediately, and probably over the longer term, this is very good news for the POWER architecture.
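
As a rough illustration of that idea (a minimal, hypothetical sketch, not Google’s actual toolchain or build system), architecture-neutral source code lets the build configuration alone decide whether a binary targets x86_64 or POWER hosts; the source never changes, only the toolchain settings that select the compiler target. The compiler’s predefined macros show which target a given build produced.

    #include <cstdio>

    // The same source file builds unchanged for either host architecture;
    // only the build configuration (which compiler and target it selects)
    // differs. Predefined compiler macros report the result.
    int main()
    {
    #if defined(__powerpc64__)
        printf("built for a POWER (ppc64le) host\n");
    #elif defined(__x86_64__)
        printf("built for an x86_64 host\n");
    #else
        printf("built for another host architecture\n");
    #endif
        return 0;
    }

The point is less the code than the division of labor: once compilers, runtimes and the rest of a toolchain support a new architecture, application teams experience the change as a build setting rather than a porting project, which is exactly the dynamic Mahoney described.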

Final analysis

Overall, GTC 2016 delivered the goods in terms of short-term newsworthiness and long-term strategic vision. NVIDIA’s past successes bolster the company’s new efforts in deep learning, hyperscale data centers and autonomous automobiles. For a company that has repeatedly found new markets and use cases for innovative GPUs, the future looks bright, indeed.

A bright future also seems probable for the OpenPOWER Foundation and its growing member roster. That was evident in the expanding depth and breadth of new POWER-based data center solutions. And Google’s willingness to break its silence and discuss its current and future use of OpenPOWER technologies bodes especially well for the Foundation.

In fact, it seems likely that GTC 2017 will feature significant announcements and surprises. After this year’s conference, anything seems possible.

© 2016 Pund-IT, Inc. All rights reserved.