Intel: New chip materials will enable massive AI research gains

In a recent survey conducted by Lopez Research, 86% of companies said they thought AI would be of strategic significance to their business, while only 36% believed they’d actually made meaningful progress with AI. Why the disparity? Intel VP and CTO of AI products Amir Khosrowshahi and general manager of IoT Jonathan Ballon shared their thoughts onstage at VentureBeat’s 2019 Transform conference in San Francisco.

It’s undoubtedly true that the barriers to AI adoption are much lower than they once were, according to Ballon. He believes what’s changed is that startups and developers — not just academics and large companies — in “every industry” now have access to vast amounts of data, in addition to the tools and training necessary to implement machine learning in production.

That insight jibes with a report Gartner released in January, which found that the number of enterprises implementing AI grew 270% over the past four years, and that 37% of organizations have now deployed AI in some form, up from 10% in 2015. None of that is too surprising, considering that by some estimates the enterprise AI market will be worth $6.14 billion by 2022.

Despite the embarrassment of development riches, Ballon says identifying the right tools remains a hurdle for some projects. “If you’re doing something that’s cloud-based, you’ve got access to vast computing resources, power, and cooling, and all of these things with which you can perform certain tasks. But what we’re finding is that almost half of all of the deployments and half of all the world’s data sits outside of the datacenter, and so customers are looking for the ability to access that data at the point of origination,” he said.

This burgeoning interest in “edge AI” has to an extent outpaced the hardware, much of which simply isn’t capable of handling tasks better suited to a datacenter. Training state-of-the-art AI models is vastly more time-consuming without the aid of cutting-edge cloud chips like Google’s Tensor Processing Units and Intel’s forthcoming Nervana Neural Network Processor for training (also known as the NNP-T 1000), a purpose-built high-speed AI accelerator card.

“Processors, cooling infrastructure, software frameworks, and so forth have really enabled [these AI models], and it’s kind of an enormous amount of compute,” said Khosrowshahi. “[It’s all about] scaling up processing compute and running all the stuff on specialized hardware infrastructure.”

Fragmentation doesn’t help, either. Khosrowshahi says that despite the proliferation of tools like Google’s TensorFlow and the Open Neural Network Exchange (ONNX), an open format for representing neural network models so they can be moved between frameworks, the developer experience isn’t particularly streamlined.
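To make the interchange idea concrete, here’s a minimal sketch, assuming PyTorch and onnxruntime are installed, of exporting a toy model to ONNX and running it in a framework-agnostic runtime (the model and file name are stand-ins for illustration, not anything Intel or Google ships):

```python
import torch
import torch.nn as nn
import onnxruntime as ort

# A stand-in model; any trained PyTorch module exports the same way.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Export to the ONNX interchange format, tracing shapes with a dummy input.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Any ONNX-compatible runtime can now execute the model,
# regardless of the framework it was trained in.
session = ort.InferenceSession("model.onnx",
                               providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy_input.numpy()})
print(outputs[0].shape)  # (1, 2)
```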

Looking at the workflow associated with actually deploying an AI model, Ballon said the degree to which the hardware architecture is abstracted away from data scientists and application developers still has a long way to go. “We’re not there yet, and until we get to that point, I think it’s incumbent on software developers to understand both the pros and cons, the limitations of various hardware choices.”
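ONNX Runtime’s execution providers hint at what that abstraction could look like: the same model file can target different backends, with the application falling back to CPU when an accelerator isn’t present. A sketch, assuming an onnxruntime build that includes the relevant providers (Intel’s OpenVINO and NVIDIA’s CUDA providers only exist in builds compiled with them):

```python
import onnxruntime as ort

# Discover which hardware backends this onnxruntime build supports.
print(ort.get_available_providers())

# Preference-ordered list: try accelerator-specific providers first,
# then fall back to the CPU provider, which is always available.
preferred = ["OpenVINOExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]
available = [p for p in preferred if p in ort.get_available_providers()]

# "model.onnx" is the stand-in file from the sketch above.
session = ort.InferenceSession("model.onnx", providers=available)
print(session.get_providers())  # the providers actually in use
```

Until abstractions like this mature, as Ballon notes, developers still have to weigh the tradeoffs of each hardware backend themselves.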

There’s no magic bullet, but both Ballon and Khosrowshahi believe hardware innovations have the potential to further democratize powerful AI.