Enabling AI from the Edge to the Cloud: Gyrfalcon Technologies Offers New IP Licensing Model

By Charles King, Pund-IT, Inc.  April 24, 2019

Artificial Intelligence (AI) is a cause célèbre inside and outside the IT industry, inspiring often heated debate. However, one point that many observers, especially AI-focused vendors, make is that cloud-based computing offers the best model for supporting AI frameworks, such as Caffe, PyTorch and TensorFlow, and related machine learning processes.

But is that actually the case?

Gyrfalcon Technology (GTI) would argue that delivering robust AI at the far edges of networks and in individual devices is both workable and desirable for many applications and workloads. In fact, the company offers a host of AI inference accelerator chips that can be used for those scenarios, as well as cloud-based server solutions for AI applications.

Now GTI is licensing its proprietary circuitry and intellectual property (IP) for use in System on Chip (SoC) designs. As a result, silicon vendors will be able to enhance and customize their own offerings with GTI’s innovations.

Let’s take a closer look at what Gyrfalcon Technology is up to.

AI in the cloud

Why do most AI solutions focus on cloud-based approaches and architectures? You could call it an extreme case of “When all you have is a bulldozer, everything looks like a dirt pile” syndrome. The fact is that until fairly recently, the cost of AI far outweighed any practical benefits. That changed with new innovations, including cost-effective technologies like GPUs and FPGAs.

Some of the most intriguing and ambitious AI projects and commercial offerings, like human language processing, were undertaken by cloud vendors and infrastructure owners, including Amazon, Google and IBM, supported on the silicon side by NVIDIA, Intel and other chipmakers. They had the compute and brain power to take on large-scale efforts in which data accumulated by edge devices, such as smartphone conversations and commands, is relayed to cloud data centers.

There, the data is used for training and enabling AI-based services, such as language translation and transcription, and products like smart home speakers.

Are there any problems with this approach? Absolutely, with data privacy and security leading the charge. AI vendors uniformly claim that they are sensitive to their customers’ concerns about privacy and have tools and mechanisms in place to ensure that data is anonymized and safe. But Facebook, Google and others have been regularly dinged for mishandling customer data or maintaining it cavalierly.

Cloud-based AI can also suffer latency issues, especially if network traffic is snarled. That might not be a big deal when you’re asking Alexa to recommend a good restaurant, but it’s more problematic when AI-enabled self-driving cars are involved. There’s also the matter of using energy wisely. With the percentage of electricity consumed by data centers continuing to rise globally, building more IT facilities to support occasionally frivolous services seems like a literal waste.

AI at the edge

Gyrfalcon Technology would argue that while cloud-based AI has an important role, it isn’t needed for every application or use case. Instead of a bulldozer, some jobs require a shovel or even a garden trowel. To that end, GTI offers a range of AI inference accelerator chips that support AI Processing in Memory (APiM) via ultra-small, energy-efficient cores running GTI’s Matrix Processing Engine (MPE).

As a result, GTI’s solutions, like its Lightspeeur 2801 AI Accelerator, can deliver 2.8 TOPS while using only 300mW of power. That makes them a great choice for edge-of-network devices, including security cameras and smart home locks. After initial setup, the chip’s adaptive training functions allow devices to learn from their surroundings. For example, a smart lock might use arrival and departure patterns to identify the residents of a home.

Enabling AI at the edge means that devices will be able to perform many functions autonomously or, if cloud connectivity is required, will be capable of vastly reducing the amount of data that needs to be transmitted. That lowers the costs, complexity and network traffic of AI implementations.

For cloud-based applications, GTI offers Lightspeeur 2803 AI Accelerators, which are used in concert with GTI’s GAINBOARD 2803 PCIe card. A single GAINBOARD card delivers up to 270 TOPS while drawing 28 Watts, or 9.6 TOPS/Watt, roughly 3X the efficiency of competing solutions.
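For readers who want to verify the efficiency claims, TOPS/Watt is simply rated throughput divided by power draw. A quick illustrative sketch using the figures cited above (the function name is ours, not GTI’s):

```python
# Illustrative check of the compute-efficiency figures cited above.
# Efficiency (TOPS/Watt) = throughput in TOPS / power draw in Watts.

def tops_per_watt(tops: float, watts: float) -> float:
    """Return compute efficiency in TOPS per Watt."""
    return tops / watts

# GAINBOARD 2803 PCIe card: up to 270 TOPS at 28 Watts
gainboard = tops_per_watt(270, 28)

# Lightspeeur 2801 accelerator: 2.8 TOPS at 300 mW (0.3 W)
lightspeeur_2801 = tops_per_watt(2.8, 0.3)

print(f"GAINBOARD 2803:   {gainboard:.1f} TOPS/Watt")        # ~9.6
print(f"Lightspeeur 2801: {lightspeeur_2801:.1f} TOPS/Watt")  # ~9.3
```

The arithmetic bears out the 9.6 TOPS/Watt figure for the card, and shows the single-chip 2801 operating in the same efficiency range.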

Final analysis

The IT industry rightfully focuses on the value that innovative technologies and products provide to both consumers and businesses. Such solutions regularly come from massive Tier 1 vendors with decades of experience and billions of dollars in annual R&D funding. But oftentimes, innovative products and approaches are the brainchildren of smaller vendors, like Gyrfalcon Technology, that are unawed by conventional wisdom.

With its AI Processing in Memory (APiM) and Matrix Processing Engine (MPE) technologies, GTI has enabled clients, including LG, Fujitsu and Samsung, to reimagine how artificial intelligence can be incorporated into new consumer and business offerings. By licensing its Lightspeeur 2801 and 2803 AI Accelerator circuitry and intellectual property (IP) for use in System on Chip (SoC) designs, GTI is offering existing and future clients remarkable autonomy in determining how AI can best serve their own organizations and their end customers.

© 2019 Pund-IT, Inc. All rights reserved.