How embedded FPGAs fit AI applications

June 18, 2018 //By Alok Sanghavi, Achronix Semiconductor Corp.
Artificial intelligence, and machine learning in particular, is reshaping the way the world works, opening up countless opportunities in industry and commerce. However, the optimum hardware architecture to support neural-network evolution, diversity, training and inferencing has yet to be determined. Alok Sanghavi surveys the landscape and makes the case for embedded FPGAs.

Alok Sanghavi, senior product marketing
manager, Achronix Semiconductor Corp

Applications span diverse markets such as autonomous driving, medical diagnostics, home appliances, industrial automation, adaptive websites, financial analytics and network infrastructure.

These applications, especially when implemented at the edge, demand high performance and low latency to respond successfully to real-time changes in conditions. They also require low power consumption, rendering energy-intensive cloud-based solutions unusable. A further requirement is for these embedded systems to be always on and ready to respond even in the absence of a network connection to the cloud. This combination of factors calls for a change in the way that hardware is designed.

Neural networks

Many algorithms can be used for machine learning, but the most successful ones today are deep neural networks. Inspired by biological processes and structures, a deep neural network can employ ten or more layers in a feed-forward arrangement. Each layer uses virtual neurons that perform a weighted sum on a set of inputs and then pass the result to one or more neurons in the next layer.
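The feed-forward layer described above can be sketched in a few lines of NumPy. This is a minimal illustration, not tied to any particular framework; the layer sizes, random weights and tanh activation are assumptions chosen purely for the example.

```python
import numpy as np

def dense_layer(inputs, weights, biases, activation=np.tanh):
    """One feed-forward layer: a weighted sum of the inputs plus a
    bias, passed through a nonlinearity before going to the next layer."""
    return activation(inputs @ weights + biases)

# Hypothetical three-layer network: 4 inputs -> 8 neurons -> 8 -> 2 outputs.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
layers = [
    (rng.normal(size=(4, 8)), np.zeros(8)),
    (rng.normal(size=(8, 8)), np.zeros(8)),
    (rng.normal(size=(8, 2)), np.zeros(2)),
]
for w, b in layers:
    x = dense_layer(x, w, b)   # each layer feeds the next
```

In a real deployment the weights would come from training rather than a random generator, and the multiply-accumulate operations inside `inputs @ weights` are exactly the workload that FPGA fabric can parallelize.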

Although there is a common-core approach to constructing most deep neural networks, there is no one-size-fits-all architecture for deep learning. Increasingly, deep-learning applications are incorporating elements that are not based on simulated neurons. As the technology continues to develop, many different architectures will emerge. Much like the organic brain itself, plasticity is a major requirement for any organization that aims to build machine learning into its product designs.

