The company is due to report on this circuit at the VLSI Symposia on Technology and Circuits, which has gone virtual for 2020 and takes place June 15 to 19.
Most software neural networks were originally deployed using native or floating-point datatypes. But binarization of data and weights has been shown to be a promising technique for deploying deep models on resource-limited devices, notwithstanding some loss of information. Binarization can add a processing step and enlarge networks, but it also allows for network pruning.
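The appeal of binarization is that once weights and activations are constrained to {-1, +1}, a multiply-accumulate collapses into bitwise XNOR plus a popcount. The sketch below is a generic illustration of that idea, not Intel's circuit; the function names and encoding are illustrative assumptions.

```python
import numpy as np

def binarize(x):
    """Map real values to {-1, +1} by sign (zero maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(a_bin, w_bin):
    """Dot product of {-1, +1} vectors via XNOR-popcount.

    Encode +1 as bit 1 and -1 as bit 0. A product term is +1 exactly
    when the two bits match (XNOR), so:
        dot = 2 * popcount(XNOR(a, w)) - n
    Hardware does this on packed words; NumPy boolean ops stand in here.
    """
    n = a_bin.size
    matches = np.count_nonzero((a_bin > 0) == (w_bin > 0))  # popcount of XNOR
    return 2 * matches - n

# The XNOR-popcount result matches the ordinary dot product of the
# binarized vectors:
a = binarize(np.array([0.3, -1.2, 0.7, -0.1]))
w = binarize(np.array([0.5, 0.9, -0.4, -0.2]))
assert binary_dot(a, w) == int(np.dot(a, w))
```

Because the inner loop needs no multipliers, only gates and counters, this style of arithmetic is what makes extreme efficiency figures like the one reported here plausible.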
Intel's neural network accelerator, to be presented by senior research scientist Phil Knag, achieves the exceptionally high figure of 617 TOPS/watt.
Intel has been involved in machine learning in multiple ways, including the acquisitions of Movidius, Nervana and, most recently, its $2 billion acquisition of Habana Labs Ltd. (Caesarea, Israel) in December 2019.
Phil Knag is based at Intel Labs in Portland, Oregon.