Startup offers inference processor for data centers

November 08, 2018 | By Peter Clarke
Habana Labs, Ltd. (Tel-Aviv, Israel) has started sampling its neural network processor, the HL-1000, otherwise known as Goya, to selected customers.

The SynapseAI software stack analyses the trained model and optimizes it for execution on the HL-1000 processor. It also provides interfaces to neural-network frameworks such as MXNet, Caffe2, TensorFlow, the Microsoft Cognitive Toolkit, PyTorch, and the Open Neural Network Exchange (ONNX) format.
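Habana has not published SynapseAI's own APIs in this announcement, so purely as an illustration of the hand-off it describes, the sketch below exports a trained PyTorch model to ONNX, one of the front-end formats the stack is said to accept. The model, shapes, and file name are placeholders, and the Habana-specific step of compiling the ONNX graph for the HL-1000 is not shown.

```python
# A minimal sketch of the framework-to-ONNX hand-off described above.
# Standard PyTorch APIs only; everything Habana-specific is omitted.
import torch
import torch.nn as nn

# Stand-in for a trained network; in practice this would be a fully trained model.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

# A dummy input fixes the graph's input shape for the exporter.
dummy_input = torch.randn(1, 3, 224, 224)

# Serialize the model to ONNX; the .onnx file is the vendor-neutral artifact
# that an inference compiler such as SynapseAI can ingest and map onto the target chip.
torch.onnx.export(model, dummy_input, "model.onnx")
```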

However, Habana argues that training and inference are not efficiently served by the same chip, and so it offers separate processors for the two workloads. Habana Labs plans to sample its HL-2000, or Gaudi, training processor in the second quarter of 2019. Gaudi has a 2Tbps interface per device, and its training performance scales well to thousands of processors, Habana claims.

But the Goya processor is not restricted to working on models trained by the Gaudi HL-2000. The inference processor supports models trained on any processor: GPU, TPU, CPU, or Habana's own Gaudi.

Habana Labs was founded in 2016 and employs 120 people worldwide.

Related links and articles:

www.habana.ai

HL-1000 white paper

News articles:

Gyrfalcon launches second AI accelerator

NovuMind benchmarks tensor processor

Chinese AI startup preps accelerator IC samples

Indo-US startup preps agent-based AI processor

Alibaba forms chip subsidiary Pingtouge

