The SynapseAI software stack analyzes a trained model and optimizes it for execution on the HL-1000 processor. It also interfaces with neural-network frameworks such as MXNet, Caffe2, TensorFlow, Microsoft Cognitive Toolkit, and PyTorch, as well as the Open Neural Network Exchange (ONNX) format.
Habana argues, however, that training and inference are best served by different chips, so it offers separate processors for the two workloads. The company plans to sample its HL-2000 (Gaudi) training processor in the second quarter of 2019. Gaudi provides a 2-Tbps interface per device, and Habana claims its training performance scales well to thousands of processors.
The Goya processor is not restricted to models trained by the Gaudi HL-2000, though. The inference processor supports models trained on any processor: GPU, TPU, CPU, or Habana's own Gaudi.
Habana Labs was founded in 2016 and employs 120 people worldwide.
Related links and articles: