Kneron said it has adopted a filter decomposition technology that divides large-scale convolutional computing blocks into a number of smaller ones that are computed in parallel. Combined with its reconfigurable convolution acceleration technology, the results from the small blocks are then integrated to achieve better overall computing performance. Model compression can shrink unoptimized models by a factor of several dozen, and a multi-level caching technique reduces the use of CPU resources and further improves overall operational efficiency.
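The article does not disclose how Kneron's filter decomposition works internally, but the basic idea of splitting one large convolution into smaller independent blocks whose partial results are summed can be sketched as follows. This is a minimal illustration using a naive NumPy convolution, with a channel-wise split chosen purely for demonstration; the block sizes, split axis, and function names are assumptions, not Kneron's design.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid 2-D convolution: x (C, H, W), w (C, kH, kW) -> (H-kH+1, W-kW+1)."""
    C, H, W = x.shape
    _, kH, kW = w.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[:, i:i + kH, j:j + kW] * w)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))   # 8-channel input feature map
w = rng.standard_normal((8, 3, 3))     # one 8-channel 3x3 filter

# One large convolutional computing block
full = conv2d(x, w)

# Decomposed: split the filter along the channel axis into two smaller
# blocks, compute each independently (these could run in parallel on
# separate hardware units), then integrate the partial results by summing.
partial = conv2d(x[:4], w[:4]) + conv2d(x[4:], w[4:])

assert np.allclose(full, partial)
```

Because convolution is linear, the summed small-block results exactly match the single large block; a hardware implementation exploits this to map each small block onto a reconfigurable accelerator unit.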
The Kneron NPU IP Series, which includes hardware IP, a compiler, and model compression, allows ResNet, YOLO, and other deep learning networks to run on edge devices. It supports various types of CNNs such as ResNet-18, ResNet-34, VGG16, GoogLeNet, and LeNet, as well as mainstream deep learning frameworks, including Caffe, Keras, and TensorFlow.
Albert Liu, Kneron's founder and CEO, said: "Since the release of its first NPU IP in 2016, Kneron has been making continuous efforts to optimize its NPU design and specifications for various industrial applications. We are pleased to introduce the new NPU IP Series and to announce that the KDP 500 will be adopted by our customer and enter the mask tape-out process in the upcoming second quarter."
Kneron was founded in 2015 and completed a Series A financing round worth more than $10 million in November 2017. Alibaba Entrepreneurs Fund and CDIB Capital Group are the lead investors, and Himax Technologies, Qualcomm, Thundersoft, Sequoia Capital and Cy Zone are co-investors.