Akida spiking neural processor could head to FDSOI
BrainChip is starting to deliver its Akida1000 system chip to customers, although the company insists that its main business model is intellectual property licensing, similar to ARM's. The Akida1000 contains 80 neural processors and is implemented in 28nm CMOS (see Brainchip's Akida is a fast learner). BrainChip announced the start of volume production in April 2021, shortly after Louis DiNardo quietly left the position of CEO in March. Peter van der Made, the founder of BrainChip and previously its CTO, has taken up the CEO role.
Anil Mankar, chief development officer, told eeNews Europe: “Chip production volume is just starting now. But you will see a lot of IP licensing going forward.” He added: “We are process agnostic.”
The near-term focus is supplying the Akida IP for 22nm processes, although some customers may go back to a 90nm process, BrainChip executives said.
Rob Telson, vice president of worldwide sales, said BrainChip is drawing up plans for smaller and larger versions of Akida under the names Akida500, Akida1500 and Akida2000. Some of these may well conform to a new generation of the Akida architecture – Akida 2.0 – due to arrive in 2022. It is thought the Akida500 could be implemented in a 22nm FDSOI manufacturing process and serve as a demonstrator of the process-agnostic nature of the Akida architecture.
Mankar emphasizes that the Akida architecture can implement both conventional convolutional neural networks (CNNs) and spiking neural networks (SNNs), which allow for a broader range of data processing models and learning schemes. The human brain is based on spiking signals passed between neurons. "Spikes are spatio-temporal. There's a lot of information to extract from spikes that we are not yet taking advantage of," said Mankar.
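To illustrate what "spatio-temporal" means for spikes, here is a minimal leaky integrate-and-fire neuron sketch in Python. This is a textbook illustrative model only, not BrainChip's implementation, and all parameter values are arbitrary assumptions.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative only -- not BrainChip's implementation; the threshold and
# leak values below are arbitrary assumptions for demonstration.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(t)   # emit a spike: the timing itself carries information
            potential = 0.0    # reset membrane potential after spiking
    return spikes

# The same total input delivered at different times produces different
# spike times -- this timing dimension is what "spatio-temporal" refers to.
print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # → [2, 5]
```

Unlike a CNN activation, the output is not a value but a sequence of event times, which is why spike-based processing can encode information that frame-based networks discard.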
The scalability of the architecture is also important, he said. "Our IP can scale from two processing nodes to 128," said Mankar. If a licensee moves to a 7nm CMOS process, they can go to many more nodes, he added.
The MetaTF software development tools are free and allow users to investigate what Akida can do for their application and how many processing nodes they need. For some customers, BrainChip is prepared to supply boards with the Akida1000 silicon and provide help customizing the network. Others will want to license the IP and optimize their own chip.
One of the application areas of interest is automotive, where artificial intelligence and machine learning (AI/ML) are used to train an increasing number of sensors, components, and image and video processors in each vehicle. Autonomous and near-autonomous vehicles are predicted to generate between 12 and 15 terabytes of data for every two hours of driving.
Latency, power consumption and privacy are the key reasons not to send this data to the cloud for processing.
One advantage of spiking neural network architectures is the ability to perform real-time incremental learning, sometimes called one-shot learning, within a fraction of a second. The ability to add voice commands, to accept individuals as drivers by facial recognition and to flag events in sensors as significant or not is improved when using Akida, said BrainChip executives. "We are being benchmarked against deep learning accelerators and a GPU vendor and it is coming back favourably to us."
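One-shot incremental learning of the kind described above can be sketched with a nearest-prototype classifier, where a new class is added from a single example without retraining the whole network. This is a generic illustration of the concept, not BrainChip's actual algorithm; all class names and feature vectors are hypothetical.

```python
# Sketch of one-shot incremental learning via nearest-prototype classification.
# Generic illustration of the concept only -- not BrainChip's algorithm;
# labels and feature vectors below are hypothetical.

class OneShotClassifier:
    def __init__(self):
        self.prototypes = {}  # label -> stored feature vector

    def learn(self, label, features):
        """Enrol a new class from a single example -- no retraining needed."""
        self.prototypes[label] = features

    def classify(self, features):
        """Return the label whose prototype is closest (squared Euclidean)."""
        def dist(p):
            return sum((a - b) ** 2 for a, b in zip(p, features))
        return min(self.prototypes, key=lambda lbl: dist(self.prototypes[lbl]))

clf = OneShotClassifier()
clf.learn("driver_a", [0.9, 0.1])   # one enrolment example per driver
clf.learn("driver_b", [0.1, 0.9])
print(clf.classify([0.8, 0.2]))     # → driver_a
```

Because enrolment is a single store operation rather than a gradient-descent pass, a new driver or voice command can be added in well under a second, which is the behaviour the executives describe.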