IEDM: CEA-Leti integrates spiking neural network
Spiking neural networks are closer to biological neural functioning because they encode and classify information in the timing of discrete spikes rather than in continuous signal levels. This also makes them more energy efficient than conventional digital neural networks, though somewhat more complex.
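The spike-based operation described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron model. This is a generic textbook sketch, not CEA-Leti's analog circuit; the weight, leak and threshold values are illustrative assumptions.

```python
# Minimal leaky integrate-and-fire neuron sketch (illustrative only;
# not CEA-Leti's implementation). The membrane potential integrates
# weighted input spikes, leaks toward rest each time step, and the
# neuron emits an output spike when the potential crosses threshold.
def lif_run(spike_train, weight=0.6, leak=0.9, threshold=1.0):
    v = 0.0          # membrane potential
    out = []
    for s in spike_train:
        v = leak * v + weight * s   # leak, then integrate the input spike
        if v >= threshold:
            out.append(1)           # emit a spike
            v = 0.0                 # reset the membrane after spiking
        else:
            out.append(0)
    return out
```

Information is thus carried in *when* the output spikes occur, rather than in a continuously valued activation as in a conventional artificial neuron.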
CEA-Leti’s claim comes despite the fact that BrainChip Holdings Ltd. launched its Akida Neuromorphic System-on-Chip (SoC) back in September 2018, claiming to be the first company with a hardware implementation of a spiking neural network architecture (see BrainChip launches spiking neural network SoC).
CEA-Leti built its chip in a 130nm CMOS manufacturing process with analog neurons and resistive-RAM (ReRAM) synapses, integrated monolithically on top of the CMOS devices. The ReRAM devices are two-terminal transition-metal-oxide cells, based on titanium-oxide and hafnium-oxide layers between titanium-nitride electrodes.
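The role of the ReRAM synapses can be sketched as a crossbar computation: each cell's programmed conductance acts as a weight, an input spike is a voltage pulse on a row, and each column current is the weighted sum that the analog neuron integrates. The values below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical sketch of a ReRAM synapse crossbar (not CEA-Leti's exact
# array). Each cell stores a weight as a conductance G (siemens).
G = np.array([[1e-6, 5e-6],    # row 0: conductances to neurons 0 and 1
              [2e-6, 1e-6]])   # row 1

# An input spike is a short voltage pulse on its row; here only row 0 fires.
V = np.array([0.2, 0.0])       # volts

# By Ohm's and Kirchhoff's laws, each column current is a dot product:
# the weighted spike sum delivered to that column's analog neuron.
I = G.T @ V                    # amperes, per output neuron
```

This current summation happens "for free" in the analog domain, which is where the energy advantage over digital multiply-accumulate arrays comes from.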
CEA-Leti spiking neural network chip floor plan. Source: IEDM.
The test chip includes 11,500 1T-1R ReRAM cells and demonstrated an accuracy of 84 percent at recognizing handwritten digits from the MNIST database, with 5x lower energy dissipation at the synapse and neuron level (3.6 pJ) versus other chips that use formal programming methods for image classification.
The researchers say that moving from the 130nm node to 28nm could result in a 10x energy reduction and a 30x density gain, and that using ReRAM to build multiple-level memory cells, rather than the single-level cells in the test chip, could improve synaptic density by a further 4x.
It should also be noted that CEA-Leti is a development and manufacturing partner for Weebit Nano Ltd., a startup making progress with silicon-oxide based ReRAM (see Weebit, Leti to demo SiOx ReRAM in neuromorphic application). The paper, number 14.3, is titled: Fully Integrated Spiking Neural Network with Analog Neurons and RRAM Synapse.
Elsewhere at IEDM, Stefan Cosemans of IMEC will present Towards 10,000TOPS/W DNN inference with analog in-memory computing – a circuit blueprint, device options and requirements.
IBM and Kioxia (formerly Toshiba Memory) will also discuss analog in-memory as the basis of neuromorphic computing at IEDM. Tayfun Gokmen of IBM will present The marriage of training and inference for scaled deep learning analog hardware. Jun Deguchi of Kioxia will ask: Can in-memory/analog accelerators be a silver bullet for energy-efficient inference?