The work shows a six-fold improvement in energy efficiency compared with a system-level design implemented in 65nm CMOS technology, and the researchers claim the approach has potential for further gains towards brain-like, energy-efficient computing.
The approach reflects a trade-off between accuracy and energy efficiency, but one that works well for applications such as pattern recognition and classification. One advantage over artificial neural networks is that training can be done in a much more brain-like manner, with a single exposure to the available data; ANNs tend to require iterative training sessions that consume far more energy. The work is reported in Nature Electronics (see https://www.nature.com/articles/s41928-020-0410-3).
Hyperdimensional computing is an emerging form of computing that parallels key aspects of biological memory, perception and cognition. Many of these biological functions can be modelled by the mathematical properties of hyperdimensional vectors, holographic representation and pseudo-randomness. Information is encoded in vectors with more than 1,000 dimensions; in IBM's research, vectors with 10,000 dimensions were used.
By applying the mathematics of hypervectors, such computation can be applied to machine learning tasks such as learning and classification.
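To make the hypervector mathematics concrete, the sketch below (a minimal illustration, not code from the IBM work) builds 10,000-dimensional bipolar hypervectors and shows the two basic operations commonly used in hyperdimensional computing: binding (element-wise multiplication) and bundling (element-wise majority). The key properties are that random hypervectors are nearly orthogonal, while a bundle remains measurably similar to each of its components.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality, as in the IBM work

def random_hv():
    """A random bipolar hypervector (+1/-1 entries)."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiply): associates two hypervectors."""
    return a * b

def bundle(*hvs):
    """Bundling (element-wise majority vote): superposes hypervectors."""
    return np.sign(np.sum(hvs, axis=0))

def similarity(a, b):
    """Normalised dot product; near 0 for unrelated hypervectors."""
    return a @ b / D

# Two independent random hypervectors are nearly orthogonal...
x, y = random_hv(), random_hv()
print(similarity(x, y))  # close to 0

# ...while a bundle of three stays clearly similar to each component.
print(similarity(bundle(x, y, random_hv()), x))  # clearly positive
```

The near-orthogonality of random high-dimensional vectors is what lets many items be superposed in a single "holographic" vector and still be recovered by similarity search.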
Although the encoding is rich and vector operations mathematically intensive, operations can be performed using in-memory computation. The approach can be applied to sophisticated tasks such as object detection, language and object recognition, voice and video classification, time series analysis, text categorization, and analytical reasoning.
One key benefit of hyperdimensional computing is that training is more efficient than in conventional neural network approaches, as object categories are learned in a single pass over the available data. The approach is memory-centric and robust against noise, variations or failed components within the computing platform.
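The single-pass training described above can be sketched as follows. This is a hypothetical toy setup, not the paper's implementation: each class prototype is learned by simply accumulating (bundling) the hypervectors of its training examples in one pass, and classification is a nearest-prototype lookup, which also illustrates the robustness to noise.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000  # hypervector dimensionality

def random_hv():
    """A random bipolar hypervector (+1/-1 entries)."""
    return rng.choice([-1, 1], size=D)

def noisy_copy(hv, flip_prob=0.3):
    """Corrupt a hypervector by flipping a fraction of its entries."""
    mask = rng.random(D) < flip_prob
    return np.where(mask, -hv, hv)

# Two underlying "concepts" that generate noisy training examples.
concept_a, concept_b = random_hv(), random_hv()

# Single-pass training: one accumulation per example, no iterative
# weight updates as in conventional neural network training.
proto_a = np.sign(sum(noisy_copy(concept_a) for _ in range(5)))
proto_b = np.sign(sum(noisy_copy(concept_b) for _ in range(5)))

def classify(query):
    """Return the label of the most similar class prototype."""
    sims = {"A": query @ proto_a, "B": query @ proto_b}
    return max(sims, key=sims.get)

# Even with 30% of its entries flipped, a query still lands on the
# right prototype, because similarity is spread across all dimensions.
print(classify(noisy_copy(concept_a)))
```

Because the class information is distributed across all 10,000 dimensions, flipping a sizeable fraction of entries barely moves the similarity ranking, which is the source of the robustness to noise and failed components mentioned above.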