Will sensor fusion drive neuromorphic computing?
What sensor fusion can do is turn data from what may become hundreds of sensors into useful information for the application processor to act on, such as powering up the modem and graphics because the user has likely picked up the phone – or turning up the screen brightness because the user has probably gone outside.
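The idea can be sketched in a few lines. This is a hypothetical illustration only: the function name, the event strings and the thresholds are all assumptions, not any real sensor-hub API.

```python
# Hypothetical sensor-fusion hub that maps raw sensor readings to
# high-level events for the application processor. Thresholds and
# event names are illustrative assumptions.

def fuse(accel_magnitude_g, ambient_light_lux):
    """Turn raw sensor values into application-level events."""
    events = []
    if accel_magnitude_g > 1.5:        # sudden motion: user likely picked up the phone
        events.append("wake_modem_and_graphics")
    if ambient_light_lux > 10000:      # bright ambient light: user probably went outside
        events.append("raise_screen_brightness")
    return events

print(fuse(2.0, 50))       # ['wake_modem_and_graphics']
print(fuse(0.1, 20000))    # ['raise_screen_brightness']
```

The point is that the application processor never sees the raw sensor stream, only a small number of meaningful events.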
Contrast that with neural networking, which was a hot topic 25 years ago, when software simulations of weighted summing networks started to show some interesting abilities to learn how to process data. Hardware integration was less successful, however, as the number of neurons was relatively limited and interfacing to conventional computing was a burden.
I predict that neural networks – or neuromorphic computing, as the topic is now called – are about to go through a renaissance, one that could be encouraged by sensor fusion acting as a pioneer.
The first part of the landscape behind this conclusion is that the complexity of conventional digital circuits with multiple software-programmable cores has reached a level at which it is becoming almost impossible for human developers to understand all the use cases and software paths through the system and to develop tests for them. Even though a building-block approach is taken, reusing previously tested subsystems to try to curtail this exponentially growing problem, the fact remains that many companies are now betting their existence on products they cannot be absolutely sure will not enter some sort of deadlock condition under some unforeseen set of circumstances.
What is really needed is a system that, while not perfectly tested, is fit for purpose the vast majority of the time and has the ability to learn and adapt during the times it is not. Does this sound like a neural network?
The second part of the landscape is that as more specialized application-specific processors become economically viable, thanks to the size of the markets they can serve, their architectures are moving away from general-purpose designs and towards neuromorphic ones. For example, graphics processors are highly parallel and matched to graphics rendering. We are starting to see the emergence of computer vision processors that can be optimized to extract useful information from image sensor data – object detection and recognition, gesture and facial recognition – and increasingly it is being found that the most energy-efficient way to perform these tasks is with computing architectures that have similarities to neuromorphic systems. We are also seeing CMOS image sensors move towards the hyperspectral to extract more information, such as distance or depth, at the pixel level.
Finally, the adoption of sensor fusion in mobile phones is educating developers to think along neuromorphic lines. A temperature sensor can be used to help calibrate a pressure sensor, which in turn can help provide information to inertial sensors. In the end, the accuracy of the multi-sensor, cross-calibrated system is greater than that of the individual sensors.
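The calibration chain described above can be sketched as follows. The drift coefficient is made up for illustration; the altitude conversion uses the standard barometric formula, and the resulting altitude is the kind of value that would be fed to the inertial sensors' vertical axis.

```python
# Hypothetical cross-calibration chain: temperature corrects pressure,
# and the corrected pressure yields an altitude that aids the inertial
# sensors. The drift coefficient (0.02 hPa/°C) is an illustrative assumption.

def calibrated_pressure(raw_pressure_hpa, temp_c):
    # Compensate the pressure sensor's temperature drift around 25 °C
    return raw_pressure_hpa - 0.02 * (temp_c - 25.0)

def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    # Standard barometric formula
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** 0.1903)

p = calibrated_pressure(1005.0, 35.0)   # temperature-corrected pressure
print(round(altitude_m(p), 1))          # roughly 70 m above the reference level
```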
A follow-on from that might be to develop architectures of variable resolution. There is no point in wasting energy calculating values to 32-bit accuracy throughout multiple processors if all you want is a go/no-go decision about whether to power up the LTE modem.
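To make the variable-resolution idea concrete, here is a sketch in which the wake-up decision is taken on a cheap 8-bit quantized code rather than on the full-precision value. The scale factor and threshold are invented for illustration.

```python
# Sketch of variable resolution: a go/no-go decision needs far less than
# 32-bit arithmetic. An 8-bit quantized motion score is enough to decide
# whether to power up the (hypothetical) LTE modem.

def quantize_8bit(x, scale=0.05):
    # Map a float to an unsigned 8-bit code, clamped to [0, 255]
    return max(0, min(255, int(x / scale)))

def should_wake_modem(motion_score, threshold_code=40):
    # The comparison happens on the cheap 8-bit code, not the float
    return quantize_8bit(motion_score) >= threshold_code

print(should_wake_modem(2.5))   # True: strong motion
print(should_wake_modem(0.3))   # False: phone is still
```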
And so it should come as no surprise that Qualcomm is working on neuromorphic cores for potential inclusion in future Snapdragon-like application processors. Qualcomm is developing something called the Zeroth processor, which comprises a spiking neural network chosen for its energy efficiency in encoding information.
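The building block of such spiking networks is usually something like a leaky integrate-and-fire neuron, which can be sketched in a few lines. This is a generic textbook model with made-up parameters, not a description of Zeroth itself: information is encoded in the timing and rate of discrete, energy-cheap spike events rather than in continuously updated values.

```python
# A minimal leaky integrate-and-fire (LIF) neuron, the generic building
# block of spiking neural networks. Leak and threshold values are
# illustrative assumptions.

def lif_run(inputs, leak=0.9, threshold=1.0):
    """Integrate input current with leakage; emit a spike on threshold."""
    v = 0.0
    spikes = []
    for current in inputs:
        v = leak * v + current      # leaky integration of input current
        if v >= threshold:
            spikes.append(1)        # fire one discrete spike event
            v = 0.0                 # reset membrane potential
        else:
            spikes.append(0)
    return spikes

# A sustained stimulus is encoded as a spike rate; a weak one stays silent.
print(lif_run([0.6] * 5))   # → [0, 1, 0, 1, 0]
print(lif_run([0.2] * 5))   # → [0, 0, 0, 0, 0]
```

Because the neuron only produces output when it fires, most of the network is silent most of the time, which is where the energy-efficiency argument comes from.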
Qualcomm thinks of the mobile phone as a brain covered in sensors, including pressure, touch, vision, hearing, humidity and even smell. It envisions a neural processor core able to live side by side with conventional software-programmable cores in future application processors. In this way it is possible to develop programs using traditional programming languages, but also to lean on the neuromorphic processor to train the device for human interaction and behaviour.
This holds out the promise of not only energy-efficient learned behaviour but also a human-machine interface that is human-friendly. So as sensors proliferate – both on the mobile phone and in Internet of Things applications – I expect neuromorphic computing to follow.