Memory compressor IP can save time, energy
The company is a 2015 spin-off from research conducted at Chalmers University and from participation in the EuroServer collaborative research project partially funded by the European Union (see European server project promotes ARM on FDSOI).
ZeroPoint claims it can produce memory savings of a factor of three or greater on real-life data. This in turn means less data is transferred from processor to memory and back, which saves time, and also saves energy in the communications and in the storage of that data.
Compression systems are typically designed to work on inactive data, such as files compressed for storage or transmission, with the option of aggressive lossy compression or milder lossless compression.
Instead, ZeroPoint has chosen to attack the larger problem of active data in multiple types and formats, such as music, video, photography and text, where the data may be changing. This makes the technology applicable not only in server computers but also, potentially, in smartphones and Internet of Things nodes.
The company does this via a lossless approach and an analysis of multiple data types. Because it addresses active data, minimizing latency is vital, which requires a hardware-based approach. As a result, the company is taking a hardware IP licensing approach similar to ARM's.
“Our business model is to license IP blocks for inclusion in FPGAs as well as in SoC/processor chips,” Stefan Lindeberg, CEO of ZeroPoint, told eeNews Europe in email correspondence.
Lindeberg said: “Software solutions would be way too slow. Because of our very fast algorithms and implementations we manage to compress very effectively at memory speed.” He added: “For active data, the compression has to work at the ‘speed of memory’, with nanoseconds of latency or even none, in order not to reduce performance. Adding compression without adding latency might seem contradictory, but because MaxiMem reduces the actual data written to memory, we reduce the transfer time even though compressing and decompressing adds latency of the order of nanoseconds.” Lindeberg said the net result is parity with, or even lower latency than, the native system.
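The trade-off Lindeberg describes can be sketched with a back-of-envelope model: moving a third of the data saves more transfer time than the codec adds, at least for large enough payloads. All numbers below are illustrative assumptions, not ZeroPoint figures.

```python
# Back-of-envelope model of the latency trade-off: compression adds a few
# nanoseconds of codec delay but shrinks the payload crossing the memory
# link. All constants are illustrative assumptions, not ZeroPoint figures.

def transfer_ns(bytes_moved, gb_per_s):
    """Time in nanoseconds to move a payload at the given bandwidth."""
    return bytes_moved / (gb_per_s * 1e9) * 1e9

PAGE = 4096              # bytes moved per transfer (assumed)
BANDWIDTH = 25.6         # GB/s, roughly one DDR4-3200 channel (assumed)
CODEC_LATENCY = 3.0      # ns added by compress/decompress (assumed)
RATIO = 3.0              # the claimed factor-of-three compression

native = transfer_ns(PAGE, BANDWIDTH)
compressed = transfer_ns(PAGE / RATIO, BANDWIDTH) + CODEC_LATENCY

print(f"native:     {native:.1f} ns")
print(f"compressed: {compressed:.1f} ns")
```

With these assumed numbers the compressed transfer wins comfortably; for very small payloads the fixed codec latency matters more, which is why the hardware must keep that latency down to nanoseconds.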
Lindeberg is not saying much about how MaxiMem operates, but said the term describes a product family that, in general, is based on statistical compression: it analyzes the statistical properties of memory data to select optimal encodings.
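The essence of statistical compression is to assign shorter codes to the values that occur most often in a block. A textbook way to derive such codes from a block's statistics is Huffman coding; the sketch below is a minimal illustration of that general idea, not ZeroPoint's actual codec.

```python
# Minimal sketch of statistical compression: derive variable-length code
# lengths from the byte-value frequencies of a memory block (textbook
# Huffman coding, used here only to illustrate the general principle).
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    """Return {byte value: code length in bits}; frequent values get
    shorter codes."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol block
        return {next(iter(freq)): 1}
    # Heap items: (weight, unique tiebreak, {symbol: depth so far}).
    heap = [(w, i, {s: 0}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Memory blocks are often mostly zeros with a few outliers:
block = bytes(60) + b"\x01\x02\x03\x04"
lengths = huffman_code_lengths(block)
bits = sum(lengths[b] for b in block)
print(f"{len(block) * 8} bits raw -> {bits} bits encoded")
```

Here the dominant zero byte gets a one-bit code and the rare outliers get longer codes, shrinking the 512-bit block to 72 bits of payload (before any code-table overhead).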
However, a hint of the thinking behind MaxiMem may be found in a technical paper published in 2015. The paper describes HyComp, a hybrid cache compression scheme that selects data-type-specific compression methods. HyComp takes a heuristic approach to determining the data type and employs multiple compression/decompression engines. The compression algorithm used is encoded as a tag that accompanies each compressed memory block, allowing the matching decompressor to be selected when the data is read back. The same paper also discusses synthesizing the engines and the heuristic analyser in VHDL targeting a 32nm process, verifying that HyComp could run at a clock frequency of 3GHz.
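The tag-based dispatch idea described above can be illustrated in a few lines: each stored block starts with a tag naming the engine that encoded it, so the read path can pick the matching decompressor. The heuristics and codecs below are stand-ins for illustration, not the paper's actual engines.

```python
# Toy illustration of HyComp-style dispatch: a per-block tag records which
# compression engine was used, so the matching decompressor can be chosen
# on read-back. Engines and heuristics here are stand-ins, not HyComp's.
import zlib

ZERO, DEFLATE, RAW = 0, 1, 2   # engine tags (hypothetical)

def compress_block(block: bytes) -> bytes:
    if not any(block):                     # heuristic: all-zero block
        return bytes([ZERO, len(block)])   # store tag + original length only
    packed = zlib.compress(block)
    if len(packed) < len(block):           # heuristic: generally compressible
        return bytes([DEFLATE]) + packed
    return bytes([RAW]) + block            # incompressible: store as-is

def decompress_block(stored: bytes) -> bytes:
    tag, payload = stored[0], stored[1:]
    if tag == ZERO:
        return bytes(payload[0])           # reconstitute the zero block
    if tag == DEFLATE:
        return zlib.decompress(payload)
    return bytes(payload)                  # RAW: payload is the block

for block in (bytes(64), b"abab" * 16, bytes(range(64))):
    stored = compress_block(block)
    assert decompress_block(stored) == block
    print(f"{len(block)} B -> {len(stored)} B (tag {stored[0]})")
```

The one-byte tag is the only metadata the read path needs to route each block to the right engine, which is what keeps the dispatch cheap enough for hardware.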
“For confidentiality reasons we cannot tell you exactly what is in our current product but we have patented a host of algorithms and their implementations including what is in the HyComp paper,” Lindeberg said.
ZeroPoint will attend the HiPEAC conference that takes place in Stockholm, Sweden, January 23 to 25. HiPEAC stands for High Performance and Embedded Architecture and Compilation; it is an open network that has received funding from the European Union’s Horizon2020 research and innovation programme.
Related links and articles: