The bfloat16 format is float32 truncated to its first 16 bits: 1 sign bit, 8 exponent bits, and 7 mantissa bits. Because it keeps the full float32 exponent, conversion to and from float32 is straightforward and the dynamic range is preserved, minimizing the risk of overflow or underflow artefacts when switching from float32.
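As a minimal sketch of that truncation (not Intel's implementation; the function names and NumPy-based approach are illustrative), the conversion can be done by dropping the lower 16 bits of the float32 bit pattern and zero-filling them on the way back:

```python
import numpy as np

def float32_to_bfloat16_bits(x: np.ndarray) -> np.ndarray:
    """Keep only the upper 16 bits of each float32 value (the bfloat16 bit pattern)."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)

def bfloat16_bits_to_float32(b: np.ndarray) -> np.ndarray:
    """Expand bfloat16 bit patterns back to float32 by zero-filling the low 16 bits."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159, 65504.0], dtype=np.float32)
bf = float32_to_bfloat16_bits(x)
# Precision drops (3.14159 -> 3.140625, 65504.0 -> 65280.0),
# but the exponent is untouched, so values never fall out of range.
print(bfloat16_bits_to_float32(bf))
```

This simple truncation rounds toward zero; hardware implementations may round to nearest instead, but the range-preserving property is the same.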
Intel said it is also working to integrate popular deep learning frameworks such as TensorFlow, MXNet, PaddlePaddle, CNTK and ONNX with nGraph, its open-source library that lets deep learning frameworks run computations efficiently on a variety of processors.
Related links and articles: