Some of the largest semiconductor processor vendors have jointly authored a new research paper detailing a specification for an 8-bit floating-point (FP8) interchange format for artificial intelligence (AI) training and inference.
Intel Corp., Nvidia Corp. and Arm Ltd. co-authored the specification, a cross-industry alignment intended to let AI models operate and perform consistently across various hardware platforms.
The goal is to accelerate the development of AI software by making it easier for models to meet the computational requirements of the hardware they run on. The companies said innovation is needed across both hardware and software to deliver the computational throughput required to advance AI across industries.
Intel said it will support this new format specification across its AI product roadmap for CPUs, GPUs and other AI accelerators such as the Habana Gaudi deep learning accelerator.
Why it is needed
The companies said that reducing numeric precision requirements for deep learning improves memory and computational efficiency. Reduced-precision methods exploit the inherent noise-resilient properties of deep neural networks to achieve these gains.
The FP8 specification differs from the existing IEEE 754 floating-point formats, but the companies said it strikes a balance between hardware and software needs and builds on existing implementations to accelerate adoption and improve developer productivity.
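The research paper specifies two FP8 encodings, E4M3 (4 exponent bits, 3 mantissa bits) and E5M2 (5 exponent bits, 2 mantissa bits). As a rough illustration of how dropping precision to a 3-bit mantissa works, the following Python sketch simulates E4M3-style rounding of a regular float. It is a simplified model for intuition only, not a bit-exact reference implementation: subnormal values are not modeled, and the saturation behavior is an assumption based on the paper's description that E4M3 has no infinity encoding.

```python
import math

def quantize_fp8_e4m3(x: float) -> float:
    """Approximate the value a float would take after rounding to an
    FP8 E4M3-like format (4 exponent bits, 3 mantissa bits).

    Simplified sketch: subnormals are not modeled, and values beyond
    the largest representable magnitude saturate rather than overflow.
    """
    if x == 0.0:
        return 0.0
    MANT_BITS = 3        # E4M3 keeps 3 explicit mantissa bits
    MAX_NORMAL = 448.0   # largest E4M3 magnitude per the specification
    sign = -1.0 if x < 0 else 1.0
    mag = abs(x)
    # E4M3 reserves no encoding for infinity, so clamp instead.
    if mag > MAX_NORMAL:
        return sign * MAX_NORMAL
    # Decompose mag = m * 2**e with 0.5 <= m < 1, then round the
    # significand to 1 + MANT_BITS bits of precision.
    m, e = math.frexp(mag)
    scale = 2 ** (MANT_BITS + 1)
    m = round(m * scale) / scale
    return sign * math.ldexp(m, e)
```

For example, 0.3 lands on the nearest representable value 0.3125, and 500.0 saturates to 448.0, showing how the coarse mantissa trades accuracy for a much smaller memory footprint.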
The specification leverages concepts and algorithms built on the IEEE standard, giving future AI innovation greater latitude while adhering to current industry conventions.
The full research paper can be found on arXiv.