Electronic design automation giant Synopsys Inc. has introduced a new neural processing unit (NPU) intellectual property (IP) and toolchain designed for artificial intelligence (AI) system-on-chips (SoCs).
The DesignWare ARC NPX6 and NPX6FS NPU IP address real-time compute with ultra-low power consumption for AI applications. The accompanying toolchain provides a compilation environment with automatic neural network algorithm partitioning to maximize resource utilization.
The ARC NPX6 NPU IP helps developers build neural network models for advanced driver assistance systems (ADAS), surveillance, digital TVs, cameras, and other emerging AI segments. These applications place heavy demands on compute and memory resources, often for safety-critical functions.
The IP delivers up to 250 tera operations per second (TOPS) at 1.3 GHz on 5 nm processes in worst-case conditions, or up to 440 TOPS when using sparsity features, which boost performance and reduce the energy cost of executing a neural network.
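As a back-of-the-envelope illustration (not a Synopsys tool, just arithmetic on the figures quoted above), the dense and sparsity-assisted throughput numbers imply the following effective speedup from skipping zero-valued operations:

```python
# Hypothetical sketch: relate the quoted dense and sparsity-assisted
# throughput figures for a single ARC NPX6 instance.
dense_tops = 250    # worst-case dense throughput at 1.3 GHz on 5 nm (from the article)
sparse_tops = 440   # peak throughput with sparsity features enabled (from the article)

# Effective speedup attributable to exploiting sparsity
speedup = sparse_tops / dense_tops
print(f"Sparsity speedup: {speedup:.2f}x")  # prints "Sparsity speedup: 1.76x"
```

The actual gain on a given workload depends on how sparse the network's weights and activations are, so this ratio is a ceiling, not a guarantee.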
The IP also integrates hardware and software support for connecting multiple NPU instances, achieving up to 3,500 TOPS on a single SoC. It provides more than 50x the performance of the maximum configuration of the ARC EV7x processor IP, and offers optional 16-bit floating-point support inside the neural processing hardware, maximizing per-layer performance and simplifying the transition from GPUs used in AI prototyping to high-volume, power- and area-optimized SoCs.
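A rough sanity check on the multi-instance claim (again just arithmetic on the article's figures, not vendor data): dividing the 3,500 TOPS SoC-level figure by the per-instance peak suggests roughly how many NPU instances such a configuration implies.

```python
import math

# Hypothetical estimate from the quoted figures, not from Synopsys documentation.
per_instance_tops = 440   # single NPX6 instance with sparsity (from the article)
target_tops = 3500        # multi-instance SoC-level figure (from the article)

# Minimum whole number of instances needed to reach the target throughput
instances = math.ceil(target_tops / per_instance_tops)
print(f"Implied instances: {instances}")  # prints "Implied instances: 8"
```

Real configurations may use more instances at lower per-instance throughput, so this is only an order-of-magnitude check.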
The ARC MetaWare MX development toolkit includes compilers and a debugger, a neural network software development kit (SDK), a virtual platform SDK, runtimes and libraries, and advanced simulation models.