The device decouples and restructures coprocessor and CPU computing resources, including GPU, Xeon Phi, and FPGA accelerators, allowing them to meet the needs of various AI application scenarios. It also expands computing power on demand and provides highly flexible support for GPU-accelerated computing across a range of AI applications.
Computing capacity is expanded by connecting standard rack servers to GPU computing expansion modules, removing the need to redesign the entire system and motherboard in order to change computing topologies. The GX4's independent computing-acceleration-module design increases deployment flexibility, and the connection between server and expansion module can be reconfigured to support flexible topology changes.
AI computing equipment is generally limited to eight GPU cards. Each GX4, by contrast, supports four accelerator cards in a 2U form factor, and one head node can connect up to four GX4s -- achieving 16 accelerator cards in a single acceleration computing pool.
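The scaling described above can be sketched in a few lines of Python. This is a minimal illustration, not vendor tooling: the constants and function names are assumptions based only on the figures stated in the article (four cards per 2U GX4 module, up to four modules per head node).

```python
# Hypothetical sketch of GX4 pool sizing; names and structure are
# illustrative assumptions, not an actual vendor API.

CARDS_PER_GX4 = 4      # each 2U GX4 module holds four accelerator cards
MAX_GX4_PER_HEAD = 4   # one head node can connect up to four GX4 modules

def pool_size(num_gx4_modules: int) -> int:
    """Total accelerator cards in the pool for a given number of GX4 modules."""
    if not 0 <= num_gx4_modules <= MAX_GX4_PER_HEAD:
        raise ValueError(f"a head node supports 0-{MAX_GX4_PER_HEAD} GX4 modules")
    return num_gx4_modules * CARDS_PER_GX4

print(pool_size(4))  # full pool: 16 accelerator cards
```

At full expansion, the pool doubles the eight-card ceiling of conventional AI computing equipment.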
According to the company, the GX4 was designed to provide a flexible and innovative AI computing solution for companies and research organizations engaged in artificial intelligence across the world.