Nvidia Corp. has released two graphics processing units (GPUs) designed to accelerate machine learning workloads for web-services companies racing to incorporate artificial intelligence (AI) capabilities.
The new hyperscale accelerator line includes the Tesla M40 GPU, which lets researchers design and train deep neural networks for the applications they want to power with AI, and the Tesla M4 GPU, a low-power accelerator designed to deploy those trained networks across the data center.
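The split between the two products mirrors the two phases of a machine learning workload: a compute-heavy training phase, where a network's weights are learned over many passes through data, and a lighter deployment (inference) phase, where the frozen weights answer individual queries. A minimal sketch of that split, using a toy single-neuron model in plain Python (purely illustrative; not Nvidia code, and real workloads use GPU-accelerated frameworks):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: learn the logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 1.0        # learning rate

# Training phase: many gradient-descent passes over the data.
# This repeated, parallel arithmetic is what a training GPU accelerates.
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - y
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Deployment phase: weights are now fixed; each query is a single
# cheap forward pass, the kind of work a low-power inference part handles.
def predict(x1, x2):
    return sigmoid(w[0] * x1 + w[1] * x2 + b) > 0.5

print([predict(x1, x2) for (x1, x2), _ in data])
```

In production the trained weights would be exported once and served many times, which is why the training and inference accelerators can be tuned so differently for power and throughput.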
“The artificial intelligence race is on,” says Jen-Hsun Huang, co-founder and CEO of NVIDIA. “Machine learning is unquestionably one of the most important developments in computing today, on the scale of the PC, the Internet and cloud computing. Industries ranging from consumer cloud services, automotive and health care are being revolutionized as we speak.”
Huang says the Nvidia hyperscale accelerator line will deliver a 10x performance boost for machine learning while saving data centers both time and money.
The company says machine learning is being used to make voice recognition more accurate; to enable automatic object, scene, or facial recognition in video and photos; and to tailor services to an individual's tastes and interests, or even to organize schedules and deliver news stories of interest to a user. The challenge, the company says, is twofold: providing the supercomputing power needed to train the growing number of deep neural networks, and providing the processing to respond instantly to the billions of queries from consumers using those services.