Researchers from Nvidia created an artificial intelligence (AI) system that can transfer a pet’s face and expression onto another animal’s body.
Nvidia researchers developed a GAN-based technique that transfers the expression and pose from a single input photo of a pet onto images of other animals. Source: Nvidia
The system is powered by generative adversarial networks (GANs), in which two neural networks are pitted against each other, and an algorithm called Few-Shot Unsupervised Image-to-Image Translation (FUNIT). FUNIT works on previously unseen target classes that are specified only by a few example images given to the system at test time.
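As a rough illustration of that idea (a hedged sketch, not Nvidia's released FUNIT code), a few-shot generator can be built from a content encoder for the source image, a class encoder for the few target example images, and a decoder that combines the two. The layer sizes and module names below are assumptions chosen only to make the sketch runnable.

```python
# Illustrative sketch only -- not Nvidia's actual FUNIT implementation.
# A FUNIT-style generator separates an image into "content" (pose/expression)
# and "class" (species appearance), then recombines them.
import torch
import torch.nn as nn

class FewShotGenerator(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Content encoder: keeps the spatial layout of the source animal.
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, feat_dim, 7, padding=3), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Class encoder: summarises the target animal's appearance as a vector.
        self.class_enc = nn.Sequential(
            nn.Conv2d(3, feat_dim, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Decoder: renders the source content in the style of the target class.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim * 2, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, 3, 7, padding=3), nn.Tanh(),
        )

    def forward(self, content_img, class_imgs):
        # content_img: (1, 3, H, W); class_imgs: (K, 3, H, W), the few target examples.
        c = self.content_enc(content_img)
        s = self.class_enc(class_imgs).mean(dim=0, keepdim=True)   # average over the K examples
        s = s.expand(-1, -1, c.shape[2], c.shape[3])               # broadcast to spatial size
        return self.decoder(torch.cat([c, s], dim=1))
```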
Most GAN-based image translation networks are trained to solve a single task. FUNIT is instead trained to solve many translation tasks jointly: in each task, a randomly chosen source animal is translated into a randomly chosen target animal by leveraging a few example images of the target. By repeating this across many known animal classes, the model learns to generalize, so it can translate familiar animals' faces into species it has never seen.
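That multi-task training can be pictured with the following hedged sketch: each iteration samples a fresh source class and target class from the training animals, so the generator never specializes in one translation pair. The dataset interface, hinge-style GAN losses, and function names are assumptions for illustration, not Nvidia's actual training code.

```python
# Hedged sketch of the multi-task training idea: every iteration is a new
# "translate class A into class B" task sampled from the training animals.
import random
import torch

def train_step(generator, discriminator, dataset, g_opt, d_opt, k_shots=5):
    # Assumption: dataset[class_name] returns a tensor of images (N, 3, H, W).
    source_cls, target_cls = random.sample(list(dataset.keys()), 2)
    content = dataset[source_cls][:1]                 # one source image to translate
    examples = dataset[target_cls][:k_shots]          # a few photos of the target animal
    real = dataset[target_cls][k_shots:k_shots + 1]   # a held-out real target image

    fake = generator(content, examples)

    # Discriminator step: real target images vs. generated translations (hinge loss).
    d_loss = (torch.relu(1.0 - discriminator(real)).mean()
              + torch.relu(1.0 + discriminator(fake.detach())).mean())
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: fool the discriminator into accepting the translation.
    g_loss = -discriminator(fake).mean()
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```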
Other GAN-based image translation models need many images of the target animal during training. FUNIT needs only a single photo of the target animal at test time, because its training already covered many image-to-image translation tasks within the GAN process.
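At test time, a hedged usage sketch might look like the following: a single photo of a previously unseen animal serves as the target-class example. The torchvision preprocessing is standard, but the generator interface and the file names are hypothetical.

```python
# Hedged inference sketch: translate a known pet photo into an unseen animal
# class using just one example photo of that animal (K = 1).
import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

def translate(generator, content_path, target_example_path):
    content = to_tensor(Image.open(content_path).convert("RGB")).unsqueeze(0)
    example = to_tensor(Image.open(target_example_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        out = generator(content, example)   # the single example acts as the "few shots"
    return out.clamp(-1, 1)

# Hypothetical file names, for illustration only.
# result = translate(generator, "my_dog.jpg", "unseen_fox.jpg")
```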
The team's overall goal was to find a way to code human-like imagination into neural networks. They are currently working on expanding FUNIT to handle more kinds of images at higher resolutions, and are testing it on photos of flowers and food.
The app is publicly available, so people can try it on photos of their own pets at home.