Scientists in California get to work on the coolest things. Take, for instance, the challenge of making computer-generated creatures look more realistic for their role in computer-animated films, special effects and video games.
Recently, at the SIGGRAPH Asia conference in Thailand, researchers from UC San Diego and UC Berkeley presented their method for dramatically improving the way computers simulate fur – specifically, the way light bounces within an animal's pelt.
It seems that most existing models were designed for computer-generated hair – but not fur. By ignoring the medulla – the central cylinder in each fur fiber, a structure much larger than its counterpart in human hair – these models fail to realistically render how light scatters within and passes through that cylinder. Some researchers have attempted a workaround that traces a ray of light as it bounces from one fur fiber to the next; this works, but it demands a tremendous amount of computation, making it both expensive and slow.
The California researchers tried a different approach, built on a concept called "subsurface scattering," which describes how light enters the surface of a translucent object at one point, scatters at various angles as it interacts with the object's material, and then exits the object at a different point.
To see the concept in action, turn on a small flashlight in a dimly lit room and cover it with your finger. The light enters your finger, scatters inside and exits again, producing a glowing ring (which is red, incidentally, because the body absorbs the green and blue portions of the light but not the red).
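The behavior can be sketched as a toy Monte Carlo random walk: photons enter a translucent material, wander inside it, and pop back out somewhere else. This is a generic illustration of subsurface scattering, not the researchers' actual algorithm, and every name and parameter below is made up for the sketch:

```python
import math
import random

def subsurface_walk(absorb_prob=0.1, mean_free_path=1.0, max_bounces=100):
    """Trace one photon through a translucent half-space (z >= 0).

    The photon enters at the origin heading into the material (+z),
    travels random exponentially distributed distances between
    scattering events, may be absorbed, and exits wherever it crosses
    back through the surface (z < 0). Returns the (x, y) exit point,
    or None if the photon is absorbed. Toy model for illustration only.
    """
    x = y = z = 0.0
    dx, dy, dz = 0.0, 0.0, 1.0           # initial direction: straight in
    for _ in range(max_bounces):
        # exponential free path between scattering events
        step = -mean_free_path * math.log(1.0 - random.random())
        x += dx * step; y += dy * step; z += dz * step
        if z < 0:                        # crossed back out of the surface
            return (x, y)                # exit point differs from entry (0, 0)
        if random.random() < absorb_prob:
            return None                  # photon absorbed inside the material
        # isotropic scatter: pick a new uniform random direction
        dz = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - dz * dz)
        dx, dy = s * math.cos(phi), s * math.sin(phi)
    return None

random.seed(0)
exits = [p for p in (subsurface_walk() for _ in range(10000)) if p is not None]
mean_offset = sum(math.hypot(px, py) for px, py in exits) / len(exits)
print(f"{len(exits)} photons exited; mean lateral offset {mean_offset:.2f}")
```

Running it shows what the flashlight demonstration shows: most photons survive, and on average they emerge a noticeable distance from where they entered, which is exactly why the glow forms a ring rather than a point.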
Subsurface scattering is widely used in computer graphics and computer vision simulations. Because there is no explicit physical or mathematical way to apply its properties to fur fibers, the researchers turned to a neural network. They found that the network needed to be trained on only a single scene before it could apply the concept to every scene it was shown, producing simulations that run 10 times faster than the state of the art.
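As a loose illustration of that idea – using a network to fit a physical relationship that has no closed-form model – here is a tiny hand-rolled network trained to approximate a made-up light-falloff curve. Nothing here reflects the paper's actual architecture, inputs or training data; the target function and all parameters are invented for the sketch:

```python
import math
import random

random.seed(1)

# Toy "scattering profile": light intensity falling off with depth.
# Purely made up -- the paper learns a far richer mapping between
# fiber properties and scattered light.
def target(depth):
    return math.exp(-2.0 * depth)

H = 8                                    # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.05                                # learning rate

xs = [i / 20.0 for i in range(41)]       # sample depths from 0.0 to 2.0

def forward(x):
    """One-hidden-layer network: input -> tanh layer -> linear output."""
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

for epoch in range(2000):                # plain per-sample gradient descent
    for x in xs:
        y, h = forward(x)
        err = y - target(x)              # gradient of 0.5 * (y - target)^2
        for j in range(H):
            grad_h = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * grad_h * x
            b1[j] -= lr * grad_h
        b2 -= lr * err

mse = sum((forward(x)[0] - target(x)) ** 2 for x in xs) / len(xs)
print(f"mean squared error after training: {mse:.5f}")
```

After training, the network reproduces the curve closely despite never being given its formula – a miniature version of learning a mapping that has no explicit model.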
The resulting algorithm works for hair, too, offering more realistic rendering than current methods.