Watch: Rendering Realistic Fur

21 February 2018

Scientists in California get to work on the coolest things. Take, for instance, the challenge of making computer-generated creatures look more realistic for their role in computer-animated films, special effects and video games.

Recently, at the SIGGRAPH Asia conference in Thailand, researchers from UC San Diego and UC Berkeley presented their method for dramatically improving the way computers simulate fur – specifically, the way light bounces within an animal's pelt.

A hamster rendered with the researchers' method. Source: University of California, San Diego.

It seems that most existing models were designed to create computer-generated hair – but fur, not so much. Because they do not take into account the medulla, the central cylinder present in each fur fiber – a structure much larger than in human hair – these models cannot realistically render the scattering and passage of light through that cylinder. This has prompted some researchers to attempt a workaround that follows a ray of light bouncing from one fur fiber to the next; this works, but it requires a tremendous amount of computation and is both expensive and slow.

The California researchers tried a different approach: a concept called "subsurface scattering," which essentially describes how light enters the surface of a translucent object at one point; scatters at various angles; interacts with the object's material; and then exits the object at a different point.

To better understand the concept, try turning on a small flashlight in a dimly lit room and covering it with your finger. The resulting ring of light appears because light has entered your finger, scattered inside and exited back out (and it's red, incidentally, because the red portion of the light is not absorbed by the body – unlike the green and blue portions, which are).
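The flashlight experiment can be sketched as a simple Monte Carlo random walk. The following toy simulation is illustrative only – it is not the researchers' method, and the parameter values are invented. A "photon" enters a translucent half-space at one point, takes random steps between scattering events, and either gets absorbed or re-emerges some lateral distance away, which is exactly the enter-scatter-exit behavior described above.

```python
import random
import math

def subsurface_walk(mean_free_path=1.0, absorb_prob=0.1, rng=None):
    """Trace one photon through a translucent half-space (z < 0).

    The photon enters at the origin heading straight down, takes
    exponentially distributed steps between scattering events, and is
    either absorbed or re-emerges through the surface. Returns the
    lateral exit distance, or None if the photon was absorbed.
    (Illustrative sketch; the exit point is the position after the
    step that crosses the surface, not the exact crossing point.)
    """
    rng = rng or random.Random()
    x, y, z = 0.0, 0.0, 0.0
    dx, dy, dz = 0.0, 0.0, -1.0  # initial direction: into the material
    while True:
        step = rng.expovariate(1.0 / mean_free_path)
        x, y, z = x + dx * step, y + dy * step, z + dz * step
        if z >= 0.0:                    # crossed back through the surface
            return math.hypot(x, y)     # lateral distance from entry point
        if rng.random() < absorb_prob:  # absorbed inside the medium
            return None
        # Isotropic scattering: pick a new uniformly random direction.
        theta = math.acos(2.0 * rng.random() - 1.0)
        phi = 2.0 * math.pi * rng.random()
        dx = math.sin(theta) * math.cos(phi)
        dy = math.sin(theta) * math.sin(phi)
        dz = math.cos(theta)

rng = random.Random(0)
exits = [d for d in (subsurface_walk(rng=rng) for _ in range(10000))
         if d is not None]
print(f"{len(exits) / 10000:.0%} of photons re-emerged")
print(f"mean exit distance: {sum(exits) / len(exits):.2f} mean free paths")
```

Running this shows that surviving photons exit spread out around the entry point – the same mechanism that produces the soft red ring around a finger-covered flashlight.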

Subsurface scattering is often used in computer graphics and computer vision simulations. Because there is no explicit physical or mathematical way to apply the properties of the concept to fur fibers, the researchers used a neural network instead. They found that the network needed to be trained on only a single scene before it could apply the concept to every scene it was presented with. The resulting simulations ran 10 times faster than the state of the art.
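The general idea – train a cheap learned surrogate once, then reuse it in place of an expensive computation – can be sketched as follows. This is not the researchers' actual network or data; `expensive_scatter` and the single-parameter setup are invented for illustration, and the "network" here is a minimal random-feature model fitted by least squares rather than a trained deep net.

```python
import numpy as np

def expensive_scatter(x):
    """Toy stand-in for an expensive scattering computation.

    The real method maps fiber/medulla properties to subsurface-
    scattering behavior; this smooth function is purely illustrative.
    """
    return np.exp(-x) * np.cos(3.0 * x)

rng = np.random.default_rng(0)

# Training data from a single "scene": one sweep of a fiber parameter.
x_train = np.linspace(0.0, 2.0, 200)[:, None]
y_train = expensive_scatter(x_train)

# One hidden layer with fixed random weights; only the output layer is
# fitted (a single least-squares solve), so training is cheap.
W = rng.normal(size=(1, 64))
b = rng.normal(size=64)
H = np.tanh(x_train @ W + b)
out_w, *_ = np.linalg.lstsq(H, y_train, rcond=None)

def fast_scatter(x):
    """Learned surrogate: one matrix multiply instead of a simulation."""
    return np.tanh(np.atleast_2d(x) @ W + b) @ out_w

# The surrogate also handles parameter values it was never trained on.
x_new = np.array([[0.33], [1.57]])
max_err = np.abs(fast_scatter(x_new) - expensive_scatter(x_new)).max()
print(f"max error on unseen inputs: {max_err:.4f}")
```

Once fitted on one sweep of inputs, `fast_scatter` answers new queries with a single matrix multiply – the same trade that lets a network trained on one scene accelerate rendering of all subsequent scenes.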

The resulting algorithm works for hair, too, offering more realistic rendering than current methods.



Powered by CR4, the Engineering Community
