Written by Thomas Denham
3D rendering is the process of a computer taking raw information from a 3D scene (polygons, materials, and lighting) and calculating the final result. The output is usually a single image or a series of images rendered and compiled together.
Rendering is usually the final phase of the 3D creation process, the exception being when you take your render into Photoshop for post-processing.
If you’re rendering an animation it will be exported as a video file or a sequence of images that can later be stitched together. One second of animation usually has at least 24 frames in it, so a minute of animation has 1440 frames to render. This can take quite a while.
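The frame-count arithmetic above can be sketched in a few lines. The per-frame render time here is an assumed, illustrative number, not a benchmark:

```python
# Illustrative numbers: 24 fps, and an assumed 5-minute render per frame.
fps = 24
seconds_of_animation = 60
frames = fps * seconds_of_animation           # 1440 frames in one minute of footage
minutes_per_frame = 5                         # assumed average render time per frame
total_hours = frames * minutes_per_frame / 60
print(frames, "frames,", total_hours, "hours of rendering")  # 1440 frames, 120.0 hours
```

Even a modest five minutes per frame turns one minute of footage into days of rendering, which is why the render farms discussed later exist.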
Rendering generally falls into two types: CPU rendering and GPU (real-time) rendering.
The difference between the two lies in the difference between the two computer components themselves.
CPUs are optimized to run a handful of complex tasks quickly, whereas GPUs are built to run thousands of smaller, simpler calculations in parallel.
Generally GPU rendering is much faster than CPU rendering, which is what allows modern games to run at around 60 FPS. CPU rendering produces more accurate results for complex lighting and texture algorithms.
However in modern render engines the visual differences between these two methods are almost unnoticeable except in the most complex scenes.
CPU rendering (sometimes referred to as “pre rendering”) is when the computer uses the CPU as the primary component for calculations.
It’s the technique generally favoured by movie studios and architectural visualisation artists.
This is due to its accuracy when producing photorealistic images, and because render times are less of an issue in these industries.
That said, render times can vary wildly and can become very long.
A scene with flat lighting and materials with simple shapes can render out in a matter of seconds. But a scene with complex HDRI lighting and models can take hours to render.
An extreme example of this is in Pixar’s 2001 film Monsters Inc.
The main character Sulley had around 5.4 million hairs, which meant scenes with him on screen could take up to 13 hours to render per frame!
To combat these long render times many larger studios use a render farm.
A render farm is a large bank of high-powered computers or servers that allow multiple frames to be rendered at once, or sometimes an image is split into sections that are rendered by each part of the farm. This helps reduce overall render time.
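The frame-splitting idea behind a render farm can be sketched as a simple scheduling function. The node count and frame range below are made-up examples:

```python
def split_frames(first, last, nodes):
    """Divide an inclusive frame range as evenly as possible across render nodes."""
    total = last - first + 1
    base, extra = divmod(total, nodes)
    chunks, start = [], first
    for i in range(nodes):
        size = base + (1 if i < extra else 0)  # early nodes absorb any remainder
        chunks.append((start, start + size - 1))
        start += size
    return chunks

# One minute of animation at 24 fps (frames 1-1440), spread across 8 nodes:
print(split_frames(1, 1440, 8))  # each node renders a block of 180 frames
```

With eight machines each taking 180 frames, the wall-clock render time drops to roughly an eighth of a single machine's.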
It is possible to render more advanced effects using the CPU as well.
These include techniques such as ray tracing, path tracing, photon mapping, and radiosity.
Ray tracing is where each pixel in the final image is calculated as a particle of light, simulated as it interacts with the objects in your scene.
However, thanks to hardware ray tracing support in NVIDIA's RTX 2000-series cards, ray tracing is making its way into mainstream games via GPU rendering.
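At its core, ray tracing boils down to firing a ray through each pixel and testing what it hits. A minimal sketch of the classic ray-sphere intersection test, with illustrative coordinates:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along a normalized ray to the nearest sphere hit, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # quadratic discriminant (a == 1: direction normalized)
    if disc < 0:
        return None               # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

# A camera ray fired straight down -z hits a unit sphere centered 5 units away:
print(ray_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # 4.0
```

A real renderer repeats this test per pixel against every object, then shades the nearest hit, which is exactly the workload RTX hardware accelerates.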
Path tracing calculates the final image by determining how the light will hit a certain point of a surface in your scene, and then how much of it will reflect back to the viewport camera.
It repeats this for each pixel of the final render.
It is considered the best way to get photorealism in your final image.
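The "how much light reflects back" step can be sketched with Lambert's cosine law, the diffuse term a path tracer estimates over and over per pixel. The surface values here are illustrative:

```python
import math

def lambert(normal, light_dir, albedo):
    """Fraction of incoming light a diffuse surface reflects toward the viewer,
    per Lambert's cosine law (both vectors assumed normalized)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * max(0.0, n_dot_l)  # surfaces facing away reflect nothing

# Light arriving along the surface normal reflects the full albedo:
print(lambert((0, 1, 0), (0, 1, 0), 0.8))  # 0.8
# Light arriving 60 degrees off the normal reflects half as much:
angle = math.radians(60)
print(lambert((0, 1, 0), (0, math.cos(angle), math.sin(angle)), 0.8))
```

A path tracer averages many such samples per pixel, bouncing rays recursively, which is where the photorealism and the long render times both come from.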
Photon mapping is where the computer fires 'photons' (rays of light in this instance) from both the camera and any light sources, which are then used to calculate the final scene.
This uses approximation to save computational power, but you can increase the number of photons to get more accurate results.
Using this method is good for simulating caustics as light refracts through transparent surfaces.
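The photon-count trade-off can be sketched with a toy density estimate: brightness at a point is inferred from how many stored photons landed nearby. The scene and numbers are made up for illustration:

```python
import math
import random

def estimate_brightness(photons, point, radius):
    """Photon-mapping style density estimate: brightness at a point is
    proportional to how many stored photons fell within a gather radius."""
    r2 = radius * radius
    hits = sum(1 for x, y in photons
               if (x - point[0]) ** 2 + (y - point[1]) ** 2 <= r2)
    return hits / (math.pi * r2 * len(photons))  # normalized photon density

# Fire photons uniformly over a 2x2 floor patch; the estimate should approach
# the true uniform density (0.25) as the photon count grows.
random.seed(1)
photons = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20000)]
print(estimate_brightness(photons, (0.0, 0.0), 0.3))
```

Fewer photons make this estimate noisier, more photons make it slower but more accurate, the same dial the article describes.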
Radiosity is similar to path tracing, except it only simulates lighting paths that are reflected off a diffuse surface into the camera.
It also accounts for light that has already reflected off other surfaces in the scene. This allows lighting to fill a scene more easily and simulates realistic soft shadows.
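The bounce-and-reflect behaviour behind radiosity can be sketched as an iterative solve of the classic equation B_i = E_i + rho_i * sum_j(F_ij * B_j). The two-patch scene and its numbers below are hypothetical:

```python
def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Jacobi-style radiosity solve: each pass bounces light between patches once;
    repeated passes let light from already-lit surfaces fill the scene."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] * sum(form_factors[i][j] * b[j]
                                                for j in range(n))
             for i in range(n)]
    return b

# Two facing patches: patch 0 emits light, patch 1 only reflects it back.
E   = [1.0, 0.0]            # emitted light per patch
rho = [0.8, 0.5]            # reflectance per patch
F   = [[0.0, 0.2],          # fraction of each patch's light reaching the other
       [0.2, 0.0]]
print(solve_radiosity(E, rho, F))
```

Each iteration adds one more bounce of indirect light, which is how radiosity produces the soft, filled-in shadows the article mentions.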
GPU rendering (used for real-time rendering) is when the computer uses a GPU as the primary resource for calculations.
This rendering type is usually used in video games and other interactive applications where you need to render anywhere from 30 to 120 frames a second to get a smooth experience.
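Those frame-rate targets translate directly into a per-frame time budget, a simple bit of arithmetic worth seeing:

```python
def frame_budget_ms(fps):
    """Milliseconds available to render one frame at a target frame rate."""
    return 1000.0 / fps

for fps in (30, 60, 120):
    print(fps, "FPS ->", round(frame_budget_ms(fps), 2), "ms per frame")
```

At 60 FPS the renderer has under 17 milliseconds to produce each frame, which is why real-time engines lean so heavily on the approximations described next.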
To achieve this, real-time rendering cannot use some of the advanced computational options mentioned before. Instead, many of those effects are approximated in post-processing.
Other effects are used to trick the eye into making things look smoother, such as motion blur.
Due to the rapid advancement in technology and developers creating computationally cheaper methods for great render results, limitations of GPU rendering are quickly becoming history.
That’s why games and similar media get better with each new console generation. As chipsets and developer knowledge improves, so do the graphical results.
GPU rendering doesn't have to be used only for real time; it's also valid for longer, offline renders.
It’s good for throwing out approximations of final renders relatively quickly so you can see how the final scene is looking without having to wait hours for a final render. This makes it a very useful tool in the 3D workflow while setting up lighting and textures.
There are dozens of render engines on the market and it can be difficult to decide which to use. Most 3D suites ship with a built-in render engine.
These built-in engines are usually fine for learning the basics of rendering, and can be used to get some nice final results. But they can be limiting compared to many incredible 3rd party render engines.
Here are some examples worth looking into:
V-Ray is a very common engine. It’s capable of using both CPU and GPU rendering so it’s very flexible and it’s available for Maya, Blender, and almost every other 3D suite out there.
Corona is another engine used a lot by architectural visualizers. It is very powerful, but only available for 3DS Max and Cinema 4D.
RenderMan is developed and used by Pixar studios for all of their films. It’s also used by many other large film studios. It can be used as a plug-in directly with Maya, or as a standalone product on Windows, Mac, and Linux computers.
You usually only need to learn one render engine; once you understand its workflow, you can use it to achieve almost any effect you want.
Author: Thomas Denham
Thomas is a 3D creative working with both high and low poly modelling for still renders or real time engines. He is currently working freelance after spending 4 years at a multi-national VR company. To see Thomas' work and learn more, feel free to look at his personal site.