3D Rendering in the CG Pipeline and Rendering Techniques
The rendering process plays an important role in the computer graphics development cycle. Rendering is the most technically complex aspect of 3D production, but it can be understood quite easily through an analogy: much like a film photographer must develop and print photos before they can be displayed, computer graphics professionals are burdened with a similar necessity. Let's take a look at the types of rendering and the major rendering techniques.
Types of Rendering
There are two types of rendering, their chief difference being the speed at which images are computed and finalized.
1. Real-Time Rendering
Real-time rendering is used prominently in gaming and interactive graphics, where images must be computed from 3D information at an incredibly rapid pace.
It is impossible to predict how a player will interact with the game environment, and images must be rendered in real time as the action unfolds.
For motion to appear fluid, a minimum of 18 to 20 frames per second must be rendered to the screen. Anything less than that and the action will look choppy.
Real-time rendering is improved by dedicated graphics hardware and by pre-computation. A great deal of a game environment's lighting data is pre-computed and "baked" directly into the environment's texture files to improve the render speed.
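The idea of "baking" lighting into textures can be sketched in a few lines. The following is a minimal, illustrative example (not any specific engine's API): each texel's base color is multiplied once by a precomputed light intensity, so no lighting math is needed at runtime.

```python
# Toy lightmap bake: multiply each texel's color by a precomputed
# light intensity so the runtime can sample a single texture.

def bake_lightmap(albedo, lightmap):
    """Combine base colors with precomputed per-texel light.

    albedo:   2D grid of (r, g, b) tuples, values 0..255
    lightmap: 2D grid of floats in 0.0..1.0 (precomputed light)
    """
    baked = []
    for row_colors, row_light in zip(albedo, lightmap):
        baked_row = []
        for (r, g, b), light in zip(row_colors, row_light):
            baked_row.append((round(r * light),
                              round(g * light),
                              round(b * light)))
        baked.append(baked_row)
    return baked

# A 1x2 texture: one white texel in full light, one in half shadow.
albedo = [[(255, 255, 255), (255, 255, 255)]]
lightmap = [[1.0, 0.5]]
print(bake_lightmap(albedo, lightmap))
```

The expensive part in a real engine is computing the lightmap itself (often with offline raytracing or radiosity); the bake step shown here is what makes that cost disappear at runtime.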
2. Offline Rendering
Offline rendering is used where speed is less of an issue, with calculations typically performed using multi-core CPUs rather than dedicated graphics hardware.
Offline rendering is frequently seen in animation and effects work, where photorealism and visual complexity are held to a much higher standard. Since there is no unpredictability as to what appears in each frame, large studios have been known to dedicate up to 90 hours of render time to individual frames.
Because offline rendering occurs in an open-ended time frame, higher levels of photorealism can be achieved than with real-time rendering. Environments, characters, and their associated lights and textures are typically allowed higher polygon counts and 4K-resolution texture files.
Rendering Techniques
There are three major computational techniques used for rendering. Each has its own set of advantages and disadvantages, making all three viable options in certain conditions.
Scanline rendering is used when speed is a necessity, which makes it the technique of choice for real-time rendering and interactive graphics. Instead of rendering an image pixel by pixel, scanline renderers compute on a polygon-by-polygon basis. Scanline techniques used in combination with precomputed lighting can achieve speeds of 60 frames per second or better on a high-end graphics card.
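The polygon-by-polygon idea can be shown with a toy rasterizer. This is an illustrative sketch, not production code: for each horizontal scanline, it finds where the triangle's edges cross that line and fills the pixels in between.

```python
# Toy scanline rasterizer: fill a triangle one horizontal line at a time.

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels covered by a triangle."""
    pixels = set()
    edges = [(v0, v1), (v1, v2), (v2, v0)]
    for y in range(height):
        ycenter = y + 0.5
        xs = []
        for (x0, y0), (x1, y1) in edges:
            # Does this edge cross the scanline's center?
            if (y0 <= ycenter < y1) or (y1 <= ycenter < y0):
                t = (ycenter - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) >= 2:
            # Fill the span between the leftmost and rightmost crossings.
            for x in range(int(min(xs) + 0.5), int(max(xs) + 0.5)):
                pixels.add((x, y))
    return pixels

tri = rasterize_triangle((1, 1), (6, 1), (1, 6), 8, 8)
```

Real scanline renderers process all polygons overlapping each scanline at once and resolve visibility with a depth buffer, but the per-line span-filling shown here is the core of the technique.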
In the raytracing technique, for every pixel in the scene, one or more rays of light are traced from the camera to the nearest 3D object. The light ray is passed through a set number of "bounces," which can involve reflection or refraction depending on the materials in the 3D scene. Each pixel's color is computed algorithmically based on the light ray's interaction with objects in its traced path. Raytracing is capable of greater photorealism than scanline rendering but is exponentially slower.
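A minimal sketch makes the per-pixel idea concrete. The scene below is made up for illustration (one sphere, no bounces, camera at the origin looking down the negative z-axis): one ray is traced through each pixel and tested against the sphere.

```python
# Minimal per-pixel raytrace: one ray per pixel, one sphere, no bounces.
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    a = dx*dx + dy*dy + dz*dz
    b = 2 * (ox*dx + oy*dy + oz*dz)
    c = ox*ox + oy*oy + oz*oz - radius*radius
    disc = b*b - 4*a*c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2*a)
    return t if t > 0 else None

def render(width, height):
    """Trace one ray through each pixel; '#' where the sphere is hit."""
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map pixel (i, j) to a point on an image plane at z = -1.
            x = (i + 0.5) / width * 2 - 1
            y = 1 - (j + 0.5) / height * 2
            t = hit_sphere((0, 0, 0), (x, y, -1),
                           sphere_center, sphere_radius)
            row += "#" if t is not None else "."
        rows.append(row)
    return rows

for line in render(12, 6):
    print(line)
```

A full raytracer would, at each hit point, spawn further rays for reflection, refraction, and shadows; the quadratic intersection test above is the primitive it builds on.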
Radiosity is calculated independently of the camera and is surface-oriented rather than computed pixel by pixel. The main function of radiosity is to simulate surface color by accounting for indirect illumination more explicitly. Radiosity is characterized by color bleeding and soft, graduated shadows, where light from brightly colored objects "bleeds" onto nearby surfaces.
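The core of classic radiosity is solving, for every surface patch, an equation of the form B_i = E_i + reflectance_i x sum over j of F_ij x B_j, where F_ij is a form factor describing how much light travels between patches i and j. The sketch below iterates that equation on three hypothetical patches with made-up numbers, purely to show how indirect light "bleeds" onto surfaces that emit nothing themselves.

```python
# Toy radiosity solve: repeatedly gather bounced light between patches
# until the solution stops changing. All numbers are illustrative.

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    """Iterate B_i = E_i + rho_i * sum_j(F_ij * B_j) to convergence."""
    n = len(emission)
    B = list(emission)  # start with direct emission only
    for _ in range(iterations):
        B = [
            emission[i] + reflectance[i]
            * sum(form_factors[i][j] * B[j] for j in range(n))
            for i in range(n)
        ]
    return B

emission = [1.0, 0.0, 0.0]     # patch 0 is a light source
reflectance = [0.0, 0.5, 0.5]  # patches 1 and 2 reflect half the light
form_factors = [               # F[i][j]: how much of j's light reaches i
    [0.0, 0.3, 0.3],
    [0.3, 0.0, 0.3],
    [0.3, 0.3, 0.0],
]
B = solve_radiosity(emission, reflectance, form_factors)
# Patches 1 and 2 end up lit entirely by indirect illumination.
```

Note how the solution never references a camera: once the patch radiosities are known, the scene can be viewed from any angle, which is why radiosity results are often baked into textures for real-time use.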
In practice, radiosity and raytracing are often used in combination with one another, using the advantages of each system to achieve extraordinary levels of photorealism.
Rendering is a technical topic, but it can be quite interesting once you start to take a more in-depth look at the common rendering techniques above.