Virtual production: The future with innovation

Mohit Naik
9 min read · Dec 8, 2020


Virtual production is often used to help visualize complex scenes or scenes that simply can't be filmed for real. In general, though, virtual production can refer to any technique that allows filmmakers to plan, imagine, or complete some kind of filmic element, typically with the aid of digital tools. Previs, techvis, postvis, motion capture, VR, AR, simul-cams, virtual cameras, and real-time rendering, and combinations of those, are all terms now synonymous with virtual production.

[Image: LED light screen with real-time rendering]
[Image: Green screen filming]

What does Rendering mean?

Rendering is the process of generating a two-dimensional or three-dimensional image from a model using application programs. Rendering is commonly used in architectural design, video games, animated movies, simulators, TV graphics, and design visualization. The techniques and features used vary according to the project. Rendering helps increase efficiency and reduce cost in design.

There are two categories of rendering: pre-rendering and real-time rendering. The striking difference between the two lies in the speed at which the computation and finalization of images take place.

· Real-Time Rendering:

This is the prominent rendering technique for interactive graphics and gaming, where images must be created at a rapid pace. Because user interaction is high in such environments, images must be generated in real time. Dedicated graphics hardware and pre-compiling of the available information have improved the performance of real-time rendering.

· Pre-Rendering:

This rendering technique is employed in environments where speed isn't a priority, and the image calculations are performed using multi-core central processing units rather than dedicated graphics hardware. It is commonly used in animation and visual effects, where photorealism must meet the highest possible standard.

What is Rendering? (For 3D & CG Work)

3D rendering is the process by which a computer takes raw information from a 3D scene (polygons, materials, and lighting) and calculates the final result. The output is usually a single image or a series of images rendered and compiled together.

Rendering is typically the final phase of the 3D creation process, unless you take your render into Photoshop for post-processing.

If you’re rendering an animation it will be exported as a video file or a sequence of images that can later be stitched together. One second of animation usually has at least 24 frames in it, so a minute of animation has 1440 frames to render. This can take quite a while.
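The frame arithmetic above is simple enough to sketch directly. A minimal Python helper (the function name is illustrative, not from any renderer's API):

```python
FPS = 24  # the usual minimum frame rate for film animation

def frame_count(seconds: float, fps: int = FPS) -> int:
    """Number of frames that must be rendered for a clip of this length."""
    return int(seconds * fps)

print(frame_count(1))   # 1 second of animation  -> 24 frames
print(frame_count(60))  # 1 minute of animation -> 1440 frames
```

At feature-film length the numbers get large fast: a 90-minute film at 24 FPS is just under 130,000 frames.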

There are generally considered two types of rendering: CPU rendering and GPU (real-time) rendering.

The difference between the two rendering methods comes down to the hardware itself.

CPUs are optimized to handle fewer, more complex sequential tasks quickly, whereas GPUs are built to run huge numbers of smaller calculations in parallel.

Generally, GPU rendering is far faster than CPU rendering. This is what allows modern games to run at around 60 FPS. CPU rendering is best at getting more accurate results from lighting and more complex texture algorithms.

However, in modern render engines, the visual differences between these two methods are almost unnoticeable except in the most complex scenes.

CPU Rendering

CPU rendering (sometimes referred to as "pre-rendering") is when the computer uses the CPU as the primary component for calculations.

It’s the technique generally favored by movie studios and architectural visualization artists.

This is due to its accuracy when creating photorealistic images, and the fact that render times are not a major constraint in these industries. That said, render times can vary wildly and can become very long.

A scene with flat lighting and materials with simple shapes can render out in a matter of seconds. But a scene with complex HDRI lighting and models can take hours to render.

An extreme example of this is in Pixar’s 2001 film Monsters Inc.

The main character Sully had around 5.4 million hairs, which meant scenes with him on screen took up to 13 hours to render per frame!

To combat these long render times, many larger studios use a render farm.

A render farm is a large bank of high-powered computers or servers that allows multiple frames to be rendered at once; sometimes a single image is split into sections that are rendered by different parts of the farm. This reduces overall render time. In other words, a render farm is a computer system or data center specialized in the calculation of computer-generated imagery (CGI), used mainly to create films, visual effects, and architectural visualizations.

How long it takes to calculate one single frame highly depends on the:

· complexity of the scene

· render settings

· available computing power

This means a scene can be calculated within seconds, or it can take many minutes or even hours.

Take a simple scene as an example. A computer system that can calculate a simple scene in 10 seconds per frame still needs about 4 hours to calculate a 1-minute sequence. The system will work at full capacity for those 4 hours, making it unusable during that time.

It’s not uncommon for a high-quality animation based on complex 3D scenes with composite lighting calculations to take up to 30 minutes of calculation per frame.
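The estimates above follow directly from multiplying frame count by per-frame cost. A back-of-the-envelope sketch in Python (the function is illustrative, not tied to any renderer):

```python
def total_render_time_hours(seconds_of_animation: float,
                            seconds_per_frame: float,
                            fps: int = 24) -> float:
    """Estimate total render time: frames needed * cost per frame."""
    frames = seconds_of_animation * fps
    return frames * seconds_per_frame / 3600  # seconds -> hours

# Simple scene: 10 s per frame, 1-minute sequence
print(total_render_time_hours(60, 10))    # 4.0 hours

# Complex scene: 30 minutes (1800 s) per frame, 1-minute sequence
print(total_render_time_hours(60, 1800))  # 720.0 hours (30 days!)
```

The second number makes clear why a single workstation is not enough once per-frame times reach the half-hour range.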

It is possible to render more advanced effects using the CPU as well.

These include techniques such as:

Ray Tracing

This is where each pixel in the final image is calculated by simulating a particle of light interacting with the objects in your scene.

It’s excellent at making realistic scenes with advanced reflection and shadows, but it requires a lot of computational power.

However, due to recent advances in GPU technology in NVIDIA's RTX 2000-series cards, ray tracing as a rendering method may make its way into mainstream games via GPU rendering in the coming years.
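The geometric core of ray tracing is an intersection test: given a ray, find where (if anywhere) it hits scene geometry. As a toy sketch, here is the classic ray-sphere test in plain Python (names and the tuple-based vectors are illustrative; a real renderer would use optimized vector math and acceleration structures):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for t, the
    quadratic at the heart of every ray tracer's hit test.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearest of the two roots
    return t if t > 0 else None

# A ray fired from the origin along +z hits a unit sphere centered at z=5
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

Running this test once per pixel, then again for every bounce, reflection, and shadow ray, is where the computational cost comes from.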

Path Tracing

Path tracing calculates the final image by determining how the light will hit a certain point of a surface in your scene, and then how much of it will reflect back to the viewport camera.

It repeats this for each pixel of the final render.

It is considered the best way to get photorealism in your final image.

Photon Mapping

The computer fires ‘photons’ (rays of light in this instance) from both the camera and any light sources which are used to calculate the final scene.

This uses approximation to save computational power, but you can increase the number of photons to get more accurate results.

Using this method is good for simulating caustics as light refracts through transparent surfaces.

Radiosity

Radiosity is similar to path tracing except it only simulates lighting paths that are reflected off a diffused surface into the camera.

It also accounts for light sources that have already reflected off other surfaces in the scene. This allows lighting to fill a full scene more easily and simulates realistic soft shadows.

How does a render farm work?

Since a render farm has many render nodes, the frames of a 3D sequence can be calculated simultaneously across these nodes.

Returning to the earlier numbers: at 30 minutes per frame, a 20-second sequence (480 frames) would take a single machine about 10 days. Spread across one hundred render nodes instead of one local system, those 10 days shrink to roughly 2.5 hours.

This makes it possible to reduce the rendering time notably.
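The scheduling idea is simple because frames are independent of one another. A sketch using Python's standard thread pool as a stand-in (a real farm distributes frames to separate machines, and `render_frame` would invoke an actual renderer; both names here are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number):
    """Stand-in for a real renderer: pretend each frame costs a moment."""
    time.sleep(0.01)
    return f"frame_{frame_number:04d}.png"

# Each worker plays the role of one render node. Because frames are
# independent, the farm can hand them out and compute them simultaneously.
with ThreadPoolExecutor(max_workers=10) as farm:
    results = list(farm.map(render_frame, range(100)))

print(results[0], results[-1])  # frame_0000.png frame_0099.png
```

With 10 workers the 100 simulated frames finish in roughly a tenth of the serial time, which is exactly the speed-up logic behind the 100-node example above.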

GPU Rendering

GPU rendering (used for real-time rendering) is when the computer uses a GPU as the primary resource for calculations.

This rendering type is usually used in video games and other interactive applications where you need to render anywhere from 30 to 120 frames a second to get a smooth experience.

To achieve this result, real-time rendering cannot use some of the advanced computational techniques mentioned before. Instead, much of their effect is approximated in post-processing.
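Those frame rates translate into a hard time budget per frame, which is what forces the approximations. The arithmetic, sketched in Python (the function name is illustrative):

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at a given frame rate."""
    return 1000 / fps

for fps in (30, 60, 120):
    print(f"{fps} FPS -> {frame_budget_ms(fps):.1f} ms per frame")
# 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 120 FPS -> 8.3 ms
```

Compare those milliseconds to the minutes or hours per frame quoted for CPU rendering above, and the gap the real-time approximations have to close becomes obvious.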

Other effects are used to trick the eye into making things look smoother, such as motion blur.

Due to the rapid advancement in technology and developers creating computationally cheaper methods for great render results, limitations of GPU rendering are quickly becoming history.

That’s why games and similar media get better with each new console generation. As chipsets and developer knowledge improves, so do the graphical results.

GPU rendering doesn't have to be used only for real-time work; it's valid for longer renders too.

It’s good for throwing out approximations of final renders relatively quickly so you can see how the final scene is looking without having to wait hours for a final render. This makes it a very useful tool in the 3D workflow while setting up lighting and textures.

Render Engines

Depending on what the final product is, a 3D artist will need a render engine to turn their 3D scene into a completed 2D image.

It is the render engine’s job to calculate all the lighting and geometry in your scene and how they all interact with each other and your materials.

It can be quite an intensive job for your PC and can take several hours depending on the scene. How your render is calculated, and how it eventually turns out, depends on your render engine.

There are dozens of render engines on the market and it can be difficult to decide which to use.

Whichever 3D software you use for your workflow will come with its own render engine built in.

These are usually fine for learning the basics of rendering and can be used to get some nice final results. But they can be limiting compared to many incredible 3rd party render engines.

Here are some examples worth looking into:

V-Ray, Corona, RenderMan

What is cloud rendering?

Cloud rendering works in a similar way to general cloud computing; it is a rendering method built on top of a render farm.

Users package their customized files and upload them to the cloud rendering server through the cloud rendering client. The service then makes full use of the hardware resources in the cluster network, where large numbers of computers calculate the complex 3D scene to generate a preview image or the final animated image for visual-effect adjustment or post-production compositing. Better rendering hardware, lower rendering costs, and greater ease of use: these are the points where cloud rendering has advantages over traditional rendering.

What is the difference between a standard render farm and a cloud rendering render farm?

Generally, a render farm is a cluster of computers used to produce CGI (computer-generated imagery). A standard one means building it on your own, which means you need to buy lots of computers and fix them yourself whenever there's a problem. It is a local render farm, using local nodes.

A cloud render farm, also called an online render farm, is a render center that you can send your assets to and render on remotely. Its nodes are connected over the internet, the "cloud." It has become a kind of online service: artists upload their tasks to the service provider and download the final render results when completed.
