Compositing


In the following sub-chapters we will be going over useful attributes and tips that will help us successfully manipulate RenderMan outputs in compositing:

  • Color Management
  • Fusion Comp
  • Nuke Comp
  • Relighting
  • 3D compositing
  • Deep Compositing
  • Color Grading and Finishing Touches

Before we start, please open the attached TGT_Compositing.zip file, which will ensure the best interaction possible with this training.

Color Management


Just as in the Linear Workflow training for RenderMan, it's very important to composite images in linear gamma. If we don't understand proper color management, we risk improper merging of alpha channels, incorrect color transforms, film grain issues and color banding. The values we worked so hard to light and shade properly will become muddied and frustrating to work with if we don't respect a correct linear gamma workflow in post.

Linear workflows work best when images have the correct color space and color depth. In other words, images need to be rendered in the correct bit depth and image format; if they aren't, we risk inefficient I/O speeds and insufficient data to work linearly, bringing us back to the same issues we mentioned before.

Let's oversimplify things and break down an entire compositing workflow into three simple steps.

Basic Compositing Workflow

As we can see, our images need to meet certain standards to behave properly while we manipulate them in post, but one thing is clear: it's always best to do all calculations in linear space. Let's go into a bit more detail so we can understand why.

Linear Gamma

Gamma correction is a term that gets tossed around a lot and is rarely understood. It was invented to compensate for the luminance limitations of the Cathode Ray Tubes (CRTs) in old monitors, but nowadays it is mostly understood as a form of value and color compensation for human vision. You're probably wondering how this affects a render. Compositing is just a bunch of simple math, so any gamma baked into the original image will skew what the compositor does to it: if we change a value by 1, we expect an equal change in luminance. You can start to see how this becomes a problem if each image has its own gamma, because calculations will behave differently for every image.

This is why it's called a linear workflow: we are converting all kinds of different data curves to a simple and reliable linear curve, which always gives us predictable and reasonable results.
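
As a quick reference, here is a minimal Python sketch of the standard sRGB transfer functions involved (the function names are ours, not from any particular package), along with a demonstration of why averaging values in sRGB space gives a different answer than averaging them in linear space:

```python
def srgb_to_linear(v):
    """Decode a single sRGB-encoded value (0-1) to linear light."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode a linear-light value (0-1) back to sRGB for display."""
    return v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

# Averaging two pixels in sRGB space does not give the same result as
# averaging them in linear space and re-encoding -- which is exactly why
# merges, blurs and grain must happen on linearized data.
a, b = 0.2, 0.8
print((a + b) / 2)                                                   # naive sRGB average, ~0.50
print(linear_to_srgb((srgb_to_linear(a) + srgb_to_linear(b)) / 2))   # linear average, ~0.60
```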

sRGB Merge vs Linear Merge

In the example above, we can see how incorrectly merged color and alpha can look muddy and undesirable. On the right, with linear gamma, colors blend naturally and the alpha falloff is perfectly smooth. The difference is even more apparent when merging CG elements with film plates, matte paintings or with each other: if we don't linearize our images, we will start seeing black fringes and incorrect color values.

A correct linear workflow for compositing would look something like this:

Optimal Gamma Workflow

Modern floating point formats already store data with a linear curve, but many legacy formats like jpg, tif, tga and png don't; they use the old sRGB compensatory color space. This is why we need to transform all these sRGB curves to behave linearly, so that our compositor can perform its operations evenly and reliably across multiple image formats.

In the image below, we can see how Fusion deals with linear color space. All operations happen in linear space, including our film grain, after which we need to add an sRGB gamma curve to the image for final output. We can either do this in the final Saver node, or do it manually via a Gamut tool; both can apply the sRGB color space to our image.

Once we add the sRGB color space, the image will be viewable on any modern computer monitor as well as any HD broadcast standard television, since Rec. 709 shares the same chromaticities.

Fusion Color Management

Tip

Setting everything to a linear workflow is not so much about viewing the data as it is about manipulating it in the best possible manner.

This color management workflow differs between Nuke and Fusion. Fusion includes color space and gamma handling in every Loader and Saver, and it also has a separate gamma tool, but it's up to you to handle gamma manually. Nuke, on the other hand, tries its best to do everything automatically by abiding by global color management preferences, which linearize all incoming gamma as long as you configure Nuke to treat certain image types with a consistent standard: for example, an 8-bit image will always be read as sRGB and a 32-bit image will always be treated as linear. This keeps things consistent (and saves many headaches) when doing texture painting, 3D rendering and compositing.
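
For reference, these project-level defaults can also be set from Nuke's Python API. A minimal sketch, assuming the usual root LUT knob names ('int8Lut', 'floatLut', 'monitorLut'), which may vary between Nuke versions and color management modes:

```python
import nuke

root = nuke.root()
# Treat all 8-bit files (jpg, png, tga...) as sRGB-encoded on read/write.
root['int8Lut'].setValue('sRGB')
# Treat floating point files (EXR) as already linear.
root['floatLut'].setValue('linear')
# View the linear working space through an sRGB monitor transform.
root['monitorLut'].setValue('sRGB')
```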

Nuke Color management Settings

Automating everything can be the downfall of a pipeline if the user is not aware of what is happening behind the scenes, which brings us to our next point: color spaces.

Color Spaces

Color Models and sRGB Color Space

A color space (sometimes called a color profile) is the way an image format encodes color and gamma. Most color spaces belong to one of the two most popular color models: RGB and CMYK. We will be using the sRGB color space for The Grand Tour, since it is the standard for the web and also shares the same chromaticities as the HDTV standard color space, Rec. 709.

Tip

Always remember that the compositing program needs linear data to provide the most accurate and predictable image, so think of linear data as lacking a color space, thus being a truly unbiased way of interpreting color and gamma.

Image Types

OpenEXR

We will be using OpenEXR (commonly known as EXR) as our main image type. Developed by ILM, EXR is the standard and most extensive format for rendering floating-point images. To compensate for big file sizes, EXR supports many different data types and compressions, among them half-float images, floating point images with half the data, which is what I used for The Grand Tour. Getting rid of half the data sounds like a big loss, but in reality half floats hold more than enough stops of luminance for most uses, except for data passes like Z, World Normal and World Position, where we need all the precision we can get.

In other words, since we are transforming gamma curves back and forth and manipulating the images constantly, we need as much data as we can get. This mindset will help us avoid image artifacts such as banding and visible compression blocks. This is why it's important to work with floating point image types such as EXR, which can carry millions of values per channel, unlike the 8-bit standard of just 256.

Just like TGA has RLE compression and TIFF has LZW, there are several compression types within the EXR format. I chose ZIP (or deflate), as it is lossless and very fast for Fusion to decompress. One key thing to consider is how your compositor renders the frame: Fusion is a whole-frame renderer, while Nuke is a scanline renderer, so choosing the right compression matters. Nuke seems to thrive with ZIPS (note the "S") compression, which compresses every single scanline, whereas ZIP compresses blocks of 16 scanlines.
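
As a practical sketch, this is roughly how a half-float EXR with ZIP compression could be written out of Nuke via Python; the exact knob value strings ('16 bit half', 'Zip (16 scanlines)') are assumptions that can differ between Nuke versions:

```python
import nuke

# Write a half-float, ZIP-compressed EXR sequence (path is a placeholder).
write = nuke.nodes.Write(file='renders/beauty.####.exr', file_type='exr')
# Half float keeps plenty of dynamic range for beauty passes...
write['datatype'].setValue('16 bit half')
# ...while data passes (Z, Wp, Wn) are better kept at full float.
write['compression'].setValue('Zip (16 scanlines)')
```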

Let's analyze our composites hands on and see how it all fits together in post.

Fusion Comp


Fusion, like Nuke, allows us to composite our renders in a native 32-bit floating point space, where all our floating point renders are used to their full potential. In the image below we can see a not-very-legible overview of the entire comp; don't worry, we'll be breaking it down in more detail. Let's open the Fusion comp from the attached project files to follow along.

Depending on how you develop a compositing pipeline, there can be several methods for starting a comp. Some compositors like to rebuild the beauty pass entirely using AOV's, which gives the most creative control, but it can be very time consuming and I/O heavy. I chose to use AOV's to isolate and color correct the beauty pass as needed, relying more on finalizing the shot during 3D rendering. This is usually much more I/O friendly, less cumbersome and much more straightforward for stereo pipelines, where heavy compositing needs to be kept to a minimum.

For the Fusion comp, we are using a Chromatic Aberration plugin from the Krokodove suite, but we could also split the three color channels and shift them manually, as we do in the Nuke comp.

Fusion Composition (Flow)

Understanding concatenation is crucial to building an efficient and clean composite that will not degrade the image.

As we will learn in more detail in the Relighting sub-chapter, we need to merge our AOV data with our beauty pass in order to feed the compositor the correct data for its relighting tools. To merge the 3D position data with the beauty pass, we use the Channel Booleans tool. Very much like the Shuffle node in Nuke, Channel Booleans can re-route any channel into any other channel.

AOV metadata merge

Because we have fed this data into the beauty pass's node stream, Fusion will carry it throughout the composite and make it available to any tool that uses positional data; the data is not lost in successive tools and can be used at any time. Below, we can see in more detail how the Volume Mask tool picks 3D masks that we can later use to drive or isolate values for color correctors, shader tools or a merge. This 3D mask is the basis for the relighting in Fusion.

Volume Masks

In the image below we can see how, thanks to the positional data, our masks stick to the scene, which allows us to color correct and change values in our render procedurally instead of animating masks manually. In our case, we used them to drive a color corrector tool that adds visual emphasis to specific places, like our treetop, guiding the eye from the top left down to our car. We also isolated and color corrected some of the grass with the help of the specular pass, which was itself isolated by the 3D masks. This added variance to the grass saturation and luminance in key places, making it look a little more random.

Mask picking in Fusion

We are also using the Moss AOV (REYES only as of version 20) to isolate the moss on the cliff, which we can use as a mask for color correction. With this mask I was able to use a color corrector node to bring up the saturation and contrast by about 50%. It's hard to anticipate exact color balance during rendering, so having separate AOV's to make subtle color changes is very helpful.

Moss AOV

Here we can see the Maya fluid clouds merged. Since I rendered both clouds together, we are using masks to separate the background from the foreground; we could also have rendered the two clouds separately. We can also see the 3D matte painting being built for our sky background and teapot. We are using a bent, texture-mapped cloud image to get better parallax. To achieve this, we use the imported FBX cliff geometry as a placement reference and the imported Maya ASCII camera to render the 3D matte painting.

Clouds

The next step in the comp is to add some animated 3D dust particles, which will help us add some life to the flythrough. We are again using 3D compositing to achieve this through Fusion's 3D particles.

We are also doing some slight luminance level balancing after merging the 3D render with the sky matte painting, as well as using several AOV's to color correct the composition.

The SSS AOV is being used to add extra vibrance to the character and mushrooms. The reflection (specular indirect) is being used to isolate the hovercraft's windshield and add a sun glare at the rim. Finally, the word Turbo is being isolated with the incandescence AOV to add extra importance to the text.

Fusion particles and AOV's for Color Correction

Next, we are adding some color vibrance to the center of the image to help the eye focus on our hovercraft and character. We have also merged our car dust, which is a separate render from our beauty pass. I did this in order to have the flexibility of shadow sampling the volumes in Maya more efficiently and have more control of the overall effect in post.

Car Dust and CC

Another great reason to split the car dust into a separate render was to use it as a mask for effects. In this case we are using the car dust to blur the background, which adds a nice heat haze effect and makes the dust pop by leaving it sharper than the background.

Car Dust - No Blur Vs Blurred Background

Finally, we apply the main color grade and other effects that simulate a vintage film stock, such as a vignette, chromatic aberration and film grain. We go into more detail in the color grade sub-chapter. More importantly, we need to burn in an sRGB color profile for an 8-bit output such as tga or QuickTime, because we have been working in linear space and that is too dark to view properly on a TV or monitor.

As we can see, using AOV's to manipulate an image in post can add a lot of room for experimentation in real time.

Composition Color Grade and finalizing

Nuke Comp


Nuke and Fusion follow the same workflow with different tools, except Nuke offers a true interactive 3D relighting tool and deep image compositing support, which Fusion partly compensates for with its volume tools.

For the most part, standard tools like merges, glows, blurs and color correctors perform the same math operations in both applications, but since they handle slider values and gamma differently, it is almost impossible to match the original Fusion results pixel for pixel. So I took this as an opportunity to come up with a new, cartoony version of the image instead of the original naturalistic color grade.

As we can see in the image below, I tried to follow a very similar node progression to the original Fusion comp, starting with merging the data AOV's into our image. Let's open the Nuke script included in the project files to follow the breakdown.

Nuke Script

To merge the AOV's into the beauty pass, we are using a Copy node to copy the color from the AOV's into data channels. For the Relight tool, we need normal and position data channels, neither of which is available by default in Nuke, so we need to create them using the Relight node so they become available inside the Copy node's dropdown. There is more detailed info in the Relighting sub-chapter.

This will ensure we have the metadata available throughout the node script.

AOV data merge

After we have placed our lights and are done with the relighting process, we need to transform the RGB output into grayscale luminance to drive a color corrector node. This allows us to change the values of the original render instead of merging in new color information, which can be destructive to the original dynamic range and color further down the comp.
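
The RGB-to-luminance conversion is just a weighted sum; a minimal numpy sketch using the Rec. 709 weights (the helper name is ours):

```python
import numpy as np

def relight_to_mask(rgb):
    """Collapse a relit RGB image (H x W x 3, linear) into a single-channel
    luminance mask, using Rec. 709 weights, suitable for driving a color corrector."""
    weights = np.array([0.2126, 0.7152, 0.0722])
    return np.clip(rgb @ weights, 0.0, 1.0)
```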

Relight

This is what our final relighting mask looks like. As some of you might have noticed, it is slightly different from Fusion's, because I wanted to add some more motivated lighting in the foreground and in some other key places to make the lighting directionality look manipulated and cartoony, as opposed to Fusion's naturalistic approach. The intensity of the effect is also much more exaggerated.

Using a CC and a Hue Correct node, we are manipulating the image's saturation, gamma and gain to add vibrance and contrast to certain parts of the image where we need to draw the eye.

Relight result

This is where we start using our AOV's to manipulate color in our final image. We are using the specular AOV to bring out the hovercraft's glass by tinting it orange like the sun haze, and we are also using the Moss AOV to add saturation to our cliff moss. To do this, we convert the AOV color channels into standard RGB channels and then into alpha information to mask the corresponding color corrector nodes.

We are also merging our clouds in two separate merges, one cloud for the foreground and another for the background. Finally, we merge our 3D matte painting, which is built from a sky image and our teapot render, both mapped to 3D cards to get better parallax shift.

Matte painting and AOV's

Still taking advantage of our AOV's, we are using the SSS AOV to emphasize the character's skin and the mushrooms. The incandescence AOV is allowing us to isolate the "Turbo" sign on the car.

We are also doing some manual work to add a slight sun glare in the negative space formed between the car and tree, as if the sun were seeping through the clouds.

We are also merging our car dust, first doing a little bit of CC work and bringing up the gain to make it more prominent. We are also using the luminance of the dust to blur the main render; this simulates an exhaust heat haze and emphasizes the dust by leaving it sharper than the background.

AOV's and Dust

The last part of the node tree is where we add the finishing touches to our image. We are changing the image in several ways, including hue shifting the middle of the image, adding our main color grade and adding a lens vignette. To finish off the vintage stock effects, we are adding a chromatic aberration to the edges of the frame and some film grain, which not only helps with the look, but is also crucial for eliminating color banding by introducing slight variation between colors that can't be represented by an 8-bit output.

Final Grade and Finishing

When comparing the images, the Fusion final is more natural looking, while our Nuke image was made a bit more colorful and cartoony.

Final images

Relighting


As we learned in the comp breakdowns above, thanks to Relighting, re-evaluating and improving upon our shot doesn't have to stop at color correction and effects. Even though relighting techniques are different in Fusion and Nuke, they both use the same AOV's, so let's understand them first.

The World Position AOV, or Wp, holds the position of the geometry in the scene in world space. The pass can also be encoded in camera space, but we need it in world space so that it retains its global positioning independent of camera animation. This means we can pick a pixel value in post and it will represent an absolute position in space, which is crucial for relighting, making 3D masks and so on.

World Position AOV

As we can see, these passes can be difficult to judge visually in their raw form, sometimes even appearing invisible (as is the case with Z depth), but we can remap the values to a reasonable viewing range (0-1) by normalizing them in post. Each application has a way of normalizing data (Fusion has a normalizer built into the viewer, for example), but normalization should only be used for viewing, because the software needs the raw data to interpret the pass correctly; never normalize these data AOV's within the comp operations themselves.
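
For viewing purposes only, normalization is simply a remap of the pass's minimum and maximum into 0-1; a rough numpy sketch (never feed the result back into the relighting tools):

```python
import numpy as np

def normalize_for_viewing(aov):
    """Remap a raw data AOV (e.g. Wp or Z) into 0-1 purely for display.
    The raw, un-normalized values are what the relighting tools actually need."""
    lo, hi = aov.min(), aov.max()
    return (aov - lo) / (hi - lo) if hi > lo else np.zeros_like(aov)
```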

Tip

If you have a hard time viewing the data in your software of choice, don't worry, it's still there for the compositor to use even if you can't see it.

Our World Normal AOV, or Wn, is also in world space and tells the compositor the surface normal of our objects, or which way a point in space is facing. The compositor needs this data to determine the qualities of the surface it is relighting.

Normal AOV

We also use a good ol' Z depth AOV. Z passes have been used for many years to achieve fog, depth of field, atmospheric perspective and other depth-based tricks in post. In our case, we are using it to feed our relighting tools information about the depth and dimensions of the scene in world space.

Z Depth AOV

Tip

It is very important we render data AOV's with the highest color depth and least compression possible in order to get very accurate results in post.

Now that we understand the data AOV's a little better, let's see how we can use these passes in Fusion and Nuke.

Fusion

Fusion, like Nuke, allows us to use our floating point data natively within the application. Even though it has no dedicated 3D relighting tool that can use 3D lights, Fusion has its own volume tools and a 2D Shader tool to relight, reshade or simply color correct if we feed it the appropriate position and normal data. As discussed above, we are using Wp, Wn and Z, which allows Fusion to understand the entire 3D scene.

Unlike Nuke, Fusion can interpret normal and position information by default, so we are only converting the color to data and skipping the channel creation.

To convert the color information to position and normal data, we need to use the Channel Booleans tool. This Fusion tool can perform many different mathematical operations, but most importantly for us, it has an Aux Channels tab where we can redirect color information into specific data (Aux) channels. To do this, we check Enable Extra Channels and navigate to the desired Aux channel. From the dropdown, we can choose any incoming color channel; in this case we redirect each AOV's R, G and B channels into their respective X, Y and Z normal and position vectors. The Z buffer Aux channel reads the luminance (or lightness) of the Z depth AOV. Each redirection needs its own Channel Booleans tool.
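
Conceptually, each Channel Booleans tool performs a one-to-one channel redirect; the numpy sketch below shows the equivalent mapping outside of Fusion (the array and key names are ours, purely illustrative):

```python
import numpy as np

def build_aux_channels(wp_rgb, wn_rgb, zdepth_rgb):
    """Map color renders of the data AOV's into the aux channels a relighting
    tool expects. Each input is an H x W x 3 array read from the AOV render."""
    aux = {}
    # World position: R, G, B -> position X, Y, Z
    aux['position'] = wp_rgb.copy()
    # World normal: R, G, B -> normal X, Y, Z
    aux['normal'] = wn_rgb.copy()
    # Z buffer: luminance of the Z depth AOV
    aux['z'] = zdepth_rgb @ np.array([0.2126, 0.7152, 0.0722])
    return aux
```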

As we can see in the image below, the beauty pass is in the background, with multiple Channel Booleans merging data into the node stream.

Channel Booleans

Once we have the data for relighting, we can start making our 3D masks using the VolumeMask tool. We pick where we want the mask to be by dragging the "Pick..." icon into the viewer while viewing the output of the volume tool. We can do this interactively, which also helps us modify its scale. The resulting mask is softened by the alpha channel of the beauty pass in order to limit the relighting operations.

Fusion Volume Mask Tool

Once we have the VolumeMasks where we want them, we can use several methods to change the original image. One of those is the Shader tool. Its controls are very similar to Nuke's Relight node, except Fusion's Shader tool is a 2D shader and has no 3D viewport feedback; the tool compensates by letting us adjust the height and angle of reflection. We can also add a reflection map if needed.

Fusion Shader Tool

As in Nuke, the main relighting method is to drive Color Corrector tools that modify the saturation, hue, gain and gamma of the original image. Because we are using floating point images with a very wide exposure and color range, we get a clean result devoid of banding and artifacts even though we are heavily manipulating the image.

Nuke

Nuke offers true relighting, where data from 3D lights can be fed into a dedicated Relight node with the ability to change diffuse and specular shading.

For Nuke to interpret the color data in the AOV passes, we need to convert the color into data, effectively turning it into position, normal and Z channels. To do this we will use a Copy node, but we first need to create the position and normal layers in Nuke, because they are not available by default.

To create a new layer, we can use either the Nuke viewer or a Relight node. For example, create a Relight node and, in the channel dropdowns, choose new (for a new layer). Here we will make a new layer called normal and assign each channel (r, g, b) to that layer accordingly. Follow the same process for a new layer called position.

Relight Node - Channel Creation

After we make these new layers, Nuke will make them available in copy nodes, effectively allowing us to change our AOV's into useful Relight position and normal data.
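
The same layer creation and channel copy can be scripted; a hedged sketch using Nuke's Python API (the layer and channel names are just the ones used in this chapter, and exact behavior may vary between Nuke versions):

```python
import nuke

# Register the data layers that Nuke does not provide by default.
nuke.Layer('position', ['position.x', 'position.y', 'position.z'])
nuke.Layer('normal',   ['normal.x',   'normal.y',   'normal.z'])

# Copy the Wp AOV's color channels into the new position layer.
copy = nuke.nodes.Copy()
for i, (src, dst) in enumerate([('rgba.red',   'position.x'),
                                ('rgba.green', 'position.y'),
                                ('rgba.blue',  'position.z')]):
    copy['from%d' % i].setValue(src)
    copy['to%d' % i].setValue(dst)
```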

Copy AOV's to positional data

As we can see in the image below, we need to connect a shader to the Relight node, as well as lights and camera data. We are using the cliff FBX geometry to view a simple representation of the 3D cliff scene when positioning new lights. Note that the Relight tool will still work without geometry, but using it is recommended for accuracy.

Nuke relighting workflow

The Relight output will have diffuse and specular information, but instead of relying on the shading results directly, we've gone a step further and converted the resulting image into a luminance mask to color correct our image.

Tip

Don't forget to check "use alpha" in the Relight node, so that the relighting only happens within the boundaries of the alpha channel.

3D Compositing


3D compositing has been on the rise in recent years. Traditional 2D compositors like Shake lacked 3D tools, but applications like Nuke and Fusion have complex 3D engines built in, which allows the user to push past the limitations of 2D and bridge the gap between production and post-production. We used 3D compositing to composite our sky with our cloud teapot and to add some atmospheric dust particles to the camera flythrough.

The main workflow consisted of exporting FBX geometry and camera data from Maya so that Fusion could generate particles and composite our sky and teapot. For the camera, I exported a Maya .ma file, because Fusion reads Maya animation curves natively. Fusion also supports Alembic files, so we can import our vegetation to get a better representation of the scene if needed.

Nuke follows many of the same principles for importing data as Fusion, but can't import Maya .ma files for the camera, so we need to generate FBX data for our camera. Because of animation curve interpolation, it is best to bake the camera animation before exporting.
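
Baking can be done straight from Maya's script editor; a minimal Python sketch, assuming the camera transform is called renderCam (a placeholder name, adjust it and the frame range to your scene):

```python
import maya.cmds as cmds

start = cmds.playbackOptions(q=True, minTime=True)
end = cmds.playbackOptions(q=True, maxTime=True)

# Bake a key on every frame of the camera animation, so FBX export doesn't
# have to re-interpolate Maya's animation curves once the camera is in Nuke.
cmds.bakeResults('renderCam', simulation=True, t=(start, end))
```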

Tip

To ensure that the camera matches 100% in post, it is crucial to set the camera's Film Aspect Ratio to the same value as the Device Aspect Ratio in the render settings. In our case, this value is 1.850. Unfortunately Maya always defaults to 1.50, so we always need to change it!

To create our sky, we texture mapped an image plane with a sky texture from cgtextures.com; with the same method, we added our cloud teapot onto the sky. The Maya fluid clouds added the finishing touch of life to the sky.

Fusion Sky

In the image below we can see how, by using very simple geometry to represent our cliff, we are able to position our particles and better judge where the sky looks best spatially. It's crucial to have a good spatial relationship to capture the parallax needed for such a space. Without the reference geometry, the sky could end up too close, visually shrinking the scene because there is not enough perspective shift.

Fusion 3D space

Tip

Make sure to use FBX instead of OBJ for geometry export, because it allows faster I/O speeds and many more options for interpreting the 3D scene in Fusion.

Deep Compositing


Even though I chose not to use Deep Images in TGT, they have become a very important part of many film pipelines. When I started The Grand Tour, the workflow issues outweighed the benefits. This is no longer the case, since both RMS and Nuke have seamless deep data integration. Let's take a look at how Deep Images can be of great help.

Deep compositing refers to the use of deep image data in the compositing process to accurately merge elements with transparent, intersecting, volumetric or otherwise overlapping elements that would be impossible to merge correctly with 2D compositing. In other words, in software such as Nuke, we can use deep image data to merge objects through fur, glass, clouds, transparent leaves, etc., without the pitfalls of traditional techniques such as Z depth maps, where edge filtering, transparency and sub-pixel accuracy were a problem.

In order to generate deep data information from RMS we need to follow some simple steps, as outlined in the docs:

1) Via the Outputs tab for your Final pass, create a new custom output called color Ci, where color represents the data type and Ci represents the incident ray color. This could be any AOV.

Create custom channel

2) Once the new channel is available from the list, right-click on it and choose Create Output from Channels. We should now see our new channel in the top list of AOV's that will be rendered.

Create Ci Output

3) Select the new Ci channel from the AOV outputs list and set the Image Format to DeepImage (.dtex). Add the optional Deep Image subimage attribute via Add/Remove Settings, so that our image on disk has the .rgba extension, which will provide Nuke with relevant information about the data in the file.

Assign deep data type

Tip

If desired, you can disable the primary output.

Now that we've generated our deep data, let's learn how to make use of it in Nuke.

Below is the visualization of the deep image data within Nuke. We are using a DeepRecolor node to merge the beauty pass and the deep data, then passing the information on to a DeepToPoints node to visualize it in 3D space. This is crucial for positioning new elements in our composite, such as our lights and text, but it's not part of the final result.

The DeepHoldout node will be the main "merge" node for our deep data, because it will create a holdout (or cutout) where the text is. We will then merge the text render as the background with the holdout result as our foreground, effectively exposing the text through the holdout.
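
As a rough idea of what that tree looks like when built from Python (file paths are placeholders and input ordering is approximate, so follow the screenshot below for the actual wiring):

```python
import nuke

deep = nuke.nodes.DeepRead(file='renders/final.####.dtex')     # deep .dtex from RMS
beauty = nuke.nodes.Read(file='renders/beauty.####.exr')       # flat beauty pass

# Re-attach the beauty colors to the deep samples.
recolor = nuke.nodes.DeepRecolor(inputs=[deep, beauty])

# Visualize the deep samples as a point cloud to position the text card and lights.
points = nuke.nodes.DeepToPoints(inputs=[recolor])

# Cut a holdout where the text card sits; its holdout input is then connected to
# the deep render of the text card, and a regular Merge places the text render
# behind the held-out foreground.
holdout = nuke.nodes.DeepHoldout(inputs=[recolor])
```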

Nuke - Deep Data script

We can use the point data to accurately position our newly created 3D elements and any lighting needed to match the lighting of the scene. In our example scene, we have a simple "SMILE" text mapped onto a card in 3D space. We are lighting the text with several Nuke point lights to simulate the directionality of the keylight and the indirect lighting from the colorful, saturated walls.

DeepToPoint - viewport

And voila! Our final image with our Nuke text merged through intersecting geometry. We are free to light our "SMILE" text to compensate for the lack of GI in compositing.

Deep Data 3D Merge

Color Grading and Finishing Touches


When finishing a shot, it is very important to deal with any outstanding issues, including color banding, matching grain or simply adding a vignette to simulate uneven luminance in the lens. The number of effects needed can vary depending on the complexity and ambition of the shot, but above all, color should always be a major priority. That's why color correction and color grading are a very big part of any post-production process.

Color Correction

Color correction deals with bringing colors and values to a common standard across a sequence: lows, mids and highs are adjusted to a certain range so the shots look cohesive and consistent. For film, this also includes white balance and chemical color timing, but since CG can produce more predictable white balance, color correction is usually limited to matching luminance levels and saturation to the film plate. In the case of our shot, or an all-animated feature, color correction can be much more straightforward.

Color Grading

Color grading is the very last color transform on a shot and perhaps our last creative tool. It modifies the color balance in the lows, mids and highs to give the overall palette a desired emotional look, usually tied to the story. In our case we are using a warm palette, so the mids and highs are shifted towards orange and yellow, while the lows are shifted towards blue to complement the orange tones and highlight the warmth of the sun.
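
In lift/gamma/gain terms, that kind of grade is just a per-channel remap; a toy numpy sketch of the idea (the shot's actual grade was done interactively with the compositor's color tools, and these numbers are purely illustrative):

```python
import numpy as np

def warm_grade(rgb, lift=(0.0, 0.0, 0.02), gamma=(1.05, 1.0, 0.95), gain=(1.05, 1.0, 0.95)):
    """Shift mids/highs toward orange-yellow and lows toward blue.
    rgb is a linear H x W x 3 image; all numbers are made up for illustration."""
    lift, gamma, gain = (np.asarray(v) for v in (lift, gamma, gain))
    graded = np.clip(rgb, 0.0, None) ** (1.0 / gamma)               # gamma acts on the mids
    return graded * gain + lift * (1.0 - np.clip(graded, 0.0, 1.0))  # gain on highs, lift on lows
```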

Color Grade

Tip

Note the difference between color correction and color grading: the first deals with color technically, while the other is a creative tool.

Film Grain

Film grain was crucial to achieving a film stock look. Since many grain tools default to monochromatic or evenly distributed grain, I made sure to adjust the size, intensity and quantity of the grain in each color channel to better simulate film stock. This is one of the tools where we really see the advantage of correct color management: when working with a logarithmic response curve, the grain varies in intensity as the exposure of the plate changes, so having a large floating point range of values helps the log curve simulate film stock much more accurately.
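
A rough numpy sketch of the per-channel idea, with grain intensity that follows the plate's brightness (the strengths are made-up values, not the ones used in the shot):

```python
import numpy as np

def per_channel_grain(rgb, strength=(0.012, 0.008, 0.02), seed=0):
    """Add film-like grain with a different intensity per color channel.
    Scaling the noise by the plate makes the grain respond to exposure
    instead of sitting flat on top of the image."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(rgb.shape) * np.asarray(strength)
    return rgb + noise * (0.2 + rgb)  # grain grows with the plate's brightness
```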

In the image below, we can see how working with traditional 8-bit methods produces a flat grain response, leading to unpleasant and destructive-looking film grain, whereas the image at the bottom produces subtle results.

Incorrect flat grain vs correct log grain

Chromatic Aberration

To keep the image in tune with our retro-future look, we gave our shot a slight chromatic aberration at the edges of the image. This simulates a vintage lens where color convergence wasn't as accurate, creating channel shifts, channel blurs and purple fringing. We can achieve this effect in post by shifting each color channel ever so slightly in opposing directions to create a color separation. The effect gets more exaggerated with wider, older or lower quality lenses, even blurring some of the channels, but we need to keep it very slight so that it doesn't overpower or degrade the image. We are also limiting the effect with a mask so it only affects the edges of the image.
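
A minimal numpy sketch of the channel-shift approach, with a radial mask so the center of the frame stays clean (the pixel offset and mask falloff are illustrative):

```python
import numpy as np

def chromatic_aberration(rgb, shift=2):
    """Shift R and B in opposite horizontal directions, masked toward the edges."""
    h, w, _ = rgb.shape
    out = rgb.copy()
    out[..., 0] = np.roll(rgb[..., 0],  shift, axis=1)   # red pushed right
    out[..., 2] = np.roll(rgb[..., 2], -shift, axis=1)   # blue pushed left
    # Radial mask: 0 in the center, 1 toward the corners, so only the edges shift.
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.hypot((xx - w / 2) / (w / 2), (yy - h / 2) / (h / 2))
    mask = np.clip(mask, 0.0, 1.0)[..., None]
    return rgb * (1.0 - mask) + out * mask
```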

For the Fusion comp, we are using a Chromatic Aberration plugin from the Krokodove suite.

Chromatic Aberration

Lens Flare

A lens flare is an artifact of the lens iris, created as light leaks into the camera lens and bounces around the diaphragm; that's why we usually get artifacts shaped like hexagons or other geometric shapes that match the shape of the lens diaphragm. I used a lens flare in certain frames of the animation to add a sense of realism to the shot (even though it's a furry creature riding a hovercraft...), using the brightness of the hovercraft glass as the lens flare source. The effect needs to be subtle, because bright, moving elements can be very distracting, especially abstract shapes like lens flares. It can be very useful for creating a sense of realism, because we associate such artifacts with a real camera lens...and producers think it's cool...

RECAP


  • A color managed compositing workflow is crucial to achieving predictable and accurate results in post.
  • Use data AOV's in order to Relight scenes in post and add any last minute art direction changes.
  • Use 3D Compositing to expand the capabilities of the compositing app and merge 2D with 3D workflows.
  • Get to know the advantages of using Deep Compositing and how it can benefit your compositing workflow for highly complex scenes.


