
Introduction to Redshift - little project by Xuan Prada

My Patreon series “Introduction to Redshift for VFX” is coming to an end. We have already discussed in depth the most basic features, like global illumination and sampling, and I shared with you my own “cheat sheets” for dealing with GI and sampling. We also talked about Redshift's lighting tools, built-in atmospheric effects, and cameras. In the third episode we covered camera mapping, surface shaders, texturing, displacement maps from Mari and ZBrush, and how to ingest Substance Painter textures, and we did a few surfacing exercises.
This should give you a pretty good base to start your projects in Houdini and Redshift, or whatever 3D app you want to use with Redshift.

The next couple of videos in this series are going to be dedicated to taking a little project from scratch to finish using Redshift. This lets us cover more features of the render engine and also discover broader techniques that I hope you will find interesting. Let me explain what this is all about.

We’ll be doing the simple shot below from start to finish. It is quite simple and graphic, I know, but to get there I’m going to explain many things that you will use a lot in visual effects shots, more than we actually end up using in this particular shot.

We are going to start with a quick introduction to SpeedTree Cinema 8 to see how to create procedural trees. We will create a few trees from scratch that will later be used in Houdini. Once we have all the models ready, we will see how to deal with SpeedTree textures so we can use them in Redshift in an ACES pipeline.

These trees will be used in Houdini to create reusable asset libraries, and later converted to Redshift proxies for memory efficiency and scattering, and so they can easily be picked up by lighting artists when working on shots.

With all these trees we will take a look at how to create procedural scattering systems in Houdini using Redshift proxies, and we will create multiple configurations depending on our needs. We are also going to learn how to ingest Quixel Megascans assets, again preparing them to work with ACES and creating an additional asset for our library. We will also re-use the scatterers made for the trees to scatter rocks and pebbles.

As a base for scattering all of that, we will use Houdini’s height fields. For this particular shot we are going to build a very simple ground made with height fields and Megascans, but I’m going to give you a pretty comprehensive introduction to height fields, way beyond what you see in the final shot.

Once all the natural assets are created, we’ll look at the texturing and look-dev of the character. Yes, there is a character in the shot. You don’t see much of it, but hey, this is what happens in VFX all the time: you spend months working on something barely noticeable. We will look into speed texturing and how to use Substance Painter with Redshift.


Now that we are dealing with characters, what if I show you a “guerrilla” way to deal with motion capture? You can grab random motion capture from any source and apply it to your characters. Look at the clip below; nothing is better than a moving character for checking whether the look actually works.

It looks better when moving, doesn’t it? There is no cloth simulation, by the way; this is a Redshift course, we are not going that far. Not yet.

Any environment work, of course, needs some kind of volumetrics. Volumetrics create nice lighting effects, give a sense of scale, look good, and make for terrible render times. We need to know how to deal with different types of volumetrics in Redshift, so I’m going to show you how to create a couple of different atmospherics using Houdini’s volumes. Quite simple but effective.

Finally, we will combine everything in a shot. I will show you how to organize everything properly using bundles and smart bundles to configure your render passes, and we will take a look at how Redshift deals with AOVs, render settings, etc. Then we will put everything together in Nuke to output a nice render.

Just to summarize, this is what I’m planning to show you while working on this little project. My guess is that it will take me a couple of sessions to deliver all this video training.

  • SpeedTree introduction and tree creation

  • ACES texture conversion

  • ACES introduction in Houdini and Redshift

  • Creation of tree assets library in Houdini

  • Megascans ingestion

  • Character texturing and look-dev

  • Guerrilla techniques to apply mocap

  • Introduction to Houdini’s height fields

  • Redshift proxies

  • Scattering systems in Houdini

  • Volume creation in Houdini for atmospherics

  • Scene assembly

  • Redshift render settings

  • Compositing

  • Something that I probably forgot

All of this and much more training will be published on my Patreon. Please consider supporting me.

Thanks,
Xuan.

Cryptomatte in Fusion by Xuan Prada

I'm using Fusion at home and trying to find workarounds for my texturing, look-dev and lighting pipeline. A must-have these days is Cryptomatte; I can't see any work being done without it, and going back to ID passes is not an option.

  • To install it properly, you need to place the three .lua files in the same directory as your Fusion executable.
  • The .fuse file should be placed inside your Fuses folder, inside the Blackmagic folder.
  • Apparently, at the time of writing, there is a bug where Cryptomatte for Fusion doesn't properly read the cryptomatte data inside a multi-channel .exr.
  • Rendering the cryptomatte data to individual .exr files is the best way to work.
  • Having the cryptomatte in its own .exr will save you the pain of shuffling channels in Fusion.
  • Use the add button and the color picker in the viewport to isolate parts of your render in the alpha channel.

Split EXR in Fusion by Xuan Prada

I recently started to use Blackmagic's Fusion at home (budget reasons) and I'm liking it so far. But one of the most important features, coming from Nuke, is obviously the ability to shuffle between all the AOVs of your multi-channel EXRs. Unfortunately, Fusion doesn't support this. It has something called booleans to separate RGB channels, but not AOVs.

Chad Ashley pointed me to a third-party script that splits a multi-channel EXR into many different Loaders, one for each of your AOVs. Not as good as Nuke's shuffle, but good enough!

STMaps by Xuan Prada

One of the first treatments that you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different pieces of footage, or between footage and 3D renders, is also very common. When working with different software like 3DEqualizer, Nuke, Flame, etc., having a common, standard way to copy lens information seems like a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit EXR images.

  • Using lens grids is always the easiest, fastest and most accurate way of delensing.
  • Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
  • Write the image as a 32-bit .exr.
  • This will output the UV information and can be read in any software.
  • To apply the lens information to your footage or renders, just use an STMap node connected to the footage and to the UV map, as in the sketch below.
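For reference, here is a minimal Nuke Python sketch of that hookup. The file paths are hypothetical placeholders; everything else is stock Nuke.

```python
# Minimal sketch: apply/remove lens distortion with an STMap in Nuke.
# File paths are placeholders.
import nuke

plate = nuke.nodes.Read(file="/path/to/plate.####.exr")   # footage or 3D render
uv_map = nuke.nodes.Read(file="/path/to/lens_uv.exr")     # 32-bit EXR UV map

stmap = nuke.nodes.STMap()
stmap.setInput(0, plate)     # src: the image to (de)lens
stmap.setInput(1, uv_map)    # stmap: the UV map carrying the lens information
stmap["uv"].setValue("rgb")  # read the UV vectors from the rgb channels
```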

Bake from Nuke to UVs by Xuan Prada

  • Export your scene from Maya with the geometry and camera animation.
  • Import the geometry and camera in Nuke.
  • Import the footage that you want to project and connect it to a Project 3D node.

 

  • Connect the cam input of the Project3D node to the previously imported camera.
  • Connect the img input of the ReadGeo node to the Project3D node.
  • Look through the camera and you will see the image projected onto the geometry through the camera.
  • Paint or tweak whatever you need.
  • Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
  • The projection option of the UVProject node should be set to off.

 

  • Use a ScanlineRender node and connect its obj/scene input to the UVProject.
  • Set the projection mode to UV.
  • If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry UVs.
  • Finally, use a Write node to output your DMP work; the full node graph is sketched in Python below.
  • Render in Maya as expected.
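Here is a hedged Nuke Python sketch of the whole bake graph. Node class names vary between Nuke versions (e.g. ReadGeo2, Camera2, Project3D2), and the file paths are placeholders, so treat this as a starting point rather than a copy-paste recipe.

```python
# Sketch of the projection-bake graph in Nuke Python.
# Class names and file paths are assumptions; adjust for your Nuke version.
import nuke

footage = nuke.nodes.Read(file="/path/to/plate.exr")   # image to project
cam = nuke.createNode("Camera2")                       # the imported Maya camera
geo = nuke.createNode("ReadGeo2")                      # the imported geometry

proj = nuke.createNode("Project3D")
proj.setInput(0, footage)        # img input: the footage
proj.setInput(1, cam)            # cam input: project through this camera
geo.setInput(0, proj)            # shade the geometry with the projection

uvp = nuke.createNode("UVProject")
uvp.setInput(0, geo)             # geometry to re-UV
uvp.setInput(1, cam)             # axis/cam input
uvp["projection"].setValue("off")

slr = nuke.createNode("ScanlineRender")
slr.setInput(1, uvp)                   # obj/scene input
slr["projection_mode"].setValue("uv")  # render into UV space to bake

out = nuke.nodes.Write(file="/path/to/baked_dmp.####.exr")
out.setInput(0, slr)
```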

Clarisse AOVs overview by Xuan Prada

This is a very quick overview of how to use AOVs in Clarisse.

  • I started from this very simple scene.

  • Select your render image and then the 3D layer.

  • Open the AOV editor and select the components that you need for your compositing. In my case I only need diffuse, reflection and sss.

  • Click on the plus button to enable them.

  • Now you can check every single AOV in the image view frame buffer.

  • Create a new context called "compositing" and inside of it create a new image called "comp_image".

  • Add a black color layer.

  • Add an add filter and texture it using a constant color. This will be the entry point for our comp.

  • Drag and drop the constant color to the material editor.

  • Drag and drop the image render to the material editor.

  • If you connect the image render to the constant color input, you will see the beauty pass. Let's split it into AOVs.

  • Rename the map to diffuse and select the diffuse channel.

  • Repeat the process with all the AOVs, you can copy and paste the map node.

  • Add a few add nodes to merge all the AOVs until you get the beauty pass. This is it, your comp in a real-time 3D environment. Whatever you change or add in your scene will be updated automatically.

  • Let's say that you don't need your comp inside Clarisse. Fine, just select your render image, configure the output, and bring up the render manager to output your final render.

  • Just do the comp in Nuke as usual, as in the sketch below.
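For the Nuke side, this is a minimal Python sketch of rebuilding the beauty from the same three AOVs. The layer names (diffuse, reflection, sss) match the ones used above; the file path is a placeholder.

```python
# Rebuild the beauty from AOVs in Nuke: shuffle each layer into rgb,
# then sum them with plus merges. Layer names and path are assumptions.
import nuke

rnd = nuke.nodes.Read(file="/path/to/clarisse_render.exr")

shuffles = []
for layer in ("diffuse", "reflection", "sss"):
    sh = nuke.nodes.Shuffle(label=layer)
    sh["in"].setValue(layer)   # pull this AOV layer into rgb
    sh.setInput(0, rnd)
    shuffles.append(sh)

# Sum all the AOVs back together; the result should match the beauty.
result = shuffles[0]
for sh in shuffles[1:]:
    plus = nuke.nodes.Merge2(operation="plus")
    plus.setInput(0, result)  # B input
    plus.setInput(1, sh)      # A input
    result = plus
```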

RGB masks by Xuan Prada

We use RGB masks all the time in VFX, don't we?
They are very handy, and we can save a lot of extra texture maps by combining four channels into one single RGB+A texture map.

We use them to mix shaders at the look-dev stage, or as IDs for compositing, or maybe as utility passes for things like motion blur or depth.

Let's see how I use RGB masks in my common software: Maya, Clarisse, Mari and Nuke.

Maya

  • I use a surface shader with a layered texture connected.
  • I connect all the shaders that I need to mix to the layered texture.
  • Then I use a remapColor node, with the RGB mask connected, as the mask for each one of the shaders.

This is the RGB mask that I'm using.

  • We need to indicate which RGB channel we want to use in each remapColor node.
  • Then just use the output as the mask for the shaders; a simplified Python hookup is sketched below.
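As a minimal maya.cmds sketch of this idea, here is a simplified variant that wires the mask's red channel straight into a layer's alpha (the remapColor route described above adds per-channel remapping control on top of this). All node names and the file path are placeholders.

```python
# Minimal sketch: mix two shaders' colors in a layeredTexture, masked by
# the red channel of an RGB mask texture. Paths/names are hypothetical.
import maya.cmds as cmds

mask = cmds.shadingNode("file", asTexture=True, name="rgbMask")
cmds.setAttr(mask + ".fileTextureName", "/path/to/rgb_mask.tif", type="string")

layered = cmds.shadingNode("layeredTexture", asTexture=True)
shaderA = cmds.shadingNode("blinn", asShader=True)
shaderB = cmds.shadingNode("lambert", asShader=True)

# Layer 0 sits on top of layer 1
cmds.connectAttr(shaderA + ".outColor", layered + ".inputs[0].color")
cmds.connectAttr(shaderB + ".outColor", layered + ".inputs[1].color")

# The mask's red channel drives where the top layer shows through
cmds.connectAttr(mask + ".outColorR", layered + ".inputs[0].alpha")

# Drive a surface shader with the layered result
surf = cmds.shadingNode("surfaceShader", asShader=True)
cmds.connectAttr(layered + ".outColor", surf + ".outColor")
```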

Clarisse

  • In Clarisse I use a reorder node connected to my RGB mask.
  • Just indicate the desired channel in the channel order parameter.
  • To route an RGB channel into the alpha, just type it in the channel order field.

Mari

  • You will only need a shuffle adjustment layer and select the required channel.

Nuke

  • You can use a Shuffle node and select the channel.
  • Or a Keyer node, selecting the channel in the operation parameter (this will place the channel only in the alpha). A quick Python version is sketched below.
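The Shuffle version in Nuke Python is tiny; a sketch, with a placeholder file path:

```python
# Route the red channel of an RGB mask into the alpha with a Shuffle.
import nuke

mask = nuke.nodes.Read(file="/path/to/rgb_mask.exr")

sh = nuke.nodes.Shuffle()
sh.setInput(0, mask)
sh["alpha"].setValue("red")  # rgb passes through untouched; alpha = red
```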

Clarisse, layers and passes by Xuan Prada

I will continue writing about my experiences working with Clarisse. This time I'm going to talk about working with layers and passes, a very common topic in the rendering world no matter what software you are using.

Clarisse allows you to create very complex organization systems using contexts, layers/passes and images. In addition, we can composite all the information inside Clarisse in order to create different outputs for compositing.
Clarisse has very clever organization methods for huge scenes.

  • For this tutorial I'm going to use a very simple scene. The goal is to create one render layer for each element of the scene. At the end of this article we will have the foreground, midground, background, floor and shadows isolated.
  • At this point I have an image with a 3DLayer containing all the elements of the scene.
  • I've created 3 different contexts for foreground, midground and background.
  • Inside each context I put the correspondent geometry.
  • Inside each context I created an empty image.
  • I created a 3DLayer for each image.
  • We need to indicate which camera and renderer need to be used in each 3DLayer.
  • We also need to indicate which lights are going to be used in each layer.
  • At this point you probably realized how powerful Clarisse can be for organization purposes.
  • In the background context I'm rendering both the sphere and the floor.
  • In the scene context I've created a new image. This image will be the recipient for all the other images created before.
  • In this case I'm not creating 3DLayers but Image Layers.
  • In the layers options select each one of the layers created before.
  • I put the background on the bottom and the foreground on the top.
  • We face the problem that only the sphere has working shadows. This is because there is no floor in the other contexts.
  • In order to fix this I moved the floor to another context called shadow_catcher.
  • I created a new 3DLayer where I selected the camera and renderer.
  • I created a group with the sphere, cube and cylinder.
  • I moved the group to the shadows parameter of the 3DLayer.
  • In the recipient image I place the shadows at the bottom. That's it, we have shadows working now.
  • Oh wait, not that fast. If you check the first image of this post you will realize that the cube is actually intersecting the floor, but in this render that is not happening at all. This is because the floor is not in the cube's context acting as a matte object.
  • To fix this just create an instance of the floor in the cube context.
  • In the shading options of the floor, I localized the matte and alpha parameters (RMB and click on localize).
  • Then I activated those options and set the alpha to 0%.
  • That's it, working perfectly.
  • At this point everything is working fine, but we have the floor and the shadows together. Maybe you would like to have them separated so you can tweak both of them independently.
  • To do this, I created a new context only with the floor.
  • In the shadows context I created a new "decal" material and assigned it to the floor.
  • In the decal material I activated receive illumination.
  • And finally I added the new image to the recipient image.
  • You can download the sample scene here.

P-maps by Xuan Prada

P-maps, or position maps, are one of those render passes that can save your life sometimes. They are really useful for compositing artists, matte painters and texture artists. Rendering p-maps out of your render engine can save you a lot of time, letting you rely on 2D or 2.5D techniques instead of going back to the 3D software for tiny changes.

I personally use p-maps for different purposes, let me tell you some of them.

To place cards or other 2.5D or 3D elements

  • This is the render that I’m using for this small article. Nothing fancy there, just a few cubes and a couple of direct lights. This image has been rendered in Maya and V-Ray but of course you can render p-maps with any other combination of 3D program and render engine.
  • As additional render passes for this image I got the “normal pass” and the “world position pass”. You probably know that there are three different p-maps: World position, camera position and object position. They can be used for different purposes but all of them have the same kind of information, which is position.

    As their names say, one contains the position information relative to the world centre, the second relative to the camera, and the third relative to the centre of the object.

    You can render out all of them if you need, but for this example I’m going to use only world position map. All the techniques shown here can be extrapolated to the other maps.
  • This image is an .exr with all the render passes embedded so you can easily switch between them.
  • This is what the world position map looks like.
  • And this is what the normals pass looks like.
  • Use a Shuffle node to read the p-map. If you need to un-premultiply it, do it before the Shuffle.
  • Use a PositionToPoints node to read the p-map and convert it to a 3D view.
  • Now you can move around the scene like in any 3D software. This is extremely useful if you need to place any 2.5D or 3D element, like cards for example. Cards are extremely useful for placing matte paintings, animated 2D elements, etc. You don’t need to guess anymore; just place your card in the right position. (See the Python sketch below.)
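A minimal Nuke Python sketch of that setup. The layer name for the world position pass ("P" here) depends on your render engine and is an assumption, as is the file path.

```python
# Shuffle the world position AOV into rgb, then view it as a point cloud.
import nuke

rnd = nuke.nodes.Read(file="/path/to/render.exr")   # multi-channel EXR

pshuf = nuke.nodes.Shuffle()
pshuf["in"].setValue("P")    # assumed layer name for the world position pass
pshuf.setInput(0, rnd)

# PositionToPoints turns the position data into a 3D point cloud you can
# navigate in the 3D viewer; pick the position channel in its properties.
p2p = nuke.createNode("PositionToPoints")
p2p.setInput(0, pshuf)
```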

To completely re-light your scene

  • This is how your scene looks straight out of the render package.
  • As we did before, shuffle the p-map.
  • Take a ReLight node. Connect the rgb to the color input and the shuffle to the material input. Then tweak the ReLight node and select the normals and the point position channels.
  • Finally, create a Camera node and a Scene node, and hook them all up to the ReLight node.
  • Now create a Light node and connect it to the Scene node. Play with the light to re-light your scene.
  • As we saw before, you can use your 3D scene to place the light.

To add subtle lighting information (or not that subtle)

  • Use a p-matte node to read your p-map and output the information to the alpha channel. If you play with the shape, position and scale, you will see your new light's contribution in the alpha channel. (An Expression-based alternative is sketched after this list.)
  • Connect the p_matte to the mask input of a Grade node and play with it to tweak the intensity and color of the light.
  • Use a plus node to add as many lights as you need.
  • Of course, you can help yourself place the lights using the 3D view provided by the p-maps.
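The p-matte node used above is a commonly shared gizmo rather than a stock Nuke node. As an alternative, an Expression node can build the same kind of spherical matte straight from the shuffled position pass. A sketch, where the light centre (1.0, 0.5, 2.0) and the falloff radius of 1.5 are made-up values:

```python
# Spherical falloff matte from a world position pass (assumed in rgb):
# alpha = 1 at the chosen centre, fading to 0 at the given radius.
import nuke

expr = nuke.nodes.Expression()
expr["channel0"].setValue("alpha")
expr["expr0"].setValue(
    "clamp(1 - sqrt(pow(r-1.0,2) + pow(g-0.5,2) + pow(b-2.0,2)) / 1.5)"
)
```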

To project through camera

  • Projecting mattes, smoke, or any other information has never been so easy. Use the 3D view and a 3D camera to project detail onto the current render.
  • You can create a new camera or import it from your 3D package.
  • Use a re-project node to connect the camera and the image that you want to project (in this case a grid), and use a Shuffle node with the p-map to provide the vector information.
  • Finally, I'm using a Merge node to combine the grid projection with the original render, masked using the embedded alpha from the .exr render.
  • Of course, you can use the 3D view to place or modify the camera.

My favourite V-Ray passes by Xuan Prada

Working with V-Ray recently, I discovered that these are the render passes I use most often.
Simple scene, simple asset, simple texturing and shading, and simple lighting, just to show my render passes and pre-compositing stuff.

  • Global Illumination
  • Direct lighting
  • Normals
  • Reflection
  • Specular
  • Z-Depth
  • Occlusion
  • Snow (or up/down)
  • UVs
  • XYZ (or global position)

Example images, in order: RGB beauty, GI, direct lighting, normals, occlusion, reflection, snow, specular, UVs, XYZ global position, and the final slapcomp.

Black holes with final gather contribution by Xuan Prada

Black holes are a key feature in 3D lighting and compositing, but black holes with bounced light information are super!

  • Apply the Mental Ray production shader called “mip_rayswitch_advanced” to your black hole object.
  • In the “eye” channel, connect a surface shader with its “out_matte_opacity” parameter set to pure black.
  • In the Final Gather input, connect the original shader of your object (a Blinn shader, for example). A hedged Maya Python version of this hookup is sketched below.
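A maya.cmds sketch of that hookup. The attribute names on mip_rayswitch_advanced ("eye", "finalgather") follow the mental ray production shader naming used above, but they are assumptions to verify in your Maya build.

```python
# Black hole with final gather contribution via mental ray's ray switch.
# Attribute names on mip_rayswitch_advanced are assumptions; verify them.
import maya.cmds as cmds

switch = cmds.shadingNode("mip_rayswitch_advanced", asShader=True)

# Pure black matte for camera (eye) rays
matte = cmds.shadingNode("surfaceShader", asShader=True)
cmds.setAttr(matte + ".outMatteOpacity", 0, 0, 0, type="double3")
cmds.connectAttr(matte + ".outColor", switch + ".eye")

# The object's original look for final gather rays, so it still bounces light
orig = cmds.shadingNode("blinn", asShader=True)
cmds.connectAttr(orig + ".outColor", switch + ".finalgather")
```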

Example images: beauty channel and alpha channel.