Hello,
I just published a video on my Patreon about Houdini’s window box.
The video covers how to use the new window box system in Houdini. It's one of those tools that I've been using for many years at different VFX studios, and now it works out of the box with Houdini and Karma. I hope you find it useful and can start using it soon in your projects!
All the info on my Patreon site.
Deep compositing - going deeper /
Hello patrons,
This is a continuation of the intro to deep compositing, where we go deeper into compositing workflows using depth.
I will show you how to properly use deep information in a flat composition, so you can work fast and efficiently with all the benefits of depth data but none of the caveats.
The video is more than 3 hours long and we will explore:
- Quick recap of pros and cons of using deep comp.
- Quick recap of basic deep tools.
- Setting up render passes in a 3D software for deep.
- Deep holdouts.
- Organizing deep comps.
- How to use AOVs in deep.
- How to work with precomps.
- Creating deep templates.
- Using 3D geometry in deep.
- Using 2D elements in deep.
- Using particles in deep.
- Generating Zdepth from deep information.
Thanks for your support!
Head over to my Patreon for all the info.
Xuan.
Deep compositing /
Hello patrons,
In this 2-hour video we are going to be talking about deep compositing workflows.
I will show you how to use deep compositing and why you should be using it for most of your shots.
I will explain the basics behind deep rendering and compositing techniques, and we'll also go through all the deep tools available in Nuke while comping some simple shots, from volumes and atmospheric effects to solid assets.
Video and downloadable material will be included in the next posts.
All the information on my Patreon.
Thanks for your support!
Xuan.
Mix 03 /
Hello patrons,
This month I have another mix video for you. In this case I'm talking about two different ways of using the camera frustum to optimize your scenes. The first method is using the camera frustum to control the amount of subdivisions, a very common practice when dealing with large terrains that need a lot of displacement detail. We will use Houdini and Arnold, but this technique can be used in any DCC that supports Arnold. Other renderers have similar features.
The second method will use the camera frustum to blast parts of the scene not seen by the camera. This is a tool that we will build in Houdini and that can be used with any render engine; you can get a taste of the idea in the sketch below.
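As a teaser, here is a minimal sketch of the frustum-culling idea as a Houdini Python SOP. This is not the exact tool from the video; the camera path, the padding value and the projection math (which ignores pixel aspect) are my assumptions:

```python
# Runs inside a Python SOP (hou is available there by default).
# Deletes points that fall outside the camera frustum, with some
# padding so geometry near the frame edges survives.
# '/obj/cam1' and the padding value are placeholders.
node = hou.pwd()
geo = node.geometry()

cam = hou.node('/obj/cam1')
to_cam = cam.worldTransform().inverted()   # world -> camera space
focal = cam.parm('focal').eval()
aperture = cam.parm('aperture').eval()
aspect = cam.parm('resx').eval() / float(cam.parm('resy').eval())
pad = 0.1                                  # extra margin in screen space

kill = []
for point in geo.points():
    p = point.position() * to_cam
    if p[2] >= 0:                          # behind the camera
        kill.append(point)
        continue
    # Perspective projection into 0..1 screen space (no pixel aspect)
    x = (p[0] / -p[2]) * (focal / aperture) + 0.5
    y = (p[1] / -p[2]) * (focal / aperture) * aspect + 0.5
    if not (-pad <= x <= 1.0 + pad and -pad <= y <= 1.0 + pad):
        kill.append(point)

geo.deletePoints(kill)
```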
Then we will move to another important topic in VFX: motion blur. I will show you how to use it properly to achieve photorealism. Motion blur should be taken very seriously, especially by lighters and FX TDs.
All the information on my Patreon.
Thanks!
Xuan.
Cryptomatte in Katana and Arnold /
This post is mainly for my Patreon supporters. Some of them are having issues while setting up cryptomatte AOVs. This is how you do it.
- Create a material node.
- Add an AOVs shader -> cryptomatte.
- If you are using the Arnold AOVs supertool, add all the cryptomatte AOVs that you need.
- In the Arnold global settings, add the cryptomatte shader to the AOV shaders.
- If you are not using the Arnold AOVs supertool:
- Create an Arnold output channel define node and set it to crypto_material, type RGBA.
- Add a render output define node and set the channel to crypto_material.
- Repeat the same steps to create crypto_object and crypto_asset (a scripted version is sketched below).
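For those who prefer to script it, a minimal sketch using Katana's NodegraphAPI. The node types match KtoA's stock nodes, but the parameter names are assumptions and may differ between versions:

```python
# Hypothetical sketch with Katana's NodegraphAPI: one output channel
# and one render output per cryptomatte AOV. The parameter names
# ('name', 'outputName') are assumptions; check your KtoA version.
from Katana import NodegraphAPI

root = NodegraphAPI.GetRootNode()

for aov in ('crypto_material', 'crypto_object', 'crypto_asset'):
    channel = NodegraphAPI.CreateNode('ArnoldOutputChannelDefine', root)
    channel.getParameter('name').setValue(aov, 0)

    output = NodegraphAPI.CreateNode('RenderOutputDefine', root)
    output.getParameter('outputName').setValue(aov, 0)
```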
Katana Fastrack, episode 07 /
Hello patrons,
Episode 07 of Katana Fastrack is already available.
In this episode we will work on the second and last lighting shot of this course, a live-action shot where we have to integrate some CG elements, in this case our very own Ant-Man falling to the ground.
We will learn a few things like:
- How to quickly capture HDRIs and lighting references on set.
- How to technically grade footage and HDRIs so they live in the same context.
- How to approach live action shots in Katana.
- How to slap comp CG renders on top of a plate for validation.
More information on my Patreon site.
Thank you very much for your support!
Xuan.
Render mask in HtoA /
This is how to set up a render mask, or render patch, or whatever you want to call it, in Houdini using Arnold.
Render patches are generally used when an expensive render needs a fix that affects only a small portion of the frame, or when most of the frame is going to be covered by a foreground plate.
In these scenarios there is no need to waste render time on the whole frame; just render what is needed to finalize the shot.
This is the scene that I’m going to use for this example. Let’s pretend that we have already rendered the full range of this shot at 4K. All of a sudden we need to make some changes to the rubber toy on screen left.
The best way to create a render mask is in Nuke. You can use an old render as a template to make sure everything you need in the frame is covered by the mask. RotoPaint nodes are very useful, especially if you need to animate your mask.
Create a camera shader and connect the render mask to its filter map.
Connect the shader to the camera shader input of the camera, in the Arnold tab.
If you render now, only the mask area will be rendered, saving us a lot of render time.
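For reference, this is roughly what that wiring means at the Arnold level. A minimal sketch using the Arnold 5-era Python API; the scene file, camera name and mask path are placeholders:

```python
# Hypothetical sketch: load the mask texture and plug it into the
# camera's filtermap parameter. Names and paths are placeholders.
from arnold import *

AiBegin()
AiASSLoad('shot.ass', AI_NODE_ALL)            # placeholder scene

cam = AiNodeLookUpByName('persp_camera')      # placeholder camera name

mask = AiNode('image')
AiNodeSetStr(mask, 'name', 'render_mask')
AiNodeSetStr(mask, 'filename', 'render_mask.exr')  # placeholder path

AiNodeSetPtr(cam, 'filtermap', mask)          # only masked pixels render

AiRender(AI_RENDER_MODE_CAMERA)
AiEnd()
```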
There is a huge limitation that I don’t know how to fix, and I’m hoping someone can throw some light on this topic. If you are rendering with overscan, this won’t work nicely. Let me show you why.
I’m rendering with 120 pixels of overscan. I know that is generally speaking a lot, but I just want to illustrate this example very clearly.
Now if you render the same overscan with the render mask applied, you will get a black border around the render. Below is the render patch comped over the full-frame render.
I’m pretty sure the issue is related to the wrap options of the render mask. By changing the wrapping mode you can get away with this in some shots, but in an example like the one in this post, no wrapping mode fixes it.
Any ideas?
You can definitely use the camera crop options and it will work perfectly fine, no issues at all. It is not as flexible as using your own textures, but it will do in most cases.
Introduction to Redshift for VFX, episode 01 /
I'm starting a new training series for my Patreon feed called "Intro to Redshift" for visual effects. I'm learning Redshift and trying to figure out how to use it within my visual effects workflow, and I'll be sharing this trip with you. In this very first episode, I'll be talking about probably the most important topics around Redshift and the base for everything that will come later: global illumination and sampling.
I will go deep into these two topics, sharing with you the basic theory behind global illumination and sampling, and I will also share a couple of "cheat sheets" to deal with noise and GI easily in Redshift while rendering your shots.
Check the first video out on my Patreon feed.
Cheers,
Xuan.
A bit more of gaffer /
I keep playing with Gaffer and keep discovering how to do the things that I’m used to doing in other software.
RAW lighting and albedo AOVs in Arnold /
If you are new to Arnold you are probably looking for RAW lighting and albedo AOVs in the AOV editor. And yes, you are right, they are not there, at least when using AiStandard shaders.
The easiest and fastest solution would be to use AlShaders, which include both RAW lighting and albedo AOVs. But if you need to use AiStandard shaders, you can create your own AOVs quite easily.
- In this capture you can see the available AOVs for RAW lighting and albedo in the AlShaders.
- If you are using AiStandard shaders you won't see those AOVs.
- If you still want/need to use AiStandard shaders, you will have to render your beauty pass with the standard AOVs and utility passes, and create the albedo pass by hand. You can easily do this by replacing AiStandard shaders with Surface shaders.
- If we have a look at them in Nuke, they will look like this.
- If we divide the beauty pass by the albedo pass we will get the RAW lighting (see the Nuke sketch after this list).
- We can now modify only the lighting without affecting the colour.
- We can also modify the colour component without modifying the lighting.
- In this case I'm color correcting and cloning some stuff in the color pass.
- With a multiply operation I can combine both elements again to obtain the beauty render.
- If I disable all the modifications to both lighting and color, I should get exactly the same result as the original beauty pass.
- Finally I'm adding a ground using my shadow catcher information.
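Here is the divide/multiply round trip as a minimal Nuke Python sketch. The Read paths are placeholders, and it assumes beauty and albedo come in as separate streams:

```python
# Minimal sketch of the RAW lighting workflow described above.
# File paths are placeholders.
import nuke

beauty = nuke.nodes.Read(file='beauty.exr')
albedo = nuke.nodes.Read(file='albedo.exr')

# RAW lighting = beauty / albedo (beauty in the A input; check the
# Merge tooltip for operand order in your Nuke version)
raw_light = nuke.nodes.Merge2(operation='divide', inputs=[albedo, beauty])

# ...grade lighting and albedo independently here...

# Rebuilt beauty = albedo * RAW lighting
rebuild = nuke.nodes.Merge2(operation='multiply', inputs=[albedo, raw_light])
```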
UVs AOV in Arnold Render /
This video was created for elephantvfx.com, which means it's only available in Spanish. But you can mute it and follow what I do visually :)
I basically explain how to render UVs AOVs in a couple of different ways using Maya and Arnold.
Clarisse AOVs overview /
This is a very quick overview of how to use AOVs in Clarisse.
I started from this very simple scene.
Select your render image and then the 3D layer.
Open the AOV editor and select the components that you need for your compositing. In my case I only need diffuse, reflection and sss.
Click on the plus button to enable them.
Now you can check every single AOV in the image view frame buffer.
Create a new context called "compositing" and inside of it create a new image called "comp_image".
Add a black color layer.
Add an add filter and texture it using a constant color. This will be the entry point for our comp.
Drag and drop the constant color to the material editor.
Drag and drop the image render to the material editor.
If you connect the image render to the constant color input, you will see the beauty pass. Let's split it into AOVs.
Rename the map to diffuse and select the diffuse channel.
Repeat the process with all the AOVs; you can copy and paste the map node.
Add a few add nodes to merge all the AOVs until you get the beauty pass. This is it, your comp in a real-time 3D environment. Whatever you change or add in your scene will be updated automatically.
Let's say that you don't need your comp inside Clarisse. Fine, just select your render image, configure the output and bring up the render manager to output your final render.
Then just do the comp in Nuke as usual.
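If you go that route, the rebuild in Nuke is the same sum of AOVs. A minimal sketch, assuming the EXR comes in with diffuse, reflection and sss layers (the path and layer names are placeholders):

```python
# Minimal sketch: shuffle each AOV layer to RGB and sum them back
# into the beauty. Path and layer names are placeholders.
import nuke

read = nuke.nodes.Read(file='clarisse_render.exr')

shuffles = [nuke.nodes.Shuffle(inputs=[read], **{'in': layer})
            for layer in ('diffuse', 'reflection', 'sss')]

comp = shuffles[0]
for aov in shuffles[1:]:
    comp = nuke.nodes.Merge2(operation='plus', inputs=[comp, aov])
```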
IBL and sampling in Clarisse /
Using IBLs with huge ranges for natural light (sun) is just great. They give you very consistent lighting conditions, and the behaviour of the shadows is fantastic.
But sampling those massive values can be a bit tricky sometimes. Your render will have a lot of noise and artifacts, and you will have to deal with tricks like creating cropped versions of the HDRIs or clamping values in Nuke.
Fortunately, in Clarisse we can deal with this issue quite easily.
Shading, lighting and anti-aliasing are completely independent in Clarisse. You can tweak one of them without affecting the others, saving a lot of rendering time. In many renderers, shading sampling is multiplied by anti-aliasing sampling, which forces users to tweak all the shaders in order to keep render times decent.
- We are going to start with this noisy scene.
- The first thing you should do is change the Interpolation Mode to MipMapping in the Map File of your HDRI.
- Then we need to tweak the shading sampling.
- Go to the raytracer and activate previz mode. This will remove lighting information from the scene. All the noise here comes from the shaders.
- In this case we get a lot of noise from the sphere. Just go to the sphere's material and increase the reflection quality under sampling.
- I increased the reflection quality to 10 and can't see any noise in the scene any more.
- Select the raytracer again and deactivate the previz mode. All the noise now comes from lighting.
- Go to the GI Monte Carlo and disable affect diffuse. This way GI won't affect the lighting, so we have only direct lighting here. If you see some noise, just increase the sampling of your direct lights.
- Go to the GI Monte Carlo and re-enable affect diffuse. Increase the quality until the noise disappears.
- The render is noise free now, but it still looks a bit low-res; this is because of the anti-aliasing. Go to the raytracer and increase the samples. Now the render looks just perfect.
- Finally, there is a global sampling setting that you usually won't have to play with. But just for your information: shading oversampling set to 100% will multiply the shading rays by the anti-aliasing samples, like most of the render engines out there. This will help to refine the render, but render times will increase quite a bit (see the rough numbers below).
- Now if you want quick and dirty results for look-dev or lighting, just play with the image quality. You will not get pristine renders, but they will be good enough for establishing looks.
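To put rough numbers on the oversampling point (purely illustrative, assuming the counts multiply directly as described above): with 8 anti-aliasing samples and a material reflection quality of 10, oversampling at 0% shoots on the order of 10 reflection rays per pixel, while 100% bumps that to 8 × 10 = 80 rays per pixel. That is why it refines the render but makes it so much more expensive.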
Clarisse, layers and passes /
I will continue writing about my experiences working with Clarisse. This time I'm gonna talk about working with layers and passes, a very common topic in the rendering world no matter what software you are using.
Clarisse allows you to create very complex organization systems using contexts, layers/passes and images. In addition to that we can compose all the information inside Clarisse in order to create different outputs for compositing.
Clarisse has very clever organization methods for huge scenes.
- For this tutorial I'm going to use a very simple scene. The goal is to create one render layer for each element of the scene. At the end of this article we will have the foreground, midground, background, floor and shadows isolated.
- At this point I have an image with a 3DLayer containing all the elements of the scene.
- I've created 3 different contexts for foreground, midground and background.
- Inside each context I put the corresponding geometry.
- Inside each context I created an empty image.
- I created a 3DLayer for each image.
- We need to indicate which camera and renderer need to be used in each 3DLayer.
- We also need to indicate which lights are going to be used in each layer.
- At this point you probably realized how powerful Clarisse can be for organization purposes.
- In the background context I'm rendering both the sphere and the floor.
- In the scene context I've created a new image. This image will be the recipient for all the other images created before.
- In this case I'm not creating 3DLayers but Image Layers.
- In the layers options select each one of the layers created before.
- I put the background on the bottom and the foreground on the top (a Nuke version of this stack is sketched at the end of this post).
- We face the problem that only the sphere has working shadows. This is because there is no floor in the other contexts.
- In order to fix this I moved the floor to another context called shadow_catcher.
- I created a new 3DLayer where I selected the camera and renderer.
- I created a group with the sphere, cube and cylinder.
- I moved the group to the shadows parameter of the 3DLayer.
- In the recipient image I place the shadows at the bottom. That's it, we have shadows working now.
- Oh wait, not that fast. If you check the first image of this post you will realize that the cube is actually intersecting the floor. But in this render that is not happening at all. This is because the floor is not in the cube context acting as a matte object.
- To fix this just create an instance of the floor in the cube context.
- In the shading options of the floor I localize the parameters matte and alpha (RMB and click on localize).
- Then I activated those options and set the alpha to 0%.
- That's it, working perfectly.
- At this point everything is working fine, but we have the floor and the shadows together. Maybe you would like to have them separated so you can tweak both of them independently.
- To do this, I created a new context only with the floor.
- In the shadows context I created a new "decal" material and assigned it to the floor.
- In the decal material I activated receive illumination.
- And finally I added the new image to the recipient image.
- You can download the sample scene here.
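And if you would rather assemble the final stack in Nuke instead of using the recipient image, it translates directly. A minimal sketch with placeholder file paths; depending on how your shadow pass is premultiplied, a multiply may suit it better than an over:

```python
# Minimal sketch of the same stack, bottom to top; adjust the order
# to match your recipient image. File paths are placeholders.
import nuke

layers = ['background.exr', 'shadows.exr', 'floor.exr',
          'midground.exr', 'foreground.exr']

comp = nuke.nodes.Read(file=layers[0])
for path in layers[1:]:
    layer = nuke.nodes.Read(file=path)
    comp = nuke.nodes.Merge2(operation='over', inputs=[comp, layer])
```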
Love Vray's IBL /
When you work for a big VFX or animation studio you usually light your shots with different complex light rigs, often developed by highly talented people.
But when you are working at home, for small studios, or doing freelance work, you need to simplify your techniques and try to reach the best quality you can.
For those reasons, I have to say that I’m switching from Mental Ray to V-Ray.
One of the features that I most love about V-Ray is the awesome dome light to create image based lighting setups.
Let me tell you a couple of things which make that dome light so great.
- First of all, the technical setup is incredibly simple. Just a few clicks: activate linear workflow, correct the gamma of your textures and choose a nice HDRI image.
- It is quick and simple to reduce the noise generated by the HDRI image. Increasing the maximum subdivisions and decreasing the threshold should be enough. Something between 25 and 50, or 100 at most, as max subdivisions should work in common situations, and something like 0.005 is a good value for the threshold.
- Render times are really fast for raytraced stuff.
- Even using global illumination, the render times are more than good.
- Displacement, motion blur and that kind of heavy stuff are also handled well.
- Another thing that I love about the dome light with HDRI images is the great quality of the shadows. Usually you don’t need to add direct lights to the scene. If the HDRI is good enough you can match the footage really fast and accurately enough.
- The dome light has some parameters to control the orientation of your HDRI image, and it is quite simple to get a nice preview in Maya’s viewport.
- In all the renders that you can see here, you probably realized that I’m using an HDRI image with “a lot” of different lighting points, around 12 different lights in the picture. In this example I put a black color on the background and replaced all the lights with white spots. It is a good test to get a better idea of how the dome light treats direct lighting. And it is great.
- The natural light is soft and nice.
- These are some of the key points why I love V-Ray’s dome light :)
- On the other hand, I don’t like doing look-dev with the dome light. It is really, really slow; I can’t recommend this light for that kind of task.
- The trick is to turn off your dome light and create a traditional IBL setup using a sphere and direct lights, or plug your HDRI image into V-Ray’s environment and turn on global illumination.
- Work there on your shaders and then move back to the dome light again.
My favourite V-Ray passes /
Working with V-Ray recently, I discovered that these are the render passes I use most often.
Simple scene, simple asset, simple texture and shading and simple lighting, just to show my render passes and pre-compositing stuff.
- Global Illumination
- Direct lighting
- Normals
- Reflection
- Specular
- Z-Depth
- Occlusion
- Snow (or up/down)
- UVs
- XYZ (or global position)
Linear Workflow in Maya with Vray 2.0 /
I’m starting a new project with V-Ray 2.0 for Maya. I have never worked with this render engine before, so first things first.
One of my first tasks is creating a nice neutral light rig for testing shaders and textures. Setting up a linear workflow is one of my priorities at this point.
Find below a quick way to set this up.
- Set up your gamma. In this case I’m using 2.2.
- Click on “don’t affect colors” if you want to bake your gamma correction into the final render. If you don’t click on it, you’ll have to correct your gamma in post. No big deal.
- The linear workflow option is something created by Chaos Group to fix old V-Ray scenes which don’t use LWF. You shouldn’t use this at all.
- Click on affect swatches to see color pickers with the gamma applied.
- Once you are working with gamma applied, you need to correct your color textures. There are two different options to do it.
- First option: add a gamma correction node to each color texture node. In this case I’m using gamma 2.2, which means that I need to use a value of 0.455 (1/2.2) on my gamma node (see the sketch at the end of this post).
- Second option: instead of using gamma correction nodes for each color texture node, you can click on the texture node and add a V-Ray attribute to control this.
- By default all the texture nodes are read as linear. Change your color textures to be read as sRGB.
- Click on view as sRGB in the V-Ray frame buffer, otherwise you’ll see your renders in the wrong color space.
- This is the difference between rendering with the option “don’t affect colors” enabled or disabled. As I said, no big deal.
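For the first option, here is a minimal Maya Python sketch of the wiring. The node names file1 and vrayMtl1 are placeholders for your own texture and shader:

```python
# Minimal sketch: insert a gammaCorrect node set to 1/2.2 (~0.455)
# between a color file texture and a shader. 'file1' and 'vrayMtl1'
# are placeholder node names.
import maya.cmds as cmds

inverse_gamma = 1.0 / 2.2  # ~0.455

gamma_node = cmds.shadingNode('gammaCorrect', asUtility=True)
for axis in 'XYZ':
    cmds.setAttr('{0}.gamma{1}'.format(gamma_node, axis), inverse_gamma)

# file texture -> gammaCorrect -> shader color
cmds.connectAttr('file1.outColor', gamma_node + '.value')
cmds.connectAttr(gamma_node + '.outValue', 'vrayMtl1.color')
```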