Introduction to Gaffer part 04, where I talk mostly about volumes. I also mention a few things about good practices while look-deving, fetching textures and whatnot.
A bit more of Gaffer /
Still playing with Gaffer and discovering how to do the stuff that I’m used to doing in other software.
Introduction to Gaffer /
From GafferHQ: Gaffer is a free, open-source, node-based VFX application that enables look developers, lighters, and compositors to easily build, tweak, iterate, and render scenes. Built with flexibility in mind, Gaffer supports in-application scripting in Python and OSL, so VFX artists and technical directors can design shaders, automate processes, and build production workflows.
With hooks in both C++ and Python, Gaffer's readily extensible API provides both professional studios and enthusiasts with the tools to add their own custom modules, nodes, and UI.
The workhorse of the production pipeline at Image Engine Design Inc., Gaffer has been used to build award-winning VFX for shows such as Jurassic World: Fallen Kingdom, Lost in Space, Logan, and Game of Thrones.
Quick and dirty free IBLs /
Some of my spare IBLs that I shot a while ago using a Ricoh Theta. They contain around 12EV of dynamic range. The resolution is not great, but it still holds up for look-dev and lighting tasks.
Feel free to download the equirectangular .exrs here.
Please do not use in commercial projects.
Clarisse shading layers: Crowd in 5 minutes /
One feature that I really like in Clarisse is shading layers. With them you can drive shaders based on naming conventions or the location of assets in the scene, which lets you assign shaders to a very complex scene structure in no time. In this particular case I'll show you how to shade an entire army and create shading/texturing variations in just a few minutes.
I'll be using an Alembic cache simulation exported from Maya using Golaem. Usually you will get thousands of objects with different naming conventions, which makes shading assignment a bit laborious. With shading layer rules in Clarisse we can speed this tedious process up a lot (there is also a small sketch of the idea after the steps below).
- Import the Alembic cache with the crowd simulation through File -> Import -> Scene.
- In this scene I have 1518 different objects.
- I'm going to create an IBL rig with one of my HDRIs to get some decent lighting in the scene.
- I created a new context called geometry where I placed the army and also created a ground plane.
- I also created another context called shaders where I'm going to place all my shaders for the soldiers.
- In the shaders context I created a new material called dummy, just a lambertian grey shader.
- We are going to use shading layers to apply shaders globally based on context and naming convention. I created a shading layer called army (new -> shading layer).
- With the pass (image) selected, select the 3D layer and apply the shading layer.
- Using the shading layer editor, add a new rule to apply the dummy shader to everything in the scene.
- I'm going to add a rule for everything called heavyArmor.
- Then just configure the heavyArmor shader with metal properties and its corresponding textures.
- Create a new rule for the helmets and apply the shader that contains the proper textures for the helmets.
- I keep adding rules and shaders for different parts of the soldiers.
- If I want to create random variation, I can create shading layers for specific part names or, even easier and faster, put a few items in a new context and create a new shading rule for them. For the bodies I want to use both caucasian and black skin for the soldiers. I grabbed a few bodies and placed them inside a new context called black, then created a new shading rule that applies a shader with different skin textures to all the bodies in that context.
- I repeated the same process for the shields and other elements.
- At the end of the process I can have a very populated army with a lot of random texture variations in just a few minutes.
- This is how my shading layers look at the end of the process.
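Under the hood a shading rule is just pattern matching on object names and contexts. This is not the Clarisse API, just a plain Python sketch of the idea; the paths, patterns and material names are made up, and I'm assuming that later, more specific rules override earlier ones, like the catch-all dummy rule above.
import fnmatch

# Hypothetical object paths and rules; these names are placeholders, not the Clarisse API.
objects = [
    "project://scene/geometry/soldier_001/heavyArmor",
    "project://scene/geometry/soldier_001/helmet",
    "project://scene/geometry/black/soldier_042/body",
]

# Rules are evaluated top to bottom; here the last matching rule wins.
rules = [
    ("*", "dummy"),                   # catch-all grey lambert
    ("*heavyArmor*", "metal_armor"),  # name-based rule
    ("*helmet*", "helmet_steel"),     # name-based rule
    ("*/black/*", "skin_dark"),       # context-based rule
]

def resolve_shader(path):
    shader = None
    for pattern, material in rules:
        if fnmatch.fnmatch(path, pattern):
            shader = material
    return shader

for obj in objects:
    print(obj, "->", resolve_shader(obj))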
RAW lighting and albedo AOVs in Arnold /
If you are new to Arnold you are probably looking for RAW lighting and albedo AOVs in the AOV editor. And yes, you are right: they are not there, at least when using aiStandard shaders.
The easiest and fastest solution would be to use alShaders; they include both RAW lighting and albedo AOVs. But if you need to use aiStandard shaders, you can create your own AOVs quite easily.
- In this capture you can see the available RAW lighting and albedo AOVs for the alShaders.
- If you are using aiStandard shaders you won't see those AOVs.
- If you still want/need to use aiStandard shaders, you will have to render your beauty pass with the standard AOVs and utility passes, and create the albedo pass by hand. You can easily do this by replacing the aiStandard shaders with surface shaders.
- If we have a look at them in Nuke they will look like this.
- If we divide the beauty pass by the albedo pass we will get the RAW lighting (see the sketch after this list).
- We can now modify only the lighting without affecting the colour.
- We can also modify the colour component without modifying the lighting.
- In this case I'm color correcting and cloning some stuff in the color pass.
- With a multiply operation I can combine both elements again to obtain the beauty render.
- If I disable all the modifications to both lighting and color, I should get exactly the same result as the original beauty pass.
- Finally I'm adding a ground using my shadow catcher information.
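The divide/multiply trick is easy to verify numerically. Below is a minimal numpy sketch of the maths; the arrays are just placeholders for the real EXR passes, and the albedo is clamped to avoid dividing by zero.
import numpy as np

# Placeholder images standing in for the rendered passes (read the real EXRs with your tool of choice).
albedo = np.random.rand(64, 64, 3).astype(np.float32)
lighting = np.random.rand(64, 64, 3).astype(np.float32)
beauty = albedo * lighting  # beauty = albedo * RAW lighting

# RAW lighting = beauty / albedo, guarding against black albedo pixels.
raw_lighting = np.where(albedo > 1e-6, beauty / np.maximum(albedo, 1e-6), 0.0)

# Grade lighting and colour independently, then multiply to rebuild the beauty.
graded_lighting = raw_lighting * 1.5        # e.g. push the exposure
graded_albedo = albedo * [1.0, 0.9, 0.8]    # e.g. warm up the colour
rebuilt_beauty = graded_albedo * graded_lighting

# With no grades applied, the rebuild matches the original beauty pass.
assert np.allclose(albedo * raw_lighting, beauty, atol=1e-5)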
Environment reconstruction + HDR projections /
I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we get more accurate lighting contribution, occlusion, reflections and color bleeding, and much better interaction between the environment and the 3D assets, which basically means better integrations for our VFX shots.
I tried to make it as simple as possible, spending just a couple of hours on location.
- The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information needed later for the virtual reconstruction.
- Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
- The drawings and annotations would have been good enough for this environment, just because it's quite simple. But in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time: not just texture placeholders, but true HDR textures that I can use later for projections.
- I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
- For the photogrammetry process I took around 500 shots. Every single one composed of 3 bracketed exposures, 3 stops apart. This will give me a good dynamic range for this particular environment.
- I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for the photogrammetry (there is a small sketch of the merging step after this list).
- I also did a few equirectangular HDRIs with an even higher dynamic range. Then I projected these in Mari using the environment projection feature. Once I completed the projections from the different tripod positions, I covered the remaining areas with the rectilinear HDRs.
- These are the five different HDRI positions and some render tests.
- The next step is to create a proxy version of the environment. With the 3D scan this is very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high-detail model, but in this case the proxy version was good enough for what I needed.
- Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
- After that, I imported into Mari a few cameras exported from Photoscan and the corresponding rectilinear HDR images, applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
- Finally I exported all the UDIMs to Maya (around 70), all of them 16-bit images with the original dynamic range required for 3D lighting.
- After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I also did a few render tests with this old character.
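For reference, this is roughly what the bracket-merging step looks like if you do it with OpenCV's Debevec merge. The file names and exposure times below are placeholders, not my actual setup.
import cv2
import numpy as np

# Three bracketed exposures of the same view, roughly 3 stops apart (placeholder paths and shutter speeds).
files = ["view01_under.jpg", "view01_mid.jpg", "view01_over.jpg"]
times = np.array([1 / 1000.0, 1 / 125.0, 1 / 15.0], dtype=np.float32)
images = [cv2.imread(f) for f in files]

# Merge the brackets into a single linear HDR image.
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times=times)

# Write the float HDR for texturing/projection; a tonemapped LDR copy can be used for photogrammetry.
cv2.imwrite("view01.hdr", hdr)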
Subdivide multiple objects in Arnold /
As you probably know, Arnold manages subdivision individually per object; there is no way to subdivide multiple objects at once. Obviously, if you have a lot of different objects in a scene, going one by one adding Arnold's subdivision properties doesn't sound like a good idea.
This is the easiest way that I found to solve this problem and subdivide tons of objects at once.
I have no idea at all about scripting, so if you have a better solution, please let me know :)
- This is the character that I want to subdivide. As you can see it has a lot of small pieces. I'd like to keep them separate and subdivide every single one of them.
- First of all, you need to select all the geometry shapes. To do this, select all the geometry objects in the outliner and paste these lines in the script editor (there is also a Python alternative after these steps).
/* you have to select all the objects you want to subdivide, it doesn’t work with groups or locators.
once the shapes are selected just change aiSubdivType and aiSubdivIterations on the attribute spread sheet.
*/
pickWalk -d down;
string $shapesSelected[] = `ls -sl`;
- Once all the shapes are selected, go to the Attribute Spread Sheet.
- Filter by ai subd.
- Just type the subdivision method and iterations.
- This is it, the whole character is now subdivided.
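If you prefer Python over the Attribute Spread Sheet, here is a small maya.cmds sketch doing the same thing. It assumes MtoA is loaded (so the aiSubdiv attributes exist) and that 1 corresponds to catclark; because it walks the DAG below the selection, groups are fine too.
import maya.cmds as cmds

# Collect every mesh shape under the current selection.
shapes = cmds.ls(selection=True, dagObjects=True, type="mesh", long=True) or []

for shape in shapes:
    # These attributes are added by MtoA; 1 = catclark subdivision.
    cmds.setAttr(shape + ".aiSubdivType", 1)
    cmds.setAttr(shape + ".aiSubdivIterations", 2)

print("Subdivided {} shapes".format(len(shapes)))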
Rendering Maya particles in Clarisse /
This is a very simple tutorial explaining how to render particle systems simulated in Maya inside Isotropix Clarisse. I already have a few posts about using Clarisse for different purposes; if you check the tag "Clarisse" you will find all the previous ones. I hope to publish more soon.
In this particular case we'll be using a very simple particle system in Maya. We are going to export it to Clarisse and use custom geometries and Clarisse's powerful scatterer system to render millions of polygons very fast and nicely.
- Once your particle system has been simulated in Maya, export it via Alembic, one of the standard formats for exchanging 3D information in VFX (a small export sketch follows after these steps).
- Create an IBL rig in Clarisse. In a previous post I explain how to do it, it is quite simple.
- With Clarisse 2.0 it is so simple to do, just one click and you are ready to go.
- Go to File -> Import -> Scene and select the Alembic file exported from Maya.
- It comes with two particle systems, a grid acting as the ground and the render camera.
- Create a few contexts to keep everything tidy. Geo, particles, cameras and materials.
- In the geo context I imported the toy_man and the toy_truck models (.obj) and moved the grid from the main context to the geo context.
- I moved the two particle systems and the camera to their corresponding contexts.
- In the materials context I created 2 materials and 2 color textures for the models. Very simple shaders and textures.
- In the particles context I created a new scatterer called scatterer_typeA.
- In the geometry support of the scatterer add the particles_typeA, and in the geometry section add the toy_man model.
- I’m also adding some variation to the rotation.
- If I move my timeline I will see the particle animation using the toy_man model.
- Do not forget to assign the material created before.
- Create another scatterer for the particles_typeB and configure the geometry support and the geometry to be used.
- Also add some rotation and position variation.
- As these models are quite big compared with the toy figurine, I’m offsetting the particle effect to reduce the presence of toy_trucks in the scene.
- Before rendering, I’d like to add some motion blur to the scene. Go to raytracer -> Motion Blur -> 3D motion blur. Now you are ready to render the whole animation.
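For reference, the Alembic export from Maya can also be launched from the script editor. A minimal sketch, assuming the AbcExport plug-in is available; the node names, frame range and output path below are placeholders.
import maya.cmds as cmds

# Make sure the Alembic exporter is loaded.
cmds.loadPlugin("AbcExport", quiet=True)

# Placeholder roots, frame range and path; adjust to your particle systems and camera.
job = ("-frameRange 1 120 "
       "-root |particles_typeA -root |particles_typeB -root |renderCam "
       "-file /path/to/particles.abc")
cmds.AbcExport(j=job)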
Clarisse AOVs overview /
This is a very quick overview of how to use AOVs in Clarisse.
I started from this very simple scene.
- Select your render image and then the 3D layer.
- Open the AOV editor and select the components that you need for your compositing. In my case I only need diffuse, reflection and sss.
- Click on the plus button to enable them.
- Now you can check every single AOV in the image view frame buffer.
- Create a new context called "compositing" and inside of it create a new image called "comp_image".
- Add a black color layer.
- Add an add filter and texture it using a constant color. This will be the entry point for our comp.
- Drag and drop the constant color to the material editor.
- Drag and drop the image render to the material editor.
- If you connect the image render to the constant color input, you will see the beauty pass. Let's split it into AOVs.
- Rename the map to diffuse and select the diffuse channel.
- Repeat the process with all the AOVs; you can copy and paste the map node.
- Add a few add nodes to merge all the AOVs until you get the beauty pass. That's it, your comp in a real-time 3D environment. Whatever you change/add in your scene will be updated automatically (see the sketch after these steps).
- Let's say you don't need your comp inside Clarisse. Fine, just select your render image, configure the output and bring up the render manager to output your final render.
- Just do the comp in Nuke as usual.
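The chain of add nodes is doing nothing more than summing the AOVs. A tiny numpy sketch of the idea, with placeholder arrays standing in for the rendered channels and assuming these three are the only contributions.
import numpy as np

# Placeholder AOVs; in practice these come from the rendered EXR channels.
diffuse = np.random.rand(32, 32, 3).astype(np.float32)
reflection = np.random.rand(32, 32, 3).astype(np.float32)
sss = np.random.rand(32, 32, 3).astype(np.float32)

# Summing the light AOVs rebuilds the beauty pass.
beauty = diffuse + reflection + sss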
Zbrush displacement in Clarisse /
This is a very quick guide to setting up Zbrush displacements in Clarisse.
As usual, the most important thing is to extract the displacement map from Zbrush correctly. To do so, just check my previous post about this procedure.
Once your displacement maps are exported follow this mini tutorial.
- In order to keep everything tidy and clean I will put all the stuff related to this tutorial inside a new context called "hand".
- In this case I imported the base geometry and created a standard shader with a gray color.
- I'm just using a very simple Image Based Lighting set-up.
- Then I created a map file and a displacement node. I renamed everything to keep it tidy.
- Select the displacement texture for the hand and set the image to raw/linear (I'm using 32-bit .exr files).
- In the displacement node set the bounding box to something like 1 to start with.
- Add the displacement map to the front value, leave the value at 1m (which is not actually 1m, it's more like a global unit), and set the front offset to 0.
- Finally add the displacement node to the geometry.
- That's it. Render and you will get a nice displacement.
- If you are still working with 16-bit displacement maps, remember to set the displacement node offset to 0.5 and play with the value until you find the correct behaviour (see the sketch below for why).
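The 0.5 offset for 16-bit maps comes from where the neutral (no displacement) value sits in the file. A quick numpy sketch of the remap, with made-up numbers.
import numpy as np

height = 1.0  # displacement scale in scene units (the "front value")

# 32-bit float maps store signed displacement directly: 0.0 means no displacement.
disp_32 = np.array([-0.2, 0.0, 0.35], dtype=np.float32)
result_32 = (disp_32 - 0.0) * height

# 16-bit maps are normalised to 0..1 with 0.5 as the neutral mid-grey,
# hence the 0.5 offset on the displacement node.
disp_16 = np.array([0.3, 0.5, 0.85], dtype=np.float32)
result_16 = (disp_16 - 0.5) * height

print(result_32, result_16)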
Sketch shader /
First attempt to create a shader that looks like rough 2D sketches.
I will definitely put more effort into this in the future.
I'm pretty much combining three different pen strokes.
Clarisse, layers and passes /
I will continue writing about my experiences working with Clarisse. This time I'm gonna talk about working with layers and passes, a very common topic in the rendering world no matter what software you are using.
Clarisse allows you to create very complex organization systems using contexts, layers/passes and images. In addition to that we can compose all the information inside Clarisse in order to create different outputs for compositing.
Clarisse has very clever organization methods for huge scenes.
- For this tutorial I'm going to use a very simple scene. The goal is to create one render layer for each element of the scene. At the end of this article we will have the foreground, midground, background, floor and shadows isolated.
- At this point I have an image with a 3DLayer containing all the elements of the scene.
- I've created 3 different contexts for foreground, midground and background.
- Inside each context I put the correspondent geometry.
- Inside each context I created an empty image.
- I created a 3DLayer for each image.
- We need to indicate which camera and renderer need to be used in each 3DLayer.
- We also need to indicate which lights are going to be used in each layer.
- At this point you probably realized how powerful Clarisse can be for organization purposes.
- In the background context I'm rendering both the sphere and the floor.
- In the scene context I've created a new image. This image will be the recipient for all the other images created before.
- In this case I'm not creating 3DLayers but Image Layers.
- In the layers options select each one of the layers created before.
- I put the background on the bottom and the foreground on the top.
- We face the problem that only the sphere has working shadows. This is because there is no floor in the other contexts.
- In order to fix this I moved the floor to another context called shadow_catcher.
- I created a new 3DLayer where I selected the camera and renderer.
- I created a group with the sphere, cube and cylinder.
- I moved the group to the shadows parameter of the 3DLayer.
- In the recipient image I placed the shadows at the bottom. That's it, we have shadows working now.
- Oh wait, not that fast. If you check the first image of this post you will realize that the cube is actually intersecting the floor, but in this render that is not happening at all. This is because the floor is not in the cube's context acting as a matte object.
- To fix this just create an instance of the floor in the cube context.
- In the shading options of the floor I localized the matte and alpha parameters (RMB and click on localize).
- Then I activated those options and set the alpha to 0%.
- That's it, working perfectly.
- At this point everything is working fine, but we have the floor and the shadows together. Maybe you would like to have them separated so you can tweak both of them independently.
- To do this, I created a new context only with the floor.
- In the shadows context I created a new "decal" material and assigned it to the floor.
- In the decal material I activated receive illumination.
- And finally I added the new image to the recipient image (the sketch after these steps shows how the layers stack up).
- You can download the sample scene here.
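Stacking image layers in the recipient image behaves like the classic premultiplied "over" operation. A small numpy sketch of the idea, with placeholder RGBA layers and an illustrative stacking order.
import numpy as np

def over(fg, bg):
    # Classic premultiplied over: fg + bg * (1 - fg_alpha).
    alpha = fg[..., 3:4]
    return fg + bg * (1.0 - alpha)

# Placeholder RGBA layers standing in for the rendered images (bottom to top).
background, shadows, floor, midground, foreground = (
    np.random.rand(16, 16, 4).astype(np.float32) for _ in range(5)
)

comp = background
for layer in (shadows, floor, midground, foreground):
    comp = over(layer, comp)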
Love Vray's IBL /
When you work for a big VFX or animation studio you usually light your shots with different complex light rigs, often developed by highly talented people.
But when you are working at home, for small studios, or doing freelance tasks, you need to simplify your techniques and try to reach the best quality you can.
For those reasons, I have to say that I’m switching from Mental Ray to V-Ray.
One of the features that I most love about V-Ray is the awesome dome light to create image based lighting setups.
Let me tell you a couple of things which make the dome light so great.
- First of all, the technical setup is incredibly simple. Just a few clicks: activate linear workflow, correct the gamma of your textures and choose a nice HDRI image.
- It is quick and simple to reduce the noise generated by the HDRI image; increasing the maximum subdivisions and decreasing the threshold should be enough. Something between 25 and 50, or up to 100, as max subdivisions should work in common situations, and something like 0.005 is a good value for the threshold.
- Render times are very fast for raytraced stuff.
- Even using global illumination the render times are more than good.
- Displacement, motion blur and that kind of heavy stuff is also welcome.
- Another thing that I love about the dome light with HDRI images is the great quality of the shadows. Usually you don't need to add direct lights to the scene; if the HDRI is good enough you can match the footage really fast and accurately.
- The dome light has some parameters to control the orientation of your HDRI image, and it is quite simple to get a nice preview in Maya's viewport.
- In all the renders that you can see here, you probably realized that I'm using an HDRI image with "a lot" of different lighting points, around 12 different lights in the picture. In this example I put a black color in the background and replaced all the lights with white spots. It is a good test to get a better idea of how the dome light treats direct lighting. And it is great.
- The natural light is soft and nice.
- These are some of the key reasons why I love V-Ray's dome light :)
- On the other hand, I don't like doing look-dev with the dome light. It is really, really slow; I can't recommend this light for that kind of task.
- The trick is to turn off your dome light and create a traditional IBL setup using a sphere and direct lights, or plug your HDRI image into V-Ray's environment and turn on global illumination.
- Work on your shaders there and then move back to the dome light.
My favourite V-Ray passes /
Working with V-Ray recently, I discovered that these are the render passes I use most often.
Simple scene, simple asset, simple texture and shading and simple lighting, just to show my render passes and pre-compositing stuff.
- Global Illumination
- Direct lighting
- Normals
- Reflection
- Specular
- Z-Depth
- Occlusion
- Snow (or up/down)
- UVs
- XYZ (or global position)
Linear Workflow in Maya with Vray 2.0 /
I'm starting a new project with V-Ray 2.0 for Maya. I had never worked with this render engine before, so first things first.
One of the first things is to create a nice neutral light rig for testing shaders and textures. Setting up a linear workflow is one of my priorities at this point.
Find below a quick way to set this up.
- Set up your gamma. In this case I'm using 2.2.
- Check "don't affect colors" if you want to keep the render linear and apply the gamma correction in post; leave it unchecked if you want the gamma correction baked into the final render. Either way, no big deal.
- The linear workflow option is something created by Chaos Group to fix old V-Ray scenes which don't use a linear workflow. You shouldn't use it at all.
- Click on affect swatches to see color pickers with the gamma applied.
- Once you are working with the gamma applied, you need to correct your color textures. There are two different ways to do it.
- First one: add a gamma correction node to each color texture node. In this case I'm using gamma 2.2, which means that I need to use a 0.455 value on my gamma node (see the sketch after these steps for where that number comes from).
- Second option: Instead of using gamma correction nodes for each color texture node, you can click on the texture node and add a V-Ray attribute to control this.
- By default all the texture nodes are being read as linear. Change your color textures to be read as sRGB.
- Click on view as sRGB in the V-Ray frame buffer, otherwise you'll see your renders in the wrong color space.
- This is the difference between rendering with the option “don’t affect colors” enabled or disabled. As I said, no big deal.
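That 0.455 is simply the inverse of the 2.2 display gamma (1 / 2.2 ≈ 0.4545). A tiny sketch of the round trip:
# Relationship between the 2.2 display gamma and the 0.455 value on the gamma node.
display_gamma = 2.2
node_value = 1.0 / display_gamma  # ~0.4545

# A gamma-encoded texture sample is linearised by raising it to the display gamma...
encoded = 0.5
linear = encoded ** display_gamma
# ...and re-encoded for display with the inverse.
re_encoded = linear ** node_value

print(round(node_value, 4), round(linear, 4), round(re_encoded, 4))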
Faking SSS in Softimage /
SSS is a very nice shader which works really well with a good lighting setup, but sometimes it is quite an expensive shader when you're using Mental Ray.
Find below a couple of techniques to deal better with SSS. Just keep in mind that these tricks can improve your render times a bit, but they will never reach the same quality as using SSS itself.
- I’m using this simple scene, with one key light (left), one fill light (right) and one rim light.
- An SSS compound is connected to the material's surface input, and the SSS_lightmap (you can find that node in the render tree -> user tools) is connected to the lightmap input of the SimpleSSS. Then the SimpleSSS lightmap is connected to the material's lightmap input.
- Set the output path and resolution of your lightmap.
- Hit a render and check the render time.
- Disconnect the lightmap.
- Render again and check the render times as well. We have improved the times.
- If you really need to fake the SSS and render very fast, you can bake the SSS to a texture using RenderMap, but keep in mind that the result will be much worse than using real SSS. Anyway, you can do that for background assets or similar.
- Now you can use a cheaper shader like Blinn, Phong or even a constant with your baked SSS.
- As you can see, the render is now much faster.