look-dev

Katana Fastrack episode 04 by Xuan Prada

Katana Fastrack episode 04 is now available.
In this episode, we will finish the Ant-Man look-dev by tweaking all the shaders and texture maps created in Mari.

Then we will do a very quick slapcomp in Katana and Nuke to check that everything works as expected and looks good. We will do this by rendering the full motion range of Ant-Man's walk cycle. And finally, we will write a Katana look file to be used by the lighters in their shots.

Check it out on my Patreon feed.

Nuke IBL templates by Xuan Prada

Hello,

I just finished recording about 3 hours of content going through a couple of my Nuke IBL templates. The first one is all about clean-up and artifact removal. You know, how to get rid of chunky tripods, remove people from set and whatnot. I will explain a couple of ways of dealing with these issues, both in 2D and in 3D using Nuke's powerful 3D system.

In the second template, I will guide you through the neutralization process, which includes both linearization and white balance. Some people know this process as technical grading. It is a very important step that lighting supervisors or sequence supervisors usually deal with before starting to light any VFX shot.

Footage, scripts and other material will be available to you if you are supporting one of the tiers with downloadable material.

Thanks again for your support! And if you like my Patreon feed, please help me spread the word. I would love to get at least 50 patrons, and we are not that far away!

All the info on my Patreon feed.

Katana Fastrack episode 03 by Xuan Prada

Episode 03 of my Katana series is out. We are going to be talking about expressions, macros and tools to take our look-dev template to the next level. Right after that, we will take a look at the texture channels that I painted in Mari for this character, and then we will start the look-dev of Ant-Man.

We divide the look-dev into different stages; the first one is blocking, and we are going to spend quite a bit of time working on it today.

All the info on my Patreon feed.

Katana Fastrack episode 02 by Xuan Prada

Katana Fastrack episode 02 is now available for all my patrons. I cover how to create a proper look-dev template to be used in visual effects. Everything will be set up from scratch, and at the end of this lesson we will have a Katana script ready to be used. In lesson 03 we'll be using this script to do all the look-dev for Ant-Man.

In Katana Fastrack episode 02 you will learn:

- How to create master look files
- How to use live groups to create light rigs
- How to create a look-dev template for production

All the info on my Patreon feed.

Katana Fastrack episode 01 by Xuan Prada

Here it is, the very first episode of my series "Katana Fastrack", available to all my exclusive patrons.
This is an introductory video where I'm going to give you an overview of what this course is all about. I hope you like it, it is going to be a lot of fun!

You will learn:

- Where Katana fits in the pipeline
- The most important concepts of Katana's workflow
- How to prepare assets for Katana
- The importance of look-dev recipes
- How to create a very basic recipe

Check it out on my Patreon feed.

Patreon: Houdini as scene assembler: Bundles, takes and rops by Xuan Prada

In this video I talk about using Houdini as a scene assembler. This topic will be recurrent in future posts, as Houdini is becoming a very popular tool for look-dev, lighting, rendering and layout, among other tasks.

In this case I go through bundles, takes and ROPs, and how we use them while lighting shots in visual effects projects.

You will learn:

- Bundles, takes, ROPs (see the bundle sketch after this list)
- Alembic import
- Different ways of assigning materials
- Creating look-dev collections
- Generating .ass files
- Creating render layers
- Creating quick slap comps
- Overriding materials
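
As a quick taste of what bundles do, here is a minimal Python sketch that collects every light in /obj into a bundle so they can be managed as one selection. The bundle name and the light-type prefix are assumptions; light node-type names vary between Houdini versions.

```python
import hou

# Create a named bundle and fill it with every light object found in /obj
bundle = hou.addNodeBundle("lgt_bundle")
for node in hou.node("/obj").children():
    if node.type().name().startswith("hlight"):
        bundle.addNode(node)

print(bundle.nodes())  # the lights now travel together as a single selection
```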

Check it out here.

Katana, constrain lights to an alembic geometry by Xuan Prada

One of the most common situations while lighting a shot is attaching a CG light in your scene assembler to an alembic cache exported from Maya. This is very simple to do in Katana, so let's have a look at it.
I'm using this simple animation of a car spinning around.


In most cases you need an object within the alembic cache that has the animation baked into it. The usual approach is to use a locator. To do so, snap it onto one of the car's light geometries and parent-constrain it to the master control of the car. Then bake the animation of the locator and export it with the rest of the alembic cache to Katana, as sketched below.
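
A minimal Maya Python sketch of that locator setup; the geometry and control names, and the frame range, are assumptions for illustration.

```python
import maya.cmds as cmds

# Create the locator and snap it onto the headlight geometry
loc = cmds.spaceLocator(name="headlight_LOC")[0]
cmds.delete(cmds.pointConstraint("headlight_geo", loc))  # snap, then remove the temp constraint

# Follow the car's master control, then bake the motion so it survives the alembic export
cmds.parentConstraint("car_master_CTL", loc, maintainOffset=True)
cmds.bakeResults(loc, time=(1001, 1100), simulation=True)
```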

In Katana, create a gafferThree node but do not place any lights yet. It is better to do the constraints first; otherwise you might have to deal with offset issues later on.
Use a parentChildConstraint node, indicating the gaffer node in the basePath and the locator of the car in the target.
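
The same setup can be scripted. A hypothetical Katana Python sketch, with scene-graph locations and parameter names that you should verify against your own scene and Katana version:

```python
from Katana import NodegraphAPI

root = NodegraphAPI.GetRootNode()
constraint = NodegraphAPI.CreateNode("ParentChildConstraint", root)
# basePath: the gaffer location to constrain; target: the baked locator
constraint.getParameter("basePath").setValue("/root/world/lgt/gaffer", 0)
constraint.getParameter("targetPath").setValue("/root/world/geo/car/locator1", 0)
```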

Now place both headlights according to the model of the car. If you press play, they should follow the animation of the car perfectly.


In case you forget to do the parent constraint before adding lights to the gaffer, you might have to control the offset and compensate for it. To actually see the values, you can add a constraintResolve and a transformEdit node to check the transformations.

Houdini as scene assembler part 05. User attributes by Xuan Prada

Sometimes, especially during the layout/set dressing stage, artists have to decide on certain rules or patterns to compose a shot. Take a football stadium, for example: imagine that the first row of seats is blue, the next row is red and the third row is green.
There are many ways of doing this, but let's say that we have thousands of seats and we know the colors that they should have. Then it is easy to create rules and patterns that allow total flexibility later on when texturing and look-deving.

In this example I'm using my favourite tool for explaining 3D stuff: Lego figurines. I have 4 rows of Lego heads and I want each of those to have a different Lego face, but at the same time I want to use the same shader for all of them; I just want different textures. By doing this I will end up with a very simple and tidy setup, and iteration won't be a pain.

Doing this in Maya is quite straightforward, and I explained the process some time ago on this blog. What I want to illustrate now is another common situation that we face in production. Layout artists and set dressers usually do their work in Maya and then pass it on to look-dev artists and lighting TDs, who usually use scene assemblers like Katana, Clarisse, Houdini or Gaffer.

In this example I want to show you how to handle user attributes from Maya in Houdini to create texture and shader variations.

  • In Maya select all the shapes and add a custom attribute.

  • Call it “variation”

  • Data type integer

  • Default value 0

  • Add a different value to each Lego head, as many values as texture variations you need to have

  • Export all the Lego heads as alembic; remember to include the attributes that you want to export to Houdini (see the sketch after this list)

  • Import the alembic file in Houdini

  • Connect all the texture variations to a switch node

  • This can also be done with shaders, following exactly the same workflow

  • Connect a user data int node to the index input of the switch node and type the name of your attribute

  • Finally, the render comes out as expected without any further tweaks: just one shader that automatically picks up different textures based on the layout artist's criteria
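
Here is a minimal Maya Python sketch of the tagging step, assuming the Lego heads are selected; the cycled values are just for illustration.

```python
import maya.cmds as cmds

# Add an integer "variation" attribute to every selected shape
shapes = cmds.ls(selection=True, dagObjects=True, shapes=True)
for i, shape in enumerate(shapes):
    if not cmds.attributeQuery("variation", node=shape, exists=True):
        cmds.addAttr(shape, longName="variation", attributeType="long", defaultValue=0)
    cmds.setAttr(shape + ".variation", i % 4)  # four texture variations

# When exporting, ask AbcExport to carry the attribute along, e.g.:
# AbcExport -j "-frameRange 1 1 -attr variation -root |legoHeads -file heads.abc"
```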

Houdini as scene assembler part 03 by Xuan Prada

In this post I will talk about using texture bitmaps and subdivision surfaces.
I have a material network with a couple of shaders, one for the body of this character and another one for the rest. If using Arnold I would have a shop network.

To bring in texture bitmaps I use texture nodes when working with Mantra and image nodes when working with Arnold. The principled shader has tabs with inputs for textures, but I rarely use these; I always create nodes to take care of the texturing. At the end of the day I never use only one texture per channel. More on this in future posts.
In Mantra, textures are multiplied by the albedo color. Be careful with this.

The UDIM tag with Mantra is textureName.%(UDIM)d.exr; with Arnold it is textureName.<UDIM>.exr.

There is a triplanar node that can be used with Arnold and a different one called UV triplanar projection for Mantra. I don’t usually work without UVs, but these nodes can be useful when working with terrains or other large surfaces.

To subdivide geometry at object level, you can just go to the Arnold tab and select the type of subdivision and the number of iterations. If you need to subdivide only a few parts of your alembic asset, create an unpack node (transferring attributes and groups) and then a subdivide node, as in the sketch below. This works with both Mantra and Arnold, although there is a better way of doing this with Arnold. We will talk about it in the future.
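
A rough Houdini Python sketch of that unpack/subdivide chain; the node path is an assumption and the parameter names may differ slightly between Houdini versions.

```python
import hou

abc = hou.node("/obj/character/alembic1")      # assumed alembic SOP
unpack = abc.createOutputNode("unpack")
unpack.parm("transfer_attributes").set("*")    # keep attributes through the unpack
unpack.parm("transfer_groups").set("*")        # keep groups so parts can be isolated
subd = unpack.createOutputNode("subdivide")
subd.parm("iterations").set(2)                 # subdivision depth
subd.setDisplayFlag(True)
subd.setRenderFlag(True)
```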

Houdini as scene assembler part 02 by Xuan Prada

In the previous post I showed you how to load alembic caches using the file node and then change the viewport visualization to bounding box. This is good enough if you are, let's say, look-deving a character. If you want to load a heavy alembic cache, like a very detailed city with a lot of buildings or a huge spaceship, you might want to use a different approach.

Instead of using a file node, it is better to use the alembic node to load your assets, set the option Load As: Alembic Delayed Load Primitives, and display as bounding box, as in the sketch below. This doesn't actually load the geometry in memory and will be way more efficient down the line.
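
A quick Python sketch of that setup, with an assumed cache path; the Load As parameter token may differ per Houdini version, so verify it in the UI.

```python
import hou

obj = hou.node("/obj").createNode("geo", "city")
obj.parm("viewportlod").set("box")             # display as bounding box
abc = obj.createNode("alembic")
abc.parm("fileName").set("$HIP/abc/city.abc")  # assumed cache path
abc.parm("loadmode").set("alembic")            # "Alembic Delayed Load Primitives" (token assumed)
```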

In this post I'm just talking about shading assignment in Houdini.
The easiest and simplest way to assign shaders is by selecting the asset node in the /obj context and assigning a shader in the render tab. Your Mantra shaders should be placed in the /mat context and your Arnold shaders in the /shop context, as /mat is not fully supported yet.

In the /mat context you can just go and create a Mantra Principled Shader. For Arnold, it is better to create an Arnold shader network and then connect any Arnold shader inside it to the surface input.

Houdini doesn't have an isolation mode for shading components like Maya does (as far as I know), but you can drag and drop shaders and textures onto the viewport or IPR while look-deving. This only works in the /mat context (again, as far as I know).

Another way of assigning shaders is by creating material nodes inside the alembic node. These material nodes can be assigned to different parts of your asset using wildcards. To assign multiple materials, you can create different tabs in the material node, or you can just concatenate material nodes (which I prefer), as sketched below. This technique works with both Mantra and Arnold.
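
A sketch of the concatenated version in Python, with hypothetical group wildcards and shader paths:

```python
import hou

abc = hou.node("/obj/asset/alembic1")             # assumed alembic SOP
mat_body = abc.createOutputNode("material")
mat_body.parm("group1").set("*body*")             # wildcard over the Maya hierarchy
mat_body.parm("shop_materialpath1").set("/mat/body_shader")
mat_rest = mat_body.createOutputNode("material")  # concatenated material node
mat_rest.parm("group1").set("*head*")
mat_rest.parm("shop_materialpath1").set("/mat/head_shader")
```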

Most of the time you will find yourself creating material networks (Mantra) or shop networks (Arnold) containing all the shaders of your asset. In a lighting shot you will end up with a different subnetwork for each asset in the shot.
These subnetworks of shaders can be placed at the /obj level or inside the alembics containing your assets.

Another clever way of assigning shaders is using the data tree -> object appearance. This only works at object level; if you want to go deeper into your alembic asset, you first need to add a node called packed edit. Then in the data tree you will have access to all the different parts of your asset.

There is another way of controlling looks in Houdini, and that is using material stylesheets. We will cover this tool in future posts.

Introduction to Gaffer by Xuan Prada

By Gaffer HQ: Gaffer is a free, open-source, node-based VFX application that enables look developers, lighters, and compositors to easily build, tweak, iterate, and render scenes. Built with flexibility in mind, Gaffer supports in-application scripting in Python and OSL, so VFX artists and technical directors can design shaders, automate processes, and build production workflows.

With hooks in both C++ and Python, Gaffer's readily extensible API provides both professional studios and enthusiasts with the tools to add their own custom modules, nodes, and UI.

The workhorse of the production pipeline at Image Engine Design Inc., Gaffer has been used to build award-winning VFX for shows such as Jurassic World: Fallen Kingdom, Lost in Space, Logan, and Game of Thrones.

Houdini as scene assembler, part 01 (of many) by Xuan Prada

It's been a while since I used Houdini at work. The very first time I used Houdini on a show was while working on Happy Feet 2, where it was our main scene assembler. Look-dev, lighting and rendering were all done in Houdini and 3Delight.

After that I never used Houdini again until I was working on Geostorm at Dneg, where most of the shots were managed with Houdini and PrMan. That is all my experience with Houdini in a professional environment. Needless to say, I have only used Houdini for assembly tasks, look-dev, lighting and rendering, nothing like FX or other fancy stuff.

The common thing between the two shows where I used Houdini as an assembler is that we had pretty neat tools to take care of most of the steps in the pipeline. Because of that, I can barely use Houdini out of the box, so I'm going to try to learn how to use it and share it here for future reference.

During my time working at facilities like MPC, Dneg or Framestore, I have used different scene assemblers like Katana, Clarisse and other proprietary tools. My goal is to extrapolate my knowledge and experience with that software to Houdini. I'm pretty sure I'll be using tools and techniques in the wrong way, just because Houdini has a different philosophy than other tools, or just because of my general lack of knowledge about Houdini and proceduralism. But anyway, I'll try to make it work; if you see anything that I'm doing terribly wrong, please let me know, I'll be listening.

I'll be posting about the stuff that I'm dealing with in no particular order, but always assembly oriented; do not expect to see here anything related to FX or more "traditional" uses of Houdini. Most of the stuff is going to be very basic, especially at the beginning, but please bear with me, it will get more interesting in the future.

If you are assembling a scene, one of the first steps will be to bring in all your assets from other applications. You can of course generate content in Houdini, but usually most of your assets will be created in other packages, Maya being the most common one. So I guess the very first thing you'd have to deal with is how to import alembic caches. If you are working in a VFX facility, chances of having automated tools to set up your shots for you are pretty high; launching Houdini from a context in a terminal will take care of everything. If you are at home, or starting to use Houdini in a VFX boutique, you will have to set up your shots manually. There are clever and easy ways to create Houdini templates for your show/shot, but we will leave this topic for future posts.

To bring in your assets as alembic caches, just create a file node, step inside and replace the existing file node with another one pointing to your alembic cache, or simply use the existing file node and change its path to read your alembic cache.

If you are look-deving, let's say, a character, it is completely fine to look at the full geometry in the viewport. If you are assembling a big scene like a city or a spaceship, you'd probably want to change your viewport settings to something like bounding boxes. There are better ways of dealing with bounding boxes without loading the geo; more to come soon.

Assets are usually complex, and we try to keep everything tidy and organised by naming everything properly and structuring groups and hierarchies in a particular way that makes sense for our purposes. The unpack node will allow you to access all the different parts and components of the alembic caches and to perform different operations later on. The groups can be selected based on the hierarchies created in Maya or based on wildcards. It is extremely important to use clever naming and to structure groups following a certain logic, to make the assembly process easier and faster.

The blast node will also help you to access the information contained in the alembic cache and remove whatever you don't need for a particular operation. You can also invert the selection to keep the items that you wrote in the group field and get rid of the rest, as in the sketch below.
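
For reference, a minimal Python version of that blast setup; the node path and group wildcard are assumptions.

```python
import hou

abc = hou.node("/obj/city/alembic1")       # assumed alembic SOP
unpack = abc.createOutputNode("unpack")    # expose the alembic groups
blast = unpack.createOutputNode("blast")
blast.parm("group").set("*building_A*")    # wildcard based on the Maya hierarchy
blast.parm("negate").set(True)             # invert: keep the group, delete the rest
```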

The group node is another very useful node for pointing to different groups in your alembic caches, again based on Maya grouping and wildcards.

That is it for now in that sense; there are many ways to manipulate alembic caches, but we don't need to talk about that just yet. In these first posts I will be talking mostly about bringing in assets, working with textures and look-dev. That is the first step for assembling a shot: we need assets ready to travel through the pipeline.

UV mapping is key for us. A lot of tasks performed in Houdini use procedural UVs or no UVs at all, but this is not the case for us: assets always have proper UV mapping. Generally speaking, you will do all the UV-related tasks in Maya, UV Layout or similar tools. In order to see the UVs in Houdini, we need to unpack the alembic cache first; then we will be able to press "5" and look at the UVs.

Use a quick uv shade node to display a checkered texture in the viewport. You can easily change the size of the checker or use a different texture. There is also a group field that you can use for filtering.

It is not ideal, but if you are working on extremely simple assets like walls, grounds, maybe terrains, it is totally fine to create the UVs in Houdini. Houdini's UV tools are not the best, but you will find yourself using them at some point. The uv texture node creates basic projections like cylindrical, orthographic, etc.

The uv unwrap node creates automatic UVs based on projection planes.

The uv layout node is a tool for packing your UVs. Using a fixed scale, you can distribute the UVs across different UDIMs.

The auto uv node is actually pretty good. It is part of the game development tools shipped with Houdini. You need to activate this package first: just go to the shelf, click on the plus button and look for the game development tools. Then click on the "update toolset" icon to get the latest version.

The auto uv tool has different methods for UVing and packing, and it is worth trying them all; it works really well, especially with messy objects.

The uv transform node deals with anything related to moving UVs around: translating, rotating, etc. You don't really want to do this in Houdini, but if you have to, this is the tool. I use it a lot if I need to re-distribute UDIM tiles.

The attribute create node allows you to create an attribute that moves UVs to a specific UDIM, as in the sketch below. Then add a uv layout node and set the packing method to UDIM attribute.
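
A hypothetical sketch of that attribute create / uv layout pair; the menu indices are assumptions, so check them against your build.

```python
import hou

geo = hou.node("/obj/asset/uv_work")                   # assumed SOP to branch from
tag = geo.createOutputNode("attribcreate", "udim_tag")
tag.parm("name1").set("udim")
tag.parm("class1").set(1)                              # primitive attribute (assumed menu index)
tag.parm("type1").set(1)                               # integer (assumed menu index)
tag.parm("value1v1").set(1002)                         # send the tagged prims to UDIM 1002
layout = tag.createOutputNode("uvlayout")
# Set the uv layout packing method to "UDIM attribute" in the UI;
# the parm token varies between uvlayout versions.
```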

Ricoh Theta for image acquisition in VFX by Xuan Prada

This is a very quick overview of how I use my tiny Ricoh Theta for lighting acquisition in VFX. I always use one of my two traditional setups for capturing HDRIs and bracketed textures, but on top of that I use a Theta as backup. Sometimes, if I don't have enough room on set, I might only use a Theta, but this is not ideal.

There is no way to manually control this camera, shame! But using an iPhone app like Simple HDR, at least you can do bracketing. You still can't fully control it, but it is something.

As always when capturing any camera data, you will need a Macbeth chart.

For HDRI acquisition it is always extremely important to have good references for your lighting distribution, density, temperature, reflection and shadow. Spheres are a must.

For this particular exercise I'm using a mini Manfrotto tripod to place my camera approximately 50cm above the ground.

This is the equirectangular map that I got after merging the 7 brackets generated automatically with the Theta. There are 2 major disadvantages if you compare this panorama with the ones you typically get using a traditional DSLR + fisheye setup:

  • Poor resolution, artefacts and aberrations
  • Poor dynamic range

I use Merge to HDR Pro in Photoshop to merge my brackets. It is very fast and it actually works. But never use Photoshop to work with data images.

Once the panorama has been stitched, move to Nuke to neutralise it.

Start by neutralising the plate: linearization first, followed by white balance.
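
A minimal Nuke Python sketch of the white-balance half, assuming a grey-chip RGB sampled from the Macbeth chart; linearization itself is handled by the Read node's colorspace.

```python
import nuke

chip = (0.21, 0.18, 0.16)   # hypothetical grey-chip sample from the chart
grade = nuke.nodes.Grade()
# Scale each channel so the chip lands on a neutral 18% grey
grade["white"].setValue([0.18 / c for c in chip] + [1.0])
```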

Copy the grading from the plate to the panorama.

Save the maps, go to Maya and create an IBL setup.
The dynamic range in the panorama is very low compared with what we would have if we were using a traditional DSLR setup. This means that our key light is not going to work very well, I'm afraid.

If we compare the CG against the plate, we can easily see that the sun is not working at all.

The best way to fix this issue at this point is to go back to Nuke and remove the sun from the panorama. Then crop it and save it as an HDR texture to be mapped onto a CG light.

Map the HDR texture to an area light in Maya and place it accordingly.

Now we should be able to match the key light much better.

Final render.

Quick and dirty free IBLs by Xuan Prada

These are some of my spare IBLs that I shot a while ago using a Ricoh Theta. They contain around 12EV of dynamic range. The resolution is not great, but it still holds up for look-dev and lighting tasks.

Feel free to download the equirectangular .exrs here.
Please do not use in commercial projects.

Cafe in Barcelona.

Cafe in Barcelona render test.

Hobo hotel.

Hobo hotel render test.

Campus i12 green room.

Campus i12 green room render test.

Campus i12 class.

Campus i12 class render test.

Chiswick Gardens.

Chiswick Gardens render test.

Mixing displacement and multiple bump maps by Xuan Prada

A very common situation when look-deving an asset is combining various displacement and bump maps. Having them in separate texture maps gives you the flexibility to play with them and make very fast changes, without going back to Mari or ZBrush and wasting a lot of time going back and forth until you reach the right look. You also want to keep your look-dev team busy, of course.

A while ago I showed you how to combine different displacement maps coming from different sources; today I want to show you how to combine multiple bump maps with different scales and values. This is a very common situation in VFX: I would say every single asset has at least one displacement layer and one bump layer, but usually you will have more than one. This is how you can combine multiple bump layers in Maya/Arnold; a Python sketch follows the list below.

  • The first thing I'm going to do is add a displacement layer. To keep this post simple I'm using a single displacement layer; refer back to the tutorial mentioned above to mix more than one displacement layer.
  • Now connect your first bump map layer as usual, connecting the red channel to the bump input of the shader.
  • In the hypershade, create a file texture for your second bump layer, in this case a low frequency noise.
  • Create an average node and two multiply nodes.
  • Connect the red channel of the first bump layer to input 1 of a multiply node. Control the intensity of this layer with input 2 of the multiply node.
  • Repeat the previous step with the second bump layer.
  • Connect the outputs of both multiply nodes to the inputs 3D[0] and 3D[1] of the average node.
  • It is extremely important to leave the bump depth at 1 in order to make this work.
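
Here is a minimal Maya Python sketch of that graph; the file texture names are assumptions and the intensity values are just examples.

```python
import maya.cmds as cmds

# Two file textures assumed to exist already: bump1_file (high frequency)
# and bump2_file (low frequency noise)
mult1 = cmds.shadingNode("multiplyDivide", asUtility=True)
mult2 = cmds.shadingNode("multiplyDivide", asUtility=True)
avg = cmds.shadingNode("plusMinusAverage", asUtility=True)
cmds.setAttr(avg + ".operation", 3)  # 3 = average

cmds.connectAttr("bump1_file.outColorR", mult1 + ".input1X")
cmds.setAttr(mult1 + ".input2X", 1.0)  # intensity of the first bump layer
cmds.connectAttr("bump2_file.outColorR", mult2 + ".input1X")
cmds.setAttr(mult2 + ".input2X", 0.5)  # intensity of the second bump layer

cmds.connectAttr(mult1 + ".outputX", avg + ".input3D[0].input3Dx")
cmds.connectAttr(mult2 + ".outputX", avg + ".input3D[1].input3Dx")

# Feed the averaged result into the shader through a bump2d, leaving bump depth at 1
bump = cmds.shadingNode("bump2d", asUtility=True)
cmds.setAttr(bump + ".bumpDepth", 1.0)
cmds.connectAttr(avg + ".output3Dx", bump + ".bumpValue")
```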