Hello,
I just published a video on my Patreon about Houdini’s window box.
It covers how to use the new window box system in Houdini, one of those tools I've been using for many years at different VFX studios, but which now works out of the box with Houdini and Karma. I hope you find it useful and can start using it in your projects soon!
All the info on my Patreon site.
Houdini Solaris and Katana. Custom attributes /
This is a quick video showcasing how to use custom attributes in Houdini Solaris for USD scattering systems. The USD layer will be exported to Katana to do procedural look-dev using the exported custom parameters.
This technique will be explored in depth in my upcoming video about Houdini Solaris and Katana interoperability.
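To give a flavour of the idea, here is a tiny, hypothetical pxr.Usd sketch of authoring a custom primvar on scattered points so a downstream package like Katana can pick it up for procedural look-dev. All paths, names and values are placeholders, not the setup from the video.

```python
from pxr import Usd, UsdGeom, Sdf

# Author a small USD layer with scattered points carrying a custom primvar.
stage = Usd.Stage.CreateNew('/tmp/scatter_demo.usda')
points = UsdGeom.Points.Define(stage, '/scatter/points')
points.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (2, 0, 0)])

# The custom attribute: one int per point, used downstream to pick variations.
primvars = UsdGeom.PrimvarsAPI(points.GetPrim())
variation = primvars.CreatePrimvar('variation', Sdf.ValueTypeNames.IntArray,
                                   UsdGeom.Tokens.vertex)
variation.Set([0, 1, 2])

stage.GetRootLayer().Save()
```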
Subscribe to my Patreon to have full access to my entire library of visual effects training.
www.patreon.com/elephantvfx
VDB as displacement /
The sphere is the surface that needs to be deformed by the presence of the cones. The surface can't be modified in any way; we need to stick to its topology and shape. We want to do this dynamically, using just a displacement map, and of course we don't want to sculpt the details by hand, as the animation might change at any time and we would have to re-sculpt everything again.
The cones are growing from frame 0 to 60 and moving around randomly.
I'm adding a for-each connected piece block and, inside the loop, an edit node to increase the volume of the original cones a little bit.
Just select all in the group field and set the transform space to local origin by connectivity, so each cone scales from its own center.
Add a VDB from polygons node, set it to distance VDB and add some resolution; it doesn't need to be super high.
Then I just cache the VDB sequence.
Create an attribute from volume node to transfer the values from the VDB cache into the sphere's Cd attribute.
To visualize it better you can just add a visualizer mapped to the attribute.
In shading, create a user data float node that reads the Cd attribute and connect it to the displacement.
If you are looking for the opposite effect, you can easily invert the displacement map.
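To give an idea of how these pieces connect, here is a minimal Python sketch of the SOP chain using the hou module. The object paths and voxel size are placeholders, the for-each/edit growth step and the VDB caching are omitted, and node or parameter names may vary slightly between Houdini versions.

```python
import hou

# Build a small SOP network: import the cones, rasterize them to a distance
# VDB, then transfer the volume values onto the sphere as a point attribute.
geo = hou.node('/obj').createNode('geo', 'vdb_displacement')

cones = geo.createNode('object_merge', 'import_cones')
cones.parm('objpath1').set('/obj/cones')            # placeholder path

vdb = geo.createNode('vdbfrompolygons', 'cones_vdb')
vdb.setInput(0, cones)
vdb.parm('voxelsize').set(0.05)                     # moderate resolution is enough

sphere = geo.createNode('object_merge', 'import_sphere')
sphere.parm('objpath1').set('/obj/sphere')          # placeholder path

xfer = geo.createNode('attribfromvolume', 'vdb_to_Cd')
xfer.setInput(0, sphere)                            # geometry receiving the attribute
xfer.setInput(1, vdb)                               # distance VDB driving it

xfer.setDisplayFlag(True)
geo.layoutChildren()
```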
Detailing digi doubles using generic humans /
This is probably the last video of the year, let's see about that.
This time it's all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid would have perfect topology, great uv mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.
This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.
In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. Once we have that, we will re-project all the details from the sculpt and apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.
The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.
All the info on my Patreon site.
Thanks!
Xuan.
Lookdev rig for Houdini /
Hello patrons,
In this video I show you how to create a production-ready lookdev rig for Houdini, or what I like to call a single-click render solution for your lookdevs.
It is in a way similar to the one we did for Katana a while ago, but using all the power and flexibility of Houdini's HDA system.
Talking about HDAs, I will also introduce the new HDA features that come with Houdini 18.5.633, which I think are really nice, especially for smaller studios that don't have enough resources to build a pipeline around HDAs.
By the end of this video you should be able to build your own lookdev tool and adapt it to the needs of your projects.
We'll be working with the latest versions of Houdini, Arnold and ACES.
As usual, the video starts with some slides where I try to explain why building a lookdev rig is a must before you do any work on your project. Don't skip it; I know it is boring, but it is very much needed. Downloadable material will be attached in the next post.
Thank you very much for your support!
Head over to my Patreon feed.
Xuan.
Simple spatial lighting /
Hello patrons,
I'm about to release my new video "Simple spatial lighting". Here is a quick summary of everything we will be covering. The length of this video is about 3 hours.
- Differences between HDRIs and spatial lighting.
- Simple vs complex workflows for spatial lighting.
- Handling ACES in Nuke, Mari and Houdini.
- Dealing with spherical projections.
- Treating HDRIs and practical lights.
- Image based modelling.
- Baking textures in Arnold/Maya.
- Simple look-dev in Houdini/RenderMan.
- Spatial lighting setup in Houdini/RenderMan.
- Slap comp in Nuke.
Thanks,
Xuan.
Head over to my Patreon site to access this video and many more.
Arnold interoperability /
In this video I will guide you through Arnold operators in both Maya and Houdini to show you advanced methods for creating looks, and potentially anything Arnold related. Working with Arnold operators can be very beneficial in your visual effects pipeline; among other things, you will be able to transfer "for free" pretty much anything from one 3D package to another, in this case from Maya to Houdini and vice versa.
These days it is very common to create assets in a traditional 3D package like Maya and then move to a scene assembler like Houdini or Katana to do shots. With this workflow you will be able to do so in a very clean, tidy and efficient way.
On top of that, I'm going to show you how to create look files that can easily be exported and used in lighting shots, in either Maya or Houdini. You will also be able to override looks, version looks in Shotgun and much more.
This is a two-plus-hour video tutorial posted on my Patreon feed.
Thanks a lot for your support.
Xuan.
Cryptomatte in Katana and Arnold /
This post is mainly for my Patreon supporters. Some of them are having issues while setting up cryptomatte AOVs. This is how you do it.
Create a material node.
Add an AOV shader -> cryptomatte.
If you are using the Arnold AOVs supertool, add all the cryptomatte AOVs that you need.
In the Arnold global settings, add the cryptomatte shader to the AOV shaders.
If you are not using the Arnold AOVs supertool:
Create an Arnold output channel define node.
Set it to crypto_material.
Set its type to RGBA.
Add a render output define node.
Set the channel to crypto_material.
Repeat the same steps to create crypto_object and crypto_asset. If you prefer to script the setup, see the sketch below.
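For reference, here is a minimal, hypothetical NodegraphAPI sketch of the non-supertool route. The node type and parameter names are assumptions based on the steps above and may differ between Katana/KtoA versions, so the script only sets the parameters it can find.

```python
from Katana import NodegraphAPI

def set_if_exists(node, parm_path, value):
    # Parameter paths vary between versions; only set the ones that exist.
    parm = node.getParameter(parm_path)
    if parm:
        parm.setValue(value, 0)

root = NodegraphAPI.GetRootNode()

for aov in ('crypto_material', 'crypto_object', 'crypto_asset'):
    # Arnold output channel definition (assumed node type name)
    channel = NodegraphAPI.CreateNode('ArnoldOutputChannelDefine', root)
    channel.setName(aov)
    set_if_exists(channel, 'name', aov)
    set_if_exists(channel, 'type', 'RGBA')

    # Standard render output pointing at that channel
    output = NodegraphAPI.CreateNode('RenderOutputDefine', root)
    output.setName(aov + '_output')
    set_if_exists(output, 'outputName', aov)
    set_if_exists(output, 'args.renderSettings.outputs.outputName.channel.value', aov)
```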
Wade Tillman - spec job /
This is just a spec job for Wade Tillman’s character on HBO’s Watchmen. After watching the series I enjoyed the work done by Marz VFX on Tillman’s mask so much that I wanted to do my own. Unfortunately, I don’t have much time, so creating this asset seemed like something doable in a few hours over the weekend. It is just a simple test; it would require a lot more work to become a production-ready asset, of course. I’m just playing the role of a visual effects designer here, trying to come up with an idea of how to implement this mask in a VFX production pipeline.
I’m planning to do more work in the future with this asset, including mocap, cloth simulation, proper animated HDRI lighting, etc. I also changed the design they did on the series. Instead of having the seams in the middle of the head running from ear to ear, I placed my seams in the middle of the face, dividing the face in two. I believe the one they did for the real series works much better, but I just wanted to try something different. I will definitely do another test mimicking the other design.
So far I have tried just one design in two different stages: the mask covering the entire head, and the mask pulled up to reveal the mouth and chin of the character, as seen many times in the series. I also tried a couple of looks, one more mirror-like with small imperfections in the reflections, and another one rougher. I believe they tried similar looks, but in the end they went with the one with more pristine reflections.
I think it would be interesting to see another test with different types of materials, introducing some iridescence would also be fun. I will try something else next time.
Capturing lighting and reflections to light this asset properly has to be the most exciting part of this task. That is something I haven’t done yet, but I will try it as soon as I can. It is pretty much like having a mirror ball in the shots. Capturing animated panoramic HDRIs is definitely the way to go, or at least the simplest one. Let’s try it next time.
Finally, I did a couple of cloth simulation tests for both stages of the mask. Just playing a bit with vellum in Houdini.
Just trying different looks here for both stages of the mask.
Simple cloth simulation test. From t-pose to anim pose and walk cycle.
Creases from Maya to Houdini /
This is a quick tip on how to take crease information from Maya to Houdini to be rendered with Arnold. If you are like me and use Houdini as a scene assembler, this is something you will have to deal with sooner or later.
In Maya, I have a simple cube with creases, on the right side you can see how it looks once subdivided twice.
Not only can you take crease information into Houdini, you can also export subdivision settings, and HtoA will interpret them automatically. Make sure you set the catclark subdivision type and 2 iterations, or whatever you need.
When exporting the alembic caches you need to include the Arnold parameters that take care of subdivision and creases. There is actually no extra parameter for creases; by including the subdivision parameters you already get the crease information.
Note that the Arnold parameters in Maya use the ar_ prefix, for example ar_subdiv_iterations, whereas in Houdini Arnold parameters don’t use the ar_ prefix. Because of that, make sure you export the parameters without the ar_ prefix. A minimal export sketch follows below.
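For illustration, here is a rough Maya Python sketch of such an export, assuming the mesh is called creaseCube and the Arnold attributes follow the ar_ naming above. Whether your exporter strips the ar_ prefix depends on your setup, so double-check the attribute names on the Houdini side.

```python
import maya.cmds as cmds

# Export the creased cube to alembic, including the Arnold attributes that
# carry the subdivision settings (which also bring the creases along).
job = (
    "-frameRange 1 1 -uvWrite -worldSpace "
    "-attrPrefix ar_ "                 # include every attribute starting with ar_
    "-root |creaseCube "               # placeholder path to the mesh
    "-file /tmp/creaseCube.abc"
)
cmds.AbcExport(j=job)
```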
All this can of course happen automatically in your pipeline when publishing assets. It actually should, to make artists’ lives easier and avoid mistakes.
That’s it. If you import the alembic cache in Houdini, both creases and subdivisions should render as expected. This information can be overridden in SOPs with Arnold parameters.
Mari 4.6 new features and production template /
Hello patrons,
I recorded a new video about the new features in Mari 4.6, released just a few weeks ago. I will also talk about some of the new features in Extension Pack 5, and finally I will show you the production template that I've been using lately to do all the texturing and pre-lookDev on many assets for film and TV projects.
This is the big picture of the topics covered in this video. The video will be about 2.5 hours long, and it will be published on my Patreon site.
- Mari 4.6 new features
- New material system explained in depth
- Material ingestion tool
- Optimization settings
- How and where to use geo channels
- New camera projection tools
- Extension pack 5 new features (or my most used tools)
- Production template for texturing and pre-lookDev
All the information on my Patreon feed.
Thanks for your support!
Xuan.
Lighting a full cg shot in Houdini, part 01 /
Part 01 of "Lighting a full cg shot in Houdini" is out.
In this first episode I go through everything you need to turn Houdini into a powerful scene assembler, especially focused on look-dev. I will go through other assembly capabilities and lighting/rendering in future videos.
In this episode we will cover:
- How to organize and prepare assets in Maya to be used in Houdini for assembly and render
- Good uv workflows for vfx and animation productions
- How to assemble multiple assets in Houdini in a scene assembly fashion
- Quick look at speed texturing in Substance Painter
- How to create digital assets and presets in Houdini to re-use in your projects
- Look-dev workflow in Houdini and Arnold
All the information on my Patreon feed.
Thanks for your support,
Xuan.
Katana Fastrack episode 04 /
Katana Fastrack episode 04 is already available.
In this episode, we will finish the Ant-Man lookDev by tweaking all the shaders and texture maps created in Mari.
Then we will do a very quick slap comp in Katana and Nuke to check that everything works as expected and looks good. We will do this by rendering the full motion range of Ant-Man's walk cycle. And finally, we will write a Katana look file to be used by the lighters in their shots.
Check it out on my Patreon feed.
Katana Fastrack episode 03 /
Episode 03 of my Katana series is out. We are going to be talking about expressions, macros and tools to take our look-dev template to the next level. Right after that, we will take a look at the texture channels that I painted in Mari for this character and then we will start the look-dev of Ant-Man.
We divide the look-dev into different stages; the first one is blocking, and we are going to spend quite a bit of time working on it today.
All the info on my Patreon feed.
Katana Fastrack episode 02 /
Katana Fastrack episode 02 is now available for all my patrons. I cover how to create a proper look-dev template to be used in visual effects. Everything will be set up from scratch, and at the end of this lesson we will have a Katana script ready to be used. In lesson 03 we'll use this script to do all the look-dev for Ant-Man.
In Katana Fastrack episode 02 you will learn:
- How to create master look files
- How to use live groups to create light rigs
- How to create a look-dev template for production
All the info on my Patreon feed.
Katana Fastrack episode 01 /
Here it is, the very first episode of my series "Katana Fastrack", available to all my exclusive patrons.
This is an introductory video where I'm going to give you an overview of what this course is all about. I hope you like it, it is going to be a lot of fun!
You will learn:
- Where Katana fits in the pipeline
- The most important concepts of Katana's workflow
- How to prepare assets for Katana
- The importance of look-dev recipes
- How to create a very basic recipe
Check it out on my Patreon feed.
Katana, constraint lights to an alembic geometry /
One of the most common situations while lighting a shot is attaching a CG light in your scene assembler to an alembic cache exported from Maya. This is very simple to do in Katana; let’s have a look at it.
I’m using this simple animation of a car spinning around.
In most cases you need an object within the alembic cache that has the animation baked into it. The usual approach is to use a locator. To do so, snap it onto one of the car’s light geometries and parent-constrain it to the master control of the car. Then bake the animation of the locator and export it with the rest of the alembic cache to Katana.
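For reference, here is a minimal Maya Python sketch of that locator setup; headlight_L_geo and car_master_ctrl are placeholder names for the light geometry and the car's master control.

```python
import maya.cmds as cmds

# Create a locator, snap it to the headlight geometry, constrain it to the
# car's master control, and bake its animation for the alembic export.
loc = cmds.spaceLocator(name='headlight_L_loc')[0]
cmds.delete(cmds.pointConstraint('headlight_L_geo', loc))   # snap into place
cmds.parentConstraint('car_master_ctrl', loc, maintainOffset=True)

start = cmds.playbackOptions(q=True, minTime=True)
end = cmds.playbackOptions(q=True, maxTime=True)
cmds.bakeResults(loc, time=(start, end), simulation=True)
```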
In Katana, create a gafferThree node but do not place any lights yet. It is better to do the constraints first; otherwise you might have to deal with offset issues later on.
Use a parentChildConstraint node, pointing the basePath to the gaffer node and the target to the car’s locator.
Now place both headlights according to the model of the car. If you press play, they should follow the animation of the car perfectly.
In case you forget to do the parent constraint before adding lights to the gaffer, you might have to control the offset and compensate for it. To actually see the values, you can add a constraintResolve and a transformEdit node to check the transformations.
IBL pack 02 /
Houdini as scene assembler part 05. User attributes /
Sometimes, especially during the layout/set dressing stage, artists have to define certain rules or patterns to compose a shot. For example, let’s say a football stadium: imagine that the first row of seats is blue, the next row is red and the third row is green.
There are so many ways of doing this, but let’s say that we have thousands of seats and we know the colors that they should have. Then it is easy to make rules and patterns to allow total flexibility later on when texturing and look-deving.
In this example I’m using my favourite tool to explain 3D stuff, Lego figurines. I have 4 rows of Lego heads and I want each of those to have a different Lego face. But at the same time I want to use the same shader for all of them. I just want to have different textures. By doing this I will end up with a very simple and tidy setup, and iteration won’t be a pain.
Doing this in Maya is quite straightforward and I explained the process some time ago on this blog. What I want to illustrate now is another common situation that we face in production. Layout artists and set dressers usually do their work in Maya and then pass it on to look-dev artists and lighting TDs, who usually use scene assemblers like Katana, Clarisse, Houdini or Gaffer.
In this example I want to show you how to handle user attributes from Maya in Houdini to create texture and shader variations.
In Maya, select all the shapes and add a custom attribute (a minimal sketch of this setup follows at the end of this post).
Call it “variation”.
Data type: integer.
Default value: 0.
Add a different value to each Lego head. Add as many values as texture variations you need.
Export all the Lego heads as alembic; remember to include the attributes that you want to export to Houdini.
Import the alembic file in Houdini.
Connect all the texture variations to a switch node.
This can also be done with shaders, following exactly the same workflow.
Connect a user data int node to the index input of the switch node and type the name of your attribute.
Finally, the render comes out as expected without any further tweaks: just one shader that automatically picks up different textures based on the layout artist’s criteria.
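Here is the Maya-side sketch mentioned above, assuming the Lego head shapes are selected; the modulo by 4 simply mimics the four rows of heads in this example.

```python
import maya.cmds as cmds

# Add an integer "variation" attribute to every selected shape and give each
# one a value; this value later drives the user data int / switch in Houdini.
shapes = cmds.ls(selection=True, dag=True, shapes=True, noIntermediate=True)

for i, shape in enumerate(shapes):
    if not cmds.attributeQuery('variation', node=shape, exists=True):
        cmds.addAttr(shape, longName='variation', attributeType='long', defaultValue=0)
    cmds.setAttr(shape + '.variation', i % 4)   # four texture variations here
```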
Introduction to Gaffer 04 /
Introduction to Gaffer part 04, where I talk mostly about volumes. I also mention a few things about good practices while look-deving, “fetching” textures and whatnot.