Hello Patrons,
This is the first of two parts of my Houdini crowds training.
The first video is an introduction to the system. I cover everything from agent creation, motion capture clips, and prop attachment to configuring constraints for foot contact, transition clips, and rendering in Solaris with Karma.
I also explain the plans for the second video, which will be a bit more advanced and will touch on some important topics not covered in the first one.
I hope you like it.
Head to my Patreon to get all of this.
Thanks,
Xuan.
Houdini's window box /
Hello,
I just published on my Patreon a video about Houdini’s window box.
This video covers how to use the new window box system in Houdini, one of those tools I've been using for many years at different VFX studios, but which now works out of the box with Houdini and Karma. I hope you find it useful and can start using it in your projects soon!
All the info on my Patreon site.
Solaris Katana interoperability part 2/2 /
Hello patrons,
In this video we will finish this mini-series about Solaris and Katana interoperability.
I'll be covering the topics that I didn't have time to cover in the first video, including:
- Manual set dressing from Solaris to Katana.
- Hero instancing from Solaris to Katana.
- Background instancing and custom attributes from Solaris to Katana.
- Dummy crowds from Solaris to Katana.
- Everything using USD of course.
There are many more things that could be covered when it comes to Solaris and Katana interoperability; I'm pretty sure I'll be covering some of them in future USD videos.
All the info on my Patreon.
Deep compositing - going deeper /
Hello patrons,
This is a continuation of the intro to deep compositing, where we go deeper into compositing workflows using depth.
I will show you how to properly use deep information in a flat composition, to work fast and efficiently with all the benefits of depth data but none of the caveats.
The video is more than 3 hours long and we will explore:
- Quick recap of pros and cons of using deep comp.
- Quick recap of basic deep tools.
- Setting up render passes in a 3D software for deep.
- Deep holdouts.
- Organizing deep comps.
- How to use AOVs in deep.
- How to work with precomps.
- Creating deep templates.
- Using 3D geometry in deep.
- Using 2D elements in deep.
- Using particles in deep.
- Generating Zdepth from deep information.
Thanks for your support!
Head over to my Patreon for all the info.
Xuan.
VDB as displacement /
The sphere is the surface that needs to be deformed by the presence of the cones. The surface can't be modified in any way; we need to stick to its topology and shape. We want to do this dynamically, using just a displacement map, but of course we don't want to sculpt the details by hand, as the animation might change at any time and we would have to re-sculpt everything again.
The cones are growing from frame 0 to 60 and moving around randomly.
I'm adding a For-Each Connected Piece loop and, inside it, an Edit node to increase the volume of the original cones a little bit.
Just select all in the group field, and set the transform space to local origin by connectivity, so each cone scales from its own center.
Add a VDB from Polygons node, set it to distance VDB, and add some resolution; it doesn't need to be super high.
Then I just cache the VDB sequence.
Create an Attribute from Volume node to pass the Cd attribute from the VDB cache to the sphere.
To visualize it better you can just add a visualizer mapped to the attribute.
In shading, create a user data float, read the Cd attribute and connect it to the displacement.
If you are looking for the opposite effect, you can easily invert the displacement map.
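To make the mapping concrete, here is a plain-Python sketch of what the Attribute from Volume step effectively does; the falloff band and the function name are my own illustrative assumptions, not part of the Houdini setup itself.

```python
# Illustrative sketch (not Houdini code): map the signed distance sampled
# from the cones' distance VDB to a 0-1 value stored in Cd, which the
# shader then reads as displacement. The falloff band is an assumption.
def distance_to_displacement(signed_distance, band=0.5):
    """1.0 at or inside the cones, fading to 0.0 over `band` units outside."""
    t = 1.0 - signed_distance / band
    return max(0.0, min(1.0, t))

for d in (-0.2, 0.0, 0.25, 0.5, 1.0):
    print(f"distance {d:+.2f} -> Cd/displacement {distance_to_displacement(d):.2f}")
```

Inverting the effect, as mentioned above, is then just `1.0 - value`.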
Detailing digi doubles using generic humans /
This is probably the last video of the year; let's see about that.
This time it's all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid will have perfect topology, great UV mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.
This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.
In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. Once we have that, we will re-project all the details from the sculpt and apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.
The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.
All the info on my Patreon site.
Thanks!
Xuan.
Lookdev rig for Houdini /
Hello patrons,
In this video I show you how to create a production ready lookdev rig for Houdini, or what I like to call, a single click render solution for your lookdevs.
It is in a way similar to the one we did for Katana a while ago, but using all the power and flexibility of Houdini's HDA system.
Speaking of HDAs, I will be introducing the new HDA features that come with Houdini 18.5.633, which I think are really nice, especially for smaller studios that don't have enough resources to build a pipeline around HDAs.
By the end of this video you should be able to build your own lookdev tool and adapt it to the needs of your projects.
We'll be working with the latest versions of Houdini, Arnold and ACES.
As usual, the video starts with some slides where I try to explain why building a lookdev rig is a must before you do any work on your project. Don't skip it; I know it is boring, but it is very much needed. Downloadable material will be attached in the next post.
Thank you very much for your support!
Head over to my Patreon feed.
Xuan.
Real time rendering for vfx, episode 04 /
Happy New Year!
Real time rendering for vfx episode 04 is here!
This is a long one, around 4 hours split in two different videos, both of them available already for you.
In these two videos I cover a lot of things related to lighting and rendering in Unreal. We will cover all the rendering methods: rasterization, raytracing, hybrid rendering and path tracing.
Some of the topics covered in this video are:
- Rendering methods in Unreal.
- Lightmass.
- Types of lights.
- Volumetric lighting.
- Modulate lighting.
- Global illumination.
- Mesh lights.
- Reflection methods.
- Post processing volumes.
- Particle lighting.
- Blueprints for lighting.
- Light function.
- Core components of a lighting scene.
- Neutral lighting conditions.
- Rasterization.
- Raytracing.
- Hybrid methods.
- Path tracing.
All the info on my Patreon.
Intro to LOPs and USD /
My introduction to Houdini Solaris LOPs and USD is already available on my Patreon feed.
These are the topics that we are going to be covering.
- Introduction to USD and LOPs
- Asset creation workflow
- Simple assets
- Complex assets
- Manual layout and set dressing
- Using instances in LOPs
- Set dressing using information from Maya
- Using department inputs/outputs
- Publishing system
- Setup for sequence lighting
- Random bits
This introduction is around 4 and a half hours long.
Check it out here.
Introduction to Redshift - little project /
My Patreon series “Introduction to Redshift for VFX” is coming to an end. We have already discussed in depth the most basic features, like global illumination and sampling, and I shared with you my own “cheat sheets” for dealing with GI and sampling. We also talked about Redshift lighting tools, built-in atmospheric effects, and cameras. In the third episode we talked about camera mapping, surface shaders, texturing, displacement maps from Mari and ZBrush, and how to ingest Substance Painter textures, and did a few surfacing exercises.
This should give you a pretty good base to start your projects in Houdini and Redshift, or whatever 3D app you want to use with Redshift.
The next couple of videos in this series are going to be dedicated to doing a little project with Redshift from scratch to finish. We are going to be able to cover more features of the render engine and also discover broader techniques that hopefully you will find interesting. Let me explain what all of this is about.
We’ll be doing the simple shot below from start to finish. It is quite simple and graphic, I know, but to get there I’m going to explain many things that you are going to use quite a lot in visual effects shots, more than we actually end up using in the shot.
We are going to start by having a quick introduction to SpeedTree Cinema 8 to see how to create procedural trees. We will create from scratch a few trees that later will be used in Houdini. Once we have all the models ready, we will see how to deal with SpeedTree textures to use them in Redshift in an ACES pipeline.
These trees will be used in Houdini to create reusable asset libraries and later converted to Redshift proxies for memory efficiency and scattering, and also to be easily picked up by lighting artists when working on shots.
With all these trees we will take a look at how to create procedural scattering systems in Houdini using Redshift proxies. We will create multiple configurations depending on our needs. We are also going to learn how to ingest Quixel Megascans assets, again preparing them to work with ACES and creating an additional asset for our library. We will also re-use the scatterers made for trees to scatter rocks and pebbles.
To scatter all of that we will use Houdini’s height fields as a base. For this particular shot, we are going to make a very simple ground with height fields and Megascans, but I’m going to give you a pretty comprehensive introduction to height fields, way more than what you see in the final shot.
Once all the natural assets are created, we’ll be looking at the textures and look-dev of the character. Yes, there is a character in the shot, you don’t see much but hey, this is what happens in VFX all the time. You spend months working on something barely noticeable. We will look into speed texturing and how to use Substance Painter with Redshift.
Now that we are dealing with characters, what if I show you how to “guerrilla” deal with motion capture? So you can grab some random motion capture from any source and apply it to your characters. Look at the clip below, nothing better than a character moving to see if the look actually works.
It looks better when moving, doesn’t it? There is no cloth simulation btw, it is a Redshift course, we are not going that far! Not yet.
Any environment work, of course, needs some kind of volumetrics. They create nice lighting effects, give a sense of scale, look good, and make for terrible render times. We need to know how to deal with different types of volumetrics in Redshift, so I’m going to show you how to create a couple of different atmospherics using Houdini’s volumes. Quite simple but effective.
Finally, we will combine everything together in a shot. I will show you how to organize everything properly using bundles and smart bundles to configure your render passes. We will take a look at how Redshift deals with AOVs, render settings, etc. Finally, we will put everything together in Nuke to output a nice render.
Just to summarize, this is what I’m planning to show you while working on this little project. My guess is that it will take me a couple of sessions to deliver all this video training.
Speed Tree introduction and tree creation
ACES texture conversion
ACES introduction in Houdini and Redshift
Creation of tree assets library in Houdini
Megascans ingestion
Character texturing and look-dev
Guerrilla techniques to apply mocap
Introduction to Houdini’s height fields
Redshift proxies
Scattering systems in Houdini
Volume creation in Houdini for atmospherics
Scene assembly
Redshift render settings
Compositing
Something that I probably forgot
All of this and much more training will be published on my Patreon. Please consider supporting me.
Thanks,
Xuan.
Arnold interoperability /
In this video I will guide you through Arnold operators in both Maya and Houdini to show you advanced methods for creating looks, and potentially anything Arnold related. Working with Arnold operators can be very beneficial in your visual effects pipeline; among other things, you are going to be able to transfer "for free" pretty much anything from one 3D package to another, in this case from Maya to Houdini and vice versa.
These days it is very common to work in a traditional 3D package like Maya while creating assets and then moving to a scene assembler like Houdini or Katana to do shots. With this workflow you are going to be able to do so in a very clean, tidy and efficient way.
On top of that, I'm going to show you how to create look files that can be easily exported for use in lighting shots, in either Maya or Houdini. You are also going to be able to override looks, version looks in Shotgun and many more things.
This is a two plus hours video tutorial posted on my Patreon feed.
Thanks a lot for your support.
Xuan.
Wade Tillman - spec job /
This is just a spec job for Wade Tillman’s character on HBO’s Watchmen. After watching the series I enjoyed the work done by Marz VFX on Tillman’s mask so much that I wanted to do my own. Unfortunately, I don’t have much time, so creating this asset seemed like something doable in a few hours over the weekend. It is just a simple test; it will require a lot more work to be a production-ready asset, of course. I’m just playing the role of a visual effects designer here, trying to come up with an idea of how to implement this mask into the VFX production pipeline.
I’m planning to do more work in the future with this asset, including mocap, cloth simulation, proper animated HDRI lighting, etc. I also changed the design that they did on the series. Instead of having the seams in the middle of the head from ear to ear, I placed my seams in the middle of the face, dividing it in two. I believe the one they did for the real series works much better, but I just wanted to try something different. I will definitely do another test mimicking the other design.
So far I have just tried one design in two different stages: the mask covering the entire head, and the mask pulled up to reveal the mouth and chin of the character, as seen many times in the series. I also tried a couple of looks, one more mirror-like with small imperfections in the reflections, and another one rougher. I believe they tried similar looks but in the end, they went with the one with more pristine reflections.
I think it would be interesting to see another test with different types of materials, introducing some iridescence would also be fun. I will try something else next time.
Capturing lighting and reflections to light this asset properly has to be the most exciting part of this task. That is something that I haven’t done yet, but I will try as soon as I can. It is pretty much like having a mirror ball in the shots. Capturing animated panoramic HDRIs is definitely the way to go, or at least the simplest one. Let’s try it next time.
Finally, I did a couple of cloth simulation tests for both stages of the mask. Just playing a bit with vellum in Houdini.
Just trying different looks here for both stages of the mask.
Simple cloth simulation test. From t-pose to anim pose and walk cycle.
Creases from Maya to Houdini /
This is a quick tip on how to take crease information from Maya to Houdini to be rendered with Arnold. If you are like me and use Houdini as a scene assembler, this is something you will have to deal with sooner or later.
In Maya, I have a simple cube with creases, on the right side you can see how it looks once subdivided twice.
Not only can you take crease information into Houdini, you can also export subdivision information, and HtoA will interpret it automatically. Make sure you set the catclark subdivision type and 2 iterations, or whatever you need.
When exporting the alembic caches, you need to include the Arnold parameters that take care of subdivision and creases. Actually, there is no extra parameter for creases; by including the subdivision parameters you already get the crease information.
Note that the Arnold parameters in Maya carry an ar_ prefix, for example ar_subdiv_iterations, but in Houdini Arnold parameters don’t use the ar prefix. Because of that, make sure you export the parameters without the ar prefix.
All of this can, of course, happen automatically in your pipeline while publishing assets. It actually should, to make artists’ lives easier and avoid mistakes.
That’s it, if you import the alembic cache in Houdini both creases and subdivisions should render as expected. This information can be overwritten in sops with arnold parameters.
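As a sketch of that renaming step, this hypothetical Python helper shows the ar_ prefix being stripped before export; the helper and the attribute list are illustrative, not part of any publishing tool.

```python
# Hypothetical helper illustrating the prefix rule: Maya-side Arnold
# attributes carry an "ar_" prefix, while Houdini expects the bare names.
def strip_ar_prefix(attr_names):
    """Return the attribute names as Houdini/HtoA expects them."""
    return [n[3:] if n.startswith("ar_") else n for n in attr_names]

maya_attrs = ["ar_subdiv_iterations", "ar_subdiv_type", "variation"]
print(strip_ar_prefix(maya_attrs))
# → ['subdiv_iterations', 'subdiv_type', 'variation']
```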
Render mask in HtoA /
This is how to setup a render mask, or render patch, or whatever you want to call it, in Houdini using Arnold.
Render patches are generally used when a high cost render needs a fix that only affects a small portion of the frame, or when most of the frame is going to be covered by a foreground plate.
In these scenarios there is no need to waste render time and render the whole frame, but just what is needed to finalize the shot.
This is the scene that I’m going to use for this example. Let’s pretend that we have already rendered the full 4K frame range of this shot. All of a sudden, we need to make some changes to the rubber toy on screen left.
The best way to create a render mask is in Nuke. You can use an old render as a template to make sure everything you need in the frame is covered by the mask. Rotopaint nodes are very useful, especially if you need to animate your mask.
Create a camera shader and connect the render mask to its filter map.
Connect the shader to the camera shader input of the camera, in the Arnold tab.
If you render now, only the mask area will be rendered, saving us a lot of render time.
There is one huge limitation that I don’t know how to fix, and I’m hoping someone can throw some light on this topic. If you are rendering with overscan, this won’t work nicely; let me show you why.
I’m rendering with a 120 pixel overscan, which I know is, generally speaking, a lot, but I just want to illustrate this example very clearly.
Now if you render the same overscan with the render mask applied, you will get a black border around the render. Below is the render patch comped over the full frame render.
I’m pretty sure the issue is related to the wrap options of the render mask. By changing the wrapping mode you can get away from this issue in some shots, but in an example like the one in this post, there is no fix by playing with the wrapping modes.
Any ideas?
You can definitely use the camera crop options and it will work perfectly fine, no issues at all. It is not as flexible as using your own textures, but it will do in most cases.
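To put the overscan problem in numbers, here is a small plain-Python sketch; the 1920-pixel frame width matches the 120-pixel overscan example above, and the function name is mine, not Arnold's.

```python
# Sketch: with overscan, pixels outside the original frame map to mask
# UVs outside [0, 1]; with the default wrap mode the texture lookup
# returns black there, which produces the black border described above.
def mask_u(pixel_x, frame_width):
    """Horizontal UV of a render pixel relative to the original frame."""
    return pixel_x / frame_width

width, overscan = 1920, 120
for x in (-overscan, 0, width, width + overscan):
    u = mask_u(x, width)
    status = "mask valid" if 0.0 <= u <= 1.0 else "outside mask: black"
    print(f"pixel {x:5d} -> u = {u:+.4f} ({status})")
```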
Katana Fastrack episode 06 /
Episode 06 of "Katana Fastrack" is already available.
In this episode we will light the first of two shots that I've prepared for this series: a simple full CG shot that will guide you through the lighting workflow in Katana using the lighting template that we created in the previous episode.
On top of that, we also cover the look-dev of the environment used in this shot.
We'll take a look at how to implement delivery requirements in our lighting template, such as specific resolutions based on production decisions.
We will also take a look at how to create and use interactive render filters, a very powerful feature in Katana. And finally, we will do the lighting and slapcomp of the first shot of this course.
All the info on my Patreon feed.
Lighting a full cg shot in Houdini, part 01 /
Part 01 of "Lighting a full cg shot in Houdini" is out.
In this first episode I go through everything you need to convert Houdini into a powerful scene assembler, especially focused on look-dev. I will go through other assembly capabilities and lighting/rendering in future videos.
In this episode we will cover:
- How to organize and prepare assets in Maya to be used in Houdini for assembly and render
- Good uv workflows for vfx and animation productions
- How to assemble multiple assets in Houdini in a scene assembly fashion
- Quick look at speed texturing in Substance Painter
- How to create digital assets and presets in Houdini to re-use in your projects
- Look-dev workflow in Houdini and Arnold
All the information on my Patreon feed.
Thanks for your support,
Xuan.
Nuke IBL templates /
Hello,
I just finished recording about 3 hours of content going through a couple of my Nuke IBL templates. The first one is all about clean-up and artifact removal. You know, how to get rid of chunky tripods, remove people from set, and whatnot. I will explain a couple of ways of dealing with these issues, both in 2D and in 3D using Nuke's powerful 3D system.
In the second template, I will guide you through the neutralization process, which includes both linearization and white balance. Some people know this process as technical grading, a very important step that lighting supervisors or sequence supervisors usually deal with before starting to light any VFX shot.
Footage, scripts and other material will be available to you if you are supporting one of the tiers with downloadable material.
Thanks again for your support! And if you like my Patreon feed, please help me spread the word; I would love to get to at least 50 patrons, and we are not that far away!
All the info on my Patreon feed.
Introduction to Redshift for VFX, episode 01 /
I'm starting a new training series for my Patreon feed, called "Intro to Redshift" for visual effects. I'm kind of learning Redshift and trying to figure out how to use it within my visual effects workflow, and I'll be sharing this trip with you. In this very first episode, I'll be talking about probably the most important topics around Redshift and the base for everything that will come later: global illumination and sampling.
I will go deep into these two topics, sharing with you the basic theory behind global illumination and sampling, and I will also share a couple of "cheat sheets" to deal with noise and GI easily in Redshift while rendering your shots.
Check the first video out on my Patreon feed.
Cheers,
Xuan.
Patreon: Houdini as scene assembler: Bundles, takes and rops /
In this video I talk about the usage of Houdini as scene assembler. This topic will be recurrent in future posts, as Houdini is becoming a very popular tool for look-dev, lighting, rendering and layout, among others.
In this case I go through bundles, takes and rops, and how we use them while lighting shots in visual effects projects.
You will learn:
- Bundles, takes, rops
- Alembic import
- Different ways of assigning materials
- Create look-dev collections
- Generate .ass files
- Create render layers
- Create quick slap comps
- Override materials
Check it out here.
Houdini as scene assembler part 05. User attributes /
Sometimes, especially during the layout/set dressing stage, artists have to decide on certain rules or patterns to compose a shot. Take, for example, a football stadium: imagine that the first row of seats is blue, the next row is red and the third row is green.
There are many ways of doing this, but let’s say that we have thousands of seats and we know the colors they should have. Then it is easy to make rules and patterns that allow total flexibility later on when texturing and look-deving.
In this example I’m using my favourite tool to explain 3D stuff, Lego figurines. I have 4 rows of Lego heads and I want each of those to have a different Lego face. But at the same time I want to use the same shader for all of them. I just want to have different textures. By doing this I will end up with a very simple and tidy setup, and iteration won’t be a pain.
Doing this in Maya is quite straightforward and I explained the process some time ago on this blog. What I want to illustrate now is another common situation that we face in production: layout artists and set dressers usually do their work in Maya and then pass it on to look-dev artists and lighting TDs, who usually use scene assemblers like Katana, Clarisse, Houdini or Gaffer.
In this example I want to show you how to handle user attributes from Maya in Houdini to create texture and shader variations.
In Maya select all the shapes and add a custom attribute.
Call it “variation”
Data type integer
Default value 0
Add a different value to each Lego head. Add as many values as texture variations you need to have
Export all the Lego heads as alembic; remember to include the attributes that you want to export to Houdini
Import the alembic file in Houdini
Connect all the texture variations to a switch node
This can be done also with shaders following exactly the same workflow
Connect a user data int node to the index input of the switch node and type the name of your attribute
Finally, the render comes out as expected without any further tweaks: just one shader that automatically picks up different textures based on the layout artist's criteria.
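In plain Python, the switch plus user data int setup behaves like the sketch below; the texture file names are made up for illustration, but the indexing logic is exactly what the switch node does with the "variation" attribute authored in Maya.

```python
# Illustration only: the switch node indexes a list of texture
# variations with the "variation" attribute exported from Maya.
texture_variations = [
    "lego_head_smile.tx",   # variation 0
    "lego_head_angry.tx",   # variation 1
    "lego_head_wink.tx",    # variation 2
    "lego_head_scared.tx",  # variation 3
]

def pick_texture(variation):
    """What the switch node outputs for a head with this attribute value."""
    return texture_variations[variation]

# Four rows of heads, one variation each, all sharing a single shader.
print([pick_texture(v) for v in (0, 1, 2, 3)])
```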