Hello,
I just published on my Patreon a video about Houdini’s window box.
This video is about how to use the new window box system in Houdini. It's one of those tools I've been using for many years at different VFX studios, but now it works out of the box with Houdini and Karma. I hope you find it useful and can start using it in your projects soon!
All the info on my Patreon site.
Solaris Katana interoperability part 2/2 /
Hello patrons,
In this video we will finish this mini series about Solaris and Katana interoperability.
I'll be covering the topics that I didn't have time to cover in the first video, including:
- Manual set dressing from Solaris to Katana.
- Hero instancing from Solaris to Katana.
- Background instancing and custom attributes from Solaris to Katana.
- Dummy crowds from Solaris to Katana.
- Everything using USD of course.
There are many more things that could be covered when it comes to Solaris and Katana interoperability, I'm pretty sure that I'll be covering some of them in future USD videos.
All the info on my Patreon.
Solaris Katana interoperability part 1/2 /
Hello patrons,
This is a small trailer for the video Houdini Solaris / Katana interoperability part 1/2.
The full video is published only for Patrons.
The whole thing is divided into two videos; the first one is around 2.5 hours, and hopefully next month I can publish the second video covering the rest of the topics.
In this first video we are covering:
- Working template in Solaris.
- Working template in Katana.
- Full assets from Solaris to Katana.
- Modifying/overriding looks in Katana.
- Geometry assets from Solaris to Katana.
- Publishing looks as KLF.
- Publishing looks as USD files.
- Full assembly USD files.
All the information on my Patreon.
Thanks!
Xuan.
Shooting HDRIs /
In my next Patreon video I will explain how to capture HDRIs for lighting and lookdev in visual effects. Then we will process all the data in Nuke and Ptgui to create the final textures. Finally everything will be tested using IBL in Houdini and Karma.
This video will be available on my Patreon very soon. Please consider becoming a subscriber to have full access to my library of VFX training.
https://www.patreon.com/elephantvfx
Thanks!
Houdini Solaris and Katana. Custom attributes /
This is a quick video showcasing how to use custom attributes in Houdini Solaris for USD scattering systems. The USD layer will be exported to Katana to do procedural look-dev using the exported custom parameters.
This technique will be explored in depth in my upcoming video about Houdini Solaris and Katana interoperability.
Subscribe to my Patreon to have full access to my entire library of visual effects training.
www.patreon.com/elephantvfx
VDB as displacement /
The sphere is the surface that needs to be deformed by the presence of the cones. The surface can't be modified in any way; we need to stick to its topology and shape. We want to do this dynamically using just a displacement map, but of course we don't want to sculpt the details by hand, as the animation might change at any time and we would have to re-sculpt everything again.
The cones are growing from frame 0 to 60 and moving around randomly.
I'm adding a for-each connected piece loop, and inside the loop an edit node to increase the volume of the original cones a little bit.
Just select all in the group field and set the transform space to local origin by connectivity, so each cone scales from its own center.
Add a VDB from polygons node, set it to distance VDB and add some resolution; it doesn't need to be super high.
Then I just cache the VDB sequence.
Create an attribute from volume node to transfer the values from the cached VDB into the Cd attribute on the sphere.
To visualize it better you can just add a visualizer mapped to the attribute.
In shading, create a user data float to read the Cd attribute and connect it to the displacement.
If you are looking for the opposite effect, you can easily invert the displacement map.
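To make the idea behind this setup explicit, here's a minimal Python sketch of the remap: a signed distance sampled from the VDB becomes a 0-1 displacement weight, with an optional invert for the opposite effect. The function name and the `max_distance` falloff parameter are my own illustration of the logic, not Houdini or VEX calls.

```python
def distance_to_displacement(signed_distance, max_distance=1.0, invert=False):
    """Map a signed distance sample to a 0-1 displacement weight.

    Points at or inside the cones (distance <= 0) get full displacement;
    points at max_distance or further get none, with a linear falloff between.
    """
    # Normalize: 0 at/inside the surface, 1 at max_distance and beyond.
    t = min(max(signed_distance / max_distance, 0.0), 1.0)
    weight = 1.0 - t  # strongest where the cones touch the sphere
    return 1.0 - weight if invert else weight

# A point right on a cone's surface displaces fully; a far point not at all.
print(distance_to_displacement(0.0))               # 1.0
print(distance_to_displacement(2.0))               # 0.0
print(distance_to_displacement(0.0, invert=True))  # 0.0
```

The `invert` flag is the one-line version of "inverting the displacement map" mentioned above.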
Detailing digi doubles using generic humans /
This is probably the last video of the year, let's see about that.
This time it's all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid will have perfect topology, great UV mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.
This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.
In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. And once we have that, we will re-project all the details from the sculpt and apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.
The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.
All the info on my Patreon site.
Thanks!
Xuan.
Lookdev rig for Houdini /
Hello patrons,
In this video I show you how to create a production ready lookdev rig for Houdini, or what I like to call, a single click render solution for your lookdevs.
It is in a way similar to the one we did for Katana a while ago, but using all the power and flexibility of Houdini's HDA system.
Speaking of HDAs, I will be introducing the new HDA features that come with Houdini 18.5.633, which I think are really nice, especially for smaller studios that don't have enough resources to build a pipeline around HDAs.
By the end of this video you should be able to build your own lookdev tool and adapt it to the needs of your projects.
We'll be working with the latest versions of Houdini, Arnold and ACES.
As usual, the video starts with some slides where I try to explain why building a lookdev rig is a must before you do any work on your project. Don't skip it; I know it is boring but very much needed. Downloadable material will be attached in the next post.
Thank you very much for your support!
Head over to my Patreon feed.
Xuan.
Simple spatial lighting /
Hello patrons,
I'm about to release my new video "Simple spatial lighting". Here is a quick summary of everything we will be covering. The length of this video is about 3 hours.
- Differences between HDRIs and spatial lighting.
- Simple vs complex workflows for spatial lighting.
- Handling ACES in Nuke, Mari and Houdini.
- Dealing with spherical projections.
- Treating HDRIs and practical lights.
- Image based modelling.
- Baking textures in Arnold/Maya.
- Simple look-dev in Houdini/RenderMan.
- Spatial lighting setup in Houdini/RenderMan.
- Slap comp in Nuke.
Thanks,
Xuan.
Head over to my Patreon site to access this video and many more.
Real time rendering for vfx, episode 01 /
Episode 01 of "real time rendering for vfx" is dropping later today. I called this video "your first day in Unreal Engine". We will build, from scratch, a very simple environment that we can scout in virtual reality.
These are some of the topics covered in this (almost) 4-hour video.
- Assets considerations for real time.
- Exporting/importing assets in Maya/Unreal.
- Using templates in Unreal.
- Collisions.
- Materials basics.
- Lighting/atmospherics basics.
- Exporting projects.
- VR scouting.
Please head over to my Patreon site and check this out.
Thanks for your support,
Xuan.
Intro to LOPs and USD /
My introduction to Houdini Solaris LOPs and USD is already available on my Patreon feed.
These are the topics that we are going to be covering.
- Introduction to USD and LOPs
- Asset creation workflow
- Simple assets
- Complex assets
- Manual layout and set dressing
- Using instances in LOPs
- Set dressing using information from Maya
- Using departments inputs/outputs
- Publishing system
- Setup for sequence lighting
- Random bits
This introduction is around four and a half hours long.
Check it out here.
Introduction to Redshift - little project /
My Patreon series “Introduction to Redshift for VFX” is coming to an end. We have already discussed in depth the most basic features, like global illumination and sampling, and I shared with you my own “cheat sheets” to deal with GI and sampling. We also talked about Redshift lighting tools, built-in atmospheric effects, and cameras. In the third episode we talked about camera mapping, surface shaders, texturing, displacement maps from Mari and Zbrush, and how to ingest Substance Painter textures, plus a few surfacing exercises.
This should give you a pretty good base to start your projects in Houdini and Redshift, or whatever 3D app you want to use with Redshift.
The next couple of videos in this series are going to be dedicated to doing a little project with Redshift from scratch to finish. We are going to be able to cover more features of the render engine and also discover broader techniques that hopefully you will find interesting. Let me explain what all of this is about.
We’ll be doing this simple shot below from start to finish. It is quite simple and graphic, I know, but to get there I’m going to explain many things that you are going to be using quite a lot in visual effects shots, more than we actually end up using in the shot.
We are going to start by having a quick introduction to SpeedTree Cinema 8 to see how to create procedural trees. We will create from scratch a few trees that later will be used in Houdini. Once we have all the models ready, we will see how to deal with SpeedTree textures to use them in Redshift in an ACES pipeline.
These trees will be used in Houdini to create re-usable asset libraries and later converted to Redshift proxies for memory efficiency and scattering, and also to be easily picked up by lighting artists when working on shots.
With all these trees we will take a look at how to create procedural scattering systems in Houdini using Redshift proxies. We will create multiple configurations depending on our needs. We are also going to learn how to ingest Quixel Megascans assets, again preparing them to work with ACES and creating an additional asset for our library. We will also re-use the scatterers made for trees to scatter rocks and pebbles.
To scatter all of that, we will use Houdini’s height fields as a base. For this particular shot, we are going with a very simple ground made with height fields and Megascans, but I’m going to give you a pretty comprehensive introduction to height fields, way more than what you see in the final shot.
Once all the natural assets are created, we’ll be looking at the textures and look-dev of the character. Yes, there is a character in the shot, you don’t see much but hey, this is what happens in VFX all the time. You spend months working on something barely noticeable. We will look into speed texturing and how to use Substance Painter with Redshift.
Now that we are dealing with characters, what if I show you how to deal with motion capture "guerrilla" style? That way you can grab some random motion capture from any source and apply it to your characters. Look at the clip below; nothing better than a character moving to see if the look actually works.
It looks better when moving, doesn’t it? There is no cloth simulation btw, it is a Redshift course, we are not going that far! Not yet.
Any environment work, of course, needs some kind of volumetrics. Volumetrics create nice lighting effects, give a sense of scale, look good, and produce terrible render times. We need to know how to deal with different types of volumetrics in Redshift, so I’m going to show you how to create a couple of different atmospherics using Houdini’s volumes. Quite simple but effective.
Finally, we will combine everything together in a shot. I will show you how to organize everything properly using bundles and smart bundles to configure your render passes. We will take a look at how Redshift deals with AOVs, render settings, etc. Finally, we will put everything together in Nuke to output a nice render.
Just to summarize, this is what I’m planning to show you while working on this little project. My guess is that it will take me a couple of sessions to deliver all this video training.
Speed Tree introduction and tree creation
ACES texture conversion
ACES introduction in Houdini and Redshift
Creation of tree assets library in Houdini
Megascans ingestion
Character texturing and look-dev
Guerrilla techniques to apply mocap
Introduction to Houdini’s height fields
Redshift proxies
Scattering systems in Houdini
Volume creation in Houdini for atmospherics
Scene assembly
Redshift render settings
Compositing
Something that I probably forgot
All of this and much more training will be published on my Patreon. Please consider supporting me.
Thanks,
Xuan.
Arnold interoperability /
In this video I will guide you through Arnold operators in both Maya and Houdini to show you advanced methods for creating looks, and potentially anything Arnold related. Working with Arnold operators can be very beneficial in your visual effects pipeline; among other things, you are going to be able to transfer "for free" pretty much anything from one 3D package to another, in this case from Maya to Houdini and vice versa.
These days it is very common to work in a traditional 3D package like Maya while creating assets and then move to a scene assembler like Houdini or Katana to do shots. With this workflow you are going to be able to do so in a very clean, tidy and efficient way.
On top of that, I'm going to show you how to create look files that can be easily exported for use in lighting shots, in either Maya or Houdini. You are also going to be able to override looks, version looks in Shotgun and many more things.
This is a two-plus-hour video tutorial posted on my Patreon feed.
Thanks a lot for your support.
Xuan.
Wade Tillman - spec job /
This is just a spec job for Wade Tillman’s character on HBO’s Watchmen. After watching the series I enjoyed the work done by Marz VFX on Tillman’s mask so much that I wanted to do my own. Unfortunately, I don’t have much time, so creating this asset seemed like something doable in a few hours over the weekend. It is just a simple test; it would require a lot more work to be a production-ready asset, of course. I’m just playing here the role of a visual effects designer trying to come up with an idea of how to implement this mask into the VFX production pipeline.
I’m planning to do more work in the future with this asset, including mocap, cloth simulation, proper animated HDRI lighting, etc. I also changed the design that they did on the series. Instead of having the seams in the middle of the head from ear to ear, I placed my seams in the middle of the face, dividing the face in two. I believe the one they did for the real series works much better, but I just wanted to try something different. I will definitely do another test mimicking the other design.
So far I have just tried one design in two different stages: the mask covering the entire head, and the mask pulled up to reveal the mouth and chin of the character, as seen many times in the series. I also tried a couple of looks, one more mirror-like with small imperfections in the reflections, and another, rougher one. I believe they tried similar looks but in the end they went with the one with more pristine reflections.
I think it would be interesting to see another test with different types of materials, introducing some iridescence would also be fun. I will try something else next time.
Capturing lighting and reflections to light this asset properly has to be the most exciting part of this task. That is something that I haven’t done yet, but I will try as soon as I can. It is pretty much like having a mirror ball in the shots. Capturing animated panoramic HDRIs is definitely the way to go, or at least the simplest one. Let’s try it next time.
Finally, I did a couple of cloth simulation tests for both stages of the mask. Just playing a bit with vellum in Houdini.
Just trying different looks here for both stages of the mask.
Simple cloth simulation test. From t-pose to anim pose and walk cycle.
Creases from Maya to Houdini /
This is a quick tip on how to take crease information from Maya to Houdini to be rendered with Arnold. If you are like me and you are using Houdini as a scene assembler, this is something that you will have to deal with sooner or later.
In Maya, I have a simple cube with creases, on the right side you can see how it looks once subdivided twice.
Not only can you take crease information into Houdini, you can also export subdivision information, and HtoA will interpret it automatically. Make sure you set the catclark subdivision type and 2 iterations, or whatever you need.
When exporting the Alembic caches you need to include the Arnold parameters that take care of subdivision and creases. Actually, there is no extra parameter for creases; by including the subdivision parameters you will already get the crease information.
Note that the Arnold parameters in Maya start with the ar_ prefix, for example ar_subdiv_iterations, but in Houdini Arnold parameters don’t use the ar prefix. Because of that, make sure you export the parameters without the ar prefix.
All of this can, of course, happen automatically in your pipeline while publishing assets. It actually should, to make artists’ lives easier and avoid mistakes.
That’s it: if you import the Alembic cache in Houdini, both creases and subdivisions should render as expected. This information can be overridden in SOPs with Arnold parameters.
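The renaming step above is the kind of thing a publish script can do for you. Here's a minimal Python sketch of that idea: strip the Maya-side ar_ prefix so the exported attribute matches what Houdini expects. The helper name and the example attribute list are my own illustration; check your HtoA version for the exact parameter set your pipeline needs.

```python
# Maya's Arnold attributes carry an "ar_" prefix (e.g. ar_subdiv_iterations)
# that the Houdini side does not expect, so a publish step can strip it
# before writing out the attribute list for the Alembic export.
AR_PREFIX = "ar_"

def houdini_attr_name(maya_attr):
    """Strip the Maya-side ar_ prefix so the parameter is picked up in Houdini."""
    if maya_attr.startswith(AR_PREFIX):
        return maya_attr[len(AR_PREFIX):]
    return maya_attr

# Example: the subdivision parameters that also carry the crease data along.
maya_attrs = ["ar_subdiv_iterations", "ar_subdiv_type"]
print([houdini_attr_name(a) for a in maya_attrs])
# ['subdiv_iterations', 'subdiv_type']
```

Attributes that don't carry the prefix pass through unchanged, so the same mapping can run over your whole export attribute list.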
Mari 4.6 new features and production template /
Hello patrons,
I recorded a new video about the new features in Mari 4.6, released just a few weeks ago. I will also talk about some of the new features in Extension Pack 5, and finally I will show you the production template that I've been using lately to do all the texturing and pre-lookDev on many assets for film and TV projects.
This is an overview of the topics covered in this video. The video will be about 2.5 hours long, and it will be published on my Patreon site.
- Mari 4.6 new features
- New material system explained in depth
- Material ingestion tool
- Optimization settings
- How and where to use geo channels
- New camera projection tools
- Extension pack 5 new features (or my most used tools)
- Production template for texturing and pre-lookDev
All the information on my Patreon feed.
Thanks for your support!
Xuan.
Katana Fastrack episode 06 /
Episode 06 of "Katana Fastrack" is already available.
In this episode we will light the first of two shots that I've prepared for this series: a simple full CG shot that will guide you through the lighting workflow in Katana using the lighting template that we created in the previous episode.
On top of that, we also cover the look-dev of the environment used in this shot.
We'll take a look at how to implement delivery requirements in our lighting template, such as specific resolutions based on production decisions.
We will also take a look at how to create and use interactive render filters, a very powerful feature in Katana. And finally, we will do the lighting and slap comp of the first shot of this course.
All the info on my Patreon feed.
Lighting a full cg shot in Houdini, part 01 /
Part 01 of "Lighting a full cg shot in Houdini" is out.
In this first episode I go through everything you need to turn Houdini into a powerful scene assembler, especially focused on look-dev. I will go through other assembly capabilities and lighting/rendering in future videos.
In this episode we will cover:
- How to organize and prepare assets in Maya to be used in Houdini for assembly and render
- Good uv workflows for vfx and animation productions
- How to assemble multiple assets in Houdini in a scene assembly fashion
- Quick look at speed texturing in Substance Painter
- How to create digital assets and presets in Houdini to re-use in your projects
- Look-dev workflow in Houdini and Arnold
All the information on my Patreon feed.
Thanks for your support,
Xuan.
Katana Fastrack episode 05 /
Episode 05 of Katana Fastrack is already published. In this episode we are going to take a look at the lighting pipeline that we could find in any visual effects studio.
First, I will quickly explain the most common workflow when starting a VFX production, from the lighting point of view.
Then, I will explain the recipe that we are going to cook in Katana for lighting shots. And finally, we will jump into Katana to build our lighting template, a tool that we are going to be able to use on many shots and sequences in the future.
Before finishing this episode, we will try our lighting template with very simple assets, testing features like importing look files, shading override, shading edits, geometry edits, etc.
All the info on my Patreon feed.
Clarisse scatterers, part 01 /
Hello patrons,
I just posted the first part of Clarisse scatterers. In this video I'll walk you through some of the point clouds and scatterers available in Clarisse. We will do three production exercises, very simple ones, but hopefully you will understand the workflow and be able to use these tools to create more complicated shots.
In the first exercise we'll be using the point array to create a simple but effective crowd of soldiers. Then we will use the point cloud particle system to generate the effect that you can see in the video attached to this post. A very common effect these days.
And finally we will use the point uv sampler to generate huge environments like forests or cities.
We will continue with more exercises in the second and final part of this scatterer series in Clarisse.
Check it out on my Patreon feed.
Thanks,
Xuan.