USD in depth, part 02 by Xuan Prada

Hello patrons,

USD in depth part 02 is here.
This is another dense video, about 3 hours long, all about USD.
In the first video we talked about data and structure; in this second video we will be talking about composition in USD.

USD layers can be written and read in different ways, so it is very important to know how to use the different composition arcs, specifiers, and everything else that makes up a USD file. This will dictate the way you structure your assets and your whole USD pipeline.
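
As a quick taste of what composition looks like in practice, here is a minimal sketch using the pxr Python API, showing a sublayer and a reference, two of the arcs covered in the video. The file names are just placeholders, not files from the video.

```python
# Minimal sketch with the pxr Python API: a sublayer arc and a reference arc.
# All .usda file names are placeholders.
from pxr import Usd, Sdf

# Two layers: a layout layer and a shot layer that sublayers it.
layout = Sdf.Layer.CreateNew('layout.usda')
shot = Sdf.Layer.CreateNew('shot.usda')
shot.subLayerPaths.append('layout.usda')

# A prim in the shot that references a published asset.
# 'hero_asset.usda' is a placeholder; USD will warn until it exists on disk.
stage = Usd.Stage.Open(shot)
hero = stage.DefinePrim('/World/hero', 'Xform')
hero.GetReferences().AddReference('hero_asset.usda')
stage.GetRootLayer().Save()
```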

Another heavy video, so be patient and drink coffee; you will need both.

All the info on my Patreon.

Thanks!
Xuan.

USD in depth, part 01 by Xuan Prada

Hello Patrons,

I'm starting a new series called "USD in depth", where I will be exploring everything USD related. As you probably know, USD is a new standard pipeline system/file format based on layers that contain scene descriptions. It is supported by the VFX Reference Platform and is, or soon will be, the core of the pipelines at many visual effects and animation studios around the globe.

In this first video (more than 3 hours) we will be talking about what defines USD: types of data, USD structure, terminology, attributes and layers. This is a very dense video, so grab a pot of coffee and enjoy.
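
To give you an idea of the kind of data we will be dissecting, here is a minimal sketch using the pxr Python API: a stage, a typed prim, and an attribute authored on a layer. The names and values are just examples.

```python
# Minimal sketch of USD data: a stage, a typed prim and an authored attribute.
# 'asset.usda' and the prim path are examples only.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew('asset.usda')
sphere = UsdGeom.Sphere.Define(stage, '/asset/geo/sphere')
sphere.GetRadiusAttr().Set(2.0)          # an attribute opinion authored on the layer
stage.GetRootLayer().Save()

# The layer is plain text, so you can inspect exactly what was authored.
print(stage.GetRootLayer().ExportToString())
```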

All the info on my Patreon.

Thanks,
Xuan.

Deep compositing - going deeper by Xuan Prada

Hello patrons,

This is a continuation of the intro to deep compositing, where we go deeper into compositing workflows using deep data.

I will show you how to properly use deep information in a flat composition, so you can work fast and efficiently with all the benefits of deep data but none of the caveats.
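
To illustrate the core idea, here is a minimal Nuke Python sketch (not a setup from the video): hold out one deep render with another, then flatten the result to a regular 2D image for fast downstream comp work. The file paths are placeholders, and you should double-check which DeepMerge input acts as the holdout in your Nuke version.

```python
# Minimal Nuke Python sketch: deep holdout, then flatten to a 2D image.
# File paths are placeholders; verify the DeepMerge input order in your version.
import nuke

env = nuke.nodes.DeepRead(file='renders/env_volume.%04d.exr')   # deep environment
fg = nuke.nodes.DeepRead(file='renders/fg_character.%04d.exr')  # deep foreground

holdout = nuke.nodes.DeepMerge(operation='holdout')
holdout.setInput(0, env)   # element being held out (assumed input order)
holdout.setInput(1, fg)    # element doing the holding out (assumed input order)

flat = nuke.nodes.DeepToImage()   # flatten to a regular 2D image for fast comping
flat.setInput(0, holdout)
```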

The video is more than 3 hours long and we will explore:

- Quick recap of pros and cons of using deep comp.
- Quick recap of basic deep tools.
- Setting up render passes in a 3D software for deep.
- Deep holdouts.
- Organizing deep comps.
- How to use AOVs in deep.
- How to work with precomps.
- Creating deep templates.
- Using 3D geometry in deep.
- Using 2D elements in deep.
- Using particles in deep.
- Generating Z-depth passes from deep data.

Thanks for your support!
Head over to my Patreon for all the info.

Xuan.

Mix 04 by Xuan Prada

Hello patrons,

The first video of 2022 will be a mix of topics.

The first part of the video will be dedicated to face building and face tracking in Nuke. These tools and techniques will allow us to generate 3D heads and faces from only a few photos with the help of AI. Once we have the 3D model, we should be able to track and matchmove a shot to do a full head replacement or to extend/enhance some facial features.

In the second part of the video I will show you a technique that I used while working on Happy Feet to generate footprints and foot trails. A pretty neat technique that relies on transferring information between surfaces instead of going all in with complex simulations.
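
To give a rough idea of what "transferring information between surfaces" can look like (this is only a generic sketch, not the exact setup from the video), here is a Python SOP that stamps a mask onto a ground plane wherever the feet get close to it; run something like this inside a Solver SOP and the marks accumulate into trails over time. The input order and the attribute name are assumptions.

```python
# Rough Python SOP sketch: stamp an 'imprint' mask on the ground near the feet.
# Assumes the ground is input 0 and the animated feet are input 1 of the SOP.
# Brute force, but fine for a sketch of the surface-to-surface transfer idea.
import hou

node = hou.pwd()
ground = node.geometry()
feet = node.inputs()[1].geometry()

if ground.findPointAttrib('imprint') is None:
    ground.addAttrib(hou.attribType.Point, 'imprint', 0.0)

radius = 0.2   # influence distance in scene units
foot_positions = [pt.position() for pt in feet.points()]

for pt in ground.points():
    d = min((pt.position() - fp).length() for fp in foot_positions)
    if d < radius:
        value = max(pt.attribValue('imprint'), 1.0 - d / radius)
        pt.setAttribValue('imprint', value)
```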

This video is about three and a half hours long, so grab yourself a cup of coffee and enjoy!
All the information on my Patreon channel.

As always, thanks for your support!

Xuan.

Scan based materials on a budget by Xuan Prada

Hello patrons,

Last post of the year!

In this two-and-a-half-hour video I will show you my workflow to create smart materials based on photogrammetry, a technique widely used in VFX and the games industry.

But we won't be using special hardware or very expensive photographic equipment; we are going to use only a cheap digital camera or even a smartphone.

In this video you will learn:

- How to shoot photogrammetry for material creation.
- How to process photogrammetry in Reality Capture.
- How to bake texture maps from high resolution geometry in Zbrush.
- How to create smart materials in Substance Designer for Substance Painter or for 3D applications.
- How to use photogrammetry based materials in real time engines.

Thanks for your support and see you in 2022!
Stay safe.

Xuan.

VDB as displacement by Xuan Prada

The sphere is the surface that needs to be deformed by the presence of the cones. The surface can't be modified in any way; we need to stick to its topology and shape. We want to do this dynamically, using only a displacement map, and of course we don't want to sculpt the details by hand, as the animation might change at any time and we would have to re-sculpt everything again.

The cones are growing from frame 0 to 60 and moving around randomly.

I'm adding a For-Each Connected Piece block and, inside the loop, an Edit node to increase the volume of the original cones a little bit.

Just select all in the group field, and set the transform space to local origin by connectivity, so each cone scales from its own center.

Add a VDB From Polygons node, set it to distance VDB and add some resolution; it doesn't need to be super high.

Then I just cache the VDB sequence.

Create an Attribute From Volume node to pass the Cd attribute from the VDB cache to the sphere.
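
For reference, here is a minimal hou Python sketch of the core of this chain: the cached cones into a distance VDB, then transferred onto the sphere. Node and parameter names are from memory and may differ slightly between Houdini versions; the input nodes are placeholders.

```python
# Minimal hou sketch of the node chain described above. Node/parm names are
# assumptions from memory; check them against your Houdini version.
import hou

container = hou.node('/obj').createNode('geo', 'vdb_displacement')
cones = container.createNode('file', 'cones_cache')    # the cached, animated cones
sphere = container.createNode('file', 'sphere')        # the fixed-topology sphere

# Distance VDB built from the growing cones.
vdb = container.createNode('vdbfrompolygons', 'cones_vdb')
vdb.setFirstInput(cones)
vdb.parm('voxelsize').set(0.05)    # moderate resolution is enough

# Transfer the volume values onto the sphere points as an attribute (Cd here).
xfer = container.createNode('attribfromvolume', 'mask_from_vdb')
xfer.setFirstInput(sphere)
xfer.setInput(1, vdb)

xfer.setDisplayFlag(True)
container.layoutChildren()
```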

To visualize it better you can just add a visualizer mapped to the attribute.

In shading, create a user data float, read the Cd attribute and connect it to the displacement.

If you are looking for the opposite effect, you can easily invert the displacement map.

Dealing with lidars by Xuan Prada

Hello patrons,

We haven't talked that much about lidar scanning, and it is time to say a few words about it. Lidar scans are a fundamental piece of the VFX pipeline, used by every single visual effects studio on the planet and counted by the dozens on every film or TV show.

Sooner or later you will have to deal with lidar scans; that's why I have recorded this video, more than 3 hours of professional VFX training about how to use lidar scans.

In this video we will learn:

- What is lidar scanning.
- How we use lidars in VFX.
- How lidar technology works.
- Basics of working on-set with lidars.
- Lidar hardware.
- Lidar software.
- How to process point clouds.
- How to generate meshes for 3D work.

All the information on my Patreon feed.

Thanks!

Xuan.

Detailing digi doubles using generic humans by Xuan Prada

This is probably the last video of the year, but let's see.

This time it is all about getting your concept sculpts into the pipeline. To do this, we are going to use a generic humanoid, usually provided by your visual effects studio. This generic humanoid will have perfect topology, great UV mapping, some standard skin shaders, isolation maps to control different areas, grooming templates, etc.

This workflow will drastically speed up the way you approach digital doubles or any other humanoid character, like this zombie here.

In this video we will focus mainly on wrapping a generic character around any concept sculpt to get a model that can be used for rigging, animation, lookdev, cfx, etc. Once we have that, we will re-project all the details from the sculpt and apply high resolution displacement maps to get all the fine details like skin pores, wrinkles, skin imperfections, etc.

The video is about 2 hours long and we can use this character in the future to do some other videos about character/creature work.

All the info on my Patreon site.

Thanks!

Xuan.

Deep compositing by Xuan Prada

Hello patrons,

In this 2-hour video we are going to be talking about deep compositing workflows.

I will show you how to use deep compositing and why you should be using it for most of your shots.
I will explain the basics behind deep rendering and compositing techniques, and we'll also go through all the deep tools available in Nuke while comping some simple shots, from volumes and atmospheric effects to solid assets.

Video and downloadable material will be included in the next posts.
All the information on my Patreon.

Thanks for your support!

Xuan.

Lookdev rig for Houdini by Xuan Prada

Hello patrons,

In this video I show you how to create a production-ready lookdev rig for Houdini, or what I like to call a single-click render solution for your lookdevs.

It is in a way similar to the one we did for Katana a while ago, but using all the power and flexibility of Houdini's HDA system.

Speaking of HDAs, I will be introducing the new HDA features that come with Houdini 18.5.633, which I think are really nice, especially for smaller studios that don't have the resources to build a pipeline around HDAs.
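
If you have never built an HDA, here is a minimal hou Python sketch of the mechanism the rig relies on: turning a subnet into a digital asset. The paths and names are placeholders, not the actual rig from the video.

```python
# Minimal sketch: convert a subnet into an HDA. Paths and names are placeholders.
import hou

subnet = hou.node('/obj/lookdev_rig')   # assumed subnet containing the rig
hda_node = subnet.createDigitalAsset(
    name='lookdev_rig',
    hda_file_name=hou.expandString('$HIP/hda/lookdev_rig.hda'),
    description='Lookdev rig',
    min_num_inputs=0,
    max_num_inputs=1,
)
# Save the current contents of the node into the HDA definition.
hda_node.type().definition().updateFromNode(hda_node)
```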

By the end of this video you should be able to build your own lookdev tool and adapt it to the needs of your projects.

We'll be working with the latest versions of Houdini, Arnold and ACES.

As usual, the video starts with some slides where I try to explain why building a lookdev rig is a must before you do any work on your project. Don't skip it; I know it is boring, but it is very much needed. Downloadable material will be attached in the next post.

Thank you very much for your support!

Head over to my Patreon feed.

Xuan.

Small dynamic clouds by Xuan Prada

Hello,

I don't think I will be able to publish a video this month, let's see, but in the meantime here you can download five caches of small dynamic clouds that I simulated in Houdini.
They are 1000-frame simulations and should work pretty well for creating vast cloudscapes.

They are .bgeo caches; feel free to convert them to .vdb if you want to use them in any other software.
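
If you want to do that conversion in Houdini itself, here is a minimal hou Python sketch of one way to do it. The cache paths are placeholders and the parameter names should be double-checked against your Houdini version.

```python
# Minimal sketch: load a .bgeo cache, convert to VDB and write .vdb files.
# Paths are placeholders; parm names may vary slightly between versions.
import hou

container = hou.node('/obj').createNode('geo', 'cloud_convert')

load = container.createNode('file')
load.parm('file').set('$HIP/caches/cloud_01.$F4.bgeo.sc')

convert = container.createNode('convertvdb')
convert.setFirstInput(load)
convert.parm('conversion').set('vdb')       # native volumes -> VDB

rop = container.createNode('rop_geometry')
rop.setFirstInput(convert)
rop.parm('sopoutput').set('$HIP/vdb/cloud_01.$F4.vdb')
rop.parm('trange').set(1)                   # render a frame range
rop.parmTuple('f').deleteAllKeyframes()     # drop the default $FSTART/$FEND expressions
rop.parmTuple('f').set((1, 1000, 1))
rop.parm('execute').pressButton()           # write the sequence to disk
```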


The videos below are flipbooks of the animated clouds, not renders.

The downloadable link will be published in the next post.
This is free of charge for all tiers with downloadable resources.

Thanks,
Xuan.

Mix 03 by Xuan Prada

Hello patrons,

This month I have another mix video for you. In this case I'm talking about two different ways of using the camera frustum to optimize your scenes. The first method is using the camera frustum to control the amount of subdivisions, a very common practice when dealing with large terrains that need a lot of displacement detail. We will use Houdini and Arnold, but this technique can be used in any DCC that supports Arnold. Other renderers have similar features.

The second method uses the camera frustum to blast parts of the scene not seen by the camera. This is a tool that we will build in Houdini, and it can be used with any render engine.
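
To give you an idea of the logic behind that tool (this is only a rough sketch, not the actual tool from the video), here is a Python SOP that projects each point into the camera's NDC space and deletes anything outside the frame. The camera path and the padding value are assumptions.

```python
# Rough Python SOP sketch of a camera-frustum cull: project each point into the
# camera's NDC space and delete anything outside the frame (plus a margin).
import hou

node = hou.pwd()
geo = node.geometry()

cam = hou.node('/obj/cam1')                 # assumed camera path
to_cam = cam.worldTransform().inverted()    # world space -> camera space
focal = cam.parm('focal').eval()
aperture = cam.parm('aperture').eval()
resx = cam.parm('resx').eval()
resy = cam.parm('resy').eval()
aperture_y = aperture * resy / float(resx)  # vertical aperture from the aspect ratio
pad = 0.05                                  # keep a small margin around the frame

to_delete = []
for pt in geo.points():
    p = pt.position() * to_cam              # camera-space position
    if p[2] >= 0.0:                         # behind the camera
        to_delete.append(pt)
        continue
    ndc_x = 0.5 + (focal / aperture) * (p[0] / -p[2])
    ndc_y = 0.5 + (focal / aperture_y) * (p[1] / -p[2])
    if not (-pad <= ndc_x <= 1.0 + pad and -pad <= ndc_y <= 1.0 + pad):
        to_delete.append(pt)

geo.deletePoints(to_delete)
```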

Then we will move to another important topic in VFX: motion blur. I will show you how to use it properly to achieve photorealism. Motion blur should be taken very seriously, especially by lighters and FX TDs.

All the information on my Patreon.

Thanks!
Xuan.

Mix 01 and mix 02 by Xuan Prada

Hello patrons,

Today's video is not part of any series; it is just a bunch of things that I consider important or relevant for this channel. More of these mixed-topic videos will be published more frequently.

I actually recorded two videos, almost 4 hours of training, and they will be available as usual in the next post, accessible only to patrons.

Today's topics are:

- New features in Katana 4.x
- Katana's new render queue.
- Rendering multiple variations using graph state variables.
- Improvements in the catalog and monitor.
- USD viewport with Hydra.
- New lighting tools.
- New NetworkMaterialEdit node.
- Modelling desert dunes in Houdini and Blender.
- New Solaris features relevant for asset creation and set dressing.

All the info on my Patreon site.

As always, thanks a lot for your support!
Xuan.

Camera projection masterclass, episode 03 by Xuan Prada

Hello patrons,

I'm about to post "Camera projection masterclass, episode 03".
In this episode we are going to create a nested projection setup, where the camera starts far away at the beginning of the shot and ends up much closer to the subject by the end of the shot. A very common setup that you will see a lot in matte painting and environment tasks.

Then we are going to take a look at the concept of overscan for camera projection. I will show you different ways of creating overscan, explain why overscan is extremely important for all your camera projection setups, and finally we will do a complex overscan camera projection exercise using an impossible camera.
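
As a small taste, here is a minimal Nuke Python sketch of one common way to set up an overscan plate: define a larger format and reformat the plate into it without scaling, so the projection setup has room beyond the original frame. The 10% padding and the format name are just examples, and this is not necessarily one of the methods shown in the video.

```python
# Minimal sketch: build a 10% overscan format and reformat the plate into it
# without scaling. Padding amount and format name are examples only.
import nuke

overscan = 0.10
fmt = nuke.root().format()
w = int(fmt.width() * (1.0 + overscan))
h = int(fmt.height() * (1.0 + overscan))
nuke.addFormat('%d %d 1.0 plate_overscan' % (w, h))

reformat = nuke.nodes.Reformat()
reformat['type'].setValue('to format')
reformat['format'].setValue('plate_overscan')
reformat['resize'].setValue('none')        # keep the plate 1:1, just add border to fill later
reformat['center'].setValue(True)
reformat['black_outside'].setValue(False)
```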

Make a big pot of black coffee because this is around 5 hours of professional training divided into two videos. Oh, and remember that you can download supporting files if your tier includes downloadable material.

All the info on my Patreon feed.

Thanks!
Xuan.

Houdini scatterers 2/2 by Xuan Prada

Hello patrons,

This is the second (and last) part of Houdini scatterers. We are going to take the tools that we made in the first video and see how we can use them to create complex and efficient scattering systems for your VFX shots, especially useful when dealing with huge environments.

I will show you some of my favourite workflows and share some techniques that I've used in the past in combination with the HDAs that we created in this series.

For those of you in tiers with downloadable material, you will have access to another post with some links to get the files.

As usual feel free to contact me with questions, suggestions, ideas, etc.
And if you like my content, please help me out and recommend it to your workmates.

All the info on my Patreon feed.

Thanks a lot for your support!
Xuan.

Real time rendering for vfx, episode 04 by Xuan Prada

Happy New Year!

Real time rendering for vfx episode 04 is here!
This is a long one, around 4 hours split into two different videos, both of them already available for you.

In these two videos I cover a lot of things related to lighting and rendering in Unreal. We will cover all the rendering methods: rasterization, raytracing, hybrid rendering and path tracing.

Some of the topics covered in this video are:

- Rendering methods in Unreal.
- Lightmass.
- Types of lights.
- Volumetric lighting.
- Modulate lighting.
- Global illumination.
- Mesh lights.
- Reflection methods.
- Post processing volumes.
- Particle lighting.
- Blueprints for lighting.
- Light functions.
- Core components of a lighting scene.
- Neutral lighting conditions.
- Rasterization.
- Raytracing.
- Hybrid methods.
- Path tracing.

All the info on my Patreon.

Houdini topo transfer - aka wrap3 by Xuan Prada

For a little while I have been using Houdini's topo transfer tools instead of Wrap3. I'm not saying it can fully replace Wrap3, but for some common and easy tasks, like wrapping generic humans to scans for both modelling and texturing, I can definitely use Houdini now instead of Wrap3.

Wrapping generic humans to scans

  • This technique will allow you to easily wrap a generic human to any actor’s scan to create digital doubles. This workflow can be used while modeling the digital double and also while texturing it. Commonly, a texture artist gets a digital double production model in t-pose or a similar pose that doesn’t necessarily match the scan pose. It is a great idea to match both poses to easily transfer color details and surface details between the scan and the production model.

  • For both situations, modeling or texturing, this is a workflow that usually involves Wrap3 or other proprietary tools for Maya. Now it can also easily be done in Houdini.

  • First of all, open the ztool provided by the scanning vendor in Zbrush. These photogrammetry scans are usually around 13-18 million polygons, too dense for the wrapping process. You can just decimate the model and export it as .obj.

  • In Maya, roughly align your generic human and the scan. If the pose is very different, use your generic rig to roughly match the pose of the scan. Also make sure both models have the same scale. Scaling issues can be fixed in Wrap3, or in Houdini in this case, but I think it is better to fix them beforehand; in a VFX pipeline you will be publishing assets from Maya anyway. Then export both models as .obj.

  • It is important to remove teeth, the interior of the mouth and other problematic parts from your generic human model. This is something you can do in Houdini as well, even after the wrapping, but again, better to do it beforehand.

  • Import the scan in Houdini.

  • Create a topo transfer node (there is a small Python sketch of this node setup after these steps).

  • Connect the scan to the target input of the topo transfer.

  • Bring the base mesh and connect it to the source input of the topo transfer.

  • I had issues in the past using Maya units (decimeters), so it is better to scale by 0.1 just in case.

  • Enable the topo transfer, press enter to activate it. Now you can place landmarks on the base mesh.

  • Add a couple of landmarks, then ctrl+g to switch to the scan mesh, and align the same landmarks.

  • Repeat the process all around the body and click on solve.

  • Your generic human will be wrapped pretty much perfectly to the actor’s scan. Now you can continue with your traditional modeling pipeline, or in case you are using this technique for texturing, move into Zbrush, Mari and/or Houdini for transferring textures and displacement maps. There are tutorials about these topics on this site.
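
Here is a minimal hou Python sketch of the node setup from the steps above. The node type and input order match what I remember of the Topo Transfer SOP, but double-check them in your Houdini version; the paths are placeholders, and the landmark placement and solve still happen interactively in the viewport.

```python
# Minimal sketch of the Topo Transfer setup. Node/parm names are assumptions
# from memory; landmarks and the solve are still done interactively.
import hou

container = hou.node('/obj').createNode('geo', 'wrap_setup')

scan = container.createNode('file', 'scan')              # decimated scan .obj
scan.parm('file').set('$HIP/geo/actor_scan.obj')

base = container.createNode('file', 'generic_human')     # generic human .obj
base.parm('file').set('$HIP/geo/generic_human.obj')

# Scale down if the models come from Maya units.
fix_scale = container.createNode('xform', 'maya_scale_fix')
fix_scale.setFirstInput(base)
fix_scale.parm('scale').set(0.1)

topo = container.createNode('topotransfer', 'wrap')
# Check the input labels on the node: one input is the source (generic human),
# the other is the target (scan). The order below is an assumption.
topo.setFirstInput(fix_scale)
topo.setInput(1, scan)

topo.setDisplayFlag(True)
container.layoutChildren()
```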

Transferring texture data

  • Import the scan and the wrapped model into Houdini.

  • Assign a classic shader with the photogrammetry texture connected to its emission color to the scan. Disable the diffuse component.

  • Create a bakeTexture ROP with the following settings (there is a small Python sketch of this setup after these steps).

    • Resolution = 4096 x 4096.

    • UV object = wrapped model.

    • High res object = scan.

    • Output picture = path_to_file.%(UDIM)d.exr

    • Format = EXR.

    • Surface emission color = On.

    • Baking tab = Tick off Disable lighting/emission and Add baking exports to shader layers.

    • If you get artifacts in the transferred textures, in the unwrapping tab change the unwrap method to trace closest surface. This is common with lidar, photogrammetry and other dirty geometry.

    • You can run the baking locally or on the farm.

  • Take a look at the generated textures.
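
For reference, here is a minimal hou Python sketch of that bake setup. The ROP node type and parameter names are assumptions from memory (the node may be versioned, for example baketexture::3.0, and parm names vary between builds), so verify them by hovering over the parameter labels; paths are placeholders.

```python
# Minimal sketch of the bake setup. The node type may be versioned and the parm
# names below are assumptions; verify them on the Bake Texture ROP in your build.
import hou

bake = hou.node('/out').createNode('baketexture')
bake.setParms({
    'vm_uvobject1':        '/obj/wrapped_model',                 # UV object
    'vm_uvhires1':         '/obj/scan',                          # high res object
    'vm_uvoutputpicture1': '$HIP/tex/basecolor.%(UDIM)d.exr',    # output picture
})
bake.parm('execute').pressButton()     # run the bake locally
```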

Simple spatial lighting by Xuan Prada

Hello patrons,

I'm about to release my new video "Simple spatial lighting". Here is a quick summary of everything we will be covering. The length of this video is about 3 hours.

- Differences between HDRIs and spatial lighting.
- Simple vs complex workflows for spatial lighting.
- Handling ACES in Nuke, Mari and Houdini.
- Dealing with spherical projections.
- Treating HDRIs and practical lights.
- Image based modelling.
- Baking textures in Arnold/Maya.
- Simple look-dev in Houdini/RenderMan.
- Spatial lighting setup in Houdini/RenderMan.
- Slap comp in Nuke.

Thanks,
Xuan.

Head over to my Patreon site to access this video and many more.

Houdini scatterers, part 1/2 by Xuan Prada

Hello patrons,

This new video is part of the "little project" I made in Houdini and Redshift a while ago. In this part 1 of 2 I show you how to create efficient tools in Houdini to deal with scattering, specifically focusing on environments. We are going to create a setup that takes care of randomization while scattering: random rotation, random scale, random assets, etc.
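
To make the idea of randomization attributes concrete, here is a minimal Python SOP sketch (not the actual HDA from the video) that gives each scattered point a random scale, a random rotation around Y and a random asset index, using the usual Houdini instancing attribute names.

```python
# Minimal Python SOP sketch of per-point randomization for scattering:
# random pscale, random yaw (orient) and a random asset index (variant).
# Not the HDA from the video, just the core idea.
import hou, math, random

geo = hou.pwd().geometry()
geo.addAttrib(hou.attribType.Point, 'pscale', 1.0)
geo.addAttrib(hou.attribType.Point, 'orient', (0.0, 0.0, 0.0, 1.0))
geo.addAttrib(hou.attribType.Point, 'variant', 0)

num_variants = 4                              # how many assets we can pick from
for pt in geo.points():
    rng = random.Random(pt.number())          # stable seed per point
    pt.setAttribValue('pscale', rng.uniform(0.8, 1.2))
    yaw = rng.uniform(0.0, 2.0 * math.pi)     # random rotation around Y
    pt.setAttribValue('orient', (0.0, math.sin(yaw * 0.5), 0.0, math.cos(yaw * 0.5)))
    pt.setAttribValue('variant', rng.randint(0, num_variants - 1))
```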

Once the setups are done we are going to create tools or "digital assets" so you can re-use them as many times as you need in future projects without re-doing the setups. We will create an interactive user interface to manipulate the tools.

In part 2 of this video I will show you different scattering techniques using these tools that we are going to build today.

The video is available on my Patreon site.

Thanks for your support!
Xuan.