Nuke

Deep compositing - going deeper by Xuan Prada

Hello patrons,

This is a continuation of the intro to deep compositing, where we go deeper into compositing workflows using depth.

I will show you how to properly use deep information in a flat composition, so you can work quickly and efficiently with all the benefits of depth data but none of the caveats.

The video is more than 3 hours long and we will explore:

- Quick recap of pros and cons of using deep comp.
- Quick recap of basic deep tools.
- Setting up render passes in a 3D software for deep.
- Deep holdouts.
- Organizing deep comps.
- How to use AOVs in deep.
- How to work with precomps.
- Creating deep templates.
- Using 3D geometry in deep.
- Using 2D elements in deep.
- Using particles in deep.
- Generating Zdepth passes from deep information.

Thanks for your support!
Head over to my Patreon for all the info.

Xuan.

Mix 04 by Xuan Prada

Hello patrons,

First video of 2022 will be a mix of topics.

The first part of the video is dedicated to face building and face tracking in Nuke. Using these tools and techniques will allow us to generate 3D heads and faces from only a few photos, with the help of AI. Once we have the 3D model, we should be able to track and matchmove a shot to do a full head replacement or to extend/enhance some facial features.

In the second part of the video I will show you a technique that I used while working on Happy Feet to generate footprints and foot trails. A pretty neat technique that relies on transferring information between surfaces instead of going all in on complex simulations.

This video is about three and a half hours long, so grab yourself a cup of coffee and enjoy!
All the information on my Patreon channel.

As always, thanks for your support!

Xuan.

Deep compositing by Xuan Prada

Hello patrons,

In this 2-hour video we are going to talk about deep compositing workflows.

I will show you how to use deep compositing and why you should be using it for most of your shots.
I will explain the basics behind deep rendering and compositing techniques, and we'll go through all the deep tools available in Nuke while comping some simple shots, from volumes and atmospheric effects to solid assets.

Video and downloadable material will be included in the next posts.
All the information on my Patreon.

Thanks for your support!

Xuan.

Camera projection masterclass, episode 03 by Xuan Prada

Hello patrons,

I'm about to post "Camera projection masterclass, episode 03".
In this episode we are going to create a nested projection setup, where the camera starts far away from the subject at the beginning of the shot and ends up much closer by the end of it. A very common setup that you will see a lot in matte painting and environment tasks.

Then, we are going to take a look at the concept of overscan for camera projection. I will show you different ways of creating overscan, explain why overscan is extremely important for all your camera projection setups, and finally we will do a complex overscan camera projection exercise using an impossible camera.

Make a big pot of black coffee because this is around 5 hours of professional training divided into two videos. Oh, and remember that you can download supporting files if your tier includes downloadable material.

All the info on my patreon feed.

Thanks!
Xuan.

Camera projection masterclass, episode 01 by Xuan Prada

The very first episode of "Camera projection masterclass" has dropped. For more than two and a half hours I will be introducing you to the fascinating world of camera projection for visual effects. This is a long-format series where I will be covering many concepts, ideas and practical exercises. Let's see what today's episode is all about.

- Introduction to the course
- Matte painting evolution
- Matte painting in the visual effects pipeline
- Matte painting workflows and tools
- Camera projection fundamentals
- Types of camera projections
- Common issues
- Camera projection elements in Nuke and Maya
- Recipes for all types of camera projections in Nuke
- A few words about Photoshop

Downloadable material will be available for certain tiers.

As always, thanks a lot for your support, you make this channel.
Check out my Patreon for more information.
Xuan.

Nuke IBL templates by Xuan Prada

Hello,

I just finished recording about 3 hours of content going through a couple of my Nuke IBL templates. The first one is all about clean-up and artifact removal: you know, getting rid of chunky tripods, removing people from set and whatnot. I will explain a couple of ways of dealing with these issues, both in 2D and in 3D using Nuke's powerful 3D system.

In the second template, I will guide you through the neutralization process, which includes both linearization and white balance. Some people know this process as technical grading. It is a very important step that lighting supervisors or sequence supervisors usually deal with before starting to light any VFX shot.

Footage, scripts and other material will be available to you if you are supporting one of the tiers with downloadable material.

Thanks again for your support! If you like my Patreon feed, please help me spread the word; I would love to get at least 50 patrons, and we are not that far away!

All the info on my Patreon feed.

UDIM workflow in Nuke by Xuan Prada

Texture artists, matte painters and environment artists often have to deal with UDIMs in Nuke. This is a very basic template that hopefully can illustrate how we usually handle this situation.

Cons

  • Slower than using Mari. Each UDIM is treated individually.
  • No virtual texturing, slower workflow. Yes, you can use Nuke's proxies but they are not as good as virtual texturing.

Pros

  • Not dependent on a paint buffer. Always the best resolution available.
  • Non destructive workflow, nodes!
  • Save around £1,233 on Mari's license.

Workflow

  • I'll be using this simple footage as base for my matte.
  • We need to project this in Nuke and bake it on to different UDIMs to use it later in a 3D package.
  • As geometry support I'm using this plane with 5 UDIMs.
  • In Nuke, import the geometry support and the footage.
  • Create a camera.
  • Connect the camera and footage using a Project 3D node.
  • Disable the crop option of the Project 3D node. Otherwise the projection won't go any further than the 0-1 UV range.
  • Use a UV Tile node to point to the UDIM that you need to work on.
  • Connect the img input of the UV Tile node to the geometry support.
  • Use a UVProject node to connect the camera and the geometry support.
  • Set projection to off.
  • Import the camera of the shot.
  • Look through the camera in the 3D view and the matte should be projected on to the geometry support.
  • Connect a Scanline Render to the UV Project.
  • Set the projection model to UV.
  • In the 2D view you should see the UDIM projection that we set previously.
  • If you need to work with a different UDIM just change the UV Tile.
  • So this is the basic setup; do whatever you need in between (projections, painting and so on) to finish your matte. There is a Python sketch of the core wiring after this list.
  • Then export all your UDIMs individually as texture maps to be used in the 3D software.
  • Here I just rendered the UDIMs extracted from Nuke in Maya/Arnold.
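
If you prefer to build this setup with Python rather than by hand, here is a minimal sketch of the core wiring. The node class names (Camera2, ReadGeo2, Project3D), input indices and file paths are assumptions based on recent Nuke versions, so double-check them against the input labels on the nodes.

    import nuke

    # Minimal sketch of the UDIM bake wiring described above (paths are placeholders).
    footage = nuke.nodes.Read(file='/path/to/matte_plate.exr')
    cam = nuke.nodes.Camera2()                                 # projection camera
    geo = nuke.nodes.ReadGeo2(file='/path/to/udim_plane.abc')  # geometry support with the UDIMs

    proj = nuke.nodes.Project3D()
    proj.setInput(0, footage)                  # img input: the plate to project
    proj.setInput(1, cam)                      # cam input: the projection camera
    proj['crop'].setValue(False)               # let the projection go beyond the 0-1 UV range
    geo.setInput(0, proj)                      # feed the projection into the geometry's img input

    # The UVTile node that picks the UDIM to bake is set interactively, as in the steps above.

    uvp = nuke.nodes.UVProject()
    uvp.setInput(0, geo)                       # geometry support
    uvp.setInput(1, cam)                       # axis/cam input
    uvp['projection'].setValue('off')          # keep the existing UVs untouched

    render = nuke.nodes.ScanlineRender()
    render.setInput(1, uvp)                    # obj/scene input
    render['projection_mode'].setValue('uv')   # render into UV space instead of through a camera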

Stmaps by Xuan Prada

One of the first treatments that you will have to apply to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different pieces of footage, or between footage and 3D renders, is also very common. When working across different software like 3DEqualizer, Nuke, Flame, etc., having a common, standard way to copy lens information is a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit EXR images.

  • Using lens grids is always the easiest, fastest and most accurate way of delensing.
  • Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
  • Write the image as a 32-bit .exr.
  • This will output the UV information, which can be read in any software.
  • To apply the lens information to your footage or renders, just use an STMap node connected to the footage and to the UV map, as in the sketch below.
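
As a rough illustration, the write/apply round trip could look like this in Python; the paths are placeholders and the knob values are assumptions, so adapt them to your setup.

    import nuke

    # Write the UV/ST map as a full-float EXR so the lookup stays exact
    # (run this with your lens distortion node selected).
    uv_write = nuke.nodes.Write(file='/path/to/lens_uv.%04d.exr', file_type='exr')
    uv_write.setInput(0, nuke.selectedNode())
    uv_write['datatype'].setValue('32 bit float')

    # Apply the map to a plate or render with an STMap node.
    plate = nuke.nodes.Read(file='/path/to/plate.%04d.exr')
    uv_map = nuke.nodes.Read(file='/path/to/lens_uv.%04d.exr')

    st = nuke.nodes.STMap()
    st.setInput(0, plate)      # src: the footage or render
    st.setInput(1, uv_map)     # stmap: the exported UV map
    st['uv'].setValue('rgb')   # channels that carry the UV lookup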

Bake from Nuke to UVs by Xuan Prada

  • Export your scene from Maya with the geometry and camera animation.
  • Import the geometry and camera in Nuke.
  • Import the footage that you want to project and connect it to a Project 3D node.
  • Connect the cam input of the Project 3D node to the previously imported camera.
  • Connect the img input of the ReadGeo node to the Project 3D node.
  • Look through the camera and you will see the image projected on to the geometry through the camera.
  • Paint or tweak whatever you need.
  • Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
  • The projection option of the UVProject node should be set to off.
  • Use a ScanlineRender node and connect its obj/scene input to the UVProject.
  • Set the projection mode to UV.
  • If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry UVs.
  • Finally use a write node to output your DMP work (see the sketch below).
  • Render in Maya as expected.
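
To write the bake out, a Write node pointed at the UV-mode ScanlineRender is enough. A tiny hedged example, where the node name and path are hypothetical:

    import nuke

    # Render the UV-space bake from the ScanlineRender set up above.
    bake = nuke.nodes.Write(file='/path/to/dmp_bake.1001.exr', file_type='exr')
    bake.setInput(0, nuke.toNode('ScanlineRender1'))   # the ScanlineRender in UV projection mode
    nuke.execute(bake, 1001, 1001)                     # first frame, last frame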

Photography assembly for matte painters by Xuan Prada

In this post I'm going to explain my methodology to merge different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting, for that I use ptGui and there is not a better tool for it.

I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be made of only 2 images or maybe 15; it doesn't matter.
This method is much more complicated and requires more human time than using ptGui or any other stitching software. But the power of this method is that you can use it with HDR footage recorded with a Blackmagic camera, for example.

The pictures that I'm using for this tutorial were taken from a nodal point base, but they are not calibrated or anything like that; in fact, they don't need to be. Obviously, taking pictures from a nodal point rotation base helps a lot, but the good thing about this technique is that you can use different angles taken from different positions, and even different focal lengths and film backs from various digital cameras.

  • I'm using these 7 images taken from a bridge in Chiswick, West London. The resolution of the images is 7000px wide so I created a proxy version around 3000px wide.
  • All the pictures were taken with the same focal length and exposure, and with the ISO and white balance locked.
  • We need to know some information about these pictures. In order to blend the images into a panoramic image we need to know the focal length and the film back, or sensor size.
  • Connect a ViewMetaData node to every single image to check this information. In this case I was the person who took the photos, so I know all of them have the same settings, but if you are not sure, check them one by one.
  • I can see that the focal length is 280/10, which means the images were taken with a 28mm lens.
  • I don't see film back information, but I do see the camera model, a Nikon D800. If I google the film back for this camera I see that the size is 35.9mm x 24mm.
  • Create a camera node with the information of the film back and the focal length.
  • At this point it would be a good idea to correct the lens distortion in your images. You can use a lens distortion node in Nuke if you shot a lens distortion grid, or just eyeball it.
  • In my case I'm using the great lens distortion tools in Adobe Lightroom, but this is only possible because I'm using stills. You should always shoot lens distortion grids.
  • Connect a card node to the image and remove all the subdivisions.
  • Also deactivate the image aspect to have 1:1 cards. We will fix this later.
  • Connect a TransformGeo node to the card, and its axis input to the camera.
  • If we move the camera, the card stays attached to it at all times.
  • Now we are going to create a custom parameter to keep the card aligned to the camera at all times, with the correct focal length and film back. Even if we play with the camera parameters, the image will be updated automatically.
  • In the TransformGeo parameters, right-click, select manage user knobs and add a floating point slider. Call it distance and set the min to 0 and the max to 10.
  • This will allow us to place the card in space always relative to the camera.
  • In the TransformGeo translate z field, press = to type an expression and write -distance.
  • Now if we play with the custom distance value it works.
  • Now we have to refer to the film back and focal length so the card matches the camera information when it is moved or rotated.
  • In the x scale of the TransformGeo node type the expression (input1.haperture/input1.focal)*distance and in the y scale type (input1.vaperture/input1.focal)*distance, where input1 is the camera. There is a Python sketch of this setup right after this list.
  • Now if we play with the distance custom parameter everything is perfectly aligned.
  • Create a group with the card, camera and TransformGeo nodes.
  • Remove input2 and input3, and connect input1 to the card instead of the camera.
  • Step out of the group and connect it to the image. There are usually refreshing issues, so cut the whole group node and paste it back; this fixes the problem.
  • Manage user knobs here and pick the focal length and film back from the camera (just for checking purposes).
  • Also pick the rotation from the camera and the distance from the TransformGeo.
  • With these controls exposed here we won't have to go inside the group when we need them. And we will.
  • Create a Project3D node and connect the camera to its camera input and the group's input1 to its image input.
  • Create a switch node below the TransformGeo node and connect its input1 to the Project3D node.
  • Add another custom control to the group parameters. Use the pulldown choice, call it mode and add two lines: card and project 3D.
  • In the switch node add an expression: parent.mode
  • Set the mode to project 3D.
  • Add a sphere node, scale it up and connect it to the camera projector.
  • You will see the image projected onto the sphere instead of rendered on a flat card.
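
Here is the same custom knob and the two scale expressions written in Python, in case you prefer to script the setup. It assumes the node is called TransformGeo1 and that its input1 is the camera, as above.

    import nuke

    tg = nuke.toNode('TransformGeo1')

    # Custom "distance" slider on the TransformGeo node.
    dist = nuke.Double_Knob('distance', 'distance')
    dist.setRange(0, 10)
    tg.addKnob(dist)

    # Keep the card at -distance along the camera's local Z...
    tg['translate'].setExpression('-distance', 2)   # channel 2 = z

    # ...and scale it so it always fills the frustum for this focal length / film back.
    tg['scaling'].setExpression('(input1.haperture/input1.focal)*distance', 0)   # x
    tg['scaling'].setExpression('(input1.vaperture/input1.focal)*distance', 1)   # y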

Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it is nice to have quick controls to switch between them.

In this tutorial we are going to use the card mode. For now leave it as card and remove the sphere.

  • Set the camera in the viewport and lock it.
  • Now you can zoom in and out without losing the camera.
  • Set the horizon line playing with the rotation.
  • Copy and paste the camera projector group and set the horizon in the next image by doing the same as before: locking the camera and playing with the camera rotation.
  • Create a scene node and add both images. Check that all the images have an alpha channel. Auto alpha should be fine as long as the alpha is completely white.
  • Look through the camera of the first camera projector and lock the viewport. Zoom out and start playing with the rotation and distance of the second camera projection until both images are perfectly blended.
  • Repeat the process with every single image. Just do the same as before: look through the previous camera, lock it, zoom out and play with the controls of the next image until they are perfectly aligned.
  • Create a camera node and call it shot camera.
  • Create a scanline render node.
  • Create a reformat node and type in the format of your shot. In this case I'm using a Super 35 format, which means 1920x817.
  • Connect the obj/scene input of the scanline render to the scene node.
  • Connect the camera input of the scanline render to the shot camera.
  • Connect the reformat node to the bg input of the scanline render node.
  • Look through the scanline render in 2D and you will see the panorama through the shot camera.
  • Play with the rotation of the camera in order to place the panorama in the desired position. There is a Python sketch of this branch right after this list.
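
A quick Python version of this shot camera branch; the Scene node name and the class names are assumptions.

    import nuke

    shot_cam = nuke.nodes.Camera2(name='shot_camera')

    fmt = nuke.nodes.Reformat()
    fmt['type'].setValue('to box')
    fmt['box_width'].setValue(1920)    # Super 35 shot format used above
    fmt['box_height'].setValue(817)

    sr = nuke.nodes.ScanlineRender()
    sr.setInput(0, fmt)                      # bg input: drives the output resolution
    sr.setInput(1, nuke.toNode('Scene1'))    # obj/scene input: the scene with all the projector groups
    sr.setInput(2, shot_cam)                 # camera input: the shot camera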

That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.

  • Create another scanline render node and change the projection mode to spherical. Connect it to the scene.
  • Create a reformat node with an equirectangular format and connect it to the bg input of the scanline render. In this case I'm using a 4000x2000 format.
  • Create a sphere node and connect it to the spherical scanline render. Put a mirror node in between to invert the normals of the sphere.
  • Create another scanline render and connect it's camera input to the shot camera.
  • Connect the bg input of the new scanline render to the shot reformat node (super 35).
  • Connect the obj/scene input of the new scanline render to the sphere node.
  • That's all that you need.
  • You can look through the scanline render in the 2D and 3D viewports. We get all the images projected in 3D and rendered through the shot camera; there is a Python sketch of this spherical branch below.
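
And a sketch of this spherical (lat-long) branch; again, node names, class names and input indices are assumptions.

    import nuke

    # Render the whole scene into an equirectangular map...
    latlong_fmt = nuke.nodes.Reformat()
    latlong_fmt['type'].setValue('to box')
    latlong_fmt['box_width'].setValue(4000)
    latlong_fmt['box_height'].setValue(2000)

    sr_sphere = nuke.nodes.ScanlineRender()
    sr_sphere.setInput(0, latlong_fmt)
    sr_sphere.setInput(1, nuke.toNode('Scene1'))
    sr_sphere['projection_mode'].setValue('spherical')

    # ...map it onto a big sphere (add the Mirror node from the steps above
    # in between if you need to face the normals inwards)...
    sphere = nuke.nodes.Sphere()
    sphere.setInput(0, sr_sphere)

    # ...and render that sphere through the shot camera at the shot format.
    sr_shot = nuke.nodes.ScanlineRender()
    sr_shot.setInput(0, nuke.toNode('Reformat1'))    # the Super 35 reformat from the previous sketch
    sr_shot.setInput(1, sphere)
    sr_shot.setInput(2, nuke.toNode('shot_camera'))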

You can download the sample scene here.

Fixing “nadir” in Nuke by Xuan Prada

Sometimes you may need to fix the nadir of the HDRI panoramas used for lighting and look-development.
It's very common for your tripod to end up on the ground in your pictures, especially if you use a Nodal Ninja panoramic head or similar. You know, one of those pano heads that requires you to shoot separate images for the zenith and nadir.

I usually do this task in specific tools for VFX panoramas like PTGui, but if you don't have PTGui, the easiest way to handle it is in Nuke.
It is also very common, when you work at a big VFX facility, for other people to handle the stitching of the HDRI panoramas. If they are in a hurry, they might stitch the panorama and deliver it for lighting, forgetting to fix small (or big) imperfections.
In that case, I'm pretty sure that you as a lighting or look-dev artist will not have PTGui installed on your machine, so Nuke will be your best friend for fixing those imperfections.

This is an example that I took a while ago, one of the brackets for one of the angles. As you can see, I'm shooting remotely with my laptop, but it's covering a big chunk of the ground.

When the panorama was stitched, the laptop became a problem. This panorama is just a preview, sorry for the low image quality.
Fixing this in an equirectangular panorama would be a bit tricky, even worse if you are using a Nodal Ninja type pano head.
So, find below how to fix it in Nuke. I'm using a high resolution panorama that you can download for free at akromatic.com.

  • First of all, import your equirectangular panorama in Nuke and use your desired colour space.
  • Use a spherical transform node to see the panorama as a mirror ball.
  • Change the input type to “Lat Long map” and the output type to “Mirror Ball“.
  • In this image you can see how your panorama will look in the 3D software. If you think that something is not looking good in the “nadir” just get rid of it before rendering.
  • Use another spherical transform node but in this case change the output type to “Cube” and change the rx to -90 so we can see the bottom side of the cube.
  • Using a roto paint node we can fix whatever you need/want to fix.
  • Take another spherical transform node, change the input type to “Cube” and the output type to “Lat Long map“.
  • You will notice 5 different inputs now.
  • I’m using constant colours to see which input corresponds to each specific part of the panorama.
  • The nadir should be connected to the input -Y
  • The output format for this node should be the resolution of the final panorama.
  • I replace each constant colour with black.
  • Each black constant should also have an alpha channel.
  • This is what you get. The nadir that you fixed as a flat image is now projected back into the final panorama.
  • Check the alpha channel of the result.
  • Use a merge node to blend the original panorama with the new nadir.
  • That's it. Use another spherical transform node with the output type set to Mirror Ball to see how the panorama looks now. As you can see, we got rid of the distortions on the ground. A small Python sketch of the final merge follows.
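
If you want to script that final merge, a minimal sketch could look like this; the node names and paths are hypothetical.

    import nuke

    pano = nuke.nodes.Read(file='/path/to/panorama_latlong.exr')   # original equirectangular panorama
    nadir = nuke.toNode('SphericalTransform3')   # the Cube -> Lat Long map branch carrying the fixed nadir

    # Merge the repaired nadir (with its alpha) over the original panorama,
    # then check the result through another SphericalTransform set to Mirror Ball.
    fix = nuke.nodes.Merge2(operation='over')
    fix.setInput(0, pano)    # B: original panorama
    fix.setInput(1, nadir)   # A: repaired nadir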

P-maps by Xuan Prada

P-maps or position maps are one of those render passes that can save your life sometimes. They are really useful for compositing artists, matte painters and texture artists. Rendering p-maps out of your render engine can save a lot of time, letting you handle those tiny changes with 2D or 2.5D techniques instead of going back to the 3D software.

I personally use p-maps for different purposes, let me tell you some of them.

To place cards or other 2.5D or 3D elements

  • This is the render that I’m using for this small article. Nothing fancy there, just a few cubes and a couple of direct lights. This image has been rendered in Maya and V-Ray but of course you can render p-maps with any other combination of 3D program and render engine.
  • As additional render passes for this image I got the “normal pass” and the “world position pass”. You probably know that there are three different p-maps: World position, camera position and object position. They can be used for different purposes but all of them have the same kind of information, which is position.

    As their names say, one holds position information relative to the world centre, the second one holds the position relative to the camera, and the third one the position relative to the object's centre.

    You can render out all of them if you need, but for this example I’m going to use only world position map. All the techniques shown here can be extrapolated to the other maps.
  • This image is an .exr with all the render passes embedded so you can easily switch between them.
  • This is what the world position map looks like.
  • And this is what the normals pass looks like.
  • Use a shuffle node to read the p-map. If you need to un-premultiply it you can do it before the shuffle.
  • Use a position to points node to read the p-map and convert it to a 3D view.
  • Now you can move around the scene like in any 3D software. This is extremely useful if you need to place any 2.5D or 3D element, like cards for example. Cards are great for placing matte paintings, animated 2D elements, etc. You don't need to guess anymore, just place your card in the right position. There is a small Python sketch of this setup right after this list.
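
A small Python sketch of this first step; the layer name (P_world) and the node class names are assumptions, so match them to your own render.

    import nuke

    render = nuke.nodes.Read(file='/path/to/beauty_with_aovs.exr')   # multichannel EXR with the p-map

    # Pull the world position layer into rgb.
    p_shuffle = nuke.nodes.Shuffle(label='world position')
    p_shuffle.setInput(0, render)
    p_shuffle['in'].setValue('P_world')

    # Turn it into a point cloud you can inspect in the 3D viewer and place cards against.
    p2p = nuke.nodes.PositionToPoints()
    p2p.setInput(0, p_shuffle)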

To completely re-light your scene

  • This is how your scene looks straight out of the render package.
  • As we did before, shuffle the p-map.
  • Take a re-light node. Connect the rgb to the color and the shuffle to the material. Then tweak the re-light node and select the normals and the point position.
  • Finally create a camera node and a scene node. Hook them all to the re-light node.
  • Now create a light node and connect it to the scene node. Play with the light to re-light your scene.
  • As we saw before, you can use your 3D scene to place the light.

To add subtle lighting information (or not that subtle)

  • Use a p-matte node to input your p-map and to output the information to the alpha channel. If you play with the shape, position and scale you will see the information of your new light in the alpha channel.
  • Connect the p_matte to the mask input of a grade node and play with it to tweak the intensity and color of the light.
  • Use a plus node to add as many lights as you need.
  • Of course you can use the 3D view provided by the p-maps to help place the lights. The grade/mask wiring is sketched in Python below.
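
For reference, the grade-with-mask wiring could be scripted like this; 'PMatte1' stands in for whatever p-matte gizmo you use, and the gain values are just an example.

    import nuke

    beauty = nuke.toNode('Read1')       # the beauty render
    p_matte = nuke.toNode('PMatte1')    # hypothetical: outputs the light shape in the alpha

    light = nuke.nodes.Grade()
    light.setInput(0, beauty)
    light.setInput(1, p_matte)          # mask input: limit the grade to the new "light"
    light['white'].setValue([1.4, 1.2, 1.0, 1.0])   # warm gain, tweak to taste

    # Build one of these per light and combine the extra contributions with
    # Merge (plus) nodes, as in the steps above.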

To project through camera

  • Projecting mattes, smoke, or any other information has never been so easy. Use the 3D view and a 3D camera to project detail on to the current render.
  • You can create a new camera or import it from your 3D package.
  • Use a re-project node to connect the camera, the image that you want to project (in this case a grid) and use a shuffle node with the p-map to provide vector information.
  • Finally I'm using a merge node to combine the grid projection with the original render, masked using the embedded alpha from the .exr render.
  • Of course you can use the 3D view to place or modify the camera.