HDRI

Shooting HDRIs by Xuan Prada

In my next Patreon video I will explain how to capture HDRIs for lighting and look-dev in visual effects. Then we will process all the data in Nuke and PTGui to create the final textures. Finally, everything will be tested using IBL in Houdini and Karma.

This video will be available on my Patreon very soon. Please consider becoming a subscriber to have full access to my library of VFX training.

https://www.patreon.com/elephantvfx

Thanks!

Simple spatial lighting by Xuan Prada

Hello patrons,

I'm about to release my new video "Simple spatial lighting". Here is a quick summary of everything we will be covering. The length of this video is about 3 hours.

- Differences between HDRIs and spatial lighting.
- Simple vs complex workflows for spatial lighting.
- Handling ACES in Nuke, Mari and Houdini.
- Dealing with spherical projections.
- Treating HDRIs and practical lights.
- Image based modelling.
- Baking textures in Arnold/Maya.
- Simple look-dev in Houdini/RenderMan.
- Spatial lighting setup in Houdini/RenderMan.
- Slap comp in Nuke.

Thanks,
Xuan.

Head over to my Patreon site to access this video and many more.

Katana Fastrack episode 06 by Xuan Prada

Episode 06 of "Katana Fastrack" is already available.

In this episode we will light the first of two shots that I've prepared for this series: a simple full-CG shot that will guide you through the lighting workflow in Katana, using the lighting template that we created in the previous episode.

On top of that, we also cover the look-dev of the environment used in this shot.
We'll take a look at how to implement delivery requirements in our lighting template, such as specific resolutions based on production decisions.

We will also take a look at how to create and use interactive render filters, a very powerful feature in Katana. And finally, we will do the lighting and slap comp of the first shot of this course.

All the info on my Patreon feed.

Nuke IBL templates by Xuan Prada

Hello,

I just finished recording about 3 hours of content going through a couple of my Nuke IBL templates. The first one is all about clean-up and artifact removal. You know, how to get rid of chunky tripods, removing people from set and whatnot. I will explain a couple of ways of dealing with these issues, both in 2D and in 3D using Nuke's powerful 3D system.

In the second template, I will guide you through the neutralization process, which includes both linearization and white balance. Some people know this process as technical grading. It is a very important step that lighting supervisors or sequence supervisors usually deal with before starting to light any VFX shot.

Footage, scripts and other material will be available to you if you are supporting one of the tiers with downloadable material.

Thanks again for your support! If you like my Patreon feed, please help me spread the word. I would love to get at least 50 patrons, and we are not that far away!

All the info on my Patreon feed.

Ricoh Theta for image acquisition in VFX by Xuan Prada

This is a very quick overview of how I use my tiny Ricoh Theta for lighting acquisition in VFX. I always use one of my two traditional setups for capturing HDRIs and bracketed textures, but on top of that I use a Theta as a backup. Sometimes, if I don't have enough room on set, I might only use a Theta, but this is not ideal.

There is no way to control this camera manually, which is a shame. But using an iPhone app like Simple HDR you can at least do bracketing. You still can't fully control it, but it is something.

As always when capturing any camera data, you will need a Macbeth chart.

For HDRI acquisition it is always extremely important to have good references for your lighting distribution, density, temperature, reflection and shadow. Spheres are a must.

For this particular exercise I'm using a mini Manfrotto tripod to place my camera approximately 50cm above the ground.

This is the equirectangular map that I got after merging the 7 brackets generated automatically with the Theta. There are 2 major disadvantages if you compare this panorama with the ones you typically get using a traditional DSLR + fisheye setup.

  • Poor resolution, artefacts and aberrations
  • Poor dynamic range

I use Merge to HDR Pro in Photoshop to merge my brackets. It is very fast and it actually works, but never use Photoshop to work with data images.
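The merge itself is simple math: each bracket is divided by its shutter time to estimate scene radiance, and the estimates are blended with weights that favour well-exposed pixels. A minimal numpy sketch of that idea (not the Photoshop algorithm, and assuming already-linearized brackets):

```python
import numpy as np

def merge_brackets(brackets, shutter_times):
    """Merge linearized LDR brackets (floats in [0, 1]) into one HDR
    radiance map, trusting well-exposed pixels the most."""
    acc = np.zeros_like(brackets[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(brackets, shutter_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0  # hat weight: favour mid-tones
        acc += w * (img / t)               # per-bracket radiance estimate
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# synthetic patch of radiance 2.0 shot at 1/4s and 1/8s
scene = np.full((2, 2), 2.0)
hdr = merge_brackets([np.clip(scene * 0.25, 0, 1),
                      np.clip(scene * 0.125, 0, 1)], [0.25, 0.125])
```

Real tools also recover the camera response curve first; this sketch skips that step.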

Once the panorama has been stitched, move to Nuke to neutralise it.

Start by neutralising the plate.
Linearization first, followed by white balance.
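The two neutralization steps can be sketched as code: undo the transfer curve, then compute per-channel gains from the Macbeth grey patch. A minimal numpy sketch, assuming sRGB source footage and a hypothetical grey-patch reading:

```python
import numpy as np

def srgb_to_linear(x):
    """Undo the sRGB transfer curve (the linearization step)."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

def white_balance_gains(grey_patch_rgb):
    """Per-channel gains that neutralize the Macbeth grey patch,
    anchored to the green channel."""
    r, g, b = grey_patch_rgb
    return np.array([g / r, 1.0, g / b])

# hypothetical grey-patch reading sampled from the plate
patch = srgb_to_linear([0.5, 0.47, 0.44])
balanced = patch * white_balance_gains(patch)  # equal R, G, B
```

In Nuke the same thing is usually done with a Colorspace node followed by a Grade, but the math is this.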

Copy the grading from the plate to the panorama.

Save the maps, go to Maya and create an IBL setup.
The dynamic range in the panorama is very low compared with what we would have if we were using a traditional DSLR setup. This means that our key light is not going to work very well, I'm afraid.

If we compare the CG against the plate, we can easily see that the sun is not working at all.

The best way to fix this issue at this point is to go back to Nuke and remove the sun from the panorama. Then crop it and save it as an HDR texture to be mapped onto a CG light.

Map the HDR texture to an area light in Maya and place it accordingly.

Now we should be able to match the key light much better.

Final render.

Quick and dirty free IBLs by Xuan Prada

Some of my spare IBLs that I shot a while ago using a Ricoh Theta. They contain around 12EV of dynamic range. The resolution is not great but it still holds up for look-dev and lighting tasks.

Feel free to download the equirectangular .exrs here.
Please do not use in commercial projects.

Cafe in Barcelona.

Cafe in Barcelona render test.

Hobo hotel.

Hobo hotel render test.

Campus i12 green room.

Campus i12 green room render test.

Campus i12 class.

Campus i12 class render test.

Chiswick Gardens.

Chiswick Gardens render test.

Environment reconstruction + HDR projections by Xuan Prada

I've been working on the reconstruction of this fancy environment in Hackney Wick, East London.
The idea behind this exercise was to recreate the environment in terms of shape and volume, and then project HDRIs onto the geometry. Doing this we get more accurate lighting contribution, occlusion, reflections and color bleeding, and much better environment interaction between 3D assets, which basically means better integrations for our VFX shots.

I tried to make it as simple as possible, spending just a couple of hours on location.

  • The first thing I did was draw some diagrams of the environment and, using a laser measurer, cover the whole place, writing down all the information needed later for the virtual reconstruction.
  • Then I did a quick map of the environment in Photoshop with all the relevant information. Just to keep all my annotations clean and tidy.
  • The drawings and annotations would have been good enough for this environment, just because it's quite simple. But in order to make it better I decided to scan the whole place. Lidar scanning is probably the best solution for this, but I decided to do it using photogrammetry. I know it takes more time, but you get textures at the same time: not only texture placeholders, but true HDR textures that I can use later for projections.
  • I took around 500 images of the whole environment and ended up with a very dense point cloud. Just perfect for geometry reconstruction.
  • For the photogrammetry process I took around 500 shots, every single one composed of 3 bracketed exposures, 3 stops apart. This gives me a good dynamic range for this particular environment.
  • I combined the 3 brackets to create rectilinear HDR images, then exported them as both HDR and LDR. The .exr HDRs will be used for texturing and the .jpg LDRs for photogrammetry purposes.
  • I also did a few equirectangular HDRIs with an even higher dynamic range. Then I projected these in Mari using the environment projection feature. Once I completed the projections from different tripod positions, I covered the remaining areas with the rectilinear HDRs.
  • These are the five different HDRI positions and some render tests.
  • The next step is to create a proxy version of the environment. With the 3D scan this is very simple to do, and the final geometry will be very accurate because it's based on photos of the real environment. You could also do a very high detail model, but in this case the proxy version was good enough for what I needed.
  • Then, high resolution UV mapping is required to get good texture resolution. Every single one of my photos is 6000x4000 pixels. The idea is to project some of them (we don't need all of them) through the photogrammetry cameras. This means great texture resolution if the UVs are good. We could even create full 3D shots and the resolution would hold up.
  • After that, I imported into Mari a few cameras exported from Photoscan and the corresponding rectilinear HDR images, applied the same lens distortion to them and projected them in Mari and/or Nuke through the cameras, always keeping the dynamic range.
  • Finally I exported all the UDIMs (around 70) to Maya, all of them 16-bit images with the original dynamic range required for 3D lighting.
  • After mipmapping them I did some render tests in Arnold and everything worked as expected. I can play with the exposure and get great lighting information from the walls, floor and ceiling. I did a few render tests with this old character.
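The "3 brackets, 3 stops apart" choice above is just dynamic-range arithmetic: the series covers the camera's own usable range plus the extra stops the brackets add. A small sketch of that rule of thumb (the 11-stop figure is an assumed camera value, not measured):

```python
def covered_stops(camera_stops, n_brackets, spacing_stops):
    """Rough dynamic range captured by a bracketed series: the
    camera's own usable stops plus what the extra brackets add."""
    return camera_stops + (n_brackets - 1) * spacing_stops

# 3 brackets, 3 stops apart, on a body with an assumed ~11 usable stops
print(covered_stops(11, 3, 3))  # -> 17
```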

IBL and sampling in Clarisse by Xuan Prada

Using IBLs with huge ranges for natural light (the sun) is just great. They give you very consistent lighting conditions and the behaviour of the shadows is fantastic.
But sampling those massive values can be a bit tricky sometimes. Your render will have a lot of noise and artifacts, and you will have to deal with tricks like creating cropped versions of the HDRIs or clamping values in Nuke.
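The clamping trick mentioned above can be sketched in a couple of lines; the 1000.0 threshold is an arbitrary example value, not a recommendation:

```python
import numpy as np

def clamp_hdri(hdri, max_value):
    """Clamp extreme radiance values (typically the sun disc) so the
    sampler doesn't have to resolve near-delta light sources."""
    return np.minimum(hdri, max_value)

# the last "pixel" stands in for the sun; 1000.0 is an arbitrary cap
panorama = np.array([0.3, 1.2, 45000.0])
clamped = clamp_hdri(panorama, 1000.0)  # sun pixel clamped to 1000.0
```

The cost of clamping is that the key light loses energy, which is exactly why it's nice that Clarisse doesn't force you to do it.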

Fortunately in Clarisse we can deal with this issue quite easily.
Shading, lighting and anti-aliasing are completely independent in Clarisse. You can tweak one of them without affecting the others, saving a lot of rendering time. In many renderers shading sampling is multiplied by anti-aliasing sampling, which forces users to tweak all the shaders in order to get decent render times.

  • We are going to start with this noisy scene.
  • The first thing you should do is change the Interpolation Mode to MipMapping in the Map File of your HDRI.
  • Then we need to tweak the shading sampling.
  • Go to the raytracer and activate previz mode. This will remove lighting information from the scene. All the noise here comes from the shaders.
  • In this case we get a lot of noise from the sphere. Just go to the sphere's material and increase the reflection quality under sampling.
  • I increased the reflection quality to 10 and can't see any noise in the scene any more. 
  • Select the raytracer again and deactivate the previz mode. All the noise is now coming from lighting.
  • Go to the gi monte carlo and disable affect diffuse. Doing this, gi won't affect lighting, so we now have only direct lighting. If you see some noise just increase the sampling of your direct lights.
  • Go to the gi monte carlo and re-enable affect diffuse. Increase the quality until the noise disappears.
  • The render is noise free now but it still looks a bit low res, this is because of the anti-aliasing. Go to raytracer and increase the samples. Now the render looks just perfect.
  • Finally there is a global sampling setting that usually you won't have to play with. But just for your information, the shading oversampling set to 100% will multiply the shading rays by the anti-aliasing samples, like most of the render engines out there. This will help to refine the render but rendering times will increase quite a bit.
  • Now, if you want quick and dirty results for look-dev or lighting, just play with the image quality. You will not get pristine renders but they will be good enough for establishing looks.

HDRI shooting (quick guide) by Xuan Prada

This is a quick introduction to HDRI shooting on set for visual effects projects.
If you want to go deeper into this topic please check my DT course here.

Equipment

The list below is professional equipment for HDRI shooting. Good results can be achieved with amateur gear, and you don't necessarily need to spend a lot of money on HDRI capturing, but the better the equipment you own, the easier and faster you will work and the better results you'll get. Obviously this gear is based on my taste.

  • Lowepro Vertex 100 AW backpack
  • Lowepro Flipside Sport 15L AW backpack
  • Full frame digital DSLR (Nikon D800)
  • Fish-eye lens (Nikkor 10.5mm)
  • Multi purpose lens (Nikkor 28-300mm)
  • Remote trigger
  • Tripod
  • Panoramic head (360 precision Atome or MK2)
  • akromatic kit (grey ball, chrome ball, tripod plates)
  • Lowepro Nova Sport 35L AW shoulder bag (for akromatic kit)
  • Macbeth chart
  • Material samples (plastic, metal, fabric, etc)
  • Tape measurer
  • Gaffer tape
  • Additional tripod for akromatic kit
  • Cleaning kit
  • Knife
  • Gloves
  • iPad or laptop
  • External hard drive
  • CF memory cards
  • Extra batteries
  • Data cables
  • Witness camera and/or second camera body for stills

All the equipment packed up. Try to keep everything small and tidy.

All your items should be easy to pick up.

Most important assets are: camera body, fish-eye lens, multi purpose lens, tripod, nodal head, Macbeth chart and lighting checkers.

Shooting checklist

  • Full coverage of the scene (fish-eye shots)
  • Backplates for look-development (including ground or floor)
  • Macbeth chart for white balance
  • Grey ball for lighting calibration 
  • Chrome ball for lighting orientation
  • Basic scene measurements
  • Material samples
  • Individual HDR artificial lighting sources if required

Grey and chrome spheres, extremely important for lighting calibration.

Macbeth chart is necessary for white balance correction.

Before shooting

  • Try to carry only the indispensable equipment. Leave cables and other stuff in the van, don’t carry extra weight on set.
  • Set up the camera, clean the lenses, format the memory cards, etc. before you start shooting. Extra camera adjustments may be required at the moment of shooting, but try to establish exposure, white balance and other settings before the action. Know your lighting conditions.
  • Have more than one CF memory card with you all the time ready to be used.
  • Have a small cleaning kit with you all the time.
  • Plan the shoot: write a shooting diagram with your own checklist, the strategies you will need to cover the whole thing, the lighting conditions, etc.
  • Try to plant your tripod where the action happens or where your 3D asset will be placed.
  • Try to reduce the cleaning area. Don't put anything at your feet or around the tripod; you will have to hand paint it out later in Nuke.
  • When shooting backplates for look-dev use a wide lens, something around 24mm to 28mm, and always cover more space, not only where the action occurs.
  • When shooting textures for scene reconstruction always use a Macbeth chart and at least 3 exposures.

Methodology

  • Plant the tripod where the action happens, stabilise it and level it
  • Set manual focus
  • Set white balance
  • Set ISO
  • Set raw+jpg
  • Set aperture
  • Meter the exposure
  • Set neutral exposure
  • Read the histogram and adjust the neutral exposure if necessary
  • Shoot slate (operator name, location, date, time, project code name, etc)
  • Set auto bracketing
  • Shoot 5 to 7 exposures, 3 stops apart, covering the whole environment
  • Place the akromatic kit where the tripod was placed and take 3 exposures. Keep half of the grey sphere hit by the sun and half in shade.
  • Place the Macbeth chart 1m away from the tripod on the floor and take 3 exposures
  • Take backplates and ground/floor texture references
  • Shoot reference materials
  • Write down measurements of the scene, especially if you are shooting interiors.
  • If shooting artificial lights take HDR samples of each individual lighting source.
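The bracketing step above can be sketched as a small helper that builds the shutter series from the neutral exposure while aperture and ISO stay locked (function name and defaults are illustrative):

```python
def bracket_shutters(neutral_shutter, n_exposures=7, step_stops=3):
    """Shutter speeds for a bracketed series centred on the neutral
    exposure, each bracket step_stops apart (aperture and ISO locked)."""
    half = n_exposures // 2
    return [neutral_shutter * 2.0 ** (step_stops * i)
            for i in range(-half, half + 1)]

# e.g. a neutral exposure of 1/125s, 7 brackets, 3 stops apart
times = bracket_shutters(1.0 / 125)
```

Most bodies will do this for you with auto bracketing, but it's handy on cameras that only bracket 3 frames.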

Final HDRI equirectangular panorama.

Exposures starting point

  • Day light sun visible ISO 100 F22
  • Day light sun hidden ISO 100 F16
  • Cloudy ISO 320 F16
  • Sunrise/Sunset ISO 100 F11
  • Interior well lit ISO 320 F16
  • Interior ambient bright ISO 320 F10
  • Interior bad light ISO 640 F10
  • Interior ambient dark ISO 640 F8
  • Low light situation ISO 640 F5
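These starting points can be compared using the standard exposure-value formula, EV = log2(N²/t) adjusted for ISO. A small sketch, assuming a hypothetical 1/125s shutter just for the comparison:

```python
import math

def exposure_value(f_number, shutter, iso=100):
    """Standard photographic EV: log2(N^2 / t), shifted for ISO
    sensitivities other than 100."""
    return math.log2(f_number ** 2 / shutter) - math.log2(iso / 100)

# Two of the starting points above, with an assumed 1/125s shutter
print(round(exposure_value(22, 1 / 125, 100)))  # bright sun -> 16
print(round(exposure_value(8, 1 / 125, 640)))   # dark interior -> 10
```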

That should be it for now, happy shooting :)

Photography assembly for matte painters by Xuan Prada

In this post I'm going to explain my methodology for merging different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting; for that I use PTGui and there is no better tool for it.

I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be composed using only 2 images or maybe 15, it doesn't matter.
This method is much more complicated and requires more human time than using PTGui or any other stitching software. But the power of this method is that you can use it with HDR footage recorded with a Blackmagic camera, for example.

The pictures that I'm using for this tutorial were taken with a nodal point base, but they are not calibrated. In fact they don't need to be. Obviously, taking pictures from a nodal point rotation base will help a lot, but the good thing about this technique is that you can use different angles taken from different positions, with different focal lengths and different film backs from various digital cameras.

  • I'm using these 7 images taken from a bridge in Chiswick, West London. The resolution of the images is 7000px wide so I created a proxy version around 3000px wide.
  • All the pictures were taken with same focal, same exposure and with the ISO and White Balance locked.
  • We need to know some information about these pictures. In order to blend the images in to a panoramic image we need to know the focal length and the film back or sensor size.
  • Connect a ViewMetaData node to every single image to check this information. In this case I was the person who took the photos, so I know all of them have the same settings, but if you are not sure about the settings, check them one by one.
  • I can see that the focal length is 280/10, which means the images were taken using a 28mm lens.
  • I don't see film back information but I do see the camera model, a Nikon D800. If I google the film back for this camera I see that the size is 35.9mm x 24mm.
  • Create a camera node with the information of the film back and the focal length.
  • At this point it would be a good idea to correct the lens distortion in your images. You can use a lens distortion node in Nuke if you shot a lens distortion grid, or just eyeball it.
  • In my case I'm using the great lens distortion tools in Adobe Lightroom, but this is only possible because I'm using stills. You should always shoot lens distortion grids.
  • Connect a card node to the image and remove all the subdivisions.
  • Also deactivate the image aspect to have 1:1 cards. We will fix this later.
  • Connect a transform geo node to the card, and its axis input to the camera.
  • If we move the camera, the card is attached to it all the time.
  • Now we are about to create a custom parameter to keep the card aligned to the camera all the time, with the correct focal length and film back. Even if we play with the camera parameters, the image will be updated automatically.
  • In the transform geo parameters, RMB and select manage user knobs, then add a floating point slider. Call it distance. Set the min to 0 and the max to 10.
  • This will allow us to place the card in space always relative to the camera.
  • In the transform geo translate z, press = to type an expression and write -distance.
  • Now if we play with the custom distance value it works.
  • Now we have to refer to the film back and focal length so the card matches the camera information when it's moved or rotated.
  • In the x scale of the transform geo node type this expression: (input1.haperture/input1.focal)*distance, and in the y scale type: (input1.vaperture/input1.focal)*distance, where input1 is the camera axis.
  • Now if we play with the distance custom parameter everything is perfectly aligned.
  • Create a group with the card, camera and transfer geo nodes.
  • Remove input2 and input3 and connect input1 to the card instead of the camera.
  • Go out of the group and connect it to the image. There are usually refreshing issues so cut the whole group node and paste it. This will fix the problem.
  • Manage knobs here and pick the focal length and film back from the camera (just for checking purposes)
  • Also pick the rotation from the camera and the distance from the transform geo.
  • Having these controls here we won't have to go inside of the group if we need to use them. And we will.
  • Create a project 3D node and connect the camera to the camera input and the input1 to the input.
  • Create a switch node below the transform geo node and connect its input1 to the project3D node.
  • Add another custom control to the group parameters. Use the pulldown choice, call it mode and add two lines: card and project 3D.
  • In the switch node add an expression: parent.mode
  • Put the mode to project 3D.
  • Add a sphere node, scale it big and connect it to the camera projector.
  • You will see the image projected on the sphere instead of being rendered on a flat card.
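The scale expression used above is simple similar-triangles math: at a given distance, the frustum width is the distance times the ratio of aperture to focal length. A quick sketch verifying it with the D800 film back mentioned earlier (values only for illustration):

```python
def card_scale(aperture_mm, focal_mm, distance):
    """Card size that exactly fills the camera frustum at a given
    distance; same math as (input1.haperture/input1.focal)*distance."""
    return (aperture_mm / focal_mm) * distance

# Nikon D800 film back (35.9mm x 24mm) with the 28mm lens, 10 units away
sx = card_scale(35.9, 28.0, 10.0)  # x scale
sy = card_scale(24.0, 28.0, 10.0)  # y scale
```

Note that sx/sy reproduces the sensor's aspect ratio, which is why the card has to start as a 1:1 square.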

Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it's nice to have quick controls to switch between them.

In this tutorial we are going to use the card mode. For now leave it as card and remove the sphere.

  • Set the camera in the viewport and lock it.
  • Now you can zoom in and out without losing the camera.
  • Set the horizon line playing with the rotation.
  • Copy and paste the camera projector group and set the horizon in the next image by doing the same as before: locking the camera and playing with the camera rotation.
  • Create a scene node and add both images. Check that all the images have an alpha channel. Auto alpha should be fine as long as the alpha is completely white.
  • Look through the camera of the first camera projector and lock the viewport. Zoom out and start playing with the rotation and distance of the second camera projection until both images are perfectly blended.
  • Repeat the process with every single image. Just do the same as before: look through the previous camera, lock it, zoom out and play with the controls of the next image until they are perfectly aligned.
  • Create a camera node and call it shot camera.
  • Create a scanline render node.
  • Create a reformat node and type the format of your shot. In this case I'm using a super 35 format which means 1920x817
  • Connect the obj/scene input of the scanline render to the scene node.
  • Connect the camera input of the scanline render to the shot camera.
  • Connect the reformat node to the bg input of the scanline render node.
  • Look through the scanline render in 2D and you will see the panorama through the shot camera.
  • Play with the rotation of the camera in order to place the panorama in the desired position.

That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.

  • Create another scanline render node and change the projection mode to spherical. Connect it to the scene.
  • Create a reformat node with an equirectangular format and connect it to the bg input of the scanline render. In this case I'm using a 4000x2000 format.
  • Create a sphere node and connect it to the spherical scanline render. Put a mirror node in between to invert the normal of the sphere.
  • Create another scanline render and connect its camera input to the shot camera.
  • Connect the bg input of the new scanline render to the shot reformat node (super 35).
  • Connect the scn/obj input of the new scanline render to the sphere node.
  • That's all that you need.
  • You can look through the scanline render in the 2D and 3D viewport. We got all the images projected in 3D and rendered through the shot camera.

You can download the sample scene here.

Image Based Lighting in Clarisse by Xuan Prada

I've been using Isotropix Clarisse in production for a little while now. Recently the VFX facility where I work announced the adoption of Clarisse as its primary look-dev and lighting tool, so I decided to start talking about this powerful raytracer on my blog.

Today I'm writing about how to set-up Image Based Lighting.

  • We can start by creating a new context called ibl. We will put all the elements needed for ibl inside this context.
  • Now we need to create a sphere to use as "world" for the scene.
  • This sphere will be the support for the equirectangular HDRI texture.
  • I just increased the radius a lot. Keep in mind that this sphere will be covering all your assets inside of it.
  • In the image view tab we can see the render in real time.
  • Right now the sphere is lit by the default directional light.
  • Delete that light.
  • Create a new matte material. This material won't be affected by lighting.
  • Assign it to the sphere.
  • Once assigned the sphere will look black.
  • Create an image to load the HDRI texture.
  • Connect the texture to the color input of the matte shader.
  • Select the desired HDRI map in the texture path.
  • Change the projection type to "parametric".
  • HDRI textures are usually 32-bit linear images, so you need to indicate this in the texture properties.
  • I created two spheres to check the lighting. Just press "f" to fit them in the viewport.
  • I also created two standard materials, one for each sphere. I'm creating lighting checkers here.
  • And a plane, just to check the shadows.
  • If I go back to the image view, I can see that the HDRI is already affecting the spheres.
  • Right now, only the secondary rays are being affected, like the reflection.
  • In order to create proper lighting, we need to use a light called "gi_monte_carlo".
  • Right now the noise in the scene is insane. This is because of all the crazy detail in the HDRI map.
  • First thing to reduce noise would be to change the interpolation of the texture to Mipmapping.
  • To have a noise free image we will have to increase the sampling quality of the "gi_monte_carlo" light.
  • Noise reduction can be also managed with the anti aliasing sampling of the raytracer.
  • The most common approach is to combine raytracer sampling, lighting sampling and shading sampling.
  • Around 8 raytracing samples and something around 12 lighting samples are common settings in production.
  • There is another method to do IBL in Clarisse without the cost of GI.
  • Delete the "gi_monte_carlo" light.
  • Create an "ambient_occlusion" light.
  • Connect the HDRI texture to the color input.
  • In the render only the secondary rays are affected.
  • Select the environment sphere and deactivate the "cast shadows" option.
  • Now everything works fine.
  • To clean the noise increase the sampling of the "ambient_occlusion" light.
  • This is a cheaper IBL method.
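The "parametric" projection above maps each direction on the environment sphere to a point in the equirectangular texture. A minimal sketch of that mapping (axis conventions vary per package; this one, with -Z as forward, is an assumption):

```python
import math

def direction_to_latlong_uv(x, y, z):
    """Map a normalized direction to (u, v) in [0, 1] on an
    equirectangular (lat-long) texture. -Z is 'forward' here."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)
    v = 0.5 - math.asin(y) / math.pi
    return u, v

# the forward direction samples the centre of the panorama
print(direction_to_latlong_uv(0.0, 0.0, -1.0))  # (0.5, 0.5)
```

This is also why pixels near the poles are so stretched, and why mipmapping helps so much with sampling noise.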

Animated HDRI with Red Epic and GoPro by Xuan Prada

Not too long ago, we needed to create a light rig to light a very reflective character, something like a robot made of chrome. This robot is placed in a real environment with a lot of practical lights, and these lights are changing all the time.
The robot will be created in 3D and we need to integrate it into the real environment, and as I said, all the lights will be changing intensity and temperature, some of them flickering all the time and very quickly.

And we are talking about a long sequence without cuts, that means we can’t cheat as much as we’d like.
In this situation we can’t use standard equirectangular HDRIs. They won’t be good enough to lit the character as the lighting changes will not be covered by a single panoramic image.

Spheron

The best solution for this case is probably the Spheron. If you can afford it or rent it in time, this is your tool. You can get awesome HDRI animations to solve this problem.
But we couldn’t get it on time, so this is not an option for us.

Then we thought about shooting HDRIs as usual, one equirectangular panorama for each lighting condition. It worked for some shots, but in others, where the lights were changing very fast and blinking, we needed to capture live action video. Tricks like animating the transition between different HDRIs wouldn't be good enough.
So the next step would be to capture HDR video with different exposures to create our equirectangular maps.

The regular method


The fastest solution would be to use our regular rigs (Canon 5D Mark III and Nikon D800) mounted on a custom base supporting 3 cameras with 3 fisheye lenses. They would have to overlap by around 33%.
With this rig we should be able to capture the whole environment while recording with a steadicam, just walking around the set.
But obviously those cameras can't record true HDR. They always record h264 or another compressed video format. And of course we can't bracket video with those cameras.

Red Epic

To get .RAW video and multi-bracketing we ended up using Red Epic cameras. But using 3 cameras plus 3 lenses is quite expensive for on-set survey work, and also quite a heavy rig to walk all around a big set.
Finally we used only one Red Epic with an 18mm lens mounted on a steadicam, and on the other side of the arm we placed a big akromatic chrome ball. With this ball we can cover around 200-240 degrees, even more than using a fisheye lens.
Obviously we will get some distortion on the sides of the panorama, but honestly, have you ever seen a perfect equirectangular panorama for 3D lighting being used in a post house?

With the Epic we shot .RAW video at 5 brackets, recording the akromatic ball all the time and just walking around the set. The final resolution was 4K.
We imported the footage into Nuke and converted it using a simple spherical transform node to create true HDR equirectangular panoramas. Finally we combined all the exposures.

With this simple setup we worked really fast and efficiently. Reflections and lighting were accurate, and the render times were ridiculously low.
Can’t show any of this footage now but I’ll do it soon.

GoPro

We had a few days to make tests while the set was being built. Some parts of the set were quite inaccessible for a tall person like me.
In the early days of set construction we didn't have the full rig with us, but we wanted to make quick tests, capture footage and send it back to the studio, so lighting artists could make some Nuke templates to process all the information later on while shooting with the Epic.

We did a few tests with the GoPro hero 3 Black Edition.
This little camera is great, light and versatile. Of course we can't shoot .RAW, but at least it has a flat colour profile and can shoot at 4K resolution. You can also control the white balance and the exposure. Good enough for our tests.

We used an akromatic chrome ball mounted on an akromatic base, and on the other side we mounted the GoPro using a Joby support.
We shot using the same methodology that we developed for the Epic. Everything worked like a charm, getting nice panoramas for previs and testing purposes.

It was also fun to shoot with quite an unusual rig, and it helped us to get used to the set and to create all the Nuke templates.
We also did some render tests with the final panoramas and the results were not bad at all. Obviously these panoramas are not true HDR but for some indie projects or low budget projects this would be an option.

Footage captured using a GoPro and akromatic kit

In this case I'm reflected in the centre of the ball, which doesn't help to get the best image. The key here is to use a steadicam to reduce this problem.

Nuke

The Nuke work is very simple here: just use a spherical transform node to convert the footage to equirectangular panoramas.
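The spherical transform works because each point on the chrome ball reflects a known direction of the environment. A minimal sketch of that mapping, assuming the camera looks down -Z and ball image coordinates are normalized to [-1, 1]:

```python
import math

def mirrorball_to_direction(px, py):
    """Reflection direction seen at point (px, py) on the chrome-ball
    image, coordinates in [-1, 1], camera looking down -Z."""
    r2 = px * px + py * py
    assert r2 <= 1.0, "point is outside the ball"
    nz = math.sqrt(1.0 - r2)  # ball surface normal z component
    # reflect the view vector (0, 0, -1) about the normal (px, py, nz)
    return (2 * px * nz, 2 * py * nz, 2 * nz * nz - 1)

# the ball centre reflects straight back at the camera...
print(mirrorball_to_direction(0.0, 0.0))  # (0.0, 0.0, 1.0)
# ...while the rim reflects the far side of the environment
print(mirrorball_to_direction(1.0, 0.0))  # (0.0, 0.0, -1.0)
```

The rim compresses a lot of environment into very few pixels, which is where the distortion mentioned above comes from.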

Final results using GoPro + akromatic kit

Few images of the kit

Fixing “nadir” in Nuke by Xuan Prada

Sometimes you may need to fix the nadir of the HDRI panoramas used for lighting and look-development.
It's very common that your tripod appears on the ground of your pictures, especially if you use a Nodal Ninja panoramic head or similar. You know, one of those pano heads that require you to shoot separate images for the zenith and nadir.

I usually do this task in specific tools for VFX panoramas like PTGui, but if you don't have PTGui the easiest way to handle this is in Nuke.
It is also very common, when you work at a big VFX facility, that other people work on the stitching process of the HDRI panoramas. If they are in a hurry they might stitch the panorama and deliver it for lighting, forgetting to fix small (or big) imperfections.
In that case, I’m pretty sure that you as lighting or look-dev artist will not have PtGui installed on your machine, so Nuke will be your best friend to fix those imperfections.

This is an example that I took a while ago, one of the brackets for one of the angles. As you can see I'm shooting remotely with my laptop, but it's covering a big chunk of the ground.

When the panorama was stitched, the laptop became a problem. This panorama is just a preview, sorry for the low image quality.
Fixing this in an equirectangular panorama would be a bit tricky, even worse if you are using a Nodal Ninja type pano head.
So, find below how to fix it in Nuke. I'm using a high resolution panorama that you can download for free at akromatic.com.

  • First of all, import your equirectangular panorama in Nuke and use your desired colour space.
  • Use a spherical transform node to see the panorama as a mirror ball.
  • Change the input type to “Lat Long map” and the output type to “Mirror Ball“.
  • In this image you can see how your panorama will look in the 3D software. If you think that something is not looking good in the “nadir” just get rid of it before rendering.
  • Use another spherical transform node but in this case change the output type to “Cube” and change the rx to -90 so we can see the bottom side of the cube.
  • Using a roto paint node we can fix whatever you need/want to fix.
  • Take another spherical transform node, change the input type to “Cube” and the output type to “Lat Long map“.
  • You will notice 5 different inputs now.
  • I’m using constant colours to see which input corresponds to each specific part of the panorama.
  • The nadir should be connected to the input -Y
  • The output format for this node should be the resolution of the final panorama.
  • I replaced each constant colour with black.
  • Each black colour should also have an alpha channel.
  • This is what you get. The nadir that you fixed as a flat image is now projected all the way along on the final panorama.
  • Check the alpha channel of the result.
  • Use a merge node to blend the original panorama with the new nadir.
  • That's it. Use another spherical transform node with the output type set to Mirror Ball to see how the panorama looks now. As you can see, we got rid of the distortions on the ground.
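The cube trick works because the -Y face sees exactly the directions around the nadir, so a flat paint fix there maps cleanly back into the lat-long panorama. A minimal sketch of how a point on that face maps to lat-long coordinates (axis conventions are an assumption):

```python
import math

def bottom_face_to_latlong(u, v):
    """Longitude/latitude sampled by point (u, v) in [-1, 1] on the
    cube's -Y (nadir) face. Axis conventions are an assumption."""
    x, y, z = u, -1.0, v          # ray through the bottom face
    lon = math.atan2(x, z)
    lat = math.asin(y / math.sqrt(x * x + y * y + z * z))
    return lon, lat

# the face centre is the exact nadir (latitude -90 degrees)
lon, lat = bottom_face_to_latlong(0.0, 0.0)
```

This is also why the fix looks undistorted on the cube face but smears across the whole bottom row of the lat-long image.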