Not too long ago we needed to create a light rig to light a very reflective character, something like a robot made of chrome. The robot is placed in a real environment full of practical lights, and those lights are changing all the time.
The robot will be created in 3D and we need to integrate it into the real environment, and as I said, all the lights will be changing intensity and temperature, some of them flickering quickly all the time.

And we are talking about a long sequence without cuts, which means we can’t cheat as much as we’d like.
In this situation we can’t use standard equirectangular HDRIs. They won’t be good enough to light the character, as the lighting changes would not be covered by a single panoramic image.

Spheron


The best solution for this case is probably the Spheron. If you can afford one, or rent one in time, this is your tool: it can capture awesome HDRI animations that solve exactly this problem.
But we couldn’t get one in time, so it wasn’t an option for us.

Then we thought about shooting HDRIs as usual, one equirectangular panorama for each lighting condition. It worked for some shots, but in others, where the lights change very fast and blink, we needed to capture live-action video; tricks like animating the transition between different HDRIs wouldn’t be good enough.
So the next step was to capture HDR video at different exposures to create our equirectangular maps.

The regular method


The fastest solution would be to use our regular rigs (Canon 5D Mark III and Nikon D800) mounted on a custom base supporting three cameras with three fisheye lenses, overlapping by around 33%.
With this rig we should be able to capture the whole environment while recording with a Steadicam, just walking around the set.
But obviously those cameras can’t record true HDR. They record H.264 or some other compressed video, and of course we can’t bracket video with them.

Red Epic


To solve the .RAW video and multi-bracketing problem we ended up using Red Epic cameras. But three cameras plus three lenses is quite expensive for on-set survey work, and also quite a heavy rig to walk around a big set.
In the end we used a single Red Epic with an 18mm lens mounted on a Steadicam, and on the other side of the arm we placed a big akromatic chrome ball. With the ball we can cover around 200-240 degrees, even more than with a fisheye lens.
Obviously we get some distortion on the sides of the panorama, but honestly, have you ever seen a perfect equirectangular panorama for 3D lighting being used in a post house?

With the Epic we shot .RAW video at 5 brackets, recording the akromatic ball the whole time while just walking around the set. The final resolution was 4K.
We imported the footage into Nuke and converted it with a simple spherical transform node to create equirectangular panoramas, and finally combined all the exposures into true HDR maps.
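Just to illustrate the idea of that last step, here is a minimal Nuke Python sketch of the bracket merge; the file names and stop offsets are made up, and a production merge would also weight out clipped pixels, but the principle is simply to normalise every bracket back to a common exposure and average them.

```python
# Minimal sketch (Nuke Python). Paths and stop offsets are hypothetical.
import nuke

# one plate per bracket, with its offset in stops relative to the reference exposure
brackets = {
    'ball_plus2.####.exr': +2,   # overexposed bracket
    'ball_base.####.exr':   0,   # reference exposure
    'ball_minus2.####.exr': -2,  # underexposed bracket
}

normalised = []
for path, stops in brackets.items():
    read = nuke.nodes.Read(file=path)
    # bring the bracket back to the reference exposure by dividing out 2^stops
    mult = nuke.nodes.Multiply(value=1.0 / (2.0 ** stops))
    mult.setInput(0, read)
    normalised.append(mult)

# sum the normalised brackets, then divide by the bracket count for a naive average
summed = normalised[0]
for node in normalised[1:]:
    merge = nuke.nodes.Merge2(operation='plus')
    merge.setInput(0, summed)   # B input
    merge.setInput(1, node)     # A input
    summed = merge

average = nuke.nodes.Multiply(value=1.0 / len(brackets))
average.setInput(0, summed)
```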

With this simple setup we worked really fast and efficiently. Reflections and lighting were accurate, and the render times were ridiculously low.
I can’t show any of this footage now, but I will soon.

GoPro


We had a few days to run tests while the set was being built. Some parts of the set were quite inaccessible for a tall person like me.
In the early days of set construction we didn’t have the full rig with us, but we wanted to run quick tests, capture footage and send it back to the studio so the lighting artists could build some Nuke templates to process all the information later on, while we were shooting with the Epic.

We did a few tests with the GoPro Hero 3 Black Edition.
This little camera is great, light and versatile. Of course we can’t shoot .RAW, but at least it has a flat colour profile and can shoot at 4K resolution. You can also control the white balance and the exposure. Good enough for our tests.

We used an akromatic chrome ball mounted on an akromatic base, and on the other side we mounted the GoPro using a Joby support.
We shot using the same methodology we developed for the Epic. Everything worked like a charm, and we got nice panoramas for previs and testing purposes.

It was also fun to shoot with such an unusual rig, and it helped us get used to the set and create all the Nuke templates.
We also did some render tests with the final panoramas and the results were not bad at all. Obviously these panoramas are not true HDR, but for some indie or low-budget projects this could be an option.

Footage captured using a GoPro and akromatic kit

In this case my reflection is in the center of the ball, which doesn’t help to get the best image. The key here is to use a Steadicam to reduce this problem.

 

 

 

Nuke

The Nuke work is very simple here: just use a spherical transform node to convert the footage to equirectangular panoramas.
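As a rough sketch of what that looks like in Nuke Python (the path is a placeholder, and the exact knob and projection option names may differ slightly between Nuke versions, so take it as a starting point rather than a recipe):

```python
# Minimal sketch (Nuke Python). Path, knob and option names are assumptions; check your Nuke version.
import nuke

read = nuke.nodes.Read(file='gopro_ball_plate.####.exr')

# convert the chrome-ball plate to an equirectangular (lat-long) panorama
sph = nuke.nodes.SphericalTransform()
sph.setInput(0, read)
sph['input'].setValue('Mirror Ball')     # assumed label for the chrome-ball projection
sph['output'].setValue('Lat Long map')   # assumed label for the equirectangular output

write = nuke.nodes.Write(file='gopro_latlong.####.exr')
write.setInput(0, sph)
```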


 

Final results using GoPro + akromatic kit

 

 

A few images of the kit


This is another of those steps that I have to do on pretty much any VFX project, and that I consider a must.
This is how I set up my Zbrush displacements in Modo.

- Once you have finished your sculpting work in Zbrush, with all the layers activated go to the lowest subdivision level.

 

 

- Go to the morph target panel, click on StoreMT and import your base geometry. Omit this step if you started your model in Zbrush.

 

 

- Once the morph target is created, you will see it in the viewport. Go back to your sculpted mesh by clicking on the switch button.

 

 

- Export all the displacement maps using the Multi Map Exporter. I would recommend always using 32-bit maps.
- Check my settings for exporting the maps. The most important parameters are scale and intensity: scale should be 1, and intensity will be calculated automatically.

 

- Check the maps in Nuke and use the RotoPaint tool to fix small issues (see the Nuke sketch after these steps).

 

- Once in Modo, import your original asset. Select it in the item list, check linear UVs, and set the number of subdivisions that you want to use.

 

- Assign a new shader to your asset, add the displacement texture as a texture layer and set its effect to displacement.
- Low value and high value should be set to 0 and 100.

 

- In the gamma texture options, set the value to 1.0
- We are working in a linear workflow, which means that scalar textures don’t need to be gamma corrected.

 

- In the shader options, go to the surface normal options and use 1m as the displacement distance. If you are using 32-bit displacements this should be the standard value.

 

- Finally, in the render options, play with the displacement rate to increase the quality of your displacement maps. Values between 0.5 and 1 are welcome; lower values look great but take more time to render, so be careful.

 

- Render a displacement checker to see if everything works fine.
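For the Nuke check mentioned a few steps above, this is a small hedged sketch: read a displacement tile, sample an unsculpted area to confirm the 32-bit map sits around 0 there, and drop a RotoPaint on top for the small fixes. The path and sample position are placeholders.

```python
# Minimal sketch (Nuke Python). The path and the sample coordinates are placeholders.
import nuke

disp = nuke.nodes.Read(file='asset_disp.1001.exr')

# in a 32-bit Zbrush displacement map, unsculpted areas should sit at (or very near) 0.0
flat_value = nuke.sample(disp, 'red', 50, 50)
print('value in a flat area:', flat_value)

# paint out small artefacts before taking the map into Modo
paint = nuke.nodes.RotoPaint()
paint.setInput(0, disp)
```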

If you look through my blog you will find a lot of posts about linear workflow, gamma correction and other colour-related topics in different 3D apps.
Someone asked me about it on Twitter and I’d like to answer here with a quick practical example.

Question: Hey Xuan, do I need to gamma correct a 16-bit floating point Ptex texture if I’m working in a linear workflow?
Answer: Nope, you don’t. It’s linear 1.0, which means it is already in a linear workspace.

That said, let me explain it with a quick and simple example scene.

If the inputs are .hdr or .exr 16-bit or 32-bit images, they don’t need to be gamma corrected. If you are using 8-bit sRGB maps with a 2.2 gamma baked in, then you need to gamma correct them.
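A tiny plain-Python illustration of that round trip, treating the baked encoding as a simple 2.2 gamma (the exact sRGB curve is slightly different, but the idea is the same):

```python
# Simplified round trip: encode to an "sRGB-like" 2.2 gamma, then decode back to linear.
linear = 0.18                     # a mid-grey value in linear light
encoded = linear ** (1.0 / 2.2)   # roughly what the 8-bit sRGB file stores (~0.46)
decoded = encoded ** 2.2          # what the gamma correction gives back to the renderer
print(encoded, decoded)           # decoded returns to ~0.18
```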

 

I have here two different nodes in Nuke. Both of them contain the same cropped image from a 32-bit floating point .exr panorama. I saved them as two different files: one is a 32-bit .exr (linear 1.0) and the other is an 8-bit .tif (sRGB 2.2).

I’m using both images in two different VRay shaders in Maya.

 

I created two small grids, and I applied the shader with the linear image to the grid on the left, and the shader with the sRGB image to the grid on the right.
This is what I got when I rendered the scene.

The sRGB doesn’t look right. It needs to be gamma corrected.
A gamma correction node with a value of 0.455 (1/2.2) should fix it.
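If you prefer to wire it up with Maya Python, here is a minimal sketch of that fix; the texture path and node names are placeholders, and the gammaCorrect output would then feed the VRay material’s diffuse colour instead of the raw file node.

```python
# Minimal sketch (Maya Python). Texture path and node names are placeholders.
import maya.cmds as cmds

# the 8-bit sRGB texture that needs linearising
tex = cmds.shadingNode('file', asTexture=True, name='srgb_crop')
cmds.setAttr(tex + '.fileTextureName', 'crop_srgb.tif', type='string')

# gamma correction node set to 0.455 (1/2.2), as described above
gamma = cmds.shadingNode('gammaCorrect', asUtility=True, name='srgb_to_linear')
for axis in ('X', 'Y', 'Z'):
    cmds.setAttr(gamma + '.gamma' + axis, 1.0 / 2.2)

cmds.connectAttr(tex + '.outColor', gamma + '.value', force=True)
# then connect gamma + '.outValue' to the VRay material's diffuse colour slot
```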

Render again, and everything looks as expected.

You have probably experienced this error a few times already, haven’t you?
It is quite common, especially when you are working with huge assets.

It happened to me a lot last week, working on a 40-UDIM asset and trying to export 32-bit displacement maps.
My machine couldn’t handle it and Zbrush started throwing an “Insufficient memory” error.

If this happens to you and you don’t know how to get your displacement maps out of Zbrush, don’t worry: this small trick could help.

- Run Zbrush using your root account on Mac or an administrator account on Windows.
- On Windows, just right-click the Zbrush icon and select “Run as administrator”.
- On Mac, open a terminal and log in as root.
- Then launch Zbrush.

- Then in Zbrush go to Preferences -> Mem and increase the Compact Memory.

- That’s it. It should work now.
- Unfortunately this trick only worked for me with simple displacements; it didn’t work with vector displacement :(

Zbrush to Maya and Vray 2.0

December 1, 2013

I know how tricky it can be sometimes to make your Zbrush displacements look great outside Zbrush.
Maya, Softimage, Vray, Renderman and Arnold, just to name a few, each treat Zbrush displacements in a different way.
Let me explain my way of exporting displacements from Zbrush to Maya and Vray 2.0.

- First of all, if you are working with a final asset you will have to export your displacement using the base geometry you imported into Zbrush. If you did the sculpt from scratch in Zbrush you may want to export your lowest subdivision mesh, create good UV mapping and re-project your sculpted detail onto that mesh.
If this is the case, check this.

- Go to the lowest subdivision level.

- Turn off all your layers.

- Export as .obj

- This is the object that you are about to render. If you imported a base mesh before, you won’t need to export it again; it will already be in your 3D application.
- Go back to the highest subdivision level.

- Turn on all your layers.

- Go down to the lowest subdivision level.

- Store a new morph target and import the previously exported .obj or your original base mesh from your 3D application.

- Your sculpted model will be replaced by the original mesh with no sculpt information.
- Click on switch morph target to activate your sculpted mesh again.
- You are ready to export the displacement maps; just check my settings below for 16-bit, 32-bit and vector displacement.

- Finally, to set up your shaders and render settings for Zbrush displacements in Maya and Vray 2.0, check my previous post about it.
