Introduction to Gaffer part 03, where I show you how to use references, AOVs, batch rendering, displacement maps, and other useful tools.
Houdini as scene assembler part 02 /
In the previous post I showed you how to load alembic caches using the file node and then change the viewport visualization to bounding box. This is good enough if you are, let's say, look-deving a character. If you want to load a heavy alembic cache, like a very detailed city with a lot of buildings or a huge spaceship, you might want to use a different approach.
Instead of using a file node, it is better to use the alembic node to load your assets, set the option Load As: Alembic Delayed Load Primitives, and display as bounding box. This way the geometry isn't actually loaded into memory, which will be way more efficient down the line.
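If you prefer to set this up with Python, the gist is just a couple of parameters on the Alembic SOP. A minimal sketch, assuming the usual parameter names (fileName, loadmode, viewportlod) and a made-up cache path; names and menu tokens can vary between Houdini versions.

```python
import hou

# Geometry container plus an Alembic SOP pointing at a (hypothetical) heavy cache.
geo = hou.node("/obj").createNode("geo", "city_geo")
abc = geo.createNode("alembic", "city_abc")
abc.parm("fileName").set("$HIP/caches/city.abc")  # hypothetical path

# Switch "Load As" to the delayed-load entry without hard-coding its token.
load = abc.parm("loadmode")
matches = [i for i, label in enumerate(load.menuLabels()) if "Delayed Load" in label]
if matches:
    load.set(load.menuItems()[matches[0]])

# Display the container as a bounding box ("box" menu token assumed).
geo.parm("viewportlod").set("box")
```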
In this post I'm just talking about shading assignment in Houdini.
The easiest and simplest way to assign shaders is to select the asset node in the /obj context and assign a shader in the render tab. Your Mantra shaders should be placed in the /mat context and your Arnold shaders in the /shop context, as /mat is not fully supported yet.
In the /mat context you can just go and create a Mantra Principled Shader. For Arnold, it is better to create an Arnold shader network and then connect any Arnold shader inside it to the surface input.
Houdini doesn't have an isolation mode for shading components like Maya (as far as I know) but you can drag and drop shaders and textures onto the viewport or IPR while look-deving. This only works in the /mat context (again, as far as I know).
Another way of assigning shaders is creating material nodes inside of the alembic node. These material nodes can be assigned to different parts of your asset using wildcards. To assign multiple materials you can create different tabs in the material node, or you can just concatenate material nodes (which I prefer). This technique works with both Mantra and Arnold.
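As a rough idea of what wildcard assignments look like when scripted, here is a sketch using the Material SOP inside the asset's geo container. The parameter names (num_materials, group1, shop_materialpath1) are the standard Material SOP ones as far as I know, and the node paths and shader names are made up.

```python
import hou

geo = hou.node("/obj/city_geo")                 # hypothetical geo container
mat = geo.createNode("material", "assign_materials")

# Two assignments, selected by wildcards against the alembic hierarchy.
mat.parm("num_materials").set(2)
mat.parm("group1").set("*buildings*")                       # wildcard selection
mat.parm("shop_materialpath1").set("/mat/concrete_shader")  # hypothetical shader
mat.parm("group2").set("*windows*")
mat.parm("shop_materialpath2").set("/mat/glass_shader")
```

Concatenated material nodes behave the same way: each one simply adds or overrides assignments for the groups it matches.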
You will find yourself most of the time creating material networks (Mantra) or shop networks (Arnold) containing all the shaders of your asset. In a lighting shot you will end up with a different subnetwork for each asset in the shot.
These subnetworks of shaders can be placed at the /obj level or inside the alembic nodes containing your assets.
Another clever way of assigning shaders is using the data tree -> object appearance pane. This only works at object level; if you want to go deeper into your alembic asset, you first need to add a node called packed edit. Then in the data tree you will have access to all the different parts of your asset.
There is another way of controlling looks in Houdini, and that is using the material style sheets. We will cover this tool in future posts.
A bit more of Gaffer /
I keep playing with Gaffer and keep discovering how to do the stuff I'm used to doing in other software.
Introduction to Gaffer /
From GafferHQ: Gaffer is a free, open-source, node-based VFX application that enables look developers, lighters, and compositors to easily build, tweak, iterate, and render scenes. Built with flexibility in mind, Gaffer supports in-application scripting in Python and OSL, so VFX artists and technical directors can design shaders, automate processes, and build production workflows.
With hooks in both C++ and Python, Gaffer's readily extensible API provides both professional studios and enthusiasts with the tools to add their own custom modules, nodes, and UI.
The workhorse of the production pipeline at Image Engine Design Inc., Gaffer has been used to build award-winning VFX for shows such as Jurassic World: Fallen Kingdom, Lost in Space, Logan, and Game of Thrones.
Houdini as scene assembler, part 01 (of many) /
It's been a while since I used Houdini at work. The very first time I used Houdini on a show was while working on Happy Feet 2, where it was our main scene assembler; look-dev, lighting and rendering were all done in Houdini and 3Delight.
After that I didn't use Houdini again until I was working on Geostorm at Dneg, where most of the shots were managed with Houdini and PrMan. That is all my experience with Houdini in a professional environment. Needless to say, I have only used Houdini for assembly tasks, look-dev, lighting and rendering, nothing like fx or other fancy stuff.
The common thing between the two shows where I used Houdini as assembler is that we had pretty neat tools to take care of most of the steps through the pipeline. Because of that I can barely use Houdini out of the box, so I'm going to try to learn how to use it and share it here for future reference.
During my time working at facilities like MPC, Dneg or Framestore, I have used different scene assemblers like Katana, Clarisse or other proprietary tools. My goal is to extrapolate my knowledge and experience with those tools to Houdini. I'm pretty sure I'll be using some tools and techniques the wrong way, just because Houdini has a different philosophy than other packages, or because of my general lack of knowledge about Houdini and proceduralism. But anyway, I'll try to make it work; if you see anything that I'm doing terribly wrong, please let me know, I'll be listening.
I'll be posting about stuff that I'm dealing with in no particular order but always assembly oriented, so do not expect to see here anything related to fx or more "traditional" uses of Houdini. Most of the stuff is going to be very basic, especially at the beginning, but please bear with me, it will get more interesting in the future.
If you are assembling a scene, one of the first steps is to bring in all your assets from other applications. You can of course generate content in Houdini, but usually most of your assets will be created in other packages, Maya being the most common one. So I guess the very first thing you'd have to deal with is how to import alembic caches. If you are working in a VFX facility, chances of having automated tools to set up your shots for you are pretty high; launching Houdini from a context in a terminal will take care of everything. If you are at home or starting to use Houdini in a VFX boutique, you will have to set up your shots manually. There are clever and easy ways to create Houdini templates for your show/shot, but we will leave this topic for future posts.
To bring in your assets as alembic caches, just create a geometry node, step inside and replace the existing file node with another file node pointing to your alembic cache, or simply change the path of the existing file node to read your alembic cache.
If you are look-deving a character, let's say, it is completely fine to look at the full geometry in the viewport. If you are assembling a big scene like a city or a spaceship, you'd probably want to change your viewport settings to something like bounding boxes. There are better ways of dealing with bounding boxes without loading the geo; more on that soon.
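If you end up with lots of heavy assets at /obj, a few lines of Python can switch all of them to bounding-box display at once. A quick sketch, assuming the geometry objects expose the usual viewportlod parameter and that its bounding-box menu token is "box" (both may vary between Houdini versions).

```python
import hou

# Display every geometry object under /obj as a bounding box.
for node in hou.node("/obj").children():
    if node.type().name() == "geo":
        lod = node.parm("viewportlod")   # parameter name assumed
        if lod is not None:
            lod.set("box")               # "box" token assumed
```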
Assets are usually complex, and we try to keep everything tidy and organised by naming everything properly and structuring groups and hierarchies in a particular way that makes sense for our purposes. The unpack node will allow you to access all the different parts and components of the alembic caches and to perform different operations on them later. The groups can be selected based on the hierarchies created in Maya or based on wildcards. It is extremely important to use clever naming and to structure groups following a certain logic, to make the assembly process easier and faster.
The blast node also helps you access the information contained in the alembic cache and remove whatever you don't need for a particular operation. You can also invert the selection to keep the items that you wrote in the group field and get rid of the rest (see the sketch below).
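For reference, a blast-based selection is also easy to script. A small sketch, assuming the standard Blast SOP parameters (group, negate) and completely made-up node and group names.

```python
import hou

geo = hou.node("/obj/city_geo")        # hypothetical container
unpacked = geo.node("unpack1")         # hypothetical upstream unpack node

blast = geo.createNode("blast", "isolate_towers")
blast.setInput(0, unpacked)
blast.parm("group").set("*tower_A* *tower_B*")  # wildcards against the Maya hierarchy
blast.parm("negate").set(True)                  # keep the selection, delete the rest
```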
The group node is another very useful node to point to different groups in your alembic caches. Again based on Maya grouping and wildcards.
That is it for now in that sense; there are many ways to manipulate alembic caches, but we don't need to talk about that just yet. In these first posts I will be talking mostly about bringing in assets, working with textures and look-dev. That is the first step for assembling a shot: we need assets ready to travel through the pipeline.
UV mapping is key for us. A lot of tasks performed in Houdini use procedural UVs or no UVs at all, but this is not our case: assets always have proper UV mapping. Generally speaking you will do all the UV related tasks in Maya, UV Layout or similar tools. In order to see the UVs in Houdini we need to unpack the alembic cache first; then we will be able to press "5" and look at the UVs.
Use a quick uv shade node to display a checkered texture in the viewport. You can easily change the size of the checker or use a different texture. There is also a group field that you can use for filtering.
Not ideal, but if you are working on extremely simple assets like walls, grounds or maybe terrains, it is totally fine to create the UVs in Houdini. Houdini's UV tools are not the best, but you will find yourself using them at some point. The uv texture node creates basic projections like cylindrical, orthographic, etc.
The uv unwrap node creates automatic UVs based on projection planes.
The uv layout node is a tool for packing your UVs. Using a fixed scale you can distribute the UVs in different UDIMs.
The auto uv node is actually pretty good. It is part of the game development tools shipped with Houdini. You need to activate this package first: just go to the shelf, click on the plus button and look for game development tools, then click on the update toolset icon to get the latest version.
The auto uv node has different methods for UVing and packing; it is worth trying them. It works really well, especially with messy objects.
The uv transform node deals with anything related to translating and rotating UVs. You don't really want to do this here in Houdini, but if you have to, this is the tool. I use it a lot if I need to re-distribute UDIM tiles.
An attribute create node (with the following parameters) allows you to create an attribute to move UVs to a specific UDIM. Then add a uv layout node and set the packing method to UDIM attribute.
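The UDIM convention that attribute is driving is just simple arithmetic, so here is a quick reference in plain Python (not Houdini-specific):

```python
import math

def udim_from_uv(u, v):
    """Return the UDIM tile number for a UV coordinate.
    Tiles run 1001-1010 along U and step by 10 along V."""
    return 1001 + int(math.floor(u)) + 10 * int(math.floor(v))

def uv_offset_for_udim(udim):
    """Return the integer (u, v) offset that places UVs inside a given UDIM tile."""
    index = udim - 1001
    return index % 10, index // 10

print(udim_from_uv(0.5, 0.5))    # 1001
print(udim_from_uv(1.2, 0.3))    # 1002
print(uv_offset_for_udim(1012))  # (1, 1)
```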
On-set tips: Creating high frequency detail /
In a previous post I mentioned the importance of having high frequency detail whilst scanning assets on-set. Sometimes if we don't have that detail we can just create it. Actually, sometimes this is the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.
If we are dealing with assets that are being used on set but won't appear in the final edit, chances are those assets are not painted at all. There is no need to spend resources on that, right? But we might need to scan those assets to create a virtual asset that will ultimately be used on screen.
As mentioned before, if we don't have enough surface detail it will be very difficult to scan assets using photogrammetry, so we need to create high frequency detail our own way.
Let's say we need to create a virtual asset of this physical mask. It is completely plain and white; we don't see much detail on its surface. We can create high frequency detail just by painting some dots or placing small stickers across the surface.
In this particular case I'm using a regular DSLR + multi zoom lens, a tripod, a support for the mask and some washable paint. I prefer to use small round stickers because they create fewer artifacts in the scan, but I ran out of them.
I created this support a while ago to scan fruits and other organic assets.
The first thing I usually do (if the object is white) is cover the whole object with neutral gray paint. It is way easier to balance the exposure photographing against gray than against white.
Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.
Once the material has been processed you should get a pretty decent scan. Probably an impossible task without creating all the high frequency detail first.
Export from Maya to Mari /
Yes, I know that Mari 3.x supports OpenSubdiv, but I've had some bad experiences already where Mari creates artefacts on the meshes.
So for now, I will be using the traditional way of exporting subdivided meshes from Maya to Mari. These are the settings that I usually use to avoid distortions, stretching and other common issues.
STMaps /
One of the first treatments that you will have to do to your VFX footage is removing lens distortion. This is crucial for some major tasks, like tracking, rotoscoping, image modelling, etc.
Copying lens information between different pieces of footage, or between footage and 3D renders, is also very common. When working with different software like 3DEqualizer, Nuke, Flame, etc., having a common and standard way to copy lens information seems like a good idea. UV maps are probably the easiest way to do this, as they are plain 32-bit EXR images.
- Using lens grids is always the easiest, fastest and most accurate way of delensing.
- Set the output type to displacement and look through the forward channel to see the UVs in the viewport.
- Write the image as a 32-bit .exr.
- This will output the UV information and can be read in any software.
- To apply the lensing information to your footage or renders, just use an STMap node connected to the footage and to the UV map (see the sketch below).
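If you prefer to wire this up with Nuke's Python API rather than by hand, the graph is tiny. A sketch with made-up file paths; the STMap UV channels knob is assumed to be called "uv", which may differ between Nuke versions.

```python
import nuke

# Footage to be (un)distorted and the 32-bit EXR UV map written earlier.
plate = nuke.nodes.Read(file="/shots/sq010/plate.%04d.exr")    # hypothetical path
uv_map = nuke.nodes.Read(file="/shots/sq010/lens_uv_map.exr")  # hypothetical path

# STMap warps its first input using the UV coordinates found in its second input.
warp = nuke.nodes.STMap()
warp.setInput(0, plate)
warp.setInput(1, uv_map)
warp["uv"].setValue("rgb")  # channels holding the UV data (knob name assumed)
```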
Bake from Nuke to UVs /
- Export your scene from Maya with the geometry and camera animation.
- Import the geometry and camera in Nuke.
- Import the footage that you want to project and connect it to a Project 3D node.
- Connect the cam input of the Project 3D node to the previously imported camera.
- Connect the img input of the ReadGeo node to the Project 3D node.
- Look through the camera and you will see the image projected onto the geometry through the camera.
- Paint or tweak whatever you need.
- Use a UVProject node and connect the axis/cam input to the camera and the secondary input to the ReadGeo.
- The projection option of the UVProject node should be set to off.
- Use a ScanlineRender node and connect its obj/scene input to the UVProject.
- Set the projection mode to UV.
- If you swap from the 3D view to the 2D view you will see your paint work projected onto the geometry UVs.
- Finally use a write node to output your DMP work (the whole graph is sketched in Python below).
- Render in Maya as expected.
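The same graph can be scripted, which is handy if you bake a lot of these. A rough sketch only: the class names (ReadGeo2, Camera2, Project3D), knob names and input indices below are based on Nuke's classic 3D system and may need adjusting for your version, and every file path is made up.

```python
import nuke

# Geometry and camera exported from Maya, plus the footage to project.
geo = nuke.nodes.ReadGeo2(file="/shots/sq010/set_geo.abc")                  # hypothetical path
cam = nuke.nodes.Camera2(read_from_file=True, file="/shots/sq010/cam.abc")  # hypothetical path
plate = nuke.nodes.Read(file="/shots/sq010/matte_paint.exr")                # hypothetical path

# Project the plate through the shot camera onto the geometry.
proj = nuke.nodes.Project3D()
proj.setInput(0, plate)   # img input
proj.setInput(1, cam)     # cam input
geo.setInput(0, proj)     # img input of the ReadGeo

# Re-map the projection into UV space and render it flat.
uvproj = nuke.nodes.UVProject()
uvproj.setInput(0, geo)   # geometry input (input order assumed)
uvproj.setInput(1, cam)   # axis/cam input
uvproj["projection"].setValue("off")      # knob/value names assumed

render = nuke.nodes.ScanlineRender()
render.setInput(1, uvproj)                # obj/scene input (index assumed)
render["projection_mode"].setValue("uv")  # render in UV space

out = nuke.nodes.Write(file="/shots/sq010/dmp_bake.%04d.exr")  # hypothetical path
out.setInput(0, render)
```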
Rendering Maya particles in Clarisse /
This is a very simple tutorial explaining how to render particle systems simulated in Maya inside Isotropix Clarisse. I already have a few posts about using Clarisse for different purposes; if you search by the tag "Clarisse" you will find all of them. I hope to publish more soon.
In this particular case we'll be using a very simple particle system in Maya. We are going to export it to Clarisse and use custom geometries and Clarisse's powerful scatterer system to render millions of polygons very fast and nicely.
- Once your particle system has been simulated in Maya, export it via Alembic, one of the standard 3D formats for exchanging information in VFX.
- Create an IBL rig in Clarisse. In a previous post I explained how to do it; it is quite simple.
- With Clarisse 2.0 it is very simple to do: just one click and you are ready to go.
- Go to File -> Import -> Scene and select the Alembic file exported from Maya.
- The scene comes with two types of particles, a grid acting as ground, and the render camera.
- Create a few contexts to keep everything tidy. Geo, particles, cameras and materials.
- In the geo context I imported the toy_man and the toy_truck models (.obj) and moved the grid from the main context to the geo context.
- I moved the two particle systems and the camera to their corresponding contexts.
- In the materials context I created 2 materials and 2 color textures for the models. Very simple shaders and textures.
- In the particles context I created a new scatterer called scatterer_typeA.
- In the geometry support of the scatterer add the particles_typeA and in the geometry section add the toy_man model.
- I’m also adding some variation to the rotation.
- If I move my timeline I will see the particle animation using the toy_man model.
- Do not forget to assign the material created before.
- Create another scatterer for the particles_typeB and configure the geometry support and the geometry to be used.
- Add also some rotation and position variation.
- As these models are quite big compared with the toy figurine, I’m offsetting the particle effect to reduce the presence of toy_trucks in the scene.
- Before rendering, I’d like to add some motion blur to the scene. Go to raytracer -> Motion Blur -> 3D motion blur. Now you are ready to render the whole animation.
Meshlab align to ground /
If you deal a lot with 3D scans, Lidars, photogrammetry and other heavy models, you probably use Meshlab. This "little" software is great at managing 75-million-polygon Lidars and other complex meshes. Experienced Photoscan users usually play with the align to ground tool to establish the correct axis for their resulting meshes.
If you look for this option in Meshlab you won't find it, at least I didn't. Please let me know if you know how to do this.
What I found is a clever workaround to do the same thing with a couple of clicks.
- Import your Lidar or photogrammetry, and also import a ground plane exported from Maya. This is going to be your floor, ground or base axis.
- This is a very simple example. The goal is to align the sneaker to the ground. I wish I dealt with such simple lidars at work :)
- Click on the align icon.
- In the align tool window, select the ground object and click on glue here mesh.
- Notice the star that appears before the name of the object indicating that the mesh has been selected as base.
- Select the lidar, photogrammetry or whatever geometry needs to be aligned and click on point based glueing.
- In this little window you can see both objects. Feel free to navigate around; it behaves like a normal viewport.
- Select one point at the base of the lidar by double clicking on it. Then do the same on one point of the base geo.
- Repeat the same process. You'll need at least 4 points.
- Done :)
Dealing with Ptex displacement /
What if you are working with Ptex but need to do some kind of Zbrush displacement work?
How can you render that?
As you probably know, Zbrush doesn't support Ptex. I'm not a super fan of Ptex (but I will be soon), but sometimes I do not have time, or simply don't want, to do proper UV mapping. So, if Zbrush doesn't export Ptex and my assets don't have any sort of UV coordinates, can't I use Ptex at all for my displacement information?
Yes, you can use Ptex.
- In this image below, I have a detailed 3D scan which has been processed in Meshlab to reduce the crazy amount of polygons.
- Now I have imported the model via obj into Zbrush. Only 500,000 polys, but it looks great.
- We are going to be using Zbrush to create a very quick retopology for this demo. We could use Maya or Modo to create a production ready model.
- Using the Zremesher tool, which is great for some types of retopology tasks, we get this low res model. Good enough for our purpose here.
- The next step is exporting both models, high and low resolution, as .obj
- We are going to use these models in Mudbox to create our Ptex based displacement. Yes, Mudbox does support Ptex.
- Once imported keep both of them visible.
- Export displacement maps. Have a look in the image below at the options you need to tweak.
- Basically you need to activate Ptex displacement, 32 bits, the texel resolution, etc.
- To set up your displacement in Maya and Vray just follow the 32-bit displacement rule.
- And that's it. You should be able to render your Zbrush details using Ptex now.
mmColorTarget /
This is a very quick demo of how to install on Mac and use the gizmo mmColorTarget or at least how I use it for my texturing/references and lighting process. The gizmo itself was created by Marco Meyer.
Combining Zbrush and Mari displacement maps /
Short and sweet (hopefully).
It seems to be quite a normal topic these days. Mari and Zbrush are commonly used by texture artists. Combining displacement maps in look-dev is a must.
I'll be using Maya and Arnold for this demo but any 3D software and renderer is welcome to use the same workflow.
- Using Zbrush displacements is a no-brainer. Just export them as 32-bit .exr and that's it. Set your render subdivisions in Arnold and leave the default settings for displacement. The zero value is always 0 and the height should be 1 to match your Zbrush sculpt.
- These are the maps that I'm using. First the Zbrush map and below the Mari map.
- No displacement at all in this render. This is just the base geometry.
- In this render I'm only using the Zbrush displacement.
- In order to combine Zbrush displacement maps and Mari displacement maps you need to normalise the ranges. If you use the same range your Mari displacement would be huge compared with the Zbrush one.
- Using a multiply node it is easy to control the strength of the Mari displacement. Connect the map to input1 and play with the values of input2.
- To mix both displacement maps you can use an average node. Connect the Zbrush map to input0 and the Mari map (multiply node) to input1.
- The average node can't be connected straight to the displacement node. Use a ramp node with the average node connected to its color, and then connect the ramp to the displacement's default input.
- In this render I'm combining both, Zbrush map and Mari map.
- In this other example I'm about to combine two displacements using a mask. I'll be using a Zbrush displacement as general displacement, and then I'm going to use a mask painted in Mari to reveal another displacement painted in Mari as well.
- As a mask I'm going to use the same symbol that I used before as displacement 2.
- And as new displacement I'm going to use a procedural map painted in Mari.
- The first thing to do is exactly the same operation we did before: control the strength of the Mari displacement using a multiply node.
- Then use another multiply node with the Mari map (multiply) connected to its input1 and the mask connected to its input2. This will reveal the Mari displacement only in the white areas of the mask.
- And the rest is exactly the same as before. Connect the Zbrush displacement to input0 of the average node and the Mari displacement (multiply) to input1 of the average node. Then connect the average node to the ramp's color and the ramp to the displacement's default input (the whole chain is summarised in the sketch after these steps).
- This is the final render.
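Numerically, the node graph above boils down to something like this per texel. A conceptual sketch only: the names are made up, and whether the final combine is a sum or a true average depends on the operation you set on the average node.

```python
def combine_displacement(zbrush_d, mari_d, strength=0.05, mask=1.0, average=False):
    """Combine a ZBrush displacement sample with a Mari displacement sample.

    zbrush_d : 32-bit float displacement from ZBrush (already in scene scale)
    mari_d   : raw Mari displacement value, usually much larger, so it is
               scaled down by 'strength' (the first multiply node)
    mask     : 0-1 mask revealing the Mari displacement (the second multiply node)
    """
    mari_scaled = mari_d * strength * mask
    combined = zbrush_d + mari_scaled
    return combined * 0.5 if average else combined

# A texel where the mask is fully white:
print(combine_displacement(0.02, 1.0, strength=0.05, mask=1.0))  # 0.07
```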
VFX footage input/output /
This is a very quick and dirty explanation of how footage, and especially colour, is managed in a VFX facility.
Shooting camera to Lab
The RAW material recorded on-set goes to the lab. In the lab it is converted to .dpx, which is the standard film format. Sometimes they might use exr, but it's not that common.
A lot of movies are still being filmed with film cameras, in those cases the lab will scan the negatives and convert them to .dpx to be used along the pipeline.
Shooting camera to Dailies
The RAW material recorded on-set goes to dailies. The cinematographer/DP or the DI department applies a primary LUT or color grading to be used throughout the project.
Original scans with LUT applied are converted to low quality scans and .mov files are generated for distribution.
Dailies to Editorial
The editorial department receives the low quality scans (Quicktimes) with the LUT applied.
They use these files to make the initial cuts and bidding.
Editorial to VFX
VFX facilities receive the low quality scans (Quicktimes) with the LUT applied. They use these files for bidding.
Later on they will use them as reference for color grading.
Lab to VFX
The lab provides high quality scans to the VFX facility. This is pretty much RAW material and the LUT needs to be applied.
The VFX facility will have to apply the film LUT to the work they create from scratch.
When the VFX work is done, the VFX facility renders out exr files.
VFX to DI
DI will do the final grading to match the Editorial Quicktimes.
VFX/DI to Editorial
High quality material produced by the VFX facility goes to Editorial to be inserted in the cuts.
The basic practical workflow would be:
- Read raw scan data.
- Read Quicktime scan data.
- DPX scans are usually in LOG color space.
- EXR scans are usually in LIN (linear) color space.
- Apply the LUT and other color grading to the RAW scans to match the Quicktime scans.
- Render out to Editorial using the same color space used for bringing in footage.
- Render out Quicktimes using the same color space used for viewing. If viewing, for example, in sRGB you will have to bake the LUT.
- Good Quicktime settings: colorspace sRGB, codec Avid DNxHD, 23.98 fps, depth millions of colors, RGB levels, no alpha, 1080p/23.976 DNxHD 36 8-bit.
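In Nuke terms, the in/out part of that list could look roughly like this. A sketch with made-up paths; the colorspace names below are the legacy Nuke ones and will differ if your project runs on OCIO.

```python
import nuke

# DPX scans usually come in as log (Cineon); EXR renders are linear.
scan = nuke.nodes.Read(file="/shots/sq010/scan.%04d.dpx")   # hypothetical path
scan["colorspace"].setValue("Cineon")

# Deliver full-quality EXRs in the same space the footage came in.
exr_out = nuke.nodes.Write(file="/shots/sq010/comp.%04d.exr")
exr_out.setInput(0, scan)
exr_out["colorspace"].setValue("linear")

# Review Quicktimes get the viewing transform baked in (add the show LUT
# upstream of this Write if it needs to be baked as well).
mov_out = nuke.nodes.Write(file="/shots/sq010/comp_review.mov")
mov_out.setInput(0, scan)
mov_out["file_type"].setValue("mov")
mov_out["colorspace"].setValue("sRGB")
```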
Sketch shader /
First attempt to create a shader that looks like rough 2D sketches.
I will definitely put more effort on this in the future.
I'm pretty much combining three different pen strokes.
Introduction to scatterers in Clarisse /
Scatterers in Clarisse are just great. They are very easy to control, reliable and they render in no time.
I've been using them for matte painting purposes: just feed them with a bunch of different trees to create a forest in 2 minutes, add some nice lighting and render at an insane resolution. Then use all the 3D material with all the needed AOVs in Nuke and you'll have full control to create stunning matte paintings.
To make this demo a bit more fun, instead of trees I'm using cool Lego pieces :)
- Create a context called obj and import the grid.obj and the toy_man.obj
- Create another context called shaders and create generic shaders for the objs.
- Also create two textures and load the images from the hard drive.
- Assign the textures to the diffuse input of each shader and then assign each shader to the corresponding obj.
- Set the camera to see the Lego logo.
- Create a new context called crowd, and inside of it create a point cloud and a scatterer.
- In the point cloud set the parent to be the grid.
- In the scatterer set the parent to be the grid as well.
- In the scatterer set the point cloud as geometry support.
- In the geometry section of the scatterer add the toy_man.
- Go back to the point cloud and in the scattering geometry add the grid.
- Now play with the density. In this case I'm using a value of 0.7
- As you can see, all the toy_men start to populate the image.
- In the decimate texture add the Lego logo. Now the toy_men stick to the Logo.
- Add some variation in the scatterer position and rotation.
- That's it. Did you realise how easy it was to set up this cool effect? And did you check the polycount? 108.5 million :)
- In order to make this look a little bit better, we can remove the default lighting and do some quick IBL setup.
Photography assembly for matte painters /
In this post I'm going to explain my methodology to merge different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting, for that I use ptGui and there is not a better tool for it.
I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be composed using only 2 images or maybe 15, it doesn't matter.
This method is much more complicated and requires more human time than using ptGui or any other stitching software. But the power of this method is that you can use it with HDR footage recorded with a Blackmagic camera, for example.
The pictures that I'm using for this tutorial were taken from a nodal point base, but they are not calibrated or anything like that. In fact they don't need to be. Obviously, taking pictures from a nodal point rotation base will help a lot, but the good thing about this technique is that you can use different angles taken from different positions, and also different focal lengths and film backs from various digital cameras.
- I'm using these 7 images taken from a bridge in Chiswick, West London. The resolution of the images is 7000px wide so I created a proxy version around 3000px wide.
- All the pictures were taken with the same focal length and exposure, and with the ISO and white balance locked.
- We need to know some information about these pictures. In order to blend the images in to a panoramic image we need to know the focal length and the film back or sensor size.
- Connect a view meta data node to every single image to check this information. In this case I was the person who took the photos, so I know all of them have the same settings, but if you are not sure about the settings, check one by one.
- I can see that the focal length is 280/10 which means the images were taken using a 28mm lens.
- I don't see film back information but I do see the camera model, a Nikon D800. If I google the film back for this camera I see that the size is 35.9mm x 24mm.
- Create a camera node with the information of the film back and the focal length.
- At this point it would be a good idea to correct the lens distortion in your images. You can use a lens distortion node in Nuke if you shot a lens distortion grid, or just eyeball it.
- In my case I'm using the great lens distortion tools in Adobe Lightroom, but this is only possible because I'm using stills. You should always shoot lens distortion grids.
- Connect a card node to the image and remove all the subdivisions.
- Also deactivate the image aspect to have 1:1 cards. We will fix this later.
- Connect a transform geo node to the card, and its axis input to the camera.
- If we move the camera, the card is attached to it all the time.
- Now we are about to create a custom parameter to keep the card aligned to the camera all the time, with the correct focal length and film back. Even if we play with the camera parameters, the image will be updated automatically.
- In the transform geo parameters, RMB and select manage user knobs, then add a floating point slider. Call it distance. Set the min to 0 and the max to 10.
- This will allow us to place the card in space always relative to the camera.
- In the transform geo's translate z, press = to type an expression and write -distance.
- Now if we play with the custom distance value it works.
- Now we have to refer to the film back and focal length so the card matches the camera information when it's moved or rotated.
- In the x scale of the transform geo node type this expression: (input1.haperture/input1.focal)*distance, and in the y scale type: (input1.vaperture/input1.focal)*distance, input1 being the camera axis (see the quick check after this list).
- Now if we play with the distance custom parameter everything is perfectly aligned.
- Create a group with the card, camera and transform geo nodes.
- Remove the input2 and input3 and connect the input1 to the card instead of the camera.
- Go out of the group and connect it to the image. There are usually refreshing issues so cut the whole group node and paste it. This will fix the problem.
- Manage knobs here and pick the focal length and film back from the camera (just for checking purposes)
- Also pick the rotation from the camera and the distance from the transform geo.
- Having these controls here we won't have to go inside of the group if we need to use them. And we will.
- Create a project 3D node; connect the camera to its camera input and the group's input1 to its image input.
- Create a switch node below the transform geo node and connect its input1 to the project 3D node.
- Add another custom control to the group parameters. Use the pulldown choice, call it mode and add two lines: card and project 3D.
- In the switch node add an expression: parent.mode
- Put the mode to project 3D.
- Add a sphere node, scale it big and connect it to the camera projector.
- You will see the image projected on the sphere instead of being rendered on a flat card.
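As a quick sanity check of those scale expressions, the card is simply being sized to the camera frustum at the given distance. Plain Python with the Nikon D800 numbers from earlier:

```python
def card_scale(aperture_mm, focal_mm, distance):
    """Frustum (and card) size at 'distance', in the same units as 'distance'.
    Mirrors the (aperture/focal)*distance expression."""
    return (aperture_mm / focal_mm) * distance

h_aperture, v_aperture, focal = 35.9, 24.0, 28.0  # Nikon D800 film back, 28mm lens
distance = 5.0

print(card_scale(h_aperture, focal, distance))  # ~6.41 units wide
print(card_scale(v_aperture, focal, distance))  # ~4.29 units tall
```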
Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it is nice to have quick controls to switch between them.
In this tutorial we are going to use the card mode. For now leave it as card and remove the sphere.
- Set the camera in the viewport and lock it.
- Now you can zoom in and out without losing the camera.
- Set the horizon line playing with the rotation.
- Copy and paste the camera projector group and set the horizon in the next image by doing the same as before: locking the camera and playing with the camera rotation.
- Create a scene node and add both images. Check that all the images have an alpha channel. Auto alpha should be fine as long as the alpha is completely white.
- Look through the camera of the first camera projector and lock the viewport. Zoom out and start playing with the rotation and distance of the second camera projection until both images are perfectly blended.
- Repeat the process with every single image. Just do the same as before: look through the previous camera, lock it, zoom out and play with the controls of the next image until they are perfectly aligned.
- Create a camera node and call it shot camera.
- Create a scanline render node.
- Create a reformat node and type the format of your shot. In this case I'm using a super 35 format which means 1920x817
- Connect the obj/scene input of the scanline render to the scene node.
- Connect the camera input of the scanline render to the shot camera.
- Connect the reformat node to the bg input of the scanline render node.
- Look through the scanline render in 2D and you will see the panorama through the shot camera.
- Play with the rotation of the camera in order to place the panorama in the desired position.
That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.
- Create another scanline render node and change the projection mode to spherical. Connect it to the scene.
- Create a reformat node with an equirectangular format and connect it to the bg input of the scanline render. In this case I'm using a 4000x2000 format.
- Create a sphere node and connect it to the spherical scanline render. Put a mirror node in between to invert the normal of the sphere.
- Create another scanline render and connect it's camera input to the shot camera.
- Connect the bg input of the new scanline render to the shot reformat node (super 35).
- Connect the scn/obj input of the new scanline render to the sphere node.
- That's all that you need.
- You can look through the scanline render in the 2D and 3D viewport. We got all the images projected in 3D and rendered through the shot camera.
You can download the sample scene here.
Linear Workflow in Maya with Vray 2.0 /
I'm starting a new project with V-Ray 2.0 for Maya. I have never worked with this render engine before, so first things first.
One of the first things I need is a nice neutral light rig for testing shaders and textures. Setting up a linear workflow is one of my priorities at this point.
Find below a quick way to set this up.
- Set up your gamma. In this case I'm using 2.2.
- Check "don't affect colors" if you don't want the gamma correction baked into the final render; you'll then have to correct your gamma in post. Leave it unchecked to bake the gamma into the render. Either way, no big deal.
- The linear workflow option is something created by Chaos Group to fix old V-Ray scenes which don't use lwf. You shouldn't use it at all.
- Click on affect swatches to see color pickers with the gamma applied.
- Once you are working with the gamma applied, you need to correct your color textures. There are two different ways to do it.
- First option: add a gamma correction node to each color texture node. In this case I'm using gamma 2.2, which means I need to use a value of 0.455 (1/2.2) on my gamma node.
- Second option: instead of using gamma correction nodes for each color texture node, you can click on the texture node and add a V-Ray attribute to control this.
- By default all the texture nodes are read as linear. Change your color textures to be read as sRGB.
- Click on view as sRGB in the V-Ray frame buffer, otherwise you'll see your renders in the wrong color space.
- This is the difference between rendering with the option “don’t affect colors” enabled or disabled. As I said, no big deal.
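The math behind this linear workflow is worth keeping in mind. A small Python reminder of what the 0.455 gamma node and the sRGB view are actually doing, using the simple power-law approximation rather than the exact sRGB curve:

```python
def decode_texture(value, gamma=2.2):
    """Linearise a colour texture value, which is what a gamma node set to
    0.455 (1/2.2) effectively does before rendering."""
    return value ** gamma

def encode_for_display(value, gamma=2.2):
    """Apply the display gamma to a linear render (the 'view as sRGB' toggle)."""
    return value ** (1.0 / gamma)

linear = decode_texture(0.5)        # 0.5 in the texture becomes ~0.218 linear
print(linear)                       # 0.2176...
print(encode_for_display(linear))   # back to ~0.5 for display
print(1.0 / 2.2)                    # 0.4545..., the value used on the gamma node
```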
Linear Workflow in Softimage /
A walkthrough video about setting up linear workflow in Softimage. The audio is in Spanish only, but it's quite simple to follow by watching the movie.