First attempt to create a shader that looks like rough 2D sketches.
I will definitely put more effort into this in the future.
I'm pretty much combining three different pen strokes.
Rough outline.
Quick definition of volume.
Final sharp detail.
Quick exercise using Modo Replicators. Lots of fun.
Scatterers in Clarisse are just great. They are very easy to control, reliable and they render in no time.
I've been using them for matte painting purposes: just feed them a bunch of different trees to create a forest in two minutes. Add some nice lighting and render at an insane resolution. Then take all the 3D material, with all the needed AOVs, into Nuke and you'll have full control to create stunning matte paintings.
To make this demo a bit more fun, I'm using cool Lego pieces instead of trees :)
Now play with the density. In this case I'm using a value of 0.7.
As you can see, all the toy_men start to populate the image.
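If you prefer to script this step, the density can also be set from Clarisse's script editor. Here is a minimal sketch; the scatterer path is my own assumption, so point it at your actual SceneObjectScatterer:

```python
# Run inside Clarisse's script editor (Python).
# "project://scene/lego_scatterer" is an assumed path; replace it with
# the path to your own scatterer.
ix.cmds.SetValue("project://scene/lego_scatterer.density", ["0.7"])
```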
Final render.
We have a new trailer for Exodus: Gods and Kings.
In this one we can already see some of the cool stuff that we did at Double Negative VFX.
In this post I'm going to explain my methodology for merging different pictures or portions of an environment in order to create a panoramic image to be used for matte painting purposes. I'm not talking about creating equirectangular panoramas for 3D lighting; for that I use ptGui and there is no better tool for the job.
I'm talking about blending different images or footage (video) to create a seamless panoramic image ready to use in any 3D or 2D program. It can be composed of only 2 images or maybe 15; it doesn't matter.
This method is much more complicated and requires more hands-on time than using ptGui or any other stitching software. But its power is that you can use it with HDR footage recorded with a Blackmagic camera, for example.
The pictures that I'm using for this tutorial were taken with a nodal point base, but they are not calibrated; in fact they don't need to be. Obviously, taking pictures from a nodal point rotation base will help a lot, but the good thing about this technique is that you can use different angles taken from different positions, and also different focal lengths and film backs from various digital cameras.
Depending on your pipeline and your workflow you may want to use cards or projectors. At some point you will need both of them, so it's nice to have quick controls to switch between them.
In this tutorial we are going to use the card mode. For now, leave it as a card and remove the sphere.
That's it if you only need to see the panorama through the shot camera. But let's say you also need to project it in a 3D space.
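If you like to build this setup with a script, here is a minimal Nuke Python sketch of the two modes. The node classes are standard Nuke nodes; just double-check the Project3D input order in your version:

```python
# Minimal sketch: one Read feeding both a card and a projector, so it is
# quick to switch between the two modes.
import nuke

read = nuke.createNode("Read")      # the stitched panorama
cam = nuke.createNode("Camera2")    # shot (or projection) camera

# Card mode: look at the panorama through the shot camera.
card = nuke.createNode("Card2")
card.setInput(0, read)

# Projector mode: project the panorama onto 3D geometry.
proj = nuke.createNode("Project3D")
proj.setInput(0, read)              # image input
proj.setInput(1, cam)               # camera input; check the order in your build
```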
You can download the sample scene here.
Every single facility and 3D artist around the globe has their own way of working with our Lighting Checkers, based on the render engine they use, their shaders, and their pipeline in general. But just to make your life a bit easier, akromatic wants to provide you with a digital version of our Lighting Checkers to quickly match our physical version.
In this case we are offering you digital akromatic Lighting Checkers for the Arnold renderer.
We'll be posting other render engines soon.
Download here.
I will continue writing about my experiences working with Clarisse. This time I'm gonna talk about working with layers and passes, a very common topic in the rendering world no matter what software you are using.
Clarisse allows you to create very complex organization systems using contexts, layers/passes and images. In addition to that we can compose all the information inside Clarisse in order to create different outputs for compositing.
Clarisse has very clever organization methods for huge scenes.
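As a tiny example of that organization, this is how a context and an image could be created from Clarisse's script editor. The names and paths are my own assumptions:

```python
# Minimal sketch for Clarisse's script editor (Python).
# "CHAR_KEY" and "project://scene" are assumed names; adapt them.
ctx = ix.cmds.CreateContext("CHAR_KEY", "Global", "project://scene")
img = ix.cmds.CreateObject("beauty", "Image", "Global", ctx.get_full_name())
print(img.get_full_name())
```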
I've been using Isotropix Clarisse in production for a little while now. Recently the VFX facility where I work announced the adoption of Clarisse as its primary Look-Dev and Lighting tool, so I decided to start talking about this powerful raytracer on my blog.
Today I'm writing about how to set up Image Based Lighting.
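As a small scripted companion to the UI steps, this sketch loads the HDR map itself. The context and the file path are assumptions, and how you plug the map into your environment depends on your Clarisse version:

```python
# Minimal sketch for Clarisse's script editor (Python).
# The context and the .exr path are assumptions; adapt them to your scene.
tex = ix.cmds.CreateObject("hdr_env", "TextureMapFile", "Global", "project://scene")
ix.cmds.SetValue(tex.get_full_name() + ".filename", ["/hdri/environment_2k.exr"])
```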
A few days ago I did my first tests in Colorway. My idea is to use Colorway as a texturing and look-development tool for VFX projects.
I think it can be a really powerful and artist-friendly software to work on different types of assets.
It is also a great tool to present individual assets, because you can do quick and simple post-processing tasks like color correction, lens effects, etc. And of course Colorway allows you to create different variations of the same asset in no time.
With this second test I wanted to create an entire asset for VFX, make different variations and put everything together in a dailies template or similar to showcase the work.
At the end of the day I'm quite happy with the result and the workflow combining Modo, Mari and Colorway. I found some limitations, but I truly believe that Colorway will soon fit my needs as a Texture Painter and Look-Dev Artist.
Transferring textures
One of the limitations that I found as a Texture Painter is that Colorway doesn't support UDIMs yet. I textured this character some time ago at home using Mari, following VFX standards, and of course I'm using UDIMs, actually around fifty 4K UDIMs.
I had to create a new UV mapping using the 1001 UDIM only. In order to keep enough texture resolution I divided the asset into different parts: head, both arms, both legs, pelvis and torso.
Then, using the great "transfer" tool in Mari, I baked the high resolution UDIM-based textures onto the new low resolution UVs that live in a single UV space. I created one 8K texture for each part of the asset. I'm using only three texture channels: Color, Specular and Bump.
Layer Transfer tool in Mari.
All the new textures already baked into the default UV space, 1001.
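Exporting the flattened bakes is also easy to script from Mari's Python console. A minimal sketch, assuming my three channels and an output path template of my own:

```python
# Minimal sketch for Mari's Python console.
# Exports the flattened Color, Specular and Bump channels of the current
# object; the output path template is an assumption.
geo = mari.geo.current()
for name in ("Color", "Specular", "Bump"):
    chan = geo.channel(name)
    chan.exportImagesFlattened("/textures/" + geo.name() + "_$CHANNEL.$UDIM.tif")
```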
My lighting setup in Modo couldn't be simpler. I'm just using an equirectangular HDRI map of Beverly Hills. This image actually ships with Modo.
Image Based Lighting works great in Modo, and it is also very easy to mix different IBLs in the same scene.
Shading-wise it is also quite simple: just one shading layer with Color, Specular and Bump maps connected. I'm using one shader for each part of the asset.
The render takes only around 3 minutes on my tiny MacBook Air.
Rendering for Colorway takes longer than that, but obviously you will save a lot of time later.
Once in Colorway I can easily play with colours and textures. I created a color texture variation in Mari, and now in Colorway I can plug it in and see the shading changes in no time.
All the different parts exported from Modo are on the left side toolbar.
On the right side all the lights will be available to play with. In this case I only have the IBL.
All the materials are listed on the right side. It is possible to change color, intensity and diffuse textures. This gives you a huge amount of freedom to create different variations of the same asset.
I really like the possibility of using post-processing effects like lens distortion or dispersion. You get quick visual feedback on very common lens effects used in VFX projects.
Finally I created a couple of color variations for this asset.
Notes
A couple of things that I noticed while working on this asset:
News from akromatic.
"Based on the feedback and requirements of some VFX Facilities, we decided to release a new flavour of our calibrated paint.
Some Look-Development Artists prefer to use grey balls with higher specular components and other Artists are more comfortable using less shiny spheres.
It is a matter of personal preference, so let us know which one is your flavour.
Original spheres: Gloss average around 30%
New spheres: Gloss average around 18%
Both of them are calibrated as Neutral Greys and hand painted."
New grey sphere, half hit by the sun, half in shade.
New grey flavour, close up. Soft lighting transition.
The mirror side remains the same. Carefully polished by hand.
Mirror side, close up.
All the information here.
A few months ago I wrote a post about retopology tools in Maya. I'm not using those tools anymore, now I deal with retopology using Modo.
I'm doing a lot of retopo these days working with 3D scanners and decimated Zbrush models coming from the art department.
Pretty much all the 3D packages these days have similar retopology tools, but working in Modo I feel more freedom and I'm more comfortable doing this kind of task.
These are the tools that I usually use.
To carry out the retopology I use the "topology pen tool", which combines all the other retopology options. I use this tool for 90% of the work.
These are some of its options.
A few days ago (or weeks) The Foundry released their latest cool product called "Colorway", and they did it for free.
Colorway is a product created to help designers with their workflow, especially when dealing with color changes, texture updates, lighting, etc. Looks in general.
This software allows us to change those small things once the render is done. We can do it in real time, without waiting long hours to render again, changing different things related to shading and lighting.
This is obviously quite an advantage when we are dealing with clients and they ask us for small changes related to color, saturation, brightness, etc. We don't need to render again anymore; we can just use Colorway to make those changes live in no time.
Even the clients can change some stuff and send us back a file with their changes.
Great idea, great product.
I'm not a designer; I'm a VFX artist doing mainly textures and look-development, and even if Colorway wasn't designed for VFX, it can potentially be used in the VFX industry, at least for some tasks.
There are a few things that I'd like to have inside Colorway in order for it to be a more productive texturing & look-dev tool, but so far it can be used in some ways to create different versions of the same asset.
To test Colorway I used my model of War Machine.
A few things that I'd like to see in future versions of Colorway in order to have more control and power for look-dev tasks.
We have a new high resolution HDRI panorama for VFX at akromatic.com
Check it out here.
Just a few screenshots of my process working on the skateboard images that I posted a few days ago.
Just a couple of quick renders riding my skateboard :)
Sometimes Mari seems to have small issues with the texture bleeding.
I just realized that sometimes the bleeding doesn't happen at all. If you find yourself with this problem, the best solution is probably to force Mari to do the texture bleeding.
Only 2 steps are needed.
I will be posting different assets on this website that I think can be useful for other VFX artists, students and amateurs.
I'm starting with one of my favourite light-rigs for look development and model checking.
Look Development Light-rig 0001 for Arnold
Quick sketches for the construction of a fingerboard skatepark.
Let's go for a ride!
Processing Lidar scans to be used in production is a very tedious task, especially when working on big environments that generate huge point clouds with millions of polygons, which are very hard to move around in any 3D viewport.
To clean those point clouds, the best tools are usually the ones that the 3D scanner manufacturers ship with their products. But sometimes they are quite complex and not artist friendly.
Also, most of the time we receive the Lidar from on-set workers and don't have access to those tools, so we have to use mainstream software to deal with this task.
If we are talking about very complex Lidar, we will have to spend a good amount of time cleaning it. But if we are dealing with simple Lidar of small environments, props or characters, we can clean it quite easily using MeshLab or Zbrush.
An alternative to MeshLab is Zbrush, but the problem with Zbrush is its memory limitation: Lidar scans are very big point clouds and Zbrush doesn't manage memory very well.
But you can combine MeshLab and Zbrush to process your Lidars.
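As a starting point, here is a hedged sketch using PyMeshLab (MeshLab's Python module) to decimate a meshed Lidar before sending it to Zbrush. The file names are assumptions, and the filter name varies between PyMeshLab releases:

```python
# Decimate a meshed Lidar scan with PyMeshLab before moving it to Zbrush.
# File names are assumptions; "meshing_decimation_quadric_edge_collapse"
# is the filter name in recent PyMeshLab releases and may differ in older ones.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("lidar_scan.obj")
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=500000)
ms.save_current_mesh("lidar_scan_decimated.obj")
```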