Photogrammetry

Professional photogrammetry 04 by Xuan Prada

Hello!

This is the first of two videos about drone photogrammetry in my series "Professional photogrammetry".
In this video I will explain everything I know about scanning for visual effects using commercial drones.

We will talk about shooting patterns, surveying extra data, equipment, flight missions, etc. And we will also start processing some of the data sets that I have available for you.
The video is around 3 hours long and it will continue in the next episode of this series.

Head over to my Patreon for more information.

Thanks!
Xuan.

Intro to gaussian splatting by Xuan Prada

Bibliography

  • https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

  • https://github.com/aras-p/UnityGaussianSplatting

  • https://www.youtube.com/watch?v=KFOy354zf9E

Hardware

- You need an Nvidia GPU with at least 24GB VRAM.
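
- A quick way to check your GPU model and available VRAM is to open a command line and type nvidia-smi (this assumes the Nvidia driver is already installed).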

Software

  • Git https://git-scm.com/downloads

  • Once installed, open a command line and type git --version to check if it's working.

  • Anaconda https://www.anaconda.com/download

  • It will install all the packages and wrappers that you need.

  • CUDA toolkit 11.8 https://developer.nvidia.com/cuda-toolkit-archive

  • Once installed open a command line and type nvcc --version to check if it's working.

  • Visual Studio with C++ https://visualstudio.microsoft.com/vs/older-downloads/

  • Once installed open Visual Studio Installer and install Desktop development with C++.

  • Colmap https://github.com/colmap/colmap/releases

  • This tool is for computing camera poses.

  • Add it to environment variables.

  • Edit environment variables, double click on the "Path" variable, add a new entry and paste the path where Colmap is stored.
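
  • If you prefer the command line, you can append Colmap to your user PATH with setx. This is just a sketch, assuming Colmap is stored in C:\colmap; note that setx truncates very long PATH values, so the GUI method above is safer.

  • setx PATH "%PATH%;C:\colmap"

  • Open a new command line and type colmap help to check that it's found.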

  • ImageMagick https://imagemagick.org/script/download.php

  • This tool is for resizing images.

  • Test it by typing these lines one by one in the command line.

  • magick logo: logo.gif

  • magick identify logo.gif

  • magick logo.gif win:
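
  • As a resizing example, this would downscale every .jpg in the current folder to 50% in place. mogrify overwrites the originals, so work on a copy; 50% is just an example value.

  • magick mogrify -resize 50% *.jpg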

  • FFMPEG https://ffmpeg.org/download.html

  • Add it to environment variables.

  • Open a command line and type ffmpeg to check if it's working.

  • To convert a video to photos, go to the folder where ffmpeg is downloaded.

  • Type ffmpeg.exe -i pathToVideo.mov -vf fps=2 out%04d.jpg
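
  • fps=2 extracts two frames per second; raise or lower it to get more or fewer photos. If the jpgs look too compressed, adding a quality flag should help; -qscale:v 2 is close to the maximum jpg quality.

  • ffmpeg.exe -i pathToVideo.mov -vf fps=2 -qscale:v 2 out%04d.jpg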

  • Finally, restart your computer.

How to capture gaussian splats?

  • Same rules as photogrammetry, but fewer images are needed.

  • Do not move too fast; we don't want blurry frames.

  • Take between 200 and 1,000 photos.

  • Use a fixed exposure, otherwise exposure changes will create flickering in the final model.

Processing

  • Create a folder called "dataset".

  • Inside create another folder called "input" and place all the photos.

  • Now we need to use Colmap to obtain the camera poses. You could use RealityCapture or Metashape to do the same thing.

  • We can do this from the command line, but for simplicity let's use the GUI (a command-line sketch follows this list).

  • Open Colmap, File - New. Set the database to your "dataset" folder and call it database.db. Set the images to the "input" folder. Save.

  • Processing - Feature extraction. Enable "shared for all images" if there is no change of zoom in your photos. Click on extract. This will take a few minutes.

  • Processing - Feature matching. Sequential is faster, exhaustive is more precise. This will take a few minutes.

  • Save the Colmap scene in "dataset" - "colmap". (create the folder).

  • Reconstruction - Reconstruction options. Uncheck multiple_models as we are reconstructing a single scene.

  • Reconstruction - Start reconstruction. This is the longest step, potentially hours, depending on the number of photos.

  • Once Colmap has finished you will see the camera poses and the sparse pointcloud.

  • File - Export model and save it in "dataset" - "distorted" - "sparse" - "0" (create the directories).
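
  • For reference, the same steps from the command line would look roughly like this. This is a sketch assuming the "dataset" layout above and that Colmap is on your PATH; check colmap help for the exact options in your version.

  • colmap feature_extractor --database_path dataset/database.db --image_path dataset/input --ImageReader.single_camera 1

  • colmap exhaustive_matcher --database_path dataset/database.db

  • colmap mapper --database_path dataset/database.db --image_path dataset/input --output_path dataset/distorted/sparse --Mapper.multiple_models 0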

Train the 3D gaussian splatting model

  • Open a command line and type git clone https://github.com/graphdeco-inria/gaussian-splatting --recursive

  • This will be downloaded into your user folder, as gaussian-splatting.

  • Open an anaconda prompt and go to the directory where the gaussian-splatting was downloaded.

  • Type these lines one at a time.

  • SET DISTUTILS_USE_SDK=1

  • conda env create --file environment.yml

  • conda activate gaussian_splatting

  • cd to the folder where gaussian-splatting was downloaded.

  • Type these lines one at a time.

  • pip install plyfile tqdm

  • pip install submodules/diff-gaussian-rasterization

  • pip install submodules/simple-knn

  • Before training the model we need to undistort the images.

  • Type python convert.py -s $FOLDER_PATH --skip_matching

  • This is going to create a folder called sparse and another one called stereo, and also a couple of files.

  • Train the model.

  • python train.py -s $FOLDER_PATH -m $FOLDER_PATH/output

  • This will train the model and export two point clouds, one at 7,000 iterations and another at 30,000 iterations.
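
  • At this point your "dataset" folder should look roughly like this (a sketch; exact file names may vary between versions of the repo):

    dataset/
        input/ (original photos)
        database.db (Colmap database)
        colmap/ (saved Colmap project)
        distorted/sparse/0/ (exported Colmap model)
        images/ (undistorted photos created by convert.py)
        sparse/0/ (undistorted camera poses)
        output/point_cloud/iteration_7000/point_cloud.ply
        output/point_cloud/iteration_30000/point_cloud.ply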

Visualizing the model

  • Download the viewer here: https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/binaries/viewers.zip

  • From a terminal: SIBR_gaussianViewer_app -m $FOLDER_PATH/output

  • Unity 2022.3.9f1

  • Load the project. https://github.com/aras-p/UnityGaussianSplatting

  • Tools - Gaussian splats - Create.

  • Select the pointcloud, create.

  • Select the gaussian splats game object and attach the pointcloud.

  • Do your thing!

Professional photogrammetry 03 by Xuan Prada

Hello patrons,

This is the third episode of "Professional photogrammetry".
Two videos, around 4 hours in total, where we will talk about photometric stereo scanning. We will discuss the different setups that we have available for this kind of scanning. Then we will work on three different exercises, starting with the calibration of the setup, followed by some fabric and vegetation scanning.

Finally, we will take a look at how to use coded targets for variable reduction in our scans.
Hope you learn something new and have some fun.

All the info on my Patreon page.

Thanks!
Xuan.

Professional photogrammetry 02 by Xuan Prada

Here is the trailer for "Professional photogrammetry episode 02".

In this 4-ish hour video I will talk about scanning in natural environments. We will cover everything needed to scan and process two different types of natural assets.

The first is a full 3D asset, the second one is a ground surface. The scanning and processing workflows are completely different for each, but hopefully after this video you will be able to grow your own library of natural environment assets.

We go through equipment, scanning patterns, camera settings, etc. Then we will process everything in order to create final high quality assets ready for visual effects and real time.

I hope you enjoy this video; if you like it, I guess I will do more videos about scanning in the wild/city.

Thanks!
Xuan.

All the info on my Patreon.

Professional photogrammetry 01 by Xuan Prada

Hello patrons,

Let's take a break from the USD training, we'll come back to it in the next video.
For now, let's enjoy a 4-hour video about professional photogrammetry.
I reckon this will be a short series, maybe 3 videos about photogrammetry acquisition and processing.

In today's video we will talk about cross polarization in a controlled environment for capturing 3D assets. We will discuss how polarization works, camera gear, how to take photos properly and how to process the data to create nice looking assets for VFX and real time.

All the information on my Patreon.

I hope you like it.
Thanks!
Xuan.

Scan based materials on a budget by Xuan Prada

Hello patrons,

Last post of the year!

In this two-and-a-half-hour video I will show you my workflow to create smart materials based on photogrammetry, a technique widely used in VFX and the game industry.

But we won't be using special hardware or very expensive photographic equipment; we are going to use only a cheap digital camera or even a smartphone.

In this video you will learn:

- How to shoot photogrammetry for material creation.
- How to process photogrammetry in Reality Capture.
- How to bake texture maps from high resolution geometry in Zbrush.
- How to create smart materials in Substance Designer for Substance Painter or for 3D applications.
- How to use photogrammetry based materials in real time engines.

Thanks for your support and see you in 2022!
Stay safe.

Xuan.

Houdini topo transfer - aka wrap3 by Xuan Prada

For a little while I have been using Houdini's topo transfer tools instead of Wrap 3. I'm not saying Houdini can fully replace Wrap 3, but for some common and easy tasks, like wrapping generic humans to scans for both modelling and texturing, I can definitely use it instead of Wrap 3.

Wrapping generic humans to scans

  • This technique will allow you to easily wrap a generic human to any actor’s scan to create digital doubles. This workflow can be used while modeling the digital double and also while texturing it. Commonly, a texture artist gets a digital double production model in t-pose or a similar pose that doesn’t necessarily match the scan pose. It is a great idea to match both poses to easily transfer color details and surface details between the scan and the production model.

  • For both situations, modeling or texturing, this is a workflow that usually involves Wrap3 or other proprietary tools for Maya. Now it can also easily be done in Houdini.

  • First of all, open the ztool provided by the scanning vendor in Zbrush. These photogrammetry scans are usually around 13 – 18 million polygons, too dense for the wrapping process. You can just decimate the model and export it as .obj.

  • In Maya, roughly align your generic human and the scan. If the pose is very different, use your generic rig to match (roughly) the pose of the scan. Also make sure both models have the same scale. Scaling issues can be fixed in Wrap3, or in Houdini in this case, but I think it is better to fix them beforehand; in a VFX pipeline you will be publishing assets from Maya anyway. Then export both models as .obj.

  • It is important to remove teeth, the interior of the mouth and other problematic parts from your generic human model. This is something you can do in Houdini as well, even after the wrapping, but again, better to do it beforehand.

  • Import the scan in Houdini.

  • Create a topo transfer node.

  • Connect the scan to the target input of the topo transfer.

  • Bring the base mesh and connect it to the source input of the topo transfer.

  • I had issues in the past using Maya units (decimeters) so better to scale by 0.1 just in case.

  • Enable the topo transfer, press enter to activate it. Now you can place landmarks on the base mesh.

  • Add a couple of landmarks, then ctrl+g to switch to the scan mesh, and align the same landmarks.

  • Repeat the process all around the body and click on solve.

  • Your generic human will be wrapped pretty much perfectly to the actor’s scan. Now you can continue with your traditional modeling pipeline, or in case you are using this technique for texturing, move into Zbrush, Mari and or Houdini for transferring textures and displacement maps. There are tutorials about these topics on this site.

Transferring texture data

  • Import the scan and the wrapped model into Houdini.

  • Assign a classic shader to the scan, with the photogrammetry texture connected to its emission color. Disable the diffuse component.

  • Create a bakeTexture rop with the following settings.

    • Resolution = 4096 x 4096.

    • UV object = wrapped model.

    • High res object = scan.

    • Output picture = path_to_file.%(UDIM)d.exr

    • Format = EXR.

    • Surface emission color = On.

    • Baking tab = Turn off Disable lighting/emission and Add baking exports to shader layers.

    • If you get artifacts in the transferred textures, in the unwrapping tab change the unwrap method to trace closest surface. This is common with lidar, photogrammetry and other dirty geometry.

    • You can run the baking locally or on the farm.

  • Take a look at the generated textures.

Introduction to Reality Capture by Xuan Prada

In this 3-hour tutorial I go through my photogrammetry workflow using Reality Capture in conjunction with Maya, Zbrush, Mari and UV Layout.

I will guide you through the entire process, from capturing footage on-set to asset completion. I will explain the most basic settings needed to process your images in Reality Capture, to create point clouds, high resolution meshes and placeholder textures.
Then I will continue to develop the asset in order to make it suitable for any visual effects production.

These are the topics included in this tutorial.

- Camera gear.
- Camera settings.
- Shooting patterns.
- Footage preparation.
- Photogrammetry software.
- Photogrammetry process in Reality Capture.
- Model clean up.
- Retopology.
- UV mapping.
- Texture re-projection, displacement and color maps.
- High resolution texturing in Mari.
- Render tests.

Check it out on my Patreon feed.

On-set tips: Creating high frequency detail by Xuan Prada

In a previous post I mentioned the importance of having high frequency detail whilst scanning assets on-set. Sometimes, if we don't have that detail, we can just create it. Actually, sometimes this is the only way to capture volumes and surfaces efficiently, especially if the asset doesn't have any surface detail, like white objects for example.

If we are dealing with assets that are used on set but won't appear in the final edit, chances are that those assets are not painted at all. There is no need to spend resources on them, right? But we might still need to scan those assets to create a virtual asset that will ultimately be used on screen.

As mentioned before, if we don't have enough surface detail it will be very difficult to scan assets using photogrammetry, so we need to create high frequency detail our own way.

Let's say we need to create a virtual asset of this physical mask. It is completely plain, white; we don't see much detail on its surface. We can create high frequency detail just by painting some dots, or by placing small stickers across the surface.

In this particular case I'm using a regular DSLR + multi zoom lens, a tripod, a support for the mask and some washable paint. I prefer to use small round stickers because they create fewer artifacts in the scan, but I ran out of them.

I created this support a while ago to scan fruits and other organic assets.

The first thing I usually do (if the object is white) is cover the whole object with neutral gray paint. It is much easier to balance the exposure photographing gray than white.

Once the gray paint is dry I just paint small dots or place the round stickers to create high frequency detail. The smaller the better.

Once the material has been processed you should get a pretty decent scan. This would probably have been an impossible task without creating all the high frequency detail first.

On-set tips: The importance of high frequency detail by Xuan Prada

Quick tip here. Whenever possible, use some kind of high frequency detail to capture references for your assets. In this scenario I'm scanning this huge rock with photos, using only 50 images and in very bad conditions: low lighting, shot hand-held with no tripod at all, very windy and raining.
Thanks to all the great high frequency detail on the surface of this rock, the output is good enough to use as a modeling reference, even to extract highly detailed displacement maps.

Notice in the image below that I'm using only 50 pictures. Not much, you might say. But thanks to all the tiny detail, the photogrammetry software does very well at reconstructing the point cloud to generate the 3D model. There is a lot of information to find common points between photos.

The shooting pattern couldn't be simpler: just one figure eight all around the subject. The alignment was completely successful in Photoscan.

As you can see here, even with a small number of photos and not the best lighting conditions, the output is quite good.

I did an automatic retopology in Zbrush. I don't care much about the topology; this asset is not going to be animated at all. I just need a manageable topology to create nice UV mapping and reproject all the fine detail in Zbrush, to use later as a displacement map.

A few render tests.

A bit more photogrammetry by Xuan Prada

Just a few more screenshots and renders of the last photogrammetry stuff that I've been doing. All of these are part of some training that I'll be teaching soon. Get in touch if you want to know more about it.

More photogrammetry stuff by Xuan Prada

I'm generating content for a photogrammetry course that I'll be teaching soon. These are just a few images of that content. More to come soon, I'll be doing a lot of examples and exercises using photogrammetry for visual effects projects.

Meshlab align to ground by Xuan Prada

If you deal a lot with 3D scans, lidars, photogrammetry and other heavy models, you probably use Meshlab. This "little" software is great at managing 75 million polygon lidars and other complex meshes. Experienced Photoscan users usually play with the align to ground tool to establish the correct axis for their resulting meshes.

If you look for this option in Meshlab you won't find it; at least I didn't. Please let me know if you know how to do this.
What I found is a clever workaround to do the same thing with a couple of clicks.

  • Import your Lidar or photogrammetry, and also import a ground plane exported from Maya. This is going to be your floor, ground or base axis.
  • This is a very simple example; the goal is to align the sneaker to the ground. I wish I could deal with such simple lidars at work :)
  • Click on the align icon.
  • In the align tool window, select the ground object and click on glue here mesh.
  • Notice the star that appears before the name of the object indicating that the mesh has been selected as base.
  • Select the lidar, photogrammetry or whatever geometry needs to be aligned and click on point based glueing.
  • In this little window you can see both objects. Feel free to navigate around; it behaves like a normal viewport.
  • Select one point at the base of the lidar by double clicking on top of it. Then do the same with one point on the base geo.
  • Repeat the same process. You'll need at least 4 points.
  • Done :)