3D Photogrammetry
Capturing process
Setup
I take the photos for my 3D scans in an Amazon Basics Mobile Photo Studio (which I use without the front curtain).
Initially I would just place the objects on the floor inside it, but despite the uniform lighting the objects still showed noticeable shadows near the ground. For that reason I copied an approach from Kevin Parry’s Fruit stop-motion video and added a plexiglass pane to remove all shadows.
This is what the full setup looks like. Instead of placing the target on a turntable, I rotate the object itself, which has the same effect. The white background ensures that the software has nothing else to track and therefore reconstructs only the target object.
The full setup while shooting bread.
The fact that I have to reach into the box makes it impossible to close the front curtain. To counteract this I place my Nanguang LuxPad 23 right below the camera to replace the light that would normally be reflected from the white curtain.
Support light turned off.
Support light turned on.
Execution
For most objects I follow a similar process.
- I first do one rotation of the object.
- I then move the camera up a little bit to capture another rotation from a different perspective.
- This gets repeated several times.
- At some point I turn the object on its head to capture the other side. During this process I keep taking pictures to ensure that the software can understand what is happening.
- I then capture another few rotations for the other side of the object.
Here is the entire set of images:
Keeping objects in place
For some objects it can be difficult to perform the rotation described above. While it is generally possible to record two separate, disconnected chunks of the same object, I wouldn’t recommend it because merging these chunks in Metashape tends to fail quite often.
So, in order to keep the advantage of one continuous sequence of images, I like to use modeling clay or Patafix to keep objects from rolling away or tipping over.
Processing in Metashape
Import all the images into one chunk in Metashape.
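If you prefer scripting over clicking through the GUI, the same step can be done with the Metashape Python API. This is only a rough sketch: the folder name is a placeholder and parameter names can differ between Metashape versions.

```python
import glob
import Metashape

# Create a project with a single chunk and load every photo into it.
doc = Metashape.Document()
chunk = doc.addChunk()

# Placeholder folder; point this at the turntable photos.
photos = sorted(glob.glob("bread_scan/*.JPG"))
chunk.addPhotos(photos)

doc.save("bread_scan.psx")
```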
Alignment
Start by aligning the images. Here are the settings that I use. You might want to play around with the number of key points and tie points; 75K usually works pretty well.
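Scripted, the alignment looks roughly like this. It is just a sketch: mapping 75K to the key point limit and the tie point value are my assumptions, and keyword names vary between Metashape versions.

```python
import Metashape

chunk = Metashape.app.document.chunk  # active chunk in the open project

# Match photos with a generous key point limit, then align the cameras.
# Tune both limits per object.
chunk.matchPhotos(
    downscale=1,            # "High" accuracy in the GUI
    keypoint_limit=75000,
    tiepoint_limit=15000,   # illustrative value
    generic_preselection=True,
)
chunk.alignCameras()
```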
Because everything was recorded in one continuous motion with a completely white background, it can be assembled as one chunk. But there are still artifacts that were caused by my hands when I had to hold the bread to rotate it over the span of several photos.
To remove them I start by building a mesh from the sparse cloud.
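The same rough mesh can be built from the Python console; a sketch, assuming current enum names (the sparse cloud is called “tie points” in Metashape 2.x, PointCloudData in older versions). The cropping with the Free-Form Selection stays a manual step.

```python
import Metashape

chunk = Metashape.app.document.chunk

# Low-quality proxy mesh straight from the sparse cloud / tie points;
# it only has to be good enough to derive masks from.
chunk.buildModel(
    surface_type=Metashape.Arbitrary,
    source_data=Metashape.DataSource.TiePointsData,  # sparse cloud
    face_count=Metashape.LowFaceCount,
)
```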
The resulting mesh obviously contains some artifacts from my fingers. I use the Free-Form Selection to select only the bread itself and then click Crop Selection to keep only the selected area.
After completing these steps I have a mesh of just the bread. Its quality is pretty low since it was built from just the sparse cloud, but it is good enough to create masks from.
I then go to Import → Import Masks and generate masks from the original model.
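A sketch of the scripted equivalent, assuming the MaskSourceModel option is available under that name (enum names may differ slightly between versions):

```python
import Metashape

chunk = Metashape.app.document.chunk

# Generate a mask for every photo from the cropped proxy mesh,
# replacing any existing masks.
chunk.importMasks(
    source=Metashape.MaskSourceModel,
    operation=Metashape.MaskOperationReplacement,
)
```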
The subject is now perfectly masked in every photo, with no more artifacts from the background.
But one problem remains: the masks do not cover my fingers in the few frames where I had to hold the subject in my hands.
To fix this I use the “Intelligent Scissors” and manually remove the fingers from the few frames where it is necessary.
After that I can start the “real” alignment process.
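For this second pass the masks need to be applied during matching. Scripted, that is roughly the following; filter_mask corresponds to applying masks to key points, and I am assuming the parameter still carries that name in current versions.

```python
import Metashape

chunk = Metashape.app.document.chunk

# Re-run matching with the masks applied so that fingers and background
# no longer contribute key points, then align again.
chunk.matchPhotos(
    downscale=1,
    keypoint_limit=75000,
    tiepoint_limit=15000,
    filter_mask=True,   # apply masks to key points
)
chunk.alignCameras()
```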
This time the sparse cloud is nice and clean.
Building the “master” model
Dense Cloud, Mesh & Texture
With the alignment finished it’s now time to build the real dense cloud. I usually use the highest quality setting.
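The scripted version, as a sketch (buildDenseCloud was renamed to buildPointCloud in Metashape 2.x):

```python
import Metashape

doc = Metashape.app.document
chunk = doc.chunk

# Full-resolution depth maps ("Ultra High" in the GUI), then the dense cloud.
chunk.buildDepthMaps(
    downscale=1,
    filter_mode=Metashape.MildFiltering,
)
chunk.buildDenseCloud()   # chunk.buildPointCloud() in Metashape 2.x
doc.save()
```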
There might still be some tiny artifacts around the dense cloud, so I select and delete them.
After that I build the mesh (via the “Workflow” tab).
The mesh might contain some holes. To fix this I go to Tools > Mesh > Close Holes….
Another potential issue is loose geometry. Gradual Selection can help here by selecting unconnected components so that they can be deleted.
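Roughly the same steps from the console. The mesh build parameters are standard, but closeHoles and removeComponents (and their arguments) are assumptions based on the Model class, so double-check them against the API reference for your version.

```python
import Metashape

chunk = Metashape.app.document.chunk

# High-detail mesh from the depth maps (use the dense cloud as
# source_data instead if that matches your workflow).
chunk.buildModel(
    surface_type=Metashape.Arbitrary,
    source_data=Metashape.DataSource.DepthMapsData,
    face_count=Metashape.HighFaceCount,
)

# Clean-up: close small holes and drop loose, unconnected pieces.
chunk.model.closeHoles(level=30)       # max hole size, as a percentage
chunk.model.removeComponents(10000)    # illustrative face-count threshold
```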
The next step is the texture for the master model. Building textures also happens via the Workflow menu. This step can pretty much be run with the default settings; I just like to set the texture size slightly higher than what I want for the final models. For example, when aiming for 4K I use 6K:
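A sketch of the same step in the API; texture_size=6144 stands in for “6K”, and the mapping and blending modes are the usual defaults.

```python
import Metashape

chunk = Metashape.app.document.chunk

# UVs plus a texture slightly above the final target resolution
# (6K here, leaving headroom for the 4K LOD bakes later).
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(
    blending_mode=Metashape.MosaicBlending,
    texture_size=6144,
)
```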
This completes the master model. It has extremely dense geometry and a high-res texture.
Orientation
Metashape usually generates the model with a completely random orientation. This is not immediately obvious in the software because the viewport gives me no real frame of reference. To change this, I go to Model > Show/Hide Items > Show Grid. This adds a “floor” to the scene which can be used to align the object in 3D space.
Here is the initial position:
I then use the Move/Rotate Object tools to align the model to the grid.
Creating LOD versions
After generating, texturing and aligning the model I can finally get started on creating the different LOD (Level of Detail) versions of the model.
The usual versions that I like to create are 50,000 / 5,000 / 500 polygons.
Models
I use the Decimate Mesh function to reduce the model to 50,000 faces. It’s important not to replace the default model (select “No” when asked).
For some meshes it might be a good idea to also apply a bit of smoothing.
By repeating this step I generate all three versions.
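Decimation and smoothing are also scriptable; here is a sketch for one LOD. Note that, depending on the version, decimateModel works on the active model in place, so keep a copy of the master if you script this.

```python
import Metashape

chunk = Metashape.app.document.chunk

# Reduce the active copy of the master to 50,000 faces; repeat with
# 5,000 and 500 on fresh copies for the other LOD versions.
chunk.decimateModel(face_count=50000)

# Optional light smoothing for meshes that need it (strength is illustrative).
chunk.smoothModel(strength=1)
```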
Texturing
And finally I generate textures for all three versions. This is fairly easy because I can just bake the diffuse, normal and occlusion maps from the original model onto the LOD versions. Here are the settings, which I run on all three versions:
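If this step is scripted, the three map types correspond to the texture_type parameter of buildTexture in recent API versions. The source_model parameter and the .key attribute used to point at the master are assumptions on my side, so treat this purely as a sketch and verify it against the API reference for your build.

```python
import Metashape

chunk = Metashape.app.document.chunk

# Assumption: the high-poly master is the first model in the chunk and
# the LOD that should receive the bake is the active (default) model.
master_key = chunk.models[0].key

chunk.buildUV(mapping_mode=Metashape.GenericMapping)
for tex_type in (
    Metashape.Model.DiffuseMap,
    Metashape.Model.NormalMap,
    Metashape.Model.OcclusionMap,
):
    chunk.buildTexture(
        texture_type=tex_type,
        source_model=master_key,   # bake from the master model (assumption)
        texture_size=4096,
    )
```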
And with that the model is finished.