
3D Photogrammetry

Capturing process

Setup

image-1663610147894.jpg

I capture the photos for 3D scans in an Amazon Basics Mobile Photo Studio (which I use without the front curtain).

Initially I would just place the objects on the floor inside it, but despite the uniform lighting the objects still showed noticeable shadows near the ground. For that reason I copied an approach from Kevin Parry’s fruit stop-motion video and added a plexiglass pane to remove all shadows.

This is what the full setup looks like. Instead of placing the target on a turntable, I rotate the object itself, which has the same effect. The white background ensures that the software has nothing else to track and therefore reconstructs only the target object.

The full setup while shooting bread.

image-1663610172640.jpg

Because I have to reach into the box, I can’t close the front curtain. To compensate, I place my Nanguang LuxPad 23 right below the camera to replace the light that would normally be reflected by the white curtain.

image-1663610216735.jpg

Support light turned off.

image-1663610222304.jpg

Support light turned on.

Execution

For most objects I follow a similar process.

  • I first do one rotation of the object.
  • I then move the camera up a little bit to capture another rotation from a different perspective.
  • This gets repeated several times.
  • At some point I turn the object on its head to capture the other side. During this process I keep taking pictures to ensure that the software can understand what is happening.
  • I then capture another few rotations for the other side of the object.

Here is the entire set of images:

image-1663610247776.png

Keeping objects in place

image-1663610260072.jpg

For some objects it can be difficult to perform the rotation described above. While it is generally possible to record two different, disconnected chunks of the same object, I wouldn’t recommend it because merging these chunks in Metashape tends to fail quite often.

So in order to keep the advantages of one continuous sequence of images, I like to use modeling clay or Patafix to keep objects from rolling away or tipping over.

Processing in Metashape

Import all the images into one chunk in Metashape.
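
If you prefer scripting over the GUI, this import step can also be done with the Metashape Python API. Here is a minimal sketch, assuming the photos sit in a hypothetical bread_scan/ folder:

```python
import glob
import Metashape

photos = sorted(glob.glob("bread_scan/*.jpg"))  # hypothetical folder with all photos

doc = Metashape.Document()
doc.save("bread_scan.psx")   # save the project first so later steps can write to it

chunk = doc.addChunk()       # one chunk for the whole continuous sequence
chunk.addPhotos(photos)
doc.save()
```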

Alignment

Start by aligning the images. Here are the settings that I use. You might want to play around with the number of key points and tie points; 75K usually works pretty well.

image-1663610278458.png
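
For reference, here is a rough scripted equivalent of these settings (parameter names follow the 1.6+ Python API; run it from the built-in console with the project already open):

```python
import Metashape
chunk = Metashape.app.document.chunk   # the active chunk in the open project

chunk.matchPhotos(
    downscale=1,             # "High" accuracy in the GUI
    generic_preselection=True,
    keypoint_limit=75000,    # the ~75K that usually works well
    tiepoint_limit=0,        # 0 = unlimited; tune as needed
)
chunk.alignCameras()
```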

Because everything was recorded in one continuous motion against a completely white background, it can be assembled as one chunk. But there are still artifacts caused by my hands, since I had to hold the bread over the span of several photos to rotate it.

image-1663610291008.png

To remove them I start by building a mesh from the sparse cloud.

image-1663610301166.png

image-1663610305129.png
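
Scripted, this quick proxy mesh can be built straight from the sparse cloud. A sketch using 1.x enum names (the sparse-cloud source is called PointCloudData in 1.x and TiePointsData in 2.x):

```python
import Metashape
chunk = Metashape.app.document.chunk

# Low-quality proxy mesh from the sparse cloud, used only to generate masks.
chunk.buildModel(
    surface_type=Metashape.Arbitrary,
    source_data=Metashape.PointCloudData,  # sparse cloud (TiePointsData in 2.x)
    face_count=Metashape.LowFaceCount,
)
```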

The resulting mesh obviously contains some artifacts from my fingers. I use the Free-Form Selection to select only the bread itself and then click Crop Selection to keep only the selected area.

image-1663610315781.png

image-1663610316099.png

image-1663610316155.png

After completing these steps I have a mesh of just the bread. Its quality is pretty low since it was created from just the sparse cloud, but it is good enough to create masks from.

I then go to Import → Import Masks and generate masks from the model I just built.

image-1663610324578.png
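
The same mask generation can be scripted; here is a sketch assuming importMasks with the model as source (check the enum names against the API reference for your version):

```python
import Metashape
chunk = Metashape.app.document.chunk

# Project the cropped proxy mesh into every photo to create per-photo masks
# (the scripted counterpart of Import Masks -> From Model).
chunk.importMasks(
    source=Metashape.MaskSourceModel,
    operation=Metashape.MaskOperationReplacement,
)
```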

The subject is now perfectly masked in every photo - no more artifacts from the background.

image-1663610328800.png

But one problem remains: the masks do not exclude the fingers in the few frames where I had to hold the subject in my hands.

image-1663610341101.png

To fix this I use the Intelligent Scissors tool and manually mask out the fingers in the few frames where it is necessary.

image-1663610350633.png

image-1663610350692.png

image-1663610350746.png

image-1663610350785.png

After that I can start the “real” alignment process.

image-1663610359410.png

This time the sparse cloud is nice and clean.

image-1663610364433.png

Building the “master” model

Dense Cloud, Mesh & Texture

With the alignment finished it’s now time to build the real dense cloud. I usually use the highest quality setting.

image-1663610371013.png

image-1663610371040.png
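
A rough scripted equivalent (1.x API; in Metashape 2.x buildDenseCloud is renamed buildPointCloud):

```python
import Metashape
chunk = Metashape.app.document.chunk

# Depth maps + dense cloud at the highest quality setting.
chunk.buildDepthMaps(
    downscale=1,                          # "Ultra high" quality in the GUI
    filter_mode=Metashape.MildFiltering,
)
chunk.buildDenseCloud()                   # buildPointCloud() in Metashape 2.x
```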

There might still be some tiny artifacts around the dense cloud, so I select and delete them.

image-1663610376996.png

After that I build the mesh (via the “Workflow” tab).

image-1663610381679.png
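
In script form, the mesh step might look like this, assuming the dense cloud as source (1.x enum names again):

```python
import Metashape
chunk = Metashape.app.document.chunk

# High-resolution master mesh reconstructed from the dense cloud.
chunk.buildModel(
    surface_type=Metashape.Arbitrary,
    source_data=Metashape.DenseCloudData,  # PointCloudData in Metashape 2.x
    face_count=Metashape.HighFaceCount,
)
```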

The mesh might contain some holes. To fix this I go to Tools > Mesh > Close Holes….

image-1663610386635.png

Another potential issue is loose parts. Gradual Selection can help here by selecting unconnected geometry so that it can be deleted.

image-1663610392290.png

The next step is the texture for the master model. Building textures also happens via the Workflow menu, and this step can pretty much be run with the default settings. I like to set the texture size slightly above what I want for the final models. For example, when aiming for 4K I use 6K:

image-1663610396817.png
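
Scripted, the UV and texture step for the master model might look like this (6144 px corresponding to the 6K mentioned above):

```python
import Metashape
chunk = Metashape.app.document.chunk

# UV-unwrap the master mesh and bake a 6K colour texture from the photos.
chunk.buildUV(mapping_mode=Metashape.GenericMapping)
chunk.buildTexture(
    blending_mode=Metashape.MosaicBlending,
    texture_size=6144,      # slightly above the 4K target of the final LODs
)
```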

This completes the master model. It has extremely dense geometry and a high-res texture.

image-1663610429554.png

Orientation

image-1663610424772.png

Metashape usually generates the model with a completely arbitrary orientation. This is not immediately obvious in the software because the viewport gives no real frame of reference. To get one, I go to Model > Show/Hide Items > Show Grid. This adds a “floor” to the scene which can be used to align the object in 3D space.

Here is the initial position:

image-1663610435335.png

I then use the Move/Rotate Object tools to align the model to the grid.

image-1663610443414.png

Creating LOD versions

After generating, texturing and aligning the model I can finally get started on creating the different LOD (Level of Detail) versions of the model.

The usual versions that I like to create are 50,000 / 5,000 / 500 polygons.

Models

I use the Decimate Mesh function to reduce the model to 50,000 faces. It’s important not to replace the default model (select “No” when asked).

image-1663610452337.png

image-1663610452373.png

For some meshes it might be a good idea to also apply a bit of smoothing.

image-1663610457731.png
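
A scripted sketch of one decimation pass plus optional smoothing; note that, depending on the Metashape version, decimateModel replaces the active model, whereas the GUI dialog lets you keep the original:

```python
import Metashape
chunk = Metashape.app.document.chunk

# One LOD pass: decimate the active model to 50,000 faces, then smooth lightly.
# Depending on the version this may replace the active model, so make sure the
# master model is saved (or duplicated) before running it.
chunk.decimateModel(face_count=50000)
chunk.smoothModel(strength=1)   # optional, light smoothing
```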

By repeating this step I generate all three versions.

image-1663610463067.png

Texturing

And finally I generate textures for all three versions. This is fairly easy because I can simply bake the diffuse, normal and occlusion maps from the original model onto the LOD versions. Here are the settings, which I run on all three versions:

image-1663610470273.png

image-1663610470303.png

image-1663610470336.png
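
As a rough scripted sketch of the baking step: the texture_type enums and the source_model parameter below follow the 1.6+ API, but I’d double-check them against the API reference for your version, and master_model_key is just a placeholder for however you reference the master model:

```python
import Metashape
chunk = Metashape.app.document.chunk

# Bake diffuse, normal and occlusion maps from the master model onto the
# active LOD model (hedged sketch; verify parameter names for your version).
master_model_key = 0   # placeholder: key of the master model in this chunk

chunk.buildUV(mapping_mode=Metashape.GenericMapping)
for map_type in (Metashape.Model.DiffuseMap,
                 Metashape.Model.NormalMap,
                 Metashape.Model.OcclusionMap):
    chunk.buildTexture(
        texture_type=map_type,
        texture_size=4096,              # the final 4K target
        source_model=master_model_key,  # bake from the master, not from the photos
    )
```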

And with that the model is finished.

image-1663610478231.jpg