Photogrammetry (sometimes called “photoscanning”) gives you significantly more accurate displacement maps than Bitmap approximation or Multi-angle techniques, at the cost of being the most demanding approach in terms of hardware, software and time. It works by taking pictures of an object from different angles and using them to reconstruct its geometry. In other words: it is an incredibly accurate way to capture displacement data for material creation.
The workflow shown in this guide creates two passes: a high-detail pass with the maximum amount of information possible and a smooth/flat pass. The details of the high-detail pass are then baked onto the flat pass, eliminating larger height changes and leaving only the smaller detail. This produces uniform and easily tileable displacement maps at very high resolutions (theoretically up to ~32000px).
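Conceptually, baking the detail pass onto the flat pass acts like a high-pass filter on the height field: the flat pass carries the large-scale height changes, and subtracting it leaves only fine detail. A minimal numpy sketch of that idea (the synthetic surface, noise level and blur size are made up for illustration, and a box blur stands in for the decimated/smoothed mesh):

```python
import numpy as np

# Hypothetical height field: fine detail riding on a large-scale slope.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]
rng = np.random.default_rng(0)
detail = rng.normal(0.0, 0.02, (h, w))  # the small detail we want to keep
slope = x / w                            # the large height change we want gone
height = slope + detail

# "Flat pass": a heavily smoothed copy of the surface (separable box blur).
k = 15
kernel = np.ones(k) / k
blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, height)
blur = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blur)

# "Baking" detail onto the flat pass is, in spirit, this subtraction:
displacement = height - blur

# Away from the borders the slope cancels out and only fine detail remains.
inner = displacement[k:-k, k:-k]
print(float(np.ptp(height)), float(np.ptp(inner)))
```

The full height field spans roughly the whole slope (~1.0), while the interior of the filtered result only spans the fine detail, which is why the baked map tiles so much more easily.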
Deciding whether photogrammetry is the right tool
Photogrammetry can generally be used on surfaces which fulfill these criteria:
- The surface must have a strong enough displacement for it to create a detectable difference in perspective when moving the camera. What this means in concrete terms depends on your equipment.
- Paving stones, bricks and tree bark are pretty much always suitable, even with lower end cameras.
- Gravel and tiles with shallow seams are more difficult, but still doable.
- With a high end camera, a macro lens and a lot of patience it is even possible to use photogrammetry for leaves and fabrics.
- Plain/smooth wood is where photogrammetry hits its limits (at least for me right now) as the displacement changes are so subtle that they get lost in the noise during processing.
- The surface must not change its appearance (color/structure) depending on the viewing angle. This rules out reflective and partially transparent surfaces.
- It must provide enough detail for the software to track camera movement from one shot to the next. Smooth paint or plaster can be a challenge.
Required equipment
- A decent camera (DSLR or mirrorless; I’ve never tried it with a smartphone)
- A tripod or monopod (I personally prefer the monopod for most situations)
- Metashape (or comparable photogrammetry tool)
- xNormal (or comparable baking tool)
- Affinity Photo (or comparable image editor)
The shooting process
Shooting for photogrammetry in Metashape works similarly to shooting for bitmap approximation.
Find a suitable surface and shoot it in a serpentine pattern in which every image has a decent overlap with the next one. One key point to remember throughout: Metashape needs differences in perspective to properly reassemble a surface. The Agisoft Metashape User Manual specifically points out that the images should not be created by just rotating the camera around one point. The camera must be moved in 3D space for every shot to achieve perspective changes.
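To estimate how many shots a serpentine pass will take, you can work backwards from the area each photo covers and the overlap you want. A small sketch with made-up numbers (the surface size, per-photo footprint and 60% overlap are assumptions for illustration, not recommendations):

```python
import math

# Hypothetical numbers: a 2 m x 1 m surface, each photo covering
# roughly 0.5 m x 0.35 m of it, with ~60% overlap between neighbours.
surf_w, surf_h = 2.0, 1.0
foot_w, foot_h = 0.5, 0.35
overlap = 0.6

step_w = foot_w * (1 - overlap)  # camera advance per shot within a row
step_h = foot_h * (1 - overlap)  # spacing between rows of the serpentine
cols = math.ceil((surf_w - foot_w) / step_w) + 1
rows = math.ceil((surf_h - foot_h) / step_h) + 1
print(rows, cols, rows * cols)
```

The shot count grows quickly as overlap increases, which is worth keeping in mind before committing to a very large surface.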
This is also why I prefer using a monopod over a tripod for photogrammetry: 2D image stitching generally suffers from perspective distortions that appear when the camera is not always facing down at an exact 90° angle. Metashape actually embraces these small differences! Much of the time, the small perspective changes introduced by a monopod that isn’t always pointing straight down help the software reconstruct the surface.
Photogrammetry processing in Metashape
Once you have recorded your images the photogrammetry process can begin. The workflow I am describing here is how I do it in Agisoft Metashape but it should also be possible to perform similar steps in another photogrammetry tool.
We will begin with the detailed pass and then decimate and smooth it to get the flat pass for baking. Metashape batch job files for this process are available on my GitHub, but I would suggest performing the steps manually at least once, as it will help you with troubleshooting in the future.
Before doing any kind of processing in Metashape, I would recommend going to Tools → Preferences and making sure a few settings are configured properly:
- The GPU should be enabled
- The Default view (in the “General” tab) should be changed to “Dense Cloud” as the default setting (“Model”) can cause Metashape to be extremely laggy when opening a large project.
Creating the detailed pass
Import the photos and align the cameras via Workflow → Align Photos. The most important setting here is the quality preset. I would recommend using the “High” preset, since all other presets either down- or upscale the given images. In the Advanced menu there are two more options, labeled Key point limit and Tie point limit. The Key point limit defines how many points the software will extract from every photo, and the Tie point limit then tells it how many of these points to use for the actual reconstruction. At this point you might be tempted to increase these settings to really high values, but according to the Agisoft Metashape User Manual (page 22) these settings are primarily designed to tweak performance rather than quality; increasing them will therefore only have a marginal impact on the result (unless the reconstruction fails completely).
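The alignment step can also be scripted, which is how the batch job files mentioned earlier work. A minimal sketch using Metashape’s standalone Python API (this assumes a licensed `Metashape` Python module, version 1.6 or later, where `downscale=1` corresponds to the “High” preset; the photo paths and project name are placeholders):

```python
import Metashape

# Hypothetical project setup: the photo paths are placeholders.
doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["photos/IMG_0001.JPG", "photos/IMG_0002.JPG"])

# downscale=1 = "High" quality (no down-/upscaling of the images).
# keypoint_limit / tiepoint_limit mirror the Advanced options described above
# (40000 / 4000 are Metashape's defaults).
chunk.matchPhotos(downscale=1, keypoint_limit=40000, tiepoint_limit=4000)
chunk.alignCameras()

doc.save("detail_pass.psx")
```

Running the equivalent steps through the GUI is fine for a single material; the script form mainly pays off once you process many surfaces with identical settings.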