Photogrammetric workshop

The document provides an overview of photogrammetry, a technique for creating 3D models from photographs, detailing its applications, software, and workflow. It covers the history, limitations, and practical considerations for capturing images and processing them into 3D models. Additionally, it includes a shooting guide and discusses alternative 3D scanning technologies.


What is it?

How do you create a 3D model from a collection of photographs?

Photogrammetry

https://paulbourke.net/ ameu
https://www.agisoft.com/downloads/

Paul Bourke, January 2025

Outline
• Applications, alternative technologies and brief history
• Software and workflow
• Worked example
• Photography: shooting guide
• Limitations, post production, advanced topics
• Questions/discussion

• Practical exercise

Chandigarh museum
Applications
• Assets for 3D environments, eg: gaming, VR, AR.
• Cultural heritage (art, archaeology, and architecture).
• Surveying and mapping, earliest applications.
• Human face/body scanning, medicine and movie industry.
• Visual effects.
• Reverse engineering, reproducing mechanical parts.

Example: Perth, Western Australia

Example: Chandigarh Museum
Example (drone): Castle

Example (small scale): Perth Museum
Example (large scale): Railway development, Lithuania

Brief history
• Photogrammetry is the general term given to deriving 3D geometric information from
a series of images. Some people prefer P3DR “Photographic 3D Reconstruction”

• Initially largely used for aerial surveys, deriving landscape models. Originally only
used a stereoscopic pair, that is, just two photographs to compute distances.

• More recently the domain of machine vision, for example: deriving a 3D model of a
robot's environment.

• A big step forward was the development of SfM (structure from motion) algorithms. These
generally solve for the camera parameters and generate a 3D point cloud.

• The most common implementation is called Bundler: “bundle adjustment algorithm
allows the reconstruction of the 3D geometry of the scene by optimizing the 3D
location of key points, the location/orientation of the camera, and its intrinsic
parameters”.
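The quoted description can be made concrete with a toy bundle adjustment. The sketch below is not Bundler itself: it is a minimal reprojection-error minimiser using NumPy and SciPy, assuming a plain pinhole camera (one shared focal length, no lens distortion) and synthetic tie points, purely to show what optimising the 3D key points, camera poses and intrinsics means in practice.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points3d, rvecs, tvecs, focal):
    # Project 3D points into each camera: angle-axis rotation, then pinhole projection.
    proj = []
    for rv, tv in zip(rvecs, tvecs):
        cam = Rotation.from_rotvec(rv).apply(points3d) + tv   # world -> camera frame
        proj.append(focal * cam[:, :2] / cam[:, 2:3])         # perspective divide
    return np.stack(proj)                                     # shape (n_cams, n_pts, 2)

def residuals(params, n_cams, n_pts, observed):
    # Unpack focal length, camera poses and 3D points from one parameter vector.
    focal = params[0]
    rvecs = params[1:1 + 3 * n_cams].reshape(n_cams, 3)
    tvecs = params[1 + 3 * n_cams:1 + 6 * n_cams].reshape(n_cams, 3)
    pts = params[1 + 6 * n_cams:].reshape(n_pts, 3)
    return (project(pts, rvecs, tvecs, focal) - observed).ravel()

# Synthetic scene: two cameras observing five tie points about 5 units away.
rng = np.random.default_rng(0)
true_pts = rng.uniform(-1.0, 1.0, (5, 3)) + np.array([0.0, 0.0, 5.0])
true_r = np.zeros((2, 3))
true_t = np.array([[0.0, 0.0, 0.0], [0.5, 0.0, 0.0]])
observed = project(true_pts, true_r, true_t, focal=1000.0)

# Start from a wrong focal length and noisy point positions, then optimise everything jointly.
x0 = np.concatenate([[900.0], true_r.ravel(), true_t.ravel(),
                     (true_pts + rng.normal(0.0, 0.05, true_pts.shape)).ravel()])
solution = least_squares(residuals, x0, args=(2, 5, observed))
print("refined focal length (true value 1000):", solution.x[0])

Real packages add lens distortion terms, robust loss functions and sparse Jacobians to handle thousands of cameras and millions of points; the structure of the problem is the same.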
Alternative 3D scanning technologies
• Structured light scanners.
• Return time of flight scanners.
• LIDAR scanners.
• Hand modelling.

The appropriate approach depends on the characteristics of the object and the
intended use of the resulting 3D model.
The principles presented here apply across all solutions.
Photogrammetry is also not necessarily the best approach for all model types.

Photogrammetry software
• Metashape (used in the example today). Agisoft. (Commercial)
• RealityCapture. Epic Games. (Free / Commercial)
• MeshRoom. AliceVision. (Open source)
• Zephyr. (Free for small models)
• VisualSFM, COLMAP. (Tools)
• Regard3D (Free), Pix4D …
• … a whole bunch of other, more discipline-specific software.

Different packages support different degrees of automation, degree of
adjustment and control of the process, maximum number of photographs …

Workflow
• Capture photographs. Camera + prime lens.
• Align points, and derive camera positions.
• Create mesh, derived from depth maps.
Alternatively create a dense point cloud.
• Cleanup
1. remove “shrapnel”
2. close holes, eg: base or after (1)
3. apply scale (see later)
4. perform other geometry edits
• Calculate textures, cameras act as data projectors.
• Apply colour corrections.
• Export in favourite 3D format for destination application.

Photography: Shooting guide
• One cannot expect parts that aren’t photographed to be reconstructed.
• Ideally use all manual settings for ISO, exposure time, aperture and white point.
• The number of photographs depends on the complexity of the object
- 2.5D surfaces might only need 10-20
- Contained 3D objects typically require 50-200
- Extended landscapes may require 1000’s
• Each photograph should be taken from a different location.
The opposite to panorama photography, where the camera should be at the zero parallax point.
• Camera orientation doesn’t matter.
• Focus and depth of focus are important. Details in the image will determine the number and
quality of feature points.
• Best results with a single focal length, eg: prime lens.

Example results: 15, 50, 95 and 200 photographs.

Worked example: Patna museum

Feature points, tie points
Sparse point cloud
Camera positions
Surface + texture
Quality

Live example
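For repeatable processing, the same worked-example steps can be scripted. The sketch below uses Metashape Pro's Python module (the standard edition has no scripting); the method names follow the published API, but argument defaults and quality settings vary between versions and the file paths are placeholders, so treat it as an outline rather than a recipe.

# Minimal Metashape Pro scripting sketch; run inside Metashape's Python console.
# Assumes the Pro edition; quality/accuracy arguments are left at their defaults.
import glob
import Metashape

doc = Metashape.Document()
doc.save("/path/to/project.psx")                      # project must be saved before depth maps
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("/path/to/photos/*.JPG"))   # placeholder photo folder

chunk.matchPhotos()        # detect feature points and find tie points
chunk.alignCameras()       # solve camera positions, build the sparse point cloud
chunk.buildDepthMaps()     # per-camera depth maps
chunk.buildModel()         # mesh derived from the depth maps
chunk.buildUV()            # unwrap the mesh
chunk.buildTexture()       # cameras act as data projectors onto the mesh

chunk.exportModel(path="/path/to/model.obj")          # export in your favourite 3D format
doc.save()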
Post production
• Removing geometry not part of the desired model.
eg: Removing the base, other parts of the room.
• Closing holes.
eg: After removing the base, or deleting protruding errors.
• Managing triangle count: Recommend MeshLab (free).
• Managing texture files: Any image processing software.
• Orientating the model.
• More advanced editing: Blender (free), zBrush (commercial).

Limitations
• Gaining access for camera shots.
If it isn’t photographed then it can’t be reconstructed.
• Lighting. Baked in shadows and shading.
• Light types.
Avoid different light sources = different white points.
• Mirror surfaces, a problem for almost all technologies.
• Glossy, specular surfaces.
• Moving objects or objects changing shape.
Requires a camera rig with a large number of cameras.
• Very thin features.

Lighting

Limitations: access
Limitations: moving objects

Limitations: specular highlights


• Specular reflections depend on the position of the camera with respect to the light
source. They are different for each camera position, so the object is essentially
changing between photographs.

• Specular reflections preserve polarised light.
Diffuse surfaces depolarise polarised light.
Cross polarisation involves polarising the source light and placing a polarising
filter on the lens orientated at 90 degrees to the source polarisation.
Cross polarisation: before and after

Lighting: ring light

Light types: white point (sunlight, shade, fluorescent)


Camera rig
Flash trigger cable, flash trigger by camera, finger trigger,
Polaroid filter on lens, Polaroid filter on light.

Additional topics
• Desirable camera attributes.
1. High resolution. Results in higher quality textures.
2. Sharp, in-focus images.
Result in more feature points, and higher quality textures.
3. Fixed focal length (beware focus breathing).
Required for better accuracy of the camera positions.

• Geometric resolution vs texture resolution.


• True scale and colour calibration.
• Accuracy.
• Automation.
1. Turntables
2. Robots

Geometric resolution vs texture resolution

Comparison panels: textured, shaded and wireframe views at 2 million triangles and at 200,000 triangles.

• Geometric and texture resolution are two separate considerations.
• For example, for realtime graphics one might prefer simpler geometric
resolution and higher texture resolution.
• For analysis one may prefer high geometric detail.
• For archiving purposes one generally wants to maximise both.

Good textures can often hide low geometric detail.
Always ask to see models without texture maps.
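As a concrete example of managing triangle count (the MeshLab recommendation from the post production slide), the reduction from 2 million to 200,000 triangles is typically done with quadric edge collapse decimation. The sketch below uses pymeshlab, MeshLab's Python bindings; the filter name follows recent pymeshlab releases (older releases used a longer name) and the file names are placeholders.

# Decimate a photogrammetry mesh with MeshLab's quadric edge collapse filter.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("scan_2M.obj")                 # placeholder: ~2 million triangle scan
ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=200000,                       # target triangle count
    preservenormal=True,                        # discourage flipped faces
    preservetopology=True)                      # keep boundaries and holes intact
ms.save_current_mesh("scan_200k.obj")
print("faces after decimation:", ms.current_mesh().face_number())

The same filter is available interactively in the MeshLab desktop application if scripting is not needed.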


Accuracy
• Accuracy is a function of the numerical accuracy requested in the software (processing
time), as well as camera resolution and lens quality. It also depends on the size
of the model being captured.
• Resolving power: the scale of the smallest features that can be represented.
• Dimensions and differential dimensions.
• What does it mean to have some parts more accurate than others? How is that
quantified? This is particularly misleading for some other scanning
technologies that claim sub-mm accuracy.
• I estimate for my standard gear the dimensions are accurate to +/- 2mm on a
1m object, with a resolving power of 1mm.

Scale and colour correction
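On applying true scale (cleanup step 3, and the slide title above): a photogrammetric reconstruction comes out in arbitrary model units, so scale is usually set from a known reference distance, for example a scale bar or two marked points measured with callipers. A minimal NumPy sketch with made-up coordinates and a hypothetical 1 m reference:

# Bring a reconstruction to true scale from a single measured reference distance.
# Coordinates and the reference length are illustrative only.
import numpy as np

vertices = np.random.rand(1000, 3) * 5.0    # stand-in for exported mesh vertices (model units)

point_a = np.array([1.20, 0.40, 2.10])      # two reference points picked in the model
point_b = np.array([3.95, 0.42, 2.08])
measured_metres = 1.0                       # their real-world separation, e.g. a 1 m scale bar

scale = measured_metres / np.linalg.norm(point_b - point_a)   # model units -> metres
vertices_metres = vertices * scale
print(f"scale factor: {scale:.4f}; vertex coordinates are now in metres")

Most photogrammetry packages do this internally via scale bars or coded targets; the calculation is the same single ratio applied to every vertex.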

Automation: turntable
Automation: robots

Questions and workshop
