Photogrammetry Basics

A Quick Overview of Processing Pictures Into a 3D Model

Screenshot of a 3D model of tabby slave cabin foundations and walls from Kingsley Plantation, Florida.

Photogrammetry, as a technique to create three-dimensional models, is a growing trend in heritage management and a new tool in the archaeologist's toolkit, allowing for quick, accurate data collection. The resulting digital data is easily stored and shared, making it one of the best methods for disseminating information about artifacts, historic structures, or even entire archaeological sites. This post will highlight a few of the useful software programs you can use to create 3D models, along with a step-by-step example of how a 3D model is created.


At the most basic level, photogrammetry is the use of photographs to derive measurements. It has been a field of study for nearly as long as photography has existed, gaining momentum once portable cameras could be flown on aircraft. Today, the word covers a number of techniques for obtaining remote sensing and measurement data from photographs, whether taken on the ground, in the air, or even from low-earth orbit. For those of us in archaeology, however, we most often hear the word these days in relation to software that builds 3D digital models from a series of pictures. 

There are several types of software to choose from, and FPAN does not endorse any particular one of these packages. Some are open source, some are simply free; for the most part it's a matter of defining your desired outcomes and matching them to the program that best fits your data collection needs. The example below was produced with Agisoft Photoscan's Standard Edition. We have found that it is a great package and gives excellent results, and it's pretty easy to use once you get the hang of it, too! A few other software packages and equipment used for this and similar purposes include:

Autodesk 123D Catch
Structure Sensor
Autodesk Recap
Acute 3D Context Capture
Drone Deploy
Blender
Sketchup
and, though not exactly in line with what we're talking about, ArcGIS Earth has some neat 3D things you can create and work with.

This example will serve as a general overview of the process, just to give you an idea of how this kind of software works.

The first step is to take a series of photos of an object. Generally, the pictures should overlap by about 60% so the software can tie them together. You can either move the camera around the object or rotate the object in front of the camera. This sounds a little easier than it actually is: the outcome of your model depends on your photo set, and the better the quality and number of pictures, the better the model. It can also help to include reference points in the pictures, such as lettered or numbered markers, pin flags, or other objects; these all help the software align the photos.

Photos are taken from as many angles as possible, making sure they overlap. That wasn't hard here: because of the headstone's size, every photo overlaps with the others.
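
If you want to plan your shots before heading out, a little back-of-the-envelope math helps. The snippet below is our own rough rule of thumb, not something built into any of the software packages: to keep roughly 60% overlap between neighboring shots while circling an object, each step around it should cover no more than about 40% of the camera's horizontal field of view (the 50-degree field of view used here is just an assumed example value).

```python
import math

def photos_per_ring(h_fov_deg=50.0, overlap=0.6):
    """Rough estimate of how many photos to take in one ring around an object.

    h_fov_deg: camera's horizontal field of view in degrees (assumed value).
    overlap:   desired fraction of overlap between neighboring shots.
    """
    # Rule of thumb: advance by no more than (1 - overlap) of the field of
    # view between shots, so neighboring frames share roughly `overlap`.
    step_deg = h_fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step_deg)

# Example: a 50-degree field of view and 60% overlap -> 18 shots per ring.
print(photos_per_ring(50.0, 0.6))
```

Treat that number as a floor, not a target; more angles rarely hurt, as long as you can live with the extra processing time.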


Once you have taken the pictures and loaded them into the program, the next step is to "align" the photos. Don't worry, the program does this for you. It may not be able to align all of the photos, which is why more is better; however, as with all software, the more data you give it to crunch, the longer it will take. Alignment creates the "sparse point cloud," which gives you some idea of which camera shots provided usable data and offers a first chance to do a little editing.

The sparse point cloud is generated by aligning the photos. The blue rectangles are the locations of the camera when it took the picture. You can see some random points outside of what we are trying to model.
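
As an aside for those who end up making lots of models: this whole workflow can also be scripted. The Professional edition of Photoscan (since renamed Metashape) ships with a Python API; the Standard Edition we used here is menu-driven only. The sketch below is just an outline under that assumption; method names and arguments have shifted between releases, so check the API reference for the version you have, and note that the photo folder and project file names are made up for the example.

```python
import glob
import Metashape  # Agisoft's scripting module (Professional edition only)

doc = Metashape.Document()
chunk = doc.addChunk()

# Load every JPEG from a (hypothetical) folder of headstone photos.
chunk.addPhotos(glob.glob("headstone_photos/*.jpg"))

# Find matching features across the photos and estimate camera positions;
# this is the "align photos" step that produces the sparse point cloud.
chunk.matchPhotos()
chunk.alignCameras()

doc.save("headstone.psx")
```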


After the photos are aligned, the next step is to create a "dense point cloud." In essence, the program assigns x, y, and z coordinates in space to points on the object you photographed. This is the framework the rest of the model will be built on, and another opportunity to edit errant points or clear out photographic "noise," such as other objects you are not intending to model.

The dense point cloud has been created. It allows for a much better visualization of what you might want to edit out and what you want the program to focus on during the next steps.
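
Continuing the same hedged scripting sketch (Professional edition only, with the same caveats about version differences), the dense point cloud step would look something like this:

```python
import Metashape  # Professional edition only; names vary by release

# Re-open the project saved after alignment (hypothetical file name).
doc = Metashape.Document()
doc.open("headstone.psx")
chunk = doc.chunk

# Depth maps are computed from the overlapping photos, then fused into the
# dense point cloud of x, y, z points.
chunk.buildDepthMaps()
chunk.buildDenseCloud()  # renamed buildPointCloud() in recent Metashape releases

doc.save()
```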


Once the dense point cloud is set to your liking, you will then create a mesh for the object; this is sort of like connecting all the little dots into a basic outlined structure. This gives the object shape, and at this point it is no longer just a point cloud, but a 3D object.

Those of you familiar with GIS might recognize these little triangles. This is the mesh that has been generated by linking the dense cloud points together.
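
In the same hedged script form, the meshing step is a single call (again, Professional edition only and subject to version differences):

```python
import Metashape  # Professional edition only; names vary by release

doc = Metashape.Document()
doc.open("headstone.psx")
chunk = doc.chunk

# Connect the dense-cloud points into a triangulated surface (the mesh);
# the default surface settings should suit a free-standing object like a headstone.
chunk.buildModel()

doc.save()
```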


Finally, the color texture is overlaid on top of the mesh framework. A few more tweaks here and there and you have yourself a brand new 3D digital model that you can then upload, share, and even let other folks download to print on a 3D printer!
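
And, to round out the hedged scripting sketch, here are the texturing and export steps (same caveats as above; the .obj file name is just an example):

```python
import Metashape  # Professional edition only; names vary by release

doc = Metashape.Document()
doc.open("headstone.psx")
chunk = doc.chunk

# Lay out texture coordinates on the mesh, then blend the photos into the
# color image that gets draped over the surface.
chunk.buildUV()
chunk.buildTexture()

# Export a shareable model, e.g. for upload to Sketchfab or 3D printing prep.
chunk.exportModel("headstone.obj")

doc.save()
```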




There are many options for photogrammetry processing out there and a multitude of uses. We'll have more in future posts, but hope this brief introduction helped you better understand how this process works.

Be sure to follow FPAN on Sketchfab to see our projects!

Text and pics: Kevin Gidusko