
MOD3D — a Three-dimensional Reconstruction System

Ing. Mario Valle – AVS Italy
January 2001

The goal of this project is to create a system that, starting from a set of photos of undersea bio-constructor organisms (such as corals, sponges, etc.), reconstructs their 3D models. This enables non-destructive measurements on them and makes it easier to compare the same object at two different times.

This project was requested by ENEA – CRAM (Centro Ricerche Ambiente Marino). ENEA is one of the largest Italian public research bodies.

Their scientific need is to measure the growth of specific undersea organisms. These organisms are a sensitive indicator of the state of the water, especially of how much CO2 the water absorbs. This helps in understanding the Earth's greenhouse effect and its trends.

With the MOD3D system they plan to increase the precision of the measurement of organism growth by using full 3D modeling. No less important, this non-contact reconstruction method preserves the organisms. This is in contrast with the current methodology, which requires removing (and thus destroying) the organism to bring it to the laboratory for measurement.

The project belongs to the Computer Vision area. This is a young, active research area that unfortunately still lacks consolidated algorithms and procedures. Add to this the peculiar environment in which the images are collected and you face an interesting development challenge.

A paper (local copy) has been published that describes this work from both the biological and the computer vision sides.

To learn more about MOD3D you can access its online help [italiano]

Environmental constraints

A diver records the images underwater using a digital camcorder. This means that no assumption can be made about the position of the camera in the different frames. Moreover, the camera may or may not be pre-calibrated, and no assumption can be made about the calibration staying the same between frames.
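
Because of this, every frame must carry its own, independently estimated camera model. As a minimal sketch of the idea (plain NumPy; the class name and interface are mine, not MOD3D's), each frame can be described by its own 3x4 projection matrix that maps scene points to pixels:

    import numpy as np

    class FrameCamera:
        """One independently calibrated camera per frame: no shared
        intrinsics or pose are assumed between frames."""
        def __init__(self, P):
            self.P = np.asarray(P, dtype=float)   # 3x4 projection matrix

        def project(self, point_3d):
            """Map a 3D scene point to pixel coordinates."""
            X = np.append(point_3d, 1.0)          # homogeneous coordinates
            u, v, w = self.P @ X
            return u / w, v / w                   # perspective division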

Another constraint is that the site allows very few setup steps before taking the images. This translates into the requirement of having only a single, simple reference object (a cube on a pole) inside the area.
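
With only the cube available as a reference, the calibration of each frame has to be computed from the known 3D positions of the cube's corners and the pixels where they appear in that frame. The sketch below uses the classic DLT (Direct Linear Transform) method as an illustration; it is not necessarily the algorithm MOD3D implements. The resulting matrix is the P used by the FrameCamera sketch above.

    import numpy as np

    def calibrate_dlt(points_3d, points_2d):
        """Estimate a 3x4 projection matrix from at least six known 3D
        points (e.g. corners of the reference cube) and their measured
        image positions. A real pipeline would also normalize the data
        and refine the result non-linearly."""
        A = []
        for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 4)   # smallest singular vector, up to scale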

The images also contain a lot of noise that must be filtered out before extracting reference points and object contours. Even after filtering, the algorithms must be robust and cope with imprecise data and misleading measurements (outliers).
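
Robustness against outliers is typically obtained with a consensus scheme such as RANSAC: fit a candidate model to small random subsets of the measurements and keep the candidate that the largest number of measurements agrees with. A generic sketch (fit and error are caller-supplied placeholders, not MOD3D functions):

    import random

    def ransac(data, fit, error, sample_size, threshold, iterations=500):
        """Return the model supported by the largest set of inliers.
        fit(subset) builds a candidate model; error(model, datum) scores
        one measurement against it."""
        best_model, best_inliers = None, []
        for _ in range(iterations):
            subset = random.sample(data, sample_size)
            model = fit(subset)
            inliers = [d for d in data if error(model, d) < threshold]
            if len(inliers) > len(best_inliers):
                best_model, best_inliers = model, inliers
        return best_model, best_inliers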

The area under investigation typically measures about 2 by 2 meters. The organisms to be modeled are between 10 and 50 centimeters in size. Measurements will be taken twice a year, with no guarantee that the reference object will have the same position and orientation in two successive reconstruction sessions.
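
Since the reference object will not sit in the same pose in two sessions, comparing two reconstructions requires first aligning them, for example with a least-squares rigid registration of corresponding reference points (Kabsch algorithm). This is a sketch of that standard technique, not necessarily how MOD3D performs the comparison:

    import numpy as np

    def rigid_align(points_a, points_b):
        """Rotation R and translation t that best map points_a onto
        points_b in the least-squares sense (Kabsch algorithm)."""
        A, B = np.asarray(points_a, float), np.asarray(points_b, float)
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                 # guard against reflections
        t = cb - R @ ca
        return R, t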

Reconstruction overview

There are three steps in a reconstruction session: acquisition of the images underwater, calibration of the cameras from the reference object, and reconstruction of the 3D models of the organisms.

Various measurements can then be made on the reconstructed object models: volumes, distances, and so on. Alternatively, the reconstructed scene can be compared with a scene acquired some time before, or exported to a VRML file to be displayed on the project Web page or exchanged with other biologists.
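
If the reconstructed objects are stored as voxel occupancy grids, which is the natural output of a silhouette-based reconstruction, a measurement such as volume reduces to counting occupied voxels. A small illustrative sketch (the representation is an assumption, not a description of MOD3D's internal data structures):

    import numpy as np

    def voxel_volume(occupancy, voxel_size):
        """Volume in cubic metres of an object stored as a boolean
        occupancy grid with cubic voxels of edge voxel_size metres."""
        return np.count_nonzero(occupancy) * voxel_size ** 3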

Reconstruction session using MOD3D

The MOD3D working session is logically subdivided into several steps, each of which adds some new data needed for the reconstruction. The entire procedure is semi-automatic and drives the user through the steps needed to gather all the data required by the reconstruction algorithms. Visualization is employed in MOD3D to let the user verify the quality of the computed data at each step. Keep in mind, too, that the important parts of MOD3D are not visible to the user.

Start MOD3D
Select the source, which can be a previously saved session or a new set of images
Select the images to be used
Filter the images
Extract reference points and calibrate the cameras
Verify visually that the reconstructed cameras are consistent
Extract object silhouettes
Extract ground reference points
Reconstruct object bounding boxes and globally refine the calibration
Reconstruct the objects (a shape-from-silhouette sketch follows this list)
Now use the reconstructed scene to…
…measure the reconstructed objects
…or compare the scene with a previous one
…or export the scene to a Web page as a VRML scene
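
The silhouette and bounding-box steps above suggest a shape-from-silhouette style of reconstruction. The sketch below shows the basic idea (voxel carving) under that assumption, reusing the hypothetical FrameCamera class from the calibration example; it is an illustration, not MOD3D's actual algorithm.

    import numpy as np

    def carve(bounding_box, resolution, cameras, silhouettes):
        """Keep a voxel only if it projects inside the object silhouette
        in every calibrated view. bounding_box is ((x0, y0, z0),
        (x1, y1, z1)) in scene units; cameras is a list of FrameCamera;
        silhouettes is a list of boolean images, one per camera."""
        (x0, y0, z0), (x1, y1, z1) = bounding_box
        xs = np.linspace(x0, x1, resolution)
        ys = np.linspace(y0, y1, resolution)
        zs = np.linspace(z0, z1, resolution)
        occupancy = np.ones((resolution,) * 3, dtype=bool)
        for i, x in enumerate(xs):
            for j, y in enumerate(ys):
                for k, z in enumerate(zs):
                    for cam, sil in zip(cameras, silhouettes):
                        u, v = cam.project((x, y, z))
                        u, v = int(round(u)), int(round(v))
                        if not (0 <= v < sil.shape[0] and
                                0 <= u < sil.shape[1] and sil[v, u]):
                            occupancy[i, j, k] = False
                            break
        return occupancy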