Welcome to the VISAS project

The Underwater 3D Reconstruction service focuses on the integration of optical and acoustic techniques for the generation of multi-resolution textured 3D models of underwater archaeological sites. Multibeam echosounders (MBES), usually employed to generate bathymetric maps in archaeological contexts, can acquire a large amount of data at long range and in poor visibility, but the results have low resolution and carry no color information. Optical systems, in contrast, are better suited to close-range acquisitions and provide high-resolution, accurate 3D data and textures, but their results are strongly affected by visibility conditions. The integration of 3D data captured by these two types of systems is therefore a promising technique in underwater applications, as it allows large and complex scenes to be modeled in a relatively short time. The method proposed in VISAS exploits the high-resolution data obtained from photogrammetric techniques and the latest techniques for the construction of acoustic microbathymetric maps to build three-dimensional representations that combine the resolution of optical sensors with the precision of acoustic bathymetric surveying. The method makes it possible to obtain a complete representation of the underwater scene and to geo-localize the optical 3D model using the acoustic bathymetric map as a reference.


The main steps of the proposed methodology are shown in Figure 2:


Figure 2: Processing pipeline of acoustic and optical data.


The integrated survey is carried out after a preliminary inspection of the site, whose purpose is to locate the areas of greatest archaeological importance (the only ones that will be acquired with the optical sensors).

Subsequently, the optical and acoustic data are merged using a target-based registration approach. This technique relies on detecting homologous geometric entities (features) in the two representations and then aligning them. For this purpose, we used man-made targets placed on the seabed. Since the reflective properties of the optical and acoustic signals vary with the materials used, we opted to exploit the high acoustic reflectivity of air in water and designed custom opto-acoustic markers built from aluminum structures covered with bubble wrap.
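As a concrete illustration of the target-based registration step, the sketch below estimates the rigid transform that maps the optical marker centers onto their acoustic counterparts with the closed-form Kabsch/Umeyama solution. The function name and the marker coordinates are illustrative placeholders, not part of the VISAS software.

```python
import numpy as np

def rigid_transform_from_targets(optical_pts, acoustic_pts):
    """Estimate rotation R and translation t mapping optical marker centers
    onto their acoustic counterparts (least-squares, Kabsch/Umeyama, no scale)."""
    c_opt = optical_pts.mean(axis=0)            # centroid of optical markers
    c_aco = acoustic_pts.mean(axis=0)           # centroid of acoustic markers
    H = (optical_pts - c_opt).T @ (acoustic_pts - c_aco)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_aco - R @ c_opt
    return R, t

# Placeholder marker coordinates: local optical frame vs. geo-referenced MBES frame
optical_markers = np.array([[0.2, 1.1, -3.0], [4.8, 0.9, -3.2], [2.5, 5.0, -2.7]])
acoustic_markers = np.array([[310.4, 122.7, -18.1], [315.0, 122.4, -18.3], [312.7, 126.6, -17.8]])
R, t = rigid_transform_from_targets(optical_markers, acoustic_markers)
```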

Thanks to the accurate depth measurements obtained with the high-frequency MBES equipment (with positioning errors on the order of centimeters), the geo-referencing of the optical point cloud ends up being a by-product of the registration step.
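A minimal sketch of this by-product: applying the same rigid transform estimated from the markers to every optical point moves the whole photogrammetric cloud into the geo-referenced MBES frame. The arrays below are stand-ins so the snippet runs on its own.

```python
import numpy as np

# Placeholders: in practice R and t come from the target-based registration step.
R, t = np.eye(3), np.zeros(3)
optical_cloud = np.random.rand(1000, 3)        # stand-in for the photogrammetric point cloud
geo_cloud = optical_cloud @ R.T + t            # every point now expressed in the MBES geo-frame
```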

The last steps of the process consist of meshing and texturing the opto-acoustic point cloud of the underwater archaeological site. The meshing step is carried out using dedicated software, which creates the mesh with an efficient multi-resolution algorithm and further refines the model using the point cloud as a reference, so that the reconstruction proceeds in a coarse-to-fine fashion.
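The dedicated meshing software is not named here; as a rough stand-in, the sketch below uses Open3D's Poisson surface reconstruction, which is likewise an octree-based multi-resolution method, to show how a coarse mesh and a finer refinement can be extracted from the same registered point cloud. File names and parameters are assumptions.

```python
import open3d as o3d

# Registered opto-acoustic point cloud (hypothetical file name).
pcd = o3d.io.read_point_cloud("optoacoustic_cloud.ply")
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

# Coarse-to-fine behavior: a low octree depth yields a coarse mesh,
# a higher depth refines the surface against the same point cloud.
coarse_mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=6)
fine_mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=10)

o3d.io.write_triangle_mesh("site_mesh.ply", fine_mesh)
```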

To place textures on the 3D model, we use a technique based on the projection and blending of 2D images onto the 3D surface. In particular, since the camera poses are known as an output of the optical 3D reconstruction process, we directly map the high-resolution images onto the portion of the model surface that represents the archaeological remains. For this purpose, we select a subset of the images, because the averaging of neighboring values performed during blending works better over largely overlapping areas (it reduces blurring).
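To make the projection step concrete, the sketch below projects a 3D vertex into one image given its known camera pose and intrinsics; in the actual pipeline, the colors sampled this way from the selected overlapping images are then blended. All numeric values are illustrative placeholders.

```python
import numpy as np

def project_vertex(X, K, R, t):
    """Project a world-frame 3D vertex X into an image with intrinsics K
    and world-to-camera pose (R, t); returns pixel (u, v) or None if behind."""
    Xc = R @ X + t                     # vertex expressed in the camera frame
    if Xc[2] <= 0:                     # behind the camera: not visible
        return None
    uvw = K @ Xc
    return uvw[:2] / uvw[2]            # perspective division -> pixel coordinates

# Placeholder intrinsics and pose for one of the selected images
K = np.array([[1200.0, 0.0, 960.0],
              [0.0, 1200.0, 540.0],
              [0.0, 0.0, 1.0]])
R_cam, t_cam = np.eye(3), np.array([0.0, 0.0, 2.0])
uv = project_vertex(np.array([0.1, -0.2, 0.5]), K, R_cam, t_cam)
```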

The low-resolution polygonal mesh of the seabed is in turn obtained from the acoustic bathymetry and textured with a tile-based texture mapping approach, which only requires a few sample texture tiles to be provided instead of a single large texture.
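A minimal sketch of one way such tiling can be set up, assuming a top-down planar projection of the seabed mesh: UV coordinates are derived from the horizontal vertex coordinates and wrapped so that a small sample tile repeats across the surface. Vertex values and tile size are placeholders.

```python
import numpy as np

def tiled_uv(vertices, tile_size=1.0):
    """Planar top-down UVs that repeat a small seabed tile every tile_size
    meters; the fractional part keeps each UV inside [0, 1)."""
    uv = vertices[:, :2] / tile_size   # project (x, y) onto the horizontal plane
    return uv - np.floor(uv)           # wrap so the same small tile repeats

# Placeholder bathymetric vertices (x, y, depth)
seabed_vertices = np.array([[0.3, 0.7, -18.2],
                            [2.4, 1.1, -18.5],
                            [5.9, 3.8, -19.0]])
uvs = tiled_uv(seabed_vertices, tile_size=2.0)
```

In a real renderer, the wrapping is usually left to the texture repeat mode rather than applied to the UV coordinates themselves.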