Welcome to the VISAS project
The Underwater 3D Reconstruction service focuses on the integration of optical and acoustic techniques for the generation of multi-resolution textured 3D models of underwater archaeological sites. Multibeam echosounders (MBES), usually employed for the generation of bathymetric maps in archaeological contexts, allow for acquiring a great amount of data at long distances and in the presence of poor visibility, but the results have low resolution and contain no color information. Optical systems, in contrast, are better suited for close-range acquisitions and allow for gathering high-resolution, accurate 3D data and textures, but the results are strongly influenced by visibility conditions. Therefore, the integration of 3D data captured by these two types of systems is a promising technique in underwater applications, as it allows for modeling large and complex scenes in a relatively short time. The method proposed in VISAS exploits the high-resolution data obtained from photogrammetric techniques and the latest techniques for the construction of acoustic microbathymetric maps to build three-dimensional representations that combine the resolution of optical sensors with the precision of acoustic bathymetric surveying. The method allows for obtaining a complete representation of the underwater scene and for geo-localizing the optical 3D model using the acoustic bathymetric map as a reference.
The main steps of the proposed methodology are shown in Figure 2:
Figure 2: Processing pipeline of acoustic and optical data.
After a first inspection of the site, carried out in order to localize the areas of greatest archaeological importance (the only ones that will be acquired with optical sensors), the integrated survey is performed.
Subsequently, the optical and acoustic data are merged using a target-based registration approach. This technique is based on the detection of homologous geometric entities (features) in the two representations and their subsequent alignment. For this purpose, we used man-made targets placed on the seabed. Since the reflective properties of optical and acoustic signals vary according to the materials used, we opted to leverage the high acoustic reflectivity of air in water and designed custom opto-acoustic markers built from aluminum structures covered with bubble wrap.
Using the accurate depth measurements (with positioning errors on the order of centimeters) obtained with the high-frequency MBES equipment, the geo-referencing of the optical point cloud turns out to be a by-product of the registration step.
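The alignment of the two representations can be illustrated with a minimal sketch: given the 3D centroids of the opto-acoustic markers as detected in both the optical and the acoustic point clouds, the rigid transform mapping one onto the other can be estimated in closed form (here with the standard SVD-based Kabsch algorithm; the function name and the use of marker centroids as correspondences are illustrative, not a description of the exact VISAS implementation).

```python
import numpy as np

def register_targets(optical_pts, acoustic_pts):
    """Estimate the rigid transform (R, t) that maps corresponding
    optical marker centroids onto their acoustic counterparts.
    Both inputs are (N, 3) arrays of matched 3D points."""
    mu_o = optical_pts.mean(axis=0)
    mu_a = acoustic_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (optical_pts - mu_o).T @ (acoustic_pts - mu_a)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_a - R @ mu_o
    return R, t
```

Because the acoustic map is itself geo-referenced, applying the recovered (R, t) to the whole optical point cloud also geo-references it, which is the by-product mentioned above.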
The last steps of the process consist of meshing and texturing the opto-acoustic point cloud of the underwater archaeological site. The meshing step is carried out using dedicated software, which creates a mesh with an efficient multi-resolution algorithm and then further refines the model using the point cloud as a reference, so that the reconstruction proceeds in a coarse-to-fine fashion.
To place textures on the 3D model, we use a technique based on the projection and blending of 2D images onto the 3D surface. In particular, since the camera poses are known from the optical 3D reconstruction process, we directly map the high-resolution images onto the portion of the model surface representing the archaeological remains. For this purpose, we select a subset of images, since the averaging of neighboring values during blending works better when performed on largely overlapping areas, as this reduces blur.
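The projection step underlying this texturing relies on the standard pinhole camera model: each surface point is mapped into an image using the intrinsics and the pose recovered during reconstruction. A minimal sketch follows (world-to-camera pose convention, no lens distortion; the function name is illustrative).

```python
import numpy as np

def project_point(X, K, R, t):
    """Project a 3D world point X onto the image plane of a camera
    with intrinsic matrix K and pose (R, t) in world-to-camera
    convention. Returns pixel coordinates (u, v), or None if the
    point lies behind the camera."""
    Xc = R @ X + t           # world -> camera coordinates
    if Xc[2] <= 0:
        return None          # not visible: behind the image plane
    uvw = K @ Xc             # apply pinhole intrinsics
    return uvw[:2] / uvw[2]  # perspective division
```

In a blending scheme, each mesh vertex is projected into every selected image this way, and the sampled colors are averaged with weights (e.g., by viewing angle) to produce the final texture.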
The low-resolution polygonal mesh of the seabed is in turn obtained from the acoustic bathymetry and textured with a tile-based texture-mapping approach, which requires only a few sample texture tiles instead of a single large texture.
One of the main objectives of the VISAS project concerns the development of virtual reality systems able to provide instructions and information in a playful and educative manner in order to improve the visitor’s experience of underwater archaeological sites by making it more interesting, charming, and effective.
The Virtual Dive Experience service offers two possible embodiments of the VR system, each characterized by its devices and by the levels of immersion, interaction, and presence it provides (Figure 3).
The first architecture offers a semi-immersive visualization by means of a full-HD monitor based on passive 3D technology. Passive technology was preferred over active because active 3D glasses are expensive and require batteries, whereas passive 3D glasses are inexpensive, lighter, and more comfortable. Users interact with the system through a multi-touch tablet whose interface provides all the input functionalities needed to explore the 3D environment and access the multimedia data.
The second architecture offers an immersive experience through head-mounted display (HMD) technology. The HMD isolates the user from the distractions of the physical environment and covers the entire field of view, including peripheral space. The user navigates the virtual environment by moving his/her head and interacting with a joystick. Compared to the first architecture, which relies on monitors for visualization, in the immersive environment users receive audio content instead of visual information when interacting with 3D objects and Points of Interest (POIs).
In particular, two different software interfaces have been implemented for the VR system, targeting tourists and divers respectively.
The first one allows users to live a virtual experience and learn both general information and historical-cultural content related to the specific archaeological site. Users can explore the 3D reconstruction of the underwater site and receive information about its submerged exhibits and structures. In particular, information is provided on archaeological peculiarities related to materials and construction techniques, but flora and fauna are also described, with particular attention to their interaction with the submerged artifacts.
The second one allows divers to plan in detail the operations and itinerary that will later be carried out in the underwater environment. This is a very effective and innovative reinterpretation of the dive-planning stage that precedes each scuba dive session. Dive planning provides important instructions for technical and safety purposes, but it is often a tough activity that can appear boring and demanding, especially to recreational divers. Furthermore, dive planning may be complex, and in some cases the process may have to be repeated several times before a satisfactory plan is achieved. The implemented dive-planning software, running on the VR system, combines the educational purpose with playful activities able to engage all kinds of users psychologically and emotionally.
The Augmented Diving service is intended for divers who are going to visit the underwater site, providing them with a virtual guide that supplies specific information about the artifacts and the area they are visiting. The service is based on a tablet equipped with a waterproof case and an integrated system for acoustic localization and inertial navigation.
The underwater positioning tools currently available on the market are based on very expensive acoustic communication systems, which estimate the position of the receiver by computing the distance from at least three fixed transmitters (LBL - Long Baseline) or use a single transmitter and an array of receivers (SBL/USBL - Short/Ultra-Short Baseline).
The goal of the VISAS project is to build a low-cost underwater positioning and orientation system composed of one or more fixed beacons placed on the seabed and an underwater tablet equipped with an acoustic modem (LBL technique, see Figure 4). In order to improve accuracy and increase robustness in case of signal loss from one or more beacons, the tablet is also equipped with an inertial platform and a depth sensor. The data coming from the various sensors are processed through data-fusion and error-estimation algorithms.
The tracking system on the tablet sends a query to the beacons and computes the distance from each of them. These data are used by the data-fusion algorithm to correct the position estimate obtained through the inertial system and the pressure sensor. The navigation software receives this information and shows the location of the tablet on a 3D map of the underwater archaeological site.
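The core geometric step can be sketched as follows: given the ranges to the fixed beacons and the depth fixed by the pressure sensor, the tablet's horizontal position is the least-squares solution of a trilateration problem (here solved with a simple Gauss-Newton iteration). This is a minimal sketch under stated assumptions; the actual VISAS data-fusion and error-estimation algorithms also incorporate the inertial platform, which is omitted here, and the function name is illustrative.

```python
import numpy as np

def lbl_fix(beacons, ranges, depth, x0=None, iters=20):
    """Estimate the tablet's horizontal position (x, y) from acoustic
    ranges to fixed seabed beacons (LBL), with the z coordinate fixed
    by the pressure sensor. beacons is an (N, 3) array of known
    beacon positions, ranges an (N,) array of measured distances."""
    xy = beacons[:, :2].mean(axis=0) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        p = np.array([xy[0], xy[1], depth])
        diffs = p - beacons                # beacon -> tablet vectors
        d = np.linalg.norm(diffs, axis=1)  # predicted ranges
        r = d - ranges                     # range residuals
        J = diffs[:, :2] / d[:, None]      # Jacobian w.r.t. (x, y)
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        xy = xy - step                     # Gauss-Newton update
        if np.linalg.norm(step) < 1e-9:
            break
    return xy
```

With three or more well-spread beacons the iteration converges in a few steps; when a beacon signal is lost, the same least-squares structure degrades gracefully, which is where the inertial and depth measurements help constrain the solution.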
The user interface of the augmented navigation system guides the diver along the selected track inside the archaeological site. Using the tablet's position and orientation, the app shows a 3D map representing the environment around the diver, adding useful information about underwater artifacts and structures. Moreover, some additional data are displayed: system status, battery charge, water temperature, and depth. The diver can also change the view mode or open the camera to shoot a photo. The photos are automatically geo-referenced.
Figure 4: VISAS Augmented diving system