Sunday, 16 September 2012

Converting pictures into a 3D mesh with PPT, MeshLab and Blender

Note: please also read the companion article, with important complementary technical information about the technique, the place, and the way the photographs were taken.

Structure from Motion (SfM) is a powerful technique that allows us to convert a sequence of pictures into a point cloud.

MeshLab is a useful 3D scanning tool, under constant development, that can be used to reconstruct a point cloud into a 3D mesh.

Blender is one of the most popular open-source modeling and animation packages, with an intuitive UV mapping workflow.

Combined, the three programs give us a complete picture-scanning solution.

The process will be described only superficially, for readers who already have some knowledge of the tools used in this reconstruction.

First of all, a group of pictures was needed; it was converted into a point cloud with the Python Photogrammetry Toolbox.

The pictures were taken without flash. This makes the process harder later on, when the image is needed as a reference to create the relief of the surface.

MeshLab was used to convert the point cloud into a 3D mesh with Poisson reconstruction.
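Poisson reconstruction needs oriented per-point normals, which MeshLab can compute before running the filter. The core idea of that normal estimation can be sketched with plain NumPy (a hypothetical helper, not MeshLab's actual code): fit a plane to each point's neighborhood with PCA and take the direction of least variance as the normal.

```python
import numpy as np

def estimate_normal(points):
    """Estimate the normal of a patch of points via PCA: the eigenvector
    of the covariance matrix with the smallest eigenvalue points away
    from the fitted plane."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]

# Points lying on the z = 0 plane:
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                [1, 1, 0], [0.5, 0.5, 0]], dtype=float)
normal = estimate_normal(pts)  # points along the z axis (up to sign)
```

MeshLab does this per point over a k-nearest-neighbor patch and then orients the normals consistently before Poisson is applied.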

The surface was painted with vertex colors.

The 3D mesh and the point cloud were imported into Blender.

The point cloud was imported because it carries the positions of the cameras (the orange points).
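PPT's reconstruction step is built on Bundler, which stores each camera as a rotation matrix R and a translation vector t; the world-space camera position (one of those orange points) is C = -Rᵀt. A minimal NumPy sketch of that recovery:

```python
import numpy as np

def camera_center(R, t):
    # Bundler writes camera extrinsics as rotation R and translation t;
    # the camera position in world space is C = -R^T t
    return -R.T @ t

# A camera with identity rotation placed at (0, 0, 5):
R = np.eye(3)
t = np.array([0.0, 0.0, -5.0])
C = camera_center(R, t)
```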

Using these points, it was possible to place the camera in the right position.

The vanishing points were matched using the focal distance of the camera. But, as we can see in the image above, the mesh didn't match the reference picture.
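Bundler-style SfM reports the focal length in pixels, while Blender's camera settings expect millimetres; the conversion only needs the image width and an assumed sensor width (36 mm full-frame below, a guess when EXIF data is missing):

```python
def focal_px_to_mm(f_px, image_width_px, sensor_width_mm=36.0):
    # same ratio on both sides of the pinhole:
    # f[mm] / sensor_width[mm] = f[px] / image_width[px]
    return f_px * sensor_width_mm / image_width_px

f_mm = focal_px_to_mm(2400.0, 3000.0)  # 28.8 mm on a full-frame sensor
```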

To aim the camera, it was necessary to orbit it manually.

Blender has a good set of UV mapping tools. It is possible to use only the region of interest of the picture to make the final texture map, as we can see in the infographic above.

So, in this process, each viewpoint texture was projected using one picture. Above we can see the original image on the right and the mesh with the projected texture on the left. It looks perfect because the viewpoint of the camera is the same as the viewpoint of the picture.
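Projecting a picture from the camera's viewpoint is essentially what Blender's "Project from View" UV unwrap does: each vertex, expressed in camera space, is perspective-divided and normalized into the [0, 1] UV square. A rough NumPy sketch (ideal pinhole model, no lens distortion, hypothetical helper):

```python
import numpy as np

def project_from_view(verts_cam, f_px, width, height):
    """Map camera-space vertices (x, y, z with z > 0 in front of the
    camera) to UV coordinates via a pinhole projection."""
    x = verts_cam[:, 0] / verts_cam[:, 2]  # perspective divide
    y = verts_cam[:, 1] / verts_cam[:, 2]
    u = x * f_px / width + 0.5             # normalize, center at (0.5, 0.5)
    v = y * f_px / height + 0.5
    return np.stack([u, v], axis=1)

# A vertex on the optical axis lands in the middle of the picture:
uv = project_from_view(np.array([[0.0, 0.0, 2.0]]), 2400.0, 3000.0, 2000.0)
```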

But if the 3D scene is orbited, we can verify that the projection works well only from that one viewpoint.

So a good way to build the final texture is to use the viewpoint of each picture to paint only its region of interest.

When the scene is orbited, we can verify that only the region of interest was painted.

The surface has to be painted from several viewpoints, completing the entire texture bit by bit.
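Painting the texture bit by bit from several viewpoints amounts to masked compositing: each projected picture contributes only its region of interest. A toy NumPy version (Blender's texture paint does this internally; the helper name is hypothetical):

```python
import numpy as np

def paint_region(texture, projected, mask):
    # copy the projected picture into the texture only where the boolean
    # mask (this viewpoint's region of interest) is True
    return np.where(mask[..., None], projected, texture)

tex = np.zeros((4, 4, 3))            # empty texture map
view = np.ones((4, 4, 3))            # picture projected from one viewpoint
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True                   # paint only the top half
tex = paint_region(tex, view, mask)
```

Repeating this for each viewpoint fills in the whole texture, one region at a time.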

We can see the finished process above. It isn't necessary to use all the pictures taken to build the final texture; depending on the complexity of the model, only four images may be needed to complete it.

Now we can compare the texture process with the vertex-paint process. In this case, the texture process was the more interesting one to use.

The resulting mesh has a high level of detail and can nevertheless be viewed in real time (see the video at the top).

To increase the mesh quality, we can use the Displace modifier in Blender. It projects the relief onto the surface using the texture as a reference.
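The Displace modifier offsets every vertex along its normal by Strength × (texture value − Midlevel); with Midlevel at its default of 0.5, mid-grey pixels leave the surface untouched. Sketched in NumPy:

```python
import numpy as np

def displace(verts, normals, heights, strength=0.1, midlevel=0.5):
    # offset each vertex along its normal, scaled by the height texture:
    # v' = v + n * strength * (h - midlevel)
    return verts + normals * (strength * (heights - midlevel))[:, None]

verts = np.array([[0.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0]])
new_verts = displace(verts, normals, np.array([1.0]))  # -> z = 0.05
```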
The final result:

This article was made possible thanks to the kindness of Dott.ssa Paola Matossi L'Orsa and Dott.ssa Sara Caramello, and with the permission of the "Fondazione Museo delle Antichità Egizie di Torino".
This work is licensed under a Creative Commons Attribution 4.0 International License.