Thursday 1 November 2012

Taung Project: Recovering the missing parts of the skull


As mentioned in previous articles, we are working on the Taung Project, which involves the reconstruction of a 2.5-million-year-old fossil: not just reconstructing the face with soft tissue, but rebuilding the entire skull as well.

The most important aspect of this project is the technology used, because all the results will be shared with the community. And the ‘community’ means everyone.


This article describes the techniques used to recover the missing parts of the Taung child skull.

It's important to state at this point that all members of the Arc-Team work hard in their own professions, so at times one of us will publish an article before another, whenever they have free time to share their knowledge. Having said that, this article was written during someone's free time, in the hope that it might be useful to others who read this blog. Below you'll find a description of how the skull was scanned in 3D.

Describing the process



The skull was scanned in great detail by Luca Bezzi. The model was then prepared for importing into Blender.


Unfortunately (or fortunately, for the nerds), a significant part of the skull was missing, as indicated by the purple lines. For a complete reconstruction, the missing parts needed to be recovered.

The first step was to recover the parts using a mirrored mesh in Blender 3D. You can see a time-lapse video of the process here.
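Blender's Mirror modifier does this interactively, but the underlying idea can be sketched in plain Python. The vertex coordinates below are hypothetical, chosen only to illustrate reflecting the surviving side across the sagittal plane:

```python
# Sketch of what a mirror reconstruction does: reflect the surviving
# vertices across the sagittal (x = 0) plane to rebuild the missing side.
# A real skull mesh has thousands of vertices; these three are invented.

def mirror_x(vertices):
    """Return the mirror image of each (x, y, z) vertex across x = 0."""
    return [(-x, y, z) for (x, y, z) in vertices]

surviving_side = [(1.0, 0.5, 2.0), (0.8, -0.3, 1.5), (0.0, 0.0, 2.2)]
rebuilt_side = mirror_x(surviving_side)

# Vertices that lie exactly on the midline (x = 0) map onto themselves,
# which is what lets the two halves join seamlessly.
print(rebuilt_side)  # [(-1.0, 0.5, 2.0), (-0.8, -0.3, 1.5), (0.0, 0.0, 2.2)]
```

This only works because a skull is roughly bilaterally symmetric, which is why the mirroring covered so much of the missing area but not all of it.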

This was sufficient to cover a large part of the missing area.

But even with the mirroring, a few parts were still missing.
How can this be solved?


One option was to use CT scans of primates to reconstruct the missing parts of the mandible and other areas.

Naturally, the CT scans chosen were those of infant and juvenile primates.

You can find the tomographies at this link. They can be used for research purposes. To download the files, you'll have to create an account.

The mandible is that of a juvenile chimpanzee (Pan troglodytes), viewed in InVesalius.

The reconstruction of the CT scan was exported (.ply) and imported into Blender.

And placed on the skull.


But, beyond being larger overall, Australopithecus didn't have such big canines.

Using the Blender sculpting tools, it was possible to deform the teeth to make them appear less “carnivorous”…


…and make them compatible with the Taung skull.

To complete the top, the cranium of an infant chimpanzee (Pan troglodytes) was chosen.

Following the same process as before, the reconstructed mesh was imported into Blender…


 …and made compatible with the Taung cranium.

The overlapping portion of the cranium was deleted.

The same was done with the mandible.

The skull was complete, but the mesh was crudely structured because of the process of combining different meshes.

The resulting mesh was very dense, as you can see in the orange wireframe part.

Why didn’t we use the Decimate tool? Because the computer (a Core i5) often crashed when it was used.

Why didn’t we make a manual reconstruction of the mesh? To avoid a subjective reconstruction.

How was this solved?

A fake tomography needed to be created in order to reconstruct a clean mesh in InVesalius. How? We know that when you illuminate an object, the surface reflects the light, but the inside is totally dark because no light reaches it.

So, since Blender allows the user to adjust the camera's clipping start, you can set up the camera to "cut" through space and see inside the objects.
The background has to be colored white, so only the dark interior of the skull appears.

To invert the colors (because bone has to appear white in a CT scan), you can use Blender's compositing nodes…
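What the Invert node does amounts to a simple value flip per pixel. For an 8-bit grayscale image it can be sketched like this, using a toy 2×2 "slice" invented for illustration:

```python
# Blender's compositor Invert node flips each pixel value: the dark
# interior becomes bright "bone" and the white background becomes black.
# For an 8-bit grayscale image this is simply 255 - value.

def invert(image):
    """Invert an 8-bit grayscale image given as nested lists."""
    return [[255 - px for px in row] for row in image]

slice_2x2 = [[255, 0],    # white background, dark interior
             [10, 250]]
print(invert(slice_2x2))  # [[0, 255], [245, 5]]
```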

…and render an animated image sequence (frames 1 to 120) of 120 slices.
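The slicing itself boils down to stepping the camera's near clipping plane a fixed distance deeper on each frame. A minimal sketch of that arithmetic, with a hypothetical depth of 120 mm chosen only for illustration:

```python
# Sketch of the "fake tomography" slicing: each rendered frame moves the
# camera's near clipping plane one fixed step deeper into the skull.
# The total depth of 120 mm is a hypothetical value for illustration.

def clip_starts(total_depth, n_slices, offset=0.0):
    """Near-clipping-plane position for each of the n_slices frames."""
    step = total_depth / n_slices
    return [offset + step * i for i in range(n_slices)]

positions = clip_starts(total_depth=120.0, n_slices=120)
print(len(positions))  # 120
print(positions[:3])   # [0.0, 1.0, 2.0]

# In Blender, these values would be keyframed (or driven) onto
# camera.data.clip_start, one per frame of the 120-frame animation.
```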


Using the Python script IMG2DCM, the image sequence was converted into a DICOM series, which was imported into InVesalius and reconstructed as a 3D mesh.

With IMG2DCM, it is possible to set the distances between the DICOM slices manually, but in this case the conversion was made with the default values (which flattens the proportions); the mesh was simply rescaled later on.
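The flattening happens because the default slice spacing doesn't match the real step used when rendering the slices; correcting it afterwards is a single scale factor along the slicing axis. The spacing values below are hypothetical examples, not the ones used in the project:

```python
# When DICOM slices are written with a default spacing instead of the
# real distance between rendered slices, the reconstructed volume comes
# out squashed along its depth axis. The fix is one scale factor.
# Both spacing values here are hypothetical.

def depth_scale(real_spacing_mm, default_spacing_mm):
    """Factor to apply along the slice axis to restore true proportions."""
    return real_spacing_mm / default_spacing_mm

factor = depth_scale(real_spacing_mm=1.0, default_spacing_mm=0.5)
print(factor)  # 2.0 -> stretch the mesh 2x along the slicing axis
```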





The reconstructed mesh was then imported and rescaled to match the original model.


The result is a clean mesh that can be modified with the Remesh modifier to produce an object made of 4-sided faces (quads).

Now, we only needed to use the sculpt tool to "sand" the mesh.


 

To create the texture, the original mesh was used. A description of the technique can be viewed here.

When the mapping was finished, the rendering was done, and this step of the project was completed.

You can download the Collada file (3D) here.

I hope this article was useful and/or interesting for you. The next step is a preliminary 2D reconstruction, as training for making the final 3D model.

See you there…a big hug!


1 comment:

  1. Hey

    really like the 3d work. interesting process.
    thanks for sharing.

    also like the blender CT "scan", great idea.

    regards



This work is licensed under a Creative Commons Attribution 4.0 International License.