 Research
 Open Access
Craniofacial reconstruction based on a hierarchical dense deformable model
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 217 (2012)
Abstract
Craniofacial reconstruction from the skull has been intensively investigated by computer scientists over the past two decades because of its importance for identification. The dominant methods construct the facial surface from the soft tissue thickness measured at a set of skull landmarks. The quantity and position of the landmarks are vital for craniofacial reconstruction, but there is no standard. In addition, it is difficult to locate the landmarks accurately on a dense mesh without manual assistance. In this article, we propose an automatic craniofacial reconstruction method based on a hierarchical dense deformable model. To construct the model, we collected more than 100 head samples with a computerized tomography (CT) scanner. The samples are represented as dense triangle meshes to model the face and skull shapes. As the deformable model demands all samples in a uniform form, a nonrigid registration algorithm is presented to align the samples in point-to-point correspondence. Based on the aligned samples, a global deformable model is constructed, and three local models are constructed from the segmented patches of the eyes, nose, and mouth. For a given skull, the global and local deformable models are matched to it, and the reconstructed facial surface is obtained by fusing the global and local reconstruction results. To validate our method, a face deformable model is constructed and the reconstruction results are evaluated in its coefficient domain. The experimental results indicate that the proposed method performs well for craniofacial reconstruction.
Introduction
Craniofacial reconstruction is an efficient method to obtain a visual appearance of an individual when only the skull and bones remain. The traditional plastic methods [1–3] depend on time-consuming manual work by artists, and the reconstruction result is largely determined by the experience of the practitioner. To reduce reconstruction time and eliminate subjective biases, different computer-aided craniofacial reconstruction methods have been proposed [4–17]. The state of the art of computer-aided craniofacial reconstruction has been comprehensively reviewed in the surveys [18–21]. The soft tissue thickness measured on the skull is the foundation of craniofacial reconstruction. To obtain complete tissue thickness, the head samples are usually measured by different equipment such as computerized tomography (CT), magnetic resonance imaging, and ultrasound scanners. Most computer-aided craniofacial reconstruction methods fit a selected facial template to the target skull according to the average soft tissue thickness at the skull landmarks [4–8]. Others deform a reference skull to match the remaining skull according to skull features such as anthropological points [9], lines [10], and other features [11]; applying an extrapolation of the skull deformation to the face template yields the reconstructed face.
The selection of the template or reference is vital for accurate craniofacial reconstruction. In general, a generic or a specific craniofacial template with similar shape attributes is chosen, but it is difficult to find a suitable reference for every dry skull because of the diversity of skull and face modality. In addition, because of the complex deformation between the reference and the target skull, warping methods must be studied intensively to obtain accurate reconstruction results. Many deformation methods have therefore been proposed to model the nonrigid shape deformation of skull and face, such as radial basis functions (RBF) [22, 23] or, more specifically, thin plate spline (TPS)-based deformation [12, 24, 25], valued for its smoothness. Instead of using a fixed template, the recently proposed statistical craniofacial reconstruction methods [12–17] construct a deformable model from a set of 3D heads by the principal component analysis (PCA) technique. The statistical deformable model can be regarded as a dynamic template for the given skull. The template deformation is a model fitting procedure driven by the difference between the input skull and the template, in which the model parameters are adjusted by an optimization method. The reconstruction result of the deformable model depends on the diversity of samples in the 3D head database: if there are sufficient samples, good reconstruction results can be achieved. The statistical method is therefore regarded as the dominant approach, with great potential for practical application.
Essentially, craniofacial reconstruction aims to infer the face of an unknown skull from knowledge of the skull-face dependency, which is concretely represented as the distribution of tissue thickness over the skull. Most current methods utilize the soft tissue thickness at a set of skull landmarks for craniofacial reconstruction, but this is not an ideal way to model the relationship between face and skull. One reason is that the statistical soft tissue thickness at a set of sparse landmarks is far from enough to reflect the whole distribution of tissue depth. The other reason is that the quantity and position of the landmarks are indefinite: different landmark sets have been proposed for craniofacial reconstruction [26–31], even though there are definite anatomical points in biometrics [32, 33]. Moreover, it is difficult to detect the landmarks accurately on the complex surface of the skull without manual interactive work. To reflect the complete tissue thickness distribution and eliminate the disadvantages of sparse representations, methods that measure tissue depth at all points have been proposed. In these methods, the face and skull are generally represented in dense form. For example, Tu et al. [34] constructed a face space for craniofacial reconstruction from the dense skull and face surfaces extracted from head CT images. Vandermeulen et al. [35] also used dense representations (implicit surfaces) for both skin and skull in craniofacial reconstruction. Pei et al. [22] presented a dense tissue depth image representation for craniofacial reconstruction, namely the tissue map. Because the dense tissue depth methods use more information about the relationship between skull and face, they generally produce better craniofacial reconstruction results. To represent the dense tissue depth exactly, dense point registration of the skull or face is usually required.
Although many registration methods [12, 24, 25, 36] have been proposed to construct correspondence between surfaces and point sets, it remains a challenging problem for further investigation because complex skull meshes contain gross errors and outliers.
For the complex skull and face surfaces, shape variation comprises both global shape and local detail. However, most current craniofacial reconstruction systems take the whole face or skull for shape analysis, so the local features of skull and face are not emphasized. Recent research reveals that a local shape model is better than a global model at representing local shape variation [37, 38]. Inspired by this, we propose a hierarchical craniofacial reconstruction model that integrates the global model with several local models to improve the reconstruction result. To construct the model, the face and skull samples are represented as dense meshes and aligned in point-to-point form by a proposed automatic dense registration algorithm, which yields a fully automatic craniofacial reconstruction method. In addition, to obtain a valid evaluation of the craniofacial reconstruction results, we transform the reconstructed face into the coefficient domain of a face deformable model and use the distance in the coefficient space as the similarity measurement. Compared with current measurements, such as the mean correspondence point distance or the Euclidean distance matrix [12], the proposed measurement is more suitable for face recognition.
The proposed hierarchical craniofacial reconstruction system is composed of four components: data acquisition and preprocessing, the global deformable model, the local deformable models, and result evaluation (the dashed rectangles in Figure 1). In the data acquisition and preprocessing component, skull and face data are acquired by a CT scanner, and the prototypic data are preprocessed to construct 3D skull and face surfaces. To construct the model, all samples are aligned by a proposed nonrigid dense mesh registration algorithm. In the global/local deformable model components, the global/local deformable models are constructed, and the global/local face reconstruction results are obtained by a model-matching procedure. Fusing the global and local reconstruction results gives the final craniofacial reconstruction. In the result evaluation component, the reconstruction result is evaluated and the validity of the proposed method is verified. To integrate the four components into a whole system, some key problems, such as dense mesh registration, the model matching procedure, and mesh segmentation and fusion (the small rectangles in Figure 1), must be solved. In the rest of the article, we give the details of the main components and the solutions to these key problems.
Data acquisition and preprocessing
In order to construct the craniofacial reconstruction model, we have built a head database from CT images. The CT images were obtained by a clinical multislice CT scanner (Siemens SOMATOM Sensation 16) in the affiliated hospital of Shaanxi University of Chinese Medicine, located in western China. More than 100 patients scheduled for preoperative osteotomy surgery gave informed consent to a whole-head scan for scientific research. The images of each subject are stored in DICOM 3.0 format at 512×512 resolution. To obtain complete head data, 250 to 320 slices were captured per person. Most of the patients belong to the Han ethnic group of northern China. In this article, 110 samples are used for the craniofacial reconstruction experiments: 48 female and 62 male subjects, with ages ranging from 20 to 60.
Each prototypic sample in our database consists of a skull surface and its face surface, which are extracted from CT images. The point clouds composing the outer surfaces of the skull and skin are extracted slice by slice (Figure 2a). For the skull, a three-step method is performed on each slice of the CT images. The first step is to find the contour points of the skull with the Sobel operator after filtering out noise. In general, the skull contour has inner and outer sides (Figure 2). In the second step, a circular scanning is implemented to get a rough outer contour: a scanning line is radiated from the center of the image toward the four edges to find the farthest contour point it encounters. The rough contour usually contains some points that do not belong to the outer surface, as shown in Figure 2c. In the last step, these pseudo contour sections are removed and the missing parts are mended. Given a length threshold L, a section is deleted if it is shorter than L pixels; in our experiment, L is set to 10. Because the skull is nonconvex, the rough contour may be disconnected in regions where it should be connected. We adopt an 8-neighborhood boundary tracing approach to connect the points of the rough contour where they are disconnected and obtain the final contour, as shown in Figure 2d. It is easier to find the outer contour of the skin, as the skin contour is simple and generally closed in all CT images (Figure 2e), so only the second step above is needed to get the outer skin contour, as shown in Figure 2f. After retrieving the outer contours of skull and skin from all CT images, the skull and face surfaces are represented as triangle meshes by the marching cubes algorithm [39]. Typically, the raw skin surface consists of about 220,000 points with 450,000 triangles, while the skull surface contains about 150,000 points with 320,000 triangles, as shown in Figure 2g–j. This is dense enough to describe the rich details of the skull and face shapes.
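The circular-scanning step above can be sketched as follows. This is an illustrative reimplementation, not the authors' code: the function name, the ray count, and the assumption that the slice has already been thresholded into a binary bone mask are ours.

```python
import numpy as np

def radial_outer_contour(mask, n_rays=720):
    """Approximate the outer contour of a binary slice mask by casting
    rays from the image centre and keeping the farthest foreground pixel
    hit along each ray (the 'circular scanning' step).

    mask   : 2-D boolean array (True = bone/skin pixel)
    n_rays : number of scanning directions
    Returns a (k, 2) array of (row, col) contour points.
    """
    h, w = mask.shape
    cy, cx = h / 2.0, w / 2.0
    max_r = int(np.hypot(h, w) / 2)
    points = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        hit = None
        for r in range(max_r):                      # walk outward along the ray
            y, x = int(cy + r * dy), int(cx + r * dx)
            if 0 <= y < h and 0 <= x < w:
                if mask[y, x]:
                    hit = (y, x)                    # remember farthest hit so far
            else:
                break
        if hit is not None:
            points.append(hit)
    return np.asarray(points)
```

Because the farthest-hit rule cannot follow concavities, the result is exactly the rough contour of the text, which is why the third (pruning and mending) step is still needed.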
To eliminate the inconsistency of position, pose, and scale caused by data acquisition, all samples are transformed into a uniform coordinate system. The uniform coordinate system is determined by four skull feature points: the left and right porion, the left (or right) orbitale, and the glabella, denoted by $L_p$, $R_p$, $L_o$, and $G$. The three points $L_p$, $R_p$, and $L_o$ determine the Frankfurt plane [40]. The coordinate origin (denoted by $O$) is the intersection of the line $L_pR_p$ with the plane that contains point $G$ and orthogonally intersects $L_pR_p$. We take the line $OR_p$ as the x-axis. The line that contains point $O$ and has the same direction as the normal of the Frankfurt plane is taken as the z-axis, and the y-axis is obtained by the cross product of the z- and x-axes. The scale of the samples is standardized by setting the distance between $L_p$ and $R_p$ to unit length, i.e., every vertex $(x,y,z)$ of the skull and face is replaced by $(\frac{x}{|L_pR_p|},\frac{y}{|L_pR_p|},\frac{z}{|L_pR_p|})$. The uniform coordinate system of skull and face is shown in Figure 2k,l.
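A minimal sketch of this normalization, assuming the four landmark positions are already known; the function name and array conventions are our own, not the article's:

```python
import numpy as np

def head_frame_transform(points, Lp, Rp, Lo, G):
    """Transform vertices into the skull-based coordinate system above:
    origin O = projection of the glabella G onto the line Lp-Rp,
    x-axis toward Rp, z-axis along the Frankfurt-plane normal (plane
    through Lp, Rp, Lo), y = z x x, scale normalised so |LpRp| = 1.
    points : (n, 3) array-like of vertices."""
    points = np.asarray(points, dtype=float)
    Lp, Rp, Lo, G = (np.asarray(v, dtype=float) for v in (Lp, Rp, Lo, G))
    axis_u = (Rp - Lp) / np.linalg.norm(Rp - Lp)
    O = Lp + np.dot(G - Lp, axis_u) * axis_u      # foot of G on line LpRp
    x = axis_u
    n = np.cross(Rp - Lp, Lo - Lp)                # Frankfurt plane normal
    z = n / np.linalg.norm(n)                     # orthogonal to x by construction
    y = np.cross(z, x)
    R = np.stack([x, y, z])                       # rows = new basis vectors
    scale = np.linalg.norm(Rp - Lp)
    return (points - O) @ R.T / scale
```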
Dense registration for skull and face
The original skull and face meshes have different connectivity and different numbers of vertices. To investigate the shape variation of skull and face, the original samples must be registered before model construction. For samples in dense triangle mesh representation, registration builds a point-to-point correspondence according to shape features such as the tip of the nose, the corners of the mouth, and the centers of the eyes. Accurate registration of dense skull and face meshes is challenging because the complex surfaces exhibit nonrigid deformations and big-block outliers. Many methods and algorithms have been presented to solve point or surface registration under nonrigid deformation. The TPS-RPM method [41, 42] incorporated TPS into the framework of iterative closest point (ICP) and adopted soft-assign and deterministic annealing optimization to compute the correspondence between two point sets. As all points of the objects being aligned are used to determine the TPS deformation, and a correspondence matrix whose size is quadratic in the cardinality of the point sets is used to eliminate outliers, this method is not applicable to dense skull and face data with big-block outliers. Hutton et al. [43] proposed an automatic registration of 3D faces based on a dense face model, but the model construction procedure requires a set of landmarks to be picked manually. The coherent point drift method [44] formulated the alignment of two point sets as a probability density estimation problem: the reference point set is represented as Gaussian mixture model centroids and fitted to the target point set by the expectation maximization algorithm. This method does not need landmarks, but it converges slowly for dense objects with large data volumes and easily fails for big-block outliers. In this article, we present a two-step method to solve the dense registration of skull and face. The first step aligns two samples by a nonrigid registration method based on the TPS transformation [45]. The second step improves the registration of all samples by a group registration method based on a linear combination model. The proposed method runs automatically and performs well in the presence of big-block outliers. The whole procedure of our registration method is shown in Figure 3. The details are given in the following sections.
TPSbased nonrigid registration
As the skull and face have complex shapes with nonrigid deformation, traditional rigid registration methods, such as the widely used ICP algorithm [46], are not suitable for this problem, so we adopt a nonrigid registration method for the dense registration of skull and face. The basic idea of the method is shown in Figure 4: instead of constructing the registration between the reference and the target directly, we construct an approaching template for the target skull or face by nonrigid deformation of the reference. As the deformed reference is closer to the target than the original reference, it produces a more accurate alignment. Considering the advantages of TPS deformation, such as its smoothness constraint, simple calculation, and decomposability into affine and nonaffine components, we adopt TPS to represent the nonrigid deformation. The registration between the reference and the target takes two steps. First, the reference is transformed toward the target by the TPS deformation. Then, the point-to-point correspondence is obtained by the closest point searching procedure of ICP. The details of the TPS-based nonrigid registration method are given in the following.
To begin the registration, the reference skull and face must be selected. In general, a head with common shape features is used as the reference; we select a sample with complete data. Considering that craniofacial reconstruction is mainly determined by the front part of the head, the occipital parts of the reference are manually removed to reduce the data volume. The back-removed reference skull and face are shown in Figure 5a,b. There are 36,000 and 40,969 vertices with 70,827 and 81,458 triangles on the reference skull and face, respectively. The other preparatory work for the nonrigid registration is to obtain the controlling points for the TPS transformation. As TPS is a type of interpolation method, the TPS deformation generally depends on a set of corresponding controlling points on the reference and the target. However, it is difficult to obtain many corresponding feature points on face and skull automatically, and locating them manually is time-consuming. To achieve automatic point registration, we use a random method to produce the controlling points for TPS, in which a number of controlling points are generated randomly on the reference and their corresponding points on the target are obtained by the ICP closest point searching method. To get a uniform distribution on the 3D surface, the random controlling points are computed by the farthest point sampling method [47]. The random points on the reference skull and face are shown in Figure 5c,d. As the correspondence obtained by closest point searching for the random points is not exactly the true correspondence of skull and face features, instead of making a one-step transformation from the reference to the target, we perform the TPS transformation in a stepwise procedure. At the same time, to eliminate the influence of inconsistent correspondences of some points, the random points on the reference are updated continually. Integrating this controlling point generation procedure with the TPS deformation, the selected reference skull and face are gradually aligned to the target skull and face.
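The farthest point sampling used to spread the random controlling points can be sketched as follows; this is the standard greedy variant under our own seeding choice, and the implementation in [47] may differ in detail:

```python
import numpy as np

def farthest_point_sampling(verts, m, seed=0):
    """Greedy farthest-point sampling: pick m points that are
    approximately uniformly spread over the vertex set (used here to
    generate the random TPS controlling points on the reference mesh).
    verts : (n, 3) array; m : number of samples. Returns index array."""
    rng = np.random.default_rng(seed)
    n = len(verts)
    chosen = [int(rng.integers(n))]                # random first point
    dist = np.linalg.norm(verts - verts[chosen[0]], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(dist))                 # farthest from current set
        chosen.append(nxt)
        # keep, per vertex, its distance to the nearest chosen point
        dist = np.minimum(dist, np.linalg.norm(verts - verts[nxt], axis=1))
    return np.array(chosen)
```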
For convenience, the selected reference skull or face is denoted by ${S}_{\mathit{\text{ref}}}=\{{P}_{\mathit{\text{rp}}}\mid {P}_{\mathit{\text{rp}}}=({x}_{\mathit{\text{rp}}},{y}_{\mathit{\text{rp}}},{z}_{\mathit{\text{rp}}}),p=1,\dots ,{N}_{1}\}$, and the target, the $i$th sample, by ${S}_{i}=\{{P}_{\mathit{\text{iq}}}\mid {P}_{\mathit{\text{iq}}}=({x}_{\mathit{\text{iq}}},{y}_{\mathit{\text{iq}}},{z}_{\mathit{\text{iq}}}),q=1,\dots ,{N}_{2}\}$, where $N_1$ and $N_2$ are the point counts of $S_{\mathit{\text{ref}}}$ and $S_i$ such that $N_1 \le N_2$. Then the TPS transformation can be regarded as a map from $S_{\mathit{\text{ref}}}$ to $S_i$, denoted by $f(\cdot)$. The corresponding random controlling point sets of $S_{\mathit{\text{ref}}}$ and $S_i$ are denoted by ${M}_{r}=\{{L}_{\mathit{\text{rj}}}\mid {L}_{\mathit{\text{rj}}}=({x}_{\mathit{\text{rj}}}^{\ast},{y}_{\mathit{\text{rj}}}^{\ast},{z}_{\mathit{\text{rj}}}^{\ast}),j=1,\dots ,M\}$ and ${M}_{i}=\{{L}_{\mathit{\text{ij}}}\mid {L}_{\mathit{\text{ij}}}=({x}_{\mathit{\text{ij}}}^{\ast},{y}_{\mathit{\text{ij}}}^{\ast},{z}_{\mathit{\text{ij}}}^{\ast}),j=1,\dots ,M\}$, where $M$ is the number of controlling points. From the definition of TPS, $f(\cdot)$ satisfies the following interpolation conditions:
$f({L}_{\mathit{\text{rj}}})={L}_{\mathit{\text{ij}}},\quad j=1,\dots ,M \qquad (1)$
By TPS theory, the deformation of other noncontrolling points is restricted by the blending energy function in the following form:
where $X=(x,y,z)^{T}$, ${F}_{X}={(\frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z})}^{T}$, and ${I}_{3}={(1,1,1)}^{T}$. It has been proved that TPS can be decomposed into affine and nonaffine components [45]. This fact is generally represented by the following formula:
$f(P)=P\cdot d+K\cdot w \qquad (3)$
where $P\in {S}_{\mathit{\text{ref}}}$ has the homogeneous coordinate $(1,x,y,z)$, $d$ is a 4×4 affine transformation matrix, $K$ is the TPS kernel, a 1×M vector of the form $K=({K}_{1}(P),\dots ,{K}_{M}(P))$ with ${K}_{j}(P)=\parallel P-{L}_{\mathit{\text{rj}}}\parallel$, $j=1,\dots ,M$, and $w$ is an M×4 warping coefficient matrix representing the nonaffine deformation.
To solve the TPS transformation, the matrices $d$ and $w$ must be determined. There are two solutions to this problem, namely the interpolating and non-interpolating methods. In the interpolating case, formula (1) is satisfied. Substituting formula (3) into (1) and confining $w$ to the nonaffine transformation, i.e., ${{M}_{r}^{\prime}}^{T}w=0$, leads to a direct solution for $d$ and $w$ given by the following matrix relation:
$\left(\begin{array}{cc}{K}^{\prime} & {M}_{r}^{\prime}\\ {{M}_{r}^{\prime}}^{T} & 0\end{array}\right)\left(\begin{array}{c}w\\ d\end{array}\right)=\left(\begin{array}{c}{M}_{i}^{\prime}\\ 0\end{array}\right) \qquad (4)$
where ${M}_{r}^{\prime}$ and ${M}_{i}^{\prime}$ are M×4 matrices corresponding to the controlling point sets ${M}_{r}$ and ${M}_{i}$ in homogeneous coordinate form, and ${K}^{\prime}$ is an M×M symmetric matrix representing the spatial relations of ${M}_{r}$, with elements ${k}_{\mathit{\text{uv}}}=\parallel {L}_{\mathit{\text{ru}}}-{L}_{\mathit{\text{rv}}}\parallel$, $u,v=1,\dots ,M$. In the non-interpolating case, formula (1) is not strictly satisfied; instead, the following energy function is minimized to find the optimal solution:
$E(f)=\sum _{j=1}^{M}{\parallel {L}_{\mathit{\text{ij}}}-f({L}_{\mathit{\text{rj}}})\parallel}^{2}+\lambda {E}_{b}(f) \qquad (5)$
where $\lambda$ is the weight controlling the blending component ${E}_{b}(f)$ of formula (2); for a fixed $\lambda$ there is a unique minimum of the energy function. It can be shown that the non-interpolating solution has a parallel form obtained by replacing ${K}^{\prime}$ in formula (4) with ${K}^{\prime}+\lambda I$. As the corresponding controlling points on the reference and target acquired by ICP closest point searching are not exactly correct, the condition in formula (1) is not satisfied, so the non-interpolating method is adopted in this article.
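Under the definitions above, the non-interpolating TPS solve can be sketched as follows. This is our own illustrative sketch: for simplicity it maps directly to 3-D coordinates, so d is 4×3 and w is M×3 rather than the homogeneous 4×4 forms of the text, while the kernel K(r) = ||r|| and the K′ + λI regularization follow formulas (3)–(5):

```python
import numpy as np

def tps_fit(src, dst, lam=0.01):
    """Non-interpolating 3-D TPS: solve the bordered system
    [[K' + lam*I, P], [P^T, 0]] [w; d] = [dst; 0],
    whose last rows enforce the side condition P^T w = 0
    (w carries no affine part).  src, dst : (M, 3) control points."""
    M = len(src)
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=2)  # M x M
    P = np.hstack([np.ones((M, 1)), src])                          # M x 4
    A = np.zeros((M + 4, M + 4))
    A[:M, :M] = K + lam * np.eye(M)
    A[:M, M:] = P
    A[M:, :M] = P.T
    b = np.zeros((M + 4, 3))
    b[:M] = dst
    sol = np.linalg.solve(A, b)
    return sol[M:], sol[:M]                                        # d, w

def tps_apply(pts, src, d, w):
    """Evaluate f(P) = P*d + K(P)*w at arbitrary points pts (n, 3)."""
    Ph = np.hstack([np.ones((len(pts), 1)), pts])
    K = np.linalg.norm(pts[:, None, :] - src[None, :, :], axis=2)
    return Ph @ d + K @ w
```

For controlling pairs related by a purely affine map, the solver recovers the affine part in d with w ≈ 0, as the decomposition of formula (3) predicts.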
Having determined the TPS transformation $f(\cdot)$, the reference $S_{\mathit{\text{ref}}}$ can be deformed by formula (3). The deformed reference is denoted by ${S}_{i}^{\prime}$. The correspondence between ${S}_{i}^{\prime}$ and ${S}_{i}$ can then be obtained by ICP closest point searching. However, the closest point searching procedure of ICP is time-consuming, with computation in $O({N}_{1}\times {N}_{2})$. For higher efficiency, we adopt a K-dimensional binary search tree (KD-tree) [48] to model the target, which reduces the pairwise closest point search to $O({N}_{1}\times \mathrm{log}{N}_{2})$. Considering that the closest point matching based on the initial TPS deformation is less accurate, and that the early alignment concerns the global correspondence while the later alignment concerns local areas, a deterministic annealing strategy is applied in the stepwise TPS-based registration procedure. At the beginning of the registration, each point moves only a small step toward its deformed position, and the step size increases gradually as the TPS deformation improves. At the same time, the number of random points increases from a small initial number to enhance the holistic deformation at the beginning, and the blending weight of TPS in (5) decreases to relax the global constraints. The proposed nonrigid TPS-based registration method is implemented as follows.

1. Create a KD-tree for the $i$th sample ${S}_{i}$, denoted ${T}_{i}$;
2. Apply ICP alignment between ${S}_{\mathit{\text{ref}}}$ and ${S}_{i}$, then transform ${S}_{\mathit{\text{ref}}}$ by the rigid transformation of ICP; the transformed sample is denoted ${S}_{i}^{\prime}$;
3. Produce a random controlling point set ${M}_{i}^{\ast}$ with cardinality $M$ on ${S}_{i}^{\prime}$;
4. For each point in ${M}_{i}^{\ast}$, search its corresponding point on ${S}_{i}$ by querying ${T}_{i}$; the corresponding point set is denoted ${M}_{i}$;
5. Determine the TPS transformation $f$ from ${M}_{i}^{\ast}$ and ${M}_{i}$ with blending weight $\lambda$;
6. Apply the TPS transformation $f$ to ${S}_{i}^{\prime}$; the deformed ${S}_{i}^{\prime}$ is denoted ${S}_{i}^{\prime\prime}$;
7. Update ${S}_{i}^{\prime}$ by adding a movement to each point ${P}^{\prime}\in {S}_{i}^{\prime}$: ${P}^{\prime}={P}^{\prime}+\delta (f({P}^{\prime})-{P}^{\prime})$, where $f({P}^{\prime})\in {S}_{i}^{\prime\prime}$ and $\delta$ is the step size;
8. For each ${P}^{\prime}\in {S}_{i}^{\prime}$, search its corresponding point ${P}^{\prime\prime}\in {S}_{i}$ by querying ${T}_{i}$;
9. Update the parameters: $M=M+\triangle M$, $\delta =\delta +\triangle \delta$, $\lambda =\lambda -\triangle \lambda$, where $\triangle M$, $\triangle \delta$, and $\triangle \lambda$ are preassigned increments;
10. If the iteration count $l<{l}_{0}$ and $\frac{1}{{N}_{1}}\sum _{{P}^{\prime}\in {S}_{i}^{\prime}}\parallel {P}^{\prime}-{P}^{\prime\prime}\parallel >{\epsilon}_{0}$, where ${l}_{0}$ is the given maximum number of loops and ${\epsilon}_{0}$ is the given threshold, go to step 3;
11. The final correspondence of ${S}_{\mathit{\text{ref}}}$ and ${S}_{i}$ is obtained from the equivalent correspondence of ${S}_{i}^{\prime}$ and ${S}_{i}$; denote it ${S}_{i}^{0}$.
In our experiments, $M$ ranges from $\frac{1}{500}$ to $\frac{1}{80}$ of the number of points of the reference, ${l}_{0}=30$, ${\epsilon}_{0}={10}^{-6}$, the initial $\delta =0$ with $\triangle \delta =\frac{1}{{l}_{0}}$, and the initial $\lambda =0.01$ with $\triangle \lambda =0.05\lambda$.
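Steps 1, 4, and 8 of the procedure rely on closest-point queries against the KD-tree $T_i$. A sketch using SciPy's `cKDTree` (an implementation choice of ours; any k-d tree with logarithmic query time would serve):

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_correspondence(ref, target):
    """For every (deformed) reference vertex, find its closest target
    vertex via a k-d tree: O(N1 log N2) instead of the O(N1 * N2)
    brute-force ICP search.  Returns the matched target points and the
    mean point-to-point distance (the stopping criterion of step 10)."""
    tree = cKDTree(target)          # step 1: build T_i once per sample
    dist, idx = tree.query(ref)     # steps 4/8: batched nearest queries
    return target[idx], dist.mean()
```

In practice the tree is built once per target sample and reused across all iterations, since only the reference moves.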
Group registration by linear combination
It is important to select a close reference for all samples to get a good alignment, but a fixed reference may differ greatly from some samples because of the large variety in skull and face modality. Considering that there are enough samples in our database, we try to improve the above registration by a group registration method based on a linear combination model. Instead of using a fixed reference, we use combinations of the aligned samples to generate a dynamic reference for every sample. As the dynamic reference is closer to the given sample, aligning the dynamic reference to the target sample gives a better result. Based on the new correspondence result, we can construct a new dynamic reference by linear combination, which in turn yields a more accurate alignment. Through this iterative procedure, the registration precision improves gradually. In the following, the iterative registration procedure by linear combination is described in detail.
If we regard the reference $S_{\mathit{\text{ref}}}$ as an $N_1$×1 vector of the form ${({x}_{r1},{y}_{r1},{z}_{r1},\dots ,{x}_{\mathit{\text{rp}}},{y}_{\mathit{\text{rp}}},{z}_{\mathit{\text{rp}}},\dots ,{x}_{r{N}_{1}},{y}_{r{N}_{1}},{z}_{r{N}_{1}})}^{T}$, then from the point-to-point correspondence, the $N$ aligned samples $\{{S}_{i}^{0}\mid i=1,\dots ,N\}$ of the first step can be formatted as vectors of the same form as $S_{\mathit{\text{ref}}}$, i.e., ${S}_{i}^{0}={({x}_{i1},{y}_{i1},{z}_{i1},\dots ,{x}_{\mathit{\text{ip}}},{y}_{\mathit{\text{ip}}},{z}_{\mathit{\text{ip}}},\dots ,{x}_{i{N}_{1}},{y}_{i{N}_{1}},{z}_{i{N}_{1}})}^{T}$, where the point $({x}_{\mathit{\text{ip}}},{y}_{\mathit{\text{ip}}},{z}_{\mathit{\text{ip}}})\in {S}_{i}^{0}$ is the corresponding point of $({x}_{\mathit{\text{rp}}},{y}_{\mathit{\text{rp}}},{z}_{\mathit{\text{rp}}})$. With this representation, a new object can be generated by the following linear combination:
$S(a)=\sum _{i=1}^{N}{a}_{i}{S}_{i}^{0} \qquad (6)$
where $a=({a}_{1},\dots ,{a}_{N})$ is the linear combination coefficient vector. For each original sample ${S}_{i}$, a dynamic reference ${S}_{i}^{\ast}$ can be determined by the following minimization:
${S}_{i}^{\ast}=\underset{a}{\mathrm{arg\,min}}\parallel {S}_{i}-\sum _{j=1}^{N}{a}_{j}{S}_{j}^{0}\parallel \qquad (7)$
where $\parallel \cdot \parallel$ is the vector norm measuring the difference between two samples. The overall difference for all samples is defined as $\mathit{\text{Eg}}=\frac{1}{N}\sum _{i=1}^{N}\parallel {S}_{i}-{S}_{i}^{\ast}\parallel$. The registration method proceeds as follows.

1. Align ${S}_{\mathit{\text{ref}}}$ to each sample ${S}_{i}$ by the TPS-based method and get the primary alignment result ${S}_{i}^{0}$;
2. Produce the dynamic reference ${S}_{i}^{\ast}$ for each ${S}_{i}$ by linear combination;
3. Align the dynamic reference ${S}_{i}^{\ast}$ to ${S}_{i}$ by the TPS-based method and get the alignment result ${S}_{i}^{1}$;
4. If the iteration count is less than the given maximum number of loops and the global difference $\mathit{\text{Eg}}$ is greater than the given threshold, update ${S}_{i}^{0}$ by ${S}_{i}^{0}={S}_{i}^{1}$ and go to step 2;
5. Get the final alignment result from $\{{S}_{i}^{1}\}$.
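Step 2 above can be sketched as a least-squares fit. Excluding a sample's own aligned version from the combination basis is our assumption, made to avoid the trivial solution in which the sample reproduces itself; the article does not state how this is handled.

```python
import numpy as np

def dynamic_reference(aligned, i):
    """Build the dynamic reference S_i* of step 2: the linear combination
    of the first-step aligned samples that best approximates sample i in
    the least-squares sense (formula (7)).
    aligned : (N, 3*N1) matrix, one flattened aligned sample per row.
    NOTE: leaving out row i is our own assumption (see lead-in)."""
    B = np.delete(aligned, i, axis=0).T          # basis: other samples as columns
    a, *_ = np.linalg.lstsq(B, aligned[i], rcond=None)
    return B @ a                                  # combination closest to sample i
```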
The construction of the hierarchical deformable model
After computing the dense correspondence, we can model all skulls and faces in the form of the reference vectors via the point-to-point correspondence. If the triangle meshes of the reference skull and face are also applied to the corresponding points of the target, then all skulls and faces have the same mesh structure with the same number of vertices. For convenience, we represent the $i$th head sample as a high-dimensional vector composed of the skull and face vectors in the following form:
${H}_{i}={({S}_{i}^{T},{F}_{i}^{T})}^{T} \qquad (8)$
where ${S}_{i}={({x}_{i1}^{S},{y}_{i1}^{S},{z}_{i1}^{S},\dots ,{x}_{\mathit{\text{im}}}^{S},{y}_{\mathit{\text{im}}}^{S},{z}_{\mathit{\text{im}}}^{S})}^{T}$ and ${F}_{i}={({x}_{i1}^{F},{y}_{i1}^{F},{z}_{i1}^{F},\dots ,{x}_{\mathit{\text{in}}}^{F},{y}_{\mathit{\text{in}}}^{F},{z}_{\mathit{\text{in}}}^{F})}^{T}$ are the vectors of the $i$th skull and face, with dimensions $3m$ and $3n$, respectively.
Similar to the dynamic reference construction in the preceding section, a linear combination of the aligned head samples $\{{H}_{i}\mid i=1,\dots ,N\}$ produces a new skull and face. Given an unknown skull, the closest combination skull can be found by the model matching procedure, and extrapolating the combination from the skull vectors to the face vectors of the model yields a reconstructed face for the given skull. The details of the model matching are given in the next section. As the prototypic skull and face samples are high-dimensional data with large redundancy, PCA is applied to construct the following deformable model:
$H(\alpha)=\overline{H}+\sum _{i=1}^{{N}^{\prime}}{\alpha}_{i}{h}_{i} \qquad (9)$
where $\overline{H}=\frac{1}{N}\sum _{i=1}^{N}{H}_{i}$, and $\{{h}_{i}\mid i=1,\dots ,{N}^{\prime}\}$ are the first ${N}^{\prime}$ principal components, corresponding to the eigenvalues $\{{\sigma}_{i}\mid i=1,\dots ,{N}^{\prime}\}$, in descending order, of the covariance matrix of the centered vectors $\{{H}_{i}-\overline{H}\mid i=1,\dots ,N\}$. ${N}^{\prime}$ is chosen so that the retained eigenvalues account for 98% of the cumulative variance. The combination coefficients $\alpha =({\alpha}_{1},\dots ,{\alpha}_{{N}^{\prime}})$ are the parameters of the deformable model. To generate a plausible face, the probability of $\alpha$ is constrained by the following formula:
The model in (9) is the global model, referring to the shape of the whole skull and face. To characterize local shape variation, we construct several local deformable models for the main facial organs: the eyes, nose, and mouth. The first step in constructing the local models is to segment the organs. It is difficult to achieve an ideal automatic segmentation for different skulls and faces; however, as our samples are aligned, once the reference is segmented, the segments of the other samples can be obtained from the point correspondence. We therefore segment the local patches of the reference by hand. The segmented local patches of the reference are shown in Figure 4e–h. Based on the segmented data, the local models are constructed by the same method as the global model. The hierarchical deformable model is constructed by integrating the local models with the global model.
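The PCA construction of the global (or a local) model can be sketched as follows. Using the SVD of the centered data matrix instead of forming the covariance matrix explicitly is an implementation choice of ours, and the class name is hypothetical:

```python
import numpy as np

class DeformableModel:
    """PCA deformable model of formula (9): H = Hbar + sum_i alpha_i h_i,
    keeping the leading components explaining 98% of the variance."""

    def __init__(self, samples, energy=0.98):
        # samples: (N, D) matrix, one flattened head vector per row
        self.mean = samples.mean(axis=0)
        X = samples - self.mean
        # SVD of the centred data gives the PCA basis without building
        # the D x D covariance matrix explicitly (D is huge here)
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        var = s ** 2 / len(samples)               # eigenvalues sigma_i
        k = int(np.searchsorted(np.cumsum(var) / var.sum(), energy)) + 1
        self.basis = Vt[:k]                       # components h_i as rows
        self.eigvals = var[:k]

    def reconstruct(self, alpha):
        """Evaluate formula (9) for a coefficient vector alpha."""
        return self.mean + alpha @ self.basis
```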
Craniofacial reconstruction
For a given skull, craniofacial reconstruction is a model matching procedure in which the coefficients of the deformable model are adjusted iteratively so that the model combination skull gradually approaches the given skull. To measure the difference between the model combination skull ${S}_{\mathit{\text{md}}}(\alpha)$ and the given skull, after assigning initial values to the combination coefficients, we align ${S}_{\mathit{\text{md}}}(\alpha)$ to the given skull by the TPS-based registration. From the obtained correspondence, we format the given skull as a vector ${S}_{\mathit{\text{gv}}}$ of the same form as ${S}_{\mathit{\text{md}}}(\alpha)$. The difference between the given skull ${S}_{\mathit{\text{gv}}}$ and the model combination skull ${S}_{\mathit{\text{md}}}(\alpha)$ can then be represented as the squared norm of the difference vector:
$E(\alpha)={\parallel {S}_{\mathit{\text{md}}}(\alpha)-{S}_{\mathit{\text{gv}}}\parallel}^{2} \qquad (11)$
Regarding this definition as the cost function, the reconstruction problem can be solved by an optimization method. Note that the model combination skull S_{ md }(α) changes as the combination coefficients are updated during optimization. The registration between S_{ md }(α) and S_{ gv } is therefore re-run every 20 iterations, to ensure that the error in (11) is computed on correct correspondences with the updated S_{ md }(α). We adopt a gradient descent algorithm to solve the optimization. The core of the method is to find the descent direction of E(α) with respect to α, which is the negative gradient of E(α). From (11), the partial derivative of E(α) can be formed as follows:
From formula (12), the partial derivative of $E(\alpha)$ depends on $\frac{\partial {S}_{\mathit{\text{md}}}\left(\alpha \right)}{\partial \alpha}$, which can be deduced from formula (9) as $\frac{\partial {S}_{\mathit{\text{md}}}\left(\alpha \right)}{\partial \alpha}=\frac{\partial (\overline{S}+\mathbf{s}{\alpha}^{T})}{\partial \alpha}=\frac{\partial \left(\mathbf{s}{\alpha}^{T}\right)}{\partial \alpha}$, where $\overline{S}$ is the skull part of $\overline{H}$, $\mathbf{s}=({s}_{1}\dots {s}_{i}\dots {s}_{{N}^{\prime}})$, and $s_{i}$ is the skull part of $h_{i}$. For each $\alpha_{i}$, $\frac{\partial \left(\mathbf{s}{\alpha}^{T}\right)}{\partial {\alpha}_{i}}=\frac{\partial \left(\sum _{j=1}^{{N}^{\prime}}{\alpha}_{j}{s}_{j}\right)}{\partial {\alpha}_{i}}={s}_{i}$. Substituting this into (12) gives the following partial derivative for $\alpha_{i}$:
As the vectors $s_{i}$, $\overline{S}$, and $S_{gv}$ in the above formulas have the same dimension $3m$ as the reference skull vector, implementing the gradient descent optimization in this high dimension is time-consuming. To reduce computation, we replace these vectors with random subvectors in the above equations. That is, in each gradient descent iteration a subset of indices $\{{i}_{0},\dots ,{i}_{{m}^{\prime}}\}$ is selected randomly from the full index set {1,…,m}. As m^{′}≪m, the computation is greatly reduced. Because the vector elements correspond to points on the skull, this is equivalent to selecting a random subset of points to represent the model and the given skull in the similarity error computation. The random point selection approach used in the TPS-based nonrigid registration can therefore be applied here to obtain the random subvector. Considering that the model deformation scale is smaller than the deformation between the reference and the target in data registration, and that local deformation dominates in model matching, we use a larger number of random points in the model matching procedure. In our experiment, a subset containing $\frac{1}{20}$ of the indices of the model skull vector is selected for the model matching computation. The maximal iteration count is set to 500 for the global model and 1000 for the local models. To reduce the influence of noise and make use of the contribution of every point, the random subset is updated at each iteration. This makes the error in (11) unstable during the first tens of iterations, but it behaves steadily in later iterations and converges to a minimum. We have tested different sizes of random subsets in the model matching experiment: subsets smaller than the assigned size generally cannot achieve satisfactory precision and may not even converge, while adding more points to the subset yields insignificant improvement. Through this model matching procedure, the best-matched model skull is obtained. The reconstructed face can then be calculated by combining the face parts of h_{ i } in (9) with the same coefficients as the model skull.
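The gradient descent with per-iteration random index subsets described above can be sketched as follows. This is a simplified NumPy illustration: `match_model` is a hypothetical name, the probability prior on α from (10) is omitted, and a fixed learning rate stands in for whatever step-size rule the authors used.

```python
import numpy as np

def match_model(S_gv, S_bar, s, n_iter=500, lr=1.0, subset_frac=1/20, seed=0):
    """Fit coefficients alpha so that S_bar + alpha @ s approaches the
    given skull vector S_gv.  s holds the skull parts s_i as rows.
    A fresh random subset of 1/20 of the indices is drawn each
    iteration, mirroring the paper's random point selection."""
    rng = np.random.default_rng(seed)
    m = S_gv.size
    k = max(1, int(m * subset_frac))
    alpha = np.zeros(s.shape[0])
    for _ in range(n_iter):
        idx = rng.choice(m, size=k, replace=False)
        # error on the subset: S_gv - S_md(alpha), restricted to idx
        residual = S_gv[idx] - (S_bar[idx] + alpha @ s[:, idx])
        grad = -2.0 * (s[:, idx] @ residual)   # dE/dalpha, as in (13)
        alpha -= lr * grad
    return alpha
```

As noted in the text, the subset error fluctuates in early iterations but settles once α nears the minimum, since the full residual (and hence every subset residual) shrinks together.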
Similar to the matching procedure of the global model, the local deformable models are also matched to the given skull. In general, the local deformable models produce better reconstruction results in the local areas than the global model. However, the matching procedure of the local models is independent of the global model, so the local reconstruction results are generally not consistent with the global result, especially at the boundary (shown in Figure 6a). To obtain a smooth overall reconstruction, the fusion problem of the local and global results must be solved. We take two steps to integrate the global and local reconstruction meshes. First, for the global mesh F and a submesh F_{sub}, the local mesh is placed onto the global mesh in the proper position using a translation and rotation transformation R^{∗}, T^{∗}, where R^{∗}, T^{∗} are determined by minimizing the average distance of the corresponding points of the submesh and the global mesh, i.e., $({R}^{\ast},{T}^{\ast})=\text{arg}\underset{R,T}{\text{min}}\sum _{{P}_{0}\in {F}_{\text{sub}}}\parallel R{P}_{0}+T-{P}_{1}\parallel $, where P_{1}∈F is the point corresponding to P_{0}. The first-step fusion result is shown in Figure 6b. Second, the inconsistency at the boundary is removed by a mesh stitching algorithm, in which the points on both the submesh and the global mesh near the boundary are deformed to intermediate positions by an interpolation method. The detail of the mesh interpolation is shown in Figure 6d. For the submesh boundary B_{0} and a point P_{0}∈B_{0}, we obtain the corresponding contour (denoted B_{1}) on the global mesh and the corresponding point P_{1}∈B_{1} of P_{0} from the above segmentation of the reference. The interpolating point P_{2} is then calculated as ${P}_{2}=\frac{({P}_{0}+{P}_{1})}{2}$.
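The rigid transform (R^{∗}, T^{∗}) minimizing the sum of squared distances over known point correspondences has a well-known closed-form solution via SVD (the Kabsch/Procrustes method). The paper does not state how this minimization is solved; the following is one standard way to do it, assuming NumPy and exact point-to-point correspondence between the two vertex arrays.

```python
import numpy as np

def rigid_align(P0, P1):
    """Closed-form least-squares rigid transform (Kabsch method):
    find R, T minimizing sum ||R p0 + T - p1||^2 over corresponding
    rows of the (n, 3) arrays P0 and P1."""
    c0, c1 = P0.mean(axis=0), P1.mean(axis=0)
    H = (P0 - c0).T @ (P1 - c1)             # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    T = c1 - R @ c0
    return R, T
```

Because the submesh and global mesh come from the same aligned model, the correspondences needed here are available directly, so no iterative closest-point search is required.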
Given a scale l_{0}, the boundary B_{0} is shrunk inward by l_{0} steps to obtain a contour ${B}_{0}^{\prime}$, indicated by a point ${Q}_{0}\in {B}_{0}^{\prime}$ in Figure 6d, while the contour B_{1} is shrunk in the opposite direction on the global mesh to obtain a contour ${B}_{1}^{\prime}$, indicated by a point ${Q}_{1}\in {B}_{1}^{\prime}$ in Figure 6d. The stitching method is to find a pair of interpolation functions f_{0}, f_{1} satisfying the following conditions:
Many interpolation methods can be used to meet the above conditions, such as RBF functions. For convenience, we adopt the above TPS to solve the interpolation. Having determined the interpolation functions, the final fusion result (Figure 6c) is achieved by applying f_{0} to the points between the contours ${B}_{0},{B}_{0}^{\prime}$ on the submesh and f_{1} to the points between the contours ${B}_{1},{B}_{1}^{\prime}$ on the global mesh.
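The stitching interpolation can be sketched as a radial-basis displacement field that moves the boundary points exactly to their targets while holding the shrunk interior contour fixed. This is an illustrative stand-in, assuming NumPy: it uses a plain distance kernel rather than the paper's TPS, omits the TPS affine part, and `rbf_displacement` is a hypothetical name.

```python
import numpy as np

def rbf_displacement(anchors, targets, points, eps=1e-9):
    """Smooth displacement field mapping each anchor exactly to its
    target, evaluated at `points`.  All arrays are (n, 3).  A plain
    r kernel stands in for the paper's TPS."""
    def kernel(X, Y):
        # pairwise Euclidean distances, shape (len(X), len(Y))
        return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    K = kernel(anchors, anchors) + eps * np.eye(len(anchors))
    W = np.linalg.solve(K, targets - anchors)   # one weight row per anchor
    return points + kernel(points, anchors) @ W
```

For the submesh side, `anchors` would stack the points of B_{0} and ${B}_{0}^{\prime}$, with `targets` the midpoints P_{2} for B_{0} and the unchanged points Q_{0} for ${B}_{0}^{\prime}$; evaluating at the strip of points between the two contours then deforms it smoothly, satisfying the conditions above. The global-mesh side f_{1} is handled symmetrically.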
Experimental results and discussion
Based on the dense aligned face and skull samples, a hierarchical deformable model is constructed for craniofacial reconstruction, which includes a global model and three local models, namely the eye, nose, and mouth models. To validate the craniofacial reconstruction method, we perform a leave-one-out experiment, in which each skull is used in turn as the test skull for reconstruction, while the remaining skulls and faces are used as the samples to build the hierarchical deformable model. Both the global and the local models are matched to all tests. The final facial surfaces of the tests are reconstructed by the model matching and mesh stitching procedure. Some craniofacial reconstruction results and their actual faces are shown in Figure 7.
The reconstructed faces of the hierarchical model are evaluated by comparison with their actual faces. First, the traditional measurement method is adopted for the evaluation, which uses the average distance of corresponding points as the similarity measurement between 3D faces; we denote this method by ADCP. Under ADCP, the average reconstruction errors over the 110 tests are 0.01101 and 0.00998 for the global model and the hierarchical model, with standard deviations 0.00297 and 0.00353, respectively. As the actual scale of our samples and the model was lost in the data preprocessing and coordinate uniforming procedure, we set the average distance between the left and right porions to 15 cm according to the Chinese National Standard for Human Dimensions of Chinese Adults[49]. The average absolute error of corresponding points is then 1.65 mm for the global model and 1.50 mm for the hierarchical model. Compared with the results in[12, 15], this is acceptable considering that some important properties, such as Body Mass Index (BMI), age, and gender, are not integrated into the model. The distribution of the average reconstruction error over the tests is shown as a histogram in Figure 8; it shows that more tests have an error below 1.5 mm for the hierarchical model. The distribution of the average reconstruction error at every point over the 110 tests is also computed, and the results are displayed on the reference face in Figure 9. The figure shows that the hierarchical model gives better reconstruction results than the single global model, especially in the areas covered by the local models. We conclude that using the local models improves the reconstruction accuracy.
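For aligned meshes in point-to-point correspondence, the ADCP measurement reduces to the mean Euclidean distance over corresponding vertices. A minimal sketch (assuming NumPy and a shared vertex ordering; the function name is ours):

```python
import numpy as np

def adcp(F1, F2):
    """Average distance of corresponding points between two aligned
    face meshes given as (n, 3) vertex arrays in the same ordering."""
    return float(np.linalg.norm(F1 - F2, axis=1).mean())
```

Because this averages over all vertices, a sizeable error confined to a small region (say, the nose tip) barely moves the value, which is exactly the insensitivity to local shape change criticized below.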
Although ADCP is the dominant method for evaluating craniofacial reconstruction, it is not ideal, considering that the aim of craniofacial reconstruction is identification through face recognition. Dense faces generally have a very large number of points, so a distance change in a local area usually changes the average very little; as a result, ADCP is not sensitive to local shape changes. But local features, such as the width of the mouth and the height of the nose, are important for face recognition. The other drawback of ADCP is that every point has the same weight in the distance computation, though points at different facial positions contribute differently to face recognition. To evaluate craniofacial reconstruction results more appropriately, we define a similarity measurement for reconstructed faces inspired by ideas from face recognition, which uses the distance in the coefficient domain of a face deformable model as the measurement. We denote this method by DCD. Similar to the above deformable model, the face deformable model is constructed from the aligned faces as follows:
where $\overline{F}$ is the average face and {f_{ i }∣i=1,…,k} are the first k components. When two faces F_{1}, F_{2} are compared, their model coefficients β_{1}, β_{2} are computed by the model-matching procedure. The difference between F_{1} and F_{2} is then measured by the distance between β_{1} and β_{2} in the coefficient space.
To apply the DCD measurement, the 110 aligned faces are used to construct the face deformable model. The actual faces and the reconstructed faces of the global and hierarchical models are then represented by the coefficients of the face model, and the DCD distances between the reconstructed faces and their actual faces are calculated. The average reconstruction errors over the 110 tests are 0.1906 and 0.1837 for the global model and the hierarchical model, with standard deviations 0.0516 and 0.0539, respectively. Thus the hierarchical model also has better reconstruction results under the DCD measurement. To some extent, the distance in the coefficient domain does not correlate with the distance of corresponding points. To explore the relationship between the proposed DCD method and ADCP, we perform a face recognition experiment, in which the real face closest to each reconstructed face of the global and hierarchical models is found under the DCD and ADCP measurements, respectively. The cumulative recognition curves of the reconstruction results are shown in Figure 10. The face recognition results differ greatly between the two measurements: the cumulative recognition rate under DCD is higher than under ADCP, and the hierarchical model generally achieves a better recognition rate than the global model. These observations do not conclusively show that DCD is more suitable than ADCP for evaluating reconstructed faces, but they indicate that DCD tolerates reconstruction error well in face recognition applications. As DCD is a distance measurement in a facial shape space constructed from a set of original samples, given sufficient face samples the DCD method may reflect the real metric of the face space.
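The DCD measurement can be sketched as follows. Note one simplification: the paper obtains the coefficients β by the model-matching procedure, whereas the sketch below projects each face vector onto the components by ordinary least squares; NumPy is assumed and `dcd` is a hypothetical name.

```python
import numpy as np

def dcd(F1, F2, F_bar, f):
    """Distance in the coefficient domain of a face deformable model.
    F1, F2 are face vectors, F_bar the average face, f the (k, 3n)
    component matrix.  Least-squares projection stands in for the
    paper's model-matching step."""
    beta1, *_ = np.linalg.lstsq(f.T, F1 - F_bar, rcond=None)
    beta2, *_ = np.linalg.lstsq(f.T, F2 - F_bar, rcond=None)
    return float(np.linalg.norm(beta1 - beta2))
```

Since each coefficient axis captures a correlated, whole-face mode of variation, a localized shape difference that loads on one mode shifts the coefficient vector noticeably even when the per-vertex average distance barely changes.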
Conclusion
We proposed a hierarchical dense deformable model for automatic craniofacial reconstruction. The distinguishing feature of the proposed model is that the skull and face are represented as dense meshes without landmarks. The advantage of this representation is that the dense meshes contain more information for exploring the intrinsic relation between skull and face. In addition, the presented nonrigid dense mesh registration and the model matching procedure can be implemented automatically, which makes the craniofacial reconstruction method fully automatic. The craniofacial reconstruction experiments show that the hierarchical model gives better reconstruction results than the single global model. The evaluation problem of craniofacial reconstruction is also explored in this article. We present an evaluation method based on a deformable facial model. Compared with the average-distance-of-corresponding-points method in the face recognition experiment, this evaluation method is a potential method for identification in craniofacial reconstruction applications. In future work, we plan to capture more head scans to increase the number of samples, which is important for the deformation capacity of the model. Based on more abundant samples, personal properties such as gender, age, and BMI will be considered for integration into the hierarchical dense deformable model; the reconstruction results should improve if this information is properly utilized. In addition, it is worth exploring better evaluation methods for craniofacial reconstruction results.
References
 1.
Gerasimov M: The Face Finder. 1971.
 2.
Snow C, Gatliff B, Williams KM: Reconstruction of the facial features from skull: an evaluation of its usefulness in forensic anthropology. Am. J. Phys. Anthropol 1970, 33: 221-228. 10.1002/ajpa.1330330207
 3.
Lebedinskaya G, Balueva T, Veselovskaya E: Development of methodological principles for reconstruction of the face on the basis of skull material. In Forensic Analysis of the Skull. Wiley-Liss Inc.; 1993.
 4.
Vanezis P: Application of 3D computer graphics for facial reconstruction and comparison with sculpting techniques. Forensic Sci. Int 1989, 42: 69-84. 10.1016/0379-0738(89)90200-4
 5.
Vanezis P, Vanezis M, McCombe G, Niblet T: Facial reconstruction using 3D computer graphics. Forensic Sci. Int 2000, 108: 81-95. 10.1016/S0379-0738(99)00026-2
 6.
Evenhouse R, Rasmussen M, Sadler L: Computer-aided forensic facial reconstruction. J. Biocommun 1992, 19: 22-28.
 7.
Shahrom AW, Vanezis P, Chapman RC, Gonzales A, Blenkinsop C, Rossi ML: Techniques in facial identification: computer-aided facial reconstruction using laser scanner and video superimposition. Int. J. Legal Med 1996, 108: 194-200. 10.1007/BF01369791
 8.
Tyrell AJ, Evison MP, Chamberlain AT, Green MA: Forensic three-dimensional facial reconstruction: historical review and contemporary developments. J. Forensic Sci 1997, 42: 653-661.
 9.
Jones MW: Facial reconstruction using volumetric data. Proceedings of the Sixth International Vision Modelling and Visualisation Conference 2001, 135-150.
 10.
Quatrehomme G, Cotin S, Subsol G, Delingette H, Garidel Y, Grevin G, Fidrich M: A fully three-dimensional method for facial reconstruction based on deformable models. J. Forensic Sci 1997, 42: 649-652.
 11.
Nelson LA, Michael SD: The application of volume deformation to three-dimensional facial reconstruction: a comparison with previous techniques. Forensic Sci. Int 1998, 94: 167-181. 10.1016/S0379-0738(98)00066-8
 12.
Claes P, Vandermeulen D, De Greef S, Willems G, Suetens P: Craniofacial reconstruction using a combined statistical model of face shape and soft tissue depths: methodology and validation. Forensic Sci. Int 2006, 159(1):147-158.
 13.
Claes P, Vandermeulen D, De Greef S, Willems G, Suetens P: Statistically deformable face models for craniofacial reconstruction. CIT 2006, 14(1):21-30. 10.2498/cit.2006.01.03
 14.
Claes P, Vandermeulen D, De Greef S, Willems G, Clement JG, Suetens P: Bayesian estimation of optimal craniofacial reconstructions. Forensic Sci. Int 2010, 201: 146-152. 10.1016/j.forsciint.2010.03.009
 15.
Berar M, Tilotta FM, Glaunes JA, Rozenholc Y: Craniofacial reconstruction as a prediction problem using a Latent Root Regression model. Forensic Sci. Int 2011, 210(1-3):228-236. 10.1016/j.forsciint.2011.03.010
 16.
Berar M, Desvignes M, Bailly G, Payan Y: 3D semi-landmark-based statistical face reconstruction. CIT 2006, 14(1):31-43. 10.2498/cit.2006.01.04
 17.
Paysan P, Luthi M, Albrecht T, Lerch A, Amberg B, Santini F, Vetter T: Face reconstruction from skull shapes and physical attributes. Proceedings of 31st DAGM Symposium for Pattern Recognition (DAGM’09) 2009, 232-241.
 18.
Claes P, Vandermeulen D, De Greef S, Willems G, Clement JG, Suetens P: Computerized craniofacial reconstruction: conceptual framework and review. Forensic Sci. Int 2010, 201: 138-145. 10.1016/j.forsciint.2010.03.008
 19.
Stavrianos Ch, Stavrianou I, Zouloumis L, Mastagas D: An introduction to facial reconstruction. Balkan J. Stomatol 2007, 11(2):76-83.
 20.
De Greef S, Claes P, Mollemans W, Vandermeulen D, Suetens P, Willems G: Computer-assisted facial reconstruction: recent developments and trends. Rev. Belge Med. Dent 2005, 60(3):237-249.
 21.
Wilkinson C: Computerized forensic facial reconstruction: a review of current systems. Forensic Sci. Med. Pathol 2005, 1(3):173-177. 10.1385/FSMP:1:3:173
 22.
Pei Y, Zha H, Yuan Z: Tissue map based craniofacial reconstruction and facial deformation using RBF network. The Third International Conference on Image and Graphics (ICIG’04) 2004, 398-401.
 23.
Tu P, Hartley R, Lorensen W, Allyassin M, Gupta R, Heier L: Face reconstruction using flesh deformation modes. In Computer-graphic Facial Reconstruction. Academic Press; 2005:145-162.
 24.
Deng Q, Zhou M, Shui W, Wu Z, Ji Y, Bai R: A novel skull registration based on global and local deformations for craniofacial reconstruction. Forensic Sci. Int 2011, 208: 95-102. 10.1016/j.forsciint.2010.11.011
 25.
Turner WD, Brown REB, Kelliher TP, Tu PH, Taister MA, Miller KWP: A novel method of automated skull registration for forensic facial approximation. Forensic Sci. Int 2005, 154: 149-158. 10.1016/j.forsciint.2004.10.003
 26.
De Greef S, Claes P, Vandermeulen D, Mollemans W, Suetens P, Willems G: Large-scale in-vivo Caucasian facial soft tissue thickness database for craniofacial reconstruction. Forensic Sci. Int 2006, 159(1):126-146.
 27.
Tilotta F, Richard F, Glaunes JA, Berar M, Gey S, Verdeille S, Rozenholc Y, Gaudy JF: Construction and analysis of a head CT-scan database for craniofacial reconstruction. Forensic Sci. Int 2009, 191: 112.e1-112.e12. 10.1016/j.forsciint.2009.06.017
 28.
Martin R, Saller K: Lehrbuch der Anthropologie in Systematischer Darstellung. 1956.
 29.
Menin C: La population gallo-romaine de la nécropole de Maule (France, Yvelines): étude anthropologique. PhD thesis, University Paris 6, Pierre et Marie Curie, France. 1977.
 30.
Welcker RH: Schillers Schädel und Todenmaske, nebst Mittheilungen über Schädel und Todenmaske Kants. 1883.
 31.
His W: Anatomische Forschungen über Johann Sebastian Bachs Gebeine und Antlitz nebst Bemerkungen über dessen Bilder. Abhandlungen der Mathematisch-Physikalischen Klasse der Königl. Sächsischen Gesellschaft der Wissenschaften 1895, 22: 379-420.
 32.
Kollmann J, Buchly W: Die Persistenz der Rassen und die Reconstruction der Physiognomie prähistorischer Schädel. Archiv für Anthropologie 1898, 25: 329-359.
 33.
Rhine JS, Campbell HR: Thickness of facial tissues in American Blacks. J. Forensic Sci 1980, 24(4):847-858.
 34.
Tu P, Book R, Liu X, Krahnstoever N, Adrian C, Williams P: Automatic face recognition from skeletal remains. IEEE Conference on Computer Vision and Pattern Recognition (CVPR’07) 2007, 1-7.
 35.
Vandermeulen D, Claes P, Loeckx D, De Greef S, Willems G, Suetens P: Computerized craniofacial reconstruction using CT-derived implicit surface representations. Forensic Sci. Int 2006, 159: S164-S174.
 36.
Berar M, Desvignes M, Bailly G, Payan Y: 3D meshes registration: application to statistical skull model. 2004.
 37.
Pei Y, Zha H, Yuan Z: The craniofacial reconstruction from the local structural diversity of skulls. Comput. Graphic Forum 2008, 27(7):1711-1718. 10.1111/j.1467-8659.2008.01315.x
 38.
Tilotta F, Glaunes J, Richard F, Rozenholc Y: A local technique based on vectorized surfaces for craniofacial reconstruction. Forensic Sci. Int 2010, 200: 50-59. 10.1016/j.forsciint.2010.03.029
 39.
Lorensen WE, Cline HE: Marching cubes: a high resolution 3D surface construction algorithm. Comput. Graph 1987, 21(4):163-169. 10.1145/37402.37422
 40.
Frankfurt plane http://en.wikipedia.org/wiki/Frankfurt_plane
 41.
Chui H, Rangarajan A: A new algorithm for nonrigid point matching. IEEE Conference on Computer Vision and Pattern Recognition (CVPR’00) 2000, 44-51.
 42.
Chui H, Rangarajan A: A new point matching algorithm for nonrigid registration. Comput. Vis Image Understand 2003, 89: 114-141. 10.1016/S1077-3142(03)00009-2
 43.
Hutton TJ, Buxton BF, Hammond P: Automated registration of 3D faces using dense surface models. The 2003 British Machine Vision Conference (BMVC’03) 2003, 439-448.
 44.
Myronenko A, Song XB: Point set registration: coherent point drift. IEEE Trans. Pattern Anal. Mach. Intell 2010, 32(12):2262-2275.
 45.
Bookstein FL: Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. PAMI 1989, 11(6):567-585. 10.1109/34.24792
 46.
Besl P, McKay N: A method for registration of 3-D shapes. IEEE Trans. PAMI 1992, 14(2):239-256. 10.1109/34.121791
 47.
Moenning C, Dodgson NA: Fast marching farthest point sampling for implicit surfaces and point clouds. Technical Report 565. Computer Laboratory, University of Cambridge, UK, 2003
 48.
Greenspan MA, Yurick M: Approximate K-D tree search for efficient ICP. The 4th International Conference on 3D Digital Imaging and Modeling (3DIM’03) 2003, 442-448.
 49.
GB 10000-88: Human dimensions of Chinese adults, National Standard of the People’s Republic of China. 1988, 12-13.
Acknowledgements
This study was partly supported by the 973 Program of China (No. 2011CB302703) and the National Natural Science Foundation of China (Nos. 60825203, 61171169, 61133003, 60973057, 60736008, 61272363).
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Hu, Y., Duan, F., Zhou, M. et al. Craniofacial reconstruction based on a hierarchical dense deformable model. EURASIP J. Adv. Signal Process. 2012, 217 (2012). https://doi.org/10.1186/1687-6180-2012-217
Keywords
 Craniofacial reconstruction
 Hierarchical deformable model
 Dense mesh registration