An ordered topological representation of 3D triangular mesh facial surface: concept and applications
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 144 (2012)
Abstract
In this article, we present an original unified framework for analyzing, describing, and encoding 3D facial surfaces. This framework allows the derivation of topologically ordered structures from triangular mesh surfaces, thus addressing the lack of ordered structure in this modality. After describing the foundations of the framework and highlighting its advantages with respect to closely related representations, we show its adaptability to a variety of facial mesh surface processing tasks, which include mesh regularity assessment, facial surface cropping, facial surface compression, and facial surface alignment. In addition, it can be used for the extraction of a rich variety of local and global face descriptors. We validate this framework by testing it with raw 3D facial mesh surfaces.
Introduction
The 3D face image modality has been receiving increasing attention in applications related to people identification. Compared to its 2D counterpart, it encodes shape information, which is intrinsically insensitive to illumination, pose, and scale variation. On the other hand, it poses some challenges regarding how to best represent the facial shape to ensure its effective use in face identification. Facial shape representations can be categorized into three classes, namely: local feature representations, global feature representations, and hybrid representations.
The first class employs features derived from local face surface shape attributes. Early works investigated surface curvature measures that are subsequently used to extract higher-level facial features. Gordon [1] used minimum and maximum principal curvature to segment the face surface into convex, concave, and saddle regions. Local facial features are then derived from these regions and used for template matching. Lee and Milios [2] matched facial range images using a graph structure derived from the Extended Gaussian Image (EGI) representation proposed by Horn [3]. The EGI is a kind of histogram that summarizes the surface normal orientation statistics across the facial surface. Tanaka et al. [4] used the curvature information to extract facial convex regions; matching is then performed by comparing their EGIs. The EGI similarity is measured by Fisher’s spherical correlation. Moreno et al. [5] derived attributes from curvature-based segmented regions and employed them for face matching. Other approaches employed more complex descriptors, such as point signatures [6, 7], in an attempt to model the complex free-form shape of the face. The idea is to form a representation of the neighborhood of a surface point. In this approach, point signatures are used for surface comparison by matching the signatures of data points representing the model’s surface to the signatures of data points of a “sensed” surface.
In general, a key limitation of local-oriented approaches is the difficulty of extracting reliable information from noisy or inaccurate 3D data. The differential geometry techniques often used in these approaches are intrinsically vulnerable to scaling and data deficiencies.
In global representations, facial features are derived from the whole 3D face data. One of the earliest systems locates the face’s plane of bilateral symmetry and uses it to align faces [8]. The facial profiles along this plane are then extracted and compared. Beumier and Acheroy [9] and Wu et al. [10] used vertical and horizontal profiles of faces. Pears and Heseltine [11] used the contours of intersection between nose-tip-centered concentric spheres and the facial surface. Facial profiles have also been used in combination with texture information [12]. Xu et al. [13] derived invariant curve and surface moments from 3D face data. In these methods, matching is performed by evaluating the similarity between these entities with different variants of the nearest neighbor (NN) algorithm. The EGI has also been used as a global representation in [14], where the matching problem is approached via an evolutionary optimization technique.
The popular spin image has also been used in 3D facial shape analysis. Conde et al. [15] showed that the spin image can be employed in the detection of facial landmarks. Bae et al. [16] confirmed this approach, particularly for nose tip detection. Wu et al. [17] proposed a representation similar to the spin image (dubbed the local shape map) and applied it to face recognition with a depth image modality.
Another category of approaches [18–21] extended the eigenfaces paradigm developed in 2D image-based recognition to the 3D context. This paradigm stipulates that a face image can be defined as a linear combination of a finite number of particular facial images. This group of images, called eigenfaces, is extracted via principal component analysis (PCA) from a large face image database. This type of approach operates on the depth map image (an image whereby the intensity represents the depth, usually the z coordinate). An NN classifier is usually used for the matching. Still within the same paradigm, other researchers relied on the principle that a synergistic combination of data from multiple sources provides more reliable and accurate information [22], and therefore adopted a two-modal approach by including 2D face images in the recognition. Representative solutions of this framework, also called multimodal face recognition, are in [23–28]. However, these methods have inherited some of the shortcomings of 2D face identification, particularly with regard to face pose, self-occlusion, and scaling. Another category of methods operates on the whole 3D facial surface. In this paradigm, the query 3D facial surface is superimposed on stored instances using alignment techniques. The matching is performed by evaluating the degree of overlap of the aligned surfaces. Representative works of this paradigm can be found in [29–31].
Global representations invariant to facial expressions have been investigated in [32–34]. In this framework, geodesic distances between sampled points on the facial surface are computed. Using these distances, the points are then flattened into a low-dimensional Euclidean space, providing a bending-invariant (or isometric-invariant) signature surface that is robust to certain facial expressions. Berretti et al. [35] employed geodesic stripes (i.e., groups of points at the same geodesic distance from the nose tip) and proposed a kind of graph to encode the spatial relations between stripes under different facial expressions.
Hybrid representations use different representations from a single modality rather than data from different modalities. This trend was fueled by two principles, namely: (1) an enriched variety of features, when combined with classifiers having different statistical properties, produces more accurate and more robust results; (2) psychological findings show that humans rely equally on both local and global visual information [36]. Vandeborre et al. [37] fused local and global invariant descriptors of 3D face data in the form of 1D histograms of local surface curvatures, the distances between mesh triangles, and the volumes of the tetrahedrons formed by the mesh triangles of the 3D face data. Pan et al. [38] augmented the eigenface paradigm with face profiles. In [39], a uniformly triangulated mesh is adopted as a face template and Gaussian–Hermite moments are used to quantify shape variation around the salient facial regions (eyes, nose, and mouth). PCA and NN were employed for dimensionality reduction and classification, respectively. Gokberk et al. [40] used a variety of representations that include surface normals, face profiles, and depth maps. The matching decision is made upon the fusion of PCA, linear discriminant analysis (LDA), and NN classifiers. Mian et al. [41] and Al-Osaimi et al. [42] employed a 2D histogram that encompasses rank-0 tensor fields extracted at local points and from the whole depth map data.
Contributions and structure
In this article, we propose a topological framework for encoding a 3D facial mesh surface. Despite the rich and wide variety of 3D face representations developed in the literature, to the best of our knowledge, this is the first purely topology-based 3D face shape representation. This representation is concise (encompassing dimensionality reduction, as a means of improving efficiency or allowing data compression) and computationally efficient. We list five main characteristics of our proposed representation that distinguish it from other closely related face shape representations: (a) Intrinsically ordered: our representation exhibits a systematic arrangement of the triangular facets. This property allows extracting ordered structured patterns from a 3D triangular mesh surface. (b) Simplicity and compactness: our representation can be encompassed in a single data structure. (c) Generality: our representation can be seen as a generalization of other popular 3D facial surface representations, and it is possible to derive, for example, approximate geodesic structures from it. (d) Geodesic processing efficiency: our representation does not require any form of mesh preprocessing for the computation of geodesic entities, whereas other methods require mesh regularization to remove triangles with obtuse angles. (e) Computational efficiency: our framework is computationally more efficient; the computational complexity of the proposed representation is linear.
In addition, we show how this representation can be neatly adapted to address several 3D facial surface applications including, but not restricted to, mesh regularity assessment, facial surface cropping, face shape description, facial surface compression, and facial surface alignment.
The article is a substantial extension and continuation of the work published in [43], in which the focus was on facial landmark detection. The new contributions of this article are: (1) extension of the ordered structured patterns to new types of structures that include a variety of ordered arcs of rings; (2) a new approach for the derivation of regular ordered discrete facial contours; (3) an original approach for facial surface compression, with the related reconstruction algorithm; and (4) two methods for face alignment based on structured patterns, with a related metric for measuring face similarity.
The rest of the article is organized as follows: Section “The ordered ring facets (ORF) framework” describes the 3D mesh surface representation and the related algorithms. It also describes the representation’s features and compares it with closely related representations. Section “Applications” elaborates different applications of the proposed framework. Section “Conclusions” concludes the article and presents directions for future work.
The ordered ring facets (ORF) framework
A 3D facial surface representation is derived by constructing novel structured and ordered patterns in a 3D face triangular mesh surface. While the array-based representation of a triangular mesh is simple (an array of vertices and an array of triangular facets), it lacks the ordered structure that would allow a systematic browsing of the facets in the mesh. Indeed, the order in which facets are stored in the facet array is usually arbitrary and does not follow any particular arrangement. Therefore, processing and analyzing triangular mesh surfaces is more complex than for other intrinsically ordered shape modalities such as range images or voxel grids. We propose a framework for constructing mesh patterns based on the topological properties of a triangular mesh surface. These patterns include concentric rings of triangular facets, which we dub ORF. The term ordered reflects the fact that the facets can be ordered circle-wise within each ring or spiral-wise across the rings.
The proposed framework was inspired by observing the arrangement of triangular facets lying on a convex contour of edges, as shown in Figure 1a. We can notice that the facets can be categorized into two groups:

(1) Facets having an edge on the contour that point outside the area delimited by the contour (e.g., f_out1 and f_out2 in Figure 1a).

(2) Facets having a vertex on the contour that point inside the contour’s area (e.g., f_g1 and f_g2 in Figure 1a).

The facets in the second group fill the gaps between facets of the first group. These two groups of facets, dubbed F_out and F_gap facets, together form a kind of ring structure. From these ring facets, we can construct a new group of F_out facets that are in one-to-one adjacency with the F_gap facets of that ring. These new F_out facets form the basis of the subsequent ring (Figure 1b). By iterating this process, we obtain a group of concentric rings. The construction process is described in the algorithm below.
Algorithm ConcentricRings
Rings ← ConcentricRings(Fin_root, Fout_root)
  Rings ← [ ]; Fgap ← Fin_root; Fout ← Fout_root
  For i = 1 : NumberOfRings
    (Ring, NewFout, NewFgap) ← GetRing(Fout, Fgap)
    Append Ring to Rings
    Fout ← NewFout
    Fgap ← NewFgap
  End For
End ConcentricRings
The algorithm ConcentricRings has a computational complexity of O(n), where n is the number of facets in the rings. The function GetRing extracts the sequences of F_gap facets between pairs of consecutive F_out facets, constructs the new ring, and derives the F_out facets for the subsequent ring. The most attractive aspect of the algorithm ConcentricRings is that it allows a circular ordering of the facets in each ring. To this end, the root facets Fin_root and Fout_root should be arranged clockwise or anticlockwise. This arrangement is propagated across the rings by the function GetRing. Moreover, this circular arrangement implicitly produces a spiral-wise ordering of the facets across the concentric rings. Figure 1c,d depicts the ring construction steps. The algorithm of GetRing is as follows:
Procedure GetRing
(Ring, NewFout, NewFgap) ← GetRing(Fout, Fgap)
  NewFout ← [ ]; NewFgap ← [ ]
  For each pair (fout_i, fout_(i+1)%n), i = 1…n
    Append fout_i to Ring
    (Fgap_i, NewFout_i) ← Bridge(fout_i, fin_i, fout_(i+1)%n)
    Append Fgap_i to Ring
    Append Fgap_i to NewFgap
    Append NewFout_i to NewFout
  End For
End GetRing
The function Bridge extracts a circle-wise ordered sequence of adjacent F_gap facets, bridging the gap between a pair of consecutive F_out facets. Its algorithm is as follows:
Procedure Bridge
(Fg, Fo) ← Bridge(f1, i1, f2)
(The input and output parameters Fg, Fo, f1, i1, and f2 receive Fgap_i, NewFout_i, fout_i, fin_i, and fout_(i+1)%n, respectively.)
  Fg ← [ ]; Fo ← [ ]
  If (f1, f2) are adjacent then
    Fg and Fo remain empty
  Else
    v ← vertex shared by (f1, f2)
    gf ← facet adjacent to f1, different from i1, and containing v
    of ← facet adjacent to gf and not containing v
    prev ← f1
    While (gf ≠ f2)
      Append gf to Fg; Append of to Fo
      new_gf ← facet adjacent to gf, different from prev, and containing v
      new_of ← facet adjacent to new_gf and not containing v
      prev ← gf
      gf ← new_gf
      of ← new_of
    End While
  End If
End Bridge
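To make the walk concrete, the following Python sketch replays the Bridge procedure on a small illustrative fan of triangles around a shared vertex v = 0. The mesh, the facet labels, and the adjacency encoding (facets as vertex sets, adjacency via a shared edge) are assumptions made for this example only, not data structures from the article.

```python
# Toy fan around center vertex 0: inner fan facets f1, a, b, f2, plus one
# outer rim facet across each fan edge (illustrative mesh, not from the paper).
facets = {
    "f1": frozenset({0, 1, 2}), "a": frozenset({0, 2, 3}),
    "b": frozenset({0, 3, 4}), "f2": frozenset({0, 4, 5}),
    "o1": frozenset({1, 2, 6}), "o2": frozenset({2, 3, 7}),
    "o3": frozenset({3, 4, 8}), "o4": frozenset({4, 5, 9}),
}

def adjacent(x, y):
    # Two facets are edge-adjacent when they share exactly two vertices.
    return len(facets[x] & facets[y]) == 2

neighbors = {f: {g for g in facets if g != f and adjacent(f, g)} for f in facets}

def bridge(f1, i1, f2, v):
    """Collect the ordered F_gap facets (and their outer F_out mates)
    bridging the gap between two consecutive F_out facets f1 and f2."""
    Fg, Fo = [], []
    if adjacent(f1, f2):          # no gap to bridge
        return Fg, Fo
    gf = next(g for g in neighbors[f1] if g != i1 and v in facets[g])
    of = next(g for g in neighbors[gf] if v not in facets[g])
    prev = f1
    while gf != f2:
        Fg.append(gf); Fo.append(of)
        new_gf = next(g for g in neighbors[gf] if g != prev and v in facets[g])
        of = next(g for g in neighbors[new_gf] if v not in facets[g])
        prev, gf = gf, new_gf
    return Fg, Fo

Fg, Fo = bridge("f1", "o1", "f2", v=0)
print(Fg, Fo)   # gap facets ['a', 'b'] and their outer mates ['o2', 'o3']
```

On this fan, the walk returns the two gap facets between f1 and f2, in circular order, together with the outer facets that seed the next ring.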
Figure 2a depicts different examples of root contours and their corresponding concentric rings (Figure 2b), constructed on a virtually uniform mesh. Initially, the rings follow the root contour shape; they then take a hexagon-like shape as they expand away from the root. Figure 2c shows the same examples of rings with a color mapping reflecting the spiral-wise arrangement of the facets.
Studying the concentric rings in terms of the progression of the number of facets across the rings reveals interesting properties. For a regular mesh composed of similar triangles, the increment of the number of facets from one ring to the next follows an arithmetic progression. Examples are shown in Figure 2d. Moreover, we realized that certain permutations of the root Fin facets produce particular sequences of arc facets (see the first three examples in Figure 3a). In addition, it is possible to generate symmetric groups of arc facets by relaxing the conditions on the root contour, for instance by allowing a non-convex contour, as shown in the last example in Figure 3a. Interestingly, the arc facets maintain the ordering and arithmetic-progression properties of the ORF rings (see Figure 3b).
Complexity analysis
The algorithm ConcentricRings contains one loop with NumberOfRings iterations. In each iteration, the procedure GetRing is called. This procedure contains two nested loops. The number of iterations of the first loop (For each pair (fout_i, fout_(i+1)%n)) equals the number of F_out facets in a ring, whereas the number of iterations of the second loop (While (gf ≠ f2)), located within the procedure Bridge, equals the number of F_gap facets between each pair of consecutive F_out facets. As a ring is composed of F_out and F_gap facets, the number of instructions in GetRing is thus a linear function of the number of facets in the ith ring. Let n_i denote this number; the number of instructions at each iteration of ConcentricRings can then be expressed as $a n_i + b$, where a and b are constants. The total number of instructions in ConcentricRings is therefore $\sum_{i=1}^{\text{NumberOfRings}} (a n_i + b)$. Since $\sum_{i=1}^{\text{NumberOfRings}} n_i = n$, where n is the number of facets in the whole set of concentric rings, the total number of instructions can be expressed as $cn + d$, where c and d are constants. This makes the computational complexity of ConcentricRings O(n).
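The counting argument can be checked numerically. In the sketch below, the constants a, b and the ring sizes are arbitrary illustrative values; the check confirms that summing the per-ring cost a·n_i + b gives a·n + b·NumberOfRings, a quantity linear in the total facet count n.

```python
a, b = 7, 3                                   # illustrative cost constants
ring_sizes = [12 * i for i in range(1, 6)]    # n_i for an ideal single-facet root

total_cost = sum(a * n_i + b for n_i in ring_sizes)
n = sum(ring_sizes)                           # total facets across all rings
R = len(ring_sizes)                           # NumberOfRings

assert total_cost == a * n + b * R            # cost is linear in n
print(total_cost)
```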
Ordered patterns extraction
Investigating the indexes of the circularly ordered facets across the rings of a regular mesh reveals the possibility of extracting particular geometric patterns and rectangular grids around a given root facet.
Let us consider the example of three concentric rings in an ideal triangular mesh, with the facets ordered clockwise in each facet ring (Figure 4a). Let us also consider the eight orientations emanating from the root facet, labeled and ordered in clockwise fashion. Observing the sequence of facets along orientation 12, we notice that the facet index follows the arithmetic progression a_{n+1} = a_n + 3, a_1 = 3. The sequence of dark facets along orientation 1 follows the progression a_{n+1} = a_n + 4, a_1 = 5. A similar type of progression is observed for the rest of the orientations, across both the dark and the white facets, except orientation 9, which shows the constant sequence a_n = 1, as depicted in Table 1.
These interesting properties allow us to derive a variety of geometric patterns around the root facet. For instance, by grouping the facets along the directions 9-3/7-1, a cross-like pattern is obtained; similarly for the groupings along the directions 1-7/11-5 and 9-3/11-5. Figure 4b depicts instances of these patterns extracted from a real surface.
Considering the eight orientations again, segmenting the facets in the rings into four quadrants (shown in different colors in a larger ideal mesh in Figure 4c) within the pairs of orientations (9,12), (12,3), (3,6), and (6,9), respectively, and examining the sequence of the facet indexes (row-wise or column-wise) across each quadrant, we realize that it is again ruled by an arithmetic progression. For instance, the facet indexes in each row of the top-right quadrant follow the arithmetic progression a(i) = a(i−1) + 6. This property allows an automatic extraction of the quadrants. Moreover, we can show that a proper grouping of the obtained quadrants produces an indexed and ordered grid of facets centered on the root of the facet spiral. Figure 4d depicts an example of a small 6×6 grid of facets extracted at the cheek area of a facial surface.
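The index progressions quoted above are easy to reproduce. The helper below generates the first terms of an arithmetic progression; the two calls correspond to orientation 12 (a_1 = 3, step 3) and the dark facets of orientation 1 (a_1 = 5, step 4).

```python
def progression(a1, d, count):
    """First `count` terms of the arithmetic progression a_{n+1} = a_n + d."""
    terms, a = [], a1
    for _ in range(count):
        terms.append(a)
        a += d
    return terms

print(progression(3, 3, 5))   # facet indexes along orientation 12
print(progression(5, 4, 5))   # dark-facet indexes along orientation 1
```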
Extraction of approximated isogeodesic patterns
Some configurations of the root Fin facets produce rings exhibiting central symmetry. This can be noticed in the first two examples in Figure 2. The rings take hexagonal forms, which approximate to some extent isogeodesic rings with respect to the root facet. In the second example, the geodesic distances between the ring facets and the root vary in the range [nd·cos(π/6), nd], where n is the ring number and d is the average triangle edge length. Therefore, we can say that the ORF rings form a kind of approximated isogeodesic facets with respect to a root facet. Moreover, we can derive approximated geodesic paths from them. This is performed in a two-stage process (see Figure 5a). In the first stage, the rings are expanded from a source facet until the destination facet is reached (i.e., found in the last ring). In the second stage, the rings are browsed backwards, starting from the destination facet and iteratively looking for the nearest connected facet in the previous ring, until the source facet is reached. As the algorithm ConcentricRings implicitly computes the connectivity between facets in adjacent rings, the second stage has a complexity of O(n). Figure 5b depicts instances of geodesic paths joining facets on a given ring to the root facet.
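The backward browsing of the second stage can be sketched as follows; the rings, the connectivity table, and the facet labels are illustrative assumptions. Each step simply looks up a facet in the previous ring that is connected to the current one, so the walk touches each ring once.

```python
def backtrack_path(rings, connected, dest):
    """Walk from `dest` (a facet in the last ring) back to the source facet,
    choosing at each step a connected facet in the previous ring."""
    path = [dest]
    for ring in reversed(rings[:-1]):
        nxt = next(f for f in ring if f in connected[path[-1]])
        path.append(nxt)
    return path[::-1]            # source-to-destination order

# Toy data: ring 0 holds the source facet; rings grow outward.
rings = [["s"], ["r1a", "r1b"], ["r2a", "r2b", "r2c"]]
connected = {
    "r2b": {"r1a"}, "r1a": {"s"},
    "r2a": {"r1a"}, "r2c": {"r1b"}, "r1b": {"s"},
}
print(backtrack_path(rings, connected, "r2b"))   # ['s', 'r1a', 'r2b']
```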
Comparison with close representations
The representations closest to the ORF representation are the nose-tip-centered concentric spheres [11], the isogeodesic curves [32, 34], and the isogeodesic stripes [35]. Four main features distinguish the ORF from these representations:

(1) Simplicity and compactness: the representation can be stored in a monodimensional data structure.

(2) Processing efficiency: our representation does not require any form of preprocessing, whereas the other methods require local path modeling [11] or mesh regularization [32, 34, 35].

(3) Computational complexity: our representation is computationally more efficient, with a complexity of O(n) compared to O(n log n) in [34, 35], which also require a mesh regularization procedure of complexity O(n).

(4) Finally, the ORF representation allows us to derive intrinsically ordered isogeodesic patterns. The representations of [34, 35] lack the ordering property, whereas the contours proposed in [11] are neither isogeodesic nor ordered.

Table 2 summarizes this comparison.
Applications
This section demonstrates the generality of the ORF framework by adapting it to several 3D face applications. Section “Assessing the regularity of the mesh tessellation” describes a novel method for assessing triangular mesh regularity based on the ORF rings. Section “Frontal face extraction” demonstrates a technique for extracting the frontal face from a raw 3D face scan by propagating rings from the nose tip. Section “Face shape description” elaborates the extraction of two types of global, highly structured and ordered descriptors of the facial surface. Section “Facial surface compression” shows how to use these descriptors to derive a compressed representation of the facial surface, and describes an efficient algorithm for reconstructing the original surface. Finally, Section “Face alignment” proposes two direct facial surface alignment methods inspired by the ordered structure of the ORF rings. The test samples used in these applications are from the BU-3DFE database [44].
Assessing the regularity of the mesh tessellation
We propose a novel criterion for evaluating the quality of a triangular mesh surface. We note first that the definition of “mesh quality” is context-driven and tightly linked to the subsequent use of the mesh. Our criterion assesses the regularity (or uniformity) of the mesh tessellation, that is, the extent to which the mesh is composed of similar, equal-sized triangles. We have shown earlier that in a uniform mesh the number of triangles across the ORF rings evolves according to an arithmetic progression. For instance, for rings generated from a single facet, such as the first case in Figure 2d, we have the following progression:

nrt(n + 1) = nrt(n) + 12, with nrt(1) = 12,

where nrt(n) and nrt(n + 1) are the numbers of triangles in rings n and n + 1, respectively. Therefore, the numbers of facets across an n-ring ORF in a uniform mesh are [12, 24, 36, …, 12n]. This sequence will not be satisfied at surface locations where the uniformity of the mesh tessellation is corrupted. We therefore propose the following local criterion for evaluating the mesh tessellation uniformity:
where η_n and $\widehat{\eta_n}$ are the sequences representing the numbers of triangles across an n-ring ORF in an arbitrary mesh and in an ideal mesh, respectively.
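As a minimal sketch of this criterion, the code below compares an observed ring-size sequence against the ideal single-facet-root sequence [12, 24, …, 12n]; the relative Euclidean norm used for the deviation is an illustrative choice, not necessarily the exact form of the article’s equation.

```python
import math

def delta(eta, ideal=None):
    """Deviation of an observed ring-size sequence `eta` from the ideal one.
    Assumes the single-facet-root ideal [12, 24, ..., 12n]; the relative
    Euclidean norm is an illustrative choice, not necessarily the article's."""
    ideal = ideal or [12 * (i + 1) for i in range(len(eta))]
    num = math.sqrt(sum((e - h) ** 2 for e, h in zip(eta, ideal)))
    den = math.sqrt(sum(h ** 2 for h in ideal))
    return num / den

print(delta([12, 24, 36]))      # ideal tessellation -> 0.0
print(delta([10, 27, 33]) > 0)  # corrupted tessellation -> positive deviation
```

Because the criterion counts triangles rather than measuring lengths, it is unaffected by uniform scaling of the mesh, consistent with the invariance discussed below.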
Table 3 depicts examples of five concentric rings extracted from a real mesh surface, showing tessellations with different degrees of homogeneity. The first sample shows almost equal-sized equilateral triangles, contrary to the last one, which contains disparate triangles. The corresponding sequences (row 2) and Δ values (row 3) show a clear disparity. These observations suggest a great potential of the criterion Δ for evaluating the regularity of a triangular mesh. Figure 6 shows examples of Δ_3 computed on facial mesh surfaces. We can clearly see that the criterion Δ_3 faithfully reflects the degree of mesh uniformity. For instance, the irregularly tessellated areas at the nostrils are neatly spotted in the Δ_3 image.
The criterion Δ is invariant to uniform scaling because it is derived from a purely topological structure. Figure 7 shows that Δ_3 remains the same across differently scaled instances of a triangular mesh patch.
In a last experiment, we conducted a qualitative comparison of the criterion Δ_3 with two standard mesh regularity criteria, namely, the radii-ratio regularity criterion and the area regularity criterion. These two criteria are defined, respectively, as follows:

where A is the area of the triangular facet, a, b, c are its edge lengths, and R and r are the radii of its circumscribed and inscribed circles, respectively. The experiment was carried out on a spherical mesh surface exhibiting a kind of region-wise uniform tessellation. As shown in Figure 8a, the triangles are nearly equilateral and equal-sized; however, the tessellation shows different patterns across the surface. We computed the normalized criteria Δ_3, α, and ρ for each triangle of the sphere and color-mapped them onto the sphere’s surface (Figure 8b, 1st row). We can see that Δ_3 successfully captured the tessellation disparity across the surface, contrary to the criteria α and ρ, for which this disparity is virtually invisible. This difference in performance between Δ_3 and the other criteria can be explained by their histograms and variances (Figure 8b, 2nd and 3rd rows): Δ_3 covers the whole range, whereas α and ρ are tightly confined around specific values.
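The two defining equations are not preserved in the text above; the sketch below uses the standard textbook forms consistent with the variables listed, namely α = 4√3·A/(a² + b² + c²) and ρ = 2r/R, both normalized so that an equilateral triangle scores 1. Treat these as an assumption about the intended definitions.

```python
import math

def triangle_criteria(a, b, c):
    """Area criterion alpha and radii-ratio criterion rho for a triangle with
    edge lengths a, b, c. Standard forms, assumed to match the article's."""
    s = (a + b + c) / 2
    A = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    alpha = 4 * math.sqrt(3) * A / (a**2 + b**2 + c**2)
    r = A / s                        # inradius
    R = a * b * c / (4 * A)          # circumradius
    rho = 2 * r / R
    return alpha, rho

alpha, rho = triangle_criteria(1.0, 1.0, 1.0)   # equilateral triangle
print(round(alpha, 6), round(rho, 6))           # both close to 1
```

Both criteria are per-triangle shape measures; they cannot distinguish two tessellation patterns built from equally well-shaped triangles, which is exactly the disparity Δ_3 captures.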
Frontal face extraction
The popular technique for extracting the frontal face area, using a cropping sphere centered at the nose tip [41, 45], is sensitive to variations in face scale. An alternative approach uses 3D point clustering based on texture information, as proposed in [46]. This method requires the texture map to be available and is unstable for head orientations greater than ±45°.
Here, we propose a method for extracting the frontal face area from raw 3D facial data. This method requires the detection of the nose tip (using, for example, the method in [43]). In our approach, we exploit the ORF rings to develop an intrinsically scale-invariant method for frontal face extraction. The procedure is as follows:
For each facet t within a 5-ring nose tip neighborhood, we generate a set of facets $\mathcal{R}(t)$ using the GetFacetSpiral algorithm initialized at t, with the stop condition of the algorithm ConcentricRings set to “the rings reach a border of the surface”. Then we merge all the sets $\mathcal{R}(t)$ into a single set $\mathcal{F}$ using the following formula:
where ⊎ is the exclusive union. This procedure ensures a maximum coverage of the central face area. An illustration of the frontal face extraction process is shown in Figure 9.
We applied our method to a group of 90 raw facial scans; all the cropped scans encompass both eyes, part of the forehead, and the cheeks, thus ensuring that the full face area is included. Figure 10 highlights the scale-invariance property of our method compared with the cropping sphere technique. Two face samples are cropped using the same cropping sphere (2nd column). We can see that the second instance is overcropped because the sphere radius does not fit the face. Hence, unless we have a good estimate of the face’s size, the cropping sphere method may result in over- or under-cropping. With our method, the two instances are correctly cropped, reflecting its robustness with respect to scale changes.
Face shape description
From the ORF rings, we can derive discrete 3D curves represented by sequences of points, where each point is the center of a triangular facet. When the facets are arranged spiral-wise, a single spiral curve spanning the whole surface can be extracted. In addition, the resolution of these curves is controlled by subsampling the sequence of rings. Figure 11 depicts examples of these contours and spiral curves extracted from a facial surface at different resolutions. These curves show, however, some irregularities inherited from the raw triangular mesh. This problem is addressed as follows:
Let $p_1, \dots, p_{n_k}$ be a sequence of ordered points representing the facets’ centers within a given ring. A basic spatial smoothing is applied, followed by a chord-length parametrization, which is approximated by the following mapping:
ξ^{−1} maps the control points p_j onto the unit interval [0, 1], where t_j is the arc length from the point p_1 to p_j, assuming that ξ^{−1}(p_1) = 0. Next, we parameterize the curves, using the inverse map, with a natural cubic spline interpolation. We obtain a 3D cubic spline curve:
Afterwards, we derive from this continuous curve function a set of uniformly sampled and ordered points via a regular subsampling of the parameter t. Figure 12 illustrates this process with one facet ring.
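The chord-length parametrization and uniform resampling can be sketched with NumPy as follows; for brevity, linear interpolation stands in for the article’s natural cubic spline, and the preliminary smoothing step is omitted.

```python
import numpy as np

def resample_uniform(points, n_out):
    """Chord-length parametrize a 3D polyline onto [0, 1], then resample it
    at n_out uniformly spaced parameter values (linear interpolation is used
    here as a stand-in for the article's natural cubic spline)."""
    points = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)   # segment lengths
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]                                # arc lengths mapped to [0, 1]
    t_new = np.linspace(0.0, 1.0, n_out)
    return np.column_stack([np.interp(t_new, t, points[:, k]) for k in range(3)])

# Illustrative ring polyline (facet centers), resampled to 7 uniform points.
ring = [(0, 0, 0), (1, 0, 0), (1, 2, 0), (1, 2, 3)]
resampled = resample_uniform(ring, 7)
print(resampled.shape)                        # (7, 3)
```

The endpoints of the curve are preserved, and the output points are equally spaced in the chord-length parameter, which is what makes the resulting discrete contours regular.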
The obtained ordered discrete curves, whether encoded as concentric contours or as a spiral curve, encapsulate the face shape variation at both local and global scales. Moreover, since they are attached to the mesh surface, they can be augmented with the surface normal at each point. The spiral curve also has the advantage of encoding the face surface in a single monodimensional structure. To the best of our knowledge, this is the first model that encodes a facial surface in such a compact structure. The concentric contours inherit the isogeodesic property of the ORF rings. Their associated cubic spline functions, given below, form a family of spatially periodic functions suitable for harmonic or multiscale analysis.
They can also be used for face shape compression and matching, as will be described in Sections “Facial surface compression” and “Face alignment”, respectively.
Facial surface compression
From the set of the aforementioned cubic spline functions Θ_k(t) = [x_k(t), y_k(t), z_k(t)], $k = 1 \dots \mathcal{M}$, we derive the following sequence of Ordered Concentric Discrete Contours (OCDC):
We emphasize again that the points in each discrete curve Γ_k are ordered in a circular fashion. With this sampling scheme, the number of points across the contours Γ_k follows the same arithmetic progression as the number of facets across the ORF rings in a regular mesh surface. In addition to uniform coverage, the sequence Γ_k ensures a compact encoding of the facial surface, with a compression ratio above 2 (if we consider the mesh originally stored in the standard facets-vertices array format).
Yet the most appealing feature of the Γ contours representation is that it allows an efficient reconstruction of the mesh surface using an algorithm of linear complexity (see the algorithm Contours2mesh below), as compared to the facial surface construction from geodesic curves in [34], which uses the Delaunay triangulation algorithm of quadratic complexity. The efficiency of the proposed algorithm is due to the ordered structure of the discrete contours Γ_k, a property lacking in the geodesic contours of [34]. A reconstruction example is depicted in Figure 13, which shows, from left to right, the original surface, its related Γ contours, the corresponding triangular mesh generated using the algorithm Contours2mesh, the rendered facial surface obtained with that triangulation, and the alignment of the original and reconstructed surfaces. We can clearly see that the two surfaces fit almost perfectly.
Algorithm Contours2mesh
The inputs to this algorithm are the concentric contours Γ_1 = ⟨P_(1,1), P_(1,2), …, P_(1,N_1)⟩, Γ_2 = ⟨P_(2,1), P_(2,2), …, P_(2,N_2)⟩, …, Γ_m = ⟨P_(m,1), P_(m,2), …, P_(m,N_m)⟩, where m is the number of contours and N_1, N_2, …, N_m are the numbers of 3D points in the contours Γ_1, Γ_2, …, Γ_m, respectively. N_ring is the number of triangles in the ring constructed between contours k and k + 1.
T is a triangle defined by three points. d_1 and d_2 are the Euclidean distances between P_(k,i), P_(k+1,j+1) and between P_(k+1,j), P_(k,i+1), respectively.
for k = 1 → (m − 1) do
  i = 1
  j = 1
  N_ring = 0
  for l = 1 → (N_k + N_{k+1} − 1) do
    d_1 = ∥P_(k,i) − P_(k+1,j+1)∥
    d_2 = ∥P_(k+1,j) − P_(k,i+1)∥
    if d_1 < d_2 then
      T ← {P_(k,i), P_(k+1,j+1), P_(k+1,j)}
      j = j + 1
      N_ring = N_ring + 1
    else
      T ← {P_(k+1,j), P_(k,i), P_(k,i+1)}
      i = i + 1
      N_ring = N_ring + 1
    end if
  end for
end for
End Algorithm Contours2mesh
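A runnable sketch of this zipper triangulation is given below (our code, not the authors' implementation). We use 0-based indices and modular arithmetic to close each circular contour, and we advance the index on the contour whose vertex the new triangle consumes; both are adaptations we make where the pseudocode leaves the wraparound implicit.

```python
import numpy as np

def contours2mesh(contours):
    """Linear-time 'zipper' triangulation of ordered concentric contours.

    Walks a pair of adjacent contours, always spanning the shorter
    diagonal, producing one triangle per consumed edge."""
    triangles = []
    for k in range(len(contours) - 1):
        A = np.asarray(contours[k], dtype=float)
        B = np.asarray(contours[k + 1], dtype=float)
        na, nb = len(A), len(B)
        i = j = 0
        for _ in range(na + nb):   # closed contours: N_k + N_{k+1} triangles
            d1 = np.linalg.norm(A[i % na] - B[(j + 1) % nb])
            d2 = np.linalg.norm(B[j % nb] - A[(i + 1) % na])
            if d1 < d2:
                triangles.append((A[i % na], B[(j + 1) % nb], B[j % nb]))
                j += 1             # edge consumed on contour k+1
            else:
                triangles.append((B[j % nb], A[i % na], A[(i + 1) % na]))
                i += 1             # edge consumed on contour k
    return triangles
```

Each ring between two closed contours yields exactly N_k + N_{k+1} triangles, and the whole reconstruction costs one pass over the contour points, which is the linear-complexity property claimed above.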
Face alignment
A standard approach to face matching is to align the corresponding surfaces and measure their overlap. This procedure requires computing the geometric transformation, composed of a rotation R and a translation T, that brings the two facial surfaces $\mathcal{F}$ and ${\mathcal{F}}^{\prime}$ into the same reference frame. This method raises the fundamental correspondence problem, that is, finding points p_i, i = 1…N, in $\mathcal{F}$ that anatomically correspond to points q_i in ${\mathcal{F}}^{\prime}$. If a sufficient number of valid correspondences is available, the geometric transformation can be estimated by minimizing the mean square difference function.
This function can be minimized via a two-stage direct solution, where the translation is computed first, and then the rotation is determined using the quaternion representation. In a face surface registration context, this method usually involves points that can be reliably detected, for example points at distinctive facial landmarks such as the nose tip and eye corners. A representative work using this scheme can be found in [45]. However, considering the small number of these feature points, data noise and deficiencies can severely affect the accuracy of the estimated transformation.
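The two-stage direct solution can be sketched as follows: centroids are removed first (the translation stage), and the optimal rotation is then read off the largest eigenvector of a 4×4 matrix built from the cross-covariance of the matched points, in the spirit of the quaternion closed forms of Horn and of Faugeras and Hebert [52]. This is a generic sketch with names of our own, not the authors' exact implementation.

```python
import numpy as np

def quaternion_rotation(P, Q):
    """Closed-form rotation estimate between matched point sets: rows of
    P map onto rows of Q (up to translation, removed by centering)."""
    P = P - P.mean(axis=0)              # translation stage
    Q = Q - Q.mean(axis=0)
    S = P.T @ Q                         # 3x3 cross-covariance
    # 4x4 symmetric matrix whose top eigenvector is the optimal quaternion
    A = S - S.T
    delta = np.array([A[1, 2], A[2, 0], A[0, 1]])
    N = np.empty((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = delta
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    w, V = np.linalg.eigh(N)
    q = V[:, np.argmax(w)]              # unit quaternion (w, x, y, z)
    w0, x, y, z = q
    # quaternion -> rotation matrix
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - z * w0), 2 * (x * z + y * w0)],
        [2 * (x * y + z * w0), 1 - 2 * (x * x + z * z), 2 * (y * z - x * w0)],
        [2 * (x * z - y * w0), 2 * (y * z + x * w0), 1 - 2 * (x * x + y * y)],
    ])
```

As the text notes, the attraction of this route is that it is closed-form: no iteration and no initialization, provided the correspondences are valid.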
To overcome this limitation, other methods involve the largest possible number of data points, using variants of the standard iterative closest point (ICP) algorithm [47]. Basically, this algorithm starts by establishing correspondences between pairs of points across the two surfaces based on proximity criteria, then computes the rigid transformation that maps one point set onto the other. This transformation is then applied to all the points in the first set to establish better correspondences. These last two steps are repeated until convergence is reached. This iterative process determines the transformation by successive refinements, while enhancing the plausibility of the correspondences. The ICP provides an accurate alignment with O(n^2) complexity in its standard variant. In addition, it requires a very good initialization (i.e., roughly aligned surfaces), otherwise it might get trapped in a local minimum. This again raises the issue of first determining a reasonable number of valid correspondences. Examples of ICP-based methods appeared in [31, 48–50].
We propose a new alignment scheme, based on the ORF rings concept, which embodies the positive aspects of the two aforementioned schemes: a closed-form solution and a large number of valid correspondences. Within this scheme, we propose two methods, which we call face grid alignment (FGA) and OCDC alignment (OCDCA). These methods will be described in the next two sections. We point out that these two methods compute the rotation component of the rigid transformation. The translation is determined by first detecting the nose tip in each of the two facial surfaces using techniques such as [43, 51].
FGA method
In this method, we exploit the grid pattern described in Section “Ordered patterns extraction”. By constructing a facet grid around the nose tip in each of the two facial surfaces, we can derive, based on its ordered structure, m×n valid pairs of corresponding facets, where (m, n) is the size of the grid. Using these correspondences, we compute the rotation via a direct solution. We used the closed-form solution based on the quaternion representation proposed by Faugeras and Hebert [52]. This representation has the advantage of providing a compact form of the rotation comprising only two parameters: the axis around which the face is rotated and the angle of the rotation around that axis.
However, the FGA method has an issue arising from the fact that three different grids can be constructed from a root facet, depending on the order of the adjacent facets of the root facet (see Figure 14). This makes the number of potential corresponding grid pairs equal to nine. To address this ambiguity, we compute the nine potential rotations and then select the one having the least residual error. Figure 15 (left) shows pairs of faces in neutral and moderately angry expressions, with the three constructed grids on each instance. To the right, the corresponding nine alignment trials, including the valid one (in the top right corner), are shown.
OCDCA method
This method addresses the correspondence problem by taking advantage of the ordered structure of the OCDC Γ_k and their invariance to rigid transformations. Let us consider two contours Γ_k and Γ′_k derived from the facial surfaces $\mathcal{F}$ and ${\mathcal{F}}^{\prime}$, respectively. We can state that their associated points p_1,…,p_{12k} and p′_1,…,p′_{12k}, respectively, form a set of corresponding points up to a shifting factor τ_k. This shifting originates from the difference in the ordering of the facets adjacent to the root facet, which we mentioned earlier and described in Figure 14.
The locations of the first points of the contours Γ_k depend on this ordering. Figure 16a depicts two faces and their related contours, showing the different locations of the two first points. To set the valid correspondences, we need to apply a circular shift of order τ_k to either Γ_k or Γ′_k. The optimal shift is the one minimizing the following cross-correlation formula:
The estimation is performed by looping over each value of τ, computing the rotation that minimizes (11) using the closed-form solution based on the quaternion representation [52]. Afterwards, we select the shift corresponding to the minimal residual value. This procedure also determines the rotation R_k that aligns Γ′_k to Γ_k. An outcome of this procedure is illustrated in Figure 16b,c, showing the two largest contours and their alignment. Figure 16e depicts the two faces after registration using the transformation estimated from these two contours.
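The shift search can be sketched as an exhaustive loop over τ. In this sketch an SVD (Kabsch) solver stands in for the quaternion closed form of [52] (both are closed-form least-squares rotation estimators), and all function names are ours.

```python
import numpy as np

def fit_rotation(P, Q):
    """Closed-form rotation mapping centered rows of P onto rows of Q
    (SVD/Kabsch stand-in for the quaternion solution)."""
    U, _, Vt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def align_contours(G, G2):
    """For every circular shift tau of the second contour, fit the best
    rotation under that correspondence and keep the minimizer."""
    G = np.asarray(G, dtype=float)
    G2 = np.asarray(G2, dtype=float)
    G = G - G.mean(axis=0)                   # translation removed first
    G2 = G2 - G2.mean(axis=0)
    best = (np.inf, 0, np.eye(3))
    for tau in range(len(G)):
        Q = np.roll(G2, -tau, axis=0)        # candidate correspondence
        R = fit_rotation(G, Q)
        residual = np.linalg.norm(G @ R.T - Q)
        if residual < best[0]:
            best = (residual, tau, R)
    return best                              # (residual, tau, R)
```

The loop is O(len(G)) rotation fits per contour pair, which stays cheap because each fit is closed-form.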
Because of their small size and the mesh irregularity at the nostrils, the contours around the nose tip are not reliable and are therefore not involved in the rotation computation. Given $\mathcal{N}$ matched pairs (Γ_k, Γ′_k), we compute $\mathcal{N}$ rotations and take their mean as the optimal rotation. This mean is obtained simply by averaging the related rotation axes and angles. An alternative would be to create a large set of point correspondences by concatenating all the correspondences associated with the selected pairs of OCDC contours (Γ_k, Γ′_k), and then to estimate the optimal rotation from this large set via a closed-form solution.
Measuring face similarity
We qualitatively evaluate the potential of the OCDCA method for face matching. To this end, the symmetric Hausdorff distance [53] is adopted as a similarity criterion. This distance provides a more accurate estimate of the distance between two surfaces than the residual error of the least-squares minimization. However, computing the Hausdorff distance in its standard form between two facial surfaces $\mathcal{F}$ and ${\mathcal{F}}^{\prime}$ (i.e., between the two facial surfaces as single blocks) is computationally demanding. For this reason, we take advantage of the ORF structure to reach a more efficient similarity criterion: the sum of the Hausdorff distances between pairs of corresponding ring facets. This criterion is defined as follows:
where r_1,…,r_n and r′_1,…,r′_m, n ≤ m, are the sequences of ring facets associated with the facial surfaces $\mathcal{F}$ and ${\mathcal{F}}^{\prime}$, respectively. Figure 17 depicts a color map matrix representing pairwise distances between 12 face instances after scaling to the interval [0, 1]. The matrix clearly reflects the discriminative potential of this distance.
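Criterion (12) can be sketched directly on discrete ring point sets. Below is a brute-force discrete symmetric Hausdorff distance, not the optimized MESH tool of [53]; the helper names are ours.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 3D point sets
    (brute-force discrete approximation)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(),   # directed A -> B
               D.min(axis=0).max())   # directed B -> A

def ringwise_distance(rings_F, rings_F2):
    """Criterion (12): sum of symmetric Hausdorff distances over the
    n <= m corresponding ORF ring pairs."""
    n = min(len(rings_F), len(rings_F2))
    return sum(hausdorff(rings_F[k], rings_F2[k]) for k in range(n))
```

Splitting the computation over rings replaces one large all-pairs distance matrix with several much smaller ones, which is where the efficiency gain over the single-block distance comes from.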
In the next experiment, we used the evaluation paradigm of Gordon [1] to assess the comparison performance of criterion (12). In this paradigm, the difference between two face instances of the same person should be smaller than the difference between face instances of two different persons. Let m be the number of different subjects and n the number of 3D facial image instances of each subject. For a given subject, the whole set of instances can be partitioned into two groups:
where the set $\mathcal{A}$ contains all the instances of that subject, and the set $\mathcal{B}$ contains all the instances of the other subjects. Ideally, the recognition hypothesis states that for all i, j, k with i ≠ j, i, j ≤ n, and k ≤ (m−1)n, we have
The recognition performance can be evaluated by computing the percentage of comparisons for which (13) holds. Using basic counting principles, the total number of comparisons is mn × (n−1) × (m−1)n. This can be better understood through the following algorithm enumerating all the comparisons:
For each subject (m subjects)
  For each target x of that subject (n targets)
    For each instance of that subject different from x (n − 1 instances)
      For each instance of the other subjects ((m − 1)n instances)
        evaluate (13)
      End For
    End For
  End For
End For
From this algorithm, we can easily deduce that the total number of comparisons is mn(n−1)(m−1)n. However, to reduce computation, we consider only the first instance of each subject as a target. The total number of comparisons then becomes m(n−1)(m−1)n.
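The two counts can be checked mechanically against the nested loops above (hypothetical helper names):

```python
def total_comparisons(m, n, single_target=False):
    """Closed-form count of evaluations of (13): m*n*(n-1)*(m-1)*n in the
    full scheme, m*(n-1)*(m-1)*n when only the first instance of each
    subject serves as a target."""
    targets_per_subject = 1 if single_target else n
    return m * targets_per_subject * (n - 1) * (m - 1) * n

def count_by_loops(m, n):
    """Brute-force count following the four nested loops above."""
    count = 0
    for _subject in range(m):
        for _target in range(n):
            for _same_subject_instance in range(n - 1):
                for _other_subject_instance in range((m - 1) * n):
                    count += 1
    return count
```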
We would like to note that our face-matching method is not designed to deal with facial expressions, for the obvious reason that the OCDC contours are not invariant to facial shape deformation. Therefore, in the experimentation we consider only instances in neutral and very moderate expressions. We also employ a basic classification scheme, as the objective of the experimentation is to validate the separability criterion (12) rather than to assess a full 3D face recognition method, for which a more robust classification scheme would be required.
For the testing, we used the BU-3DFE database [44]. This database contains about 2,500 scans of 100 subjects. Each subject is captured in seven different facial expressions (neutral, anger, disgust, fear, happy, sad, and surprise) with four levels of intensity (except for the neutral). Each scan comes in two versions, raw and cropped. These scans provide a 3D triangular mesh face model having 25,000 facets on average. We considered two sets: the first consists of 30 subjects, each with three shots corresponding to neutral, level-1 sad, and level-1 happy; these last two samples are the closest to the neutral expression in this database. The second includes the same 30 subjects in neutral expression together with their counterparts in level-2 sad and level-2 happy. The purpose of this second set is to assess the extent to which our criterion can accommodate facial expression changes. Table 4 depicts the comparison results obtained with these sets. The first set's result is quite reasonable considering that it is not actually composed of instances in the same neutral expression. The performance degrades significantly for the second set, confirming the sensitivity of criterion (12) to facial expressions.
Conclusions
In this article, we presented a unified framework for analyzing, describing, and encoding 3D triangular mesh facial surfaces. We proposed a novel representation that is simple, compact, generic, and computationally less expensive than other popular representations. In addition, this representation is characterized by the innovative aspect of intrinsically embedding a structured and ordered arrangement of the triangular facets of the mesh surface. We also showcased its wide spectrum of applications, which includes mesh regularity assessment, facial surface cropping, face shape description, facial surface compression, and facial surface alignment. The framework can be applied to more general surfaces containing a central feature that acts as the origin of the rings.
The spiral facet has two limitations: (1) It cannot operate effectively on mesh surfaces having holes. For holes emanating from a surface digitization process, a hole-filling preprocessing step is needed. (2) While our representation can be computed on a mesh having different-sized triangles, the irregularity of the produced concentric rings does not allow drawing a valid interpretation of the surface for such meshes. Figure 18 depicts some facial mesh surfaces illustrating this aspect. These examples are optimized mesh surfaces whose triangle properties exhibit large variability. While the spiral facet framework can handle these surfaces, little meaningful information can be derived from the extracted rings.
For future work, we plan to investigate further the facet arc patterns (Figure 3) and their potential for facial surface analysis tasks such as segmentation and the design of local facial descriptors. The compact and ordered structure of the spiral facet is also enticing for deriving a kind of signature, or “faceprint”, that would uniquely define a facial surface instance. Finally, we plan to investigate further the surface compression aspect. Here, the spiral-wise arrangement of the facets and their topological constraints are appealing ingredients for the design of a one-dimensional compressed model of the facial surface.
References
 1.
Gordon GG: Face recognition from depth maps and surface curvature. In SPIE Conf. on Geometric Methods in Computer Vision, vol. 1570. USA; 1991.
 2.
Lee J, Milios E: Matching range images of human faces. In Proceeding of Conference on Computer Vision. Osaka, Japan; 1990.
 3.
Horn BKP: Extended gaussian images. Proc. IEEE 1984, 72(2):671.
 4.
Tanaka HT, Ikeda M, Chiaki H: Curvature-based face surface recognition using spherical correlation: principal directions for curved object recognition. In Third IEEE International Conference on Automatic Face and Gesture Recognition. Nara, Japan; 1998.
 5.
Moreno AB, Sánchez Á, Fco J, Diaz FJ: Face recognition using 3D surface-extracted descriptors. In Proceedings of the Irish Machine Vision and Image Processing Conference. Ulster, Ireland; 2003.
 6.
Chua CS, Jarvis R: Point signatures: a new representation for 3D object recognition. Int. J. Comput. Vis 1997, 25(1):63–85. 10.1023/A:1007981719186
 7.
Chua CS, Han F, Ho YK 2000.
 8.
Cartoux JY, Lapreste JT, Richetin M: Face authentication or recognition by profile extraction from range images. In Proceedings of the IEEE Workshop on Interpretation of 3D Scenes. Austin, Texas; 1989.
 9.
Beumier C, Acheroy M: Automatic 3D face authentication. Image Vis. Comput 2000, 18(4):315. 10.1016/S0262-8856(99)00052-9
 10.
Wu Y, Pan G, Wu Z: Face authentication based on multiple profiles extracted from range data. In Proceedings Conf. Audio and VideoBased Biometric Person Authentication. Guildford, UK; 2003.
 11.
Pears N, Heseltine T: Isoradius contours: new representations and techniques for 3D face registration and matching. In Proceedings of the IEEE Symposium on 3D Data Processing, Visualization, and Transmission. Chapel Hill, NC, USA; 2006.
 12.
Beumier C, Acheroy M: Face verification from 3D and grey level clues. Pattern Recogn. Lett 2001, 22:1321–1339. 10.1016/S0167-8655(01)00077-0
 13.
Xu D, Hu P, Cao W, Li H: 3D face recognition using moment invariants. In Proceedings IEEE International Conference on Shape Modeling and Applications. NY, USA; 2008.
 14.
Wong HS, Cheung KKT, Ip HHS: 3D head model classification by evolutionary optimization of the extended Gaussian image representation. Pattern Recogn 2004, 37:2307–2322.
 15.
Conde C, Cipolla R, Aragon LJR, Serrano A, Cabello E: 3D facial feature location with spin images. In Proceedings of IAPR Conference on Machine Vision Applications. Tsukuba, Japan; 2005.
 16.
Bae M, Razdan A, Farin GE: Automated 3D Face authentication & recognition. In Proceedings of Advanced Video and Signal Based Surveillance Conference. London, UK; 2007.
 17.
Wu Z, Wang Y, Pan G: 3D Face recognition using local shape map. In Proceedings of International Conference on Image Processing. Singapore; 2004.
 18.
Bunke B: Face recognition using range images. In Proceedings of International Conference on Virtual Systems and MultiMedia. Washington, USA; 1997.
 19.
Lee Y, Park K, Shim J, Yi T: 3D face recognition using statistical multiple features for the local depth information. In Proceedings of International Conference on Multimedia and Expo, vol. 3. Maryland, USA; 2003.
 20.
Hesher C, Srivastava A, Erlebacher G: A novel technique for face recognition using range imaging. In Proceedings of the Seventh International Symposium on Signal Processing and its Applications. Paris, France; 2003.
 21.
Xu C, Wang Y, Tan T, Quan L: A new attempt to face recognition using eigenfaces. In Proceedings of the Asian Conference on Computer Vision. Jeju Island, Korea; 2004.
 22.
Frey PJ, Borouchaki H: Surface mesh quality evaluation. Int. J. Numer. Methods Eng 1999, 45:101. 10.1002/(SICI)1097-0207(19990510)45:1<101::AID-NME582>3.0.CO;2-4
 23.
Luo RC, ChihChen Y, Kuo Lan S: Multisensor fusion and integration: approaches, applications, and future research directions. IEEE Sensors J 2002, 2: 107. 10.1109/JSEN.2002.1000251
 24.
Tsutsumi S, Kikuchi S, Nakajima M: Face Identification Using a 3D grayscale image: a method for lessening restrictions on facial directions. In Proceedings of the third IEEE International Conference on Automatic Face and Gesture Recognition. Nara, Japan; 1998.
 25.
Wang Y, Chua C, Ho Y: Facial feature detection and face recognition from 2D and 3D images. Pattern Recogn. Lett 2002, 23:1191. 10.1016/S0167-8655(02)00066-1
 26.
Chang KI, Bowyer KW, Flynn PJ: Multi-modal 2D and 3D biometrics for face recognition. In Proceedings of the IEEE International Workshop on Analysis and Modeling of Faces and Gestures. Nice, France; 2003.
 27.
Tsalakanidou F, Tzovaras D, Strinzis M: Use of depth and colour eigenfaces for face recognition. Pattern Recogn. Lett 2003, 24:1427. 10.1016/S0167-8655(02)00383-5
 28.
Lu X, Jain AK: Integrating range and texture information for 3D face recognition. In Proceedings of the seventh IEEE Workshops on Application of Computer Vision, vol. 1. Breckenridge, CO, USA; 2005.
 29.
Cook J, Chandran V, Sridharan S, Fookes C: Face recognition from 3D data using iterative closest point algorithm and Gaussian mixture models. 2004.
 30.
Irfanoglu MO, Gokberk B, Akarun L: 3D shape-based face recognition using automatically registered facial surfaces. In Proceedings of the International Conference on Pattern Recognition. Cambridge, UK; 2004.
 31.
Lu X, Colbry D, Jain AK: Threedimensional model based face recognition. In Proceedings of the International Conference on Pattern Recognition. Cambridge, UK; 2004.
 32.
Bronstein A, Bronstein M, Kimmel R: Three-dimensional face recognition. Int. J. Comput. Vis 2005, 64:5. 10.1007/s11263-005-1085-y
 33.
Bronstein A, Bronstein M, Kimmel R: Robust expression-invariant face recognition from partially missing data. In Proceedings of the European Conference on Computer Vision. Graz, Austria; 2006.
 34.
Samir C, Srivastava A, Daoudi M, Klassen E: An intrinsic framework for analysis of facial surfaces. Int. J. Comput. Vis 2009, 82(1):80. 10.1007/s11263-008-0187-8
 35.
Berretti S, Bimbo AD, Pala P: Description and retrieval of 3D face models using isogeodesic stripes. In Proceedings of the Conference on Multimedia Information Retrieval. Philadelphia, USA; 2006.
 36.
Vogel J, Schwaninger A, Wallraven C, Bülthoff HH: Categorization of natural scenes: local vs. global information. In Proceedings of the 3rd Symposium on Applied Perception in Graphics and Visualization. Boston, USA; 2006.
 37.
Vandeborre J, Couillet V, Daoudi M: A practical approach for 3D model indexing by combining local and global invariants. In Proceedings of First International Symposium on 3D Data Processing Visualization and Transmission. Padova, Italy; 2002.
 38.
Pan G, Wu Y, Wu Z, Liu W: 3D Face recognition by profile and surface matching. In Proceedings of the International Joint Conference on Neural Networks. USA; 2003.
 39.
Xu C, Wang Y, Tan T, Quan L: Automatic 3D face recognition combining global geometric features with local shape variation information. In Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition. Seoul, Korea; 2004.
 40.
Gokberk B, Salah A, Akarun L: Rank-based decision fusion for 3D shape-based face recognition. Lecture Notes in Computer Science. Springer, Berlin; 2005.
 41.
Mian A, Bennamoun M, Owens R: An efficient multimodal 2D-3D hybrid approach to automatic face recognition. IEEE Trans. Pattern Anal. Mach. Intell 2007, 29(11):1927.
 42.
Al-Osaimi FR, Bennamoun M, Mian A: Integration of local and global geometrical cues for 3D face recognition. Pattern Recogn 2008, 41(3):1030–1040. 10.1016/j.patcog.2007.07.009
 43.
Werghi N, Boukadida H, Meguebli Y: The spiral facets: a unified framework for the analysis and description of 3D facial mesh surface. 3D Res. 2010, 3(5):1.
 44.
Yin L, Wei X, Sun Y, Wang J, Rosato MJ: 3D facial expression database for facial behavior research. In Proceedings of 7th International Conference on Automatic Face and Gesture Recognition. Southampton, UK; 2006.
 45.
Nair P, Cavallaro A: 3D face detection, landmark localization, and registration using a point distribution model. IEEE Trans. Multimed 2009, 1(4):611.
 46.
Niese R, Al-Hamadi A, Michaelis B: A novel method for 3D face detection and normalization. J. Multimed 2007, 2(5):112.
 47.
Besl PJ, McKay ND: A method for registration of 3D Shapes. IEEE Trans. Pattern Anal. Mach. Intell 1992, 14: 239. 10.1109/34.121791
 48.
Irfanoglu MO, Gokberk B, Akarun L: 3D shape-based face recognition using automatically registered facial surfaces. In Proceedings of the International Conference on Pattern Recognition. Cambridge, UK; 2004.
 49.
Lu X, Jain AK: Deformation analysis for 3D face matching. In Proceedings of the IEEE Workshops on Application of Computer Vision. Breckenridge, CO, USA; 2005.
 50.
Chang K, Bowyer K, Flynn P: Multiple nose region matching for 3D face recognition under varying facial expression. IEEE Trans. Pattern Anal. Mach. Intell 2006, 28(10):1695.
 51.
Xu C, Tan T, Wang Y, Quan L: Combining local features for robust nose location in 3D facial data. Pattern Recogn. Lett 2006, 27:1487. 10.1016/j.patrec.2006.02.015
 52.
Faugeras OD, Hebert M: The representation, recognition, and locating of 3D objects. Int. J. Robot. Res 1986, 5(3):27. 10.1177/027836498600500302
 53.
Aspert N, Santa-Cruz D, Ebrahimi T: MESH: measuring errors between surfaces using the Hausdorff distance. In Proceedings of the IEEE International Conference on Multimedia and Expo. Lausanne, Switzerland; 2002.
Acknowledgements
This work was supported by the Emirates Foundation Grant Ref:2009134.
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Werghi, N., Rahayem, M. & Kjellander, J. An ordered topological representation of 3D triangular mesh facial surface: concept and applications. EURASIP J. Adv. Signal Process. 2012, 144 (2012). https://doi.org/10.1186/1687-6180-2012-144
Keywords
 3D facial mesh surface
 Ordered triangular mesh patterns
 3D facial shape analysis
 3D facial shape description
 3D face matching