Density-Based 3D Shape Descriptors

We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdf) of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, the Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.


INTRODUCTION
The use of 3D models is becoming increasingly commonplace with their distribution on the Internet and with the availability of 3D scanners. Many fields make use of 3D object models: computer graphics, computer-aided design, medical imaging, molecular analysis, cultural heritage in virtual environments, the movie industry, military target detection, and industrial quality control, to name a few. Efficient organization of and access to the resulting databases demand effective tools for indexing, categorization, classification, and representation of 3D objects. All these database activities hinge on the development of 3D object similarity measures. There are two paradigms for 3D object database operations and for the design of similarity measures, namely, the feature vector approach and the nonfeature vector approach [1, 2]. The feature vector paradigm aims at obtaining numerical values of certain shape descriptors and measuring the distances between these vectors. A typical example of the nonfeature-based approach is to describe the object as a graph and then use graph similarity metrics. In this work, we follow the feature vector paradigm, and furthermore we limit our scope to the subclass of histogram-based descriptors.
Representations used for shape matching are often referred to as 3D shape descriptors, and they usually differ substantially from those intended for 3D object rendering and visualization [3]. Shape descriptors aim at encoding geometrical and topological properties of an object in a discriminative and compact manner. The diversity of shape descriptors ranges from 3D moments to shape distributions, from spherical harmonics to ray-based sampling, from point clouds to voxelized volume transforms [1, 2, 4-7]. In this work, inspired by histogram-based 3D shape descriptors [8-12], we propose a density-based approach that applies to local geometrical features of arbitrary dimension. Our interest in histogram-based 3D shape descriptors stems from their generality and their simplicity. They are global descriptors based on sets of local measurements, and they have been shown to be effective in classifying shapes into broad categories [2]. Our objective is to show that, in addition to their categorization capability, they also have satisfactory retrieval performance.
Any histogram-based 3D shape descriptor must face the problem of estimating the histogram from any given mesh composed of triangles usually with arbitrary forms and sizes. In the previous histogram-based approaches, the surface samples are either chosen as the centers of gravity of the triangles or obtained by randomly sampling several points from the surface. A single sample from each triangle may not adequately represent the mesh. The random sampling of the surface may compensate for the nonuniform distribution of triangles, provided that a sizeable number of surface points is taken. Although the random sampling approach proves to be useful for computing histograms of scalar features [10], it is not practical in the multidimensional case due to the curse of dimensionality: the number of samples required to fill in the multivariate histogram bins increases exponentially as dimensionality increases [13], resulting in a significant extra computational load which is not affordable for most applications such as retrieval.
Our density-based framework makes a more effective use of each triangle and also takes care of the nonuniformity of their areas and orientations without resorting to expensive random sampling. First, we do not use samples but exploit the information in the whole triangle area using an integration scheme, as described in Section 3.3. Second, we resort to nonparametric kernel density estimation (KDE) with rule-based bandwidth parameter assignment [13, 14]. In other words, local geometric information emanating from each mesh triangle contributes to the geometric feature density by the intermediary of a kernel. Thus, local evidence about surface shape is accumulated at targeted density points to result in a global shape description. Third, we use a Gaussian kernel. Since the Gaussian density is completely determined by its first two moments, we only need to estimate the mean and the variance of the feature for each triangle. For certain cases, these moments can be approximated very accurately by making use of the geometry of a triangle in 3D space. The choice of the Gaussian kernel brings in the additional advantage of alleviating the computational burden of calculating large sums of Gaussians, as occur in the proposed set of descriptors, by enabling the use of the efficient fast Gauss transform (FGT) [15, 16]. Thus, the main contribution of our work is to propose an analytical framework for the extraction of 3D descriptors from local surface features that characterize the object geometry. This framework computes probability densities of local features instead of their conventional histograms. Here, we interpret histograms and densities in a broad sense: any descriptor that uses an accumulator scheme of measured quantities qualifies as a histogram-based descriptor. As a byproduct, we also introduce some novel local features.
The rest of the paper is structured as follows. In Section 2, we provide an overview of histogram-based 3D shape descriptors. Section 3 introduces the local geometric features we have considered and describes the KDE-based computational framework. In Section 4, we illustrate the retrieval performance of our method in comparison to other equivalent or similar histogram-based descriptors [8-12]. In Section 5, we draw conclusions and discuss further directions in density-based 3D shape descriptors.

PREVIOUS WORK ON 3D SHAPE DESCRIPTORS
There are two main paradigms of 3D shape description, namely, graph-based and vector-based. Graph-based representations are more elaborate and complex, and harder to obtain, but they represent shape properties in a more faithful and intuitive manner. Shock graphs [17], multiresolution Reeb graphs [6, 18, 19], and skeletal graphs [20] are methods that fall in this category. However, they do not generalize easily, and hence they are not very convenient to use in unsupervised learning, for example, to search for natural shape classes in a database. Vector-based representations, on the other hand, are more easily computed. Although they do not necessarily lend themselves to plausible topological visualizations, they can be naturally employed in both supervised and unsupervised classification tasks. Typical vector-based representations are extended Gaussian images [8, 9], cord and angle histograms [11], 3D shape histograms [21], spherical harmonics [7, 22-24], and shape distributions [10]. In this work, we are exclusively interested in histogram-based 3D shape descriptors, which constitute a particular branch of vector-based representations. In the following, we provide a brief overview of histogram-based descriptors. References [1, 2, 4] also provide excellent surveys.
In [11], Paquet and Rioux present cord and angle histograms for matching 3D objects. A "cord," which is actually a ray, joins the barycenter of the mesh with a triangle center. The histograms of the length and of the angles of these rays (with respect to a reference frame) are used as the 3D shape descriptors. Although automatic determination of a canonical reference frame for 3D meshes is still not totally solved [7], the common practice is to obtain the eigendecomposition of the covariance matrix of the surface points. The covariance matrix itself can be computed using the mesh vertices, the triangle centers, or in a "continuous" way as described in [7]. The resulting eigenvectors, which are the orthogonal directions along which the mesh has maximal spread, are taken as a reference frame. Notice that the eigendirections may not necessarily correspond to the "natural" pose of the object; however, they can serve as a canonical reference frame. In conclusion, Paquet and Rioux [11] consider the shape descriptors consisting of the ray length and the relative ray angles with respect to the largest two eigenvectors. One shortcoming of all such approaches that reduce the triangles to their center points is that they do not take into consideration the size and shape of the mesh triangles. First, because triangles of any size have equal weight in the final shape distribution; second, because the triangle shapes can be arbitrary, so that the center may not represent adequately the impact of the triangle on the shape distribution.
In the shape distributions approach, Osada et al. [10] use a collection of shape functions, which are geometrical quantities estimated by a random sampling of the surface of the 3D object. Their shape functions are defined as the distance of surface points to the center of mass of the model (D1), the distance between two surface points (D2), the area of the triangle defined by three surface points (D3), the volume of the tetrahedron defined by four surface points (D4), and so on. The descriptors of the object are then defined as the histograms of these shape functions. The randomization of the surface sampling process improves the estimation over Paquet and Rioux's approach [11], since a more representative and dense set of surface points is used. Obviously, the histogram accuracy can be controlled with the sample size.
Ankerst et al. use shape histograms for the purpose of molecular surface analysis [21]. A shape histogram is defined by partitioning the 3D space into concentric shells and sectors around the center of mass of a 3D model. The histogram is constructed by accumulating the surface points in the bins (in the form of shells, sectors, or both) based on a nearest-neighbor rule. Ankerst et al. [21] illustrate the shortcomings of Euclidean distance to compare two shape histograms and make use of a Mahalanobis-like quadratic distance measure taking into account the distances between histogram bins.
Extended Gaussian images (EGI), introduced by Horn [8], form another class of histogram-based 3D shape descriptors. An EGI consists of a spherical histogram with bins indexed by (θ_j, ϕ_k), where each bin corresponds to some quantum of the spherical azimuth and elevation angles (θ, ϕ) in the range 0 ≤ θ < 2π and 0 ≤ ϕ < π. The histogram bins accumulate the count of the spherical angles of the surface normal per triangle, usually weighted by the triangle area. Kang and Ikeuchi have extended the EGI approach by considering the normal distances of the triangles to the origin [9]. Accordingly, each histogram bin accumulates a complex number whose magnitude and phase are the area of the triangle and its signed distance to the origin, respectively. The resulting 3D shape descriptor is called complex extended Gaussian images (CEGI) [9].
In [12], Zaharia and Prêteux present the 3D Hough transform descriptor (3DHT) as a histogram constructed by accumulating surface points over planes in 3D space. Each triangle of the mesh contributes to each plane with a weight equal to the projected area of the triangle on the plane, but only if the scalar product between their normals is higher than a given threshold. Although we have not encountered in the literature a direct comparison between 3DHT and EGI, 3DHT can be considered as a generalized version of EGI, where concentric spherical shells of different radii are constructed around the object's center of mass. One can consequently conjecture that the 3DHT descriptor captures the shape information better than the EGI descriptor, as will be shown experimentally in Section 4.
An important property of a 3D shape descriptor is its invariance to similarity transformations, that is, translation (T), rotation (R), and scale (S) [1, 2, 4, 7]. In Table 1, we summarize the invariance properties of the histogram-based shape descriptors discussed above.

Local geometric features
We assume that each 3D shape is represented as a triangular mesh and that its center of mass coincides with the origin of the coordinate system. In what follows, the capital italic letter P stands for a point in 3D, the lowercase boldface letter p = (p_x, p_y, p_z) for its vector representation, n_P = (n_P,x, n_P,y, n_P,z) for the unit surface normal vector at P when P belongs to a surface M ⊂ R^3, and ⟨·, ·⟩ for the usual dot product.
We define a local geometric feature as a mapping S from the points of a surface M ⊂ R^3 into a d-dimensional space, generally a subspace of R^d. Each dimension of this space corresponds to a specific geometric property that can be calculated at each point of the surface. For example, the distance of a surface point to the center of the 3D shape is a one-dimensional (d = 1) geometric feature, while the mesh triangle normal n_P is a three-dimensional feature vector (d = 3). In this work, we consider three different multidimensional local geometric features that we describe in the sequel.
The radial feature S_r at a point P is a 4-tuple defined as

S_r(P) = (r_P, r̂_P,x, r̂_P,y, r̂_P,z)  with  r̂_P,x = p_x / r_P,  r̂_P,y = p_y / r_P,  r̂_P,z = p_z / r_P,     (1)

where r_P = ||p|| is the distance of the point P to the origin. Accordingly, S_r consists of a magnitude component r_P measuring the distance of the point P to the origin, and a direction component r̂_P = (r̂_P,x, r̂_P,y, r̂_P,z) that gives the orientation of the point P (see Figure 1). Observe that we can also write S_r as S_r(P) = (r_P, r̂_P). The direction component r̂_P is a three-dimensional vector with unit norm; hence it lies on the unit sphere. The tangent plane-based feature S_t at a point P is a 4-tuple defined as

S_t(P) = (d_t,P, n_P,x, n_P,y, n_P,z)  with  d_t,P = r_P ⟨r̂_P, n_P⟩.     (2)

Similar to the S_r feature, S_t has a magnitude component d_t,P, which stands for the distance of the tangent plane at P to the origin, and a direction component n_P = (n_P,x, n_P,y, n_P,z) (see Figure 1). Thus, we may write S_t(P) = (d_t,P, n_P). The normal n_P is a unit-norm vector by definition and lies on the unit sphere.
The cross-product feature S_c aims at encoding the relationship between the former two features, namely, the radial feature S_r and the tangent plane-based feature S_t. To this end, we define S_c at a point P as

S_c(P) = (r_P, c_P,x, c_P,y, c_P,z) = (r_P, c_P)  with  c_P = r̂_P × n_P.     (3)

In much the same way as in S_r and S_t, S_c is decoupled into a magnitude component r_P and a direction component c_P.
Notice, however, that c_P is not a unit-norm vector unless the angle between the radial direction r̂_P and the normal direction n_P is π/2. Both r̂_P and n_P being unit-norm vectors, the norm of c_P is less than or equal to unity, and hence c_P lies inside the unit ball. The local geometric features presented above and their invariance properties are summarized in Table 2.
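As a concrete illustration of these definitions, the following sketch (in Python with NumPy; the function name and signature are ours, not part of the paper) evaluates the three 4-tuples at a single surface point with a known unit normal:

```python
import numpy as np

def local_features(p, n):
    """Radial, tangent plane-based, and cross-product features at a surface point.

    p : (3,) coordinates of the point P (mesh assumed centered at the origin)
    n : (3,) unit surface normal at P
    """
    r = np.linalg.norm(p)        # r_P: distance of P to the origin
    r_hat = p / r                # radial direction, unit norm
    d_t = np.dot(p, n)           # d_t,P = r_P <r_hat, n_P>: tangent-plane distance (signed)
    c = np.cross(r_hat, n)       # cross-product direction, norm <= 1
    S_r = np.concatenate(([r], r_hat))
    S_t = np.concatenate(([d_t], n))
    S_c = np.concatenate(([r], c))
    return S_r, S_t, S_c
```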

Kernel density estimation
Given a set of observations {s_k}, k = 1, ..., K, for a random variable (scalar or vector) S, the kernel approach estimates the probability density of S in its most general form as

f_S(s) = Σ_{k=1}^{K} w_k |H_k|^{-1/2} K(H_k^{-1/2}(s - s_k)),     (4)

which requires the choice of a kernel function K(·), the setting of the bandwidth parameters {H_k}, k = 1, ..., K, and the assignment of observation weights w_k. We compute the probability density values of a certain local geometric feature S from a set of observations {s_k}, k = 1, ..., K. We assume that the 3D shape is represented as a triangular mesh consisting of K triangles. Thus, we can obtain an observation s_k from each of the triangles in the mesh, as will be explained in Section 3.3. Since, in general, the mesh is made up of nonuniformly sized triangles, the data should be weighted accordingly. A natural choice for the importance weight w_k of a data point s_k is the ratio of the kth triangle area to the total surface area, yielding Σ_{k=1}^{K} w_k = 1. It is known that the particular functional form of the kernel does not significantly affect the accuracy of the estimator [14]. The Gaussian kernel has become a popular choice, first because it lends itself more easily to asymptotic error analysis [14], and second, because of the existence of efficient algorithms to calculate large sums of Gaussians, such as the fast Gauss transform (FGT) already mentioned in the introduction [15, 16]. Actually, FGT is the dominant reason why we choose the Gaussian kernel, since computational efficiency is an important requirement for 3D object retrieval [1, 2] (see Section 3.6 for details).
The setting of the bandwidth parameters {H_k}, k = 1, ..., K, is critical for an accurate kernel density estimation [14, 25]. For the Gaussian kernel, the bandwidth matrix H_k simply corresponds to the feature covariance matrix. For setting/estimating the bandwidth parameters, there exist several guidelines and computational methods with varying complexity [14, 25]. We discuss different alternatives in Section 3.4. The probability density function f_S(s), when computed over predefined target points using (4), results in the shape descriptor sought for a given triangular mesh. The methodology that we employ to choose the target points for each specific feature is explained in Section 3.5.
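For reference, a direct (non-FGT) evaluation of such a weighted Gaussian KDE at a set of target points can be sketched as follows; a single bandwidth matrix H is assumed, which corresponds to the mesh- or database-level settings of Section 3.4, and the function name is ours:

```python
import numpy as np

def kde_descriptor(obs, weights, targets, H):
    """Weighted Gaussian KDE evaluated at the targets by the direct O(KN) sum.

    obs     : (K, d) feature observations s_k, one per mesh triangle
    weights : (K,)  relative triangle areas w_k, summing to 1
    targets : (N, d) target points t_n
    H       : (d, d) bandwidth (covariance) matrix of the Gaussian kernel
    Returns the (N,) vector of density values f_S(t_n), i.e., the shape descriptor.
    """
    d = obs.shape[1]
    Hinv = np.linalg.inv(H)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(H))
    diff = targets[:, None, :] - obs[None, :, :]            # (N, K, d) differences
    mahal = np.einsum('nkd,de,nke->nk', diff, Hinv, diff)   # squared Mahalanobis distances
    return norm * (np.exp(-0.5 * mahal) @ weights)          # weighted sum of Gaussians
```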

Feature calculation
Given a d-dimensional local feature S = (S_1, ..., S_d), the observation s_k can be obtained from the mesh triangle T_k by evaluating the value of S at the barycenter of the triangle. However, since the mesh triangles have, in general, arbitrary shapes, the feature value at the barycenter may not be the most representative one. The shape of the triangle should be in some way taken into account in order to reflect the local feature characteristics more faithfully. The expected value E{S | T} of the local feature over the triangle T is more informative than the feature value sampled only at a single point, the barycenter of the triangle.
Consider T as an arbitrary triangle in 3D space with vertices A, B, and C represented by p_A, p_B, and p_C, respectively (see Figure 2). By setting e_1 = p_B − p_A and e_2 = p_C − p_A, we can obtain a parametric representation for a point P inside the triangle T as p = p_A + x e_1 + y e_2, where the two parameters x and y satisfy the constraints x, y ≥ 0 and x + y ≤ 1. We assume that the point P is uniformly distributed inside the triangle T. Thus, the expected value of the ith component of S, denoted by E{S_i | T}, is given by

E{S_i | T} = ∫∫_Ω S_i(x, y) f(x, y) dx dy,     (5)

where S_i(x, y) is the feature value at (x, y) and f(x, y) is the probability density function of the pair (x, y) over the domain Ω = {(x, y) : x, y ≥ 0, x + y ≤ 1}. Accordingly, f(x, y) = 2 for (x, y) ∈ Ω and zero otherwise. The integration is performed over the domain Ω. To approximate (5), we apply Simpson's 1/3 numerical integration formula [26]. We avoid the arbitrariness in vertex labeling by considering the three permutations of the labels A, B, and C. This yields three approximations, which are in turn averaged to yield (6). Equation (6) boils down to taking a weighted average of feature values calculated at 9 points on the triangle.
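The sketch below approximates E{S | T}, as well as the second-order moments needed in Section 3.4, by uniform random sampling over the triangle; it is a simple stand-in for the 9-point Simpson scheme of (6), not the scheme itself, and the feature callable is a placeholder for one of the position-dependent features of Section 3.1 (normal-dependent components are constant over a planar triangle):

```python
import numpy as np

def triangle_feature_moments(pA, pB, pC, feature, n_samples=64, rng=None):
    """Approximate E{S | T} and E{S S^T | T} for a triangle by uniform sampling."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    flip = x + y > 1.0                               # fold samples back into x + y <= 1
    x[flip], y[flip] = 1.0 - x[flip], 1.0 - y[flip]
    pts = pA + np.outer(x, pB - pA) + np.outer(y, pC - pA)   # p = p_A + x e_1 + y e_2
    vals = np.array([feature(p) for p in pts])       # (n_samples, d) feature values
    mean = vals.mean(axis=0)                         # approximates E{S | T}
    second = vals.T @ vals / n_samples               # approximates E{S S^T | T}
    return mean, second
```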

Bandwidth selection
There are three levels of analysis at which the parameters in the bandwidth matrix H_k involved in KDE can be chosen (see (4) in Section 3.2).
(1) Triangle level: this option allows a distinct bandwidth parameter for each triangle in the mesh. In principle, this choice is very flexible since it does not make any assumption about the shape of the kernel function and hence about the shape of the kth triangle. In general, finding a KDE bandwidth matrix specific to each observation is a difficult problem [25]. For the Gaussian kernel, however, estimation of the bandwidth matrix H_k reduces to the estimation of the feature covariance matrix. The moment formula in (5) and its numerical approximation in (6) can directly be used for moments of any order; for example, the (i, j)th component h_ij of H_k is computed as the covariance h_ij = E{S_i S_j | T_k} − E{S_i | T_k} E{S_j | T_k}.

(2) Mesh level: the second option is to use a fixed bandwidth matrix for all triangles in a given mesh, but different bandwidths for different meshes. In this case, the bandwidth matrix for a given feature can be obtained from its observations using Scott's rule of thumb [14], H = K^(-2/(d+4)) C, where d is the dimension of the feature, K is the number of observations, and C is the estimate of the feature covariance matrix computed with the weight w_k associated to each observation. Scott's rule of thumb is proven to provide the optimal bandwidth in terms of estimation error when the kernel function and the unknown density are both Gaussian. Although there is no guarantee that the feature distributions are Gaussian, Scott's rule of thumb is still used for its simplicity.

(3) Database level: in the last option, the bandwidth parameter is fixed for all triangles and meshes, that is, H_k = H. Setting the bandwidth at database level has the implicit effect of smoothing the resulting densities. In this case, we estimate the bandwidth parameters from a representative subset of the database by averaging the Scott bandwidth matrices over the selected meshes.
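A minimal sketch of the mesh-level setting follows; the Scott factor K**(-2/(d+4)) and the area-weighted covariance follow the description above, while the way the weights enter the estimate (only through the covariance, here) is our assumption rather than a formula reproduced from the paper:

```python
import numpy as np

def scott_bandwidth(obs, weights):
    """Mesh-level bandwidth matrix H via Scott's rule of thumb.

    obs     : (K, d) per-triangle feature observations s_k
    weights : (K,)  relative triangle areas w_k, summing to 1
    """
    K, d = obs.shape
    mean = weights @ obs
    centered = obs - mean
    C = (centered * weights[:, None]).T @ centered   # area-weighted covariance estimate
    return K ** (-2.0 / (d + 4)) * C                 # Scott scaling of the covariance

# Database-level setting (sketch): average the per-mesh Scott matrices over a
# representative subset of the database, e.g.
#   H_db = np.mean([scott_bandwidth(o, w) for o, w in training_meshes], axis=0)
# where training_meshes is a hypothetical list of (observations, weights) pairs.
```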

Choice of the targets
Targets are defined as the points at which the feature density functions are explicitly calculated. The density values computed at these targets constitute the 3D shape feature vector. Selection of target points must result in parsimonious yet discriminative descriptors. For single-dimensional features, it suffices to uniformly sample the density function within its dynamic range. However, the multidimensional features S_r, S_t, and S_c, which consist of magnitude and direction components, require more attention. We denote the target size by N_mag for the magnitude component and by N_dir for the direction component. The target points for these multidimensional features are then obtained by the Cartesian product of the two sets, yielding an overall target set size of N = N_mag × N_dir. The magnitude components of S_r and S_c are uniformly quantized in the interval [0, r_max], while that of S_t is quantized in the interval [0, d_t,max]. The setting of r_max and d_t,max is discussed in Section 4.2. The direction components of the S_r and S_t features, namely r̂_P and n_P, lie on the unit sphere. To complete the design of the target points, following [12], we consider an octahedron circumscribed by the unit sphere and we subdivide each of its 8 triangles into four, twice, by radially projecting the subdivided triangles back to the surface of the sphere. As targets of the direction components of S_r and S_t, we select the barycenters of the resulting 128 triangles, 16 per each of the 8 faces of the octahedron. This leads to a uniform partitioning of the sphere, as shown in Figure 3.
The S_c feature has a direction component c_P with nonunit norm, which lies within the unit ball. For the target set of the direction component c_P, we thus similarly consider octahedra, but circumscribed by spheres of various radii. We take four such octahedra within spheres of radius 0.25, 0.5, 0.75, and 1. We subdivide the two inner octahedra once, each yielding 32 targets, and the two outer octahedra twice, each yielding 128 targets. This gives a total of N_dir = 320 regularly spaced targets for the c_P-component of the S_c feature. The inner spheres have sparser targets to balance out the target densities of the outer spheres.
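The construction of the spherical direction targets can be sketched as follows; whether the triangle barycenters are re-normalized onto the sphere is an implementation choice, and the 320 targets for c_P would be obtained by calling this routine with levels=1 for the two inner radii and levels=2 for the two outer radii, then scaling the results by 0.25, 0.5, 0.75, and 1:

```python
import numpy as np

def sphere_targets(levels=2):
    """Direction targets from recursive octahedron subdivision (8 * 4**levels targets)."""
    v = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                  [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
    faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
    tris = [(v[a], v[b], v[c]) for a, b, c in faces]
    for _ in range(levels):
        new = []
        for a, b, c in tris:
            # edge midpoints, projected radially back onto the unit sphere
            ab = (a + b) / np.linalg.norm(a + b)
            bc = (b + c) / np.linalg.norm(b + c)
            ca = (c + a) / np.linalg.norm(c + a)
            new += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        tris = new
    bary = np.array([(a + b + c) / 3.0 for a, b, c in tris])   # triangle barycenters
    return bary / np.linalg.norm(bary, axis=1, keepdims=True)  # project onto the sphere
```

For levels=2, the routine returns the 128 direction targets used for r̂_P and n_P.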

Computational complexity of KDE
The computational complexity of KDE using (4) directly is O(KN), where K is the number of observations (the number of triangles in our case) and N is the number of density evaluation points, that is, targets. For applications such as content-based retrieval, the O(KN) complexity is prohibitive. To give an example, on a Pentium 4 PC (2.4 GHz CPU, 2 GB RAM) and for a mesh of 130,000 triangles, the direct evaluation of the S_r-descriptor (1024-point pdf) takes 125 seconds. However, when the kernel function in (4) is chosen as Gaussian, we can use the fast Gauss transform (FGT) [15, 16] to reduce the computational complexity by two orders of magnitude. For example, with FGT, the S_r-descriptor computation takes only 2.5 seconds. FGT is an approximation scheme enabling the calculation of large sums of Gaussians within reasonable accuracy and reducing the complexity down to O(K + N). In our 3D shape description system, we have used an improved version of FGT implemented by Yang et al. [16].
For the sake of completeness, we provide the conceptual guidelines of the FGT algorithm (see [15, 16] for mathematical and implementation details). FGT is a special case of the more general fast multipole method [15], which trades off computational simplicity for an acceptable loss of accuracy. The basic idea is to cluster the data points and target points using appropriate data structures and to replace the large sums with smaller ones that are equivalent up to a given precision. In the case of FGT, each exponential in the sum is shifted and expanded into a truncated Hermite series in O(K) operations. The gain in complexity is achieved by avoiding the computation of every Gaussian at every evaluation point, unlike the direct approach, which has O(KN) complexity. The accuracy can be controlled by the truncation order. Truncated Hermite series are constructed about a small number of cluster centers formed by the target points; the series are shifted to the target cluster centers and then evaluated at the N targets in O(N) operations. Since the two sets of operations are disjoint, the total complexity of FGT becomes O(K + N).

Flow diagram of the algorithm
We summarize below the proposed algorithm to obtain a density-based 3D shape descriptor.
(1) For a chosen local feature S, specify a set of targets t_n, n = 1, ..., N.
(2) Normalize the 3D triangular mesh M = ∪_{k=1}^{K} T_k according to the invariance requirements of S.
(3) For each mesh triangle T_k, calculate its feature value s_k using (6) and its weight w_k.
(4) Set the bandwidth parameters H_k according to the strategy chosen among the three options described in Section 3.4.
(5) For each target t_n, n = 1, ..., N, evaluate the local feature density f_S(t_n) using (4).
(6) Store the resulting density values f_S(t_n) in the shape descriptor f_S = [f_S(t_1), ..., f_S(t_N)].

Note that the descriptors corresponding to L different local features S_1, ..., S_L can be concatenated to obtain a combined descriptor f_S1,...,SL = [f_S1, ..., f_SL]. Figure 4 depicts the flow diagram of the algorithm when the bandwidth parameters are set at database level. Alternatively, in the triangle or mesh level setting, a bandwidth matrix is to be computed for each triangle or for the entire mesh, respectively. Note that in Figure 4, we assume that the mesh M has already undergone a pose and/or scale normalization step depending on the missing invariance properties of the local feature S chosen.
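The whole pipeline can be sketched end to end as follows, reusing the kde_descriptor function from the earlier sketch; for brevity, the per-triangle feature value is taken at the barycenter rather than through the 9-point rule of (6), and the direct KDE sum stands in for FGT:

```python
import numpy as np

def density_descriptor(vertices, triangles, feature, targets, H):
    """Sketch of steps (1)-(6): mesh assumed already normalized for the chosen feature.

    vertices  : (V, 3) vertex coordinates
    triangles : (K, 3) vertex indices of the mesh triangles T_k
    feature   : callable (barycenter, normal) -> d-dimensional feature value s_k
    targets   : (N, d) target points t_n
    H         : (d, d) bandwidth matrix (database-level setting)
    """
    tri = vertices[triangles]                                   # (K, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    weights = areas / areas.sum()                               # w_k, summing to 1
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    bary = tri.mean(axis=1)                                     # triangle barycenters
    obs = np.array([feature(b, n) for b, n in zip(bary, normals)])  # observations s_k
    return kde_descriptor(obs, weights, targets, H)             # density values f_S(t_n)
```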

EXPERIMENTAL RESULTS
In this section, we illustrate the performance of the proposed shape descriptors in 3D retrieval applications. When a query model is presented to the 3D object database, its descriptor is calculated and then compared to all the stored descriptors using a distance function. The outcome is a set of database models sorted in increasing distance. The models at the top of the list are expected to resemble the queried model more than those at the bottom of the list.
We have experimented on two different 3D model databases: the Princeton Shape Benchmark (PSB) [5] and the Sculpteur Database (SCUdb) [6, 27]. Both databases consist of objects described as triangular meshes, though they differ substantially in terms of content and mesh quality. PSB is a publicly available database containing a total of 1814 synthetic models, categorized into general classes such as animals, humans, plants, household objects, tools, vehicles, buildings, and so forth. An important feature of the database is the availability of two equally sized sets. One of them is a training set (90 classes) reserved for tuning the parameters involved in the computation of a particular shape descriptor, and the other is reserved for testing purposes (92 classes). By contrast, SCUdb is a private database containing over 800 models corresponding mostly to scanned archeological objects residing in museums [6, 27]. Presently, 513 of the models are classified into 53 categories with comparable set populations, which include utensils of ancient times (e.g., amphorae, vases, bottles, etc.), pavements, and artistic objects such as human statues (parts or as a whole), figurines, and moulds. The database has been augmented with artificially generated 3D objects such as spheres, tori, cubes, and cones in order to build a set of simple, well-controlled classes. The meshes in SCUdb are highly detailed and reliable in terms of connectivity and orientation of triangles. To give an idea of the significant differences between PSB and SCUdb, we can quote average mesh resolution figures. The average number of triangles per mesh is 175,250 in SCUdb and 7,460 in PSB, corresponding to a ratio of 23. In terms of vertices, SCUdb meshes contain 87,670 vertices on the average, while for PSB this number is 4,220. Furthermore, the average triangle area relative to the total mesh area is 33 times smaller in SCUdb than in PSB.

Evaluation tools
The most commonly used statistics for measuring the performance of a shape descriptor in a content-based retrieval application are summarized below [5].

(i) Precision-recall curve
For a query q that is a member of a certain class, precision (vertical axis) is the ratio of the relevant matches K_q (matches that are within the same class as the query) to the number of retrieved models K_ret, and recall (horizontal axis) is the ratio of relevant matches K_q to the size of the query class C_q, that is, Precision = K_q / K_ret and Recall = K_q / C_q.

(ii) Nearest neighbor (NN)

The percentage of the first-closest matches that belong to the query class.
(iii) First-tier and second-tier

First-tier (FT) is the recall when the number of retrieved models is the same as the size of the query class, and second-tier (ST) is the recall when the number of retrieved models is two times the size of the query class.
(iv) E-measure

This is a composite measure of the precision and recall for a fixed number of retrieved models, for example, 32, based on the intuition that a user of a search engine is more interested in the first page of query results than in later pages. The E-measure is given by E = 2 / (1/Precision + 1/Recall).

(v) Discounted cumulative gain (DCG)

A statistic that weights correct results near the front of the list more than correct results later in the ranked list, under the assumption that a user is less likely to consider elements near the end of the list. Specifically, the ranked list of retrieved objects is converted to a list L, where an element L_k has value 1 if the kth object in the ranked list is in the same class as the query and otherwise has value 0. Discounted cumulative gain DCG_k is then defined as DCG_1 = L_1 and DCG_k = DCG_{k-1} + L_k / log_2(k) for k > 1. The final DCG score for a query q is obtained by setting k = K_max, where K_max is the total number of objects in the database, and normalizing DCG_{K_max} by the maximum possible DCG, which would be achieved if the first C_q retrieved elements were in the class of the query q (C_q is the size of the query class). Thus DCG reads as DCG = DCG_{K_max} / (1 + Σ_{j=2}^{C_q} 1/log_2(j)).

(vi) Normalized discounted cumulative gain

This is a very useful statistic based on averaging the DCG values of a set of algorithms on a particular database. Normalized DCG (NDCG) gives the relative performance of an algorithm with respect to the other ones. A negative value means that the performance of the algorithm is below the average; similarly, a positive value indicates above-average performance. Let DCG(A) be the DCG of a certain algorithm A and let DCG(avg) be the average of the DCG values of a series of algorithms on the same database; then the NDCG for algorithm A is defined as NDCG(A) = DCG(A) / DCG(avg) − 1.

All these quantities are normalized within the range [0, 1] (except NDCG), and higher values reflect better performance.
In order to give the overall performance of a shape descriptor on a database, the values of a statistic for each query are averaged to yield a single performance figure. The retrieval statistics presented in the sequel are obtained using the utility software included in PSB [5].
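For illustration, the DCG and NDCG statistics defined above can be computed as in the following sketch (the function names and the binary-relevance input are ours; in practice we use the PSB utility software):

```python
import numpy as np

def dcg_score(relevance, class_size):
    """Normalized DCG of one query: relevance[k] = 1 if the (k+1)th retrieved
    model belongs to the query class, 0 otherwise; class_size is C_q."""
    rel = np.asarray(relevance, dtype=float)
    discount = np.ones_like(rel)
    discount[1:] = 1.0 / np.log2(np.arange(2, len(rel) + 1))   # DCG_1 = L_1, then L_k / log2(k)
    dcg = np.sum(rel * discount)
    ideal = 1.0 + np.sum(1.0 / np.log2(np.arange(2, class_size + 1)))
    return dcg / ideal

def normalized_dcg(dcg_algorithm, dcg_all_algorithms):
    """NDCG of one algorithm relative to the average DCG of a set of algorithms."""
    return dcg_algorithm / np.mean(dcg_all_algorithms) - 1.0
```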

Retrieval experiments
In all of our retrieval experiments, we use the Minkowski l_1 distance measure to assess the similarity between descriptors, since we have observed that this distance function gives better performance in most of the cases as compared to other distance measures such as l_2 or χ^2. We apply the following normalization to all the meshes of the database to secure RST invariance of the features. For translation invariance, the object's center of mass is translated to the origin. For scale invariance, the area-weighted average distance of the surface points to the origin is set to unity. We have observed that, with this scaling operation, the frequency of the distance of a surface point to the mesh center exceeding 2 becomes negligible. This allows us to set empirical upper limits r_max and d_t,max on the magnitude components r_P and d_t,P, respectively. Finally, to guarantee rotation and reflection invariance, we follow the "continuous" PCA approach of Vranić [7]. All the codes for our descriptors as well as for those proposed in the literature (cord and angle histograms [11], D1 and D2 shape distributions [10], EGI [8] and CEGI [9], 3DHT [12]) have been implemented in the MATLAB 7.0 (R14) environment, using the C MEX external interface for time-consuming jobs. For FGT, we have used the implementation provided by Yang et al. [16].
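A simplified sketch of this normalization is given below; it computes the center of mass, the scale, and the PCA axes from area-weighted triangle barycenters rather than with the continuous PCA of [7], and it omits the reflection disambiguation, so it should be read as an approximation of the procedure described above:

```python
import numpy as np

def normalize_mesh(vertices, triangles):
    """Approximate translation, scale, and rotation normalization of a triangular mesh."""
    tri = vertices[triangles]
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    w = areas / areas.sum()
    bary = tri.mean(axis=1)
    center = w @ bary                                   # area-weighted center of mass
    v = vertices - center                               # translation invariance
    scale = w @ np.linalg.norm(bary - center, axis=1)   # area-weighted mean radial distance
    v = v / scale                                       # scale invariance: mean distance -> 1
    b = (bary - center) / scale
    cov = (b * w[:, None]).T @ b                        # area-weighted covariance of barycenters
    _, eigvecs = np.linalg.eigh(cov)                    # eigenvalues in ascending order
    return v @ eigvecs[:, ::-1]                         # largest-spread axis first
```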
The acronyms of the descriptors we have experimented with are listed in Tables 3 and 4. They will subsequently be used in graph annotations. The details about descriptor sizes are given in the corresponding sections.
There are two alternative ways of combining descriptors: by multivariate density evaluation or by concatenating estimated univariate densities. The multivariate descriptors (Sr, St, Sc, and Sn) that we consider in our experiments are derived from the S_r, S_t, S_c, and S_n features as given in the first four rows of Table 4. Alternatively, descriptors for multiple scalar features, for example, S_ri, i = 1, ..., 4, can be estimated individually and then concatenated. For notational simplicity, we will refer to the descriptor f_A1 consisting of the density vector as the A1-descriptor; similarly, [A1, A2] will be the shorthand notation for the descriptor [f_A1, f_A2]. Note finally that the generic feature A_i can be either a vector by construction or a scalar obtained by taking a component of some other multidimensional feature.

Impact of bandwidth selection
The KDE approach critically depends upon the judicious setting of the bandwidth parameters. We tested the triangle, mesh, and database level alternatives presented in Section 3.4 on our multidimensional local features S_r, S_t, and S_c (the computationally expensive triangle-level setting was only tested for S_r). Since we have observed that the off-diagonal terms of the bandwidth matrices are negligible as compared to the diagonal terms, we use only diagonal bandwidth matrices H = diag(h_1, ..., h_d). For the mesh and database levels, we apply Scott's rule of thumb. For the triangle level, we employ the KDE toolbox developed by Ihler [28], since the available FGT implementation does not allow a different bandwidth per triangle [16]. The KDE toolbox makes use of kd-trees and reduces the computational burden considerably, though not to the extent achieved by FGT. Comparing the retrieval scores obtained with the different level settings for the Sr- and St-descriptors, we clearly observe that setting the bandwidth H at database level is more advantageous as compared to the triangle and mesh level settings. Any further results reported are therefore for the database level setting of H. In Table 6, we provide the average Scott bandwidth values obtained from PSB training meshes for the S_r, S_t, and S_c features.

Univariate versus multivariate density-based descriptors
In this section, we compare the impact of combining descriptors on the retrieval performance. As discussed before, descriptors can be combined either by evaluating the multivariate density of a vector feature or by concatenating the estimated univariate densities of its scalar components.

Comparison of density-based descriptors with their histogram-based peers
One of the motivations of this work is to show that a considerable improvement in the retrieval performance can be obtained by a more rigorous and accurate computation of shape distributions as compared to more practical ad hoc histogram approaches. Notice that we use the term "histogram-based descriptor" for any count-and-accumulate type of procedure. This way we can refer to analogous descriptors in the literature as histogram-based whenever they count and accumulate local information to obtain a global shape descriptor [8-12]. An interesting case in point is Cord and Angle Histograms (CAH) [11]. The features in CAH are identical to the individual scalar components r_P, r̂_P,x, r̂_P,y, and r̂_P,z of our S_r feature up to a parameterization. In [11], the authors consider the length of a cord (corresponding to r_P) and the two angles between a cord and the first two principal directions (corresponding to r̂_P,x and r̂_P,y). Notice that in our parameterization of S_r, we consider the Cartesian coordinates rather than the angles. In order to compare with our [Sr1,Sr2,Sr3,Sr4]-descriptor, we implemented the CAH-descriptor by also considering the histogram of the angle with the third principal direction. The resulting CAH-descriptor is thus the concatenation of one cord-length histogram and three angle histograms. Each histogram consisting of 64 bins, this leads to a descriptor of total size N = 4 × 64 = 256. The [Sr1,Sr2,Sr3,Sr4]-descriptor, again of size 256, differs from CAH in three aspects: first, it uses a different parameterization of the angle (direction) components; second, the local feature values are calculated by (6) instead of using mere barycentric sampling; third, it employs KDE instead of histogram computation. In Figure 8, we provide the precision-recall curves corresponding to CAH and [Sr1,Sr2,Sr3,Sr4] on the PSB test set and on SCUdb. The respective DCG values are 0.434 and 0.501 for PSB, and 0.681 and 0.698 for SCUdb, indicating the superior performance of our framework under identical feature sets. An additional improvement can be gained by estimating the joint density of S_r, leading to the Sr-descriptor. That is, in contrast to the concatenation of univariate densities, we directly use the joint density of S_r as a descriptor. The DCG value of the Sr-descriptor is 0.533 on PSB and 0.708 on SCUdb, one more step of improvement as compared to the concatenated univariate case [Sr1,Sr2,Sr3,Sr4] (DCG = 0.501 on PSB and DCG = 0.698 on SCUdb). Note that the performance improvement using our scheme is less impressive over SCUdb than over PSB. This can be explained by the fact that SCUdb meshes are much denser than PSB meshes in number of triangles. As the number of observations increases, the accuracies of the histogram method and KDE become comparable, and both methods result in similar descriptors. This also indicates that the KDE methodology is especially appropriate for coarser mesh resolutions, as in PSB.
A second instance of our framework outperforming its competitor is with the EGI-descriptor [2, 5, 8], which consists of binning the surface normals. The density of our S_n(P) = n_P feature is equivalent to the EGI-descriptor. There can be different choices for binning surface normals, for example, by mapping the normal of a certain mesh triangle to the closest bin over the unit sphere and augmenting that bin by the relative area of the triangle. Such an approach requires a very densely discretized unit sphere, and the resulting descriptor is not very efficient in terms of storage. In the present work, similarly to [12], we preferred the following implementation for the EGI-descriptor. First, 128 unit-norm vectors n_bin,j, j = 1, ..., 128, are obtained as histogram bin centers by octahedron subdivision, as described in Section 3.5.
Then, the contribution of each triangle T_k, k = 1, ..., K, with normal vector n_k to the jth bin center is computed as w_k if |⟨n_k, n_bin,j⟩| ≥ 0.7, and as zero otherwise (recall that w_k is the relative area of the kth triangle). The use of the absolute value is needed because some models, such as those in the PSB set, cannot provide reliable orientation information. The Sn-descriptor of the same size, that is, 128, achieves a superior DCG of 0.478 as compared to the DCG score of 0.438 for EGI on PSB (see Figure 9). For SCUdb, the DCG-performance differential is even more pronounced (DCG = 0.589 for Sn, DCG = 0.535 for EGI), noting that for low recall values (recall < 0.2), the EGI-descriptor is better than Sn (see Figure 9).

A third instance of comparison can be considered between our St-descriptor and the 3DHT-descriptor [12], since both of them use a local tangent plane parameterization. The procedure for the 3DHT-descriptor is carried out as follows. We first recall that the 3DHT-descriptor is a histogram constructed by accumulating mesh surface points over planes in 3D space. Each histogram bin corresponds to a plane P_ij parameterized by its normal distance d_t,i, i = 1, ..., N_mag, to the origin and its normal direction n_bin,j, j = 1, ..., N_dir. Clearly, there can be N_mag × N_dir such planes, and the resulting descriptor is of size N = N_mag × N_dir. We can obtain such a family of planes exactly as described in Section 3.5 and in [12]. In our experiments, we have used N_mag = 8 distance bins sampled within the range [0, 2] and N_dir = 128 uniformly sampled normal directions. This results in a 3DHT-descriptor of size N = 1024. To construct the Hough array, one first takes a plane with normal direction n_bin,j, j = 1, ..., N_dir, at each triangle barycenter m_k, k = 1, ..., K, and then calculates the normal distance of the plane to the origin by |⟨m_k, n_bin,j⟩|. The resulting value is quantized to the closest d_t,i, i = 1, ..., N_mag, and the bin corresponding to the plane P_ij is then augmented by w_k|⟨n_k, n_bin,j⟩|, provided that |⟨n_k, n_bin,j⟩| ≥ 0.7 (the value of 0.7 is suggested by Zaharia and Prêteux [12] and we have also verified its performance-wise optimality). In Figure 10, we compare the St- and the 3DHT-descriptors in terms of precision-recall curves. On PSB, the St-descriptor yields a DCG of 0.543, a worse score against the 0.577 of the 3DHT-descriptor. This can be attributed largely to the fact that the 3DHT-descriptor employs an implicit correction for normal orientations by the weighting scheme w_k|⟨n_k, n_bin,j⟩|, according to which only the normal direction n_k matters but not its orientation. Our St-descriptor does not make use of such a correction and considers the normal orientations as they are provided by the list of triangles in the mesh. Accordingly, we explain the negative performance gap between St and 3DHT by the fact that, on PSB meshes, information regarding normal orientations might be compromised. On the other hand, for SCUdb, the performance of St (DCG = 0.712) parallels that of 3DHT, noting that 3DHT remains slightly better (DCG = 0.727).
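For completeness, the thresholded normal binning used for the EGI-descriptor can be sketched as follows; the contribution rule (the relative triangle area whenever the absolute dot product with a bin direction reaches 0.7) follows the reading given above, and bin_dirs would be the 128 directions obtained by octahedron subdivision:

```python
import numpy as np

def egi_descriptor(normals, weights, bin_dirs, threshold=0.7):
    """EGI-style histogram over a fixed set of bin directions.

    normals  : (K, 3) unit triangle normals n_k
    weights  : (K,)  relative triangle areas w_k
    bin_dirs : (J, 3) unit bin-center directions n_bin_j
    """
    dots = np.abs(normals @ bin_dirs.T)                          # |<n_k, n_bin_j>|, (K, J)
    return (weights[:, None] * (dots >= threshold)).sum(axis=0)  # accumulate w_k per bin
```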

General performance comparison
In this section, we compare the descriptors that we propose (univariate, concatenated, or multivariate) first among themselves and then with various other descriptors existing in the literature.
In Table 8, we see the competition within the Sr, St, and Sc set and their various combinations. Since pairing the features results in higher dimensions (8 or 12), precluding multivariate density estimation, we use concatenations of the 4-variate densities. It is interesting to observe that the pairwise concatenations [Sr,St], [Sr,Sc], and [St,Sc], of size 2048, 3584, and 3584, respectively, increase the DCG and NN scores significantly. We can conclude that each local feature must be reporting aspects of the shape not covered by the remaining ones, despite their similarity. Furthermore, the triplet concatenation [Sr,St,Sc] of size 4608 boosts the DCG and NN performance further. We also note that, on a Pentium 4 PC (2.4 GHz CPU, 2 GB RAM), the [Sr,St,Sc]-descriptor can be computed in less than one second on the average over the PSB test set meshes, which indicates that our density-based descriptors are very time-efficient and suitable for practical online applications.
Table 9 finally summarizes the experiments conducted to compare our density-based descriptors with other histogram-based descriptors. For both databases, PSB and SCUdb, the [Sr,St,Sc]-descriptor comes out at the top in all performance fields. Furthermore, the second place is taken by a pairwise concatenation which is more storage-efficient and even more time-efficient than [Sr,St,Sc]: [Sr,St] for PSB and [St,Sc] for SCUdb.
The density-based framework not only outperforms histogram-based descriptors but also proves to be effective as compared to other, more general state-of-the-art shape descriptors. In fact, based on the scores on the PSB test set reported in [5], the [Sr,St,Sc]-descriptor has the highest DCG score among all other well-known 3D shape descriptors, as shown in Figure 11. Except for 3DHT [12] and CAH [11], all the descriptor scores shown in Figure 11 are taken from [5]. We refer the reader to [5] for brief descriptions and acronyms of these descriptors. The [Sr,St,Sc]-descriptor has a DCG value of 0.607, while the next best descriptor, the radialized extent function (REXT) [7, 24], has a DCG value of 0.601 [5]. Note also that the [Sr,St]-descriptor (DCG = 0.599) ranks third in the competition. The average REXT-descriptor size reported in [5] is 17.5 kilobytes, while for our [Sr,St,Sc]-descriptor this figure is 22 kilobytes. The average generation time for the REXT-descriptor is 2.2 seconds [5], while our [Sr,St,Sc]-descriptor can be computed in 0.9 seconds on the average on a comparable hardware configuration.

CONCLUSION
We have proposed a novel methodology to obtain 3D shape descriptors and evaluated its impact in a retrieval scenario. We have shown that shape descriptors derived as kernel density estimates of local surface features prove more advantageous compared to the count-and-accumulate-based histogram descriptors. Firstly, one main advantage accrues from the fact that our descriptors are true probability density functions of geometrical quantities defined over the model surface. Secondly, our surface sampling is not as crude as just considering triangle barycenters or as profuse as random sampling, but judiciously exploits the triangle characteristics. Thirdly, and most importantly, the KDE-based approach deals with multidimensional surface features as easily as with scalar features. The bandwidth parameters in KDE provide a more graceful control over finite sample-size and dimensionality problems, while with multivariate histograms one can only adjust the bin widths [13, 14]. The local surface information brought by multidimensional features proves to be more discriminating than that of scalar ones.
The proposed framework applies to 3D objects represented as triangular meshes, but an extension to point-cloud representations is straightforward. Concerning hidden triangles encountered in triangular "soups," we remark that we do not try to detect such degeneracies and process them as any other triangles. They introduce noise into the density estimation, but not to the extent of altering the density-based descriptor drastically. Furthermore, hidden triangles present in PSB remain in small proportion, and SCUdb models are manifold and free of hidden triangles. Our framework should be viewed as an application of kernel density estimation [13, 14] with either variable (triangle or mesh level) or fixed (database level) bandwidth parameter selection [25]. We have also demonstrated that density-based descriptors are much more discriminative in retrieval when the bandwidth parameters are set at database level as compared to the mesh or triangle level settings. We think that the database level strategy smoothes out individual shape details and emphasizes global shape properties, as appropriate for object retrieval and classification tasks, while the other two options, especially the triangle level strategy, result in an overfitting of the feature density and hamper the descriptor's discrimination ability. Furthermore, the computational advantage of density-based descriptors enabled by FGT with a database-dependent bandwidth matrix is very promising for practical online applications.
When combined together, the multivariate density-based 3D shape descriptors introduced in this work outperform the existing histogram-based techniques in the literature. The retrieval competition took place on two databases, PSB and SCUdb, which are fundamentally different in semantic content and mesh quality. In addition, the performance advantage of density-based descriptors is not limited to histogram-based competitors, as shown in the more general comparison where our [Sr,St,Sc]-descriptor reaches the top position in the category of purely 3D descriptors reported in [5]. As a side remark, based on the nearest-neighbor scores of our descriptors, we conjecture that they would also perform well in recognition applications.
In summary, a general framework using KDE has been developed that covers existing and novel descriptors. Our method enables the use of arbitrary one- or multidimensional surface features for retrieval, recognition, and classification of 3D objects. Future research will concentrate on potential improvements through decision fusion. For example, several retrievers can operate in parallel, and one can consider rank-weighted reordering of the retrieved objects. A second natural avenue of research is in the direction of second-order features. We will tackle the problem of designing second-order features that would serve as natural proxies for curvature-like quantities. Curvature is in fact difficult to work with because of the estimation inaccuracies involved in its computation. Nevertheless, it can be conjectured that the kernel-based approach, thanks to its smoothing behavior, may be useful in deriving curvature-driven 3D shape descriptors. One of our future objectives is thus to arrive at an exhaustive set of first- and second-order features and to discover the computational limits of the density-based approach. A side issue is to render the proposed descriptors more effective in discrimination and more efficient in terms of storage size by adequately sampling the local feature domains for target evaluation points. A further question that should be considered is to what extent the combination of the available features can be exploited, that is, how large the feature dimension of the multivariate densities can be.

This special issue aims to focus on emerging biometric technologies and comprehensively cover their system, processing, and application aspects.Submitted articles must not have been previously published and must not be currently submitted for publication elsewhere.Topics of interest include, but are not limited to, the following: • Fusion of biometrics Authors should follow the EURASIP JASP manuscript format described at http://www.hindawi.com/journals/asp/.Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP manuscript tracking system at http://www.hindawi.com/mts/,according to the following timetable:

Call for Papers
Data converters (ADCs and DACs) ultimately limit the performance of today's communication systems.New concepts for high-speed, high-resolution, and power-aware converters are therefore required, which also lead to an increased demand for high-speed and high-resolution sampling systems in the measurement industry.Present converter technologies operate on their limits, since the downscaling of IC technologies to deep submicron technologies makes their design increasingly difficult.Fortunately, downscaling of IC technologies allows for using additional chip area for digital signal processing algorithms with hardly any additional costs.Therefore, one can use more elaborate signal processing algorithms to improve the conversion quality, to realize new converter architectures and technologies, or to relax the requirements on the analog design.Pipelined ADCs constitute just one example of converter technology where signal processing algorithms are already extensively used.However, time-interleaved converters and their generalizations, including hybrid filter bank-based converters and parallel sigma-delta-based converters, are the next candidates for digitally enhanced converter technologies, where advanced signal processing is essential.Accurate models constitute one foundation of digital corrected data converters.Generating and verifying such models is a complex and time-consuming process that demands high-performance instrumentation in conjunction with sophisticated software defined measurements.
The aim of this special issue is to bring forward recent developments on signal processing methods for data converters.It includes design, analysis, and implementation of enhancement algorithms as well as signal processing aspects of new converter topologies and sampling strategies.Further, it includes design, analysis, and implementation of software defined measurements for characterization and modeling of data converters.
Topics of interest include (but are not limited to): • Analysis, design, and implementation of digital algorithms for data converters • Analysis and modeling of novel converter topologies and their signal processing aspects • Digital calibration of data converters • Error identification and correction in timeinterleaved ADCs and their generalizations • Signal processing for application-specific data converters (communication systems, measurement systems, etc.) • New sampling strategies • Sampling theory for data converters • Signal processing algorithms for data converter testing • Influence of technology scaling on data converters and their design • Behavioral models for converter characterization • Instrumentation and software defined measurements for converter characterization Authors should follow the EURASIP JASP manuscript format at http://www.hindawi.com/journals/asp/.Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP Manuscript Tracking System at http://www.hindawi.com/mts/,according to the following timetable:

Special Issue on
Cooperative Localization in Wireless Ad Hoc and Sensor Networks

Call for Papers
One of the major requirements for most applications based on wireless ad hoc and sensor networks is accurate node localization.In fact, sensed data without position information is often less useful.
Due to several factors (e.g., cost, size, power), only a small fraction of nodes obtain the position information of the anchor nodes.In this case, a node has to estimate its position without a direct interaction with anchor nodes and a cooperation between nodes is needed in a multihop fashion.In some applications, none of the nodes are aware of their absolute position (anchor-free) and only relative coordinates are estimated instead.
Most works reported in the literature have studied cooperative localization with the emphasis on algorithms.However, very few works give emphasis on the localization as estimation or on the investigation of fundamental performance limits as well as on experimental activities.In particular, the fundamental performance limits of multihop and anchor-free positioning in the presence of unreliable measurements are not yet well established.The knowledge of such limits can also help in the design and comparison of new low-complexity and distributed localization algorithms.Thus, measurement campaigns in the context of cooperative localization to validate the algorithms as well as to derive statistical models are very valuable.
The goal of this special issue is to bring together contributions from signal processing, communications and related communities, with particular focus on signal processing, new algorithm design methodologies, and fundamental limitations of cooperative localization systems.Papers on the following and related topics are solicited: • anchor-based and anchor-free distributed and cooperative localization algorithms that can cope with unreliable range measurements • derivation of fundamental limits in multihop and anchor-free localization scenarios Authors should follow the EURASIP JASP manuscript format at http://www.hindawi.com/journals/asp/.Prospective authors should submit an electronic copy of their complete manuscript through the EURASIP JASP Manuscript Tracking System at http://www.hindawi.com/mts/,according to the following timetable:

Figure 1: Radial and normal directions of a surface point.

Figure 2: A local basis for a triangle in 3D. With e_1 = p_B - p_A, any point of the triangle can be written as p = p_A + x e_1 + y e_2 with x, y ≥ 0 and x + y ≤ 1.
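To make this parametrization concrete, the following minimal Python sketch approximates the average of an arbitrary feature g over a triangle by evaluating g on a regular grid of interior points (x, y) with x + y ≤ 1. This is only a naive illustration of the parametrization; the function name and the grid rule are ours, and the paper derives more efficient per-triangle moment approximations.

import numpy as np

def triangle_feature_mean(p_a, p_b, p_c, g, m=8):
    # Approximate the mean of g(p) over the triangle (p_a, p_b, p_c) using
    # p = p_a + x*e1 + y*e2 with x, y >= 0 and x + y <= 1 (naive grid rule).
    e1, e2 = p_b - p_a, p_c - p_a
    total, count = 0.0, 0
    for i in range(m):
        for j in range(m - i):
            x = (i + 1.0 / 3.0) / m   # offsets keep every sample strictly inside
            y = (j + 1.0 / 3.0) / m
            total += g(p_a + x * e1 + y * e2)
            count += 1
    return total / count

For example, calling it with g(p) = np.linalg.norm(p) gives a crude approximation of the mean radial distance over the triangle.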

Figure 3: Distribution of target points over the unit sphere, obtained by subdividing an octahedron once (left: 32 points) and twice (right: 128 points).
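The point counts in Figure 3 (32 and 128) match one direction per triangular face of an octahedron after one and two 4-to-1 subdivisions (8 × 4 = 32 and 8 × 16 = 128). The sketch below builds such a direction set as normalized face centroids; this construction is our assumption for illustration and may differ in detail from the one used in the paper.

import numpy as np

def subdivide(vertices, faces):
    # One 4-to-1 midpoint subdivision of a triangle mesh, reprojected onto the unit sphere.
    vertices = list(map(np.asarray, vertices))
    cache = {}
    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in cache:
            m = (vertices[i] + vertices[j]) / 2.0
            vertices.append(m / np.linalg.norm(m))
            cache[key] = len(vertices) - 1
        return cache[key]
    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_faces

def octahedron_directions(levels):
    # Unit directions taken as normalized face centroids after `levels` subdivisions.
    vertices = [np.array(v, float) for v in
                [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
    faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
    for _ in range(levels):
        vertices, faces = subdivide(vertices, faces)
    centroids = np.array([(vertices[a] + vertices[b] + vertices[c]) / 3.0 for a, b, c in faces])
    return centroids / np.linalg.norm(centroids, axis=1, keepdims=True)

print(octahedron_directions(1).shape, octahedron_directions(2).shape)  # (32, 3) (128, 3)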

Figure 4: Flow diagram to compute a density-based 3D shape descriptor when the bandwidth is set at database level.

Figure 5: Precision-recall curves with bandwidth selection made at mesh level versus database level for the Sr-descriptor (a) and the St-descriptor (b) on the PSB training set.

Figure 9: Precision-recall curves for EGI and Sn on the PSB test set (a) and SCUdb (b).

Figure 10: Precision-recall curves for 3DHT and St on the PSB test set (a) and SCUdb (b).

Ceyhun Burak Akgül received the B.S. and M.S. degrees in electrical and electronics engineering from Boğaziçi University, Istanbul, in 2002 and 2004, respectively. He has been pursuing his Ph.D. degree jointly at Boğaziçi University and Télécom Paris (Ecole Nationale Supérieure des Télécommunications) since 2004. In the framework of his Ph.D. thesis, he is currently working on 3D shape descriptors and statistical similarity learning for object retrieval and classification. His main research interests are 2D/3D image analysis and statistical pattern recognition with applications to multimedia data.

Bülent Sankur received his B.S. degree in electrical engineering from Robert College, Istanbul, and completed his M.S. and Ph.D. degrees at Rensselaer Polytechnic Institute, NY, USA. He has been teaching at Boğaziçi University in the Department of Electrical and Electronics Engineering. His research interests are in the areas of digital signal processing, image and video compression, biometry, cognition, and multimedia systems. He held visiting positions at the University of Ottawa, the Technical University of Delft, and Ecole Nationale Supérieure des Télécommunications, Paris. He was the Chairman of ICT'96 (International Conference on Telecommunications) and of EUSIPCO'05 (the European Conference on Signal Processing), as well as Technical Chairman of ICASSP'00.

Yücel Yemez received the B.S. degree from Middle East Technical University, Ankara, Turkey, in 1989, and the M.S. and Ph.D. degrees from Boğaziçi University, Istanbul, Turkey, in 1992 and 1997, respectively, all in electrical engineering. From 1997 to 2000, he was a Postdoctoral Researcher in the Image and Signal Processing Department of Télécom Paris (Ecole Nationale Supérieure des Télécommunications). He is currently an Assistant Professor in the Computer Engineering Department at Koç University, Istanbul, Turkey. His current research is focused on various fields of computer vision and graphics.

Francis Schmitt received an engineering degree from Ecole Centrale de Lyon, France, in 1973 and a Ph.D. degree in applied physics from the University Pierre et Marie Curie, Paris VI, France, in 1979. He has been a member of Télécom Paris (Ecole Nationale Supérieure des Télécommunications) since 1973. He is currently a Full Professor in the Image and Signal Processing Department and Head of the image processing group. His main interests are in computer vision, 3D modeling, image and 3D object indexing, computational geometry, multispectral imagery, and colorimetry. He is the author or coauthor of about 150 publications in these fields.


Table 2: Local geometric features and their invariance properties (assuming that the barycenter of the surface M is at the origin).

In the kernel density estimation (KDE) setting, K : R^d → R is a kernel function, H_k is a d × d matrix composed of a set of design parameters called bandwidth parameters (also referred to as smoothing or scale parameters) for the kth observation, and w_k is the importance weight associated with the kth observation. The contribution of each data point s_k to the density function f_S(s) at a target point s is computed through the kernel function K, scaled by the matrix H_k and by the weight w_k. Thus KDE involves a data set {s_k}, k = 1, ..., K, together with the associated set of importance weights {w_k}, k = 1, ..., K.
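As a minimal sketch of how such a weighted, per-observation-bandwidth KDE could be evaluated at a fixed set of target points, the Python fragment below assumes a Gaussian kernel, diagonal bandwidth matrices H_k, and weights w_k that sum to one; the function names are ours and the paper's actual evaluation scheme may differ.

import numpy as np

def gaussian_kernel(u):
    # Isotropic Gaussian kernel K: R^d -> R evaluated at each row of u.
    d = u.shape[-1]
    return np.exp(-0.5 * np.sum(u * u, axis=-1)) / (2.0 * np.pi) ** (d / 2.0)

def kde_at_targets(samples, weights, bandwidths, targets):
    # samples    : (K, d) observations s_k
    # weights    : (K,)   importance weights w_k (assumed to sum to 1)
    # bandwidths : (K, d) diagonal entries of H_k
    # targets    : (N, d) target points at which f_S is evaluated
    density = np.zeros(len(targets))
    for s_k, w_k, h_k in zip(samples, weights, bandwidths):
        u = (targets - s_k) / h_k                           # H_k^{-1} (t - s_k) for diagonal H_k
        density += w_k * gaussian_kernel(u) / np.prod(h_k)  # scale by |H_k|^{-1}
    return density

In the density-based descriptor setting, the samples would be the per-triangle feature observations, the targets the fixed set of target points at which the descriptor is read off, and the weights could plausibly be taken proportional to triangle areas.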

Table 3: Histogram-based 3D shape descriptors and their sizes.

Table 4: Density-based 3D shape descriptors and their sizes.

Table 5: DCG values for possible bandwidth selection strategies on PSB training meshes.

The resulting descriptors and their sizes are listed in Table 4. Let A_1, A_2, ..., A_L denote L generic (one- or multidimensional) features and let f_A1, f_A2, ..., f_AL denote the corresponding density-based descriptors with N_1, N_2, ..., N_L components, respectively (N_i, i = 1, ..., L, corresponds to the number of target points at which the density of feature A_i is evaluated or, equivalently, to the size of the vector f_Ai). Square bracketing [A_1, A_2, ..., A_L] in subsequent graphs and tables indicates the concatenation of the shape descriptors [f_A1, f_A2, ..., f_AL], resulting in a vector of size N_1 + N_2 + ... + N_L.
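In descriptor space this bracketing amounts to plain vector concatenation; a trivial sketch with hypothetical component counts:

import numpy as np
# Hypothetical density-based descriptors with N_1 = 64, N_2 = 64, and N_3 = 1024 components.
f_A1, f_A2, f_A3 = np.zeros(64), np.zeros(64), np.zeros(1024)
f_concat = np.concatenate([f_A1, f_A2, f_A3])
assert f_concat.size == 64 + 64 + 1024   # size N_1 + N_2 + N_3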

Table 5 compares the DCG scores obtained with the Sr-, St-, and Sc-descriptors on the PSB training set.

Table 6: The average Scott bandwidth obtained from the PSB training meshes.
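Table 6 refers to the Scott bandwidth; for reference, the standard (unweighted) Scott rule of thumb for KDE sets one bandwidth per coordinate as h_i = sigma_i * K^(-1/(d+4)) for K observations in d dimensions. A minimal sketch follows; it may differ from the weighted variant actually applied at database level.

import numpy as np

def scott_bandwidth(samples):
    # Scott's rule of thumb: h_i = sigma_i * K**(-1/(d+4)) for K observations in d dimensions.
    K, d = samples.shape
    sigma = samples.std(axis=0, ddof=1)
    return sigma * K ** (-1.0 / (d + 4))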
The concatenated univariate descriptors thus have size N = 4 × 64 = 256. For the multivariate density descriptors Sr and St, recall that N_dir = 128, and for Sc, N_dir = 320 (see Section 3.5). With N_mag chosen equal to 8 in all cases, the size of the Sr- and St-descriptors is N = 8 × 128 = 1024 and the size of the Sc-descriptor is N = 8 × 320 = 2560. Figures 6 and 7, together with Table 7, explicitly show that the multivariate density-based descriptors are superior to the descriptors obtained by the concatenation of univariate densities for all feature types on both databases.

Table 7: Retrieval statistics for univariate and multivariate density-based descriptors.

Table 8: DCG and NN scores for the combination of density-based descriptors.

Table 9: General performances of histogram- and density-based descriptors.
