3D point cloud registration based on a purpose-designed similarity measure
 Carlos Torre-Ferrero^{1},
 José R Llata^{1},
 Luciano Alonso^{1},
 Sandra Robla^{1} and
 Esther G Sarabia^{1}
https://doi.org/10.1186/1687-6180-2012-57
© Torre-Ferrero et al; licensee Springer. 2012
Received: 9 March 2011
Accepted: 6 March 2012
Published: 6 March 2012
Abstract
This article introduces a novel approach for finding a rigid transformation that coarsely aligns two 3D point clouds. The algorithm performs an iterative comparison between 2D descriptors, using a purpose-designed similarity measure, in order to find correspondences between two 3D point clouds sensed from different positions of a free-form object. The descriptors (named with the acronym CIRCON) represent an ordered set of radial contours that are extracted around an interest-point within the point cloud. The search for correspondences is done iteratively, following a cell distribution that allows the algorithm to converge toward a candidate point. Using a single correspondence, an initial estimate of the Euclidean transformation is computed and later refined by means of a multi-resolution approach. This coarse alignment algorithm can be used for 3D modeling and for object manipulation tasks such as "bin picking" when free-form objects are partially occluded or present symmetries.
1. Introduction
The alignment of two point clouds is quite a frequent task, both in 3D modeling and in object recognition. Similarly, the need for automating certain applications, such as computer-aided manufacturing or bin picking, has necessitated the use of 3D information about the parts being manipulated. This information can be sensed by 3D acquisition methods [1], such as laser scanners or time-of-flight cameras, which provide a range image for every different pose of the object.
Finding the rigid transformation producing a suitable alignment of the resulting point clouds, without having a previous estimate, is a problem that has been approached using different strategies [2, 3]. Although no solution has prevailed as the most accepted, algorithms based on intrinsic properties have been more widely applied due to their generality. These algorithms extract shape descriptors [4–15], curves [16, 17], structures [18, 19], or graphs [20] from both point clouds (sometimes meshes are used instead) in order to compare them. If several correspondences are found, then a coarse transformation that aligns them in a suitable way can be calculated.
On the other hand, the algorithms that use extrinsic properties will be subject to one important restriction: as they match properties that are relative to a coordinate system, the surfaces must be roughly aligned in order to establish point correspondences. Therefore, these algorithms (such as the ICP algorithm [21, 22] and its variants [23]) are used to refine that initial transformation and obtain a more precise one. Since this refinement process has been successfully achieved, the most challenging part of the 3D point cloud alignment problem is to determine the rough initial transformation.
2. CIRCON descriptor
2.1. Introduction
After reviewing the state-of-the-art, the following problems were found in the most significant alignment algorithms [2, 3]: lack of generality (they work well only with objects of a given topology); excessive computation time; problems with symmetries; poor behavior when the point clouds have low density or overlap each other only in a small region; and the need for a method (usually based on robust statistical techniques) that discards false correspondences and obtains a valid correspondence group from which to compute the Euclidean transformation. All the methods reviewed show, to a greater or lesser extent, at least two of these drawbacks.
The three characteristics to which we have given most importance when designing our alignment algorithm are: no restrictions regarding the type of objects that can be used; good performance in the presence of symmetries (which are quite common in industrial components); and good behavior when the overlap and the density of the point clouds are low. However, we have also taken into account the other problems that may occur.
One of the main drawbacks observed after analysis of the descriptors commonly used in the state-of-the-art is that, although many of them are based on geometric properties of the environment of the point-of-interest, the evaluation of their similarity does not have a direct relationship with the distance between the point clouds [4, 8–15]. Since a good alignment is characterized by a small distance between corresponding points, it would be more convenient to use a descriptor that better represents the geometry of that environment. Furthermore, the descriptors analyzed need at least three good correspondences to determine an approximate Euclidean transformation.
Another drawback associated with some descriptors is that at the end of the local matching stage, a considerable percentage of false matches can be found. This is usually caused by a descriptor with low discriminating capacity and a choice of similarity measure that is not sufficiently appropriate.
In our opinion, for the correspondence search to be effective, the descriptor used by the coarse alignment algorithm should:

Be based on the geometry in the environment of the points-of-interest.

Be highly descriptive, so that the correspondences can be adequately discriminated and no false matches appear.

Enable the use of a similarity measure based on distances between points of the cloud.

Enable the calculation of a Euclidean transformation based on a single correspondence.

Be useful for 3D modeling (alignment of two scans from two different views of the object) and for 3D object recognition (alignment of the point clouds in the scene and the model).
2.2. Descriptor construction
In order to obtain the descriptor associated with a particular point-of-interest in the cloud (let ${}^{w}p_q$ be this point), it is necessary to express the cloud points in a local coordinate system centered on ${}^{w}p_q$ and whose $Z_q$-axis is its normal vector. The $X_q$-axis is chosen so that it is perpendicular to both the $Y_w$-axis of the reference coordinate system and the normal vector at the point-of-interest. The $Y_q$-axis is then determined by the cross product of unit vectors along the $X_q$ and $Z_q$ axes. This criterion establishes a unique reference for the angles of rotation about the $Z_q$-axis (i.e., about the normal $\bar{n}_q$), which will subsequently facilitate the calculation of the Euclidean transformation associated with a correspondence.
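The local frame construction above can be sketched as follows. The perpendicularity constraints come from the text; the sign conventions (direction of the cross products) are assumptions, since the text fixes only which axes must be orthogonal:

```python
import numpy as np

def local_frame(p_q, n_q):
    """Build the local CIRCON frame at interest point p_q with normal n_q.

    Z_q is the normal; X_q is perpendicular to both the world Y-axis and
    the normal; Y_q completes a right-handed frame. Sign conventions are
    assumptions. Degenerates when n_q is parallel to the world Y-axis.
    """
    z_q = n_q / np.linalg.norm(n_q)
    y_w = np.array([0.0, 1.0, 0.0])   # Y-axis of the reference frame W
    x_q = np.cross(y_w, z_q)          # perpendicular to both y_w and z_q
    x_q /= np.linalg.norm(x_q)
    y_q = np.cross(z_q, x_q)          # unit cross product along Z_q, X_q
    # rows of R express world directions in the local frame
    R = np.vstack([x_q, y_q, z_q])
    return R, p_q

def to_local(points, R, origin):
    """Express an (N, 3) array of world points in the local frame."""
    return (points - origin) @ R.T
```

With the normal along the world z-axis, the local frame reduces to a pure translation of the world frame, which gives a quick sanity check.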
Since the sequence should be closed because it describes the environment of the pointofinterest, the first and last rows must be considered adjacent, since their elements with the same column index correspond to adjacent cells. In other words, this descriptor has the property of being cyclical.
3. The proposed similarity measure
3.1. Introduction
Unlike the similarity measures based on correlation coefficient (CC), mutual information (MI) [24, 25], joint entropy [26, 27], or others [28, 29] that have been used by the most popular coarse alignment algorithms, such as spin images, this similarity measure is based on distance between pixels and takes into account the problems of occlusion that can appear in real situations that need 3D registration or object recognition.
This similarity measure weights both the overlap and the proximity of two point clouds, which enables simultaneous evaluation of the geometric consistency of the correspondences. Although the computational cost increases with the number of correspondences evaluated, the Euclidean transformation associated with each correspondence can be calculated directly, given that the rotation around the normal needed to align the two point clouds is determined. Therefore, it is possible to determine which correspondences give rise to the best-fitting Euclidean transformation and to base the stopping criterion on their validation. In this way, the algorithm finishes when the coarse transformation satisfies the imposed end conditions (detailed in the algorithm in Section 5.3), without the need to evaluate all the points-of-interest selected in the two point clouds.
3.2. Sets of pixels
Assume two CIRCON descriptors A and B corresponding to two matched points. If both a pixel from A, a_{ ij }, and a pixel from B, b_{ ij }, represent a part of the point cloud, this pair of pixels, with indexes (i, j), is considered to be overlapped.
Taking into account that the matrix elements not belonging to the point cloud will be considered computationally as 'not-a-number' (i.e., NaN), the following sets of pixels have been defined:
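A minimal sketch of this NaN-based partition is shown below. The exact set names used by the paper are omitted in this excerpt, so the three masks here (overlapped, only in A, only in B) are illustrative:

```python
import numpy as np

def pixel_sets(A, B):
    """Partition the pixel indices of two equally sized CIRCON images.

    Cells not covered by the point cloud are NaN. Returns boolean masks:
    pixels valid in both images (overlapped), and pixels valid in only
    one image (non-overlapped). Set names are illustrative.
    """
    in_a = ~np.isnan(A)
    in_b = ~np.isnan(B)
    overlap = in_a & in_b      # pair (i, j) represents the object in both
    only_a = in_a & ~in_b      # part of the object visible only in A
    only_b = ~in_a & in_b      # part of the object visible only in B
    return overlap, only_a, only_b
```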
These sets of pixels will be taken into account in order to define the similarity measure.
3.3. Area represented by each cell
n_{ s } being the number of angular divisions.
3.4. Weight of the pixels
Using this expression, the area represented by the pixels in different columns will be taken into account to correctly weight the contribution of each pixel to the average distance in the overlapped area.
3.5. Similarity measure expression
As was explained previously, the similarity measure selected will depend on the distance and the overlap among the CIRCON images.
To calculate the average distance between the pixels of the two images, those in the overlapped area and those in the non-overlapped area will be considered separately.
which expresses the relationship between the weighted number of overlapping pixels and the weighted total number of pixels pertaining to the object (overlapping and non-overlapping).
where λ' is defined as λ' = ρ·λ, λ being a parameter whose value represents the additional distance with which non-overlapping pixels are penalized. In contrast, the parameter ρ modifies the relationship between the expected similarity value and the distance D_{ov} that produces it when the overlap is 100%.
The values for these parameters used in our experiments are ρ = 1 and λ = 1, which give a similarity value of 0.5 both when σ_{ov} = 0.5 and D_{ov} = 0 and when D_{ov} = 1 and σ_{ov} = 1.
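The exact expression of the similarity measure belongs to equations omitted from this excerpt. The following is a hypothetical reconstruction of its functional form, chosen only because it reproduces the two anchor values quoted above for ρ = λ = 1; it is a sketch, not the authors' verified formula:

```python
def similarity(D_ov, sigma_ov, rho=1.0, lam=1.0):
    """Hypothetical similarity as a function of the average overlapped
    distance D_ov and the overlap ratio sigma_ov.

    Non-overlapping pixels are penalized through lam' = rho * lam, as in
    the text. For rho = lam = 1 this gives 0.5 both at
    (D_ov=0, sigma_ov=0.5) and at (D_ov=1, sigma_ov=1), matching the
    values stated in the paper.
    """
    lam_prime = rho * lam
    return 1.0 / (1.0 + D_ov + lam_prime * (1.0 - sigma_ov) / sigma_ov)
```

Whatever its exact form, the measure must decrease as the average distance grows and as the overlap shrinks, which this sketch satisfies.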
Since, given a matching pair, the point coordinates and their corresponding normal vectors are known for both points, only one free parameter is needed to compute the rigid transformation: the rotation around the normal. This can be easily calculated using CIRCON images, since these are cyclical. By shifting the last row of the first CIRCON image (from point cloud 1) to the top and leaving the second one (from point cloud 2) fixed, the similarity measure for a rotation ρ_{ θ } can be calculated. If the last two rows are shifted to the top, the equivalent rotation will be 2ρ_{ θ }, and so on. In practice this can be implemented by means of matrix blocks so that the similarity measures for all the shifts are computed at the same time. The similarity value for a matching pair is then the maximum over all the possible rotations, and it is associated with an angle k·ρ_{ θ } (k being the number of row shifts). A preliminary analysis of this similarity measure can be found in [30].
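The cyclic-shift search can be sketched as below, where `sim` stands for any similarity function on two CIRCON images (the block-matrix vectorization described in the text is replaced here by a simple loop for clarity):

```python
import numpy as np

def best_rotation(I1, I2, sim):
    """Evaluate a matching pair over all cyclic row shifts of I1.

    Each shift k corresponds to a rotation of k * rho_theta about the
    common normal, because the CIRCON descriptor is cyclical in its row
    (angular) index. Returns (best similarity, best shift k).
    """
    n_rows = I1.shape[0]
    scores = [sim(np.roll(I1, k, axis=0), I2) for k in range(n_rows)]
    k_best = int(np.argmax(scores))
    return scores[k_best], k_best
```

As a sanity check, if image 2 is a cyclic shift of image 1 by three rows, the search recovers exactly that shift.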
4. Coarse alignment algorithm
4.1. Point-of-interest selection
The number of points extracted depends on the topology of the object, although a minimum distance between them is considered so that this number is not very high.
4.2. Correspondence search algorithm
This search algorithm is the core of the coarse alignment algorithm. It is based on an iterative search for the greatest value of the similarity measure using the array of cells, C_{1}, into which the environment of a point-of-interest in point cloud 1 is divided. The chosen stopping criterion ensures that this search is convergent, since the environment where the correspondences are searched for is progressively reduced.
The algorithm evaluates correspondences between different points of cloud 1 and a point-of-interest selected from cloud 2. The degree of validity of two matching points is determined by the similarity between their CIRCON images: I_{1x} (image of a point P_{1x} in cloud 1) and I_{2} (target image). Since a single point, P_{1x}, is extracted from each cell of the array C_{1}, in each iteration the algorithm performs as many similarity evaluations as there are 'valid' cells in the distribution C_{1} around the point of the previous iteration. Note that the points whose CIRCON images obtain a low similarity value are stored in a list of non-valid indexes, ind_{ nv }. Thus, a cell is considered 'valid' when its ratio of non-valid points, r_{ nv }, is less than a prefixed threshold τ_{ nv }. This progressively reduces the number of cells to be checked.
Therefore, the similarity value returned by the algorithm, M_{Sc}, is the highest obtained in all the iterations until the stopping condition is met. The algorithm ends when an iteration uses a starting point whose distance to one of the previously used points is less than a preset δ (in our implementation δ = ρ_{ r }/16).
As will be explained in Section 5.3, the size of the descriptors used by this search algorithm depends on the resolution level in the main algorithm. Moreover, since one of its goals must be to avoid an incorrect alignment in the presence of occlusions and symmetries of the objects, the CIRCON images will represent the entire point clouds in order to increase their descriptiveness. However, depending on the application (e.g., mixed objects), the environment size of the descriptor can be varied.
4.3. Main algorithm: selection of the most suitable transformation
For each interest-point chosen in cloud 2, P_{2y}, the search for correspondences by cells is established for n_{v} levels of resolution. The number of levels and the lowest resolution must be determined through a compromise between computation time and accuracy of the point cloud alignment, which will depend on the application.
The starting points for the first level are the interest-points chosen in cloud 1, {p_{1i}}. This first level enables the discarding of those zones (cells) of the surroundings of the chosen point, P_{1x}, where, due to their low similarity, it is unlikely that the desired correspondence will be found. Once the Correspondence Search Algorithm has found an approximate correspondence, P_{1c}, for the first level, the resolution is increased and a new search is performed around this new starting point, but with smaller cells. In this way, the search zone is reduced and then progressively refined over the next resolution levels (since only the first n_{ c } columns of the array of cells C_{1c} are used for the correspondence search, and this number is halved when the resolution is increased). When the search associated with the last resolution level converges and the similarity value M_{Sc} of the resulting correspondence is greater than a value $\tau_{M_S}(n_v)$, its corresponding Euclidean transformation, T_{c}, is calculated using Equation (22).
Then the matrix associated with the fictitious correspondence, T_{ f }, is compared with that obtained by the algorithm, T_{ c }. The execution is stopped if a distance measure for the rotation, d_{ R }, and another for the translation, d_{ t }, do not exceed their respective thresholds τ_{ R } and τ_{ t }.
where (α_{ R }, β_{ R }, γ_{ R }) are the ZYX Euler angles of the rotation matrix $R_f^T \cdot R_c$; R_{ c } being the rotation matrix obtained by the algorithm, and R_{ f } the rotation matrix associated with the fictitious correspondence.
The translation distance d_{ t } will be calculated as the RMS distance between the translation vectors, t_{ c } and t_{ f }, associated with both correspondences.
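The two validation distances can be sketched as follows. The translation distance is the RMS distance stated in the text; how the three Euler angles combine into d_R is not specified in this excerpt, so the RMS combination used here is an assumption:

```python
import numpy as np

def rotation_distance(R_c, R_f):
    """d_R from the ZYX Euler angles of R_f^T @ R_c.

    The combination of the three angles (RMS here) is an assumption;
    the text only states that the angles come from R_f^T . R_c.
    """
    R = R_f.T @ R_c
    beta = np.arcsin(-R[2, 0])              # Y angle
    alpha = np.arctan2(R[1, 0], R[0, 0])    # Z angle
    gamma = np.arctan2(R[2, 1], R[2, 2])    # X angle
    return np.sqrt((alpha**2 + beta**2 + gamma**2) / 3.0)

def translation_distance(t_c, t_f):
    """RMS distance between the two translation vectors."""
    d = np.asarray(t_c, dtype=float) - np.asarray(t_f, dtype=float)
    return np.sqrt(np.mean(d**2))
```

Both distances vanish when the computed transformation matches the fictitious one, so thresholding them with τ_R and τ_t gives the stopping test described above.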
As will be shown in the results, the solution obtained by the algorithm can be sufficiently accurate for object manipulation tasks; however, the Euclidean transformation could be refined using the ICP algorithm by taking advantage of the data provided by our algorithm about the correspondences between the points in the two clouds.
4.4. Calculation of the Euclidean transformation using a single correspondence
Once the correspondence with the highest similarity measure within the surroundings of the point-of-interest chosen for an iteration of the algorithm is found, the Euclidean transformation that coarsely aligns the point clouds can be calculated.
In the first place, it is necessary to express both point clouds in a coordinate frame whose origin is each of the two matched points and whose z-axis is aligned with the normal vectors at these points.
By convention, the x-axis of the new frame is perpendicular to the normal (the new z-axis) and to the y-axis of the original frame W, since the CIRCON images were generated in this way.
Given that the transformation matrix is 4 × 4, all the points in the following equations are expressed in homogeneous coordinates.
Suppose a correspondence between a point ${}^{W_1}P_\alpha$ of cloud 1 and a point ${}^{W_2}P_\beta$ of cloud 2. Let ${}_{W_1}^{\alpha}T$ be the transformation matrix that expresses in the interest-point frame the coordinates of a point of cloud 1 given in the original frame $W_1$. In the same way, ${}_{W_2}^{\beta}T$ performs a similar transformation for point cloud 2.
As the points ${}^{W_1}P_\alpha$ and ${}^{W_2}P_\beta$ form a correspondence, the origins and the z-axes of the new frames must coincide. To align a point ${}^{\alpha}P_i$ of point cloud 1 with its corresponding point ${}^{\beta}P_j$ of cloud 2, it is necessary to rotate cloud 1 about the z-axis of the new frame by an angle of k·ρ_{ θ } radians, where k is the number of rows by which CIRCON image 1 was rotated to achieve the similarity value associated with this correspondence.
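Composing these matrices gives the single-correspondence transformation. The sketch below assumes the composition order implied by the text (cloud 1 into the α frame, rotation about the shared z-axis, then out of the β frame into W_2); the sign of the rotation is an assumption:

```python
import numpy as np

def rz_h(theta):
    """Homogeneous 4x4 rotation about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def transform_from_correspondence(T_w1_to_alpha, T_w2_to_beta, k, rho_theta):
    """Euclidean transformation aligning cloud 1 (frame W1) with cloud 2
    (frame W2) from a single correspondence. Sketch only.

    T_w1_to_alpha maps W1 coordinates into the frame of matched point
    alpha; T_w2_to_beta does likewise for beta. k row shifts correspond
    to a rotation of k * rho_theta about the common z-axis.
    """
    return np.linalg.inv(T_w2_to_beta) @ rz_h(k * rho_theta) @ T_w1_to_alpha
```

Since the points are expressed in homogeneous coordinates, applying the resulting 4x4 matrix to every point of cloud 1 yields its coarse alignment with cloud 2.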
5. Results
In order to enable comparison of the proposed alignment algorithm with some of the existing ones, we use objects that were employed in different comparative studies [2, 23, 31].
As in the comparative study by Salvi et al. [2], an analysis of the efficiency of the algorithm will not be carried out since, as discussed there, this is very implementation-dependent (in our case Matlab^{®} was used). For this reason, we assess the performance of the algorithm in terms of effectiveness by measuring the alignment error for different free-form objects.
However, as a reference, it can be said that when the point clouds have an overlap of more than 70%, the time spent by the alignment algorithm is, in most cases, less than 5 s, while for very low overlap percentages that value can be exceeded. In the latter case the time increase is due, first, to the need to use more starting points in order to avoid false matches and, second, to the choice of these points not being sufficiently suitable (given the simplicity of the points-of-interest selection algorithm); both imply additional iterations.
The CIRCON images shown in Figure 9c correspond to the points in which the greatest similarity measure was obtained for the three resolution levels. The color map used to show these images was chosen with the aim of visualizing the similarity better.
The number of resolution levels used for the experiments was three with 12, 24, and 48 angular divisions. The number of search columns, n_{ c }, was, respectively, 8, 4, and 2 so that the number of search cells was always 96. These values were chosen for the implementation of our algorithm by testing different combinations to align synthetic point clouds. It was noted that with less than ten angular divisions the algorithm was faster at the first level, but it favored the emergence of false correspondences, which increases the computation time of the next levels and can lead to incorrect final alignment. On the other hand, we observed that 48 angular divisions for the highest resolution level are sufficient to obtain an acceptable approximate alignment. Although the maximum error on the rotation around the normal vector that could be committed is 3.75°, in practice the algorithm evaluates the similarity of so many correspondences with different orientations of the normal vector that usually the rotation error is under that value (as shown in the results).
Figure 9d shows the coarse alignment obtained for the reduced point cloud and a 3D rendering when the transformation obtained by the algorithm is also applied to the original data.
The rotation and translation errors were computed using expressions similar to those introduced in Section 5.3. In this case the transformation matrix chosen for comparison was obtained by refinement using a variant of the ICP algorithm [31].
Errors obtained by the coarse alignment algorithm for ten different objects

                   Reduced point cloud 1              Reduced point cloud 2
Object     Number of points (% of total)  Res. (mm)   Number of points (% of total)  Res. (mm)   Rot. error (degrees)   Translation error (mm)
Ducky      859 (1.61%)                    3.83        1142 (1.90%)                   3.41        2.99                   1.15
Femur      453 (1.54%)                    3.75        497 (1.25%)                    4.32        3.77                   1.94
Igea       544 (0.93%)                    4.67        419 (0.73%)                    5.16        2.49                   1.89
Fighter    635 (2.13%)                    2.37        401 (2.67%)                    2.20        1.50                   0.47
Dino       528 (4.92%)                    2.31        314 (2.45%)                    2.87        4.09                   1.00
Mole       673 (1.45%)                    3.79        652 (1.65%)                    3.41        2.34                   1.06
Isis       543 (3.50%)                    2.21        381 (1.47%)                    3.37        3.62                   2.06
Liberty    715 (5.27%)                    1.93        428 (2.47%)                    2.77        4.84                   0.66
Pitbull    153 (0.83%)                    4.99        108 (0.58%)                    5.29        2.31                   0.89
Female     549 (5.58%)                    1.90        244 (1.67%)                    3.38        3.54                   0.62
This demonstrates that the algorithm achieves good performance despite using point clouds with few points (less than 6% of the original quantity) and different resolutions, which makes it suitable for aligning low-density point clouds acquired by different devices.
6. Conclusions
We have introduced a novel descriptor (CIRCON) which represents, through a cyclical image, the geometry of the environment of a point-of-interest in the cloud. To construct the image matrix, we distribute the points in sectors which, in turn, are subdivided into cells of equal radial length. The values of the matrix elements represent the maximum z coordinate of the points contained in their corresponding cells. This is an important difference with respect to other methods that use 2D histograms, such as spin images [8], which makes those methods more vulnerable to changes in the density of the point clouds (especially when the two densities are significantly different).
We have also designed a novel similarity measure that takes into account both the distances between the pixels of the descriptors and their degree of overlap, which are not considered by other methods due to the particular characteristics of their descriptors. Furthermore, this similarity measure takes advantage of the cyclical nature of the descriptor to obtain, along with the similarity value, an index that represents the rotation around the normal at the point-of-interest. When the similarity of two descriptors is evaluated, this rotation index, the matched points, and their normal vectors can be used to calculate a Euclidean transformation matrix; that is, the two point clouds can be aligned by determining one single correspondence.
Using this similarity measure, the descriptors can be compared without having to restrict the neighborhood of the point-of-interest, so the discriminating power can be increased in order to avoid problems of misalignment when the objects have symmetries or repeated regions (problems that are not well solved by other methods, such as spin images, as is explained in [2]).
Based on this combination of descriptor and similarity measure we have designed a coarse alignment algorithm that eliminates the need to find a group of valid correspondences (which is necessary in most algorithms, including spin images [8]). One of the main advantages of this algorithm is that the stopping criterion is evaluated every time it finds a correspondence that exceeds the maximum similarity value found up to that moment. Thus, if certain conditions are met, the algorithm ends without having to find additional correspondences.
The results show that the proposed algorithm is able to find a proper alignment despite using simple criteria for selecting the points-of-interest. However, in some cases these starting points are not the most appropriate and the algorithm has to perform more iterations than necessary. Since one of the advantages of the proposed algorithm is that it can end once it finds a correspondence that has high similarity and meets the stopping criterion, an appropriate selection of the points-of-interest would very likely allow the algorithm to end after the first iterations on the majority of occasions. Furthermore, if these keypoints are obtained by new multi-scale methods [34], the support sizes can be calculated for the descriptors in both point clouds and the alignment could be carried out, as in [35], using point clouds with different scales.
Declarations
Acknowledgements
This study was carried out with the support of the Spanish CICYT project DPI2006-15313.
References
 Sansoni G, Trebeschi M, Docchio F: State-of-the-art and applications of 3D imaging sensors in industry, cultural heritage, medicine, and criminal investigation. Sensors 2009, 9:568-601. doi:10.3390/s90100568
 Salvi J, Matabosch C, Fofi D, Forest J: A review of recent range image registration methods with accuracy evaluation. Image Vis Comput 2007, 25:578-596. doi:10.1016/j.imavis.2006.05.012
 Planitz BM, Maeder AJ, Williams JA: The correspondence framework for 3D surface matching algorithms. Comput Vis Image Understand 2005, 97:347-383. doi:10.1016/j.cviu.2004.08.001
 Chua CS, Jarvis R: Point signatures: a new representation for 3D object recognition. Int J Comput Vis 1997, 25:63-85. doi:10.1023/A:1007981719186
 Gelfand N, Mitra NJ, Guibas LJ, Pottmann H: Robust global registration. In Symposium on Geometry Processing. Vienna, Austria; 2005:197-206.
 Feldmar J, Ayache N: Rigid, affine and locally affine registration of free-form surfaces. Int J Comput Vis 1996, 18:99-119. doi:10.1007/BF00054998
 Barequet G, Sharir M: Partial surface matching by using directed footprints. In Proc 12th Annual Symp on Computational Geometry. Philadelphia, USA; 1996:409-410.
 Johnson AE, Hebert M: Using spin images for efficient object recognition in cluttered 3D scenes. IEEE Trans Pattern Anal Mach Intell 1999, 21:433-449. doi:10.1109/34.765655
 Ashbrook AP, Fisher RB, Robertson C, Werghi N: Aligning arbitrary surfaces using pairwise geometric histograms. In Proc NMBIA'98. Glasgow, UK; 1998:103-108.
 Ashbrook AP, Fisher RB, Robertson C, Werghi N: Finding surface correspondence for object recognition and registration using pairwise geometric histograms. In Computer Vision - ECCV'98. Freiburg, Germany; 1998:674-686.
 Yamany SM, Farag AA: Surface signatures: an orientation independent free-form surface representation scheme for the purpose of objects registration and matching. IEEE Trans Pattern Anal Mach Intell 2002, 24:1105-1120. doi:10.1109/TPAMI.2002.1023806
 Masuda T: Automatic registration of multiple range images by the local log-polar range images. In Proc Third International Symposium on 3D Data Processing, Visualization, and Transmission. Chapel Hill, USA; 2006:216-223.
 Körtgen M, Novotni M, Klein R: 3D shape matching with 3D shape contexts. In The 7th Central European Seminar on Computer Graphics. Budmerice, Slovakia; 2003:1-12.
 Zhang D: Harmonic shape images: a 3D free-form surface representation and its application in surface matching. PhD dissertation, Carnegie Mellon University; 1999.
 Stein F, Medioni G: Structural indexing: efficient 2D object recognition. IEEE Trans Pattern Anal Mach Intell 1992, 14:1198-1204. doi:10.1109/34.177385
 Wyngaerd JV, Koch R, Proesmans M, Gool LV: Invariant-based registration of surface patches. In IEEE International Conference on Computer Vision, Volume 1. Kerkyra, Greece; 1999:301-306.
 Krsek P, Pajdla T, Hlavác V, Martin R: Range image registration driven by a hierarchy of surfaces. In 22nd Workshop of the Austrian Association for Pattern Recognition. Illmitz, Austria; 1998:175-183.
 Chen CS, Hung YP, Cheng JB: RANSAC-based DARCES: a new approach to fast automatic registration of partially overlapping range images. IEEE Trans Pattern Anal Mach Intell 1999, 21:1229-1234. doi:10.1109/34.809117
 Chua CS, Jarvis R: 3D free-form surface registration and object recognition. Int J Comput Vis 1996, 17:77-99. doi:10.1007/BF00127819
 Cheng J, Don H: A graph matching approach to 3-D point correspondences. Int J Pattern Recogn Artif Intell 1991, 5:399-412. doi:10.1142/S0218001491000223
 Besl PJ, McKay HD: A method for registration of 3-D shapes. IEEE Trans Pattern Anal Mach Intell 1992, 14:239-256. doi:10.1109/34.121791
 Chen Y, Medioni G: Object modelling by registration of multiple range images. Image Vis Comput 1992, 10:145-155. doi:10.1016/0262-8856(92)90066-C
 Rusinkiewicz S, Levoy M: Efficient variants of the ICP algorithm. In Proc Third International Conference on 3-D Digital Imaging and Modeling. Quebec City, Canada; 2001:145-152.
 Wells WM III, Viola P, Atsumi H, Nakajima S, Kikinis R: Multi-modal volume registration by maximization of mutual information. Med Image Anal 1996, 1:35-51. doi:10.1016/S1361-8415(01)80004-9
 Studholme C, Hill D, Hawkes D: An overlap invariant entropy measure of 3D medical image alignment. Pattern Recogn 1999, 32:71-86. doi:10.1016/S0031-3203(98)00091-0
 Collignon A, Maes F, Delaere D, Vandermeulen D, Suetens P, Marchal G: Automated multi-modality image registration based on information theory. In Proc International Conference on Information Processing in Medical Imaging. Ile de Berder, France; 1995:263-274.
 Studholme C, Hill DL, Hawkes DJ: Multiresolution voxel similarity measures for MR-PET registration. In Proc International Conference on Information Processing in Medical Imaging. Ile de Berder, France; 1995:287-298.
 Skerl D, Likar B, Pernus F: A protocol for evaluation of similarity measures for rigid registration. IEEE Trans Med Imag 2006, 25:779-791.
 Penney G, Weese J, Little J, Desmedt P, Hill D, Hawkes D: A comparison of similarity measures for use in 2-D-3-D medical image registration. IEEE Trans Med Imag 1998, 17:586-595. doi:10.1109/42.730403
 Torre-Ferrero C, Llata J, Robla S, Sarabia E: A similarity measure for 3D rigid registration of point clouds using image-based descriptors with low overlap. In IEEE 12th International Conference on Computer Vision Workshops (S3DV'09). Kyoto, Japan; 2009:71-78.
 Zinsser T, Schmidt J, Niemann H: A refined ICP algorithm for robust 3-D correspondence estimation. In Proc International Conference on Image Processing. Barcelona, Spain; 2003:695-698.
 Eisele K, Hetzel G: Range image database, University of Stuttgart. [http://range.informatik.uni-stuttgart.de/htdocs/html/]
 Mian A, Bennamoun M, Owens RA: A novel representation and feature matching algorithm for automatic pairwise registration of range images. Int J Comput Vis 2006, 66:19-40. doi:10.1007/s11263-005-3221-0
 Mian A, Bennamoun M, Owens RA: On the repeatability and quality of keypoints for local feature-based 3D object retrieval from cluttered scenes. Int J Comput Vis 2010, 89:348-361. doi:10.1007/s11263-009-0296-z
 Novatnack J, Nishino K: Scale-dependent/invariant local 3D shape descriptors for fully automatic registration of multiple sets of range images. In Proc 10th European Conference on Computer Vision, Part III. Marseille, France; 2008:440-453.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.