A general geometric transformation model for line-scan image registration
EURASIP Journal on Advances in Signal Processing, volume 2023, Article number: 78 (2023)
Abstract
A reasonable geometric transformation model is the key to image registration. When the relative motion direction between the line-scan camera and the object is strictly parallel to the planar object, the images can be aligned with the eight-parameter geometric transformation model of the line-scan image. However, this model becomes invalid when the relative motion direction is arbitrary. Therefore, a new general geometric transformation model for line-scan image registration is proposed in this paper. Considering the different initial poses and motion directions of the line-scan camera, the proposed model is derived from the imaging model of the line-scan camera. To acquire line-scan images for verifying the proposed model, a line-scan image acquisition system was built, and a feature-point-based method is used to register the images. The experimental results show that the proposed geometric transformation model can align line-scan images collected under an arbitrary relative motion direction, not just the parallel case. Moreover, the statistical errors of the image feature point coordinates after registration are the smallest among the compared models. The registration accuracy is better than that of other existing geometric transformation models, which verifies the correctness and generality of the geometric transformation model of the line-scan camera proposed in this paper.
1 Introduction
The line-scan camera is an imaging device with only a single column of sensing elements, characterized by high resolution and a high imaging frequency [1,2,3,4,5]. The imaging principle of the line-scan camera is the same as that of the area-scan camera and also conforms to the perspective projection relationship of the pinhole model, but the imaging models of the two kinds of camera are different [6]. The sensor of the area-scan camera is an array, and the camera is static relative to the object during imaging; only one sampling is needed to obtain a complete image of the object [7]. On the contrary, relative motion is maintained between the line-scan camera and the object, which means that different lines correspond to different imaging poses [8]. To obtain a usable line-scan image, the line-scan camera must move at a constant speed along a straight line relative to the object while the orientation of the camera is fixed [9, 10]. According to the imaging resolution required in the application scene, the motion speed should be matched with the line frequency of the camera so that the object appears in the image at an undistorted scale.
The initial pose variation of the line-scan camera influences the position of the object projected into the image and eventually makes the object present different geometric shapes in the image. Image registration can eliminate the geometric difference of the object caused by different imaging poses of the camera. According to the complexity of the geometric transformation of the object in the image, a corresponding transformation model [11] is selected, such as the translation, rigid, similarity, affine or projection transformation model. Then, the geometric alignment of the two images is realized by using image feature points [12,13,14,15] or a direct registration method [16,17,18]. Because of the difference in the imaging model between the line-scan camera and the area-scan camera, the geometric changes of the object caused by pose variations are different in the two kinds of image. Hence, the geometric transformation model (GTM) derived from the imaging model of the area-scan camera cannot be used to register line-scan images.
There are mainly two ways to realize the relative motion between the line-scan camera and the imaged object: either the line-scan camera is stationary and images a moving object, or a stationary object is scanned by a moving camera [19]. In both cases, it is difficult to ensure that the imaging plane of the line-scan camera is parallel to the object plane. For example, the Trouble of moving EMU Detection System (TEDS) [20,21,22,23] is composed of ten line-scan cameras installed at different locations along the railway. The cameras at different locations cannot be guaranteed to have the same pose with respect to the railway; not only can the imaging plane fail to be parallel to the train surface, but the imaging poses of the cameras also differ. Therefore, the train imaged by the line-scan cameras at different locations undergoes geometric changes, and the GTM of the area-scan image cannot align these train images, which makes anomaly detection methods based on image registration impossible. To solve this problem, we previously established the eight-parameter GTM for registering line-scan images acquired under different initial poses [24]. However, only a relative motion direction parallel to the object plane is considered in that model; the geometric changes of the object in the image caused by arbitrary relative motion directions are ignored. Hence, that model lacks generality and cannot handle the registration of images acquired when the relative motion direction between the line-scan camera and the object is arbitrary.
In this paper, a more general GTM of the line-scan camera is proposed, which can represent the geometric transformation relationship of images acquired when a line-scan camera with any initial pose and any motion direction scans the object. Based on the imaging model of the line-scan camera, the geometric changes of the object in the image under different motion directions are analysed: (i) the object undergoes a shear change in the image when the camera moves along the x-axis of the camera coordinate system; (ii) the object undergoes a scale change perpendicular to the sensor in the image when the camera moves at different speeds along the y-axis of the camera coordinate system; and (iii) the object undergoes a hyperbolic-arc change in the image when the camera moves along the z-axis of the camera coordinate system. To verify the correctness of the model, a line-scan image acquisition system is built. The correctness of the proposed GTM is verified by registration experiments on images acquired under arbitrary relative motion directions between the line-scan camera and the object. At the same time, the registration results show that the proposed model reduces to the eight-parameter line-scan image GTM when the relative motion direction is parallel to the object plane, which means the eight-parameter GTM is a particular case of our model and our model is more general.
Our contributions are listed as follows:

1.
A general geometric transformation model of the line-scan image is established. It is suitable for the registration of images collected by the line-scan camera under different initial poses and relative motion directions. In practical applications, the line-scan camera needs neither to match the motion speed with the sampling frequency nor to strictly adjust its pose and relative motion direction. The geometric differences caused by these factors can be eliminated based on the proposed model, so that applications of the line-scan camera become more extensive and convenient.

2.
The geometric changes caused by different relative motion directions are analysed, including the shear and scale transformations also common to the area-scan camera, and the hyperbolic-arc transformation unique to the line-scan camera.

3.
A line-scan image acquisition system is built to collect images with different imaging poses and relative motion directions, which not only verifies the correctness and generality of the proposed model but also provides a data set for line-scan image registration.
2 The general geometric transformation model of the line-scan image
2.1 The imaging model of the line-scan camera
Imaging is the process by which coordinate points in the three-dimensional (3D) world are mapped to the two-dimensional (2D) image plane [25]. Firstly, it is necessary to establish a world coordinate system in the 3D world and a camera coordinate system that takes the optical centre of the camera as its origin. Then, points in the world coordinate system are transformed into the camera coordinate system through a coordinate transformation, represented by the rotation matrix R and the translation vector T, which are called the external camera parameters [26]. According to the pinhole imaging principle, points in the camera coordinate system are projected onto the imaging plane coordinate system [27]. Generally, the imaging plane coordinate system takes the upper left corner of the image as its origin, and the points mapped to it need to be translated according to the horizontal and vertical offsets. The parameters that describe the transformation from the camera coordinate system to the image plane coordinate system are called the internal camera parameters.
The imaging model of the area-scan camera is obtained by combining the two coordinate transformations of points in 3D. However, the imaging model of the line-scan camera differs in two aspects. First, there is relative motion between the line-scan camera and the object, which means that the coordinate transformation between the world coordinate system and the camera coordinate system is not fixed. It is necessary to establish the relationship between the camera coordinate system at any sampling time and the camera coordinate system at time t = 0 based on the sampling time and the relative motion velocity [28]. Second, the line-scan camera collects an image that is one pixel high, which means that points in the world coordinate system mapped to the horizontal axis of the image coordinate system conform to the perspective projection relation of the pinhole principle, while the vertical axis of the image coordinate system is determined by the camera's motion speed and sampling frequency. Therefore, the imaging model of the line-scan camera [29] is:
where (X, Y, Z) is a point in the world coordinate system, (u, v) is the coordinate of the point mapped to the image, w is a scale factor, R and T represent the rotation matrix and the translation vector, respectively, (v_{x}, v_{y}, v_{z}) is the relative movement speed between the camera and the object, F is the sampling frequency of the camera, f is the principal distance and p_{u} is the principal point offset in the sensor direction.
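For concreteness, a model of this form can be written down explicitly. The layout below follows the classical linear pushbroom formulation with the parameters defined above; the exact factor arrangement is our sketch and may differ from the paper's Eq. (1):

```latex
\begin{pmatrix} wu \\ v \\ w \end{pmatrix}
=
\underbrace{\begin{pmatrix} f & 0 & p_u \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}}_{\text{internal}}
\underbrace{\begin{pmatrix} 1 & -v_x/v_y & 0 \\ 0 & F/v_y & 0 \\ 0 & -v_z/v_y & 1 \end{pmatrix}}_{\text{motion}}
\underbrace{\begin{pmatrix} R & T \end{pmatrix}}_{\text{external}}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
```

Here the middle matrix accounts for the relative motion: a point is sampled at the instant its camera-frame y-coordinate vanishes, so v = Ft is an orthographic coordinate along the motion direction while u remains a perspective projection along the sensor.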
2.2 The geometric variation caused by the motion direction change of the linescan camera
The variations of the internal and external parameters in the camera imaging model make a point in the world coordinate system change its position in the image. The former variation means the image is taken by a different device, while the latter means the image is acquired under a different imaging pose of the camera. According to the imaging model of the line-scan camera, i.e. Equation (1), the position of a 3D world point mapped into the image is also changed by a variation of the velocity. Figure 1 summarizes the geometric variations of the object when the motion velocity of the camera changes. These images are all simulated by the line-scan camera imaging model, and the simulation process is as follows. Firstly, the world coordinate system is established by taking an image containing the checkerboard pattern as the imaging object. The distance represented by each pixel is set, and the internal and external parameters of the line-scan imaging system are determined. Secondly, the velocity (v_x, v_y, v_z) is given, and the points of the checkerboard image in the world coordinate system are mapped to the coordinate system of the line-scan image according to the imaging model, i.e. Equation (1). Finally, the grey value of the simulated image is obtained by bilinear interpolation. By changing (v_x, v_y, v_z), simulation results of the object captured by the line-scan camera under different relative motion velocities can be achieved.
Figure 1a is the image obtained by the line-scan camera under the ideal condition that the imaging plane of the camera is parallel to the object plane and the motion direction of the camera is perpendicular to the imaging sensor, that is, the motion velocity of the line-scan camera in the camera coordinate system is (v_x, v_y, v_z) = (0, v_1, 0). When the line-scan camera has a velocity along the direction of the sensor, i.e. v_x ≠ 0, the object is shifted along the direction of the sensor in each sampling of the line-scan camera. Since the collected columns are directly spliced, the object appears with a shear transformation in the image, as shown in Fig. 1b, c. It should be noted that the shear transformation of the imaging object appears in the line-scan image whenever v_x ≠ 0; this is independent of the parameter values selected in the simulation, i.e. 0.1 and −0.1. The parameter value only affects the degree of the shear transformation, and the exact values used in this paper were chosen only for convenient observation of the simulation results. The same holds for the parameters selected in the subsequent simulations. When v_y becomes larger, since the sampling frequency F is unchanged, the total number of samplings of the object decreases, so the object scales down along the v-axis in the image, as shown in Fig. 1d. Conversely, when v_y is reduced, the object scales up along the v-axis, as shown in Fig. 1e. If there is a non-zero velocity along the optical axis, the camera moves towards or away from the object. The magnification factor of each sampling then becomes larger or smaller as the distance between the camera and the object decreases or increases, which maps the straight contours of the object to hyperbolic arcs in the image [30], as shown in Fig. 1f and g.
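These velocity effects can be reproduced with a short simulation sketch. The function below is our own minimal illustration, not the paper's simulation code: it assumes an identity rotation, a camera centre C0, and illustrative default values of f, F and p_u. A world point is imaged at the instant it crosses the sensor plane; u follows the perspective projection along the sensor and v follows the sampling frequency.

```python
import numpy as np

def project_linescan(X, C0, vel, f=1000.0, F=500.0, pu=512.0):
    """Project a 3-D world point through an idealized line-scan camera.

    Assumes identity rotation: the sensor lies along the camera x-axis and
    the optical axis is the z-axis.  The camera centre moves from C0 with
    constant velocity vel = (vx, vy, vz); a point is imaged at the instant
    its camera-frame y-coordinate crosses the sensor plane (y_cam = 0).
    """
    X, C0, vel = (np.asarray(a, dtype=float) for a in (X, C0, vel))
    t = (X[1] - C0[1]) / vel[1]        # sampling instant for this point
    xc = X[0] - C0[0] - vel[0] * t     # camera-frame x at time t
    zc = X[2] - C0[2] - vel[2] * t     # camera-frame z at time t
    u = f * xc / zc + pu               # perspective projection along the sensor
    v = F * t                          # line index set by the sampling frequency
    return u, v

# v_x != 0 shifts u in proportion to t, i.e. a shear along the sensor direction;
# v_z != 0 makes zc depend on t, so u becomes a hyperbolic function of v.
```

With vel = (0, v_1, 0) a checkerboard is reproduced undistorted; a non-zero v_x shears it, changing v_y rescales the v-axis, and a non-zero v_z bends straight contours into hyperbolic arcs, matching the behaviours in Fig. 1b–g.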
In our previous research, a line-scan image GTM with eight parameters was established, which ignored the motion of the line-scan camera along the optical axis, i.e. v_z = 0. In this scenario, the relative motion direction is parallel to the object plane. No matter how the pose of the camera changes, the distance between the optical centre of the camera and the object does not change with the relative motion. As shown in Fig. 2a, the line-scan camera collects a planar object with a parallel motion direction. The distances z_1, z_2 and z_3 between the optical centre of the camera and the object at times t_1, t_2 and t_3 do not change with the movement of the camera. In this case, the eight-parameter GTM can express the geometric transformation relationship of line-scan images acquired under different imaging poses. On the contrary, in the case of an arbitrary relative motion direction, the distance between the optical centre and the object changes with the possible motion along the optical axis. As shown in Fig. 2b, the line-scan camera collects the planar object at times t_1, t_2 and t_3, respectively, and the component of its motion velocity along the z-axis makes z_1, z_2 and z_3 differ. The geometric changes shown in Fig. 1f, g are thus generated in the line-scan images of the object. Neither the projection transformation model of the area-scan image nor the eight-parameter GTM of the line-scan image can achieve geometric alignment of line-scan images in this case.
2.3 Derivation of the general geometric transformation model of the line-scan image
The object is collected by two different cameras with different initial poses and motion directions, as shown in Fig. 3. According to Eq. (1), the imaging models for line-scan camera I (LSC I) and line-scan camera II (LSC II) can be expressed as:
where (X, Y, Z) is the point on the object plane and (u_{1}, v_{1}) and (u_{2}, v_{2}) are the coordinates of the object plane points mapped to LSC I and LSC II, respectively.
Let the object plane coincide with the XOY plane of the world coordinate system. In this case, the components of all points on the object in the Zaxis of the world coordinate system are 0. Hence, the components related to Z can be ignored in the imaging model, and the matrices in the imaging models of LSC I and LSC II can be combined to obtain their simplified expressions as follows:
where m_1 ~ m_9 and n_1 ~ n_9 are combinations of the internal parameters, external parameters and motion parameters of LSC I and LSC II, respectively.
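Written out, the simplified models can be sketched as follows. The 3 × 3 row ordering of m_1 ~ m_9 is our assumption rather than a reproduction of Eq. (3), but it is consistent with m_9 appearing in the scale row as used in the derivation below:

```latex
\begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix}
= \begin{pmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & m_9 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix},
\qquad
\begin{pmatrix} w_2 u_2 \\ v_2 \\ w_2 \end{pmatrix}
= \begin{pmatrix} n_1 & n_2 & n_3 \\ n_4 & n_5 & n_6 \\ n_7 & n_8 & n_9 \end{pmatrix}
\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
```

Note that only the u-coordinate carries the perspective scale factor; the v-coordinate is obtained orthographically along the motion direction, in line with the line-scan imaging model.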
The elements m_1, m_2, m_4 and m_5 in the matrix M come from a combination of the internal parameter matrix, the motion matrix and the rotation matrix. Since all of these matrices are full rank, the submatrix (m_1, m_2; m_4, m_5) is also full rank. According to the imaging model of the line-scan camera, m_9 = t_z − v_z t_y / v_y. Since the origin of the line-scan camera coordinate system cannot coincide with the origin of the world coordinate system, m_9 is never 0. Therefore, matrix M has an inverse, and the following form can be obtained:
According to Eq. (4), the relation between w_1 and the image coordinates (u_1, v_1) can be written as:
By substituting Eq. (4) into the LSC II model of Eq. (3), one obtains:
where O_1 ~ O_9 are the elements of the product of the matrix N and the inverse of the matrix M.
By substituting Eq. (5) into Eq. (6), we thus have:
By eliminating w_2 in Eq. (7), we get:
O_1 ~ O_9 and m_1 ~ m_9 are determined by the internal parameters, external parameters and motion parameters of LSC I and LSC II. These parameters are fixed before imaging, so their combinations are constants. h_1 ~ h_12 are used to represent the coordinate coefficients of LSC I in Eq. (8), which can be written in the following matrix form:
Equation (9) is the GTM of line-scan images in the general form established in this paper. (u_1, v_1) and (u_2, v_2) are the pixel coordinates of the two line-scan images, s_1 and s_2 are the scale factors of the image coordinates, and h_1 ~ h_12 are the geometric transformation parameters that express the coordinate mapping relationship between the two line-scan images. The geometric transformation forms of line-scan images expressed by the 12 parameters of the model are shown in Table 1. The parameters in Eq. (9) are determined by the combination of the internal parameters, external parameters and motion parameters of the two image acquisitions. However, when estimating the parameters in Eq. (9), prior knowledge of the imaging system parameters is not required; they are estimated only from the information in the two images to be registered.
The 12-parameter GTM of the line-scan image established in this paper is very different from the projection transformation model of the area-scan image [31] [see Eq. (10)] and the eight-parameter GTM of the line-scan image [24] [see Eq. (11)]. First, the GTM contains a quadratic term of the image coordinates, i.e. the u_1 v_1 term; when the line-scan camera moves towards or away from the object, straight lines are projected into hyperbolic arcs. Second, the horizontal and vertical coordinates of the warped image have independent scale factors. According to the imaging model of the line-scan camera, the image is a perspective projection along the sensor direction and a parallel projection along the motion direction, so the geometric transformation rules of the horizontal and vertical coordinates of the line-scan image are different.
where s in both Eqs. (10) and (11) is a scale factor of the image coordinates.
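For comparison, the two reference models can be written out. Eq. (10) is the standard planar homography; the form shown for Eq. (11) is our reconstruction from the parameter counts stated in Sect. 4.1 (five unknowns in the u_2 equation, three in the v_2 equation), so the exact grouping of h_1 ~ h_8 is an assumption:

```latex
\text{projection model, Eq.~(10):}\quad
s\begin{pmatrix} u_2 \\ v_2 \\ 1 \end{pmatrix}
= \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix}
\begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix}
\qquad
\text{eight-parameter model, Eq.~(11):}\quad
\begin{cases}
s\,u_2 = h_1 u_1 + h_2 v_1 + h_3 \\
v_2 = h_4 u_1 + h_5 v_1 + h_6 \\
s = h_7 u_1 + h_8 v_1 + 1
\end{cases}
```

In the homography, s scales both coordinates; in the eight-parameter line-scan model, only u_2 is perspective-scaled, reflecting the parallel projection along the motion direction.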
3 Verification method and experimental data
3.1 Verification method
The GTM is used to register two images acquired by different cameras, at different times and under different poses. In the image registration process, one image is selected as the template image and the other as the target image. The optimal geometric transformation parameters are used to warp the target image such that the transformed target image geometrically matches the template image. When the selected GTM cannot satisfy the geometric transformation relation between the template image and the target image, the two images cannot be geometrically aligned. Therefore, the feature-based registration method is used to align two line-scan images in this paper, and the correctness and generality of the proposed GTM are illustrated by the registration results.
The process of the feature-based registration method is as follows. The feature points of the template image and the target image are extracted and then matched according to their feature information. Ignoring mismatches, each pair of matched feature points can be considered as the same point on the object, and m pairs of matched feature points are obtained. Equation (9) can be rewritten in the following form:
According to Eq. (12), the parameters h_1 ~ h_7 and h_8 ~ h_12 can be estimated from the equations for u_2 and v_2, respectively. The equation for u_2 contains at most seven parameters, and seven equations need to be combined to solve them. Since each pair of matched feature points provides only one equation, at least seven pairs are required to calculate the geometric transformation parameters of the two registered images. Therefore, seven pairs are randomly selected from the m matched feature point pairs, and their coordinates are denoted as (u_1i, v_1i) and (u_2i, v_2i), i = 1…7. According to Eq. (12), one gets the following form:
Equation (13) is rewritten in matrix form:
Equation (14) can be abbreviated as:
Then, the geometric transformation parameters of the two line-scan images can be obtained by the following formula:
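As a sketch of this linear solve, the u-direction system can be assembled and solved as below. The rational form of the u_2 equation (which parameters sit in the numerator versus the denominator) is our assumption; it matches the stated count of seven parameters but is not copied from Eq. (12):

```python
import numpy as np

def fit_u_params(u1, v1, u2):
    """Estimate the seven u-direction parameters from >= 7 matched pairs.

    Assumed eliminated form (our reading, consistent with the parameter
    counts in the text, not necessarily the paper's exact Eq. (12)):
        u2 = (h1*u1 + h2*v1 + h3*u1*v1 + h4)
             / (h5*u1 + h6*v1 + h7*u1*v1 + 1)
    Cross-multiplying gives one linear equation per matched pair.
    """
    u1, v1, u2 = (np.asarray(a, dtype=float) for a in (u1, v1, u2))
    A = np.column_stack([u1, v1, u1 * v1, np.ones_like(u1),
                         -u2 * u1, -u2 * v1, -u2 * u1 * v1])
    # With exactly 7 pairs this is a square system; with more pairs,
    # lstsq returns the least-squares estimate.
    h, *_ = np.linalg.lstsq(A, u2, rcond=None)
    return h
```

With exactly seven matched pairs the system is square, matching the minimal sample size stated above; additional pairs simply over-determine the least-squares fit.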
Due to feature point extraction errors and mismatches, geometric parameters estimated from inaccurately matched feature point pairs cannot represent the global geometric mapping relationship. To avoid this, the geometric transformation parameters of the two images are solved iteratively in this paper. A registration result based on parameters estimated from inaccurately matched point pairs has a larger feature point coordinate error. Therefore, the root mean square error of the feature point coordinates is taken as the measure of the estimate quality, and the parameters corresponding to the minimum coordinate error are selected as the optimal geometric transformation parameters. The specific registration process is described in Algorithm 1. The computational complexity of each iteration of Algorithm 1 is analysed as follows. Step 2 takes time O(n), where n is the number of feature point pairs. Step 3 takes time O(n^3) to invert the matrix A_1. The M feature points in I_2 are transformed into I_1 with computational complexity O(nM) in step 4, and step 5 takes time O(M). Therefore, the total computational complexity of each iteration is O(n^3 + nM).
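The iterative selection can be sketched as follows. This is a hypothetical minimal version using our assumed rational form of the u_2 equation; the paper's Algorithm 1 also estimates the v-direction parameters and evaluates the full coordinate error:

```python
import numpy as np

def register_iterative(u1, v1, u2, iters=200, seed=0):
    """Sample seven matched pairs per iteration, solve the u-direction
    parameters linearly, and keep the estimate with the smallest RMSE
    over all pairs (a sketch of Algorithm 1, not the paper's exact code)."""
    u1, v1, u2 = (np.asarray(a, dtype=float) for a in (u1, v1, u2))
    rng = np.random.default_rng(seed)
    best_h, best_rmse = None, np.inf
    for _ in range(iters):
        idx = rng.choice(u1.size, size=7, replace=False)
        a, b, c = u1[idx], v1[idx], u2[idx]
        # Assumed form: u2 = (h1*u1 + h2*v1 + h3*u1*v1 + h4)
        #                    / (h5*u1 + h6*v1 + h7*u1*v1 + 1)
        A = np.column_stack([a, b, a * b, np.ones(7),
                             -c * a, -c * b, -c * a * b])
        h, *_ = np.linalg.lstsq(A, c, rcond=None)
        pred = ((h[0]*u1 + h[1]*v1 + h[2]*u1*v1 + h[3]) /
                (h[4]*u1 + h[5]*v1 + h[6]*u1*v1 + 1.0))
        rmse = np.sqrt(np.mean((pred - u2) ** 2))
        if rmse < best_rmse:          # keep the best estimate seen so far
            best_h, best_rmse = h, rmse
    return best_h, best_rmse
```

A degenerate or badly mismatched sample simply produces a large (or non-finite) RMSE and is discarded, which is the rationale for the RMSE-based selection described above.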
3.2 Experimental data
To verify the correctness of the GTM established in this paper, a line-scan image acquisition system was built. The system is composed of a one-dimensional motion platform, a line-scan camera, a tripod and a planar object. Line-scan images can be obtained by the line-scan camera under different initial poses and different motion directions. The pose of the line-scan camera and the motion velocity are adjusted before each acquisition, which produces different geometric deformations of the object in the line-scan images. The model established in this paper can eliminate the geometric changes caused by these different imaging parameters; they need not be known as prior knowledge, nor do they impose any limitation on the parameter estimation in Algorithm 1. The checkerboard pattern is selected as the planar object since its feature points are rich and evenly distributed: they can be extracted with high accuracy, and their regular distribution avoids mismatching. With the checkerboard line-scan images, the influence of feature point extraction errors and mismatches on the registration results can be reduced, allowing the correctness of the established model to be verified.
The tripod and the line-scan camera are placed on the one-dimensional motion platform to realize an arbitrary relative motion direction between the line-scan camera and the object, as shown in Fig. 4a. The motion platform moves uniformly in a straight line, and the object is scanned by the line-scan camera to complete an image acquisition. The imaging poses of the line-scan camera are changed by rotating the tripod, while the motion direction is changed by shifting the one-dimensional motion platform. Figure 5 shows examples of line-scan images collected in this scenario, in which geometric transformations of the hyperbolic-arc form are generated.
Alternatively, the line-scan camera is fixed on the tripod and placed on the ground, and the planar object is placed on the one-dimensional motion platform. A parallel relative motion direction between the line-scan camera and the object can thus be realized, as shown in Fig. 4b. The object passes in front of the camera at a constant speed on the platform to complete an image acquisition, and different poses of the camera are achieved by rotating and shifting the tripod. Figure 6 shows image examples of the checkerboard collected using the acquisition system in Fig. 4b. Due to the different imaging poses of the camera, the checkerboard pattern presents different geometric shapes in the images.
4 Experimental results
4.1 Model validation based on our data
The GTM of the line-scan image established in this paper is compared with the eight-parameter GTM of the line-scan image (Eq. (11), abbreviated as the eight-parameter model in the following) and the projection transformation model of the area-scan image (Eq. (10), abbreviated as the projection model in the following). The former is the only existing model that conforms to the geometric transformation law of the line-scan image, while the latter is the model that can represent the most complex geometric transformation of area-scan images. Line-scan images collected under the two scenarios in Fig. 4 are selected as the experimental data. The three GTMs are each used to register the line-scan image pairs, and each registered image pair comes from the same scenario. In the scenario in which the line-scan camera scans the object under an arbitrary relative motion direction, 31 line-scan images are collected, giving 465 image pairs to be registered; in the scenario in which the relative motion direction is parallel to the object, 20 line-scan images are collected, giving 190 image pairs to be registered. The method in Ref. [32] is used to extract the feature points in the checkerboard line-scan images.
The registration method for the eight-parameter model and the projection model is the same as that in Algorithm 1. In the eight-parameter model, h_7 and h_8 are related only to u_2, so there are five unknown parameters in the u_2 equation and three in the v_2 equation; therefore, five matched feature point pairs are needed to solve the parameters of the eight-parameter model. The parameters of the projection model can be solved by combining the u_2 and v_2 equations, as h_7 and h_8 are related to both. Since each matched feature point pair provides two equations, four pairs are needed to solve the parameters of the projection model. The number of iterations is set to 100,000 so that the parameter estimation of each model can be expected to reach a good result. The registration results based on the different GTMs are evaluated subjectively and objectively. Subjectively, the target image is transformed into the coordinate system of the template image according to the optimal geometric transformation parameters, and the overlapped image and feature points are drawn. The registration results can then be evaluated by intuitively observing the positions of the checkerboard intersections in the same coordinate system: if two images are geometrically aligned, their intersections coincide. Objectively, the mean error (ME), the root mean square error (RMSE) and the correctly matched rate (CMR) of the image feature point coordinates are calculated. The CMR is calculated according to Eq. (17), where m is the number of feature point pairs and m_in is the number of feature points in m whose geometric coordinate error after registration is less than 1 pixel. Since the feature points of the checkerboard pattern are evenly distributed, the higher the CMR, the better the global registration accuracy. Meanwhile, the computational time of the registration process based on the different GTMs is recorded. The correctness of the GTM established in this paper is quantified through these statistical indexes.
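The CMR of Eq. (17) is straightforward to compute; the helper below is a minimal illustration (the function name and array interface are ours):

```python
import numpy as np

def cmr(template_pts, warped_pts, thresh=1.0):
    """Correctly matched rate, Eq. (17): the fraction m_in / m of matched
    feature points whose coordinate error after registration is below
    `thresh` pixels (1 pixel in the paper)."""
    d = np.linalg.norm(np.asarray(template_pts, dtype=float)
                       - np.asarray(warped_pts, dtype=float), axis=1)
    return float(np.count_nonzero(d < thresh) / d.size)
```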
4.1.1 Registration results for images collected under different motion directions
Figure 5a is selected as the template image, and Fig. 5b–d are selected as the target images. Table 2 shows the ME, RMSE and CMR of the feature point coordinates and the computational time of the registration. It can be seen that the ME and RMSE of the feature point coordinates after registration using the proposed GTM are the smallest. Both errors are less than 0.5 pixels, so the geometric alignment accuracy of the line-scan images is very high. The ME and RMSE of the feature point coordinates are between 1 and 3 pixels after registration with the eight-parameter model and greater than 11 pixels after registration with the projection model, which means the image pairs registered with these two GTMs are not geometrically aligned. The CMR of the three image pairs after registration based on the model proposed in this paper is nearly 100%, indicating very high-precision global geometric alignment. On the contrary, the CMR after registration based on the other two GTMs is lower than 33%.
Figure 7 shows box plots of the statistical indexes after registration of all images collected in the scenario of Fig. 4a. The ME and RMSE of the 465 image pairs registered based on the proposed model are very small: the registration accuracy is high, and the proposed model can express the geometric transformation relationship of line-scan images collected in this scenario. Figure 7c shows the CMR of the feature points after registration with the three GTMs. For 75% of the 465 image pairs registered by our model, the CMR is nearly 100%, and the lowest CMR is still around 80%. On the contrary, the other two models cannot represent the geometric transformation law of the images collected in this scenario: for 75% of the 465 image pairs their CMR is under 80%, and the lowest CMR is close to 0%. Since the model established in this paper has more parameters to solve, it takes more time to register the line-scan images than the other two GTMs, as shown in Fig. 7d. However, only the proposed GTM can align the images collected in this scenario.
Figure 8 shows the coincidence of the feature point pairs registered based on the three GTMs, presenting the accuracy of the registration results intuitively. The first column in Fig. 8 shows the registration results based on the GTM established in this paper. The feature points in the target image almost coincide with those in the template image, which means that the coordinate error between the feature points is small and the proposed model gives excellent geometric alignment accuracy. The second and third columns of Fig. 8 show the registration results based on the eight-parameter model and the projection model, respectively. The feature points of the template image do not coincide with those of the target image, which indicates that the images are not geometrically aligned. It also indicates that these two GTMs do not conform to the geometric transformation law of images acquired in the scenario where the relative motion direction between the line-scan camera and the object is arbitrary.
4.1.2 Image registration results collected under parallel motion direction
The model established in this paper accounts for changes in both poses and motion directions, so it should contain the geometric transformation relationship of the eight-parameter model as a special case. In this subsection, the proposed model and the eight-parameter model are both used to register line-scan images acquired with the motion direction parallel to the planar object, to verify whether the proposed model can also register images acquired in this scenario.
Figure 6a is selected as the template image, and Fig. 6b–d are selected as the target images. Table 3 shows the statistical errors of the feature point coordinates after registration and the computational time for the sample images in Fig. 6 based on the two GTMs. As shown in Table 3, the ME and RMSE of the feature point coordinates are both less than 1 pixel. The CMR also shows that both models achieve high global registration accuracy for images acquired in this scenario. In terms of computational time, the model established in this paper still needs more time to solve. The first row of Fig. 9 shows the registration results based on the proposed model, and the second row those based on the eight-parameter model. As can be seen from the registration results, the coincidence of the feature point pairs transformed into the same coordinate system is very high, and it is difficult to distinguish visually which model achieves higher registration accuracy. This means that the GTM established in this paper contains the geometric transformation law described by the eight-parameter line-scan image GTM.
Figure 10 shows the statistical errors of the feature point coordinates after registration and the computational time for all images collected in the scenario of Fig. 4b. Although the ME and RMSE of the feature point coordinates of the 190 line-scan image pairs registered based on the eight-parameter model are less than 1 pixel, the registration accuracy based on the model proposed in this paper is higher. Figure 10c shows the CMR after registration of the 190 line-scan image pairs. The model proposed in this paper can also realize high-precision global registration of line-scan images collected under the parallel motion direction scenario, which verifies its universality. For a fair comparison, the iteration number in the experiments is set very high, so image registration based on the proposed model requires more computational time, as shown in Fig. 10d. When the iteration number is reduced, the computational time of both GTMs becomes small or even negligible.
4.2 Registration results on real train line-scan images
In this subsection, train line-scan images are used as registration data to verify the correctness of the model established in this paper. As mentioned in the Introduction, the geometric shape of the train in line-scan images collected by TEDS installed at different railway locations differs. An area on the locomotive is selected as the reference because its surface is not parallel to the motion direction of the train. As shown in Fig. 11a, d, the distance between the line-scan camera and the train surface decreases as the train moves. The boxed regions are selected as template images, as shown in Fig. 11b and e. Figure 11c and f shows the target images to be registered.
RIFT [12] feature points are extracted from the train line-scan images, and seven matched feature point pairs are manually selected. Then, the parameters of the GTM established in this paper and of the eight-parameter model are estimated, respectively. The registration results are presented as pseudo-colour images. Figure 12a and c shows the registration results based on our model: the geometry of the template images is precisely aligned with that of the warped target images. In contrast, the registration results using the eight-parameter model still contain local geometric errors, as shown in the boxes in Fig. 12b and d. These practical results further demonstrate the correctness and generality of the GTM proposed in this paper.
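For reference, the projection-model baseline used in the comparisons is the standard area-scan homography, whose parameters can be estimated from four or more matched point pairs by the direct linear transformation (DLT). The sketch below illustrates only this baseline; the equations of the paper's 12-parameter line-scan GTM and of the eight-parameter model are not restated in this section, and the function names are illustrative:

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 projective transform (homography) mapping
    src -> dst from N >= 4 matched point pairs via the standard DLT.
    This is the area-scan 'projection model' baseline, not the paper's
    line-scan GTM."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)   # null vector = homography up to scale
    return H / H[2, 2]

def apply_homography(H, pts):
    """Map (N, 2) points through H using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

With exact correspondences (e.g. a pure translation), the recovered H reproduces the target points to numerical precision; with noisy matches, the SVD solution is the least-squares estimate.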
5 Conclusions
Considering the geometric changes of the object in the image caused by different initial poses and motion directions of line-scan cameras, a general GTM with 12 parameters is established based on the imaging model of the line-scan camera. The model contains the geometric relationship described by the eight-parameter line-scan image GTM, and this relationship is analogous to that between the affine and projective transformation models for area-scan images. The biggest difference between the proposed model and the eight-parameter line-scan image GTM is that our model can express the hyperbolic-arc geometric transformation caused by relative motion that moves the line-scan camera towards or away from the imaged object. The correctness and generality of the proposed GTM are verified by registration experiments on line-scan images collected in two scenarios: the line-scan camera moving in an arbitrary direction, and the relative motion parallel to the object. Furthermore, its accuracy in a real application is verified by registration of train line-scan images. The GTM established in this paper broadens the usage conditions of the line-scan camera, which no longer has to move parallel to the object plane and perpendicular to the sensor, allowing more flexibility in building line-scan camera acquisition systems in practical applications.
Availability of data and materials
The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Abbreviations
GTM: Geometric transformation model
TEDS: Trouble of moving electric multiple units detection system
3D: Three-dimensional
2D: Two-dimensional
LSC I: Line-scan camera I
LSC II: Line-scan camera II
RIFT: Radiation-variation insensitive feature transform
References
1. B. Sun, J. Zhu, L. Yang, S.R. Yang, Z.Y. Niu, Calibration of line-scan cameras for precision measurement. Appl. Optics 55(25), 6836–6843 (2016). https://doi.org/10.1364/AO.55.006836
2. B. Sun, J.G. Zhu, L.H. Yang, Y. Guo, J.R. Lin, Stereo line-scan sensor calibration for 3D shape measurement. Appl. Optics 56(28), 7905–7914 (2017). https://doi.org/10.1364/AO.56.007905
3. K.C. Song, B.M. Hou, H. Niu, X. Wen, Y.H. Yan, Flexible line-scan camera calibration method using a coded eight trigrams pattern. Opt. Lasers Eng. 110, 296–307 (2018). https://doi.org/10.1016/j.optlaseng.2018.06.014
4. M. Yao, Z.Y. Zhao, B.G. Xu, Geometric calibration of line-scan camera using a planar pattern. J. Electron. Imaging 23(1), 013028 (2014). https://doi.org/10.1117/1.JEI.23.1.013028
5. B.W. Hui, G.J. Wen, P. Zhang, A novel line scan camera calibration technique with an auxiliary frame camera. IEEE Trans. Instrum. Meas. 62(9), 2567–2575 (2013). https://doi.org/10.1109/TIM.2013.2256815
6. C. Steger, M. Ulrich, A camera model for line-scan cameras with telecentric lenses. Int. J. Comput. Vis. 129, 80–99 (2021). https://doi.org/10.1007/s11263-020-01358-3
7. R. Usamentiaga, D.F. Garcia, Multi-camera calibration for accurate geometric measurements in industrial environments. Measurement 134, 345–358 (2019). https://doi.org/10.1016/j.measurement.2018.10.087
8. B.W. Hui, J.R. Zhong, G.J. Wen, D.R. Li, Determination of line scan camera parameters via the direct linear transformation. Opt. Eng. 51(11), 113201 (2012). https://doi.org/10.1117/1.OE.51.11.113201
9. B.W. Hui, G.J. Wen, Z.X. Zhao, D.R. Li, Line-scan camera calibration in close-range photogrammetry. Opt. Eng. 51(5), 053602 (2012). https://doi.org/10.1117/1.OE.51.5.053602
10. R.Y. Liao, J.G. Zhu, L.H. Yang, J.R. Lin, B. Sun, J.C. Yang, Flexible calibration method for line-scan cameras using a stereo target with hollow stripes. Opt. Lasers Eng. 113, 6–13 (2019). https://doi.org/10.1016/j.optlaseng.2018.09.014
11. X.X. Zhang, C. Gilliam, T. Blu, All-pass parametric image registration. IEEE Trans. Image Process. 29, 5625–5640 (2020). https://doi.org/10.1109/TIP.2020.2984897
12. J.Y. Li, Q.W. Hu, M.Y. Ai, RIFT: multi-modal image matching based on radiation-variation insensitive feature transform. IEEE Trans. Image Process. 29, 3296–3310 (2020). https://doi.org/10.1109/TIP.2019.2959244
13. Y.X. Ye, J. Shan, L. Bruzzone, Robust registration of multimodal remote sensing images based on structural similarity. IEEE Trans. Geosci. Remote Sensing 55(5), 2941–2958 (2017). https://doi.org/10.1109/TGRS.2017.2656380
14. G.A. Idrobo-Pizo, J.M.S.T. Motta, D.L. Borges, Novel invariant feature descriptor and a pipeline for range image registration in robotic welding applications. IET Image Process. 13(6), 964–974 (2019). https://doi.org/10.1049/iet-ipr.2018.6105
15. C.C. Lin, Y.C. Tai, J.J. Lee, Y.S. Chen, A novel point cloud registration using 2D image features. EURASIP J. Adv. Signal Process. 2017, 5 (2017). https://doi.org/10.1186/s13634-016-0435-y
16. C.X. Li, Z.L. Shi, Y.P. Liu, T.C. Liu, L.Y. Xu, Efficient and robust direct image registration based on joint geometric and photometric Lie algebra. IEEE Trans. Image Process. 27(12), 6010–6024 (2018). https://doi.org/10.1109/TIP.2018.2864895
17. S.J. Chen, H.L. Shen, C.G. Li, J.H. Xin, Normalized total gradient: a new measure for multispectral image registration. IEEE Trans. Image Process. 27(3), 1297–1310 (2018). https://doi.org/10.1109/TIP.2017.2776753
18. W.W. Kong, P.X. Zang, S.J. Niu, D.W. Li, Iterative registration for multimodality retinal fundus photographs using directional vessel skeleton. IET Image Process. 15(3), 696–704 (2021). https://doi.org/10.1049/ipr2.12054
19. C. Steger, M. Ulrich, C. Wiedemann, Machine Vision Algorithms and Applications, in Translation, 1st edn., ed. by S.R. Yang, D.J. Wu, D.S. Duan (Tsinghua University Press, Beijing, 2008), pp. 47–53
20. S.F. Lu, Z. Liu, Automatic visual inspection of a missing split pin in the China railway high-speed. Appl. Optics 55(30), 8395–8405 (2016). https://doi.org/10.1364/AO.55.008395
21. S.F. Lu, Z. Liu, Y. Shen, Automatic fault detection of multiple targets in railway maintenance based on time-scale normalization. IEEE Trans. Instrum. Meas. 67(4), 849–865 (2018). https://doi.org/10.1109/TIM.2018.2790498
22. J.Y. Xu, R. Sun, P.Y. Tian, Q. Xie, Y. Yang, H.D. Liu, L. Cao, Correction of rolling wheel images captured by a linear array camera. Appl. Optics 54(33), 9736–9740 (2015). https://doi.org/10.1364/AO.54.009736
23. L. Liu, F.Q. Zhou, Y.Z. He, Automated visual inspection system for bogie block key under complex freight train environment. IEEE Trans. Instrum. Meas. 65(1), 2–14 (2016). https://doi.org/10.1109/TIM.2015.2479101
24. L. Fang, Z.L. Shi, C.X. Li, Y.P. Liu, E.B. Zhao, Geometric transformation modeling for line-scan images under different camera poses. Opt. Eng. 61(10), 103103 (2022). https://doi.org/10.1117/1.OE.61.10.103103
25. R. Usamentiaga, Static calibration for line-scan cameras based on a novel calibration target. IEEE Trans. Instrum. Meas. 71, 5015812 (2022). https://doi.org/10.1109/TIM.2022.3190039
26. D.D. Li, G.J. Wen, S.H. Qiu, Cross-ratio-based line scan camera calibration using a planar pattern. Opt. Eng. 55(1), 014104 (2016). https://doi.org/10.1117/1.OE.55.1.014104
27. R. Usamentiaga, D.F. Garcia, F.J. Calle, Line-scan camera calibration: a robust linear approach. Appl. Optics 59(30), 9443–9453 (2020). https://doi.org/10.1364/AO.404774
28. D. Poli, A rigorous model for spaceborne linear array sensors. Photogramm. Eng. Remote Sens. 73, 187–196 (2007). https://doi.org/10.14358/PERS.73.2.187
29. R. Gupta, R.I. Hartley, Linear pushbroom cameras. IEEE Trans. Pattern Anal. Mach. Intell. 19(9), 963–975 (1997). https://doi.org/10.1109/34.615446
30. C. Steger, M. Ulrich, C. Wiedemann, Machine Vision Algorithms and Applications, 2nd edn. (Wiley-VCH, Weinheim, 2018), pp. 618–622
31. P. Monteiro, J. Ascenso, F. Pereira, Perspective transform motion modeling for improved side information creation. EURASIP J. Adv. Signal Process. 2013, 189 (2013). https://doi.org/10.1186/1687-6180-2013-189
32. A. Geiger, F. Moosmann, Ö. Car, B. Schuster, Automatic camera and range sensor calibration using a single shot, in Proceedings of the IEEE International Conference on Robotics and Automation (Saint Paul, USA, 2012), pp. 3936–3943. https://doi.org/10.1109/ICRA.2012.6224570
Acknowledgements
Not applicable.
Funding
This work was supported by the Science and Technological Innovation Field Fund Projects under Grant No. E01Z041101.
Author information
Authors and Affiliations
Contributions
LF and ZS conceived the idea of the study. LF, YL, CL and MP wrote the paper. LF and MP validated the results. EZ collected the data. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Fang, L., Shi, Z., Liu, Y. et al. A general geometric transformation model for line-scan image registration. EURASIP J. Adv. Signal Process. 2023, 78 (2023). https://doi.org/10.1186/s13634-023-01041-y
DOI: https://doi.org/10.1186/s13634-023-01041-y
Keywords
 Line-scan image
 Geometric transformation model
 Relative motion direction
 Line-scan camera