
A general geometric transformation model for line-scan image registration

Abstract

A reasonable geometric transformation model is the key to image registration. When the relative motion direction between the line-scan camera and the planar object is strictly parallel to the object plane, the images can be aligned with the eight-parameter geometric transformation model of the line-scan image. However, that model becomes invalid when the relative motion direction is arbitrary. Therefore, a new general geometric transformation model of line-scan images is proposed in this paper for line-scan image registration. Considering the different initial poses and motion directions of the line-scan camera, the proposed model is derived from the imaging model of the line-scan camera. To acquire line-scan images for verifying the proposed model, a line-scan image acquisition system was built, and a feature-point-based method is used to register the line-scan images. The experimental results show that the proposed geometric transformation model can align line-scan images collected under an arbitrary relative motion direction, not just the parallel case. Moreover, the statistical errors of the image feature point coordinates after registration are the smallest among the compared models. The accuracy of the registration results is better than that of other existing geometric transformation models, which verifies the correctness and generality of the geometric transformation model of the line-scan camera proposed in this paper.

1 Introduction

The line-scan camera is an imaging device with only a single column of sensor elements, characterized by high resolution and high imaging frequency [1,2,3,4,5]. The imaging principle of the line-scan camera is the same as that of the area-scan camera and also conforms to the perspective projection of the pinhole model, but the imaging models of the two kinds of camera are different [6]. The sensor of the area-scan camera is a two-dimensional array, and the camera is static relative to the object during imaging, so a single sampling yields the complete image of the object [7]. In contrast, relative motion is maintained between the line-scan camera and the object, which means that different image lines correspond to different imaging poses [8]. To obtain a useful line-scan image, the line-scan camera must move at a constant speed along a straight line relative to the object while keeping a fixed orientation [9, 10]. According to the imaging resolution required by the application, the motion speed should be matched with the line frequency of the camera so that the object appears at an undistorted scale in the image.

The initial pose variation of the line-scan camera influences where the object is projected in the image and eventually makes the object present different geometric shapes in the image. Image registration can eliminate the geometric differences of the object caused by different imaging poses of the camera. According to the complexity of the geometric transformation of the object in the image, a corresponding transformation model [11] is selected, such as the translation, rigid, similarity, affine or projection transformation model. Then, the geometric alignment of the two images is realized using image feature points [12,13,14,15] or a direct registration method [16,17,18]. Because the imaging models of the line-scan camera and the area-scan camera differ, the geometric changes of the object caused by pose variations also differ between the two kinds of image. Hence, a geometric transformation model (GTM) derived from the imaging model of the area-scan camera cannot be used to register line-scan images.

There are mainly two ways to realize the relative motion between the line-scan camera and the imaged object: either a stationary line-scan camera images a moving object, or a stationary object is scanned by a moving camera [19]. In both cases, it is difficult to ensure that the imaging plane of the line-scan camera is parallel to the object plane. For example, the Trouble of moving EMU Detection System (TEDS) developed by the relevant railway departments [20,21,22,23] is composed of ten line-scan cameras installed at different locations along the railway. The cameras at different locations cannot be guaranteed to have the same pose with respect to the railway; not only can the imaging planes not be parallel to the train surface, but the imaging poses of the cameras also differ from each other. Therefore, the train imaged by line-scan cameras at different locations exhibits geometric changes, and the GTM of the area-scan image cannot align these train images, which makes anomaly detection based on image registration impossible. To solve this problem, we previously established an eight-parameter GTM for registering line-scan images acquired under different initial poses [24]. However, that model only considers a relative motion direction parallel to the object plane; the geometric changes of the object in the image caused by arbitrary relative motion directions are ignored. Hence, the model lacks generality and cannot handle the registration of images acquired when the relative motion direction between the line-scan camera and the object is arbitrary.

In this paper, a more general GTM of the line-scan camera is proposed, which can represent the geometric transformation relationship between images when line-scan cameras with arbitrary initial poses and arbitrary motion directions scan the object. Based on the imaging model of the line-scan camera, the geometric changes of the object in the image under different motion directions are analysed: (i) motion along the x-axis of the camera coordinate system causes a shear change of the object in the image; (ii) a different motion speed along the y-axis of the camera coordinate system causes a scale change perpendicular to the sensor; and (iii) motion along the z-axis of the camera coordinate system causes a hyperbolic-arc change of the object in the image. To verify the correctness of the model, a line-scan image acquisition system is built, and the proposed GTM is verified by registration experiments on images acquired under arbitrary relative motion directions between the line-scan camera and the object. At the same time, the registration results show that the proposed model reduces to the eight-parameter line-scan image GTM when the relative motion direction is parallel to the object plane, which means the eight-parameter GTM is a particular case of our model and our model is more general.

Our contributions are listed as follows:

  1. A general geometric transformation model of the line-scan image is established. It is suitable for registering images collected by line-scan cameras under different initial poses and relative motion directions. In practical applications, the line-scan camera needs neither to match its motion speed with the sampling frequency nor to strictly adjust its pose and relative motion direction: the geometric differences caused by these factors can be eliminated with the proposed model, which makes applications of the line-scan camera more flexible and convenient.

  2. The geometric changes caused by different relative motion directions are analysed, including the shear and scale transformations that are also common in area-scan images, and the hyperbolic-arc transformation unique to line-scan images.

  3. A line-scan acquisition system is built to collect images with different imaging poses and relative motion directions, which not only verifies the correctness and generality of the proposed model but also provides a data set for line-scan image registration.

2 The general geometric transformation model of the line-scan image

2.1 The imaging model of the line-scan camera

Imaging is the process by which coordinate points in the three-dimensional (3D) world are mapped to the two-dimensional (2D) image plane [25]. First, a world coordinate system is established in the 3D world, together with a camera coordinate system whose origin is the optical centre of the camera. The points in the world coordinate system are then transformed into the camera coordinate system; this coordinate transformation is represented by the rotation matrix R and the translation vector T, which are called the external camera parameters [26]. According to the pinhole imaging principle, points in the camera coordinate system are projected onto the imaging plane coordinate system [27]. Generally, the imaging plane coordinate system takes the upper left corner of the image as its origin, so points mapped to it must be translated by the horizontal and vertical offsets of the principal point. The parameters describing the transformation from the camera coordinate system to the image plane coordinate system are called the internal camera parameters.

The imaging model of the area-scan camera is obtained by combining these two coordinate transformations. The imaging model of the line-scan camera differs in two respects. First, there is relative motion between the line-scan camera and the object, which means that the coordinate transformation between the world coordinate system and the camera coordinate system is not fixed; the relationship between the camera coordinate system at an arbitrary sampling time and that at time t = 0 must be established from the sampling time and the relative motion velocity [28]. Second, each sampling of the line-scan camera produces an image line that is one pixel high, so points in the world coordinate system mapped to the horizontal axis of the image coordinate system follow the perspective projection of the pinhole model, while the vertical image coordinate is determined by the camera's motion speed and sampling frequency. Therefore, the imaging model of the line-scan camera [29] is:

$$\begin{pmatrix} wu \\ v \\ w \end{pmatrix} = \begin{pmatrix} f & 0 & p_u \\ 0 & F & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -\frac{v_x}{v_y} & 0 \\ 0 & \frac{1}{v_y} & 0 \\ 0 & -\frac{v_z}{v_y} & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R_{3\times 3} & T_{3\times 1} \\ 0_{1\times 3} & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$$
(1)

where (X, Y, Z) is a point in the world coordinate system, (u, v) is the coordinate of the point mapped to the image, w is a scale factor, R and T represent the rotation matrix and the translation vector, respectively, (vx, vy, vz) is the relative movement speed between the camera and the object, F is the sampling frequency of the camera, f is the principal distance and pu is the principal point offset in the sensor direction.
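To make Eq. (1) concrete, the following minimal numpy sketch projects a single world point through the line-scan model; the function name, argument layout and example values are ours, not taken from the paper.

```python
import numpy as np

def linescan_project(P_w, R, T, v, f, F, pu):
    """Project a 3D world point onto a line-scan image following Eq. (1).

    P_w : (3,) world point (X, Y, Z)
    R, T: rotation (3x3) and translation (3,) of the camera at t = 0
    v   : (vx, vy, vz), relative velocity in the camera frame
    f   : principal distance; F: line sampling frequency
    pu  : principal point offset along the sensor
    """
    vx, vy, vz = v
    K = np.array([[f, 0, pu],
                  [0, F, 0],
                  [0, 0, 1]], dtype=float)          # internal parameters
    V = np.array([[1, -vx / vy, 0],
                  [0,  1 / vy,  0],
                  [0, -vz / vy, 1]], dtype=float)   # motion matrix of Eq. (1)
    Pc = R @ np.asarray(P_w, float) + np.asarray(T, float)  # world -> camera
    wu, v_img, w = K @ V @ Pc
    return wu / w, v_img   # only u carries the perspective division by w

# illustrative call with made-up parameter values
u, v_px = linescan_project((0.1, 0.2, 0.0),
                           np.eye(3), np.array([0.0, 0.0, 1.0]),
                           v=(0.0, 0.05, 0.0), f=1200.0, F=2000.0, pu=512.0)
```

Note that, as in Eq. (1), the v coordinate is not divided by the scale factor w: the vertical axis is generated by the motion and sampling frequency rather than by perspective projection.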

2.2 The geometric variation caused by the motion direction change of the line-scan camera

The variations of the internal and external parameters in the camera imaging model change where a world point appears in the image. The former means the image is taken by a different device, while the latter means the image is acquired under a different imaging pose. According to the imaging model of the line-scan camera, i.e. Eq. (1), the position of a 3D world point in the image is also changed by a variation of the velocity. Figure 1 summarizes the geometric variations of the object when the motion velocity of the camera changes. These images are all simulated with the line-scan camera imaging model, as follows. First, the world coordinate system is established by taking an image containing a checkerboard pattern as the imaging object; the distance represented by each pixel is set, and the internal and external parameters of the line-scan imaging system are fixed. Second, the velocity (vx, vy, vz) is given, and the points of the checkerboard image in the world coordinate system are mapped to the line-scan image coordinate system according to Eq. (1). Finally, the grey values of the simulated image are obtained by bilinear interpolation. By changing (vx, vy, vz), simulations of the object captured by the line-scan camera under different relative motion velocities are obtained.
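The simulation just described can be sketched in code. The version below is an inverse-mapping variant (our assumption, chosen for simplicity): for a planar object, Eq. (1) collapses to a single 3×3 matrix M mapping plane coordinates to the image, as shown later in Eq. (3), so each output pixel can be traced back to the plane with a 2×2 linear solve and sampled with bilinear interpolation.

```python
import numpy as np

def simulate_linescan(src, M, out_shape):
    """Render a line-scan view of a planar pattern (a hedged sketch).

    src       : 2D grey-level array; src[Y, X] is the pattern on the plane,
                with (X, Y) playing the role of world plane coordinates.
    M         : 3x3 combined matrix of Eq. (3), (w*u, v, w)^T = M (X, Y, 1)^T.
    out_shape : (rows, cols) of the simulated line-scan image.
    """
    H, W = out_shape
    out = np.zeros(out_shape)
    m1, m2, m3, m4, m5, m6, m7, m8, m9 = M.ravel()
    for v in range(H):
        for u in range(W):
            # invert Eq. (3) for this pixel: two linear equations in (X, Y)
            A = np.array([[m1 - u * m7, m2 - u * m8],
                          [m4,          m5        ]])
            b = np.array([u * m9 - m3, v - m6])
            try:
                X, Y = np.linalg.solve(A, b)
            except np.linalg.LinAlgError:
                continue
            x0, y0 = int(np.floor(X)), int(np.floor(Y))
            if 0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1:
                dx, dy = X - x0, Y - y0            # bilinear interpolation
                out[v, u] = ((1 - dx) * (1 - dy) * src[y0, x0]
                             + dx * (1 - dy) * src[y0, x0 + 1]
                             + (1 - dx) * dy * src[y0 + 1, x0]
                             + dx * dy * src[y0 + 1, x0 + 1])
    return out
```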

Fig. 1

The effects of line-scan camera motion velocity variation on imaging. a Original picture, the camera velocity is (vx, vy, vz) = (0, v1, 0); b, c the motion velocity of the line-scan camera is 0.1v1 and −0.1v1 along the x-axis of the camera coordinate system, respectively, i.e. (vx, vy, vz) = (0.1v1, v1, 0) and (vx, vy, vz) = (−0.1v1, v1, 0); d, e the motion velocity of the line-scan camera increases or decreases along the y-axis of the camera coordinate system, i.e. (vx, vy, vz) = (0, 1.2v1, 0) and (vx, vy, vz) = (0, 0.92v1, 0); f, g the motion velocity of the line-scan camera is 0.12v1 and −0.12v1 along the z-axis of the camera coordinate system, respectively, i.e. (vx, vy, vz) = (0, v1, 0.12v1) and (vx, vy, vz) = (0, v1, −0.12v1)

Figure 1a is the image obtained by the line-scan camera under the ideal condition that the imaging plane of the camera is parallel to the object plane and the motion direction is perpendicular to the imaging sensor, that is, the motion velocity in the camera coordinate system is (vx, vy, vz) = (0, v1, 0). When the line-scan camera has a velocity component along the direction of the sensor, i.e. vx ≠ 0, the object shifts along the sensor direction at each sampling. Since the captured columns are spliced directly, the object appears sheared in the image, as shown in Fig. 1b, c. Note that this shear appears whenever vx ≠ 0, independently of the particular values used in the simulation, i.e. 0.1 and −0.1; the magnitude of vx only affects the degree of shear, and the exact values used here were chosen for ease of observation. The same holds for the parameters of the subsequent simulations. When vy increases while the sampling frequency F is unchanged, the total number of samplings of the object decreases, so the object scales down along the v-axis in the image, as shown in Fig. 1d. Conversely, when vy decreases, the object scales up along the v-axis, as shown in Fig. 1e. If there is a nonzero velocity component along the optical axis, the camera moves towards or away from the object during scanning. The magnification of each sampled line then grows or shrinks as the distance between the camera and the object decreases or increases, which maps straight contours of the object to hyperbolic arcs in the image [30], as shown in Fig. 1f and g.

In our previous research, an eight-parameter line-scan image GTM was established, which ignored the motion of the line-scan camera along the optical axis, i.e. vz = 0. In that scenario, the relative motion direction is parallel to the object plane: no matter how the pose of the camera changes, the distance between the optical centre and the object does not change during the relative motion. As shown in Fig. 2a, when the line-scan camera scans a planar object while moving parallel to it, the distances z1, z2 and z3 between the optical centre and the object at times t1, t2 and t3 remain equal. In this case, the eight-parameter GTM can express the geometric transformation relationship between line-scan images acquired under different imaging poses. On the contrary, when the relative motion direction is arbitrary, the distance between the optical centre and the object changes because of the motion component along the optical axis. As shown in Fig. 2b, the line-scan camera images the planar object at times t1, t2 and t3, and the z-axis component of its motion velocity causes z1, z2 and z3 to differ. The geometric changes shown in Fig. 1f, g are thus generated in the line-scan images of the object. Neither the projection transformation model of the area-scan image nor the eight-parameter GTM of the line-scan image can achieve geometric alignment of line-scan images in this case.

Fig. 2

Distance change from the optical centre to the planar object of the line-scan camera under parallel relative motion direction and arbitrary relative motion direction. a The motion direction of the line-scan camera is parallel to the imaged object; the distance z1, z2 and z3 from the optical centre to the object does not change with the relative motion. b The line-scan camera moves in any direction relative to the imaged object; the distance z1, z2 and z3 from the optical centre to the object changes with the relative motion

2.3 Derivation of the general geometric transformation model of the line-scan image

Suppose the object is imaged by two line-scan cameras with different initial poses and motion directions, as shown in Fig. 3. According to Eq. (1), the imaging models of line-scan camera I (LSC I) and line-scan camera II (LSC II) can be expressed as:

$$\begin{aligned} \text{LSC I:}&\quad \begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix} = \begin{pmatrix} f_1 & 0 & p_{u_1} \\ 0 & F_1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -\frac{v_{x_1}}{v_{y_1}} & 0 \\ 0 & \frac{1}{v_{y_1}} & 0 \\ 0 & -\frac{v_{z_1}}{v_{y_1}} & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R_1 & T_1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \\ \text{LSC II:}&\quad \begin{pmatrix} w_2 u_2 \\ v_2 \\ w_2 \end{pmatrix} = \begin{pmatrix} f_2 & 0 & p_{u_2} \\ 0 & F_2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -\frac{v_{x_2}}{v_{y_2}} & 0 \\ 0 & \frac{1}{v_{y_2}} & 0 \\ 0 & -\frac{v_{z_2}}{v_{y_2}} & 1 \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} R_2 & T_2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} \end{aligned}$$
(2)

where (X, Y, Z) is a point on the object plane and (u1, v1) and (u2, v2) are the coordinates of the object plane points mapped to the images of LSC I and LSC II, respectively.

Fig. 3

Line-scan camera imaging diagram under different initial poses and motion directions

Let the object plane coincide with the XOY plane of the world coordinate system. In this case, the components of all points on the object in the Z-axis of the world coordinate system are 0. Hence, the components related to Z can be ignored in the imaging model, and the matrices in the imaging models of LSC I and LSC II can be combined to obtain their simplified expressions as follows:

$$\begin{aligned} \text{LSC I:}&\quad \begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix} = \begin{pmatrix} m_1 & m_2 & m_3 \\ m_4 & m_5 & m_6 \\ m_7 & m_8 & m_9 \end{pmatrix} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} = M \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \\ \text{LSC II:}&\quad \begin{pmatrix} w_2 u_2 \\ v_2 \\ w_2 \end{pmatrix} = \begin{pmatrix} n_1 & n_2 & n_3 \\ n_4 & n_5 & n_6 \\ n_7 & n_8 & n_9 \end{pmatrix} \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} = N \begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} \end{aligned}$$
(3)

where m1 ~ m9 and n1 ~ n9 are the combinations of the internal parameters, external parameters and motion parameters of LSC I and LSC II, respectively.
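For reference, the combination can be written out explicitly. With Z = 0 the third column of [R | T] drops out, so M is the product of the internal parameter matrix, the motion matrix and the reduced external matrix, where r1 and r2 denote the first two columns of R:

$$M = \begin{pmatrix} f & 0 & p_u \\ 0 & F & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & -\frac{v_x}{v_y} & 0 \\ 0 & \frac{1}{v_y} & 0 \\ 0 & -\frac{v_z}{v_y} & 1 \end{pmatrix} \begin{pmatrix} r_1 & r_2 & T \end{pmatrix}$$

N is formed analogously from the parameters of LSC II, and the expression m9 = tz − vzty/vy used below follows directly from the third row of this product.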

The elements m1, m2, m4 and m5 of the matrix M come from a combination of the internal parameter matrix, the motion matrix and the rotation matrix. Since all of these matrices are of full rank, the submatrix (m1, m2; m4, m5) is also of full rank. According to the imaging model of the line-scan camera, m9 = tz − vzty/vy. Since the origin of the line-scan camera coordinate system can never coincide with the origin of the world coordinate system, m9 cannot be 0 at any time. Therefore, the matrix M is invertible, and the following form is obtained:

$$\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix} = M^{-1} \begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix} = \begin{pmatrix} M_1 & M_2 & M_3 \\ M_4 & M_5 & M_6 \\ M_7 & M_8 & M_9 \end{pmatrix} \begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix}$$
(4)

According to Eq. (4), the relation between w1 and the image coordinate (u1, v1) can be written as:

$$1 = M_7 w_1 u_1 + M_8 v_1 + M_9 w_1 \;\Leftrightarrow\; w_1 = \frac{1 - M_8 v_1}{M_7 u_1 + M_9}$$
(5)

By substituting Eq. (4) into LSC II of Eq. (3), one obtains:

$$\begin{pmatrix} w_2 u_2 \\ v_2 \\ w_2 \end{pmatrix} = N M^{-1} \begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix} = \begin{pmatrix} O_1 & O_2 & O_3 \\ O_4 & O_5 & O_6 \\ O_7 & O_8 & O_9 \end{pmatrix} \begin{pmatrix} w_1 u_1 \\ v_1 \\ w_1 \end{pmatrix}$$
(6)

where O1 ~ O9 are the elements of the matrix product NM−1.

By substituting Eq. (5) into Eq. (6), we thus have:

$$\begin{cases} w_2 u_2 = \dfrac{O_1 u_1 + (O_2 M_7 - O_1 M_8) u_1 v_1 + (O_2 M_9 - O_3 M_8) v_1 + O_3}{M_7 u_1 + M_9} \\[2mm] v_2 = \dfrac{O_4 u_1 + (O_5 M_7 - O_4 M_8) u_1 v_1 + (O_5 M_9 - O_6 M_8) v_1 + O_6}{M_7 u_1 + M_9} \\[2mm] w_2 = \dfrac{O_7 u_1 + (O_8 M_7 - O_7 M_8) u_1 v_1 + (O_8 M_9 - O_9 M_8) v_1 + O_9}{M_7 u_1 + M_9} \end{cases}$$
(7)

By eliminating w2 in Eq. (7), we get:

$$\begin{cases} u_2 = \dfrac{O_1 u_1 + (O_2 M_7 - O_1 M_8) u_1 v_1 + (O_2 M_9 - O_3 M_8) v_1 + O_3}{O_7 u_1 + (O_8 M_7 - O_7 M_8) u_1 v_1 + (O_8 M_9 - O_9 M_8) v_1 + O_9} \\[2mm] v_2 = \dfrac{O_4 u_1 + (O_5 M_7 - O_4 M_8) u_1 v_1 + (O_5 M_9 - O_6 M_8) v_1 + O_6}{M_7 u_1 + M_9} \end{cases}$$
(8)

O1 ~ O9 and M1 ~ M9 are determined by the internal parameters, external parameters and motion parameters of LSC I and LSC II. These parameters are fixed before imaging, so their combinations are constants. Denoting the coefficients of the LSC I coordinates in Eq. (8) by h1 ~ h12, Eq. (8) can be written in the following matrix form:

$$\begin{pmatrix} s_1 u_2 \\ s_1 \\ s_2 v_2 \\ s_2 \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 & h_4 \\ h_5 & h_6 & h_7 & 1 \\ h_8 & h_9 & h_{10} & h_{11} \\ h_{12} & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_1 v_1 \\ v_1 \\ 1 \end{pmatrix}$$
(9)

Equation (9) is the general-form GTM of line-scan images established in this paper. (u1, v1) and (u2, v2) are the pixel coordinates in the two line-scan images, s1 and s2 are scale factors of the image coordinates, and h1 ~ h12 are the geometric transformation parameters that express the coordinate mapping between the two line-scan images. The geometric transformation forms expressed by the 12 parameters of the model are shown in Table 1. The parameters in Eq. (9) are determined by the combination of the internal, external and motion parameters of the two acquisitions. However, estimating them requires no prior knowledge of the imaging system: they are estimated only from the information in the two images to be registered.
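Read row by row, the second and fourth rows of Eq. (9) give the scale factors s1 and s2, and dividing the first and third rows by them yields (u2, v2). A minimal sketch of this forward mapping (our code, not the authors'):

```python
import numpy as np

def apply_gtm12(u1, v1, h):
    """Map (u1, v1) to (u2, v2) with the 12-parameter model of Eq. (9).

    h holds (h1, ..., h12); names follow Eq. (9). Works elementwise on
    numpy arrays as well as on scalars.
    """
    h1, h2, h3, h4, h5, h6, h7, h8, h9, h10, h11, h12 = h
    s1 = h5 * u1 + h6 * u1 * v1 + h7 * v1 + 1.0   # second row of Eq. (9)
    s2 = h12 * u1 + 1.0                            # fourth row of Eq. (9)
    u2 = (h1 * u1 + h2 * u1 * v1 + h3 * v1 + h4) / s1
    v2 = (h8 * u1 + h9 * u1 * v1 + h10 * v1 + h11) / s2
    return u2, v2
```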

Table 1 Geometric transformation forms represented by the parameters of the model established in this paper

The 12-parameter GTM of the line-scan image established in this paper differs substantially from the projection transformation model of the area-scan image [31] [see Eq. (10)] and from the eight-parameter GTM of the line-scan image [24] [see Eq. (11)]. First, the proposed GTM contains a quadratic term of the image coordinates, i.e. the u1v1 term: when the line-scan camera approaches or recedes from the object, straight lines are projected to hyperbolic arcs. Second, the horizontal and vertical coordinates of the warped image have independent scale factors: according to the imaging model of the line-scan camera, the image is a perspective projection along the sensor direction and an orthographic projection along the motion direction, so the geometric transformation rules of the horizontal and vertical coordinates differ.

$$\begin{pmatrix} s u_2 \\ s v_2 \\ s \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix}$$
(10)
$$\begin{pmatrix} s u_2 \\ v_2 \\ s \end{pmatrix} = \begin{pmatrix} h_1 & h_2 & h_3 \\ h_4 & h_5 & h_6 \\ h_7 & h_8 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ v_1 \\ 1 \end{pmatrix}$$
(11)

where s in Eqs. (10) and (11) is a scale factor of the image coordinates.
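For comparison, the eight-parameter model of Eq. (11) can be sketched the same way; note that, up to renaming of the parameters, setting h2 = h6 = h9 = h12 = 0 in Eq. (9) (which removes the u1v1 terms and the denominator of v2) recovers exactly this form, consistent with the eight-parameter GTM being a particular case of the proposed model.

```python
def apply_gtm8(u1, v1, h):
    """Eight-parameter line-scan model, Eq. (11): only u2 is
    perspective-divided; v2 is an affine function of (u1, v1)."""
    h1, h2, h3, h4, h5, h6, h7, h8 = h
    s = h7 * u1 + h8 * v1 + 1.0
    return (h1 * u1 + h2 * v1 + h3) / s, h4 * u1 + h5 * v1 + h6
```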

3 Verification method and experimental data

3.1 Verification method

The GTM is used to register two images acquired by different cameras, at different times and under different poses. In the image registration process, one image is selected as the template image and the other is selected as the target image. The optimal geometric transformation parameters can be used to warp the target image such that the transformed target image geometrically matches the template image. When the selected GTM cannot satisfy the geometric transformation relation between the template image and the target image, the two images cannot be geometrically aligned. Therefore, the feature-based registration method will be used to align two line-scan images in this paper, and the correctness and generality of the proposed GTM will be illustrated by the registration results.

The process of the feature-based registration method is described as follows. The feature points of the template image and the target image are extracted and then matched according to their information. Ignoring mismatching, each pair of matched feature points can be considered as the same point on the object, and m pairs of matched feature points are obtained. Equation (9) can be rewritten in the following form:

$$\begin{cases} u_2 = h_1 u_1 + h_2 u_1 v_1 + h_3 v_1 + h_4 - h_5 u_1 u_2 - h_6 u_1 v_1 u_2 - h_7 v_1 u_2 \\ v_2 = h_8 u_1 + h_9 u_1 v_1 + h_{10} v_1 + h_{11} - h_{12} u_1 v_2 \end{cases}$$
(12)

According to Eq. (12), the parameters h1 ~ h7 and h8 ~ h12 can be estimated from the u2 and v2 equations, respectively. The u2 equation contains seven parameters, so seven equations must be combined to solve for them. Since each pair of matched feature points provides one such equation, at least seven pairs are required to calculate the geometric transformation parameters of the two registered images. Therefore, seven pairs are randomly selected from the m matched pairs, with coordinates denoted (u1i, v1i) and (u2i, v2i), i = 1…7; the first five pairs also provide the five equations needed for h8 ~ h12. According to Eq. (12), one gets the following form:

$$\begin{cases} h_1 u_{11} + h_2 u_{11} v_{11} + h_3 v_{11} + h_4 - h_5 u_{11} u_{21} - h_6 u_{11} v_{11} u_{21} - h_7 v_{11} u_{21} = u_{21} \\ \quad\vdots \\ h_1 u_{17} + h_2 u_{17} v_{17} + h_3 v_{17} + h_4 - h_5 u_{17} u_{27} - h_6 u_{17} v_{17} u_{27} - h_7 v_{17} u_{27} = u_{27} \\ h_8 u_{11} + h_9 u_{11} v_{11} + h_{10} v_{11} + h_{11} - h_{12} u_{11} v_{21} = v_{21} \\ \quad\vdots \\ h_8 u_{15} + h_9 u_{15} v_{15} + h_{10} v_{15} + h_{11} - h_{12} u_{15} v_{25} = v_{25} \end{cases}$$
(13)

Equation (13) is rewritten in the matrix form:

$$\begin{cases} \begin{pmatrix} u_{11} & u_{11} v_{11} & v_{11} & 1 & -u_{11} u_{21} & -u_{11} v_{11} u_{21} & -v_{11} u_{21} \\ & & & \vdots & & & \\ u_{17} & u_{17} v_{17} & v_{17} & 1 & -u_{17} u_{27} & -u_{17} v_{17} u_{27} & -v_{17} u_{27} \end{pmatrix} \begin{pmatrix} h_1 \\ \vdots \\ h_7 \end{pmatrix} = \begin{pmatrix} u_{21} \\ \vdots \\ u_{27} \end{pmatrix} \\[4mm] \begin{pmatrix} u_{11} & u_{11} v_{11} & v_{11} & 1 & -u_{11} v_{21} \\ & & \vdots & & \\ u_{15} & u_{15} v_{15} & v_{15} & 1 & -u_{15} v_{25} \end{pmatrix} \begin{pmatrix} h_8 \\ \vdots \\ h_{12} \end{pmatrix} = \begin{pmatrix} v_{21} \\ \vdots \\ v_{25} \end{pmatrix} \end{cases}$$
(14)

Equation (14) can be abbreviated as:

$$\begin{cases} A_1 H_1 = B_1 \\ A_2 H_2 = B_2 \end{cases}$$
(15)

Then, the geometric transformation parameters of the two line-scan images can be obtained by the following formula:

$$\begin{cases} H_1 = A_1^{-1} B_1 \\ H_2 = A_2^{-1} B_2 \end{cases}$$
(16)
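The linear systems of Eqs. (14)–(16) can be assembled and solved directly. The sketch below (our naming) uses seven matched pairs for h1 ~ h7 and, following Eq. (13), the first five of them for h8 ~ h12:

```python
import numpy as np

def solve_gtm12(p1, p2):
    """Estimate h1..h12 from matched points via Eqs. (13)-(16).

    p1, p2 : arrays of shape (7, 2) holding (u, v) in image 1 and image 2;
    the first five pairs also provide the v-equations, as in Eq. (13).
    """
    u1, v1 = p1[:, 0], p1[:, 1]
    u2, v2 = p2[:, 0], p2[:, 1]
    ones = np.ones(7)
    A1 = np.column_stack([u1, u1 * v1, v1, ones,
                          -u1 * u2, -u1 * v1 * u2, -v1 * u2])   # 7 x 7
    H1 = np.linalg.solve(A1, u2)                                # h1..h7
    A2 = np.column_stack([u1[:5], (u1 * v1)[:5], v1[:5],
                          ones[:5], (-u1 * v2)[:5]])            # 5 x 5
    H2 = np.linalg.solve(A2, v2[:5])                            # h8..h12
    return np.concatenate([H1, H2])
```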

Due to feature point extraction errors and mismatches, geometric parameters estimated from inaccurately matched feature point pairs cannot represent the global geometric mapping. To avoid this, the geometric transformation parameters of the two images are solved iteratively in this paper. A solution estimated from inaccurately matched pairs yields a larger feature point coordinate error, so the root mean square error of the feature point coordinates is taken as the measure of optimality, and the parameters corresponding to the minimum coordinate error are selected as the optimal geometric transformation parameters. The specific registration process is described in Algorithm 1. The computational complexity of each iteration of Algorithm 1 is as follows: step 2 takes O(n) time, where n is the number of feature point pairs; step 3 takes O(n3) time to invert the matrix A1; step 4 transforms the M feature points of image I2 into image I1 with complexity O(nM); and step 5 takes O(M) time. The total computational complexity of each iteration is therefore O(n3 + nM).

Algorithm 1 Iterative estimation of the optimal geometric transformation parameters
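Algorithm 1 appears as a figure in the original article; the loop below is our reconstruction of the procedure described in the text, under the assumption that each iteration samples seven matched pairs, solves Eq. (16), warps all feature points and keeps the parameters with the smallest RMSE. It relies on apply_gtm12 and solve_gtm12 from the sketches above.

```python
import numpy as np

def register_linescan(pts1, pts2, n_iter=100_000, rng=None):
    """RANSAC-style search for the optimal parameters (a sketch).

    pts1, pts2 : (m, 2) matched feature coordinates in the two images.
    Returns the parameter vector with the smallest RMSE over all pairs.
    """
    rng = rng or np.random.default_rng()
    best_h, best_rmse = None, np.inf
    m = len(pts1)
    for _ in range(n_iter):
        idx = rng.choice(m, size=7, replace=False)       # step 2: sample pairs
        try:
            h = solve_gtm12(pts1[idx], pts2[idx])        # step 3: solve Eq. (16)
        except np.linalg.LinAlgError:
            continue                                     # degenerate sample
        u2, v2 = apply_gtm12(pts1[:, 0], pts1[:, 1], h)  # step 4: warp all points
        err = np.hypot(u2 - pts2[:, 0], v2 - pts2[:, 1])
        rmse = np.sqrt(np.mean(err ** 2))                # step 5: RMSE criterion
        if rmse < best_rmse:
            best_h, best_rmse = h, rmse
    return best_h, best_rmse
```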

3.2 Experimental data

To verify the correctness of the GTM established in this paper, a line-scan image acquisition system was built, composed of a one-dimensional motion platform, a line-scan camera, a tripod and a planar object. With it, line-scan images can be acquired under different initial poses and different motion directions. The pose of the line-scan camera and the motion velocity are adjusted before each acquisition, which produces different geometric deformations of the object in the line-scan images. The model established in this paper can eliminate the geometric changes caused by these differing imaging parameters; the parameters need not be known a priori, and they impose no restriction on the parameter estimation in Algorithm 1. A checkerboard pattern is selected as the planar object because its feature points are rich and evenly distributed: they can be extracted with high accuracy, and their regular layout avoids mismatching. The influence of feature point extraction errors and mismatches on the registration results is thus reduced, so the checkerboard line-scan images are well suited to verifying the correctness of the established model.

The tripod and the line-scan camera are placed on the one-dimensional motion platform to realize an arbitrary relative motion direction between the line-scan camera and the object, as shown in Fig. 4a. The platform moves uniformly in a straight line while the object is scanned by the line-scan camera to complete an image acquisition. The imaging pose of the line-scan camera is changed by rotating the tripod, and the motion direction is changed by shifting the one-dimensional motion platform. Figure 5 shows examples of the line-scan images collected in this scenario; geometric distortions of the hyperbolic-arc form are generated in the images.

Fig. 4

Line-scan image acquisition system. a The line-scan camera scans the object under different initial poses and motion directions; b The line-scan camera scans under different initial poses and parallel motion directions relative to the planar object

Fig. 5

Line-scan images collected by the line-scan camera under different initial poses and motion directions

In the second scenario, the line-scan camera is fixed on the tripod on the ground, and the planar object is placed on the one-dimensional motion platform, realizing a relative motion direction parallel to the object plane, as shown in Fig. 4b. The object passes in front of the camera at a constant speed on the platform to complete an image acquisition, and different poses of the camera are achieved by rotating and shifting the tripod. Figure 6 shows examples of the checkerboard images collected with the acquisition system in Fig. 4b. Owing to the different imaging poses of the camera, the checkerboard pattern presents different geometric shapes in the images.

Fig. 6

Line-scan images collected by the line-scan camera under different initial poses and parallel motion direction relative to the planar object

4 Experimental results

4.1 Model validation based on our data

The GTM of the line-scan image established in this paper is compared with the eight-parameter GTM of the line-scan image (Eq. (11), short for the eight-parameter model in the following) and the projection transformation model of the area-scan image (Eq. (10), short for the projection model in the following). The former is the only existing model that conforms to the geometric transformation law of line-scan images, while the latter represents the most complex geometric transformation of area-scan images. Line-scan images collected in the two scenarios of Fig. 4 are selected as the experimental data, and the three GTMs are used to register the line-scan image pairs, where each registered pair comes from the same scenario. In the scenario where the line-scan camera scans the object under an arbitrary relative motion direction, 31 line-scan images are collected, yielding 465 image pairs to be registered; in the scenario where the relative motion direction is parallel to the object, 20 line-scan images are collected, yielding 190 image pairs. The method in Ref. [32] is used to extract the feature points of the checkerboard pattern in the line-scan images.

The registration method for the eight-parameter model and the projection model is the same as in Algorithm 1. In the eight-parameter model, h7 and h8 appear only in the u2 equation, so there are five unknown parameters in u2 and three in v2; therefore, five matched feature point pairs are needed to solve the parameters of the eight-parameter model. In the projection model, h7 and h8 appear in both the u2 and v2 equations, so the parameters are solved by combining them; since each matched pair provides two equations, four pairs are needed. The number of iterations is set to 100,000 so that the parameter estimation of each model can reach a good result. The registration results based on the different GTMs are evaluated subjectively and objectively. Subjectively, the target image is transformed into the coordinate system of the template image with the optimal geometric transformation parameters, and the overlaid image and feature points are drawn: if two images are geometrically aligned, the checkerboard intersections of the two images coincide in the same coordinate system. Objectively, the mean error (ME), the root mean square error (RMSE) and the correctly matched rate (CMR) of the image feature point coordinates are calculated. CMR is calculated according to Eq. (17), where m is the number of matched feature point pairs and m_in is the number of pairs whose coordinate error after registration is less than 1 pixel. Since the feature points of the checkerboard pattern are evenly distributed, the higher the CMR, the better the global registration accuracy. The computational time of the registration process based on each GTM is also recorded. The correctness of the GTM established in this paper is quantified through these statistical indexes.

$$\text{CMR} = \frac{m_{\text{in}}}{m}$$
(17)
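The three indexes are straightforward to compute; a small sketch under the paper's definitions (coordinate errors in pixels, 1-pixel threshold for CMR):

```python
import numpy as np

def registration_metrics(p_ref, p_warp):
    """ME, RMSE and CMR (Eq. (17)) of matched feature coordinates.

    p_ref, p_warp : (m, 2) template coordinates and warped target coordinates.
    """
    err = np.linalg.norm(p_ref - p_warp, axis=1)   # per-pair coordinate error
    me = err.mean()
    rmse = np.sqrt((err ** 2).mean())
    cmr = (err < 1.0).mean()                        # fraction within 1 pixel
    return me, rmse, cmr
```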

4.1.1 Image registration results collected under different motion directions

Figure 5a is selected as the template image, and Fig. 5b–d are selected as the target images. Table 2 shows the ME, RMSE and CMR of the feature point coordinates and the computational time of the registration. The ME and RMSE after registration with the proposed GTM are the smallest: both errors are below 0.5 pixels, so the geometric alignment accuracy of the line-scan images is very high. After registration with the eight-parameter model, the ME and RMSE of the feature point coordinates are between 1 and 3 pixels, and after registration with the projection model they exceed 11 pixels, meaning that image pairs registered with these two GTMs are not geometrically aligned. The CMR of the three image pairs registered with the model proposed in this paper is nearly 100%, indicating very high-precision global geometric alignment. By contrast, the CMR obtained with the other two GTMs is below 33%.

Table 2 Coordinate error statistics of feature points registered based on three GTMs and their computational time

Figure 7 shows box plots of the statistical indexes after registration of all images collected in the scenario of Fig. 4a. The ME and RMSE of the 465 image pairs registered with the proposed model are very small: the registration accuracy is high, and the proposed model can express the geometric transformation relationship of line-scan images collected in this scenario. Figure 7c shows the CMR of the feature points after registration with the three GTMs. For 75% of the 465 image pairs registered with our model, the CMR is nearly 100%, and the lowest CMR is still around 80%. By contrast, the other two models cannot represent the geometric transformation law of the images collected in this scenario: for 75% of the 465 image pairs their CMR is under 80%, and the lowest CMR is close to 0%. Since more parameters must be solved in the model established in this paper, registering the line-scan images takes more time than with the other two GTMs, as shown in Fig. 7d. However, only the proposed GTM can align the images collected in this scenario.

Fig. 7

The box plots of the coordinate error statistics of feature points after registration and the registration computational time of 465 line-scan image pairs. a ME of feature point pairs, b RMSE of feature point pairs, c CMR of feature point pairs, d computational time

Figure 8 shows the coincidence of feature point pairs registered with the three GTMs, presenting the registration accuracy intuitively. The first column of Fig. 8 contains the registration results based on the GTM established in this paper: the feature points of the target image almost coincide with those of the template image, meaning the coordinate errors are small and the proposed model achieves excellent geometric alignment accuracy. The second and third columns of Fig. 8 contain the registration results based on the eight-parameter model and the projection model, respectively. The feature points of the template image do not coincide with those of the target image, indicating that the images are not geometrically aligned. This also shows that these two GTMs do not conform to the geometric transformation law of images acquired when the relative motion direction between the line-scan camera and the object is arbitrary.

Fig. 8

Overlap degree of feature points after registration using three GTMs. First column: the registration results based on the GTM established in this paper. Second column: the registration results based on the eight-parameter model. Third column: the registration results based on the projection model. The feature points in the boxes are not aligned. a–c The registration result of Fig. 5a and b, d–f The registration result of Fig. 5a and c, g–i The registration result of Fig. 5a and d

4.1.2 Image registration results collected under parallel motion direction

The model established in this paper accounts for changes in both pose and motion direction, so it should contain the geometric transformation relationship of the eight-parameter model as a special case. In this subsection, the proposed model and the eight-parameter model are used to register line-scan images acquired under a motion direction parallel to the planar object, to verify whether the proposed model can also handle this scenario.

Figure 6a is selected as the template image, and Fig. 6b–d are selected as the target images. Table 3 shows the statistical errors of the feature point coordinates after registration and the computational time for the sample images in Fig. 6 under the two GTMs. As shown in Table 3, the ME and RMSE of the feature point coordinates are both below 1 pixel, and the CMR shows that both models achieve high global registration accuracy for images acquired in this scenario. In terms of computational time, the model established in this paper still takes longer to solve. The first row of Fig. 9 shows the registration results based on the proposed model, and the second row those based on the eight-parameter model. The feature point pairs transformed into the same coordinate system coincide closely, and it is difficult to distinguish visually which model registers more accurately. This indicates that the GTM established in this paper contains the geometric transformation law described by the eight-parameter line-scan image GTM.

Table 3 Coordinate error statistics of feature points registered based on two GTMs and their computational time
Fig. 9

Overlap degree of feature points after registration using two GTMs. First row: the registration results based on the GTM established in this paper. Second row: the registration results based on the eight-parameter model. a, d The registration result of Fig. 6a and b, b, e The registration result of Fig. 6a and c, c, f The registration result of Fig. 6a and d.

Figure 10 shows the statistical errors of the feature point coordinates after registration and the computational time for all images collected in the scenario of Fig. 4b. Although the ME and RMSE of the 190 line-scan image pairs registered with the eight-parameter model are below 1 pixel, the registration accuracy of the model proposed in this paper is higher. Figure 10c shows the CMR after registration of the 190 line-scan image pairs: the proposed model also achieves high-precision global registration of line-scan images collected under the parallel motion direction, confirming its universality. For fairness, the iteration number of the experiments is set very high, so image registration based on the proposed model requires more computational time, as shown in Fig. 10d. When the iteration number is reduced, the computational time of both GTMs becomes small or even negligible.

Fig. 10

The box plots of the coordinate error statistics of feature points after registration and the registration computational time of 190 line-scan image pairs. a ME of feature point pairs, b RMSE of feature point pairs, c CMR of feature point pairs, d computational time

4.2 Registration results on real train line-scan images

In this subsection, train line-scan images are used as registration data to verify the correctness of the model established in this paper. As mentioned in the Introduction, the geometric shape of the train differs between line-scan images collected by TEDS cameras installed at different railway locations. An area of the locomotive is selected as the reference because its surface is not parallel to the motion direction of the train: as shown in Fig. 11a, d, the distance between the line-scan camera and the train surface goes from far to near during the movement. The boxed regions are selected as template images, as shown in Fig. 11b and e, and Fig. 11c and f shows the target images to be registered.

Fig. 11

Train line-scan images. a and d are reference images, b and e are template images, and c and f are target images

RIFT [12] feature points are extracted from the train line-scan images, and seven matched feature point pairs are manually selected. Then, the parameters of the GTM established in this paper and of the eight-parameter model are estimated, respectively. The registration results are presented as pseudo-colour images. Figure 12a and c shows the registration results based on our model: the geometry of the template images is precisely aligned with that of the warped target images. On the contrary, the registration results using the eight-parameter model still contain local geometric errors, as shown in the boxes in Fig. 12b and d. The registration results in this practical application also demonstrate the correctness and generality of the GTM proposed in this paper.

Fig. 12

Registration results of train line-scan images. a and c are the registration results based on the GTM established in this paper, and b and d are the registration results based on the eight-parameter transformation model; the areas in the boxes are not aligned

5 Conclusions

Considering the geometric changes of the object in the image caused by different initial poses and motion directions of line-scan cameras, a general GTM with 12 parameters is established based on the imaging model of the line-scan camera. The model contains the geometric relationship described by the eight-parameter line-scan image GTM, and this relationship is analogous to that between the affine and projection transformation models among the GTMs of area-scan images. The biggest difference between the proposed model and the eight-parameter line-scan image GTM is that our model can express the hyperbolic-arc geometric transformation caused by relative motion that moves the line-scan camera towards or away from the imaged object. The correctness and generality of the proposed GTM are verified by registration experiments on line-scan images collected in two scenarios: the line-scan camera moving in an arbitrary direction, and the relative motion parallel to the object. Furthermore, its accuracy in a real application is verified by the registration of train line-scan images. The GTM established in this paper broadens the use conditions of the line-scan camera: it no longer has to move parallel to the object plane and perpendicular to the sensor, allowing more flexible line-scan acquisition systems in practical applications.

Availability of data and materials

The data sets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

GTM:

Geometric transformation model

TEDS:

Trouble of moving EMU (electric multiple unit) Detection System

3D:

Three-dimensional

2D:

Two-dimensional

LSC I:

Line-scan camera I

LSC II:

Line-scan camera II

RIFT:

Radiation-variation insensitive feature transform

References

  1. B. Sun, J. Zhu, L. Yang, S.R. Yang, Z.Y. Niu, Calibration of line-scan cameras for precision measurement. Appl. Optics 55(25), 6836–6843 (2016). https://doi.org/10.1364/AO.55.006836


  2. B. Sun, J.G. Zhu, L.H. Yang, Y. Guo, J.R. Lin, Stereo line-scan sensor calibration for 3D shape measurement. Appl. Optics 56(28), 7905–7914 (2017). https://doi.org/10.1364/AO.56.007905


  3. K.C. Song, B.M. Hou, H. Niu, X. Wen, Y.H. Yan, Flexible line-scan camera calibration method using a coded eight trigrams pattern. Opt. Lasers Eng. 110, 296–307 (2018). https://doi.org/10.1016/j.optlaseng.2018.06.014


  4. M. Yao, Z.Y. Zhao, B.G. Xu, Geometric calibration of line-scan camera using a planar pattern. J. Electron. Imaging 23(1), 013028 (2014). https://doi.org/10.1117/1.JEI.23.1.013028


  5. B.W. Hui, G.J. Wen, P. Zhang, A novel line scan camera calibration technique with an auxiliary frame camera. IEEE Trans. Instrum. Meas. 62(9), 2567–2575 (2013). https://doi.org/10.1109/TIM.2013.2256815


  6. C. Steger, M. Ulrich, A camera model for line-scan cameras with telecentric lenses. Int. J. Comput. Vis. 129, 80–99 (2021). https://doi.org/10.1007/s11263-020-01358-3


  7. R. Usamentiaga, D.F. Garcia, Multi-camera calibration for accurate geometric measurements in industrial environments. Measurement 134, 345–358 (2019). https://doi.org/10.1016/j.measurement.2018.10.087


  8. B.W. Hui, J.R. Zhong, G.J. Wen, D.R. Li, Determination of line scan camera parameters via the direct linear transformation. Opt. Eng. 51(11), 113201 (2012). https://doi.org/10.1117/1.OE.51.11.113201


  9. B.W. Hui, G.J. Wen, Z.X. Zhao, D.R. Li, Line-scan camera calibration in close-range photogrammetry. Opt. Eng. 51(5), 053602 (2012). https://doi.org/10.1117/1.OE.51.5.053602


  10. R.Y. Liao, J.G. Zhu, L.H. Yang, J.R. Lin, B. Sun, J.C. Yang, Flexible calibration method for line-scan cameras using a stereo target with hollow stripes. Opt. Lasers Eng. 113, 6–13 (2019). https://doi.org/10.1016/j.optlaseng.2018.09.014


  11. X.X. Zhang, C. Gilliam, T. Blu, All-pass parametric image registration. IEEE Trans. Image Process. 29, 5625–5640 (2020). https://doi.org/10.1109/TIP.2020.2984897


  12. J.Y. Li, Q.W. Hu, M.Y. Ai, RIFT: multi-modal image matching based on radiation-variation insensitive feature transform. IEEE Trans. Image Process. 29, 3296–3310 (2020). https://doi.org/10.1109/TIP.2019.2959244


  13. Y.X. Ye, J. Shan, L. Bruzzone, Robust registration of multimodal remote sensing images based on structural similarity. IEEE Trans. Geosci. Remote Sensing 55(5), 2941–2958 (2017). https://doi.org/10.1109/TGRS.2017.2656380


  14. G.A. Idrobo-Pizo, J.M.S.T. Motta, D.L. Borges, Novel invariant feature descriptor and a pipeline for range image registration in robotic welding applications. IET Image Process. 13(6), 964–974 (2019). https://doi.org/10.1049/iet-ipr.2018.6105


  15. C.C. Lin, Y.C. Tai, J.J. Lee, Y.S. Chen, A novel point cloud registration using 2D image features. EURASIP J. Adv. Signal Process. 2017, 5 (2017). https://doi.org/10.1186/s13634-016-0435-y


  16. C.X. Li, Z.L. Shi, Y.P. Liu, T.C. Liu, L.Y. Xu, Efficient and robust direct image registration based on joint geometric and photometric lie algebra. IEEE Trans. Image Process. 27(12), 6010–6024 (2018). https://doi.org/10.1109/TIP.2018.2864895


  17. S.J. Chen, H.L. Shen, C.G. Li, J.H. Xin, Normalized total gradient: a new measure for multispectral image registration. IEEE Trans. Image Process. 27(3), 1297–1310 (2018). https://doi.org/10.1109/TIP.2017.2776753


  18. W.W. Kong, P.X. Zang, S.J. Niu, D.W. Li, Iterative registration for multi-modality retinal fundus photographs using directional vessel skeleton. IET Image Process. 15(3), 696–704 (2021). https://doi.org/10.1049/ipr2.12054


  19. C. Steger, M. Ulrich, C. Wiedemann, Machine Vision Algorithms and Applications, 1st edn., Chinese translation ed. by S.R. Yang, D.J. Wu, D.S. Duan (Tsinghua University Press, Beijing, 2008), pp. 47–53


  20. S.F. Lu, Z. Liu, Automatic visual inspection of a missing split pin in the China railway high-speed. Appl. Optics 55(30), 8395–8405 (2016). https://doi.org/10.1364/AO.55.008395


  21. S.F. Lu, Z. Liu, Y. Shen, Automatic fault detection of multiple targets in railway maintenance based on time-scale normalization. IEEE Trans. Instrum. Meas. 67(4), 849–865 (2018). https://doi.org/10.1109/TIM.2018.2790498


  22. J.Y. Xu, R. Sun, P.Y. Tian, Q. Xie, Y. Yang, H.D. Liu, L. Cao, Correction of rolling wheel images captured by a linear array camera. Appl. Optics 54(33), 9736–9740 (2015). https://doi.org/10.1364/AO.54.009736


  23. L. Liu, F.Q. Zhou, Y.Z. He, Automated visual inspection system for bogie block key under complex freight train environment. IEEE Trans. Instrum. Meas. 65(1), 2–14 (2016). https://doi.org/10.1109/TIM.2015.2479101


  24. L. Fang, Z.L. Shi, C.X. Li, Y.P. Liu, E.B. Zhao, Geometric transformation modeling for line-scan images under different camera poses. Opt. Eng. 61(10), 103103 (2022). https://doi.org/10.1117/1.OE.61.10.103103


  25. R. Usamentiaga, Static calibration for line-scan cameras based on a novel calibration target. IEEE Trans. Instrum. Meas. 71, 5015812 (2022). https://doi.org/10.1109/TIM.2022.3190039


  26. D.D. Li, G.J. Wen, S.H. Qiu, Cross-ratio-based line-scan camera calibration using a planar pattern. Opt. Eng. 55(1), 014104 (2016). https://doi.org/10.1117/1.OE.55.1.014104


  27. R. Usamentiaga, D.F. Garcia, F.J. Calle, Line-scan camera calibration: a robust linear approach. Appl. Optics 59(30), 9443–9453 (2020). https://doi.org/10.1364/AO.404774


  28. D. Poli, A rigorous model for spaceborne linear array sensors. Photogramm. Eng. Remote Sens. 73, 187–196 (2007). https://doi.org/10.14358/PERS.73.2.187


  29. R. Gupta, R.I. Hartley, Linear pushbroom cameras. IEEE Trans. Pattern Anal. Mach. Intell. 19(9), 963–975 (1997). https://doi.org/10.1109/34.615446


  30. C. Steger, M. Ulrich, C. Wiedemann, Machine Vision Algorithms and Applications, 2nd edn. (Wiley-VCH, Weinheim, 2018), pp.618–622


  31. P. Monteiro, J. Ascenso, F. Pereira, Perspective transform motion modeling for improved side information creation. EURASIP J. Adv. Signal Process. 2013, 189 (2013). https://doi.org/10.1186/1687-6180-2013-189


  32. A. Geiger, F. Moosmann, Ö. Car, B. Schuster, Automatic camera and range sensor calibration using a single shot, in Proceedings of the IEEE International Conference on Robotics and Automation (Saint Paul, USA, 2012), pp. 3936–3943. https://doi.org/10.1109/ICRA.2012.6224570


Acknowledgements

Not applicable.

Funding

This work was supported by the Science and Technological Innovation Field Fund Projects under Grant No. E01Z041101.

Author information


Contributions

LF and ZS conceived the idea of the study. LF, YL, CL and MP wrote the paper. LF and MP validated the results. EZ collected the data. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Lei Fang.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Fang, L., Shi, Z., Liu, Y. et al. A general geometric transformation model for line-scan image registration. EURASIP J. Adv. Signal Process. 2023, 78 (2023). https://doi.org/10.1186/s13634-023-01041-y

