As the previous section showed, finding the filtering vector *F* is the key step of apodization filtering. As can be seen from Equations (1) and (2), obtaining *F* amounts to solving an integral equation of the first kind, which generally represents an ill-posed system, so the stability of the adopted method is as important as its efficiency. The original realization of apodization filtering (ORAF) was introduced in [22]; it has been implemented successfully for random noise radar to suppress range sidelobes by applying a projection method to the ill-posed problem. However, its stability is insufficient: an inappropriate solution may be obtained, and improper filtering can lead to obvious distortion of the output response. Therefore, a modified method, SRAF, is proposed in this article to acquire the correct filtering vector, with the following processing details.

### 3.1. Coefficient matrix and desired response vector

As shown in Figure 1, the coefficient matrix *K* and the desired response vector *G* must be determined before Equation (2) can be solved. This section presents the approaches to forming *K* and *G*.

Comparing Equations (1) and (2), the product of matrix *K* and vector *F* is equivalent to the convolution operation *A*(*τ*) ⊗ *F*(*τ*); that is, the convolution must be expressed as a matrix-vector product. *K* is therefore a convolution matrix, whose rows are reversed, conjugated, and time-shifted versions of the sampled *A*(*τ*), and the convolution kernel approach can be used to construct it. The sampled version of *A*(*τ*) is expressed as

$$A = \begin{bmatrix} a_1 & a_2 & \cdots & \underbrace{a_{m_1} \; a_{m_2} \cdots a_{m_k}}_{\text{mainlobe}} & \cdots & a_{m-1} & a_m \end{bmatrix}^{\mathrm{T}}_{m \times 1}$$

(3)

where *m* is the dimension of *A*, and $a_{m_i}$ (*i* = 1, 2,...,*k*) are the elements corresponding to the mainlobe. The filtering function *F* can be denoted as

$$F = \begin{bmatrix} f_1 & f_2 & \cdots & f_n \end{bmatrix}^{\mathrm{T}}_{n \times 1}$$

(4)

where *n* is the filter length, which must be determined in advance. Based on the convolution kernel approach, matrix *K* is constructed as in Equation (5).

$$K = \begin{bmatrix}
a_1 & & \\
a_2 & a_1 & \\
\vdots & a_2 & \ddots \\
a_m & \vdots & a_1 \\
 & a_m & a_2 \\
 & & \vdots \\
 & & a_m
\end{bmatrix}_{(m+n-1) \times n}$$

(5)

Elements not shown in Equation (5) are zero. As Equation (5) shows, matrix *K* is determined jointly by the original response and the filter length. Formulated this way, the coefficient matrix *K* realizes the convolution operation exactly, without approximation. Compared with ORAF, the matrix model is sufficiently accurate and imposes no restriction on the filter length.
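As a sketch, the construction in Equation (5) can be expressed in NumPy by stacking shifted copies of the sampled response column by column (the helper name and the toy response in the test are illustrative, not from the article):

```python
import numpy as np

def convolution_matrix(a, n):
    """Build the (m+n-1) x n convolution matrix K of Equation (5).

    Column j holds the samples of A shifted down by j positions;
    all unlisted elements are zero, as stated in the text.
    """
    a = np.asarray(a, dtype=float)
    m = a.size
    K = np.zeros((m + n - 1, n))
    for j in range(n):
        K[j:j + m, j] = a
    return K
```

By construction, `K @ f` equals `np.convolve(a, f)` for any length-*n* vector `f`, so the matrix accomplishes the convolution without approximation.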

In an ideal output response, the sidelobes are completely suppressed. The desired response *G* is therefore selected as follows: set the amplitude of all sidelobe points to zero, and set the amplitude of the mainlobe points equal to that of the original response *A*. According to this selection principle, vector *G* has the form

$$G = \begin{bmatrix} 0 & \cdots & 0 & \underbrace{a_{m_1} \; a_{m_2} \cdots a_{m_k}}_{\text{mainlobe}} & 0 & \cdots & 0 \end{bmatrix}^{\mathrm{T}}_{(m+n-1) \times 1}$$

(6)

where $a_{m_i}$ (*i* = 1, 2,...,*k*) are located at the center of *G*.
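Following this selection principle, Equation (6) can be sketched in NumPy as follows (the function name and the mainlobe index argument are our illustrative choices):

```python
import numpy as np

def desired_response(a_samples, mainlobe_idx, n):
    """Build G of Equation (6): a zero vector of length m+n-1 with the
    mainlobe samples of A copied into its center."""
    a = np.asarray(a_samples, dtype=float)
    m = a.size
    g = np.zeros(m + n - 1)
    main = a[mainlobe_idx]           # mainlobe samples a_{m_1} ... a_{m_k}
    k = main.size
    start = (m + n - 1 - k) // 2     # place the mainlobe at the center of G
    g[start:start + k] = main
    return g
```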

### 3.2. Ill-posed analysis

After matrix *K* and vector *G* have been determined, a direct solution of Equation (2) could be obtained by the least squares method. However, if Equation (2) represents an ill-posed system, a direct solution is practically impossible. An ill-posedness analysis is therefore necessary to decide which method can obtain the correct solution. Whether an equation is ill posed depends on its coefficient matrix, so matrix *K* is analyzed further, starting with its singular value decomposition (SVD).

Applying the SVD, matrix *K* is expressed as

$$K = U \, \Sigma_{m+n-1,\,n} \, V^{\mathrm{T}}$$

(7)

where *U* and *V* are orthogonal matrices of sizes (*m*+*n*-1) × (*m*+*n*-1) and *n* × *n*, respectively, and *Σ*_{m+n-1, n} is an (*m*+*n*-1) × *n* matrix of the form shown in Equation (8)

$$\Sigma_{m+n-1,\,n} = \begin{bmatrix} \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n) \\ \mathbf{0}_{(m-1) \times n} \end{bmatrix}_{(m+n-1) \times n}$$

(8)

where *σ*_{i} (*i* = 1, 2,...,*n*) denote the non-zero singular values of matrix *K*, arranged in decreasing order from *σ*_{1} to *σ*_{n}.

The least squares solution (LSS) of Equation (2) has the form

$$F_{\text{LSS}} = \left( K^{\mathrm{T}} K \right)^{-1} K^{\mathrm{T}} G$$

(9)

Inserting Equations (7) and (8) into Equation (9), *F*_{LSS} can be rewritten as

$$F_{\text{LSS}} = V \begin{bmatrix} \operatorname{diag}(\sigma_1^{-1}, \ldots, \sigma_n^{-1}) & \mathbf{0}_{n \times (m-1)} \end{bmatrix} U^{\mathrm{T}} G = \sum_{i=1}^{n} \frac{u_i^{\mathrm{T}} G}{\sigma_i} v_i$$

(10)

where *u*_{i} and *v*_{i} denote the column vectors of matrices *U* and *V*, respectively.
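The SVD form of Equation (10) can be verified numerically; the sketch below (NumPy, with an illustrative helper name) computes the sum over $(u_i^{\mathrm{T}} G / \sigma_i) v_i$ and should agree with the standard least squares solver:

```python
import numpy as np

def lss_via_svd(K, G):
    """Least squares solution via Equation (10): divide each projected
    component u_i^T G by its singular value and recombine with v_i."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    return Vt.T @ ((U.T @ G) / s)   # equivalent to sum_i (u_i^T G / s_i) v_i
```

The `1/σ_i` factors make the instability visible: a tiny σ_i amplifies any perturbation of *G*, which is exactly the ill-posedness discussed next.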

In the denominators of Equation (10), the singular values of matrix *K* strongly influence the stability of the solution *F*_{LSS}. If *K* has very small singular values, a small perturbation of vector *G* causes severe fluctuation of *F*_{LSS}; that is, small singular values of *K* are the reason for the ill-posed state of Equation (2). A function of the singular values of *K* can therefore be used to evaluate whether Equation (2) represents an ill-posed system. This function is defined in Equation (11)

$$\kappa(K) = \frac{\sigma_{\max}}{\sigma_{\min}} = \frac{\sigma_1}{\sigma_n}$$

(11)

Defined as the ratio of the maximum singular value to the minimum one, *κ*(*K*) is called the spectral condition number of matrix *K*. If *κ*(*K*) is very large, of the order 10^{p} with *p* > 6, Equation (2) is confirmed to be ill posed.

### 3.3. Approach to getting filtering vector

Based on the result of the ill-posedness analysis, a suitable method for obtaining the filtering vector should be adopted. In essence, the problem is to solve a first-kind integral equation, which generally represents an ill-posed system. In this section, the total variation (TV) method is introduced to resolve it.

Depending on the peculiarities of the equation, an ill-posed problem is generally resolved by constructing an additional regularizing function to restore stability [23, 24]. In this article, a solution vector of finite length is used to approximate the true solution, which forces many non-zero elements of the original response to become zero, so the solution is likely to have several discontinuous points. TV is a regularization method with the advantage of not restricting the solution to be smooth [25]; it is suitable for the ill-posed problem in this article because it retains the discontinuous boundaries of the solution.

The TV method transforms Equation (2) into an alternative functional *J*^{α, β}(*F*) with good stability [26], given by Equation (12).

$$J^{\alpha,\beta}(F) = \frac{1}{2} \left\| K F - G \right\|^2 + \alpha J_{\beta}(F)$$

(12)

where ||·|| denotes the *ℓ*_{2} norm, and *J*_{β}(*F*) is the regularizing function that restores the stability of Equation (2), expressed as

$$J_{\beta}(F) = \int_{\mathrm{U}} \sqrt{\left| \nabla F(\tau) \right|^2 + \beta} \, \mathrm{d}\tau$$

(13)

where **U** is the support domain of *F*(*τ*), and *α* and *β* are the regularization and regulating parameters, both positive. These two parameters influence the stability of solving the ill-posed equation.

As a functional of *F*, *J*^{α, β}(*F*) should be minimized over *F* to find an adequate filtering vector: the expected filter is the vector *F* that minimizes Equation (12). To obtain this filter, a fixed-point iteration algorithm performs the minimization. With the initial guess $F^{0} = \begin{bmatrix} 0 & \cdots & 0 \end{bmatrix}^{\mathrm{T}}_{n \times 1}$, Equation (14) is solved iteratively.

$$\left[ K^{*} K + \alpha L(F^{m}) \right] F^{m+1} = K^{*} G$$

(14)

where *K*^{*} is the adjoint matrix of *K*, the superscript on *F* denotes the iteration step, and $L(F^{m})$ represents an elliptic differential operator. Applying $L(F^{m})$ to vector *F* gives the representation defined in Equation (15).

$$L(F^{m}) F = -\nabla \cdot \left( \frac{1}{\sqrt{\left| \nabla F^{m} \right|^2 + \beta^2}} \, \nabla F \right)$$

(15)

where ∇·(*f*), equal to div(*f*), denotes the divergence of the vector field *f*, and ∇*f* is the gradient of *f*. The digitized version of Equation (15) should be adopted, which is approximately

$$L(F^{m}) F = \tilde{D}^{\mathrm{T}} Q_m^{-1} \tilde{D} F$$

(16)

where *Q*_{m} is a diagonal matrix corresponding to the digitized version of $\sqrt{\left| \nabla F^{m} \right|^2 + \beta^2}$, and the subscript *m* denotes the iteration step. Matrices *Q*_{m} and $\tilde{D}$ can be constructed by Equations (17) and (18), respectively.

$$\begin{aligned} Q_m(k,k) &= \sqrt{\left[ F^{m}(k) - F^{m}(k-1) \right]^2 + \beta^2} \\ Q_m(1,1) &= \sqrt{\left[ F^{m}(1) \right]^2 + \beta^2} \end{aligned}$$

(17)

$$\tilde{D} = \begin{bmatrix}
-a & & & & \\
b & -a & & & \\
 & b & -a & & \\
 & & \ddots & \ddots & \\
 & & & b & -a
\end{bmatrix}_{n \times n}$$

(18)

In Equation (17), *k* = 2, 3,...,*n* indexes the *k*th element of *F*^{m}. The parameters *a* and *b* in Equation (18) are 0.665 and 0.1755, determined by minimizing the function *f*(*a*, *b*) = 64(*a*^{2}+2*b*^{2}-0.5)^{2}+224(0.125-*ab*)^{2}+98*b*^{4} [26, 27]. Equation (14) can then be digitized as Equation (19)
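Equations (17) and (18) can be sketched as two small constructors (a lower-bidiagonal reading of $\tilde{D}$ is assumed here, with -*a* on the diagonal and *b* on the subdiagonal; the helper names are ours):

```python
import numpy as np

A_COEF, B_COEF = 0.665, 0.1755   # values of a and b given in the text

def build_D(n):
    """Difference operator of Equation (18): -a on the diagonal,
    b on the first subdiagonal, zeros elsewhere."""
    return -A_COEF * np.eye(n) + B_COEF * np.eye(n, k=-1)

def build_Q(f_m, beta):
    """Diagonal matrix of Equation (17) from the current iterate F^m.

    np.diff with prepend=0 yields F^m(k) - F^m(k-1) for k >= 2 and
    F^m(1) for the first entry, matching both lines of Equation (17).
    """
    d = np.diff(np.asarray(f_m, dtype=float), prepend=0.0)
    return np.diag(np.sqrt(d**2 + beta**2))
```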

$$\left[ K^{\mathrm{T}} K + \alpha \tilde{D}^{\mathrm{T}} Q_m^{-1} \tilde{D} \right] F^{m+1} = K^{\mathrm{T}} G$$

(19)

Equation (19) is readily solved by the classical conjugate gradient method.

By iteratively solving Equation (19), we successively obtain *F*^{1}, *F*^{2}, *F*^{3},... until *F*^{n} converges within a threshold. In theory, *F*^{n} satisfies Equation (20).

$$\lim_{n \to \infty} \left\| F^{n} - F^{n-1} \right\| = 0$$

(20)

To decide whether the current solution has converged, the norm of the difference between the solution vectors of the current and previous iterations is an obvious indicator. The threshold on this norm must be small enough to ensure that the iterative algorithm converges to the desired solution; in general, a threshold below 10^{-2} is selected. Letting *F* = *F*^{n} be the filtering vector, the product of the coefficient matrix *K* and *F* is the filtered output with suppressed sidelobes. The process of solving the ill-posed equation for the filtering vector is shown in Figure 2.
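Putting the pieces together, the fixed-point iteration of Equation (19) with the stopping rule of Equation (20) can be sketched as below. This is an illustrative implementation under our assumptions: a direct linear solve stands in for the conjugate gradient method, and $\tilde{D}$ is read as the lower-bidiagonal operator described above.

```python
import numpy as np

def sraf_filter(K, G, alpha, beta, tol=1e-2, max_iter=100):
    """Fixed-point iteration for Equation (19).

    Starts from F^0 = 0, rebuilds Q_m from the current iterate at each
    step (Equation (17)), solves the linear system for F^{m+1}, and
    stops when ||F^{m+1} - F^m|| falls below the threshold.
    """
    n = K.shape[1]
    a_coef, b_coef = 0.665, 0.1755
    D = -a_coef * np.eye(n) + b_coef * np.eye(n, k=-1)
    KtK, KtG = K.T @ K, K.T @ G
    f = np.zeros(n)                                  # initial guess F^0
    for _ in range(max_iter):
        d = np.diff(f, prepend=0.0)                  # Equation (17) entries
        Q_inv = np.diag(1.0 / np.sqrt(d**2 + beta**2))
        f_next = np.linalg.solve(KtK + alpha * D.T @ Q_inv @ D, KtG)
        if np.linalg.norm(f_next - f) < tol:         # Equation (20) criterion
            return f_next
        f = f_next
    return f
```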

One important issue concerns the filtered output after filtering with vector *F*. Since its dimension *m*+*n*-1 is longer than *m*, the length of the filtered output must be revised to be consistent with the original response *A*. The points at the two ends of the filtered output can be discarded because, after sidelobe suppression, their values are so small that they hardly contain useful information. The length revision therefore proceeds as follows: the *m* points in the middle of the filtered output are kept as the final output, and the (*n*-1)/2 points at each end are deleted. After this length revision, the final filtered output is obtained, and the SRAF filtering process is complete; its flowchart is shown in Figure 3.
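The length revision is a simple symmetric trim (this sketch assumes an odd filter length *n*, so that (*n*-1)/2 is an integer; the function name is ours):

```python
def trim_output(y, n):
    """Length revision: drop (n-1)/2 points from each end of the
    filtered output so its length returns from m+n-1 to m."""
    half = (n - 1) // 2
    return y[half:len(y) - half]
```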

Finally, it is necessary to explain how the parameters *α* and *β*, which influence the stability of Equation (12), are determined. For the optimal value of *α*, the function *φ*(*λ*) is formed as Equation (21).

$$\phi(\lambda) = \frac{1}{m} \sum_{i=1}^{m} \frac{\left| G(i) \right|}{\lambda \left| A(i) \right|^2} - \left\| \epsilon \right\|^2$$

(21)

In Equation (21), *G*(*i*) consists of the *m* middlemost elements of the Fourier transform of the desired response *G*, and *A*(*i*) is the Fourier transform of the original response *A*. Because *φ*(*λ*) is a monotonically decreasing function, a one-dimensional search easily finds the optimal *λ* that makes *φ*(*λ*) zero. The optimal *α* is then estimated as *α* = *q*/*λ*, where *q* is the mean of *Q*_{m}. Methods for selecting the regularization parameter are provided in [26, 28]. The parameter *β*, small and positive, is adjusted to ensure that the regularizing function *J*_{β}(*F*) is differentiable at **F** = **0**; in practice, it is estimated from experience and iterative tests.