In order to describe the proposed algorithm for image interpolation, we first apply it to one-dimensional vectors. Let {X}_{\mathrm{LR}} be a 1×*N* vector. Suppose we wish to enlarge {X}_{\mathrm{LR}} to a vector {X}_{\mathrm{HR}} of size *kN* using the following steps: we insert *k*−1 zeros between every two successive entries of vector {X}_{\mathrm{LR}}. To compute each of these *kN*−*N* new entries of {X}_{\mathrm{HR}}, we choose *2M* of the *N* entries of {X}_{\mathrm{LR}}; the new value is a linear combination of these *2M* entries.
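As a sketch of this zero-insertion step (an illustrative NumPy snippet, not the authors' code; the function name is ours), the enlargement first spreads the low-resolution samples onto a high-resolution grid:

```python
import numpy as np

def zero_insert(x, k):
    """Place k-1 zeros between every two successive entries of x.

    The result has k*len(x) - (k-1) entries; the original samples
    land at positions 0, k, 2k, ... (0-based).
    """
    x = np.asarray(x, dtype=float)
    out = np.zeros(k * len(x) - (k - 1))
    out[::k] = x
    return out

# zero_insert([1, 2, 3, 4], 3) -> [1, 0, 0, 2, 0, 0, 3, 0, 0, 4]
# the inserted zeros are later replaced by linear combinations of
# 2M neighbouring low-resolution samples
```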

In order to find the *2M* indeterminate multipliers of this linear combination, we follow the procedure stated below, bearing in mind that the procedure and the multipliers used to enlarge a vector are fixed regardless of its size. This means that the multipliers used for obtaining a 1×*N* vector from its 1\times \left[\frac{N}{k}\right] counterpart are the same as the ones used to calculate a 1\times \left[\frac{N}{k}\right] vector from its 1\times \left[\frac{N}{{k}^{2}}\right] counterpart.

**Step(1-1):** As presented in (1) and (2), we down-sample the original vector {X}_{\mathrm{LR}} by a factor of *k* to find {X}_{1\times \left[\frac{N}{k}\right]}^{\prime}.

**Step(1-2):** Again, we down-sample {X}_{1\times \left[\frac{N}{k}\right]}^{\prime} by a factor of *k* to obtain the vector {X}_{1\times \left[\frac{N}{{k}^{2}}\right]}^{\prime\prime}.

**Step(1-3):** Now, by constructing the 1\times \left[\frac{N}{k}\right] vector {X}^{\prime} from the vector {X}^{\prime\prime} of size 1\times \left[\frac{N}{{k}^{2}}\right], we can find the multipliers used to obtain the enlarged image. To this end, we zero-pad {X}^{\prime\prime} by a factor of *k* to build the vector {X}_{\mathrm{ZP}}^{\prime\prime}; that is, *k*−1 zeros are inserted between every two successive entries of {X}^{\prime\prime}.

As stated earlier, by using these multipliers, we can find a new vector {X}_{\mathrm{HR}} from {X}_{1\times \left[\frac{N}{k}\right]}^{\prime}.

In order to evaluate the enlarged image, we take advantage of the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [11] criteria, by down-sampling the original vector by a factor of *k* and interpolating it using the proposed algorithm.
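A minimal PSNR sketch is given below (illustrative code; SSIM is more involved and omitted here). The `peak` parameter is an assumption for 8-bit images:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference and a test signal."""
    ref = np.asarray(ref, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return np.inf          # identical signals
    return 10.0 * np.log10(peak ** 2 / mse)
```

In the evaluation above, `ref` would be the original vector and `test` its down-sampled-then-interpolated reconstruction.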

X=\left(\begin{array}{llll}{x}_{1}& {x}_{2}& \dots & {x}_{N}\end{array}\right)

(1)

down-sample by a factor of *k* *↓*

{X}^{\prime}=\left(\begin{array}{lllll}{x}_{1}& {x}_{k+1}& {x}_{2k+1}& \dots & {x}_{\left(\left[\frac{N}{k}\right]-1\right)k+1}\end{array}\right)

(2)

down-sample by a factor of *k* *↓*

{X}^{\prime\prime}=\left(\begin{array}{lllll}{x}_{1}^{\prime}& {x}_{k+1}^{\prime}& {x}_{2k+1}^{\prime}& \dots & {x}_{\left(\left[\frac{N}{{k}^{2}}\right]-1\right)k+1}^{\prime}\end{array}\right)

(3)

Zero padding *↓*

{X}_{\mathrm{ZP}}^{\prime\prime}=\left(\begin{array}{llllllll}{x}_{1}^{\prime\prime}& \underbrace{0\dots 0}_{(k-1)\ \text{times}}& {x}_{2}^{\prime\prime}& \underbrace{0\dots 0}_{(k-1)\ \text{times}}& {x}_{3}^{\prime\prime}& \dots & 0& {x}_{\left[\frac{N}{{k}^{2}}\right]}^{\prime\prime}\end{array}\right)

(4)
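Steps (1-1) through (1-3), i.e. equations (1)–(4), can be sketched as follows (illustrative NumPy helpers with names of our choosing; `downsample` keeps every *k*-th sample starting from the first, matching (2) and (3)):

```python
import numpy as np

def downsample(x, k):
    """Keep entries x_1, x_{k+1}, x_{2k+1}, ... (1-based), as in (2)."""
    return np.asarray(x, dtype=float)[::k]

def zero_pad(x, k):
    """Insert k-1 zeros between successive entries, as in (4)."""
    out = np.zeros(k * len(x) - (k - 1))
    out[::k] = np.asarray(x, dtype=float)
    return out

x = np.arange(1, 37, dtype=float)   # a toy 1x36 vector (N=36, k=3)
x1 = downsample(x, 3)               # Step (1-1): X',    size 12
x2 = downsample(x1, 3)              # Step (1-2): X'',   size 4
x2_zp = zero_pad(x2, 3)             # Step (1-3): X''_ZP, size 10
```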

**Step(1-4):** In order to compute vector {X}^{\prime} from {X}_{\mathrm{ZP}}^{\prime\prime}, we must insert a linear combination of the {x}_{i}^{\prime\prime}, i=1:\left[\frac{N}{{k}^{2}}\right], in place of each zero in {X}_{\mathrm{ZP}}^{\prime\prime}. For this purpose, we can convolve {X}_{\mathrm{ZP}}^{\prime\prime} with an interpolator vector *A* (determined by (5)).

In this equation, *A* is a 1×(1 + 2(*M*−1) + 2*M*(*k*−1)) vector. The entry with index 1 + (*M*−1) + *M*(*k*−1), located at the middle of the vector, equals 1. Apart from this entry, the entries whose indices are multiples of *k* are 0, and there are (*k*−1) coefficients {a}_{i}^{\prime} and (*k*−1) coefficients {a}_{i} between any two successive zeros on the left and right sides of the entry 1, respectively; these coefficients can be calculated using the proposed algorithm.

In (5), *M* is a free parameter, which is set to 2 in our experiments. Setting *M* to a higher value decreases implementation speed while increasing image quality.

\begin{array}{l}A=[\,{a}_{M(k-1)}^{\prime}\ {a}_{M(k-1)-1}^{\prime}\\ \quad\dots {a}_{(M-1)(k-1)+1}^{\prime}\ 0\ {a}_{(M-1)(k-1)}^{\prime}\ {a}_{(M-1)(k-1)-1}^{\prime}\\ \quad\dots {a}_{(M-2)(k-1)+1}^{\prime}\ 0\dots {a}_{k-1}^{\prime}\dots {a}_{2}^{\prime}\ {a}_{1}^{\prime}\ 1\ {a}_{1}\ {a}_{2}\\ \quad\dots {a}_{k-1}\ 0\dots {a}_{(M-2)(k-1)+1}\\ \quad\dots {a}_{(M-1)(k-1)}\ 0\ {a}_{(M-1)(k-1)+1}\dots {a}_{M(k-1)}\,]\end{array}

(5)

{X}^{\prime}={X}_{\mathrm{ZP}}^{\prime\prime}\ast A

(6)

In (6), {X}^{\prime} and {X}_{\mathrm{ZP}}^{\prime\prime} are known and *A* is unknown; there are \left[\frac{N}{k}\right]-\left[\frac{N}{{k}^{2}}\right] equations and 2*M*(*k*−1) unknown variables. Hence, for (6) to have a solution, \left[\frac{N}{k}\right]-\left[\frac{N}{{k}^{2}}\right] must be greater than or equal to 2*M*(*k*−1).
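This solvability condition is easy to check numerically (a sketch, assuming the brackets denote integer division; the function name is ours):

```python
def has_enough_equations(N, k, M):
    """True if [N/k] - [N/k^2] >= 2*M*(k-1), the condition for (6)."""
    return N // k - N // k**2 >= 2 * M * (k - 1)

# N=36, k=3, M=2: 12 - 4 = 8 equations vs. 8 unknowns -> solvable
```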

**Step(1-5):** To solve (6), we have the following matrix equation:

\begin{array}{l}{B}_{\left[\frac{N}{k}\right]\times (1+2(M-1)+2M(k-1))}\\ \times {A}_{(1+2(M-1)+2M(k-1))\times 1}={C}_{\left[\frac{N}{k}\right]\times 1}\end{array}

(7)

where C={X}^{\prime T} and *A* denotes the interpolator vector.

**Step(1-6):** In order to find matrix *B*, we add (*Mk*−1) zeros to the beginning and the end of vector {X}_{\mathrm{ZP}}^{\prime\prime} to obtain vector {X}_{\mathrm{ZP}0}^{\prime\prime}.

{X}_{\mathrm{ZP}0}^{\prime\prime}=\left(\begin{array}{llllllllll}0& \dots & 0& {x}_{\mathrm{ZP},1}^{\prime\prime}& {x}_{\mathrm{ZP},2}^{\prime\prime}& \dots & {x}_{\mathrm{ZP},\left[\frac{N}{k}\right]}^{\prime\prime}& 0& \dots & 0\end{array}\right)

(8)

where {x}_{\mathrm{ZP},i}^{\prime\prime} is the *i*th entry of vector {X}_{\mathrm{ZP}}^{\prime\prime}.

**Step(1-7):** By using {X}_{\mathrm{ZP}0}^{\u2033}, we can find matrix *B* as follows:

\begin{array}{ll}{B}_{i\ast}& ={X}_{\mathrm{ZP}0}^{\prime\prime}(i:i+\text{size}(A)-1)\\ & ={X}_{\mathrm{ZP}0}^{\prime\prime}(i:i+2Mk-2)\end{array}

(9)

i=1:\text{size}\left({X}^{\prime}\right)=1:\left[\frac{N}{k}\right]

where {B}_{i\ast} is the *i*th row of matrix *B*.

Now, following Step(1-5) and then Step(1-4), we can obtain the multipliers used for image enlargement.
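Steps (1-5) through (1-7) amount to a sliding-window construction of *B* followed by a least-squares solve for *A*. A sketch under our own naming (the tail of the zero-extended vector gets a few extra zeros so that every window of length size(*A*) is defined):

```python
import numpy as np

def solve_interpolator(x1, x2_zp, k, M):
    """Build B from the zero-extended X''_ZP (Steps 1-6/1-7) and solve
    B A = C = X'^T for A in the least-squares sense (Step 1-5)."""
    size_A = 2 * M * k - 1
    pad = M * k - 1                          # Mk-1 leading zeros, eq. (8)
    n_rows = len(x1)                         # one row per entry of X'
    x_zp0 = np.zeros(n_rows + size_A - 1)
    x_zp0[pad:pad + len(x2_zp)] = x2_zp
    B = np.stack([x_zp0[i:i + size_A] for i in range(n_rows)])
    A, *_ = np.linalg.lstsq(B, np.asarray(x1, dtype=float), rcond=None)
    return A, B

x = np.arange(1.0, 37.0)                     # toy signal, N=36
x1 = x[::3]                                  # X', size 12
x2_zp = np.zeros(10)
x2_zp[::3] = x1[::3]                         # X''_ZP, as in (12)
A, B = solve_interpolator(x1, x2_zp, k=3, M=2)
```

Zeros at the multiple-of-*k* positions of *A* and the fixed centre 1 are not enforced here; a fuller implementation would constrain those entries before solving.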

To illustrate the above algorithm, consider an example in which *N*=36, *k*=3, *M*=2.

**Step(1-1):**

\begin{array}{ll}{X}^{\prime}& =\text{downsample}(X,3)\\ & =\left(\begin{array}{lllllllll}{x}_{1}& {x}_{4}& {x}_{7}& {x}_{10}& \dots & {x}_{25}& {x}_{28}& {x}_{31}& {x}_{34}\end{array}\right)\\ & =\left(\begin{array}{lllllllll}{x}_{1}^{\prime}& {x}_{2}^{\prime}& {x}_{3}^{\prime}& {x}_{4}^{\prime}& \dots & {x}_{9}^{\prime}& {x}_{10}^{\prime}& {x}_{11}^{\prime}& {x}_{12}^{\prime}\end{array}\right)\end{array}

(10)

**Step(1-2):**

\begin{array}{ll}{X}^{\prime\prime}& =\text{downsample}({X}^{\prime},3)\\ & =\left(\begin{array}{llll}{x}_{1}^{\prime}& {x}_{4}^{\prime}& {x}_{7}^{\prime}& {x}_{10}^{\prime}\end{array}\right)\end{array}

(11)

**Step(1-3):**

{X}_{\mathrm{ZP}}^{\prime\prime}=\left(\begin{array}{llllllllll}{x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}\end{array}\right)

(12)

B=\left(\begin{array}{lllllllllll}0& 0& 0& 0& 0& {x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0\\ 0& 0& 0& 0& {x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}\\ 0& 0& 0& {x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0\\ 0& 0& {x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0\\ 0& {x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}\\ {x}_{1}^{\prime}& 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0\\ 0& 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0& 0\\ 0& {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0& 0& 0\\ {x}_{4}^{\prime}& 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0& 0& 0& 0\\ 0& 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0& 0& 0& 0& 0\\ 0& {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0& 0& 0& 0& 0& 0\\ {x}_{7}^{\prime}& 0& 0& {x}_{10}^{\prime}& 0& 0& 0& 0& 0& 0& 0\end{array}\right)

(13)
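As a quick numeric check of this example (illustrative code using stand-in sample values, not the authors' implementation), the sliding-window construction reproduces the structure of (13):

```python
import numpy as np

k, M, N = 3, 2, 36
x = np.arange(1.0, N + 1)            # stand-in samples x_1..x_36
x1 = x[::k]                          # X' : (x_1 x_4 ... x_34), size 12
x2 = x1[::k]                         # X'': (x'_1 x'_4 x'_7 x'_10)
x_zp = np.zeros(k * len(x2) - (k - 1))
x_zp[::k] = x2                       # X''_ZP, as in (12)

size_A = 2 * M * k - 1               # 11 columns
pad = M * k - 1                      # 5 leading zeros, eq. (8)
n_rows = len(x1)                     # 12 rows, one per entry of X'
x_zp0 = np.zeros(n_rows + size_A - 1)
x_zp0[pad:pad + len(x_zp)] = x_zp
B = np.stack([x_zp0[i:i + size_A] for i in range(n_rows)])
# B is 12x11; its first row is (0 0 0 0 0 x'_1 0 0 x'_4 0 0) and its
# last row is (x'_7 0 0 x'_10 0 ... 0), matching (13)
```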