Super-resolution for simultaneous realization of resolution enhancement and motion blur removal based on adaptive prior settings
 Takahiro Ogawa^{1},
 Daisuke Izumi^{1},
 Akane Yoshizaki^{1} and
 Miki Haseyama^{1}
https://doi.org/10.1186/1687-6180-2013-30
© Ogawa et al.; licensee Springer. 2013
Received: 11 June 2012
Accepted: 30 January 2013
Published: 22 February 2013
Abstract
A super-resolution method for simultaneously realizing resolution enhancement and motion blur removal based on adaptive prior settings is presented in this article. In order to obtain high-resolution (HR) video sequences from motion-blurred low-resolution video sequences, both resolution enhancement and motion blur removal have to be performed. However, if one is performed after the other, errors in the first process may cause performance deterioration of the subsequent process. Therefore, in the proposed method, a new problem that simultaneously performs the resolution enhancement and the motion blur removal is derived. Specifically, a maximum a posteriori estimation problem which estimates the original HR frames together with the motion blur kernels is introduced into our method. Furthermore, in order to obtain the posterior probability based on Bayes’ rule, a prior probability of the original HR frame, whose distribution can adaptively be set for each area, is newly defined. By adaptively setting the distribution of the prior probability, preservation of the sharpness in edge regions and suppression of ringing artifacts in smooth regions are realized. Consequently, based on these novel approaches, the proposed method can perform successful reconstruction of the HR frames. Experimental results show clear improvements of the proposed method over previously reported methods.
1 Introduction
High-resolution (HR) video sequences are necessary for various fundamental applications, and acquisition of data with an HR image sensor makes quality improvement straightforward. However, it is often difficult to capture video sequences of sufficiently high quality with current image sensors. Furthermore, video sequences often include motion blurs in many situations, e.g., when there is not enough light to avoid the use of a slow shutter speed. Image processing methodologies for increasing the visual quality are therefore necessary to bridge the gap between the demands of applications and physical constraints. Many researchers have proposed super-resolution (SR) methods for increasing the resolution levels of low-resolution (LR) video sequences [1–30]. Most SR methods are broadly categorized into two approaches: the learning-based (example-based) approach and the reconstruction-based approach. The learning-based approach estimates the HR frame from only its LR frame, but several other HR frames are utilized to learn a prior on the original HR frame [1–9]. On the other hand, the reconstruction-based approach estimates the HR frame from multiple LR frames, and many methods based on this approach have been proposed [10–30]. In this article, we focus on the reconstruction-based approach and discuss its details.
The reconstruction-based SR was first proposed by Tsai and Huang [10]. They used a frequency domain approach, and their formulation was extended by Kim et al. [11, 12]. In general, the frequency domain approaches have the strengths of theoretical simplicity and high computational efficiency. However, in these frequency domain approaches [10–12], the observation model of LR frames is restricted to translational motion only [17]. Moreover, due to the lack of data correlation in the frequency domain, it is difficult to effectively use spatial domain knowledge. Therefore, spatial domain approaches have often been developed to overcome the weaknesses of the frequency domain approaches [13–24, 27–30].
In general, since the estimation of HR frames is an ill-posed problem, prior information is introduced to determine the solution of the SR problem. This prior is represented as a prior probability or a regularization term, which is adopted to stabilize the inversion of the ill-posed problem. Typically, intensity gradients are used for the regularization, and L_1-norm or L_2-norm regularization approaches are often applied to them [13, 14, 18]. Total variation (TV) [31] is utilized as the most common regularization. This means the conventional methods assume that the TV obtained from the original HR frames follows a predefined distribution. Since L_2-norm regularization penalizes high-frequency components severely, the solution tends to become over-smoothed. On the other hand, although L_1-norm regularization keeps sharpness better than L_2-norm regularization, it tends to increase artifacts in smooth regions.
In addition to the above problems, these conventional SR methods only try to recover HR frames from their LR frames. However, motion blurs are also caused in the image acquisition process, and their removal must be performed together with the resolution enhancement. Therefore, many methods for removing motion blurs have been proposed [32–36]. In order to realize both the resolution enhancement and the motion blur removal, the conventional methods tend to perform these two procedures separately. Then, since errors in the first procedure may degrade the performance of the subsequent procedure, artifacts such as blurring and ringing are enhanced in the final output.
As shown in the above discussions, the conventional methods have the following problems: (i) simultaneous resolution enhancement and motion blur removal cannot be realized successfully, and (ii) regularization, i.e., prior information, cannot be provided adaptively for target video sequences. To solve these problems, the proposed method introduces the following two approaches.
 (i)
Simultaneous estimation of the HR frame and the motion blur kernels: In order to estimate the original HR frame from its motion-blurred LR frames, a posterior probability of the original HR frame and the motion blur kernels is newly defined. Then, by using maximum a posteriori (MAP) estimation, the proposed method performs the simultaneous resolution enhancement and motion blur removal. This enables suppression of the performance degradation due to separate processing (problem (i)). Note that for realizing the successful estimation of the HR frame in this approach, the following approach becomes necessary.
 (ii)
A new prior probability for the HR frame: The proposed method derives a new prior probability distribution of the HR frame, whose shape can adaptively be set to a suitable one for each area. By estimating the optimal shape adaptively, over-smoothing in edge regions and artifacts in smooth regions can be suppressed. Furthermore, the proposed method introduces a new weight factor concerning edge and blur directions into the derivation of the prior probability to reduce the over-smoothing that occurs in the blur direction as well as the ringing artifacts. Problem (ii) can thereby be alleviated by this approach.
Then, by combining the above two approaches, accurate reconstruction of the HR video sequences can be expected.
The remainder of this article is organized as follows. Section 2 shows the observation model of LR video sequences which is utilized in the proposed method. The resolution enhancement method of motionblurred LR video sequences is presented in Section 3. In Section 4, the effectiveness of our method is verified by some results of experiments. Concluding remarks are shown in Section 5.
2 Observation model of motion-blurred LR video sequences
In this section, we present the observation model utilized in the proposed method. Let the j th frame of a motion-blurred LR video sequence be denoted in a vector form by ${\mathbf{y}}^{(j)}={[{y}_{1}^{(j)},{y}_{2}^{(j)},\cdots,{y}_{{N}_{1}{N}_{2}}^{(j)}]}^{T}\,(\in {\mathbf{R}}^{{N}_{L}})$, where N_1×N_2 is the size of the LR frame, and N_L = N_1 N_2. In this article, ^T denotes the vector/matrix transpose operator. The i th frame of its HR video sequence is denoted in a vector form by ${\mathbf{x}}^{(i)}={[{x}_{1}^{(i)},{x}_{2}^{(i)},\cdots,{x}_{{q}_{1}{N}_{1}{q}_{2}{N}_{2}}^{(i)}]}^{T}\,(\in {\mathbf{R}}^{{N}_{H}})$, where q_1 N_1 × q_2 N_2 is the size of the HR frame, N_H = q_1 N_1 q_2 N_2, and q_1 ≥ 1, q_2 ≥ 1. Note that j ∈ {i−M, i−M+1, …, i, …, i+M−1, i+M}, i.e., the i th HR frame is reconstructed from the 2M+1 motion-blurred LR frames by our method in the following section.
The j th motion-blurred LR frame is modeled as

$${\mathbf{y}}^{(j)} = \mathbf{D}\mathbf{B}{\mathbf{H}}^{(j)}{\mathbf{F}}^{(i,j)}{\mathbf{x}}^{(i)} + {\mathbf{v}}^{(j)}. \qquad (1)$$
In the above equations, ${\mathbf{F}}^{(i,j)}\,(\in {\mathbf{R}}^{{N}_{H}\times {N}_{H}})$ is a motion operator between the i th HR frame x ^{(i)} and the original HR frame corresponding to the j th motion-blurred LR frame y ^{(j)}, ${\mathbf{H}}^{(j)}\,(\in {\mathbf{R}}^{{N}_{H}\times {N}_{H}})$ is a blurring operator due to the motion blur in the j th frame, B $(\in {\mathbf{R}}^{{N}_{H}\times {N}_{H}})$ is a low-pass filter, D $(\in {\mathbf{R}}^{{N}_{L}\times {N}_{H}})$ is a downsampling operator, and ${\mathbf{v}}^{(j)}\,(\in {\mathbf{R}}^{{N}_{L}})$ is an additive white noise vector in the j th LR frame. In this article, we assume D and B are known and correspond to the bicubic operator. Furthermore, F ^{(i,j)} is calculated by using the simple block matching method implemented as the function “cvCalcOpticalFlowBM” in the OpenCV library [37].
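As an illustration, the degradation chain y = D B H F x + v can be sketched in a few lines of numpy. This is a sketch only: F is taken as the identity, B is approximated by a 3×3 box filter (the paper uses a bicubic operator), and the helper names are our own.

```python
import numpy as np

def conv2_same(img, kernel):
    """Naive 'same'-size 2-D filtering with zero padding (correlation,
    which equals convolution for the symmetric kernels used here)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for dy in range(kh):
        for dx in range(kw):
            out += kernel[dy, dx] * padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def degrade(x, motion_kernel, q=2, noise_sigma=0.0, rng=None):
    """Sketch of the observation model y = D B H F x + v with F = identity.
    `motion_kernel` plays the role of H; a box filter stands in for the
    low-pass B (an assumption; the paper uses bicubic B and D); D is
    q-fold decimation; v is optional white Gaussian noise."""
    h = conv2_same(x, motion_kernel)              # H: motion blur
    b = conv2_same(h, np.full((3, 3), 1.0 / 9))   # B: crude low-pass stand-in
    y = b[::q, ::q]                               # D: downsampling
    if noise_sigma > 0:
        rng = rng or np.random.default_rng(0)
        y = y + rng.normal(0.0, noise_sigma, y.shape)  # v: additive noise
    return y
```

For an N_1 q × N_2 q HR frame, `degrade` returns an N_1 × N_2 LR frame, matching the dimensions N_H and N_L above.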
3 SR algorithm based on adaptive prior settings
where ${\mathbf{x}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{N}_{H}}\right)$ is a vector form of an original HR frame (j th HR frame) of j th motionblurred LR frame y ^{(j)}. Furthermore, ${\mathbf{K}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{L}_{1}\times {L}_{2}}\right)$ and ${\mathbf{X}}^{\left(j\right)}\phantom{\rule{1em}{0ex}}\left(\in {\mathbf{R}}^{{q}_{1}{N}_{1}\times {q}_{2}{N}_{2}}\right)$ are, respectively, the matrix forms of j th motion blur kernel k ^{(j)} and j th HR frame x ^{(j)}, and ⊗ is a convolution operator. In addition, vec[·] is a vectorization operator.
In the above equation, we can utilize the observation model shown in the previous section for the likelihood Pr(y|x ^{(i)},β ^{(i)},k), whose details are given in Section 3.2. Furthermore, the proposed method defines a new prior probability distribution of the HR frame Pr(x ^{(i)},β ^{(i)}), whose shape can adaptively be set for each area by determining the parameters β ^{(i)}, for accurately reconstructing the target HR frame. In addition, a new weight factor is determined by using motion blur and edge directions and is introduced into this prior probability to suppress the over-smoothing in edge regions and the noise and ringing artifacts in smooth regions. Then successful estimation of the HR frame and the motion blur kernels from the obtained posterior probability based on the MAP estimation can be expected.
As described above, we adopt the probability model. Furthermore, we use the probability model simultaneously representing the original HR frame x ^{(i)}, the parameters β ^{(i)} and the motion blur kernels k. Thus, we briefly explain the reason why the probability model is adopted and the reason why the probability model simultaneously representing x ^{(i)}, β ^{(i)} and k is used, respectively.
Reason why we adopt the probability model: Many frameworks that do not adopt probability models have been proposed. In general, these methods tend to assume that the distribution of the estimation target is represented by only one simple distribution such as the Gaussian distribution. On the other hand, in methods that adopt probability models, it becomes feasible to adaptively estimate the distribution from the statistical characteristics of the estimation target. Then, as shown in the proposed method, a probability model whose distribution matches the estimation target can directly be used for its reconstruction. Therefore, due to its high degree of freedom, the proposed method uses the probability model.
Reason why the probability model simultaneously representing x ^{(i)}, β ^{(i)}, and k is used: Since the proposed method tries to simultaneously perform the SR and the motion blur removal, we must estimate both the motion blur kernels k and the original HR frame x ^{(i)} from only the motion-blurred LR frames y. Furthermore, it is difficult to represent the original HR frame x ^{(i)} by using a simple fixed distribution, and we have to model it by using a distribution whose shape can adaptively be determined for each area based on its parameters β ^{(i)}. Therefore, the original HR frame x ^{(i)} depends on the parameters β ^{(i)}, and the motion-blurred LR frames y are generated from the original HR frame x ^{(i)} and the motion blur kernels k. In order to estimate these three unknowns x ^{(i)}, β ^{(i)}, and k from only the motion-blurred LR frames y without suffering from their contradictions, the proposed method adopts a probability model that enables their simultaneous representation.
This section is organized as follows. Section 3.1 shows the prior probability distribution used in the proposed method. The algorithm for the reconstruction of the HR frame is presented in Section 3.2.
3.1 Definition of prior probability distributions
This section explains the prior probability distributions of the HR frame, its parameters, and the motion blur kernels utilized in our method. As shown in Equation (8), the prior probability Pr(x ^{(i)},β ^{(i)},k) is divided into Pr(x ^{(i)},β ^{(i)}) and Pr (k). Thus, in this section, we explain the details of Pr(x ^{(i)},β ^{(i)}) and Pr (k).
3.1.1 Prior probability of HR frame and its parameters
where we assume the prior probability Pr(β ^{(i)}) ∝ const. in the above equation. The conditional probability Pr_{ e }(x ^{(i)}|β ^{(i)}) is defined in such a way that sharp edges in the HR frame are kept. Therefore, we adaptively set its distribution based on intensity gradients for each area in the HR frame. Furthermore, the conditional probability Pr_{ s }(x ^{(i)}|β ^{(i)}) is adopted to suppress noises and ringing artifacts in smooth regions of the HR frame. By using the intensity gradients of the motion-blurred LR frame, we suppress the increase of the intensity gradients in smooth regions of the HR frame. The details of Pr_{ e }(x ^{(i)}|β ^{(i)}) and Pr_{ s }(x ^{(i)}|β ^{(i)}) are shown below.
The details of Pr_{ e }(x^{(i)}|β ^{(i)})
Note that β ^{(i)} in Equation (12) contains all of ${\beta}_{s,t}^{(i)}$. Note also that each pixel s has $|{\mathcal{N}}_{s}|$ parameters ${\beta}_{s,t}^{(i)}$, where $|{\mathcal{N}}_{s}|$ is the number of pixels in its neighborhood. Thus, the dimension of β ^{(i)} becomes ${N}_{H}|{\mathcal{N}}_{s}|$. In the proposed method, the shape of the prior probability distribution Pr(x ^{(i)},β ^{(i)}) changes at each area in the HR frame by introducing the shape parameter ${\beta}_{s,t}^{(i)}$ ($1\le {\beta}_{s,t}^{(i)}\le 2$) into the definition of the conditional probability Pr_{ e }(x ^{(i)}|β ^{(i)}). If ${\beta}_{s,t}^{(i)}=1$ or ${\beta}_{s,t}^{(i)}=2$, the distribution of Pr_{ e }(x ^{(i)}|β ^{(i)}) equals the Laplace distribution or the Gaussian distribution, respectively. It should be noted that the HR frame generally contains both edge regions and smooth regions. If the prior probability of the HR frame is defined by one distribution, both edge and smooth regions are implicitly assumed to have the same properties. However, these regions actually have different properties from each other. Thus, the proposed method estimates the parameter of the generalized Gaussian distribution (GGD), which determines its shape, at each area in the HR frame. In the HR frame, the edge regions should have large values of the intensity gradients. By automatically estimating a distribution that is close to the Laplace distribution, the penalty on the intensity gradient becomes weaker than that defined by using the Gaussian distribution, where the details of this estimation are shown in Section 3.2. Consequently, by keeping the large intensity gradients, the edge regions can preserve their sharpness.
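The effect of the shape parameter can be seen in a small sketch of the resulting negative-log-prior penalty on intensity gradients. For brevity, only horizontal neighbor pairs are used and normalization constants are dropped; these simplifications are assumptions of the sketch, not the paper's full Equation (12).

```python
import numpy as np

def ggd_gradient_penalty(x, beta):
    """Negative log of a GGD-style prior on intensity gradients,
    sketched for horizontal neighbor pairs only. `beta` holds a
    per-pixel shape parameter in [1, 2]: beta = 1 gives the Laplace
    (L1) penalty that preserves edges, beta = 2 the Gaussian (L2)
    penalty that smooths. Normalization constants are dropped."""
    grad = np.abs(np.diff(x, axis=1))           # |x_s - x_t| for horizontal pairs
    return float(np.sum(grad ** beta[:, :-1]))  # shape parameter applied per pair
```

For a gradient of magnitude 10, the Laplace setting charges 10 per pair while the Gaussian setting charges 100, which is why a β near 1 in edge regions leaves sharp edges comparatively unpenalized.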
The details of Pr_{ s }(x^{(i)}|β ^{(i)})
In Equation (15), we use the LR frame to constrain the intensity gradients of the HR frame for suppressing noises and ringing artifacts. Equation (15) is motivated by the fact that the motion blur can generally be considered a smoothing filtering process. In a locally smooth region of the LR frame, the corresponding region in the HR frame should also be smooth. In Equation (17), since the estimated value of ${\beta}_{s,t}^{(i)}$ becomes larger in smooth regions, m _{ s } also becomes larger in such regions. In regions having large values of m _{ s }, the intensity gradient is strongly constrained by the LR frame. Since the LR frame does not have any ringing or noise artifacts, increases in the intensity gradient are prevented in the estimated HR frame, and those artifacts can be suppressed.
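The mechanism above can be sketched as an L2 penalty that ties HR gradients to the (upsampled) LR gradients where β flags the region as smooth. The soft-ramp weight and the threshold `beta_thresh` are illustrative assumptions standing in for the paper's exact m_s of Equation (17).

```python
import numpy as np

def smoothness_penalty(x_hr, g_lr_up, beta, beta_thresh=1.5):
    """Negative log of a Pr_s-style term: an L2 constraint tying the
    HR gradient magnitude to the upsampled LR gradient magnitude
    `g_lr_up` in regions that the shape parameter beta flags as
    smooth (beta near 2). The weight m_s of Eq. (17) is simplified
    here to a soft ramp; `beta_thresh` is an assumed constant."""
    g_hr = np.abs(np.diff(x_hr, axis=1))                 # HR horizontal gradients
    m = np.clip(beta[:, :-1] - beta_thresh, 0.0, None)   # larger where beta -> 2
    return float(np.sum(m * (g_hr - g_lr_up) ** 2))
```

Because the LR frame is ringing-free, any gradient the HR estimate grows in a flat LR region is charged by this term, which is exactly the suppression effect described above.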
3.1.2 Prior probability of motion blur kernels
and η ^{(j)} is a rate parameter. Since a motion blur kernel traces the path of the camera, it tends to be sparse, with most values close to zero. This prior probability for the motion blur kernel is used in [19], and we adopt the same prior probability in this article.
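As a minimal sketch, the negative log of an exponential (rate-parameter) prior on a nonnegative kernel reduces to a scaled sum of its entries; the explicit nonnegativity guard is an assumption made visible here, not stated in this form in the paper.

```python
import numpy as np

def kernel_neg_log_prior(k, eta):
    """Negative log of an exponential sparsity prior on the motion
    blur kernel: -log Pr(k) = eta * sum(k) + const for k >= 0, which
    favors kernels that are near zero almost everywhere along the
    camera path. Negative entries get infinite cost, making the
    nonnegativity assumption explicit."""
    if np.any(k < 0):
        return np.inf
    return float(eta * np.sum(k))
```

In the optimization, this term interacts with the gradient steps that update k (Section 3.2), pulling small kernel entries toward zero.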
3.1.3 Discussion of effectiveness of new prior probability
As shown in the above explanations, the proposed method tries to perform the resolution enhancement and the motion blur removal while keeping the sharpness in edge regions and suppressing noises and ringing artifacts in smooth regions. In general, in order to derive the posterior probability of the HR frame based on Bayes’ rule as shown in Equation (3), the likelihood and the prior probability should be defined. Note that the likelihood is derived from the observation model, and its distribution tends to be common between different methods, where the details of the likelihood are shown in the following section. Therefore, the proposed method focuses on the prior probability and introduces the following novel points to solve the conventional problems.

Adaptive setting of the distribution shape of the prior probability in Equation (12)
The proposed method adaptively determines the parameters ${\beta}_{s,t}^{(i)}$, which set the distribution shape of the prior probability, in such a way that the reconstructed HR frame keeps sharpness in edge regions and smoothness in smooth regions.

Suppression of noises and ringing artifacts in Equation (15)
The proposed method monitors the parameters ${\beta}_{s,t}^{\left(i\right)}$, which represent the distribution shape, in Equation (17) and derives the new prior probability to suppress the occurrence of noises and ringing artifacts in smooth regions.
The proposed method divides Pr(x ^{(i)},β ^{(i)}) into Pr_{ e }(x ^{(i)}|β ^{(i)}) and Pr_{ s }(x ^{(i)}|β ^{(i)}) in order to deal with edge and smooth areas separately during the SR process. It should be noted that Pr_{ e }(x ^{(i)}|β ^{(i)}) is defined by the GGD, and it has a distribution between the Laplace distribution and the Gaussian distribution. On the other hand, Pr_{ s }(x ^{(i)}|β ^{(i)}) is defined as the Gaussian distribution, i.e., Pr_{ e }(x ^{(i)}|β ^{(i)}) has a higher degree of freedom. This is because Pr_{ e }(x ^{(i)}|β ^{(i)}) and Pr_{ s }(x ^{(i)}|β ^{(i)}) have different roles in the proposed method. Specifically, Pr_{ e }(x ^{(i)}|β ^{(i)}) is adopted for correctly representing the prior on intensity gradients of the original HR frame. Therefore, the proposed method uses the GGD for providing its distribution correctly. Unfortunately, since even the GGD cannot perfectly represent the prior, some artifacts may occur as a result, and Pr_{ s }(x ^{(i)}|β ^{(i)}) becomes necessary to remove such artifacts by smoothing the corresponding regions. In the proposed method, we try to perform a simple smoothing, and thus Pr_{ s }(x ^{(i)}|β ^{(i)}) based on the Gaussian distribution using the L_2-norm is utilized. Note that since the smoothing should not be performed in edge regions, the proposed method monitors ${\beta}_{s,t}^{(i)}$ to avoid over-smoothing in those regions.
It is also possible to apply some post-processing techniques, such as smoothing filters, to remove those artifacts. In this case, we should introduce functions such as those shown in Equations (15)–(17) into the design of the filters. Nevertheless, since artifacts (i.e., errors) caused in smooth regions affect the estimation of the whole target HR frame during the optimization and also cause further estimation errors, we use Pr_{ s }(x ^{(i)}|β ^{(i)}) simultaneously with Pr_{ e }(x ^{(i)}|β ^{(i)}). Then it is expected that the errors in the smooth regions can be suppressed in the reconstruction process, and the propagation of those errors to other areas tends to be avoided.
Then, from the above novel points, our method provides a solution to the conventional methods’ inability to perform adaptive reconstruction.
3.2 Algorithm for reconstructing HR frame
where ${\Delta}_{y}^{(s,t)}$ and ${\Delta}_{x}^{(s,t)}$ are the distances from the s th pixel to the t th pixel along the y- and x-coordinates, respectively. The matrix K ^{(i)} corresponds to the matrix shown in Equation (6), and K ^{(i)}(u,v) (u=1,2,…,L_1; v=1,2,…,L_2) is its (u,v)th element. If the direction between the s th pixel and the t th pixel becomes parallel to the main direction of the motion blur, the weight factor becomes small. If only the resolution enhancement is performed, the regularization term depends only on the characteristics of the HR frame since the blur is commonly constant in all directions. However, due to both the resolution reduction and the motion blur, the blur does not remain constant in all directions. This weight factor is utilized for avoiding the over-smoothing caused by the regularization term.
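A directional weight of this kind can be sketched as follows. The dominant blur direction is estimated here from the kernel's intensity-weighted principal axis, and the weight takes the form 1 − |cos θ|; both choices are illustrative assumptions, not the paper's exact formula.

```python
import numpy as np

def direction_weight(dx, dy, kernel):
    """Sketch of a directional regularization weight: the weight
    shrinks when the direction (dx, dy) from pixel s to pixel t
    aligns with the dominant blur direction of `kernel`. The
    principal-axis estimate and the 1 - |cos| form are assumptions."""
    ys, xs = np.indices(kernel.shape)
    w = kernel / kernel.sum()
    cx, cy = (w * xs).sum(), (w * ys).sum()        # kernel centroid
    # second-moment matrix of the kernel mass about its centroid
    cov = np.array([[(w * (xs - cx) ** 2).sum(), (w * (xs - cx) * (ys - cy)).sum()],
                    [(w * (xs - cx) * (ys - cy)).sum(), (w * (ys - cy) ** 2).sum()]])
    evals, evecs = np.linalg.eigh(cov)
    bx, by = evecs[:, np.argmax(evals)]            # main blur axis (x, y)
    v = np.array([dx, dy], dtype=float)
    cos = abs(v @ np.array([bx, by])) / (np.linalg.norm(v) + 1e-12)
    return 1.0 - cos                               # small when parallel to the blur
```

With a horizontal line kernel, horizontal pixel pairs get weight near 0 (regularization relaxed along the blur) while vertical pairs get weight near 1.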
Finally, we explain the optimization procedure for Equation (22). Since the cost function shown in Equation (22) involves three large sets of unknowns (the HR frame x ^{(i)}, the parameters β ^{(i)}, and the motion blur kernels k), the use of direct search techniques is intractable. Therefore, the following cyclic coordinate descent optimization procedures are adopted to estimate the unknowns. Specifically, we iteratively perform the following three procedures.
Step 1: Update of the HR frame x ^{(i)}
Step 2: Update of the parameters β ^{(i)}
Step 3: Update of the motion blur kernels k
Then we can simultaneously estimate the HR frame ${\widehat{\mathbf{x}}}^{(i)}$, the parameters ${\widehat{\beta}}^{(i)}$, and the motion blur kernels $\widehat{\mathbf{k}}$. This optimization method is based on the steepest descent algorithm; thus, the convergence of the iterative process is not guaranteed.
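The three-step cycle above can be summarized in a small skeleton. The three update callbacks stand in for the paper's gradient-descent steps (Steps 1–3); their signatures and the fixed outer iteration count are assumptions of this sketch.

```python
import numpy as np

def coordinate_descent(y, x0, beta0, k0, update_x, update_beta, update_k, n_outer=10):
    """Skeleton of cyclic coordinate descent: each outer iteration
    updates the HR frame x, the shape parameters beta, and the blur
    kernel k in turn while holding the other two fixed. The update
    callbacks play the roles of Steps 1-3 in the text."""
    x, beta, k = x0, beta0, k0
    for _ in range(n_outer):
        x = update_x(y, x, beta, k)        # Step 1: HR frame
        beta = update_beta(y, x, beta, k)  # Step 2: shape parameters
        k = update_k(y, x, beta, k)        # Step 3: blur kernel
    return x, beta, k
```

As the text notes, steepest-descent updates inside such a cycle do not guarantee convergence in general; in practice the iteration counts are fixed, as in the experiments of Section 4.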
In the proposed method, we newly define the posterior probability for simultaneous estimation of the HR frame and the motion blur kernels. Furthermore, the proposed method introduces the new prior probability, and by estimating the optimal parameter determining its distribution in each area, the sharpness in edge regions is preserved. In smooth regions, noises and ringing artifacts are reduced by using the information of the motion-blurred LR frames. Therefore, the proposed method performs the reconstruction more adaptively than the conventional methods, and accurate restoration and resolution enhancement by our method can be expected.
4 Experimental results
The performance of the proposed method is verified in this section. We used the video sequences shown in Table 1. According to Equation (1), the motion-blurred LR video sequences shown in Table 2 were generated using the motion blur kernels (PSFs) shown in Figure 1. Then we applied the proposed method to the LR video sequences and generated resolution-enhanced video sequences at the original resolution. When applying the proposed method to the test sequences, we simply set α=1, μ=0, ${\lambda}_{1}^{(j)}=\frac{1}{|i-j|}$, λ _{2}=1.0×10^{−3}, h _{1}=2.0×10^{−2}, h _{2}=1.0×10^{−5}, h _{3}=1.0×10^{−11}, and η=5.0×10^{−7}. It should be noted that α, μ, and ${\lambda}_{1}^{(j)}$ were set to reasonable values. Furthermore, since h _{1}, h _{2}, and h _{3} only determine the step sizes in the cyclic coordinate descent optimization procedures, they do not affect the performance of the proposed method if we set them to sufficiently small values. Then the parameters that affect the performance of the proposed method are λ _{2} and η, and we set these parameters based on some preliminary experiments. In addition to the setting of the above parameters, we also show the experimental conditions below.

Number of frames used to reconstruct each HR frame: 5 (i.e., M=2)

Number of iterations for the whole optimization: 10

Note that in each iteration, we also performed the following iterations for x ^{(i)}, β ^{(i)} and k ^{(j)}:

Number of iterations for optimizing x ^{(i)}: 300

Number of iterations for optimizing β ^{(i)}: 50

Number of iterations for optimizing k ^{(j)}: 10


Block size used in the block matching algorithm: 7×7 pixels

Neighborhood ${\mathcal{N}}_{s}$: Eight neighboring pixels of pixel s
Table 1 Test video sequences used for the verification in this experiment
Sequence  Size (pixels)  Fps 

“Mobile & Calendar”  720×576  30 
“Susie”  720×480  30 
“Coast Guard”  352×288  30 
 1.
Comparative methods 1 and 2
For comparison with the proposed method, we used the conventional methods in [13, 18]. These methods are resolution enhancement methods only, which use the L_2-norm and L_1-norm regularization terms, respectively. In order to compare the proposed method with these conventional methods, the degradation model including the motion blur is used, and the motion blur kernels are estimated by the method of Fergus et al. [34]. The proposed method adaptively determines the prior distribution, i.e., the regularization term is adaptively determined for the target video sequences. Thus, these comparative methods are suitable for comparison with the proposed method.
 2.
Comparative method 3
The conventional method [39], which is implemented using the software provided by the authors, is a resolution enhancement method only, utilizing the frequency domain approach. In order to compare the performance of the proposed method with this method, we remove the motion blur by the method of Fergus et al. [34] after applying the resolution enhancement. This method is used as a benchmark.
In order to perform the same experiments across the different methods, we performed the registration (motion estimation) by using the simple block matching procedure shown in Section 2 for the proposed method and Comparative methods 1 and 2. It should be noted that Comparative method 3 based on [39] is a different approach, and thus we used its own motion estimation approach for this comparative method. Recently, many successful registration methods have been proposed, and they can drastically improve the performance of SR. However, since the main focus of this article is the reconstruction algorithm, we adopted such simple procedures.
Performance comparison (PSNR (dB)) between the proposed method and the conventional methods
Sequence  Proposed  Comparative  Comparative  Comparative 

method  method 1  method 2  method 3  
“Mobile & Calendar”  32.98  32.78  32.44  32.45 
“Susie”  36.29  35.55  36.02  35.86 
“Coast Guard”  31.63  31.55  31.39  31.32 
From the obtained results, we can see that the proposed method enables successful reconstruction of the HR video sequences from the motion-blurred LR video sequences. As shown in the previous section, the proposed method newly adopts the following two novel approaches:
 (i)
Simultaneous resolution enhancement and motion blur removal
The proposed method uses the posterior probability for simultaneously estimating the HR frame and the motion blur kernels from the target motion-blurred LR frames. This provides a solution to the problem of the conventional methods that separately perform these two reconstructions, i.e., the problem that errors caused in the first reconstruction affect the performance of the subsequent reconstruction.
 (ii)
Adaptive setting of prior probability on HR frame
In the proposed method, the prior probability is adaptively set for the target video sequence. Specifically, we calculate the parameters which determine the distribution shape of the prior probability on intensity gradients to keep the sharpness in edge regions. Furthermore, the prior probability is also determined in such a way that noises and ringing artifacts are suppressed in smooth regions.
When estimating unknown data from observed data, the estimation generally becomes an ill-posed problem. Therefore, we have to provide some prior information on the estimation target. In general, since it is quite difficult to provide the prior perfectly, estimation errors are always caused by the mismatch of the prior. In methods that separately perform the SR and the motion blur removal, this problem occurs in each process, and the estimation performance of the original HR frame is degraded. Furthermore, after the first process finishes, the obtained result contains errors due to the above problem, and the second process estimates the original HR frame by regarding the result obtained by the first process as an observation. However, the model in the second process does not generally consider the error caused in the first process, and its compensation becomes difficult.
Therefore, if one process is performed after the other, errors in the first process may cause performance degradation of the subsequent process. Thus, the probability model simultaneously estimating all unknowns is introduced into the proposed method.
Finally, we discuss the limitation of the proposed method and show its future outlook. In the proposed method, we calculate the motion vectors for estimating F ^{(i,j)} by using the simple block matching method [37]. It is well known that results of SR severely depend on the estimation performance of F ^{(i,j)}. In this article, we only focus on the performance of the reconstruction algorithm, but it is necessary to adopt more accurate motion estimation algorithms for improving the performance of SR.
Next, enhancing only the spatial resolution while removing motion blurs will introduce temporal aliasing effects. Therefore, video-to-video SR becomes necessary for reducing this problem. Many space-time-based methods have been proposed, such as [36, 41, 42], and we also have to extend our method to a video-to-video version.
Furthermore, in this article, we only considered motion blur caused by the ego (camera) motion. However, in real applications, we have to address motion blur caused by two different factors: the ego (global) motion and the objects’ (local) motions. For successful video-to-video SR, we have to consider both global and local motion blurs.
These topics are future work in our study.
5 Conclusion
In this article, we have presented an SR method based on adaptive prior settings with two novel approaches: (i) simultaneous estimation of the HR frame and the motion blur kernels, and (ii) a new prior probability for correctly representing the HR frame. We simultaneously estimate the HR frame and the motion blur kernels based on the new prior probability. Consequently, successful reconstruction of HR video sequences can be realized while preserving sharpness and suppressing artifacts.
Note that although the proposed method achieved accurate SR in the experiments, some artifacts occur between the edge regions and smooth regions. This is because it becomes difficult to accurately estimate the parameters of the prior distribution of the original HR frame from only the motion-blurred LR frames. We will have to tackle this problem in future work. Furthermore, we simply set the parameters used in the proposed method to the values that yielded the highest performance. However, they should be determined adaptively from the target video sequences.
In addition, the video-to-video SR approach also has to be realized for reducing temporal aliasing effects. We will study this point in a subsequent report.
Declarations
Acknowledgements
This research was partly supported by a Grant-in-Aid for Scientific Research (B) 21300030, from the Japan Society for the Promotion of Science (JSPS).
References
 Freeman WT, Pasztor EC, Carmichael OT: Learning low-level vision. Int. J. Comput. Vis. 2000, 40: 25-47. 10.1023/A:1026501619075
 Hertzmann A, Jacobs CE, Oliver N, Curless B, Salesin DH: Image analogies. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’01 2001, 327-340.
 Freeman WT, Jones TR, Pasztor EC: Example-based super-resolution. IEEE Comput. Graph. Appl. 2002, 22: 56-65.
 Sun J, Zheng NN, Tao H, Shum HY: Image hallucination with primal sketch priors, vol. 2. Proceedings of the 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2003, 729-736.
 Wang Q, Tang X, Shum H: Patch based blind image super resolution, vol. 1. Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV’05) 2005, 709-716.
 Stephenson TA, Chen T: Adaptive Markov random fields for example-based super-resolution of faces. EURASIP J. Appl. Signal Process. 2006, 2006: 225-225.
 Jiji CV, Chaudhuri S: Single-frame image super-resolution through contourlet learning. EURASIP J. Appl. Signal Process. 2006, 2006: 235-235.
 Jiji CV, Chaudhuri S, Chatterjee P: Single frame image super-resolution: should we process locally or globally? Multidimen. Syst. Signal Process. 2007, 18: 123-152. 10.1007/s11045-007-0024-1
 Li X, Lam KM, Qiu G, Shen L, Wang S: An efficient example-based approach for image super-resolution. International Conference on Neural Networks and Signal Processing 2008, 575-580.
 Tsai R, Huang T: Multiframe image restoration and registration. Adv. Comput. Vis. Image Process. 1984, 1: 317-339.
 Kim S, Bose N, Valenzuela H: Recursive reconstruction of high resolution image from noisy undersampled multiframes. IEEE Trans. Acoust. Speech Signal Process. 1990, 38(6): 1013-1027. 10.1109/29.56062
 Kim S, Su WY: Recursive high-resolution reconstruction of blurred multiframe images. IEEE Trans. Image Process. 1993, 2(4): 534-539. 10.1109/83.242363
 Schultz R, Stevenson R: Extraction of high-resolution frames from video sequences. IEEE Trans. Image Process. 1996, 5: 996-1011. 10.1109/83.503915
 Hardie R, Barnard K, Armstrong E: Joint MAP registration and high-resolution image estimation using a sequence of undersampled images. IEEE Trans. Image Process. 1997, 6(12): 1621-1633. 10.1109/83.650116
 Baker S, Kanade T: Limits on super-resolution and how to break them. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24(9): 1167-1183. 10.1109/TPAMI.2002.1033210
 Farsiu S, Robinson D, Elad M, Milanfar P: Robust shift and add approach to super-resolution. Appl. Digit. Image Process. XXVI 2003, 5203: 121-130. 10.1117/12.507194
 Park S, Park M, Moon G: Super-resolution image reconstruction: a technical overview. IEEE Signal Process. Mag. 2003, 20(3): 21-36. 10.1109/MSP.2003.1203207
 Farsiu S, Robinson M, Elad M, Milanfar P: Fast and robust multiframe super-resolution. IEEE Trans. Image Process. 2004, 13(10): 1327-1344. 10.1109/TIP.2004.834669
 Hu H, Kondi L: A regularization framework for joint blur estimation and super-resolution of video sequences. ICIP 2005, 3: 329-332.
 van Ouwerkerk J: Image super-resolution survey. Image Vis. Comput. 2006, 24(10): 1039-1052. 10.1016/j.imavis.2006.02.026
 Shen H, Zhang L, Huang B, Li P: A MAP approach for joint motion estimation, segmentation, and super resolution. IEEE Trans. Image Process. 2007, 16(2): 479-490.
 Takeda H, Farsiu S, Milanfar P: Kernel regression for image processing and reconstruction. IEEE Trans. Image Process. 2007, 16(2): 349-366.
 Yuan-Ran L, Dao-Qing D: Color super-resolution reconstruction and demosaicing using elastic net and tight frame. IEEE Trans. Circuits Syst. I: Regular Papers 2008, 55(11): 3500-3512.
 Omer O, Tanaka T: Joint blur identification and high-resolution image estimation based on weighted mixed-norm with outlier rejection. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2008) 2008, 1305-1308.
 Omer O, Tanaka T: Extraction of high-resolution frame from low-resolution video sequence using region-based motion estimation. IEICE Trans. Fund. 2010, E93-A(4): 742-751. 10.1587/transfun.E93.A.742
 Omer O, Tanaka T: Region-based weighted-norm with adaptive regularization for resolution enhancement. Digit. Signal Process. 2011, 21(4): 508-516. 10.1016/j.dsp.2011.02.005
 Protter M, Elad M, Takeda H, Milanfar P: Generalizing the nonlocal-means to super-resolution reconstruction. IEEE Trans. Image Process. 2009, 18: 36-51.
 Baboulaz L, Dragotti P: Exact feature extraction using finite rate of innovation principles with an application to image super-resolution. IEEE Trans. Image Process. 2009, 18(2): 281-298.
 Takeda H, Milanfar P, Protter M, Elad M: Super-resolution without explicit subpixel motion estimation. IEEE Trans. Image Process. 2009, 18(9): 1958-1975.
 Lee IH, Bose N, Lin CW: Locally adaptive regularized super-resolution on video with arbitrary motion. 17th IEEE International Conference on Image Processing (ICIP) 2010, 897-900.
 Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259-268. 10.1016/0167-2789(92)90242-F
 Richardson WH: Bayesian-based iterative method of image restoration. J. Opt. Soc. Am. 1972, 62: 55-59. 10.1364/JOSA.62.000055
 Lucy L: An iterative technique for the rectification of observed distributions. Astron. J. 1974, 79(6): 745-754.
 Fergus R, Singh B, Hertzmann A, Roweis S, Freeman W: Removing camera shake from a single photograph. ACM Trans. Graph. (SIGGRAPH) 2006, 25(3): 787-794. 10.1145/1141911.1141956
 Yuan L, Sun J, Quan L, Shum H: Progressive inter-scale and intra-scale non-blind image deconvolution. ACM Trans. Graph. (SIGGRAPH) 2008, 27(3): 1-10.
 Takeda H, Milanfar P: Removing motion blur with space-time processing. IEEE Trans. Image Process. 2011, 20(10): 2990-3000.
 Bradski G: The OpenCV library. Dr. Dobb’s Journal of Software Tools, 2000.
 Do M, Vetterli M: Wavelet-based texture retrieval using generalized Gaussian density and Kullback-Leibler distance. IEEE Trans. Image Process. 2002, 11: 146-158. 10.1109/83.982822
 Vandewalle P, Süsstrunk S, Vetterli M: A frequency domain approach to registration of aliased images with application to super-resolution. EURASIP J. Appl. Signal Process. 2006, 2006: 1-14.
 Meyer F: Topographic distance and watershed lines. Signal Process. 1994, 38: 113-125. 10.1016/0165-1684(94)90060-4
 Faktor A, Irani M: Space-time super-resolution from a single video. Proceedings of CVPR 2011, 3353-3360.
 Shechtman E, Caspi Y, Irani M: Space-time super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27(4): 531-545.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.