 Research
 Open Access
Sparse representation utilizing tight frame for phase retrieval
EURASIP Journal on Advances in Signal Processing volume 2015, Article number: 96 (2015)
Abstract
We treat the phase retrieval (PR) problem of reconstructing the signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques have been used to address this problem by exploiting various priors such as nonnegativity, support, and Fourier magnitude constraints. More recent methods exploit sparsity to improve the reconstruction quality. However, previous algorithms that use a sparsity prior suffer either from low reconstruction quality at low oversampling factors or from sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate a sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the resulting nonconvex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that our proposed algorithm obtains better reconstruction quality than the conventional alternating projection methods, and even outperforms recent sparsity-based algorithms in terms of reconstruction quality.
Introduction
In science and engineering fields such as crystallography, neutron radiography, astronomy, signal processing, and optical imaging [1, 2], it is difficult to design measurement setups that allow direct recording of the phase, which carries the critical structural information of the test object or signal [1]. Interestingly, an alternative approach called algorithmic phase retrieval has arisen in these fields. The goal of phase retrieval (PR) algorithms is to retrieve the signal solely from its Fourier magnitude spectrum, which can be recorded by the sensors. However, since a global phase shift, conjugate inversion, or spatial shift of the signal of interest leads to the same Fourier magnitude, the PR problem is ill-posed. Therefore, prior information on the underlying signal is incorporated into the recovery process to enable its recovery.
In the past decades, the alternating projection strategy pioneered by Gerchberg and Saxton [3] has been popular for PR. The object magnitude constraint and Fourier magnitude constraint are used in the Gerchberg-Saxton (GS) algorithm, which addresses the problem of recovering a complex object from its Fourier magnitude by projecting onto the constraint sets alternately. Replacing the object-domain magnitude constraint of the GS algorithm, Fienup [4] in 1978 suggested a PR algorithm called the hybrid input-output (HIO) algorithm, which incorporates nonnegativity and support constraints into the PR process. Later studies of the alternating projection strategy [5–7] can be regarded as modifications or extensions of the HIO and GS algorithms.
Recently, the sparsity prior for PR has attracted the attention of researchers [8–12]. In principle, the sparsity prior can be incorporated into the object constraint of any alternating projection algorithm to improve its performance. Mukherjee et al. proposed the so-called Max-K algorithm [8], which incorporates sparsity into the object constraint of the alternating projection strategy by solving a sparse coding subproblem; Loock et al. [9] incorporated a sparsity constraint into the relaxed averaged alternating reflections (RAAR) algorithm and proposed a shearlet soft thresholding procedure for PR from near-field sampled data, namely Fresnel magnitudes. Another sparsity-based strategy for PR relies on greedy methods, including greedy sparse phase retrieval (GESPAR) [10] and nonlinear basis pursuit [11]. It has been shown that GESPAR achieves lower computational complexity than alternating projection algorithms with sparsity constraints [10].
In the image PR field, image regularization, such as l _{1} regularization [13, 14], has attracted the attention of researchers. The resulting nonconvex l _{1} minimization problem is often solved by the alternating direction method of multipliers (ADMM) [15], which can obtain a suboptimal solution to the nonconvex problem. Inspired by this idea, in this paper, we extend the spatial sparsity prior to a transform sparsity prior based on the translation invariant Haar pyramid (TIHP) [16] for PR. The proposed regularization is based on the assumption that the underlying image can be represented sparsely in the TIHP tight frame. This assumption is natural for a wide class of natural images. Indeed, the TIHP tight frame has been shown to provide suitable results for image restoration [16, 17]. We formulate the sparse representation regularization term and incorporate it into the PR optimization problem, combining it with the support and Fourier magnitude constraints. Due to the nonconvexity of the objective function, the optimal solution to the corresponding problem is difficult to obtain. Nevertheless, the ADMM technique, which can obtain a satisfactory solution to the PR problem [13, 14], is employed in this paper.
Our contributions can be summarized as follows:

1.
We propose a sparse representation regularization term based on the TIHP tight frame for phase retrieval. We combine this regularization term with the data consistency term and an object constraint term expressed via an indicator function to formulate a new phase retrieval problem. The sparse representation regularization term based on the TIHP tight frame helps retrieve the missing phase and recover the image at low oversampling factors. Moreover, additional spatial priors on the image can be incorporated into the object constraint by enforcing them in the constraint set; specifically, the support prior and the intensity constraint of the underlying image are used in this paper;

2.
We propose an alternating iterative algorithm based on the ADMM technique for solving the formulated optimization problem by dividing it into several subproblems. We derive the optimal solution to each subproblem theoretically, and experimental results demonstrate the favorable convergence behavior of this approach;

3.
We demonstrate heuristically that the l _{1} norm sparsity measure yields better reconstructions than the l _{0} norm within our framework. Experimental results indicate that our proposed algorithm obtains better reconstruction quality than alternating projection algorithms using the same sparsity prior. Additionally, our algorithm is robust to noise, which is demonstrated empirically.
The structure of this paper is as follows. Prior work on PR is reviewed in Section 2. In Section 3, we formulate our new PR problem and introduce our alternating iterative algorithm in detail. Section 4 presents our experimental simulations. Finally, concluding remarks and directions for future research are given in Section 5.
Related work
The alternating projection strategy
Let M = {x ∈ ℝ ^{N} : |Fx| = b} (here, F ∈ ℂ ^{N × N} denotes the discrete Fourier transform matrix, b ∈ ℝ ^{N} is the observed Fourier magnitude, and x represents the underlying signal) be the Fourier magnitude constraint set, i.e., the set of signals whose Fourier magnitude spectrum matches the measured Fourier magnitude of the underlying signal, and let S = {x : x(r) ≠ 0 for some r ∈ D and x(r) = 0 for r ∉ D} be the support constraint set, i.e., the set of signals whose nonzero support lies in D. The PR problem can then be formulated as the feasibility problem of finding a signal in the intersection of these two sets
Alternating projection algorithms are used to solve the above problem; the most popular among them is Fienup’s HIO algorithm [4], which starts with an initial guess and alternates between the above constraint sets until a termination condition is reached. Given a parameter β, the HIO update of x can be described as
where \( {P}_M\left(\mathbf{x}\right)={\mathbf{F}}^{-1}\left(\mathbf{b}\odot \frac{\mathbf{F}\mathbf{x}}{\left|\mathbf{F}\mathbf{x}\right|}\right) \) (F ^{− 1} represents the inverse Fourier transform, ⊙ denotes the element-wise product, and t is the iteration number). Mathematically, the above formulation is equivalent to
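As a concrete reading of the projection P _{M} and the HIO rule, the following NumPy sketch illustrates one possible implementation; this is not the authors' code, and the support mask, damping parameter, and the small stabilizing constant in the magnitude projection are assumptions:

```python
import numpy as np

def P_M(x, b):
    """Fourier magnitude projection: impose the measured magnitude b while
    keeping the current Fourier phase (small constant avoids division by 0)."""
    Fx = np.fft.fft2(x)
    return np.real(np.fft.ifft2(b * Fx / (np.abs(Fx) + 1e-10)))

def hio_step(x, b, support, beta=0.9):
    """One HIO update: accept P_M(x) where the object constraints hold,
    and take a damped step x - beta * P_M(x) where they are violated."""
    p = P_M(x, b)
    violated = (~support) | (p < 0)  # outside support or negative
    return np.where(violated, x - beta * p, p)

# toy run on an 8x8 image with a centered 4x4 support
rng = np.random.default_rng(0)
img = np.zeros((8, 8))
img[2:6, 2:6] = rng.random((4, 4))
b = np.abs(np.fft.fft2(img))
support = np.zeros((8, 8), dtype=bool)
support[2:6, 2:6] = True
x = rng.random((8, 8))
for _ in range(200):
    x = hio_step(x, b, support)
```

In practice the iterate is monitored against the measured magnitude and the loop is stopped once the error stagnates.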
Here, P _{ S } is the projection onto S. To solve problem (1), Luke [7] proposed the RAAR algorithm, which can be described as
Here, S _{+} = {x : x(r) > 0 for some r ∈ D and x(r) = 0 for r ∉ D} represents the joint support and nonnegativity constraint set. Indeed, under the same constraint, the RAAR algorithm coincides with the HIO algorithm when β = 1. To further improve the performance of RAAR, Loock et al. [9] used a shearlet sparsity prior to recover the phase from Fresnel magnitude data, namely near-field data. Instead of the projection operator \( {P}_{S_{+}}\left(\bullet \right) \) in (4), they use \( {P}_{{\mathrm{S}}_{+}}^{\left(\varepsilon \right)}\left(\bullet \right) \) defined as
with [•]_{+} = max{Re(•), 0}. Here, T_{ ε }(•) = sign(•) ⋅ max(|•| − ε, 0) is the soft thresholding operator; moreover, F _{ s } represents the shearlet transform matrix, and \( {\mathbf{F}}_s^{-1} \) represents the inverse shearlet transform matrix. The experimental simulations in [9] show that, in the near-field scenario, the RAAR algorithm with the shearlet sparsity constraint outperforms the one with the support constraint alone and performs similarly to the one with the support plus nonnegativity constraint.
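The soft thresholding operator and the thresholded projection above can be sketched as follows; since a shearlet implementation is not available here, the analysis/synthesis pair is passed in as callables, and the identity pair in the usage line is only a placeholder:

```python
import numpy as np

def soft_threshold(w, eps):
    """T_eps(w) = sign(w) * max(|w| - eps, 0), applied elementwise."""
    return np.sign(w) * np.maximum(np.abs(w) - eps, 0.0)

def sparse_support_projection(x, analysis, synthesis, eps):
    """Sketch of the thresholded projection P_S^(eps): analyze, soft
    threshold, synthesize, then keep the nonnegative real part.
    `analysis`/`synthesis` stand in for the shearlet transform pair."""
    coeffs = analysis(x)
    rec = synthesis(soft_threshold(coeffs, eps))
    return np.maximum(np.real(rec), 0.0)  # [.]_+ = max(Re(.), 0)

# usage with the identity transform as a placeholder pair
x = np.array([-2.0, 0.3, 1.5])
print(sparse_support_projection(x, lambda v: v, lambda v: v, eps=0.5))
# -> [0. 0. 1.]
```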
ADMM for the PR optimization problem
Unlike the PR feasibility problem, the PR problem can also be formulated as a minimization problem and solved with optimization techniques. Additional constraints can be incorporated into the corresponding PR minimization problem by way of indicator functions. Theoretically, the feasibility problem (1) can be approximated by the following minimization problem
The first term of problem (6) is the data consistency term, which enforces the Fourier magnitude constraint. The second term represents a real-space or object constraint, such as support or nonnegativity, on the underlying signal; here, \( {\mathbb{I}}_S \) is an indicator function
Yang et al. [14] incorporated l _{1} regularization for PR
The above problem is nonconvex; Yang et al. [14] suggested the ADMM technique to solve it and obtained improved reconstructions. However, Yang’s algorithm is limited to recovering images that are sparse in the spatial domain; when the image is nonsparse in the spatial domain, it fails to reconstruct. Since most natural images are nonsparse in the spatial domain, Yang’s algorithm cannot retrieve the phase of such images. Moreover, the previous alternating projection algorithms that use a sparsity prior suffer either from relatively low reconstruction quality at low oversampling factors or from sensitivity to noise. To address these issues, we propose a framework based on the TIHP tight frame, and experimental results indicate its efficiency for natural images.
The proposed approach
Problem formulation
We study the PR optimization problem of recovering an image from its Fourier magnitude, also known as far-field data. Recent decades have witnessed great interest in image regularization, especially sparse representation regularization, for solving inverse problems. PR being a special inverse problem, it is essential to exploit a suitable sparse representation; the choice of analytical transform or sparsifying basis for the sparse representation regularization is critical for recovering the signal. In this paper, we propose using the TIHP tight frame for PR. The sparse representation regularization term based on the TIHP tight frame is incorporated into the PR minimization problem, which yields the following optimization problem
where x ∈ ℝ ^{N} is the signal of interest, W represents the TIHP tight frame satisfying W ^{T} W = I (here I is the identity matrix), and λ is the regularization parameter. The first term of problem (9) is the data consistency term, and the second term is the sparse representation regularization term in the TIHP tight frame. For the sparse representation regularization term, both the l _{1} norm and the l _{0} norm can be considered to promote sparsity. In the simulations (Subsection 4.1), we demonstrate that the l _{1} norm yields better reconstruction quality than the l _{0} norm. The third term is an indicator function that can incorporate additional constraints, such as support or nonnegativity constraints, into the PR process.
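The formula for problem (9) is not reproduced in this version of the text; from the three terms just described (data consistency, TIHP sparsity, and the indicator of the constraint set), it presumably has the following form, shown here with the l _{1} norm, the l _{0} variant being obtained by replacing it:

```latex
\min_{\mathbf{x} \in \mathbb{R}^N} \;
  \big\| \, |\mathbf{F}\mathbf{x}| - \mathbf{b} \, \big\|_2^2
  + \lambda \, \| \mathbf{W}\mathbf{x} \|_1
  + \mathbb{I}_S(\mathbf{x})
```

This reconstruction is consistent with the z-subproblem below, whose data term penalizes the mismatch between |z| = |Fx| and b.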
The proposed phase retrieval method
Problem (9) is a nonconvex optimization problem due to the presence of the magnitude operator; thus, solving it efficiently is challenging. We attempt to solve it using the ADMM technique, and problem (9) can be recast as
The above problem can be rewritten in the following Lagrangian form, which can be solved by ADMM
Here, u _{1}, u _{2} represent the scaled dual variables. We solve the above problem by addressing the following x-, z-, and y-subproblems in turn and finally updating the scaled dual variables. The optimal solution to each subproblem at the t-th iteration is derived as follows:

1.
x subproblem
With y, z, u _{1}, u _{2} fixed, problem (11) reduces to the x subproblem
The above problem can be recast as
Here, \( {\mathbf{m}}^{\left(t-1\right)}={\mathbf{F}}^H\left({\mathbf{z}}^{\left(t-1\right)}-{\mathbf{u}}_1^{\left(t-1\right)}\right) \), \( {\mathbf{n}}^{\left(t-1\right)}={\mathbf{y}}^{\left(t-1\right)}+{\mathbf{u}}_2^{\left(t-1\right)} \), and C is a constant independent of x. The objective function of problem (13) is convex with the unique solution
Here, T _{ ε }(·) is the soft thresholding operator with ε = λ/(2(ρ _{1} + ρ _{2})). When the l _{0} norm is used to promote sparsity, the hard thresholding operator \( T(w)=\left\{\begin{array}{ll}w, & \mathrm{if}\ |w|>\sqrt{\lambda /\left({\rho}_1+{\rho}_2\right)};\\ 0, & \mathrm{otherwise}\end{array}\right. \) is used in (14) instead of the soft thresholding operator.
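Because F is unitary and W ^{T} W = I, the x-subproblem collapses to a denoising step around a weighted average of m ^{(t−1)} and n ^{(t−1)}, solved by thresholding in the transform domain. A minimal sketch, in which the TIHP analysis/synthesis transforms are passed as callables (the identity is used only as a placeholder) and the weighted-average form of the solution is an assumption consistent with the threshold ε above:

```python
import numpy as np

def soft_threshold(w, eps):
    return np.sign(w) * np.maximum(np.abs(w) - eps, 0.0)

def x_update(m, n, rho1, rho2, lam, W, Wt):
    """Sketch of the x-subproblem solution (14): soft thresholding in the
    W domain around the weighted average v of the two anchor points."""
    v = (rho1 * m + rho2 * n) / (rho1 + rho2)
    eps = lam / (2.0 * (rho1 + rho2))
    return Wt(soft_threshold(W(v), eps))

# placeholder orthonormal "transform": the identity
m = np.array([1.0, -0.2])
n = np.array([0.8, 0.1])
x = x_update(m, n, rho1=0.01, rho2=0.01, lam=0.01,
             W=lambda z: z, Wt=lambda z: z)
print(x)
```

For the l _{0} variant, the soft thresholding line would be replaced by the hard thresholding rule given above.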

2.
z subproblem
With x, y, u _{1}, u _{2} fixed in problem (11), the z subproblem is
Let \( {\mathbf{w}}^{(t)}=\mathbf{F}{\mathbf{x}}^{(t)}+{\mathbf{u}}_1^{\left(t-1\right)} \); thus
The first term enforces the Fourier magnitude constraint, and the second term keeps the underlying vector close to the known vector. To obtain the optimal solution of problem (16), the phases of z and w ^{(t)} must be equal
Here, pha(•) represents the phase extraction operator. To solve problem (16), it then suffices to consider the following optimization problem with respect to z
Since the cost function of problem (18) is differentiable with respect to z, the least squares solution to problem (18) is
Therefore, the optimal solution to problem (16) is given by
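The z-update thus mixes the measured magnitude with the current one and reuses the phase of w ^{(t)}. A sketch, where the exact weighting (b + ρ _{1}|w|)/(1 + ρ _{1}) is an assumption based on the least-squares form described above:

```python
import numpy as np

def z_update(w, b, rho1):
    """Sketch of the z-subproblem solution (20): with the phase of z fixed
    to that of w (17), the magnitude minimizes
    (|z| - b)^2 + rho1 * (|z| - |w|)^2, a convex combination of the
    measured and current magnitudes (weighting assumed here)."""
    mag = (b + rho1 * np.abs(w)) / (1.0 + rho1)
    phase = w / (np.abs(w) + 1e-12)  # pha(w) as a unit-modulus factor
    return mag * phase

w = np.array([3.0 + 4.0j])  # |w| = 5, phase preserved
print(z_update(w, b=np.array([10.0]), rho1=1.0))
```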

3.
y subproblem
With x, z, u _{1}, u _{2} fixed in problem (11), the y subproblem is
Note that the first term of the above problem keeps the underlying vector close to the known vector \( {\mathbf{x}}^{(t)}-{\mathbf{u}}_2^{\left(t-1\right)} \) in the Euclidean norm, and the second term is an indicator function; the optimal solution to the above problem is
where P _{ S }(•) represents the projection operator onto the constraint set S.
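The y-update is a plain Euclidean projection. A sketch, assuming (as in the experiments) that S combines a support mask with the [0, 255] intensity range:

```python
import numpy as np

def y_update(x, u2, support):
    """Sketch of the y-subproblem solution (22): project x - u2 onto S,
    taken here as the support mask plus the [0, 255] intensity range."""
    v = x - u2
    return np.where(support, np.clip(v, 0.0, 255.0), 0.0)

x = np.array([-5.0, 120.0, 300.0])
u2 = np.zeros(3)
support = np.array([True, True, False])
print(y_update(x, u2, support))
```

The first entry is clipped up to 0, the second is unchanged, and the third is zeroed because it lies outside the support.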
Finally, update the scaled dual variables of ADMM
Here, \( \gamma \in \left(0,\left(\sqrt{5}+1\right)/2\right) \) guarantees convergence for convex problems [18]. There is no theoretical guarantee that ADMM obtains the optimal solution to a nonconvex problem. Nevertheless, the ADMM method can obtain a satisfactory solution for the nonconvex PR problem [14, 19]. To achieve better convergence for our algorithm, we incorporate the parameter γ, which can be adjusted heuristically, into ADMM. So far, all issues in handling problem (11) have been resolved. We update the variables x, z, y at the t-th iteration by solving subproblems (12), (15), and (21), and update u _{1}, u _{2} by (23) iteratively. To monitor the convergence of our algorithm, we use the relative residual norm defined by [19]
The iteration terminates when the relative residual norm satisfies res ≤ τ for some small τ > 0, or when the maximum number of iterations is reached, in the noise-free case; only the latter termination condition is used in the noisy case. Table 1 summarizes the proposed algorithm with tight-frame sparse representation regularization in detail.
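Putting the three subproblem updates, the scaled dual updates, and the stopping rule together, the overall iteration can be sketched as follows. This is an illustrative skeleton, not the authors' implementation: the TIHP transforms are passed as callables (identity used in the test), and the update formulas and dual-update signs are assumptions consistent with the derivation above:

```python
import numpy as np

def residual(x, x_prev):
    """Relative residual norm used as the stopping rule (24)."""
    return np.linalg.norm(x - x_prev) / max(np.linalg.norm(x_prev), 1e-12)

def pr_tihp_l1(b, support, W, Wt, rho1=0.01, rho2=0.01, lam=0.05,
               gamma=0.5, tau=0.01, max_iter=3000, seed=0):
    """Skeleton of the proposed ADMM iteration (problem (11))."""
    rng = np.random.default_rng(seed)
    n = b.shape
    x = np.where(support, rng.random(n), 0.0)
    z = np.fft.fft2(x)
    y = x.copy()
    u1 = np.zeros(n, dtype=complex)
    u2 = np.zeros(n)
    for _ in range(max_iter):
        x_prev = x
        # x-step: soft thresholding in the W domain (F treated as unitary)
        m = np.real(np.fft.ifft2(z - u1))
        v = (rho1 * m + rho2 * (y + u2)) / (rho1 + rho2)
        eps = lam / (2.0 * (rho1 + rho2))
        c = W(v)
        x = Wt(np.sign(c) * np.maximum(np.abs(c) - eps, 0.0))
        # z-step: blend measured and current magnitude, keep current phase
        w = np.fft.fft2(x) + u1
        z = (b + rho1 * np.abs(w)) / (1.0 + rho1) * w / (np.abs(w) + 1e-12)
        # y-step: projection onto support and [0, 255]
        y = np.where(support, np.clip(x - u2, 0.0, 255.0), 0.0)
        # scaled dual updates (signs matching the splitting assumed above)
        u1 = u1 + gamma * (np.fft.fft2(x) - z)
        u2 = u2 + gamma * (y - x)
        if residual(x, x_prev) <= tau:
            break
    return x
```

A real implementation would replace the identity callables with the TIHP analysis/synthesis transforms and use a unitary FFT normalization.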
Experimental simulations
In our experiments, we use two standard grayscale images of size 512 × 512, “Lena” and “Fruits.” The Fourier transform of the test image is oversampled by various factors for comparison; we refer to this factor as the oversampling factor [19]. For example, when the image is cropped to size 314 × 314, namely, 314 × 314 pixels of the original image in Fig. 1 are retained and the rest is excluded, the cropped test image is padded with zeros to create a 512 × 512 image; in this scenario, the oversampling factor is 1.63 (512/314). Theoretically, the lower the oversampling factor, the more difficult the reconstruction. We first compare the l _{0} norm and the l _{1} norm for promoting sparsity in our approach. Then, the algorithm is compared with existing algorithms and evaluated in terms of reconstruction quality and robustness to noise. All simulations were performed in MATLAB R2012a on a computer with an AMD Athlon(tm) II X2 255 processor (3.11 GHz), 1.75 GB of memory, and the Windows XP operating system.
Parameter setup
To demonstrate the effectiveness of sparsity in our algorithm, we compared our algorithm with an alternating projection algorithm without a sparsity constraint; the HIO algorithm is chosen as the benchmark in the noise-free case. The nonnegativity constraint S _{+} is used in the HIO algorithm, and the HIO MATLAB code can be downloaded from https://github.com/leeneil/ghiomatlab. We also included state-of-the-art sparsity-based algorithms, namely the Max-K algorithm [8] and the RAAR framework with the shearlet sparsity method [9], for comparison. The Max-K algorithm can be regarded as a parameterized relaxation of RAAR [8] with a K-sparse constraint; therefore, we chose the RAAR method with a sparsity constraint in the TIHP tight frame for comparison. Since the l _{0} norm is used to promote sparsity, we term this algorithm the RAAR-l _{0} algorithm, which is also introduced in [8]. For the RAAR-l _{0} algorithm, the sparsity level K is set to 0.4N, where N is the total number of measurements, and β = 0.99. Moreover, the RAAR-based algorithm proposed in [9] is selected for comparison. We use sparsity in the TIHP tight frame instead of shearlet sparsity for reconstruction from far-field data. The l _{1} norm is used in [9] to promote sparsity; thus, the corresponding algorithm is called the RAAR-l _{1} algorithm. It is difficult to give theoretical guidance for the choice of parameters of PR algorithms; in general, these parameters are tuned heuristically. To obtain better performance with the RAAR-l _{1} algorithm, we suggest an empirical rule for updating the threshold ε: ε = C _{1} + C _{2}/t (here C _{1} and C _{2} are constants that need to be tuned empirically). The threshold ε of the RAAR-l _{1} algorithm thus decreases with the iteration number t, which gives promising results, as demonstrated in [9].
Each parameter of the RAAR-l _{1} algorithm was evaluated by varying one parameter at a time while keeping the rest fixed, so as to obtain the highest peak signal-to-noise ratio (PSNR). We tried several choices of β, C _{1}, and C _{2} for this algorithm at oversampling factor 1.58, and the experimental results show that β = 0.99 and ε = 1.5 + 8/t are suitable parameter choices.
The projection operator P _{M} used in the RAAR-l _{0} and RAAR-l _{1} algorithms is defined as [7]
with η = 10^{− 10}. Both the RAAR-l _{0} and RAAR-l _{1} algorithms use the RAAR framework with a TIHP sparsity constraint; the difference between the two algorithms is the object-domain constraint, which yields different projection operators \( {P}_{{\mathrm{S}}_{+}}\left(\bullet \right) \) in (4). Concretely, the RAAR-l _{0} algorithm uses the following projection operator
Here, T _{ K }(•) represents the operator that retains the K largest coefficients, and B(•) = max(min(•, 255), 0) denotes the pixel intensity constraint. The projection P _{S}(•) represents the projection onto the support constraint. Differing from the RAAR-l _{0} algorithm, the RAAR-l _{1} algorithm incorporates \( {P}_{\mathrm{S}}^{\varepsilon}\left(\bullet \right)=B\left\{{P}_{\mathrm{S}}\left[{\mathbf{W}}^T{T}_{\varepsilon}\left(\mathbf{W}\left(\bullet \right)\right)\right]\right\} \) into (4). The maximum number of iterations for all algorithms in the experiments is set to 3000.
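The T _{K} operator used by the l _{0} benchmark can be sketched as follows; ranking by coefficient magnitude is the standard interpretation and is assumed here:

```python
import numpy as np

def keep_k_largest(w, K):
    """T_K: retain the K largest-magnitude coefficients, zero the rest."""
    out = np.zeros_like(w)
    if K > 0:
        idx = np.argpartition(np.abs(w), -K)[-K:]  # indices of top-K entries
        out[idx] = w[idx]
    return out

print(keep_k_largest(np.array([0.5, -3.0, 1.0, 2.0]), K=2))
# -> [ 0. -3.  0.  2.]
```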
For the TIHP tight frame, the transform and its inverse can be downloaded from http://www.io.csic.es/PagsPers/JPortilla/software/file/4l0absdeblurpack. Both the l _{0} norm and the l _{1} norm are considered to promote sparsity, and the corresponding algorithms using the TIHP tight frame are termed PR-TIHP-l _{0} and PR-TIHP-l _{1}. The projection P _{S} used in our algorithm is defined as
Since the initial guess is important for PR, we use x ^{(0)} = P _{ S } P _{ Μ }(v) as the initial guess for all algorithms; here, v is a random image. We tuned the parameters of the two proposed algorithms heuristically: the number of dyadic scales is set to 7, and ρ _{1} = ρ _{2} = 0.01, γ = 0.5, τ = 0.01 for both algorithms. The parameter λ is set to 0.1 for PR-TIHP-l _{0} and 0.05 for PR-TIHP-l _{1}.
We compare the proposed PR-TIHP-l _{0} and PR-TIHP-l _{1} at various oversampling factors. In Fig. 2, we show the reconstructed image “Lena” at oversampling factors 1.63 and 1.62, namely, containing 314 × 314 pixels (first row) and 316 × 316 pixels (second row), respectively. The corresponding nonnegative support regions in the center of the 512 × 512 padded image are also shown in Fig. 2. Since a global phase factor exists in the reconstructed images, the reconstructions can be flipped or shifted; in this case, we aligned the reconstruction with the original image to give a clear comparison. Moreover, the zero-padded part of the reconstructed image used to create the oversampled diffraction pattern is excluded. One can see that nearly perfect reconstructions in the first row are achieved by both algorithms; however, the PR-TIHP-l _{0} algorithm fails to reconstruct as the oversampling factor decreases, whereas our PR-TIHP-l _{1} algorithm provides perfect reconstructions at these oversampling factors. Indeed, for the image “Lena”, the minimum oversampling factor for perfect reconstruction with our PR-TIHP-l _{1} algorithm is 1.58 (324 × 324 pixels), while the reconstruction of our PR-TIHP-l _{0} algorithm is far worse than that of PR-TIHP-l _{1} at this oversampling factor. Therefore, we use the l _{1} norm to promote sparsity in the comparisons of the next subsection.
Phase retrieval from a noise-free oversampled diffraction pattern
We ran the proposed algorithm and the three benchmark algorithms on the test images at oversampling factor 1.58. In this simulation, the nonnegative support region is simply the 324 × 324 window in the center of the 512 × 512 padded image.
We used the same initial guess for all algorithms for fairness. Figures 3 and 4 show comparisons of the reconstruction performance for the two test images. Note that the reconstructed image may be flipped because of the global phase factor in its Fourier transform; the translation has been removed for clear comparison. The original images are shown in Figs. 3a and 4a. From the reconstructions of the four algorithms presented in Figs. 3 and 4, it is easy to see that our PR-TIHP-l _{1} produced high-quality reconstructions for both images (see Figs. 3e and 4e). From the reconstructions in Figs. 3b and 4b, one can see that the HIO algorithm cannot give a visually acceptable reconstruction at this oversampling factor. The RAAR-l _{0} algorithm produced a better reconstruction than HIO, but much texture and detail information is lost: the texture of the hat in the image “Lena” (see Fig. 3c) is lost, and the spots on the apple in the image “Fruits” (see Fig. 4c) are also lost. The RAAR-l _{1} reconstruction is not visually good for the image “Lena” (see Fig. 3d), and most details are also lost for the image “Fruits” (see Fig. 4d). Interestingly, a nearly perfect reconstruction of the image “Lena” is achieved by our PR-TIHP-l _{1} algorithm. As for the “Fruits” reconstruction, the spots on the apple are preserved in our result (see Fig. 4e), which shows that our reconstruction is better than the images reconstructed by the benchmark algorithms. The use of the TIHP tight frame and the ADMM technique likely accounts for the strong results obtained by our algorithm.
Although our proposed PR-TIHP-l _{1} algorithm can obtain a suboptimal solution to the nonconvex problem (11) together with a better reconstruction, global convergence is difficult to prove rigorously, a consequence of the nonconvexity of the objective function. Empirically, our PR-TIHP-l _{1} algorithm converges to a stable point as the iterations increase. Figure 5 plots res, namely the relative residual norm defined in (24), versus iterations for the images “Lena” and “Fruits” at oversampling factor 1.58. It can be observed in Fig. 5 that the three benchmark algorithms easily fall into stagnation at this oversampling factor. Our PR-TIHP-l _{1} algorithm circumvents this issue, and its curve ultimately becomes flat and stable, indicating good convergence behavior; the small perturbations visible in the stable curves are due to the nonconvexity of the optimization problem.
To show the computational cost, we present the running times of our PR-TIHP-l _{1} algorithm and the benchmark algorithms in Table 2. In the table, “average” means the average running time over the two test images. From Table 2, one can see that our proposed algorithm is faster than the sparsity-based benchmark algorithms; the high computational cost of the projection operators P _{ M } and P _{ S } accounts for the slower speed of the RAAR-l _{0} and RAAR-l _{1} algorithms. However, Table 2 also shows that our algorithm is slower than the HIO algorithm in terms of average running time; the TIHP transform and its inverse account for this higher average running time. Interestingly, for the image “Fruits,” our algorithm is faster than the HIO algorithm. Since the two images have different content, their running times differ; indeed, for the image “Fruits,” our algorithm needs only 605 iterations, which results in a faster reconstruction. From these results and the running-time analysis, our algorithm outperforms the RAAR-l _{0} and RAAR-l _{1} algorithms in terms of both reconstruction quality and speed. Although the average time of the HIO algorithm is lower, its reconstruction quality is far worse than that of our algorithm at low oversampling factors.
Phase retrieval from a noisy oversampled diffraction pattern
We simulated a noisy diffraction pattern from the image “Lena” at oversampling factor 1.74. For the noisy data case, we replace the HIO algorithm with the oversampling smoothness (OSS) algorithm [20], which consistently produces better reconstructions than HIO under noisy scenarios, for comparison. The OSS code can be downloaded from the authors’ homepage, and the maximum number of iterations for OSS is also set to 3000. Since a random phase without support constraint is suitable for OSS, we used its own initialization method for the initial guess of OSS; the other algorithms are initialized as described in Section 4.1. We added random noise n to the true oversampled diffraction pattern to generate the noisy measurement data b _{ noise } = b + n. The noise n is scaled so that R _{noise}, defined by R _{noise} = ‖b − b _{ noise }‖_{1}/‖b‖_{1}, ranges from 5 % to 20 %. For each noise level, we performed 20 independent runs and calculated R _{real} [20], defined by R _{ real } = ‖x ^{(t)} − x‖_{1}/‖x‖_{1}, to evaluate the reconstruction quality.
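Both R _{noise} and R _{real} are relative l _{1} errors and can be computed by the same small helper; a sketch:

```python
import numpy as np

def r_metric(est, ref):
    """Relative l1 error: ||est - ref||_1 / ||ref||_1, used both for
    R_noise (on the measurements) and R_real (on the reconstruction)."""
    return np.sum(np.abs(est - ref)) / np.sum(np.abs(ref))

b = np.array([10.0, 20.0, 30.0])
b_noise = b + np.array([1.0, -2.0, 3.0])
print(r_metric(b_noise, b))  # -> 0.1, i.e., a 10 % noise level
```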
Gaussian noise is added to the true oversampled diffraction pattern to characterize the effect of different noise levels on the reconstructions. The average R _{real} of the three benchmark algorithms and our algorithm as a function of the noise level is presented in Fig. 6. From Fig. 6, one can see that our algorithm always achieves a smaller R _{real}, which indicates that the best reconstructions are obtained by our algorithm. To further verify this, the best reconstruction, namely the one with the smallest R _{real}, of each algorithm under Gaussian noise with R _{noise} = 10 % is presented in Fig. 7 (the original image is shown in Fig. 7a). Visually, the RAAR-l _{0} algorithm produces the worst reconstruction (see Fig. 7c). The OSS algorithm is better than the RAAR-l _{0} algorithm, but it suffers from many artifacts and loses many details (see Fig. 7b). Although the RAAR-l _{1} algorithm outperforms the other two benchmark algorithms in terms of reconstruction quality, many details are still lost in its reconstruction (see Fig. 7d). Our reconstruction in Fig. 7e not only reduces the artifacts but also preserves more detail than the three benchmark reconstructions. These results demonstrate that our algorithm is robust to noise.
Conclusions
In this paper, we have introduced a framework for PR based on the translation invariant Haar pyramid. Our main idea is to formulate a sparse representation regularization term using the TIHP tight frame for PR. We incorporated this regularization term into the PR problem, which yields a new nonconvex optimization problem. The ADMM technique was used to solve the resulting problem, and a satisfactory solution was obtained. We demonstrated heuristically that the l _{1} norm promoting sparsity obtains better reconstructions than the l _{0} norm for our approach. Moreover, experimental simulations showed that our proposed approach considerably outperforms previous PR algorithms in terms of reconstruction quality at low oversampling factors. Gaussian noise with various noise levels was added to the true oversampled diffraction pattern to evaluate the reconstruction quality of our algorithm, showing that it is robust to noise. In this paper, the TIHP tight frame was chosen to retrieve phases and recover images; exploiting finer tight frames to improve the reconstruction quality is left for future work.
References
 1.
Y Shechtman, YC Eldar, O Cohen, HN Chapman, J Miao, M Segev, Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Process. Mag. 32, 87–109 (2015)
 2.
JR Fienup, Phase retrieval algorithms: a personal tour [invited]. Appl. Opt. 52, 45–56 (2013)
 3.
RW Gerchberg, WO Saxton, A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237–246 (1972)
 4.
JR Fienup, Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett. 3, 27–29 (1978)
 5.
CL Guo, S Shi, JT Sheridan, Iterative phase retrieval algorithms I: optimization. Appl. Opt. 54, 4698–4708 (2015)
 6.
HH Bauschke, PL Combettes, DR Luke, Hybrid projection reflection method for phase retrieval. JOSA A. 20, 1025–1034 (2003)
 7.
DR Luke, Relaxed averaged alternating reflections for diffraction imaging. Inverse Probl. 21, 37–50 (2005)
 8.
S Mukherjee, CS Seelamantula, Fienup algorithm with sparsity constraints: application to frequencydomain opticalcoherence tomography. IEEE Trans. Signal Process. 62, 4659–4672 (2014)
 9.
S Loock, G Plonka, Phase retrieval for Fresnel measurements using a shearlet sparsity constraint. Inverse Probl. 30, (2014)
 10.
Y Shechtman, A Beck, YC Eldar, GESPAR: efficient phase retrieval of sparse signals. IEEE Trans. Signal Process. 62, 928–938 (2014)
 11.
H Ohlsson, AY Yang, R Dong, SS Sastry, Nonlinear basis pursuit, 2013
 12.
R Fan, Q Wan, F Wen, YP Liu, Iterative projection approach for phase retrieval of semi-sparse wave field. EURASIP J. Adv. Sig. Proc (2014). doi:10.1186/1687-6180-2014-24
 13.
DS Weller, A Pnueli, G Divon, O Radzyner, YC Eldar, Undersampled phase retrieval with outliers, 2014. arXiv:1402.7350
 14.
Z Yang, CS Zhang, LH Xie, Robust compressive phase retrieval via L _{ 1 } minimization with application to image reconstruction (Cornell University, Ithaca, 2013)
 15.
S Boyd, N Parikh, E Chu, B Peleato, J Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)
 16.
JA GuerreroColon, L Mancera, J Portilla, Image restoration using spacevariant Gaussian scale mixture in overcomplete pyramids. IEEE Trans. Image Process. 17, 27–41 (2008)
 17.
J Portilla, Image restoration through L _{ 0 } analysisbased sparse optimization in tight frames (Paper presented at the IEEE international conference on image processing, Cairo, 2009), pp. 3909–3912
 18.
R. Glowinski, Lectures on numerical methods for nonlinear variational problems (Springer, Berlin Heidelberg, 1981)
 19.
ZW Wen, C Yang, X Liu, S Marchesini, Alternating direction methods for classical and ptychographic phase retrieval. Inverse Probl. 28, (2012)
 20.
JA Rodriguez, R Xu, CC Chen, Y Zou, J Miao, Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities. J. Appl. Cryst. 46, 312–318 (2013)
Acknowledgements
This work was supported by the National Natural Science Foundation of China under Grant 61471313 and by the Natural Science Foundation of Hebei Province under Grant F2014203076.
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Shi, B., Lian, Q. & Chen, S. Sparse representation utilizing tight frame for phase retrieval. EURASIP J. Adv. Signal Process. 2015, 96 (2015). https://doi.org/10.1186/s1363401502889
Keywords
 Phase retrieval
 Tight frame
 Sparse representation
 Signal processing