Open Access

Sparse representation utilizing tight frame for phase retrieval

EURASIP Journal on Advances in Signal Processing 2015, 2015:96

https://doi.org/10.1186/s13634-015-0288-9

Received: 25 June 2015

Accepted: 13 November 2015

Published: 19 November 2015

Abstract

We treat the phase retrieval (PR) problem of reconstructing a signal of interest from its Fourier magnitude. Since the Fourier phase information is lost, the problem is ill-posed. Several techniques address this issue by exploiting various priors such as non-negativity, support, and Fourier magnitude constraints. Recent methods exploiting sparsity have been developed to improve the reconstruction quality. However, previous algorithms utilizing the sparsity prior suffer either from low reconstruction quality at low oversampled factors or from sensitivity to noise. To address these issues, we propose a framework that exploits sparsity of the signal in the translation invariant Haar pyramid (TIHP) tight frame. Based on this sparsity prior, we formulate a sparse representation regularization term and incorporate it into the PR optimization problem. We propose an alternating iterative algorithm for solving the resulting non-convex problem by dividing it into several subproblems. We give the optimal solution to each subproblem, and experimental simulations under noise-free and noisy scenarios indicate that the proposed algorithm obtains better reconstruction quality than the conventional alternating projection methods and even outperforms recent sparsity-based algorithms.

Keywords

Phase retrieval; Tight frame; Sparse representation; Signal processing

1 Introduction

In science and engineering fields, such as crystallography, neutron radiography, astronomy, signal processing, and optical imaging [1, 2], it is difficult to design sophisticated measuring setups that allow direct recording of the phase, which carries the critical structural information of the test object or signal [1]. Interestingly, an alternative means called algorithmic phase retrieval has arisen in these fields. The goal of phase retrieval (PR) algorithms is to retrieve the signal from its Fourier spectrum magnitude alone, which can be recorded by sensors. However, since a global phase shift, conjugate inversion, or spatial shift of the signal of interest leads to the same Fourier magnitude, the PR problem is ill-posed. Therefore, prior information on the underlying signal is incorporated into the recovery process to enable its recovery.

In the past decades, the alternating projection strategy pioneered by Gerchberg and Saxton [3] has been popular for PR. The object magnitude constraint and the Fourier magnitude constraint are utilized in the Gerchberg-Saxton (GS) algorithm, which recovers a complex object from its Fourier magnitude by projecting onto the constraint sets alternately. Replacing the object-domain magnitude constraint of the GS algorithm, Fienup [4] in 1978 suggested a PR algorithm called the hybrid input-output (HIO) algorithm, which incorporates the non-negativity and support constraints into the PR process. Later studies of the alternating projection strategy [5–7] can be regarded as modifications or extensions of the HIO and GS algorithms.

Recently, the sparsity prior for PR has attracted the attention of researchers [8–12]. In principle, the sparsity prior can be incorporated into the object constraint of any alternating projection algorithm to improve performance. Mukherjee et al. proposed the so-called Max-K algorithm [8], which incorporates sparsity into the object constraint of the alternating projection strategy by solving a sparse coding subproblem; Loock et al. [9] incorporated a sparsity constraint into the relaxed averaged alternating reflections (RAAR) algorithm and proposed a shearlet soft-thresholding procedure for PR from near-field sampled data, namely the Fresnel magnitude. Another sparsity-based strategy for PR relies on greedy methods, including greedy sparse phase retrieval (GESPAR) [10] and nonlinear basis pursuit [11]. It has been shown that GESPAR achieves lower computational complexity than alternating projection algorithms with sparsity constraints [10].

In the image PR field, image regularization, such as ℓ1 regularization [13, 14], has been a focus of research. One often formulates a non-convex ℓ1 minimization problem and solves it by the alternating direction method of multipliers (ADMM) [15], which can obtain a suboptimal solution to the non-convex problem. Inspired by this idea, in this paper, we extend the spatial sparsity prior to a transform sparsity prior based on the translation invariant Haar pyramid (TIHP) [16] for PR. The proposed regularization is based on the assumption that the underlying image can be sparsely represented in the TIHP tight frame. This assumption is natural for a wide class of natural images. Indeed, the TIHP tight frame has been shown to provide suitable results for image restoration [16, 17]. We formulate a sparse representation regularization term and incorporate it into the PR optimization problem together with the support and Fourier magnitude constraints. Due to the non-convexity of the objective function, the optimal solution to the corresponding problem is difficult to obtain. Nevertheless, the ADMM technique, which can obtain a satisfactory solution to the PR problem [13, 14], is utilized in this paper.

Our contributions can be summarized as follows:
  1.

    We propose a sparse representation regularization term based on the TIHP tight frame for phase retrieval. We combine this regularization term with the data consistency term and an object constraint term expressed through an indicator function to formulate a new phase retrieval problem. The sparse representation regularization term based on the TIHP tight frame helps retrieve the missing phase and recover the image at low oversampled factors. Moreover, additional spatial priors can be incorporated into the object constraint by enforcing them in the constraint set; specifically, the support prior and the intensity constraint of the underlying image are utilized in this paper;

     
  2.

    We propose an alternating iterative algorithm based on the ADMM technique for solving the formulated optimization problem by dividing it into several subproblems. We give the optimal solution to each subproblem theoretically, and experimental results demonstrate the good convergence behavior of this approach;

     
  3.

    We demonstrate heuristically that the ℓ1-norm sparsity measure obtains better reconstruction than the ℓ0 norm within our framework. Experimental results indicate that our proposed algorithm obtains better reconstruction quality than alternating projection algorithms utilizing the same sparsity prior. Additionally, our algorithm is robust to noise, as demonstrated empirically.

     

The structure of this paper is as follows. Prior work on PR is reviewed in Section 2. In Section 3, we formulate our new PR problem and introduce our alternating iterative algorithm in detail. Section 4 presents our experimental simulations. Finally, concluding remarks and directions for future research are given in Section 5.

2 Related work

2.1 The alternating projection strategy

Let M = {x ∈ ℂ^N | |Fx| = b} (here, F ∈ ℂ^{N×N} denotes the discrete Fourier transform matrix, b ∈ ℝ^N is the observed Fourier magnitude, and x represents the underlying signal) be the Fourier magnitude constraint set, i.e., the set of signals whose Fourier magnitude spectrum matches the measured Fourier magnitude of the underlying signal, and let S = {x(r) | x(r) ≠ 0 for some r ∈ D and x(r) = 0 for r ∉ D} be the support constraint set, i.e., the set of signals whose non-zero support lies in D. The PR problem can be formulated as the following feasibility problem
$$ \mathrm{find}\kern.3em \mathbf{x}\kern.3em \in \kern.3em M\kern.3em \cap \kern.3em S. $$
(1)
Alternating projection algorithms are utilized for solving the above problem; the most popular among them is Fienup’s HIO algorithm [4], which starts with an initial guess and bounces between the above constraint sets until a termination condition is reached. Given a parameter β, the HIO update of x can be described as
$$ {\mathbf{x}}^{\left(t+1\right)}(r)=\begin{cases}\left[{P}_M\left({\mathbf{x}}^{(t)}\right)\right](r), & \mathrm{if}\ r\in D;\\ {\mathbf{x}}^{(t)}(r)-\beta \left[{P}_M\left({\mathbf{x}}^{(t)}\right)\right](r), & \mathrm{otherwise}.\end{cases} $$
(2)
where \( {P}_M\left(\mathbf{x}\right)={\mathbf{F}}^{-1}\left(\mathbf{b}\odot \frac{\mathbf{F}\mathbf{x}}{\left|\mathbf{F}\mathbf{x}\right|}\right) \) (F^{−1} represents the inverse Fourier transform, and ⊙ denotes the element-wise product), and t is the iteration number. Mathematically, the above formulation is equivalent to
$$ {\mathbf{x}}^{\left(t+1\right)}(r)=\left[\left(\left(1+\beta \right){P}_S{P}_M+\mathbf{I}-{P}_{\mathrm{S}}-\beta {P}_M\right){\mathbf{x}}^{(t)}\right](r). $$
(3)
Here, P_S is the projection onto S. To solve problem (1), Luke [7] proposed the RAAR algorithm, which can be described as
$$ {\mathbf{x}}^{\left(t+1\right)}(r)=\left[\left(2\beta {P}_{S_{+}}{P}_M+\beta \mathbf{I}-\beta {P}_{S_{+}}+\left(1-2\beta \right){P}_M\right){\mathbf{x}}^{(t)}\right](r). $$
(4)
Here, S_+ = {x(r) | x(r) > 0 for some r ∈ D and x(r) = 0 for r ∉ D} represents the support and non-negativity constraint set. Indeed, under the same constraint, the RAAR algorithm is equivalent to the HIO algorithm when β = 1. To further improve the performance of RAAR, Loock et al. [9] utilized a shearlet sparsity prior to recover the phase from Fresnel magnitude data, namely near-field data. Instead of the projection operator \( {P}_{S_{+}}\left(\bullet \right) \) in (4), they utilize \( {P}_{{\mathrm{S}}_{+}}^{\left(\varepsilon \right)}\left(\bullet \right) \) defined as
$$ {P}_{{\mathrm{S}}_{+}}^{\varepsilon}\left(\bullet \right)={\left[{\mathbf{F}}_s^{-1}{T}_{\varepsilon}\left({\mathbf{F}}_s\left(\bullet \right)\right)\right]}_{+} $$
(5)
with [•]_+ = max{Re(•), 0}. Here, T_ε(•) = sign(•) ⊙ max(|•| − ε, 0) is the soft thresholding operator; moreover, F_s represents the shearlet transform matrix, and \( {\mathbf{F}}_s^{-1} \) represents its inverse. The experimental simulations in [9] show that, under the near-field scenario, the RAAR algorithm with the shearlet sparsity constraint outperforms the plain support constraint and performs similarly to the support plus non-negativity constraint.
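The operators of this subsection can be sketched in a few lines of NumPy. The following is a minimal illustration, not the authors' MATLAB implementation: `P_M` is the Fourier magnitude projection, `hio_step` is one HIO update per Eq. (2) with a boolean support mask, and `soft_threshold` is T_ε from Eq. (5); the small numerical guard against division by zero is our own addition.

```python
import numpy as np

def P_M(x, b):
    """Fourier magnitude projection: keep the phase of Fx, impose magnitude b."""
    X = np.fft.fft2(x)
    phase = X / np.maximum(np.abs(X), 1e-12)  # guard against division by zero
    return np.real(np.fft.ifft2(b * phase))

def hio_step(x, b, support, beta=0.9):
    """One HIO update (Eq. (2)): keep P_M(x) inside the support region,
    feed back x - beta * P_M(x) outside it."""
    pm = P_M(x, b)
    return np.where(support, pm, x - beta * pm)

def soft_threshold(w, eps):
    """T_eps in Eq. (5): sign(w) * max(|w| - eps, 0), applied element-wise."""
    return np.sign(w) * np.maximum(np.abs(w) - eps, 0.0)
```

Note that when x already satisfies the Fourier magnitude constraint, `P_M` leaves it unchanged, which is the defining property of a projection onto M.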

2.2 ADMM for the PR optimization problem

Differing from the PR feasibility problem, the PR problem can also be formulated as a minimization problem to be solved by optimization techniques. Additional constraints can be incorporated into the corresponding PR minimization problem by way of indicator functions. Theoretically, the feasibility problem (1) can be regarded approximately as the following minimization problem
$$ \widehat{\mathbf{x}}=\underset{\mathbf{x}}{argmin}\left\{\left|\right|\left|\mathbf{F}\mathbf{x}\right|-\mathbf{b}\left|\right|{}_2^2+{\mathbb{I}}_{\mathrm{S}}\left(\mathbf{x}\right)\right\}. $$
(6)
The first term of problem (6) is the data consistency term, which enforces the Fourier magnitude constraint. The second term represents a real-space or object constraint on the underlying signal, such as support or positivity; here, \( {\mathbb{I}}_S \) is the indicator function
$$ {\mathbb{I}}_S\left(\mathbf{x}\right)=\begin{cases}0, & \mathbf{x}\in S;\\ +\infty, & \mathrm{otherwise}.\end{cases} $$
(7)
Yang et al. [14] incorporated ℓ1 regularization for PR
$$ \widehat{\mathbf{x}}=\underset{\mathbf{x}}{argmin}\left\{\left|\right|\left|\mathbf{F}\mathbf{x}\right|-\mathbf{b}\left|\right|{}_2^2+\lambda \left|\right|\mathbf{x}\left|\right|{}_1\right\}. $$
(8)

The above problem is non-convex; Yang et al. [14] suggested the ADMM technique to solve it and obtained a better reconstruction. However, Yang’s algorithm is limited to recovering images that are sparse in the spatial domain; it fails to reconstruct images that are non-sparse in the spatial domain. Since most natural images are non-sparse in the spatial domain, Yang’s algorithm cannot retrieve the phase of such images. Moreover, the previous alternating projection algorithms utilizing a sparsity prior suffer either from relatively low reconstruction quality at low oversampled factors or from sensitivity to noise. To address these issues, we propose a framework utilizing the TIHP tight frame, and experimental results indicate its efficiency for natural images.

3 The proposed approach

3.1 Problem formulation

We study the PR optimization problem of recovering an image from its Fourier magnitude, also known as far-field data. The past few decades have witnessed great interest in image regularization, especially sparse representation regularization, for solving inverse problems. As a special inverse problem, PR likewise calls for a suitable sparse representation, and the choice of analytical transform or sparsifying basis for the sparse representation regularization is critical for recovering the signal. In this paper, we propose utilizing the TIHP tight frame for PR. The sparse representation regularization term based on the TIHP tight frame is incorporated into the PR minimization problem, yielding the following optimization problem
$$ \widehat{\mathbf{x}}=\underset{\mathbf{x}}{argmin}\left\{\left|\right|\left|\mathbf{F}\mathbf{x}\right|-\mathbf{b}\left|\right|{}_2^2+\lambda \left|\right|\mathbf{W}\mathbf{x}\left|\right|{}_1+{\mathbb{I}}_S\left(\mathbf{x}\right)\right\}. $$
(9)

where x ∈ ℝ^N is the signal of interest, W represents the TIHP tight frame satisfying W^T W = I (here I is the identity matrix), and λ is the regularization parameter. The first term of problem (9) is the data consistency term, and the second term is the sparse representation regularization term in the TIHP tight frame. For the sparse representation regularization term, both the ℓ1 norm and the ℓ0 norm can be considered to promote sparsity. In the simulations (Subsection 4.1), we demonstrate that the reconstruction obtained with the ℓ1 norm is better than with the ℓ0 norm in terms of reconstruction quality. The third term is an indicator function that can introduce additional constraints, such as support or non-negativity, into the PR process.
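The paper uses the multi-scale 2-D TIHP of [16]; as a minimal 1-D illustration of the tight-frame property W^T W = I, the following sketch implements one level of an undecimated (translation invariant) Haar analysis/synthesis pair with circular extension. The filter normalization by 1/2 is chosen so that the redundant 2N × N analysis operator W satisfies W^T W = I exactly; function names are ours, not from the authors' code.

```python
import numpy as np

def tihp_forward(x):
    """One level of a translation-invariant (undecimated) Haar transform.
    Filters [1, 1]/2 and [1, -1]/2 with circular extension form a tight
    frame: W is 2N x N (redundant), yet W^T W = I."""
    xs = np.roll(x, -1)          # x[n+1] with circular wrap-around
    low = (x + xs) / 2.0         # lowpass (local average) band
    high = (x - xs) / 2.0        # highpass (local difference) band
    return low, high

def tihp_adjoint(low, high):
    """Adjoint W^T: correlation with the same filters; for a tight frame
    this is also the synthesis operator, i.e., W^T(W x) = x."""
    return (low + np.roll(low, 1)) / 2.0 + (high - np.roll(high, 1)) / 2.0
```

One can verify numerically that synthesizing the two bands reproduces the input signal exactly, despite the twofold redundancy of the representation.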

3.2 The proposed phase retrieval method

Problem (9) is a non-convex optimization problem due to the presence of the magnitude operator; thus, solving it efficiently is a challenge. We attempt to solve it using the ADMM technique, and problem (9) can be recast as
$$ \left\{\widehat{\mathbf{x}},\widehat{\mathbf{y}},\widehat{\mathbf{z}}\right\}=\underset{\mathbf{x},\mathbf{y},\mathbf{z}}{argmin}\left\{\left|\right|\left|\mathbf{z}\right|-\mathbf{b}\left|\right|{}_2^2+\lambda \left|\right|\mathbf{W}\mathbf{x}\left|\right|{}_1+{\mathbb{I}}_S\left(\mathbf{y}\right)\right\},s.t.\mathbf{F}\mathbf{x}=\mathbf{z},\mathbf{y}=\mathbf{x}. $$
(10)
The above problem can be rewritten in the following augmented Lagrangian (scaled) form, which can be solved by ADMM
$$ \left\{\widehat{\mathbf{x}},\widehat{\mathbf{y}},\widehat{\mathbf{z}},{\widehat{\mathbf{u}}}_1,{\widehat{\mathbf{u}}}_2\right\}=\underset{\mathbf{x},\mathbf{y},\mathbf{z},{\mathbf{u}}_1,{\mathbf{u}}_2}{argmin}\left\{\left|\right|\left|\mathbf{z}\right|-\mathbf{b}\left|\right|{}_2^2+\lambda \left|\right|\mathbf{W}\mathbf{x}\left|\right|{}_1+{\mathbb{I}}_S\left(\mathbf{y}\right)+{\rho}_1\left|\right|\mathbf{F}\mathbf{x}-\mathbf{z}+{\mathbf{u}}_1\left|\right|{}_2^2+{\rho}_2\left|\right|\mathbf{y}-\mathbf{x}+{\mathbf{u}}_2\left|\right|{}_2^2\right\}. $$
(11)
Here, u_1, u_2 represent the scaled dual variables. We solve the above problem by addressing the following x, y, and z subproblems in turn and finally updating the scaled dual variables. The optimal solution to each subproblem is derived; at the t-th iteration:
  1.

    x subproblem

     
With y, z, u_1, u_2 fixed, problem (11) reduces to the x subproblem
$$ {\mathbf{x}}^{(t)}=\underset{\mathbf{x}}{argmin}\left\{{\rho}_1\left|\right|\mathbf{F}\mathbf{x}-{\mathbf{z}}^{\left(t-1\right)}+{\mathbf{u}}_1^{\left(t-1\right)}\left|\right|{}_2^2+{\rho}_2\left|\right|{\mathbf{y}}^{\left(t-1\right)}-\mathbf{x}+{\mathbf{u}}_2^{\left(t-1\right)}\left|\right|{}_2^2+\lambda \left|\right|\mathbf{W}\mathbf{x}\left|\right|{}_1\right\}. $$
(12)
The above problem can be recast as
$$ {\mathbf{x}}^{(t)}=\underset{\mathbf{x}}{argmin}\left\{\left({\rho}_1+{\rho}_2\right)\left|\right|\mathbf{x}-\left({\rho}_1{\mathbf{m}}^{\left(t-1\right)}+{\rho}_2{\mathbf{n}}^{\left(t-1\right)}\right)/\left({\rho}_1+{\rho}_2\right)\left|\right|{}_2^2+\lambda \left|\right|\mathbf{W}\mathbf{x}\left|\right|{}_1+C\right\}. $$
(13)
Here, \( {\mathbf{m}}^{\left(t-1\right)}={\mathbf{F}}^H\left({\mathbf{z}}^{\left(t-1\right)}-{\mathbf{u}}_1^{\left(t-1\right)}\right) \), \( {\mathbf{n}}^{\left(t-1\right)}={\mathbf{y}}^{\left(t-1\right)}+{\mathbf{u}}_2^{\left(t-1\right)} \), and C is a constant independent of x. The objective function of problem (13) is convex, with the unique solution
$$ {\mathbf{x}}^{(t)}={\mathbf{W}}^T{T}_{\varepsilon}\left\{\mathbf{W}\left[\left({\rho}_1{\mathbf{m}}^{\left(t-1\right)}+{\rho}_2{\mathbf{n}}^{\left(t-1\right)}\right)/\left({\rho}_1+{\rho}_2\right)\right]\right\}. $$
(14)
Here, T_ε(·) is the soft thresholding operator with ε = λ/(2(ρ_1 + ρ_2)). When the ℓ0 norm is exploited to promote sparsity, the hard thresholding operator \( T(w)=\begin{cases}w, & \mathrm{if}\ \left|w\right|>\sqrt{\lambda /\left({\rho}_1+{\rho}_2\right)};\\ 0, & \mathrm{otherwise}\end{cases} \) is utilized in (14) instead of the soft thresholding operator.
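The update (14) can be sketched as follows; `W`/`Wt` stand for the analysis and synthesis operators of the transform (for the identity or an orthonormal W the closed form is exact; the paper applies the same update with the TIHP tight frame). The names `x_update`, `W`, `Wt` are illustrative, not from the authors' code.

```python
import numpy as np

def soft_threshold(w, eps):
    return np.sign(w) * np.maximum(np.abs(w) - eps, 0.0)

def hard_threshold(w, thr):
    return np.where(np.abs(w) > thr, w, 0.0)

def x_update(m, n, rho1, rho2, lam, W, Wt, l0=False):
    """Eq. (14): average the two proximal targets, threshold in the
    transform domain, and synthesize back.  W / Wt are callables for the
    analysis and synthesis (adjoint) operators."""
    v = (rho1 * m + rho2 * n) / (rho1 + rho2)
    if l0:  # l0 variant: hard thresholding at sqrt(lam / (rho1 + rho2))
        return Wt(hard_threshold(W(v), np.sqrt(lam / (rho1 + rho2))))
    return Wt(soft_threshold(W(v), lam / (2.0 * (rho1 + rho2))))
```

The per-coefficient reasoning behind the two thresholds: for the ℓ1 term, minimizing (ρ_1+ρ_2)(c − w)² + λ|c| gives shrinkage by λ/(2(ρ_1+ρ_2)); for the ℓ0 term, keeping a coefficient pays λ and saves (ρ_1+ρ_2)w², so w survives exactly when |w| > √(λ/(ρ_1+ρ_2)).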
  2.

    z subproblem

     
With x, y, u_1, u_2 fixed in problem (11), the z subproblem is
$$ {\mathbf{z}}^{(t)}=\underset{\mathbf{z}}{argmin}\left\{\left|\right|\left|\mathbf{z}\right|-\mathbf{b}\left|\right|{}_2^2+{\rho}_1\left|\right|\mathbf{F}{\mathbf{x}}^{(t)}-\mathbf{z}+{\mathbf{u}}_1^{\left(t-1\right)}\left|\right|{}_2^2\right\}. $$
(15)
Let \( {\mathbf{w}}^{(t)}=\mathbf{F}{\mathbf{x}}^{(t)}+{\mathbf{u}}_1^{\left(t-1\right)}, \) thus
$$ {\mathbf{z}}^{(t)}=\underset{\mathbf{z}}{argmin}\left\{\left|\right|\left|\mathbf{z}\right|-\mathbf{b}\left|\right|{}_2^2+{\rho}_1\left|\right|{\mathbf{w}}^{(t)}-\mathbf{z}\left|\right|{}_2^2\right\}. $$
(16)
The first term is the Fourier magnitude constraint, and the second term keeps the underlying vector close to the known vector. To obtain the optimal solution of problem (16), the phase of z must equal that of w^(t)
$$ pha\left(\mathbf{z}\right)=pha\left({\mathbf{w}}^{(t)}\right). $$
(17)
Here, pha(•) represents the operator that extracts the phase. To solve problem (16), we therefore only need to consider the following optimization problem with respect to |z|
$$ \left|{\mathbf{z}}^{(t)}\right|=\underset{\left|\mathbf{z}\right|}{argmin}\left\{\left|\right|\left|\mathbf{z}\right|-\mathbf{b}\left|\right|{}_2^2+{\rho}_1\left|\right|\left|{\mathbf{w}}^{(t)}\right|-\left|\mathbf{z}\right|\left|\right|{}_2^2\right\}. $$
(18)
Since the cost function of problem (18) is differentiable with respect to |z|, the least squares solution to problem (18) is
$$ \left|{\mathbf{z}}^{(t)}\right|=\left(\mathbf{b}+{\rho}_1\left|{\mathbf{w}}^{(t)}\right|\right)/\left(1+{\rho}_1\right). $$
(19)
Therefore, the optimal solution to problem (16) is given by
$$ {\mathbf{z}}^{(t)}=\left[\left(\mathbf{b}+{\rho}_1\left|{\mathbf{w}}^{(t)}\right|\right)/\left(1+{\rho}_1\right)\right]\odot \exp \left[j\cdotp pha\left({\mathbf{w}}^{(t)}\right)\right]. $$
(20)
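Update (20) amounts to one line of NumPy: blend the measured magnitude b with |w| in the least-squares sense of (19), and reattach the phase of w. A minimal sketch (the function name is ours):

```python
import numpy as np

def z_update(w, b, rho1):
    """Eq. (20): the optimal z keeps the phase of w = F x + u1 and blends
    the measured magnitude b with |w| (least squares solution, Eq. (19))."""
    mag = (b + rho1 * np.abs(w)) / (1.0 + rho1)
    return mag * np.exp(1j * np.angle(w))
```

For example, with |w| = 2, b = 4, and ρ_1 = 1, the blended magnitude is (4 + 2)/2 = 3 while the phase of w is preserved unchanged.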
  3.

    y subproblem

     
With x, z, u_1, u_2 fixed in problem (11), the y subproblem is
$$ {\mathbf{y}}^{(t)}=\underset{\mathbf{y}}{argmin}\left\{{\rho}_2\left|\right|\mathbf{y}-{\mathbf{x}}^{(t)}+{\mathbf{u}}_2^{\left(t-1\right)}\left|\right|{}_2^2+{\mathbb{I}}_S\left(\mathbf{y}\right)\right\}. $$
(21)
Note that the first term of the above problem keeps the underlying vector close to the known vector \( {\mathbf{x}}^{(t)}-{\mathbf{u}}_2^{\left(t-1\right)} \) in the Euclidean norm, and the second term is an indicator function; the optimal solution to the above problem is
$$ {\mathbf{y}}^{(t)}={P}_S\left({\mathbf{x}}^{(t)}-{\mathbf{u}}_2^{\left(t-1\right)}\right). $$
(22)

where P_S(•) represents the projection operator onto the constraint set S.

Finally, update the scaled dual variables of ADMM
$$ \begin{cases}{\mathbf{u}}_1^{(t)}={\mathbf{u}}_1^{\left(t-1\right)}+\gamma \left(\mathbf{F}{\mathbf{x}}^{(t)}-{\mathbf{z}}^{(t)}\right)\\ {\mathbf{u}}_2^{(t)}={\mathbf{u}}_2^{\left(t-1\right)}+\gamma \left({\mathbf{y}}^{(t)}-{\mathbf{x}}^{(t)}\right)\end{cases} $$
(23)
Here, \( \gamma \in \left(0,\left(\sqrt{5}+1\right)/2\right) \) guarantees convergence for convex problems [18]. There is no theoretical guarantee that ADMM obtains the optimal solution to a non-convex problem. Nevertheless, the ADMM method can obtain a satisfactory solution for the non-convex PR problem [14, 19]. To improve the convergence of our algorithm, we incorporate the parameter γ, which can be adjusted heuristically, into ADMM. So far, all issues in handling problem (11) have been resolved. We update the variables x, z, y at the t-th iteration by solving subproblems (12), (15), and (21), and update u_1, u_2 by (23) iteratively. To monitor the convergence of our algorithm, we utilize the relative residual norm defined by [19]
$$ res=\left|\right|\left|\mathbf{F}{\mathbf{x}}^{(t)}\right|-\mathbf{b}\left|\right|{}_2/\left|\right|\mathbf{b}\left|\right|{}_2. $$
(24)
The iteration terminates when the relative residual norm satisfies res ≤ τ for some small τ > 0, or when the maximum iteration number is reached, in the noise-free case; only the latter termination condition is utilized in the noisy case. To describe our algorithm with the tight-frame sparse representation regularization in detail, Table 1 provides the complete procedure.
Table 1

Complete description of our proposed algorithm

Input: the Fourier magnitude b;

Initialization: t = 1, initial estimated image x^(0), error tolerance τ > 0, parameters λ, γ, ρ_1, ρ_2

Repeat

Update image x^(t) by (14);

Update z^(t), y^(t) by (20) and (22), respectively;

Update the scaled dual variables \( {\mathbf{u}}_1^{(t)} \), \( {\mathbf{u}}_2^{(t)} \) by (23);

t = t + 1;

Until maximum iteration number is reached or res ≤ τ

Output: final estimated image
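The procedure in Table 1 can be sketched end-to-end as a toy 1-D loop. This is a simplified illustration, not the authors' implementation: we take W = I (spatial sparsity) for brevity where the paper uses the TIHP analysis/synthesis operators, use the unitary DFT (`norm="ortho"`, so that F^H is the inverse FFT), and fold non-negativity into the support projection.

```python
import numpy as np

def soft_threshold(w, eps):
    return np.sign(w) * np.maximum(np.abs(w) - eps, 0.0)

def pr_admm(b, support, n_iter=300, lam=0.05, rho1=0.01, rho2=0.01, gamma=0.5):
    """1-D sketch of Table 1 with W = I; b is the measured magnitude of the
    unitary DFT and `support` a boolean mask for the constraint set S."""
    N = b.size
    rng = np.random.default_rng(0)
    x = np.where(support, rng.random(N), 0.0)       # crude initial guess
    z = np.fft.fft(x, norm="ortho")
    y = x.copy()
    u1 = np.zeros(N, dtype=complex)
    u2 = np.zeros(N)
    for _ in range(n_iter):
        # x subproblem, Eq. (14): with W = I the frame step is plain shrinkage
        m = np.fft.ifft(z - u1, norm="ortho").real
        v = (rho1 * m + rho2 * (y + u2)) / (rho1 + rho2)
        x = soft_threshold(v, lam / (2.0 * (rho1 + rho2)))
        # z subproblem, Eq. (20): blend magnitudes, keep the phase of w
        w = np.fft.fft(x, norm="ortho") + u1
        z = (b + rho1 * np.abs(w)) / (1.0 + rho1) * np.exp(1j * np.angle(w))
        # y subproblem, Eq. (22): project onto support and non-negativity
        y = np.where(support, np.maximum(x - u2, 0.0), 0.0)
        # scaled dual updates, Eq. (23)
        u1 = u1 + gamma * (np.fft.fft(x, norm="ortho") - z)
        u2 = u2 + gamma * (y - x)
    res = np.linalg.norm(np.abs(np.fft.fft(x, norm="ortho")) - b) / np.linalg.norm(b)
    return x, res
```

Since the problem is non-convex, this sketch carries no convergence guarantee; the relative residual `res` of Eq. (24) is returned so the caller can monitor stagnation, as in the paper.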

4 Experimental simulations

In our experiments, we use two standard grayscale images of size 512 × 512, “Lena” and “Fruits.” The Fourier transform of the test image is oversampled by various factors for comparison; we call this factor the oversampled factor [19]. For example, when the image is cropped to size 314 × 314 (i.e., 314 × 314 pixels of the original image in Fig. 1 are retained and the rest is excluded) and the cropped test image is zero-padded to create a 512 × 512 image, the oversampled factor is 1.63 (512/314). Theoretically, the lower the oversampled factor, the more difficult the reconstruction. We first compare the ℓ0 norm and the ℓ1 norm for promoting sparsity in our approach. Then, the algorithm is compared with other existing algorithms and evaluated in terms of reconstruction quality and robustness to noise. All simulations were performed in MATLAB R2012a on a computer with an AMD Athlon(tm) II X2 255 processor (3.11 GHz), 1.75 GB of memory, and the Windows XP operating system.
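The zero-padding setup described above can be sketched as follows; the centered placement mirrors the centered non-negative support window used later in the experiments, and the variable names are ours.

```python
import numpy as np

# Zero-pad a 314x314 crop into a 512x512 array: the crop sits in the
# center, so the non-negative support region is a centered window, and
# the resulting oversampled factor is 512/314 (about 1.63).
crop = np.ones((314, 314))
padded = np.zeros((512, 512))
start = (512 - 314) // 2
padded[start:start + 314, start:start + 314] = crop
factor = 512 / 314
```

The Fourier magnitude of `padded` is then the oversampled diffraction pattern from which the PR algorithms attempt reconstruction.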
Fig. 1

Images with 512 × 512 pixels for our experiments. a Lena. b Fruits

4.1 Parameter setup

To demonstrate the effectiveness of sparsity in our algorithm, we compare it with an alternating projection algorithm without a sparsity constraint; the HIO algorithm is chosen as the benchmark under the noise-free case. The non-negativity constraint S_+ is utilized in the HIO algorithm, and the HIO MATLAB code can be downloaded from https://github.com/leeneil/ghio-matlab. We also include state-of-the-art sparsity-based algorithms, namely the Max-K algorithm [8] and the RAAR framework with the shearlet sparsity method [9], for comparison. The Max-K algorithm can be regarded as a parameterized relaxation of RAAR [8] utilizing a K-sparse constraint; therefore, we chose the RAAR method with a sparsity constraint in the TIHP tight frame for comparison. Because the ℓ0 norm is incorporated to promote sparsity, we term this algorithm the RAAR-ℓ0 algorithm, which was also introduced in [8]. For the RAAR-ℓ0 algorithm, the sparsity level K is set to 0.4N, where N is the total number of measurements, and β = 0.99. Moreover, the RAAR-based algorithm proposed in [9] is selected for comparison. We utilize sparsity in the TIHP tight frame instead of shearlet sparsity for reconstruction from far-field data. The ℓ1 norm is utilized in [9] to promote sparsity; thus, the corresponding algorithm is called the RAAR-ℓ1 algorithm. It is difficult to give theoretical guidance for the choice of parameters of PR algorithms; in general, these parameters are tuned heuristically. To obtain better performance for the RAAR-ℓ1 algorithm, we suggest an empirical rule for updating the threshold ε: ε = C_1 + C_2/t (here C_1 and C_2 are constants that need to be tuned empirically). The threshold ε of the RAAR-ℓ1 algorithm thus decreases with the iteration number t, which gives promising results, as demonstrated in [9].
Each parameter of the RAAR-ℓ1 algorithm was evaluated by varying one parameter at a time while keeping the rest fixed, so as to obtain the highest peak signal-to-noise ratio (PSNR). We tried several choices of β, C_1, and C_2 for this algorithm at oversampled factor 1.58, and experimental results show that β = 0.99 and ε = 1.5 + 8/t are suitable parameter choices.

The projection operator P_M utilized in the RAAR-ℓ0 and RAAR-ℓ1 algorithms is defined as [7]
$$ {P}_M\left(\mathbf{x}\right)=\mathbf{I}-{\mathbf{F}}^{-1}\left\{\left[\frac{\left|\mathbf{F}\mathbf{x}\right|{}^2}{{\left(\left|\mathbf{F}\mathbf{x}\right|{}^2+{\eta}^2\right)}^{1/2}}-\mathbf{b}\right]\odot \left[\frac{\left|\mathbf{F}\mathbf{x}\right|{}^2+2{\eta}^2}{{\left(\left|\mathbf{F}\mathbf{x}\right|{}^2+{\eta}^2\right)}^{3/2}}\mathbf{F}\mathbf{x}\right]\right\} $$
(25)
with η = 10^{−10}. Both the RAAR-ℓ0 and RAAR-ℓ1 algorithms utilize the RAAR framework with the TIHP sparsity constraint; the difference between the two algorithms is the object-domain constraint, which yields a different projection operator \( {P}_{{\mathrm{S}}_{+}}\left(\bullet \right) \) in (4). Concretely, the RAAR-ℓ0 algorithm utilizes the following projection operator
$$ {P}_{\mathrm{S}}^K\left(\bullet \right)=B\left\{{P}_{\mathrm{S}}\left[{\mathbf{W}}^T{T}_{\mathrm{K}}\left(\mathbf{W}\left(\bullet \right)\right)\right]\right\} $$
(26)

Here, T_K(•) represents the operator that retains the K largest coefficients, and B(•) = max(min(•, 255), 0) denotes the pixel-intensity constraint. The projection P_S(•) represents the projection onto the support constraint. Differing from the RAAR-ℓ0 algorithm, the RAAR-ℓ1 algorithm incorporates \( {P}_{\mathrm{S}}^{\varepsilon}\left(\bullet \right)=B\left\{{P}_{\mathrm{S}}\left[{\mathbf{W}}^T{T}_{\varepsilon}\left(\mathbf{W}\left(\bullet \right)\right)\right]\right\} \) into (4). The maximum number of iterations for all algorithms in the experiments is set to 3000.
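The two building blocks of (26), T_K and B(•), can be sketched as follows; the function names are illustrative, and ties at the magnitude threshold may keep a few extra coefficients.

```python
import numpy as np

def keep_k_largest(w, K):
    """T_K(.): retain the K largest-magnitude coefficients, zero the rest
    (ties at the threshold magnitude may keep slightly more than K)."""
    if K >= w.size:
        return w.copy()
    thr = np.partition(np.abs(w).ravel(), -K)[-K]
    return np.where(np.abs(w) >= thr, w, 0.0)

def intensity_box(x):
    """B(.) = max(min(., 255), 0): clip pixel intensities to [0, 255]."""
    return np.clip(x, 0.0, 255.0)
```

Composing these with the frame analysis/synthesis and the support projection, as in (26), yields the object-domain projection of the RAAR-ℓ0 algorithm.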

For the TIHP tight frame, the transform and its inverse can be downloaded from http://www.io.csic.es/PagsPers/JPortilla/software/file/4-l0-abs-deblur-pack. Both the ℓ0 norm and the ℓ1 norm are considered to promote sparsity, and the corresponding algorithms utilizing the TIHP tight frame are termed PR-TIHP-ℓ0 and PR-TIHP-ℓ1, respectively. The P_S utilized in our algorithm is defined as
$$ {P}_S\left(\mathbf{x}\right)=\begin{cases}B\left(\mathbf{x}\right), & \mathrm{if}\ \mathbf{x}\in S;\\ 0, & \mathrm{otherwise}.\end{cases} $$
(27)

Note that a good initial guess is important for PR; we utilize x^(0) = P_S P_M(v) as the initial guess for all algorithms, where v is a random image. We tune the parameters of the two proposed algorithms heuristically: we set the number of dyadic scales to 7, and ρ_1 = ρ_2 = 0.01, γ = 0.5, τ = 0.01 for both algorithms. For the parameter λ, we use 0.1 for PR-TIHP-ℓ0 and 0.05 for PR-TIHP-ℓ1.

We compare the proposed PR-TIHP-ℓ0 and PR-TIHP-ℓ1 at various oversampled factors. In Fig. 2, we give the reconstructed image “Lena” at oversampled factors 1.63 and 1.62, namely, containing 314 × 314 pixels (the first row) and 316 × 316 pixels (the second row), respectively. Moreover, the corresponding non-negative support regions in the center of the 512 × 512 padded image are shown in Fig. 2. Since a global phase factor exists in the reconstructed images, the reconstructions can be flipped or shifted; we therefore aligned the reconstruction with the original image to give a clear comparison. Moreover, the zero-padded part of the reconstructed image used to create the oversampled diffraction pattern is excluded. One can see that nearly perfect reconstructions in the first row are achieved by both algorithms; however, the PR-TIHP-ℓ0 algorithm fails to reconstruct as the oversampled factor decreases, whereas our PR-TIHP-ℓ1 algorithm provides a perfect reconstruction at these oversampled factors. Indeed, for the image “Lena,” the minimum oversampled factor for perfect reconstruction by our PR-TIHP-ℓ1 algorithm is 1.58 (324 × 324 pixels), while the reconstruction of the PR-TIHP-ℓ0 algorithm is far worse than that of PR-TIHP-ℓ1 at this oversampled factor. Therefore, we utilize the ℓ1 norm to promote sparsity in the comparisons of the next subsection.
Fig. 2

The reconstructed “Lena” images by PR-TIHP-ℓ0 and PR-TIHP-ℓ1 with various oversampled factors and their corresponding non-negative support fields. “Lena” with 314 × 314 pixels (first row, oversampled factor 1.63) and 316 × 316 pixels (second row, oversampled factor 1.62)

4.2 Phase retrieval from noise-free oversampled diffraction pattern

We performed the proposed algorithm and the three benchmark algorithms for various images at oversampled factor 1.58. In this simulation, the non-negative support region is simply the window with size 324 × 324 in the center of the 512 × 512 padded image.

We used the same initial guess for all algorithms for fairness. Figures 3 and 4 show comparisons of the reconstruction performance for the two test images. Note that the reconstructed image may be flipped owing to the global phase factor in its Fourier transform; the translation has been removed for clear comparison. The original images are shown in Figs. 3a and 4a. From the reconstructions of the four algorithms presented in Figs. 3 and 4, it is easy to see that our PR-TIHP-ℓ1 produced high-quality reconstructions regardless of the image (see Figs. 3e and 4e). From the reconstructions in Figs. 3b and 4b, one can see that the HIO algorithm cannot give a recognizable reconstruction at this oversampled factor. The RAAR-ℓ0 algorithm produced a better reconstruction than HIO, but much texture and detail is lost: the texture of the hat in the image “Lena” (see Fig. 3c) is lost, and the spots on the apple in the image “Fruits” (see Fig. 4c) are also lost. The RAAR-ℓ1 reconstruction is not visually good for the image “Lena” (see Fig. 3d), and most details are also lost in the image “Fruits” (see Fig. 4d). Interestingly, a nearly perfect reconstruction of the image “Lena” is achieved by our PR-TIHP-ℓ1 algorithm. As for the “Fruits” reconstruction, the spots on the apple are preserved in our result (see Fig. 4e), showing that our reconstruction is better than the images reconstructed by the benchmark algorithms. The use of the TIHP tight frame and the ADMM technique accounts for the considerable improvement obtained by our algorithm.
Fig. 3

The reconstructed “Lena” images with 324 × 324 pixels. Top-row (left to right): a The original image. b The reconstructed image by HIO; Bottom row (left to right), the reconstructed images by: c RAAR-l 0. d RAAR-l 1. e PR-TIHP-l 1

Fig. 4

The reconstructed “Fruits” images with 324 × 324 pixels. Top-row (left to right): a The original image. b The reconstructed image by HIO; Bottom row (left to right), the reconstructed images by: c RAAR-l 0. d RAAR-l 1. e PR-TIHP-l 1

Although our proposed PR-TIHP-l 1 algorithm obtains only a suboptimal solution to the non-convex problem (11), it still yields a better reconstruction; rigorous global convergence is difficult to prove, a consequence of the non-convexity of the objective function. Empirically, our PR-TIHP-l 1 algorithm converges to a stable point as the iterations increase. Figure 5 plots res, the relative residual norm defined in (24), versus iterations for the images “Lena” and “Fruits” at the oversampled factor 1.58. Figure 5 shows that the three benchmark algorithms easily become trapped in stagnation at this oversampled factor. Our PR-TIHP-l 1 algorithm circumvents this issue: its curve ultimately flattens out and stabilizes, indicating good convergence behavior. Since the optimization problem is non-convex, some small perturbations remain visible in our otherwise stable curves.
Fig. 5

Comparisons of the convergence behaviors of HIO, RAAR-l 0, RAAR-l 1, and PR-TIHP-l 1. The relative residual norm (res) versus iterations: a Lena. b Fruits
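The residual tracked in Fig. 5 can be computed as follows. Definition (24) is not reproduced in this section, so the sketch below assumes the usual form, the l 2 mismatch between the Fourier magnitude of the current iterate and the measured magnitude, normalized by the measurement norm:

```python
import numpy as np

def relative_residual(x, b):
    """Relative residual between the Fourier magnitude of the current
    iterate x and the measured oversampled magnitude b.

    Assumed form: res = || |F(x)| - b ||_2 / ||b||_2; the exact
    definition (24) may differ in normalization.
    """
    mag = np.abs(np.fft.fft2(x))
    return np.linalg.norm(mag - b) / np.linalg.norm(b)
```

Logging this quantity once per iteration reproduces curves like those in Fig. 5: stagnating algorithms plateau at a high value, while a converging run decays and flattens near zero.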

To show the computational cost, we report the running times of our PR-TIHP-l 1 algorithm and the benchmark algorithms in Table 2. In the table, “average” denotes the mean running time over the two test images. Table 2 shows that our proposed algorithm is faster than the sparsity-based benchmark algorithms; the high computational cost of the projection operators P M and P S accounts for the slower speed of the RAAR-l 0 and RAAR-l 1 algorithms. However, Table 2 also shows that our algorithm is slower than the HIO algorithm in terms of average running time; the TIHP transform and its inverse account for this overhead. Interestingly, for the image “Fruits,” our algorithm is faster than HIO: because different images contain different components, the running times differ across images, and for “Fruits” our algorithm needs only 605 iterations, yielding a faster reconstruction. From these results and the running-time analysis, our algorithm outperforms the RAAR-l 0 and RAAR-l 1 algorithms in both reconstruction quality and speed. Although the average time of the HIO algorithm is lower, its reconstruction quality is far worse than ours at low oversampled factors.
Table 2

Time (s) for phase retrieval of our PR-TIHP-l 1 algorithm and the benchmark algorithms

Testing image    HIO       RAAR-l 0    RAAR-l 1    PR-TIHP-l 1

Lena             269.00    3403.06     1559.63     797.27

Fruits           268.52    3423.63     1578.53     159.94

Average          268.76    3413.34     1569.08     478.60

4.3 Phase retrieval from noisy oversampled diffraction pattern

We simulated a noisy diffraction pattern from the image “Lena” at an oversampled factor of 1.74. For the noisy case, we replace the HIO algorithm in the comparison with the oversampling smoothness (OSS) algorithm [20], which consistently produces better reconstructions than HIO under noise. The OSS code can be downloaded from the authors’ homepage, and we also set its maximum number of iterations to 3000. Because a random phase without the support constraint suits OSS, we used its own initialization method for the OSS initial guess; the other algorithms are initialized as described in Section 4.1. We added random noise n to the true oversampled diffraction pattern to generate the noisy measurement data b noise = b + n. The noise n is scaled so that R noise, defined by R noise = ||b − b noise||1/||b||1, ranges from 5 to 20 %. For each noise level, we performed 20 independent runs and computed R real [20], R real = ||x (t) − x||1/||x||1, to evaluate the reconstruction quality.
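The noise scaling and the error metric described above can be sketched directly from their definitions (the helper names are ours):

```python
import numpy as np

def add_scaled_noise(b, level, rng):
    """Add Gaussian noise to the measured magnitudes b, scaled so that
    R_noise = ||n||_1 / ||b||_1 equals the requested noise level."""
    n = rng.standard_normal(b.shape)
    n *= level * np.abs(b).sum() / np.abs(n).sum()
    return b + n

def r_real(x_rec, x_true):
    """Reconstruction error R_real = ||x_rec - x_true||_1 / ||x_true||_1."""
    return np.abs(x_rec - x_true).sum() / np.abs(x_true).sum()
```

Running each algorithm 20 times per noise level on independently drawn noise realizations and averaging `r_real` over the runs yields curves like those in Fig. 6.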

Gaussian noise was added to the true oversampled diffraction pattern to characterize the effect of different noise levels on the reconstructions. Figure 6 presents the average R real of the three benchmark algorithms and our algorithm as a function of the noise level. Our algorithm always exhibits the smallest R real, indicating that it achieves the best reconstructions. To further verify this, the best reconstruction of each algorithm, namely the one with the smallest R real, under Gaussian noise with R noise = 10 % is presented in Fig. 7 (the original image appears in Fig. 7a). Visually, the RAAR-l 0 algorithm produces the worst reconstruction (see Fig. 7c). The OSS algorithm does better than RAAR-l 0 but suffers from many artifacts and loses many details (see Fig. 7b). Although the RAAR-l 1 algorithm outperforms the other two benchmarks in reconstruction quality, many details are still lost in its reconstruction (see Fig. 7d). Our reconstruction in Fig. 7e both reduces the artifacts and preserves more details than the three benchmark algorithms. These results demonstrate that our algorithm is robust to noise.
Fig. 6

R real as a function of the noise levels for the reconstruction of image “Lena”

Fig. 7

The reconstructed “Lena” images with 294 × 294 pixels from the Gaussian noisy oversampled diffraction pattern. Top-row (left to right): a The original image. b The reconstructed image by OSS (R real = 0.094); bottom row (left to right), the reconstructed images by: c RAAR-l 0 (R real = 0.1846). d RAAR-l 1 (R real = 0.0473). e PR-TIHP-l 1 (R real = 0.0391)

5 Conclusions

In this paper, we have introduced a framework for PR based on the translation invariant Haar pyramid. Our main idea is to formulate a sparse representation regularization term using the TIHP tight frame for PR. We incorporated this regularization term into the PR problem, yielding a new non-convex optimization problem, and applied the ADMM technique to solve it, obtaining a satisfactory solution. We demonstrated heuristically that the sparsity-promoting l 1 norm yields better reconstructions than the l 0 norm for our approach. Moreover, experimental simulations showed that our proposed approach considerably outperforms previous PR algorithms in terms of reconstruction quality at low oversampled factors. Gaussian noise at various levels was added to the true oversampled diffraction pattern to evaluate the reconstruction quality, showing that our algorithm is robust to noise. In this paper, the TIHP tight frame was chosen to retrieve phases and recover images; exploiting finer tight frames to further improve the reconstruction quality is left for future work.

Declarations

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 61471313 and by the Natural Science Foundation of Hebei Province under Grant F2014203076.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Information Science and Engineering, Yanshan University

References

  1. Y Shechtman, YC Eldar, O Cohen, HN Chapman, J Miao, M Segev, Phase retrieval with application to optical imaging: a contemporary overview. IEEE Signal Process. Mag. 32, 87–109 (2015)
  2. JR Fienup, Phase retrieval algorithms: a personal tour [invited]. Appl. Opt. 52, 45–56 (2013)
  3. RW Gerchberg, WO Saxton, A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35, 237–246 (1972)
  4. JR Fienup, Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett. 3, 27–29 (1978)
  5. CL Guo, S Shi, JT Sheridan, Iterative phase retrieval algorithms I: optimization. Appl. Opt. 54, 4698–4708 (2015)
  6. HH Bauschke, PL Combettes, DR Luke, Hybrid projection-reflection method for phase retrieval. JOSA A 20, 1025–1034 (2003)
  7. DR Luke, Relaxed averaged alternating reflections for diffraction imaging. Inverse Probl. 21, 37–50 (2005)
  8. S Mukherjee, CS Seelamantula, Fienup algorithm with sparsity constraints: application to frequency-domain optical-coherence tomography. IEEE Trans. Signal Process. 62, 4659–4672 (2014)
  9. S Loock, G Plonka, Phase retrieval for Fresnel measurements using a shearlet sparsity constraint. Inverse Probl. 30 (2014)
  10. Y Shechtman, A Beck, YC Eldar, GESPAR: efficient phase retrieval of sparse signals. IEEE Trans. Signal Process. 62, 928–938 (2014)
  11. H Ohlsson, AY Yang, R Dong, SS Sastry, Nonlinear basis pursuit, 2013
  12. R Fan, Q Wan, F Wen, YP Liu, Iterative projection approach for phase retrieval of semi-sparse wave field. EURASIP J. Adv. Signal Process. (2014). doi:10.1186/1687-6180-2014-24
  13. DS Weller, A Pnueli, G Divon, O Radzyner, YC Eldar, Undersampled phase retrieval with outliers, 2014. arXiv:1402.7350
  14. Z Yang, CS Zhang, LH Xie, Robust compressive phase retrieval via L 1 minimization with application to image reconstruction (Cornell University, Ithaca, 2013)
  15. S Boyd, N Parikh, E Chu, B Peleato, J Eckstein, Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3, 1–122 (2011)
  16. JA Guerrero-Colon, L Mancera, J Portilla, Image restoration using space-variant Gaussian scale mixtures in overcomplete pyramids. IEEE Trans. Image Process. 17, 27–41 (2008)
  17. J Portilla, Image restoration through L 0 analysis-based sparse optimization in tight frames (Paper presented at the IEEE International Conference on Image Processing, Cairo, 2009), pp. 3909–3912
  18. R Glowinski, Lectures on Numerical Methods for Nonlinear Variational Problems (Springer, Berlin Heidelberg, 1981)
  19. ZW Wen, C Yang, X Liu, S Marchesini, Alternating direction methods for classical and ptychographic phase retrieval. Inverse Probl. 28 (2012)
  20. JA Rodriguez, R Xu, CC Chen, Y Zou, J Miao, Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities. J. Appl. Cryst. 46, 312–318 (2013)

Copyright

© Shi et al. 2015