In this section, we present an iterative method to generate optimized frequency-domain OFDM symbols. The method consists of solving a sequence of SOCP subproblems which approximate the original problem (7) to (9) locally. Let *c*_{(0)} be a randomly generated feasible starting point for the problem (7) to (9). A new feasible point *c*_{(1)} is obtained as the solution of the following program:

\[
\underset{\mathbf{c}\in\mathbb{C}^{N}}{\text{minimize}}\quad \left\|\mathbf{S}\left(\mathbf{c}-\mathbf{c}_{0}\right)\right\|^{2}
\]

(10)

subject to

\[
\mathbf{c}^{H}\mathbf{M}_{i}\mathbf{c}-\mathbf{c}_{(0)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(0)}-2\Re\left(\mathbf{c}_{(0)}^{H}\mathbf{P}_{\alpha}\left(\mathbf{c}-\mathbf{c}_{(0)}\right)\right)\le 0,\quad \forall i
\]

(11)

\[
\mathbf{c}^{H}\mathbf{c}-\mathbf{c}_{(0)}^{H}\mathbf{S}_{\beta}\mathbf{c}_{(0)}-2\Re\left(\mathbf{c}_{(0)}^{H}\mathbf{S}_{\beta}\left(\mathbf{c}-\mathbf{c}_{(0)}\right)\right)\le 0,
\]

(12)

where ℜ{*z*} denotes the real part of the complex number *z*. The problem (10) to (12) is obtained by linearizing the concave parts of the constraint functions in (8) and (9) around *c*_{(0)}, i.e., by replacing them with their first-order Taylor expansions. This yields an SOCP that can be readily solved by CVX [18]. The method then linearizes the original problem around *c*_{(1)} and repeats this procedure until convergence is reached. A motivation for this reformulation is that the *best* convex approximation of a concave function is an affine function.
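To see why the linearization is safe, note that for a positive semidefinite matrix the quadratic form *c*^{H}*P*_{α}*c* is convex, so its first-order Taylor expansion is a global under-estimator; consequently, any point satisfying the linearized constraint (11) also satisfies the original constraint (8). The following minimal numpy sketch (with an arbitrary randomly generated PSD matrix standing in for *P*_{α}) illustrates this property numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8  # illustrative dimension

# Random PSD matrix P (stand-in for P_alpha) and an expansion point c0.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
P = A.conj().T @ A                     # A^H A is positive semidefinite
c0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def quad(P, x):
    """Real-valued quadratic form x^H P x."""
    return (x.conj() @ P @ x).real

def linearized(P, c, c0):
    """First-order Taylor expansion of c^H P c around c0:
    c0^H P c0 + 2 Re(c0^H P (c - c0))."""
    return quad(P, c0) + 2 * (c0.conj() @ P @ (c - c0)).real

# Convexity makes the expansion a global under-estimator: the gap is
# exactly (c - c0)^H P (c - c0) >= 0 for every test point c.
for _ in range(1000):
    c = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    assert linearized(P, c, c0) <= quad(P, c) + 1e-9
```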

This method generates a sequence of feasible points with nonincreasing objective values. To prove this claim, we proceed as follows. Note that (11) with *c* = *c*_{(1)} can be equivalently written as:

\[
\mathbf{c}_{(1)}^{H}\mathbf{M}_{i}\mathbf{c}_{(1)}+\mathbf{c}_{(0)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(0)}-\mathbf{c}_{(0)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(1)}-\mathbf{c}_{(1)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(0)}\le 0.
\]

Now, note that

\[
\left(\mathbf{c}_{(1)}-\mathbf{c}_{(0)}\right)^{H}\mathbf{P}_{\alpha}\left(\mathbf{c}_{(1)}-\mathbf{c}_{(0)}\right)\ge 0,
\]

since *P*_{α} is a positive semidefinite matrix. Thus, we can write

\[
\left(\mathbf{c}_{(1)}-\mathbf{c}_{(0)}\right)^{H}\mathbf{P}_{\alpha}\left(\mathbf{c}_{(1)}-\mathbf{c}_{(0)}\right)\ge \mathbf{c}_{(1)}^{H}\mathbf{M}_{i}\mathbf{c}_{(1)}+\mathbf{c}_{(0)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(0)}-\mathbf{c}_{(0)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(1)}-\mathbf{c}_{(1)}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(0)}.
\]

It is now straightforward to see that

\[
\mathbf{c}_{(1)}^{H}\left(\mathbf{M}_{i}-\mathbf{P}_{\alpha}\right)\mathbf{c}_{(1)}\le 0,
\]

i.e., *c*_{(1)} satisfies the constraints in (8). It can be shown in the same way that *c*_{(1)} also satisfies the constraint in (9). To this end, we combine the inequality

\[
\left(\mathbf{c}_{(1)}-\mathbf{c}_{(0)}\right)^{H}\mathbf{S}_{\beta}\left(\mathbf{c}_{(1)}-\mathbf{c}_{(0)}\right)\ge 0
\]

with (12) evaluated at *c* = *c*_{(1)}. Then, after some basic manipulations, we obtain \(\mathbf{c}_{(1)}^{H}\left(\mathbf{I}_{N}-\mathbf{S}_{\beta}\right)\mathbf{c}_{(1)}\le 0\), i.e., *c*_{(1)} satisfies the constraint in (9). Thus, *c*_{(1)} is feasible for the original problem (7) to (9).
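Both feasibility arguments can be checked numerically: with random PSD stand-ins for *M*_{i}, *P*_{α}, and *S*_{β}, the gap between each original constraint and its linearized counterpart is exactly the nonnegative quadratic used in the proof, so the original constraint value never exceeds the linearized one. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6  # illustrative dimension

def rand_c():
    return rng.standard_normal(N) + 1j * rng.standard_normal(N)

def psd():
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return A.conj().T @ A              # A^H A is positive semidefinite

def herm(x, Q):
    """Real-valued quadratic form x^H Q x (Q Hermitian)."""
    return (x.conj() @ Q @ x).real

M, P, S_beta = psd(), psd(), psd()     # stand-ins for M_i, P_alpha, S_beta
I = np.eye(N)
c0, c1 = rand_c(), rand_c()            # play the roles of c_(0) and c_(1)

# Constraint (8) minus constraint (11), both evaluated at c_(1): the
# difference is -(c1 - c0)^H P (c1 - c0) <= 0, so (11) implies (8).
lin_P = herm(c1, M) - herm(c0, P) - 2 * (c0.conj() @ P @ (c1 - c0)).real
assert herm(c1, M - P) <= lin_P + 1e-9

# The same argument for constraint (9) versus (12), with S_beta and I_N.
lin_S = herm(c1, I) - herm(c0, S_beta) - 2 * (c0.conj() @ S_beta @ (c1 - c0)).real
assert herm(c1, I - S_beta) <= lin_S + 1e-9
```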

Next, note that *c*_{(0)} itself is feasible for (10) to (12): when *c* = *c*_{(0)}, the constraints (11) and (12) reduce to (8) and (9), respectively. Since *c*_{(1)} minimizes the objective over a feasible set that contains *c*_{(0)}, we have ||*S*(*c*_{(1)}-*c*_{0})||^{2}≤||*S*(*c*_{(0)}-*c*_{0})||^{2}, i.e., the objective value of the new feasible point *c*_{(1)} cannot exceed that of the starting point *c*_{(0)}. The algorithm stops when ||*S*(*c*_{(k)}-*c*_{0})||-||*S*(*c*_{(k+1)}-*c*_{0})||<10^{-4} for some *k*. We refer to the proposed method as ‘NEW SOCP’ hereafter.
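The outer loop and its stopping rule can be illustrated on a toy one-dimensional analogue, a hypothetical problem chosen only so that the linearized subproblem has a closed-form solution (the actual method instead solves the SOCP (10) to (12) with CVX at each step):

```python
# Toy 1-D analogue of the iterative scheme: minimize (x - 2)^2 subject to
# 1 - x^2 <= 0, whose feasible set {x : |x| >= 1} is nonconvex. Linearizing
# the concave part -x^2 around the iterate x_k gives the affine constraint
# 1 + x_k^2 - 2*x_k*x <= 0, i.e. x >= (1 + x_k^2) / (2*x_k) for x_k > 0,
# so the subproblem minimizer has a closed form (no SOCP solver needed).

def solve_subproblem(x_k, target=2.0):
    lower = (1.0 + x_k ** 2) / (2.0 * x_k)  # linearized feasible set: x >= lower
    return max(target, lower)               # clip the unconstrained minimizer

x = 1.5                                     # feasible start: 1.5^2 >= 1
f = abs(x - 2.0)                            # plays the role of ||S(c_(k) - c_0)||
for _ in range(100):
    x_new = solve_subproblem(x)
    f_new = abs(x_new - 2.0)
    done = f - f_new < 1e-4                 # stopping rule from the main text
    x, f = x_new, f_new
    if done:
        break

assert x * x >= 1.0                         # the final iterate is feasible
```

As in the proof above, each iterate remains feasible for the original nonconvex constraint and the objective value never increases.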

Naturally, the algorithm requires a strictly feasible point of (7) to (9) to start from. Finding such a point is not a trivial task and, unfortunately, no solid theory is available for it. However, the following heuristic has proven efficient in practice. The feasible starting point *c*_{(0)} is obtained as a solution of the convex feasibility problem:

\[
\underset{\mathbf{c}\in\mathbb{C}^{N}}{\text{find}}\quad \mathbf{c}
\]

(13)

subject to

\[
\mathbf{c}^{H}\mathbf{M}_{i}\mathbf{c}-f_{1}-2\Re\left(\mathbf{c}_{(\text{rand})}^{H}\mathbf{P}_{\alpha}\left(\mathbf{c}-\mathbf{c}_{(\text{rand})}\right)\right)\le 0,\quad \forall i
\]

(14)

\[
\mathbf{c}^{H}\mathbf{c}-f_{2}-2\Re\left(\mathbf{c}_{(\text{rand})}^{H}\mathbf{S}_{\beta}\left(\mathbf{c}-\mathbf{c}_{(\text{rand})}\right)\right)\le 0,
\]

(15)

where \(f_{1}=\mathbf{c}_{(\text{rand})}^{H}\mathbf{P}_{\alpha}\mathbf{c}_{(\text{rand})}\), \(f_{2}=\mathbf{c}_{(\text{rand})}^{H}\mathbf{S}_{\beta}\mathbf{c}_{(\text{rand})}\), and *c*_{(rand)} is randomly generated: the real and imaginary parts of the *i*th entry of the vector *c*_{(rand)} are randomly generated in the intervals [ℜ{*c*_{0,*i*}}-1, ℜ{*c*_{0,*i*}}+1] and [ℑ{*c*_{0,*i*}}-1, ℑ{*c*_{0,*i*}}+1], respectively, where *c*_{0,*i*} is the *i*th entry of the vector *c*_{0} and ℑ{*a*} denotes the imaginary part of the complex number *a*.

Problem (13) to (15) is obtained from (10) to (12) by replacing the linearization point *c*_{(0)} with *c*_{(rand)} and can be solved using CVX. Problem (13) to (15) may be infeasible; when it is feasible, however, this heuristic provides a feasible solution to problem (7) to (9), which is used as the starting point of the algorithm^{a}.
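The generation of *c*_{(rand)} can be sketched in numpy as follows; the uniform draw within the intervals and the QPSK nominal symbol are illustrative assumptions, since the text only requires the entries to be generated randomly within those intervals:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 16  # number of subcarriers (illustrative)

# Nominal frequency-domain symbol c_0 (here: random QPSK, purely illustrative).
c_nominal = (rng.choice([-1.0, 1.0], N)
             + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

# Draw c_(rand): each entry's real and imaginary parts lie within +/- 1
# of the corresponding parts of c_0, as prescribed by the heuristic.
c_rand = (c_nominal.real + rng.uniform(-1.0, 1.0, N)) \
         + 1j * (c_nominal.imag + rng.uniform(-1.0, 1.0, N))

assert np.all(np.abs(c_rand.real - c_nominal.real) <= 1.0)
assert np.all(np.abs(c_rand.imag - c_nominal.imag) <= 1.0)
```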

It is important to note that the existing convex methods [15, 16] may fail to deliver a feasible solution to the EVM minimization problem (4) to (6); please see ([15], Sec. III-C) and ([16], Sec. IV). This is because in [15, 16] the feasible (nonconvex) set of the original problem (4) to (6) lies within the (convex) feasible set of the relaxed convex problem, so a solution of the relaxation need not be feasible for the original problem. In our work, in sharp contrast to [15, 16], the feasible set of each convex subproblem is a convex subset of the original (nonconvex) feasible set. Hence, the feasibility of the new solution is guaranteed.

*Note:* We can introduce an optional constraint that will keep the EVM below some preset threshold. This corresponds to a more challenging and realistic scenario where PAPR, FCPO, and EVM are simultaneously constrained. In that case, the EVM optimization can be formulated as

\[
\underset{p\in\mathbb{R},\,\mathbf{c}\in\mathbb{C}^{N}}{\text{minimize}}\quad p
\]

(16)

subject to

\[
\text{EVM}\le p\,\text{EVM}_{\max},
\]

(17)

\[
(5),\quad (6),\quad p\le 1,
\]

(18)

where EVM_{max} is the maximum allowed EVM. Note that the optimization problem (4) to (6) is different from the one in (16) to (18), since the search space for *c* in the former is larger than that in the latter.
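As an illustration of the additional constraints, the following sketch checks (17) and (18) for a candidate symbol. The EVM definition used here, EVM(*c*) = ||*S*(*c*-*c*_{0})||/||*S* *c*_{0}||, is assumed purely for concreteness; the paper's actual definition appears in (4), outside this section:

```python
import numpy as np

def evm(c, c_nominal, S):
    """Assumed (illustrative) EVM: relative deviation of the selected
    subcarriers of c from the nominal symbol c_0."""
    return np.linalg.norm(S @ (c - c_nominal)) / np.linalg.norm(S @ c_nominal)

def satisfies_17_18(c, c_nominal, S, p, evm_max):
    """Check the constraints EVM <= p * EVM_max and p <= 1."""
    return evm(c, c_nominal, S) <= p * evm_max and p <= 1.0

rng = np.random.default_rng(3)
N = 8
S = np.eye(N)                       # selection matrix, identity for simplicity
c0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)

assert satisfies_17_18(1.01 * c0, c0, S, p=1.0, evm_max=0.1)      # EVM = 0.01
assert not satisfies_17_18(2.00 * c0, c0, S, p=1.0, evm_max=0.1)  # EVM = 1.00
```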

We remark that the constraint (17) is convex and, consequently, the EVM problem (16) to (18) can also be addressed by the proposed method^{b}. The simulation results in Section 4 will assess the effectiveness of the new method.