 Research
 Open Access
Coupled decompositions: exploiting primal–dual interactions in convex optimization problems
EURASIP Journal on Advances in Signal Processing volume 2013, Article number: 41 (2013)
Abstract
Decomposition techniques implement the so-called "divide and conquer" principle in convex optimization problems, primal and dual decompositions being the two classical approaches. Although both solutions achieve the goal of splitting the original program into several smaller problems (called the subproblems), these techniques generally exhibit slow convergence. This is a limiting factor in practice and, in order to circumvent this drawback, we develop in this article the coupled-decompositions method. As a result, the number of iterations can be reduced by more than one order of magnitude. Furthermore, the new technique is self-adjusting, i.e., it does not depend on user-defined parameters, as opposed to what happens with classical strategies. Given that in signal processing applied to communications and networking we usually deal with a variety of problems that exhibit certain coupling structures, our method is useful to design decentralized as well as centralized optimization schemes with advantages over the existing techniques in the literature. In this article, we present three different resource allocation problems where the proposed method is successfully applied.
1 Introduction
Convex optimization theory [1, 2] has provided in the last decades a powerful framework to solve optimization problems in many distinct areas. Besides the numerous applications in the signal processing literature, it is also possible to find examples in topics such as filter design, machine learning, or finance, among others. This great success has been motivated by the facts that (i) convex optimization provides relevant insights into each specific problem thanks to a mature theoretical framework, (ii) some problems can be solved analytically or semi-analytically by applying the so-called Karush–Kuhn–Tucker (KKT) optimality conditions, and (iii) efficient numerical methods, e.g., interior point methods, have been developed to solve generic convex problems in polynomial time.
In many engineering areas, optimization problems with a partially coupled structure arise. In particular, we consider programs where the objective can be expressed as a sum of functions that depend on disjoint sets of variables, which are additionally coupled by the problem constraints (e.g., [3–5]). The optimization of such programs is the topic addressed by decomposition methods [6] and a common strategy is to split the original problem into several smaller subproblems that are somehow coordinated until they reach the optimal solution. Additionally and as a byproduct, the resulting methods can deal more naturally with decentralized implementations [7, 8].
However, existing decomposition methods exhibit some drawbacks in practice. Roughly speaking, the convergence of the algorithms is in general slow (this can be appreciated, for instance, in the numerical examples of [6]) and, furthermore, it is necessary to manually adjust the step-size used in the successive updates of the algorithms. Since there is no universal rule to do that optimally, the performance of the methods is compromised [9]. In order to overcome these drawbacks, we introduce a novel technique, the coupled-decompositions method (CDM). It can be applied to decentralized implementations and, furthermore, due to its superior computational performance in terms of convergence speed, the new technique is also competitive when compared to well-established centralized methods.
In the following, we summarize the main contributions of this article: (i) the development of new interactions between the primal and dual domains in convex decomposition problems, (ii) the development of a new method based on these novel interactions for problems with a single coupling constraint, (iii) a convergence proof of the proposed method, (iv) further analysis of the method when it is applied to a subset of the problems of interest, and (v) numerical examples that show the benefits of having an unsupervised and efficient solution (in terms of both computational cost and convergence speed).
The remainder of the article is organized as follows. Section 2 formulates the type of problems that we deal with and it also reviews the classical decomposition techniques. Section 3 describes the proposed CDM and proves its convergence to the optimal solution whereas Section 4 provides further analysis on the proposed method when the problem is particularized. Finally, Section 5 presents numerical examples of the proposed method and Section 6 concludes the article.
2 Problem formulation and existing solutions
In this section, we first define the type of problems that we deal with throughout the text. Thereafter, the existing decomposition techniques in the literature are reviewed.
2.1 Problem formulation
Let us consider the following optimization problem,
with variables x_j \in \mathbb{R}^{n_j}. The functions f_j, h_j: \mathbb{R}^{n_j} \to \mathbb{R} are assumed convex and differentiable on the sets \mathcal{X}_j, which are also convex and compact. These sets are defined as \mathcal{X}_j = \{ x_j \mid g_j(x_j) \preccurlyeq 0 \}^{a} with g_j(x_j) = [g_j^1(x_j), \dots, g_j^{G_j}(x_j)]^T, where the functions g_j^k: \mathbb{R}^{n_j} \to \mathbb{R} are convex and differentiable. Therefore, Equation (1) defines a convex problem and if we further assume that its feasible region has nonempty relative interior, then strong duality holds.
Note that we may interpret (1) as the distribution of a quantity C of resources among J entities, where the j-th entity aims to set the values of the variables in x_j (constrained to lie in \mathcal{X}_j) in order to minimize the global cost function \sum_{j=1}^{J} f_j(x_j) without violating the coupling constraint \sum_{j=1}^{J} h_j(x_j) \le C. The presented formulation applies, among others, to fair dynamic bandwidth allocation (DBA) in point-to-multipoint networks [4], to problems related to multiple-input multiple-output design [3, 10, 11], or to problems related to OFDM system design [12, 13].
The problem in (1) is suitable for a dual decomposition approach and also for a primal decomposition if it is adequately reformulated. In the next sections, those classical solutions are reviewed.
2.2 Primal decomposition
Let us consider the following modified version of (1),
where we have introduced the coupling variables y = [y_1, …, y_J]^T. The subsets \mathcal{Y}_j \subset \mathbb{R} are defined as the images of \mathcal{X}_j under the functions h_j, i.e., h_j: \mathcal{X}_j \to \mathcal{Y}_j. Since the functions h_j are convex over \mathcal{X}_j and hence continuous, the subsets \mathcal{Y}_j are guaranteed to be compact ([14], Th. 5.2.2). Therefore, each \mathcal{Y}_j has both a minimum and a maximum.
In primal decomposition, we assume that the coupling variables are fixed to a given value \mathit{y}\in \mathcal{Y} (more details can be found in [15], Sec. 6.4.2). Then, the problem in (2) is solved as J independent problems in the variables x _{ j }. They are called the subproblems and they are expressed as
Interestingly, we know from ([15], Sec. 5.4.4) that −λ_j, i.e., minus the Lagrange multiplier associated with the constraint h_j(x_j) ≤ y_j, is in fact a subgradient^{b} of p_j at y_j.
Having defined the primal subproblems, we can rewrite (2) as
and (4) is referred to as the primal master problem. Note that since the subgradients of the primal subproblems are obtained at no cost, we can use a projected gradient approach ([15], Sec. 2.3) to solve the problem. In other words, the following recursion (k indexes iterations)
with s^k = [\lambda_1^*(y_1^k), \dots, \lambda_J^*(y_J^k)]^T and where [\cdot]^{‡} is the projection onto the feasible set (i.e., \{ y \mid y \in \mathcal{Y}, \sum_{j=1}^{J} y_j \le C \}), converges to y^*. The interested reader can find more details about primal decomposition in ([15], Sec. 6.4.2) and also in [6]. However, note that it is necessary to appropriately adjust the value of α^k in order to guarantee convergence to the desired solution [6, 9].
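To make the recursion in (5) concrete, the following minimal sketch applies it to a hypothetical instance with quadratic costs f_j(x_j) = (x_j − t_j)^2 and h_j(x_j) = x_j; the targets t_j, the constant step-size α, and the absence of box constraints on y are assumptions of the example, not part of the general formulation.

```python
# Primal decomposition with a projected subgradient update (a minimal sketch).
# Assumed toy instance: f_j(x_j) = (x_j - t_j)^2, h_j(x_j) = x_j, coupling sum(y) <= C.
# The subproblem p_j(y_j) = min_{x <= y_j} f_j(x) yields the multiplier
# lambda_j = max(0, 2*(t_j - y_j)), and -lambda_j is a subgradient of p_j at y_j.

def primal_decomposition(t, C, alpha=0.2, iters=500):
    J = len(t)
    y = [0.0] * J
    for _ in range(iters):
        # Solve the J primal subproblems (closed form under the assumptions above)
        lam = [max(0.0, 2.0 * (tj - yj)) for tj, yj in zip(t, y)]
        # Subgradient step y - alpha * s^k with s^k = [-lambda_1, ..., -lambda_J]
        y = [yj + alpha * lj for yj, lj in zip(y, lam)]
        # Euclidean projection onto the half-space sum(y) <= C
        excess = (sum(y) - C) / J
        if excess > 0.0:
            y = [yj - excess for yj in y]
    return y

y = primal_decomposition([3.0, 1.0, 2.0], C=3.0)  # approaches y* = [2, 0, 1]
```

Note that a valid step-size α must still be chosen by hand, which is precisely the drawback discussed above.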
2.3 Dual decomposition
Dual decomposition is the dualdomain alternative to primal decomposition. Let us compute the partial Lagrangian of (1) by means of relaxing only the coupling constraint
Clearly, the problem in (6) decouples into J independent problems, called the dual subproblems and defined as
Note that the dual subproblems are convex programs for μ ≥ 0 and that given a value of μ, the values of the variables in x _{ j } are found after solving the subproblems in (7) for j = 1,…,J, which can be computed in parallel. In particular, the optimal values of the primal variables, i.e., {x _{ j }}, are obtained from an optimal value of the dual variable, i.e., μ ^{∗}.
Using the dual subproblems, the dual master problem is written as
and, as in primal decomposition, a projected gradient approach can be applied ([15], Sec. 6.4.1) to finally get μ ^{∗}. The recursion is
where k indexes iterations and [a]^{+} = max{0, a}. As in primal decomposition, it can be shown that a subgradient of q_j at μ^k is readily found as^{c} h_j(x_j^*(μ^k)) once the dual subproblems are solved ([15], Sec. 6.1) and, therefore, a subgradient of q at μ^k is given by s^k = \sum_{j=1}^{J} h_j(x_j^*(μ^k)) − C. Finally, note that a user-defined step-size is also necessary in dual decomposition and, as we discuss later, this is a serious drawback of classical decomposition methods in practice.
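The dual recursion in (9) admits an analogous sketch for the same hypothetical quadratic instance (again an assumption for illustration, not part of the article's formulation); under it, the dual subproblems have the closed form x_j(μ) = t_j − μ/2.

```python
# Dual decomposition with a projected subgradient update (a minimal sketch).
# Assumed toy instance: f_j(x_j) = (x_j - t_j)^2, h_j(x_j) = x_j, coupling sum(x) <= C.

def dual_decomposition(t, C, alpha=0.5, iters=100):
    mu = 0.0
    x = list(t)
    for _ in range(iters):
        # Dual subproblems: argmin_x (x - t_j)^2 + mu*x  =>  x_j = t_j - mu/2
        x = [tj - mu / 2.0 for tj in t]
        # Subgradient of the dual function at mu: sum_j h_j(x_j(mu)) - C
        s = sum(x) - C
        # Projected subgradient update, [a]^+ = max(0, a)
        mu = max(0.0, mu + alpha * s)
    return mu, x

mu, x = dual_decomposition([3.0, 1.0, 2.0], C=3.0)  # mu -> 2, x -> [2, 0, 1]
```

As in the primal sketch, the convergence depends on the hand-tuned α, which the CDM removes.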
2.4 Primal–dual techniques
There is a long list of methods in the literature that are termed primal–dual but, to the best of our knowledge, the essentials of our proposed CDM have not been established previously. In general, all the reviewed methods suffer from (i) slow convergence to the optimal solution (which restricts the number of practical applications), (ii) no consideration of the separated nature of the problem (i.e., the techniques are not decomposition-based approaches), and/or (iii) no account of a decentralized implementation of the methods. On the contrary, all these aspects are addressed in the proposed CDM.
A first group of existing primal–dual techniques focuses on iteratively finding a saddle point of the Lagrangian, which is a convex and concave function of the primal and dual variables, respectively. Although these methods were not originally conceived from a decomposition perspective, they can be applied to the problems of interest in this article (and also implemented in a decentralized manner). Among these techniques, we find the classical work of Arrow et al. [16] or the more recent mean value cross (MVC) decomposition method [17, 18]. However, both techniques need to fix a step-size (explicit, or implicit as in the MVC decomposition method) and, as a consequence, their convergence speed is penalized in practice.
In a second group of techniques, we include all the possible combinations of classical primal and dual decompositions, as described in [6]. The idea in this case is to solve some parts of the problem with a primal decomposition approach while other parts are tackled by means of a dual decomposition. Therefore, these solutions do not consider full primal–dual interactions as in the proposed CDM, where each part considers both domains simultaneously. Furthermore, they also suffer from slow convergence due, in part, to the manually adjusted step-sizes. However, it is important to remark that in the last decade significant progress has been made in dual-decomposition-based solutions using smoothing [19] or path-following [20] strategies, improving the iteration count of the classical dual decomposition by an order of magnitude. Notwithstanding, these methods tackle problems with linear constraints and are not designed from a decentralized implementation perspective.
Finally, let us mention the primal–dual interior point methods ([2], Sec. 11.7) and their variants [21, 22] as the third group of primal–dual approaches. In this case, the basic idea is to iteratively solve the KKT conditions of the problem using numerical methods typically applied to systems of nonlinear equations, such as the Newton method. These techniques have received great attention during the past years due to their good performance in terms of convergence speed on generic convex problems. However, since they were not conceived to exploit the separability of the problem (when it exists), it is not straightforward to derive decentralized solutions from this third group of techniques (one of the goals of this article).
3 The CDM
In order to overcome the detected drawbacks, we design our CDM with the aim of (i) exploiting the primal and dual domains in convex optimization problems and (ii) simultaneously benefiting from the separability of the problem in order to derive decentralized solutions. Although there are solutions in the literature that exploit both solution domains, as discussed above, the development of a fast technique satisfying (i) and (ii) is still pending. In the following, we first describe our proposed CDM and, thereafter, we prove that the iterates of the method converge to the optimal solution.
3.1 Description of the method
The proposed method has four building blocks: the primal subproblems, the dual subproblems, the primal projection, and the dual projection. These blocks are connected as depicted in Figure 1 and, in what follows, we describe the actions taken at each step of the method and provide a summary of the technique in algorithmic form. Thereafter, the convergence of the successive updates of the CDM, i.e., μ^k, towards an optimal value of the dual variable, i.e., μ^*, is proved (and the same holds for the rest of the variables, both primal and dual).
3.1.1 Step 1: dual subproblems
From μ ^{k}, the primal value {y}_{j}^{k} is obtained after solving the following convex optimization problem in d _{ j }
Note that d_j(μ^k) coincides with (7) if we substitute y_j^k by h_j(x_j). Note also that λ_j^k, i.e., the dual variable associated with the constraint h_j(x_j) \le y_j^k, always takes the value μ^k. This can be checked using one of the KKT optimality conditions of the problem as follows
where L({\mathit{x}}_{j},{y}_{j}^{k},{\lambda}_{j}^{k},\dots ) stands for the Lagrangian function of the problem. The interested reader can find more details on the Lagrangian function as well as on the KKT optimality conditions of convex problems in ([2], Sec. 5.1, Sec. 5.5).
3.1.2 Step 2: primal projection
In the second step of the method, the values y_j^k from all the subproblems are grouped in y^k = [y_1^k, \dots, y_J^k]^T and projected onto the subset \mathcal{Y} \cap \{ y \mid \sum_i y_i = C \} if μ^k > 0 and onto the subset \mathcal{Y} \cap \{ y \mid \sum_i y_i \le C \} otherwise. Note that both projections force the values in \hat{y}^k to be feasible and that the choice of the projection subset depending on μ is in accordance with the complementary slackness condition μ(\sum_{j=1}^{J} y_j − C) = 0 of the problem.
In the most usual case, that is, for μ ^{k}> 0, the following convex problem has to be solved
which can be done semi-analytically as discussed in "Proof of Proposition 2" in Appendix.
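As an illustration of this step, if each \mathcal{Y}_j is assumed to be an interval [m_j, d_j] (an assumption of this sketch; the appendix treats the general semi-analytical solution), the Euclidean projection reduces to clipping a shifted copy of y^k, with the shift found by bisection.

```python
# Euclidean projection of y onto {sum(yhat) = C, m_j <= yhat_j <= d_j}
# (a sketch assuming each Y_j is an interval [m_j, d_j] with sum(m) <= C <= sum(d)).
# The KKT conditions give yhat_j = clip(y_j - nu, m_j, d_j), with the scalar nu
# chosen so that the coupling constraint holds with equality; since the total
# allocation is nonincreasing in nu, bisection finds it.

def primal_projection(y, m, d, C, tol=1e-10):
    def total(nu):
        return sum(min(max(yj - nu, mj), dj) for yj, mj, dj in zip(y, m, d))
    lo = min(yj - dj for yj, dj in zip(y, d))   # total(lo) = sum(d) >= C
    hi = max(yj - mj for yj, mj in zip(y, m))   # total(hi) = sum(m) <= C
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if total(mid) > C:
            lo = mid
        else:
            hi = mid
    nu = 0.5 * (lo + hi)
    return [min(max(yj - nu, mj), dj) for yj, mj, dj in zip(y, m, d)]

yhat = primal_projection([3.0, 1.0, 2.0], [0.0] * 3, [10.0] * 3, C=3.0)
```

In this example the shift converges to ν = 1, so the allocation [3, 1, 2] is corrected to [2, 0, 1].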
3.1.3 Step 3: primal subproblems
The j th primal subproblem is defined as
and it can be solved once \hat{y}_j^k is available. In this case, we are interested in the optimal value of the Lagrange multiplier associated with h_j(x_j) \le \hat{y}_j^k, that is, \hat{\lambda}_j^k. As later discussed in Section 3.4, step 4 of the method uses only the values of \hat{\lambda}_j^k that result from \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j, where \mathbf{bd}\, \mathcal{A} stands for the boundary of the subset \mathcal{A}. The selected values are then grouped in the list \{ \breve{\lambda}_j^k \}, which is the input of the dual projection. Note that if \sum_{j=1}^{J} \hat{y}_j^k = C, then the list is guaranteed to be nonempty, as shown in Proposition 3 (Section 3.4). Besides, it is important to solve the primal subproblem in (13) consistently with its dual version in (10). In other words, if \hat{y}_j^k is fixed in the j-th primal subproblem, then \hat{\lambda}_j^k (not necessarily unique) is accepted as valid only if d_j(\hat{\lambda}_j^k) gives y_j^k = \hat{y}_j^k.
3.1.4 Step 4: dual projection
If \sum_{j=1}^{J} \hat{y}_j^k = C, a new update of μ, i.e., μ^{k+1}, is obtained as the solution of the following optimization problem
In other words, μ^{k+1} takes the value in \{ \breve{\lambda}_j^k \} that is closest to μ^k. As discussed in Section 3.3, this is equivalent to setting μ^{k+1} = \min_j \{ \breve{\lambda}_j^k \} if μ^k < μ^* and μ^{k+1} = \max_j \{ \breve{\lambda}_j^k \} if μ^k > μ^*.
If \sum_{j=1}^{J} \hat{y}_j^k < C, then μ^{k+1} is set to 0, which is in accordance with the complementary slackness condition μ(\sum_{j=1}^{J} y_j − C) = 0.
3.2 The CDM in algorithmic form
Let us consider without loss of generality a decentralized implementation of the proposed method with a controller and J independent participants. Each participant is able to solve the corresponding primal and dual subproblems whereas the task of the controller is to compute the primal and dual projections. Note that both operations involve simple computations as discussed in the steps 2 and 4 above. The proposed CDM is then summarized in the following algorithm,
Choose an initial value for μ ^{0} and repeat

1.
The controller sends μ ^{k} to the participants, which compute d _{ j }(μ ^{k}) in (10) and return {y}_{j}^{k}.

2.
With y^k = [y_1^k, y_2^k, \dots, y_J^k], the controller computes \hat{y}^k using the primal projection (step 2 above) and sends \hat{y}_j^k to the participants if \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j.

3.
The participants compute p_j(\hat{y}_j^k) in (13) and return \hat{\lambda}_j^k to the controller.

4.
The controller sets μ^{k+1} to the received value that is closest to μ^k.
Until convergence.
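The four steps above can be sketched end to end for a hypothetical instance with f_j(x_j) = (x_j − t_j)^2, h_j(x_j) = x_j and \mathcal{Y}_j = [0, d_j], assuming \sum_j d_j \ge C; all closed forms in the code follow from these assumptions rather than from the general method.

```python
# One possible end-to-end sketch of the CDM loop for an assumed toy instance:
# f_j(x_j) = (x_j - t_j)^2, h_j(x_j) = x_j, Y_j = [0, d_j], coupling sum(y) <= C.

def clip(v, lo, hi):
    return min(max(v, lo), hi)

def cdm(t, d, C, iters=20, tol=1e-12):
    mu = 0.0
    yhat = []
    for _ in range(iters):
        # Step 1 (dual subproblems): y_j(mu) = argmin_{x in [0, d_j]} f_j(x) + mu*x
        y = [clip(tj - mu / 2.0, 0.0, dj) for tj, dj in zip(t, d)]
        # Step 2 (primal projection): bisection on the shift nu so that
        # sum_j clip(y_j - nu, 0, d_j) = C
        if mu > 0.0 or sum(y) > C:
            lo = min(yj - dj for yj, dj in zip(y, d))
            hi = max(y)
            while hi - lo > tol:
                nu = 0.5 * (lo + hi)
                if sum(clip(yj - nu, 0.0, dj) for yj, dj in zip(y, d)) > C:
                    lo = nu
                else:
                    hi = nu
            yhat = [clip(yj - hi, 0.0, dj) for yj, dj in zip(y, d)]
        else:
            yhat = y
        # Step 3 (primal subproblems): multiplier of h_j(x_j) <= yhat_j,
        # kept only when yhat_j lies in the interior of Y_j = [0, d_j]
        lam = [max(0.0, 2.0 * (tj - yj))
               for tj, yj, dj in zip(t, yhat, d) if 0.0 < yj < dj]
        # Step 4 (dual projection): keep the candidate price closest to mu
        if sum(yhat) < C - 1e-9:
            mu = 0.0
        elif lam:
            mu = min(lam, key=lambda l: abs(l - mu))
    return mu, yhat

mu, yhat = cdm([3.0, 1.0, 2.0], [10.0, 10.0, 10.0], C=3.0)  # mu -> 2, yhat -> [2, 0, 1]
```

Note that no step-size appears anywhere in the loop: the price update is taken directly from the multipliers returned by the primal subproblems, which is the self-adjusting behavior claimed for the CDM.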
3.3 Resource–price interpretation
Often in convex optimization, primal variables are interpreted as resources and dual variables as prices to be paid for them. In the sequel, we revisit the proposed technique under this resource–price perspective. Initially, a global price μ ^{k} is fixed and sent to the parts. Given that price, the parts estimate the amount of resources they want to buy. Intuitively, there will be a deficit of resources (a total request over C) if the price is too low and an excess if it is too high. In both cases, the primal projection corrects the allocation in order to distribute all the available resources among the parts. However, there is no guarantee that the distribution follows a common market law. In order to correct the situation, the primal subproblems estimate the price to be paid for the new resource allocation and, in case the individual prices differ, the dual projection fixes a new common price μ ^{k + 1} in order to advance towards a consensus price μ ^{∗}.
3.4 Proof of the method
Before proving that the successive updates of the proposed method converge to the optimal solution, let us establish the relationship between primal and dual variables in the subproblems with the following proposition.
Proposition 1. Take the j-th primal subproblem p_j in (13) and the j-th dual subproblem d_j in (10) of the CDM. Then, the following two statements hold: (i) \hat{\lambda}_j^k(\hat{y}_j^k) is nonincreasing in \hat{y}_j^k in (13) and (ii) y_j^k(μ^k) is nonincreasing in μ^k in (10).
Proof. See “Proof of Proposition 1” in Appendix. □
Next, the goal is to verify that the primal and dual projections effectively coordinate the subproblems towards the optimal solution. Let us assume, without loss of generality, that the initial guess is μ^0 = 0, so that μ^0 ≤ μ^*. From that value, the CDM starts by solving the dual subproblems in (10) in order to obtain y^0. As a result, there are two possibilities, namely, (i) \sum_j y_j^0 \le C and (ii) \sum_j y_j^0 > C. In the first situation, μ^0 as well as y^0 and the corresponding values in {x_j} are optimal. Note that the subproblems are in this case decoupled and, therefore, the individual optimization carried out in the dual subproblems is globally optimal, too. For the sake of brevity, we do not discuss the outputs of the subsequent steps and iterations of the method, but it can be checked that the solution remains unaltered, as expected. In the second case, μ^0 = 0 is clearly nonoptimal and in the sequel we show how the successive updates μ^k converge to an optimal value of the dual variable, that is, μ^* > 0.
Let us revisit then a complete iteration of the method starting at the dual subproblems in (10) with μ^k < μ^*, which holds at least for k = 0. Since y_j^k is a nonincreasing function of μ^k in the j-th dual subproblem (see Proposition 1), μ^k < μ^* and y_j^k(μ^*) = y_j^*, it is true that y_j^k ≥ y_j^*. Moreover, if we take into account that \sum_{j=1}^{J} y_j^* = C (we are considering the case where the optimal solution is coupled), we can establish that \sum_{j=1}^{J} y_j^k > C unless y^k = y^*.
Thereafter, it is verified in the second step of the method (primal projection) that \hat{y}^k \preccurlyeq y^k (\hat{y}_j^k < y_j^k for some j), according to Proposition 2 below.
Proposition 2. Given the optimization problem in (12) and y^k ≽ y^* (y^k ≠ y^*), its optimal solution can be expressed as \hat{y}^k = y^k − r with r ≽ 0 (r_j > 0 for some j).
Proof. See “Proof of Proposition 2” in Appendix. □
In the third step of the method, the j-th primal subproblem defined in (13) computes the individual price \hat{\lambda}_j^k, and the list of individual prices \{ \hat{\lambda}_j^k \} is constructed with the values obtained from the J independent subproblems, indexed by j = 1,…,J. Note, however, that our main interest is not in the prices \hat{\lambda}_j^k but in finding a global consensus price μ^*. Fortunately, if we come back to the problem definition in (2), we notice that there is a dependence between the dual variable associated with the constraint h_j(x_j) ≤ y_j, i.e., λ_j, and the dual variable associated with the constraint \sum_{j=1}^{J} y_j \le C, i.e., μ (in terms of the proposed algorithm, \hat{y}_j^k, \hat{\lambda}_j^k, and μ^k play the role of y_j, λ_j, and μ, respectively). This dependence motivates the selection of some of the values in the list \{ \hat{\lambda}_j^k \} in our algorithm. To be more specific, the value \hat{\lambda}_j^k is chosen if the corresponding primal variable \hat{y}_j^k satisfies \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j, as discussed next.
Let us first write the Lagrangian of the problem in (2), that is
where the set of convex functions q_j(y_j) with associated Lagrange multipliers ψ_j defines the subset \mathcal{Y}_j. From the Lagrangian function, we derive some of the KKT optimality conditions of the convex optimization problem, since the optimal values of the variables form a saddle point of this function. In particular, let us consider the following condition
that reveals
This equality is not very useful in general, nor from an algorithmic point of view, because the values of the multipliers in ψ_j are unknown. However, we can make use of the following complementary slackness conditions ([2], Sec. 5.5.2) of the problem, compactly written as ψ_j ⊙ q_j(y_j) = 0,^{d} and observe that if y_j \notin \mathbf{bd}\, \mathcal{Y}_j then q_j(y_j) ≺ 0 and, consequently, ψ_j = 0. In that case, the link between μ and λ_j is clear,
Back in the algorithm, this result motivates the use of \hat{\lambda}_j^k only if it is derived from \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j, and so a new list \{ \breve{\lambda}_j^k \} that contains all these suitable dual values is constructed. Besides, it is necessary to guarantee that the new list \{ \breve{\lambda}_j^k \} is nonempty or, equivalently, that after the primal projection at least one value in \hat{y}^k satisfies \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j. This is the result of Proposition 3 below.
Proposition 3. Let \hat{y}^k = y^k − r (\hat{y}^k ≠ y^*) be a primal point resulting from the primal projection of the CDM, with the value of r ≽ 0 suitable to fulfill \sum_{j=1}^{J} \hat{y}_j^k = C, \hat{y}^k \in \mathcal{Y}. Then, at least one value in \{ \hat{y}_j^k \} verifies \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j and also \hat{y}_j^k > y_j^*.
Proof. See “Proof of Proposition 3” in Appendix. □
Finally, we need to prove that the last step of the method, i.e., the dual projection in (14), is able to find an update of the global price μ from the list \{ \breve{\lambda}_j^k \} such that μ^k → μ^* as k → ∞. Since we have assumed that primal and dual subproblems are reciprocal in the sense that they agree on the values of the dual variables \hat{\lambda}_j^k and λ_j^k when y_j^k = \hat{y}_j^k (see Section 3.1, step 3), a consequence is that \breve{\lambda}_j^k(y_j^*) computed in (13) equals λ_j^* = μ^*, just as y_j^k(μ^*) = y_j^* in (10). Note that we have intentionally written \breve{\lambda}_j^k instead of \hat{\lambda}_j^k because our focus is only on the primal subproblems with \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j, which ensures λ_j = μ according to (18). Additionally, the following two claims can be made: (i) all the values in \{ \breve{\lambda}_j^k \} satisfy \breve{\lambda}_j^k ≥ μ^k and (ii) at least one value in the list verifies \breve{\lambda}_j^k ≤ μ^*. The first statement uses Proposition 1 and, in particular, that \hat{\lambda}_j^k (or \breve{\lambda}_j^k equivalently) is a nonincreasing function of \hat{y}_j^k in the primal subproblems. Recalling that p_j(y_j^k) in (13) would produce λ_j^k as inner Lagrange multiplier and that λ_j^k = μ^k according to (11), it is true that \breve{\lambda}_j^k ≥ μ^k since \hat{y}_j^k ≤ y_j^k (as a result of the primal projection).
The second statement is verified in a similar manner, taking into account that at least one value in \{ \hat{y}_j^k \} verifies \hat{y}_j^k \notin \mathbf{bd}\, \mathcal{Y}_j and also \hat{y}_j^k > y_j^* (see Proposition 3). Since \breve{\lambda}_j^k(y_j^*) = λ_j^* = μ^* in the j-th primal subproblem, Proposition 1 establishes that \breve{\lambda}_j^k ≤ μ^*.
Figure 2a explains the effects of the three steps of the CDM graphically from the dual-domain point of view. Each bar represents an entity (J in total) and a point in that bar indicates the value of the dual variable λ_j^k or \hat{\lambda}_j^k. The higher the point, the higher the value. At the beginning of the k-th iteration, the dual subproblems enforce λ_j^k = μ^k ∀j and translate these dual values into the primal variables in y^k. Immediately after the primal projection, the corrected values in \hat{y}^k are converted back into dual variables, i.e., \{ \hat{\lambda}_j^k \}. In the figure, we can appreciate the effect of the primal projection on the Lagrange multipliers of interest. In short, we notice that (i) all values increase and (ii) there is at least one value below μ^*.
The role of the dual projection in (14) is then to obtain the update μ^{k+1} by selecting the value closest to μ^k from the list \{ \breve{\lambda}_j^k \}, that is, μ^{k+1} = \min_j \{ \breve{\lambda}_j^k \} if μ^k < μ^*, as depicted in Figure 2b. Together with the previous results, i.e., \breve{\lambda}_j^k ≥ μ^k and \breve{\lambda}_j^k ≤ μ^*, the new update verifies μ^{k+1} ∈ [μ^k, μ^*] and thus our initial hypothesis (μ^k < μ^*) is also satisfied in the next iteration unless μ^{k+1} = μ^*. Therefore, successive iterations yield μ^k → μ^* as k → ∞ and, accordingly, \hat{y}^k → y^* as k → ∞. This concludes the proof of the proposed method.
4 Convergence rate analysis and stopping criterion
This section provides additional insights into the proposed CDM by means of the following particularization of (1),
where the variables in {x_j} as well as the subsets in \{ \mathcal{X}_j \} are unidimensional. To be precise, not all the problems that can be formulated as in (19) are considered in the following convergence analysis, but only those with the following dependence between the primal variable y_j and the dual variable λ_j in the subproblems of the CDM; this case is still interesting, as usual problems in the literature exhibit that relationship (see some examples in Section 5),
In the general case, the relationship between y_j and λ_j can be established, again thanks to the KKT optimality conditions of the problem. Therefore, let us construct the Lagrangian of (19), that is,
and consider the following optimality condition,
where \dot{f} and \dot{h} stand for the first derivatives of the functions f and h, respectively. Note that if x_j \notin \mathbf{bd}\, \mathcal{X}_j then ξ_j = 0 due to complementary slackness and
Moreover, if the constraint h (x _{ j }) ≤ y _{ j } is satisfied with equality (the usual case as we consider coupled problems) then {x}_{j}={h}_{j}^{-1}\left({y}_{j}\right) and the relationship between λ _{ j } and y _{ j } is established.
Finally, as we show in Section 5, (20) is found for common functions f and h appearing in usual problems. Furthermore, the convergence rate of the proposed method can be derived assuming (20) and a stopping criterion that enhances the performance of the CDM can be designed. These two issues are developed in the following subsections.
4.1 Convergence rate analysis
In order to find out the convergence rate of the proposed method, let us compare the value of (μ ^{k})^{− α}−(μ ^{∗})^{− α} in two successive iterations, i.e., k and k + 1. First, let us classify the optimal primal variables \left\{{y}_{j}^{\ast}\right\} into three groups: {\mathcal{I}}^{\ast} includes the indexes j corresponding to the variables that satisfy {y}_{j}^{\ast}=\text{inf}{\mathcal{Y}}_{j}, {\mathcal{S}}^{\ast} embraces the indexes where {y}_{j}^{\ast}=\text{sup}{\mathcal{Y}}_{j} and finally, {\mathcal{A}}^{\ast} contains the remaining indexes, i.e., those associated to {y}_{j}\notin \mathbf{bd}\phantom{\rule{2.77695pt}{0ex}}{\mathcal{Y}}_{j}. Using (20) and recalling the optimality condition {\lambda}_{j}^{\ast}={\mu}^{\ast} seen in (11), it is true that
where {m}_{j}=\text{inf}{\mathcal{Y}}_{j} and {d}_{j}=\text{sup}{\mathcal{Y}}_{j}. Assuming that \sum _{j=1}^{J}{y}_{j}^{\ast}=C is fulfilled, we get
For any other value μ ^{k}≠ μ ^{∗} we define
where the subsets {\mathcal{A}}^{k}, {\mathcal{I}}^{k}, and {\mathcal{S}}^{k} are defined likewise {\mathcal{A}}^{\ast}, {\mathcal{I}}^{\ast}, and {\mathcal{S}}^{\ast} but refer to the indexes of the variables in \left\{{y}_{j}^{k}\right\}.
Let us assume μ ^{k}< μ ^{∗} and let us obtain \left\{{y}_{j}^{k}\right\} from (26). Clearly, since (μ ^{k})^{−α}> (μ ^{∗})^{−α}, it holds that {y}_{j}^{k}\ge {y}_{j}^{\ast}\phantom{\rule{1em}{0ex}}\forall j. As a result of the primal projection in (12), now with the objective modified by the weighting matrix W= diag(1/a _{1},…,1/a _{ J }), i.e., minimizing {∥{\mathit{W}}^{1/2}({\mathit{y}}^{k}-{\widehat{\mathit{y}}}^{k})∥}^{2}, the corrected {\u0177}_{j}^{k} values can be expressed as
for a value of K > 0 to be determined. The proof is very similar to the case W= I in “Proof of Proposition 1” in the Appendix and the convergence of the method is not affected. We use this weighted projection in this particularized version because it offers better performance; in the general case we had no means to find a better weighting matrix than the identity.
At the third step of the method, i.e., the dual subproblems, the reduced list \left\{{\stackrel{\u0306}{\lambda}}_{j}^{k}\right\} is obtained from the values {\u0177}_{j}^{k} in (27) with j\in {\mathcal{A}}^{k}\cup {\mathcal{S}}^{k}. In other words, reversing (20) we find
Finally, in the dual projection we select the minimum value in \left\{{\stackrel{\u0306}{\lambda}}_{j}^{k}\right\}, which is in this case the closest to μ ^{k}given μ ^{k}< μ ^{∗} because μ ^{k + 1}∈[μ ^{k},μ ^{∗}] (see Section 3.3),
or equivalently,
since \frac{{d}_{j}-{b}_{j}}{{a}_{j}} is always lower than (μ ^{k})^{− α}, as otherwise the index j would belong to {\mathcal{A}}^{k}. Note in (27) that the definition of the subsets {\mathcal{A}}^{k} and {\mathcal{S}}^{k} implies d _{ j }< a _{ j }(μ ^{k})^{−α}+ b _{ j }.
Using the previous results, we can state that
This can be further refined if K is developed using (27) and \sum _{j=1}^{J}{\u0177}_{j}^{k}=C,
Particularly, note that the expression within brackets in (32) is exactly (μ ^{∗})^{−α} when the subsets {\mathcal{A}}^{k}, {\mathcal{I}}^{k} and {\mathcal{S}}^{k} coincide with the optimal ones. We say that the algorithm is in the optimal zone when the sets ({\mathcal{A}}^{k}, {\mathcal{I}}^{k},{\mathcal{S}}^{k}) coincide with ({\mathcal{A}}^{\ast}, {\mathcal{I}}^{\ast},{\mathcal{S}}^{\ast}).
Finally, we can conclude that the speed of convergence within the optimal zone obeys the following rule, which is obtained by plugging (32) into (31),
In other words, (μ ^{k})^{−α} converges linearly to (μ ^{∗})^{−α} except when {\mathcal{S}}^{\ast}=\varnothing, in which case it shows superlinear convergence. Alternatively, if the initial hypothesis is μ ^{0}> μ ^{∗}, the convergence is also linear except for {\mathcal{I}}^{\ast}=\varnothing, in which case it is superlinear. Note in both cases that since (1) and (19) are assumed coupled problems, {\mathcal{A}}^{\ast}\ne \varnothing.
4.2 Stopping criterion
The previous convergence rule in (33) can be used to define a stopping criterion for the CDM. It is based on the particular evolution followed by μ ^{k} inside the optimal zone. For that purpose, let us take three consecutive values of μ, i.e., μ ^{k}, μ ^{k + 1}, and μ ^{k + 2}, all of them in the optimal zone. The successive application of (33) leads to
From (34) it is verified that
and therefore, in the optimal zone, the left side of (35) is a constant number regardless of k. From the practical point of view and thanks to this result, we can monitor the evolution of
and stop the iterations when S C ^{k} stabilizes to a constant value. Afterwards, the optimal solution is readily obtained since at that point we know which allocations saturate to either m _{ j } or d _{ j } and the exact value of μ ^{∗} can be computed by means of (25).
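As a minimal sketch of this stopping rule, assuming (consistently with (34)–(35)) that S C ^{k} is the ratio of successive increments of (μ ^{k})^{−α}, one can monitor the quantity below; the function and the synthetic dual sequence are illustrative, not the paper's code:

```python
import numpy as np

def sc_ratios(mu_hist, alpha):
    """Ratios of successive increments of mu^(-alpha).

    Inside the optimal zone the convergence rule (33) makes this ratio
    constant, so the iterations can be stopped once it stabilizes.
    """
    g = np.asarray(mu_hist, dtype=float) ** (-alpha)
    d = np.diff(g)
    return d[1:] / d[:-1]

# Synthetic dual sequence with linear convergence in the mu^(-alpha)
# domain: (mu_k)^(-1) = 5 + 2 * 0.5^k, so every ratio equals 0.5.
mu = [1.0 / (5.0 + 2.0 * 0.5**k) for k in range(8)]
ratios = sc_ratios(mu, alpha=1.0)
```

Once the monitored ratio settles to a constant, the active sets are fixed and the exact μ ^{∗} can be computed in closed form as the text describes.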
4.3 Graphical comparison among decomposition techniques
In the sequel, we include a graphical comparison among decomposition techniques and the goal is to highlight the manner in which the different methods operate in essence. We do this with the support of the following toy optimization problem
where we have included the variables y _{ i } to match the formulation of the proposed CDM and of a primal decomposition as well. In Figure 3, we compare our proposed method to the classical decomposition techniques. In all the cases, the feasibility region of the problem in terms of the variables y _{1},y _{2} is marked in grey. Also, the contour lines of the objective function (centered at c= [c _{1},c _{2}]^{T}) are represented in the plots (even though the objective function actually depends on x _{1},x _{2} rather than on y _{1},y _{2}).
As depicted in the figure, a primal decomposition approach updates y ^{k}by adding the subgradient to the point and, if the result is not feasible, a projection corrects the situation by finding the closest point in the feasible set. In the figure, note that arrows represent subgradients and dashed lines projections. In this way, the successive projections tend to the optimal solution, i.e., y ^{∗}. Next, let us consider a dual decomposition approach. In order to analyze it from the perspective of the primal variables, we need to establish first the relationship between y _{ j }and λ _{ j } and also between λ _{ j } and μ. For that purpose, we consider again the Lagrangian of the problem, that is
For the case {x}_{i}\in (0,{x}_{i}^{\mathit{\text{max}}}),i=1,2, and y _{ i }= x _{ i }, the dual variables ψ _{ i } and ξ _{ i } satisfy ψ _{ i }= ξ _{ i }= 0,i = 1,2, due to complementary slackness (note that {y}_{i}^{\mathit{\text{max}}}={x}_{i}^{\text{max}},i=1,2). Under those conditions, the KKT optimality condition ∂ L/∂ x _{ i }= 0 forces
Furthermore, if we take into account that y _{ i }= x _{ i } in the case of interest and λ _{ i }= μ due to the KKT optimality condition ∂ L/∂ y _{ i }= 0, then we verify that
The dashed line in Figure 3 shows all the points that can be obtained by changing the value of μ in (40). Note in particular that for μ = 0 the optimum of the unconstrained version of (38) is achieved. Using the dual decomposition technique (in the figure we start with μ ^{0} = 0), the successive updates move along this line (the orientation and module defined by the subgradient) until the optimal solution is achieved.
In the two classical decomposition approaches, primal and dual decomposition, a good choice of the stepsize that modifies the length of the subgradient plays a central role, and it is recommended to choose a value that diminishes with the iteration number in order to prevent the successive updates from moving indefinitely around the optimal solution without reaching it. This issue is in fact an important practical impairment of both solutions. Notwithstanding, the method we propose in this article avoids the usage of a user-defined stepsize. As the reader can appreciate in the figure, once the initial guess y ^{0} derived from μ ^{0} = 0 is projected onto the feasible subset, the proposed method finds several candidates to update μ ^{0} to μ ^{1} (two in this case, i.e., {\stackrel{\u0306}{\lambda}}_{1}^{1} and {\stackrel{\u0306}{\lambda}}_{2}^{1}). These two candidates provide the possible updates {\mathit{y}}^{1}\left({\stackrel{\u0306}{\lambda}}_{1}^{1}\right) and {\mathit{y}}^{1}\left({\stackrel{\u0306}{\lambda}}_{2}^{1}\right), and the method always chooses the dual candidate that provides the smallest possible update. This operation guarantees that the primal update, i.e., y ^{1} in this case, remains in the same half-space (with respect to the frontier y _{1} + y _{2} = C). Note with this simple example the interesting feature of the proposed method in comparison with the other techniques, that is, the step is automatically controlled.
5 Applications and numerical results
In the sequel, we present three different applications of the proposed method: the first two are related to power allocation problems and the third one deals with dynamic bandwidth allocation (DBA) in satellite networks. In the first problem, a decentralized solution is required to reduce the amount of signaling information. In the second one, a centralized implementation of the CDM is used to solve a time-varying waterfilling problem and the aim is to show the benefits of having an unsupervised method under such changing conditions. Finally, the third example shows the advantages of the proposed technique when a small allocation time is required in order to accommodate a large number of users.
5.1 Decentralized power allocation for cognitive radios
Let us consider a communication device that is able to establish simultaneous communication links by joining several networks or using multiple channels within the same system (e.g., this is possible in IEEE 802.11n). To do so, the device integrates multiple radio transceivers [23, 24] which, in turn, operate over multiple subchannels or subcarriers in order to combat the multipath fading (see Figure 4). We assume that the device can sense the wireless channel and determine the unused subcarriers in each subsystem, as is usual in cognitive scenarios. Furthermore, each transceiver is able to optimally allocate the available power among its subcarriers using the waterfilling solution. This is advantageous from the system design point of view because we can employ off-the-shelf radio transceivers and simply balance the device power among them. Finally, there is a central controller that performs the distribution task, the global objective being to maximize the total sum-rate capacity. Note that, depending on the signal strength and capacity in each subsystem, some of the transceivers may remain temporarily idle.
The problem can be formulated for M radio transceivers as
where r _{ i }(P _{ i }) is the transmission rate of the i th transceiver when power P _{ i } is allocated to it and P _{ T } is the total available power. Each of the transmission rates is actually the result of another optimization problem, that is,
where {N}_{s}^{i} is the number of subcarriers of the i th radio, B W _{ i } stands for subcarrier bandwidth, N _{0} is the noise power spectral density, and {H}_{j}^{i} is the channel gain at the j th subcarrier of the i th transceiver.
Note that given the separability of the problem, i.e., there are many independent transceivers coupled by a total power constraint, a decentralized optimization method is adequate both from a mathematical and a practical point of view. In this approach, the controller decides the total power per transceiver and each subsystem computes its own optimal allocation. The application of the CDM to solve (41) is briefly detailed next.
Given μ ^{k}< μ ^{∗}, each transceiver computes the following dual subproblem,
and the application of the KKT conditions gives the solution
As a result of the dual subproblems we obtain {\mathit{P}}^{k}={[{P}_{1}^{k},\dots ,{P}_{M}^{k}]}^{T} and we use it as the input for the primal projection, that is,
where {\widehat{\mathit{P}}}^{k}={[{\widehat{P}}_{1}^{k},\dots ,{\widehat{P}}_{M}^{k}]}^{T}.
The corrected values {\widehat{P}}_{i}^{k} are then used in the primal subproblems defined in (42) and each transceiver computes the optimal allocation on its own. As a result of the primal subproblems, the Lagrange multipliers associated with the constraints \sum _{j=1}^{{N}_{s}^{i}}{p}_{i,j}\le {\widehat{P}}_{i}^{k} at the k th iteration, i.e., {\widehat{\lambda}}_{i}^{k}, are obtained. Discarding the values that result from {\widehat{P}}_{i}^{k}=0, we obtain the reduced list \left\{{\stackrel{\u0306}{\lambda}}_{i}^{k}\right\} and finally, the dual projection updates μ ^{k} from \left\{{\stackrel{\u0306}{\lambda}}_{i}^{k}\right\} by doing
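The four steps just described (dual subproblems, primal projection, primal subproblems, dual projection) can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: rates are taken in nats with unit bandwidth, channel gains are absorbed into effective noise levels {n}_{i,j}, and the helper names `project_simplex_eq` and `waterfill` are our own:

```python
import numpy as np

def project_simplex_eq(v, total):
    """Euclidean projection of v onto {x >= 0, sum(x) = total}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - total) / idx > 0)[0][-1]
    theta = (css[rho] - total) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def waterfill(noise, budget):
    """Water-fill `budget` over channels with noise levels `noise`.
    Returns the per-channel powers and the water level w such that
    sum_j (w - noise_j)^+ = budget."""
    n = np.sort(np.asarray(noise, dtype=float))
    for k in range(len(n), 0, -1):
        w = (budget + n[:k].sum()) / k
        if w > n[k - 1]:          # exactly the k best channels are active
            break
    return np.maximum(w - noise, 0.0), w

def cdm_iteration(noises, P_T, mu):
    """One CDM iteration for the multi-transceiver problem (schematic)."""
    # 1) dual subproblems: each transceiver water-fills with level 1/mu
    P = np.array([np.maximum(1.0 / mu - n, 0.0).sum() for n in noises])
    # 2) primal projection onto the total-power constraint
    P_hat = project_simplex_eq(P, P_T)
    # 3) primal subproblems: water-fill each budget; the multiplier of the
    #    per-transceiver power constraint is 1/w; idle radios are discarded
    lams = [1.0 / waterfill(n, b)[1] for n, b in zip(noises, P_hat) if b > 0]
    # 4) dual projection: the minimum candidate (closest to mu when mu < mu*)
    return min(lams), P_hat

# Toy run: two transceivers with effective per-subcarrier noise levels
noises = [np.array([0.5, 1.0]), np.array([2.0])]
mu = 0.1                                   # initial guess below mu*
for _ in range(5):
    mu, P_hat = cdm_iteration(noises, 2.0, mu)
```

In this toy run the second transceiver is idle at the optimum, so the dual variable settles at the global water-level multiplier after very few iterations.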
A completely different approach is to gather all the information at the controller and to compute there the optimal power allocation. Afterwards, the result is sent back to the transceivers. Note that this centralized solution has an important drawback in terms of signaling because the powers in all the subcarriers and all the transceivers need to be exchanged. On the contrary, decentralized solutions benefit from transceiverlevel signaling. In the numerical results below, we compare the CDM to other approaches.
5.1.1 Numerical results
We consider a device with three different OFDM transmitters. The first transmitter employs 256 subcarriers spanning a total bandwidth of 1.536 MHz (6 kHz per subchannel), the second one has 256 subcarriers as well and 3.072 MHz of bandwidth (12 kHz per subchannel), and the third one manages 128 subcarriers in 1.28 MHz of bandwidth (10 kHz per subchannel), so that a total of 640 subcarriers and 5.888 MHz have to be controlled (see Table 1). We assume frequency-selective Rayleigh-fading channels in all three systems with a channel length of 20 taps and an exponential power delay profile where the delay spread is 1 ms. The mean channel gain is 0 dB in system 1, −10 dB in system 2, and −5 dB in system 3. Moreover, we assume that the noise power spectral density is flat over frequency with {N}_{0}={\sigma}_{n}^{2}/B{W}_{1}, where B W _{1} is the subcarrier bandwidth in system 1. Initially, we set up a uniform power allocation in all the methods and the total available power is always {P}_{T}\left(\text{dB}\right)={\sigma}_{n}^{2}\left(\text{dB}\right)+10\phantom{\rule{0.3em}{0ex}}{\text{log}}_{10}\left(640\right)+5.
Figure 5 shows a multisystem waterfilling allocation example. The plot at the top depicts one channel realization for the three systems whereas the plot at the bottom shows the optimal power allocation. As expected, most of the power is allocated to transceivers 1 and 3, which are the ones that have the best channel condition. On the contrary, subsystem 2 only allocates power to a few subcarriers that have the highest channel gains. Notwithstanding, in absolute terms, transceiver 2 receives quite a large allocation in order to exploit the higher subcarrier bandwidth.
Figure 6 shows the evolution of the Normalized Mean Squared Error (NMSE) in the power allocation with respect to the number of messages exchanged between the transmission subsystems and the central controller. The optimal power allocation is computed using the bisection method (relative error below 10^{−5}). We compare the proposed CDM to the classical primal and dual decomposition techniques, the classical primal–dual algorithm of Arrow et al. [16], and also to a centralized approach. The classical decomposition techniques use {\alpha}^{k}=1/\sqrt{k} as stepsize and the Arrow–Hurwicz method initializes the dual variable μ to 0, initializes the primal variables with a uniform power allocation, and fixes the stepsize to 0.1. On the one hand, results show that the proposed CDM is the best option, whereas the remaining alternatives require at least twice the amount of signaling in order to achieve the same allocation error. On the other hand, note that the classical decomposition techniques as well as the primal–dual approach are penalized in terms of convergence speed even though we have manually adjusted the stepsize of each method in order to achieve the best possible result. Finally, note also that a centralized approach is not efficient at all, since the allocation error becomes small enough only when the entire allocation has been transmitted. This requires 640 messages in our case to send the channel gains to the controller and 640 more to return the optimal power allocation values to the radios.
5.2 Power allocation in a conventional OFDM transmission
In the following, we apply our method to a classical waterfilling problem where a decentralized solution is not necessary. On this occasion, we are interested in the adaptability of the method in time-varying scenarios.
Let us consider the wellknown singleuser waterfilling solution over parallel Gaussian channels ([25], Sec. 10.4), which provides the optimal power allocation to the subcarriers of an OFDMbased system in order to maximize the mutual information given a total power constraint. Mathematically,
where N _{ s } is the total number of subcarriers or parallel channels in the system, P is the total transmission power, {\sigma}_{{n}_{i}}^{2} is the noise variance in the i th subcarrier and p _{ i } stands for the allocated power. The application of the KKT optimality conditions to (47) leads to the solution
where (a)^{+} = max {0,a} and \frac{1}{\mu} is denoted as the waterlevel and shall be chosen in order to satisfy the total power constraint. Typically, the bisection method is employed to find μ ^{∗}. However, note that (47) can be rewritten in the form of (2) and also (19). Therefore, we can apply the proposed CDM as well. Indeed, (48) and the relationship in (20) match if we identify p _{ i } with y _{ i } and μ with λ _{ i } (remember that the required relationship applies only to {y}_{i}\notin \mathbf{bd}\phantom{\rule{2.77695pt}{0ex}}{\mathcal{Y}}_{i}, that is, y _{ i }= p _{ i }> 0).
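As a baseline for comparison, the bisection search for μ ^{∗} mentioned above can be sketched as follows (a generic illustration with our own names and bracketing choices, not the paper's implementation):

```python
import numpy as np

def waterfilling_bisection(sigma2, P, iters=60):
    """Solve (48): p_i = (1/mu - sigma2_i)^+ with sum_i p_i = P,
    by bisection on the water level w = 1/mu (the total allocated
    power is monotonically increasing in w)."""
    sigma2 = np.asarray(sigma2, dtype=float)
    lo, hi = 0.0, sigma2.max() + P          # bracket for the water level
    for _ in range(iters):
        w = 0.5 * (lo + hi)
        if np.maximum(w - sigma2, 0.0).sum() > P:
            hi = w                          # too much power: lower the level
        else:
            lo = w                          # too little power: raise the level
    p = np.maximum(w - sigma2, 0.0)
    return p, 1.0 / w                       # allocation and mu = 1/w

# Three parallel channels, total power 2: water level 1.75, mu = 1/1.75
p, mu = waterfilling_bisection([0.5, 1.0, 2.0], 2.0)
```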
5.2.1 Numerical results
Let us assume N _{ s }= 512 subcarriers. The channel is time-varying and frequency selective with 20 taps. The power delay profile is assumed exponential with a delay spread of 1 ms and the baseband sampling time is 1 μ s. We compare now the proposed CDM to the bisection method and also to the classical primal–dual algorithm in [16]. It is remarkable that the CDM requires no modification at all (it is completely unsupervised) and the same holds for the primal–dual algorithm. On the contrary, the bisection method requires a slight modification to be able to track the time-varying scenario. For that purpose, we introduce the updating factor α _{ u }. Initially, the method is applied as usual, that is, having the initial hypotheses {\mu}_{l}^{0} and {\mu}_{u}^{0} (two values that are below and above μ ^{∗}, respectively), we compute {\mu}^{1}=1/2({\mu}_{l}^{0}+{\mu}_{u}^{0}) and we update {\mu}_{l}^{1} to μ ^{1} if \sum _{i=1}^{{N}_{s}}{p}_{i}\left({\mu}^{1}\right)>P or {\mu}_{u}^{1} to μ ^{1} otherwise. In the subsequent iterations, given that the channel is time-varying, we need to check first if {\mu}_{l}^{k} and {\mu}_{u}^{k} are still valid. If \sum _{i=1}^{{N}_{s}}{p}_{i}\left({\mu}_{l}^{k}\right)>P is not accomplished, we update {\mu}_{l}^{k} to \frac{{\mu}_{l}^{k}}{{\alpha}_{u}} and we repeat this until \sum _{i=1}^{{N}_{s}}{p}_{i}\left({\mu}_{l}^{k}\right)>P holds. Similarly, if \sum _{i=1}^{{N}_{s}}{p}_{i}\left({\mu}_{u}^{k}\right)<P is not attained, we modify {\mu}_{u}^{k} to {\alpha}_{u}\xb7{\mu}_{u}^{k} and we repeat this until \sum _{i=1}^{{N}_{s}}{p}_{i}\left({\mu}_{u}^{k}\right)<P holds. Then, we compute {\mu}^{k+1}=1/2({\mu}_{l}^{k}+{\mu}_{u}^{k}) and we update the hypotheses accordingly, as in the normal version of the technique.
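The tracking modification just described can be sketched as follows, where `excess(mu)` stands for \sum _{i}{p}_{i}(μ)−P evaluated on the current channel state (an illustrative helper name, not the paper's code):

```python
import numpy as np

def tracked_bisection(excess, mu_l, mu_u, alpha_u=1.05, iters=50):
    """Modified bisection for a time-varying channel: the bounds
    (mu_l, mu_u) carried over from the previous channel state are
    first re-validated (expanded by the updating factor alpha_u),
    then the usual bisection on mu is applied."""
    while excess(mu_l) <= 0:      # need sum p(mu_l) > P, i.e., mu_l < mu*
        mu_l /= alpha_u
    while excess(mu_u) >= 0:      # need sum p(mu_u) < P, i.e., mu_u > mu*
        mu_u *= alpha_u
    for _ in range(iters):
        mu = 0.5 * (mu_l + mu_u)
        if excess(mu) > 0:
            mu_l = mu
        else:
            mu_u = mu
    return mu, mu_l, mu_u

# New channel state: the old bounds no longer bracket mu* and must expand
sigma2 = np.array([0.5, 1.0, 2.0])
excess = lambda mu: np.maximum(1.0 / mu - sigma2, 0.0).sum() - 2.0
mu, _, _ = tracked_bisection(excess, mu_l=0.8, mu_u=0.9)
```

The returned bounds can be carried over to the next allocation round, which is what makes the modified bisection track a slowly varying channel.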
Figure 7 plots the NMSE of the power allocation for both methods as a function of the mean SNR. As in the previous application example, we compute the optimal power allocation using the bisection method (relative error below 10^{−5}). Moreover, all the algorithms are initialized to the optimal solution for the current channel condition, α _{ u }= 1.05 in the bisection technique, the stepsize is fixed to 0.001 in the Arrow–Hurwicz method and we have considered three different channel velocities, namely, (i) {T}_{c}^{1}=10\xb7{T}_{\text{CDM}}, (ii) {T}_{c}^{2}=100\xb7{T}_{\text{CDM}}, and (iii) {T}_{c}^{3}=1000\xb7{T}_{\text{CDM}}, where T _{CDM} is the time taken by one complete iteration of the CDM and {T}_{c}^{i} is the coherence time of the channel at the i th scenario. Note that we have manually adjusted α _{ u } in the bisection method and the stepsize in the primal–dual algorithm in order to achieve the best possible performance at the worst channel condition, that is, when the channel coherence time is the smallest one, i.e., {T}_{c}^{1}.
Results show that the CDM usually outperforms the bisection method and it is far better than the primal–dual algorithm. Indeed, it performs worse than the bisection only for {T}_{c}^{1} and at low SNR. Note that since the CDM has no userdefined parameter, it automatically adapts to the different channel velocities. On the contrary, this adaptation does not occur in the other two methods. This is reflected in Figure 7, where, for example, the bisection method saturates to an NMSE around 10^{−4} for {T}_{c}^{1}, {T}_{c}^{2}, and {T}_{c}^{3} as the SNR grows.
5.3 Fair DBA
The fair DBA problem arises in many-to-one communication systems [26, 27] and the goal is to fairly distribute the available bandwidth. In many cases, and especially in systems with a huge number of users [28], the computational cost of the techniques plays an important role. Additionally, let us remark that recent works on the topic aim at providing mechanisms for QoS differentiation [4, 29] that modify a plain fair allocation. Therefore, we consider the following network utility maximization (NUM) formulation to solve a fair DBA problem,
where B is the available bandwidth, r _{ j } is the rate allocated to the j th flow, and U _{ j } is the j th utility function (the terms bandwidth and rate are used interchangeably). The parameters m _{ j }, d _{ j } (with 0 ≤ m _{ j }< d _{ j }), and p _{ j }> 0 are used to define the QoS requirements for each ongoing connection and they represent the minimum necessary rate, the required (maximum) bandwidth and the priority of the j th flow, respectively. Furthermore, we assume that \sum _{j}{m}_{j}<B<\sum _{j}{d}_{j}, i.e., the problem is coupled. As argued before, the utility functions can adequately be chosen in order to achieve a fair distribution of resources in different degrees. The following family of functions parameterized by γ
defines different types of fairness, with γ → ∞ (max–min fairness) and γ = 1 (proportional fairness) being the most relevant ones [29].
Note that (49) can be rewritten in the form of (19) and in particular, the problem is strictly convex and we assume that strong duality holds, i.e., there is at least one strictly feasible point. Therefore, we can apply the KKT optimality conditions to solve (49) semianalytically. In this case, the optimal rates must verify^{e}
and the optimal value of μ is such that \sum _{j=1}^{N}{r}_{j}^{\ast}\left({\mu}^{\ast}\right)=B. The bisection method is a classical technique widely used in the literature in order to approximate μ ^{∗} but, alternatively, we can also apply the enhanced version of the CDM. Specifically, by adding the new variables {y _{ j }}, identifying f _{ j }(r _{ j }) with − U _{ j }(r _{ j }) and h _{ j }(r _{ j }) = r _{ j }, and combining (23) with {r}_{j}={h}_{j}^{-1}\left({y}_{j}\right), (51) turns into
when m _{ j }< y _{ j }< d _{ j } and has the required form in (20). Therefore, once the subsets {\mathcal{S}}^{\ast}, {\mathcal{I}}^{\ast}, and {\mathcal{A}}^{\ast} are known, the optimal value of μ is readily found according to (25) as
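Although the closed-form display following (25) is not reproduced here, the resulting allocation can be sketched via a bisection on μ as below. We assume that the γ-fair utilities in (50) give U _{ j }′(r) = p _{ j } r ^{−γ}, hence the unclipped solution of (52) is r _{ j }= (p _{ j }/μ)^{1/γ}; this is an illustration, not the paper's code:

```python
import numpy as np

def fair_dba(m, d, p, B, gamma=1.0, iters=200):
    """gamma-fair DBA: r_j(mu) = clip((p_j/mu)^(1/gamma), m_j, d_j),
    with mu chosen so that sum_j r_j = B (r is decreasing in mu)."""
    m, d, p = (np.asarray(a, dtype=float) for a in (m, d, p))
    rate = lambda mu: np.clip((p / mu) ** (1.0 / gamma), m, d)
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        mu = np.sqrt(lo * hi)      # geometric bisection: mu spans decades
        if rate(mu).sum() > B:
            lo = mu                # allocation too large: increase mu
        else:
            hi = mu
    return rate(mu), mu

# Proportional fairness (gamma = 1) with no active min/max constraints:
# r_j = p_j / mu and sum r_j = B force mu = 1, r = [1, 4]
r, mu = fair_dba(m=[0, 0], d=[10, 10], p=[1.0, 4.0], B=5.0)
```

Once μ ^{∗} is bracketed, the saturated sets are known and the exact multiplier follows from the closed-form expression referenced in the text.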
5.3.1 Numerical results
Let us draw the values of m _{ j } from an integer uniform distribution between 0 and 10. Each request d _{ j } is obtained summing m _{ j } and an integer random number between 0 and 100. The priority values p _{ j } are drawn from a uniform distribution that takes values between 0.25 and 5 in steps of 0.25 and γ = 1. Figure 8 plots the mean allocation time, i.e., execution time, of the CDM when centrally computed in combination with the stopping criterion defined in Section 4.2. The algorithm has been executed on an Intel ^{Ⓡ} Core 2 Duo CPU running at 2.2 GHz and programmed in Matlab ^{Ⓡ}. We have considered three different values for the total available bandwidth, namely B1=\sum _{j}{m}_{j}+0.25\sum _{j}{d}_{j}, B2=\sum _{j}{m}_{j}+0.5\sum _{j}{d}_{j}, and B3=\sum _{j}{m}_{j}+0.75\sum _{j}{d}_{j}. The results of the CDM have been compared to the classical bisection method and to the hypothesis testing method [30]. Since the allocation time is not sensitive to the available capacity for the latter methods, in Figure 8 we distinguish among B 1, B 2, and B 3 only for the CDM.
In order to provide a fair comparison among the methods, it is necessary to take into account the accuracy with respect to the optimal solution. The hypothesis testing strategy always achieves the exact optimal solution (see the details in [30]). The bisection method has been adjusted to achieve a relative error in the allocation lower than 10^{−6}, or in other words,
where r ^{∗} is the optimal allocation (which can be obtained with the hypothesis testing method) and r ^{BI} is the allocation achieved by the bisection method. Initially, the two hypotheses for the value of μ are 0 and 10. In the CDM, we stop the iterations when
Note that as the number of users grows, the difference in time between the proposed method and the others also grows, especially when the system is more restricted in terms of capacity, i.e., for B 1. In this case, the CDM is able to compute the allocation in half the time required by classical methods. In terms of accuracy of the solution, (55) gives the exact optimal solution for B 1, B 2 and a relative allocation error lower than 10^{−4} for B 3. Overall, the solution is good in practice; it is optimal in capacity-constrained scenarios and nearly optimal in less critical situations. In order to illustrate the selection of the threshold for the stopping criterion, we plot in Figure 9 the evolution of the relative error and allocation time as a function of the accuracy in the stopping criterion. Note that a threshold of 10^{−2} provides a good tradeoff between both performance metrics for the worst scenario, i.e., for B 3. Finally, if we consider a higher available bandwidth, e.g., 99% of the whole system demand, this threshold value keeps the allocation time as small as in B 3 at the expense of a higher allocation error (around 5%). However, the accuracy degradation appears only in this extreme case and it is not critical in practice since all the users nearly reach their demands.
6 Conclusions and future work
This article has contributed novel decomposition ideas that efficiently intertwine the classical primal and dual decomposition approaches in a single iteration of a new technique, called the CDM. It solves generic convex optimization problems that have one coupling constraint with the known advantages of decomposition-based approaches, that is, the implementation of decentralized solutions. However, it reduces the number of iterations by more than one order of magnitude with respect to the classical primal and dual decomposition solutions and, furthermore, it is completely unsupervised, that is, there is no parameter that requires a manual adjustment. Moreover, when the problem is particularized (but still of interest), additional results regarding the convergence rate of the proposed technique are obtained and a stopping criterion that enhances the performance of the method (in terms of the number of iterations required to achieve the optimal solution) is derived.
The proposed method has been tested in three different problems, two dealing with power allocation in OFDM-based systems and a third one dealing with DBA. In the first two cases, the goal is to find the well-known waterfilling solution in power. In one case, we benefit from a decentralized approach that suits the system architecture whereas in the other case, the proposed method is applied to a conventional OFDM transmission that deals with a time-varying channel. In both examples, we have compared our solution to other decomposition strategies and our approach performs significantly better than the available alternatives when a decentralized solution is required. In particular, our results show that the signaling requirements can be reduced at least by a factor of 2. Moreover, in the centralized application of the method, that is, the conventional OFDM transmission, the proposed method benefits from being unsupervised and the channel variability does not compromise the performance as it does for classical methods. In particular, at the high-SNR regime, the difference in the NMSE of the power allocation between the proposed method and the bisection is more than two orders of magnitude in all the explored scenarios.
Finally, when applied to a NUM problem and thanks to the enhanced version of the method, the proposed technique reduces the computational time by a factor of 2 with respect to well-established techniques such as the bisection method. This reduction is very important in systems having a large number of users (as happens in satellite communication networks), where the bandwidth allocation has to be computed in a short time interval.
Appendix
Proof of Proposition 1
For the sake of simplicity, in what follows we obviate the iteration index k as well as the modifier \widehat{\left(\xb7\right)} in {\u0177}_{j}^{k}, {\widehat{\lambda}}_{j}^{k}, and μ ^{k}. Let us consider first the dependence of λ _{ j } on y _{ j } in the primal subproblems. Interestingly, the function p _{ j } in (13) has already been studied in the convex optimization literature and it is known as the primal function ([15], Sec. 5.4.4). We recall here two main results related to the primal function: (i) p _{ j } defines a convex function over the set {\mathcal{P}}_{j}=\left\{{y}_{j}\phantom{\rule{0.3em}{0ex}}|\phantom{\rule{0.3em}{0ex}}{p}_{j}\left({y}_{j}\right)<\infty \right\} and (ii) the optimal value of the dual variable associated with the constraint h _{ j }(x _{ j }) ≤ y _{ j }, taken with opposite sign, i.e., -{\lambda}_{j}^{\ast}\left({y}_{j}\right), is a subgradient of p _{ j } at y _{ j }. In our case, note that {\mathcal{Y}}_{j}\subseteq {\mathcal{P}}_{j}\phantom{\rule{1em}{0ex}}\forall j and thus, these results can be applied to (13). Furthermore, since p _{ j } is convex in the region of interest, it is guaranteed to be continuous although not necessarily differentiable. However, given that the objective function as well as all the constraints in the problem (a finite number of them) are differentiable, for each singular point in {\mathcal{Y}}_{j} we can find an open interval that includes it where p _{ j } is differentiable (except of course at the singular point itself). Then, taking into account that the first derivative of a convex function is nondecreasing by definition and noting that the subgradient equals the gradient where the function is differentiable, we obtain the desired result.
In other words, -\lambda_j^*(y_j) is nondecreasing in the intervals where p_j is differentiable, simply because there the subgradient and the first derivative coincide, whereas at the singular points it takes a value in between the left and right derivatives, thus preserving the nondecreasing property. Finally, by removing the minus sign, we can state that \lambda_j^*(y_j) is nonincreasing in y_j.
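As an illustration (a toy subproblem, not taken from the article), the monotonicity of \lambda_j^*(y_j) can be checked numerically: with p(y) = \min_x (x-2)^2 s.t. x ≤ y, the optimal multiplier equals minus the derivative of p wherever p is differentiable, so a finite-difference estimate of it should be nonincreasing in y:

```python
import numpy as np

# Toy primal function: p(y) = min_x (x - 2)^2 s.t. x <= y, whose minimizer
# is x* = min(y, 2), so p(y) = (min(y, 2) - 2)^2 with a kink at y = 2.
def p(y):
    return (min(y, 2.0) - 2.0) ** 2

# The optimal multiplier lam*(y) is (minus) a subgradient of p at y; away
# from the kink we can estimate it by a central difference.
def lam_star(y, h=1e-6):
    return -(p(y + h) - p(y - h)) / (2.0 * h)

ys = np.linspace(0.0, 4.0, 81)
ys = ys[np.abs(ys - 2.0) > 0.06]          # skip the nondifferentiable point
lams = np.array([lam_star(y) for y in ys])
assert np.all(np.diff(lams) <= 1e-6)      # lam*(y) is nonincreasing in y
assert np.all(lams >= -1e-9)              # and nonnegative, as expected
```

Here the closed form is lam*(y) = max(0, 2(2 − y)), which is indeed nonincreasing, in agreement with the first part of Proposition 1.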
Second, we prove that y_j(μ) is nonincreasing in μ in (10). Let us first rewrite (10) as
Then take any two values of μ, say μ_1 and μ_2, that satisfy: (i) 0 ≤ μ_1 < μ_2 and (ii) there exist two values in \mathcal{Y}_j, y_{j,1}^* and y_{j,2}^*, such that d_j(μ_1) = p_j(y_{j,1}^*) + μ_1 y_{j,1}^* and d_j(μ_2) = p_j(y_{j,2}^*) + μ_2 y_{j,2}^*. In other words, y_{j,1}^* and y_{j,2}^* are minimizers of d_j(μ_1) and d_j(μ_2), respectively. Now, since y_{j,1}^* is not necessarily a minimizer of d_j(μ_2), we can establish the following inequality,
Next, we proceed by contradiction, assuming y_{j,2}^* > y_{j,1}^*. In this case, it is true that (i) (μ_2 - μ_1) y_{j,2}^* > (μ_2 - μ_1) y_{j,1}^* since μ_2 - μ_1 > 0 and (ii) p_j(y_{j,2}^*) + μ_1 y_{j,2}^* \ge p_j(y_{j,1}^*) + μ_1 y_{j,1}^* since y_{j,1}^* is a minimizer of d_j(μ_1). Finally, observations (i) and (ii) together contradict the inequality in (57) and therefore y_{j,2}^* must necessarily satisfy y_{j,2}^* \le y_{j,1}^*. This proves our second statement.
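The claim can also be sketched numerically on a toy primal function (an illustration, not from the article): with p(y) = (min(y, 2) − 2)^2 on \mathcal{Y} = [0, 4], the minimizer y^*(μ) of p(y) + μy should be nonincreasing as μ grows:

```python
import numpy as np

# Toy primal function p(y) = (min(y, 2) - 2)^2, evaluated on a fine grid
# over the interval Y = [0, 4].
ygrid = np.linspace(0.0, 4.0, 4001)
pvals = (np.minimum(ygrid, 2.0) - 2.0) ** 2

# y*(mu) = argmin_y p(y) + mu*y, computed by exhaustive search on the grid.
def y_star(mu):
    return ygrid[np.argmin(pvals + mu * ygrid)]

mus = np.linspace(0.0, 6.0, 61)
ystars = np.array([y_star(mu) for mu in mus])
assert np.all(np.diff(ystars) <= 1e-12)   # y*(mu) is nonincreasing in mu
```

For this toy case the closed form is y^*(μ) = max(0, 2 − μ/2), which is nonincreasing in μ, consistent with the second part of Proposition 1.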
Proof of Proposition 2
Let us apply the KKT optimality conditions corresponding to (12). The Lagrangian is
where \mathbf{y}_{\min} = [\min\mathcal{Y}_1, \dots, \min\mathcal{Y}_J] and \mathbf{y}_{\max} = [\max\mathcal{Y}_1, \dots, \max\mathcal{Y}_J] and, if we look at the optimality condition \partial L / \partial \hat{y}_j^k = 0, we get
Therefore, assuming that the j-th optimal primal value, i.e., \hat{y}_j^{k,*}, lies inside the interval (\min\mathcal{Y}_j, \max\mathcal{Y}_j), then β_j = α_j = 0 due to complementary slackness and \hat{y}_j^k = y_j^k - μ (with \mu \in \mathbb{R}). If this is not the case, then either \hat{y}_j^{k,*} = \min\mathcal{Y}_j or \hat{y}_j^{k,*} = \max\mathcal{Y}_j. In both cases, note that we can choose an adequate value of the free dual multiplier α_j or β_j, respectively, in order to satisfy (59). Finally, taking these results into account, we can conclude that the solution to the primal projection is
where μ is adjusted to satisfy \sum_{j=1}^{J} \hat{y}_j^k = C. Since \mathbf{y}^k ≽ \mathbf{y}^* (and thus \sum_{j=1}^{J} y_j^k > C unless y_j^k = y_j^* \;\forall j) and \sum_{j=1}^{J} y_j^* = C, it is necessary that μ > 0.
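In other words, the primal projection amounts to clipping y_j − μ to the box [\min\mathcal{Y}_j, \max\mathcal{Y}_j], with a single multiplier μ enforcing the sum constraint. A minimal sketch (illustrative names; bisection on μ is one of several valid ways to find it, exploiting that the clipped sum is nonincreasing in μ):

```python
import numpy as np

# Projection of y onto {sum(yhat) = C, ymin <= yhat <= ymax}: per the
# proposition, yhat_j = clip(y_j - mu, ymin_j, ymax_j), with mu > 0 found
# by bisection on g(mu) = sum(yhat(mu)) - C, which is nonincreasing in mu.
# Assumes sum(ymin) <= C <= sum(y) so that a root with mu >= 0 exists.
def primal_projection(y, ymin, ymax, C, tol=1e-10):
    def g(mu):
        return np.sum(np.clip(y - mu, ymin, ymax)) - C
    lo, hi = 0.0, np.max(y - ymin)        # g(lo) >= 0 >= g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return np.clip(y - 0.5 * (lo + hi), ymin, ymax)

y    = np.array([3.0, 2.5, 1.0])          # infeasible point: sum = 6.5 > C
ymin = np.array([0.5, 0.5, 0.5])
ymax = np.array([4.0, 4.0, 4.0])
yhat = primal_projection(y, ymin, ymax, C=5.0)
assert abs(yhat.sum() - 5.0) < 1e-6       # sum constraint met
assert np.all(yhat <= y + 1e-9)           # each component can only decrease
```

In this example the bisection converges to μ = 0.5 and \hat{y} ≈ [2.5, 2.0, 0.5], with the third component clipped at its lower bound, matching the structure stated above.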
Proof of Proposition 3
From Proposition 2, nonoptimal values y that attain \sum_{j=1}^{J} y_j > C have their components diminished by the primal projection in order to achieve \sum_{j=1}^{J} \hat{y}_j = C, unless y_j = \min\mathcal{Y}_j, in which case \hat{y}_j = y_j. Let us distinguish two subsets of variables: \mathcal{I} includes the indexes of the values y_j that attain y_j = \min\mathcal{Y}_j, and \bar{\mathcal{I}} the rest. Note that \bar{\mathcal{I}} contains exactly the indexes of the \{\breve{\lambda}_j\} candidates used in the dual projection. Then it is true that
and therefore,
since the equality constraints \sum_{j=1}^{J} \hat{y}_j = \sum_{j=1}^{J} y_j^* = C are always fulfilled. This fact ensures that there is at least one value \hat{y}_j with j \in \bar{\mathcal{I}} that attains \hat{y}_j > y_j^*, unless all the values are already optimal.
Endnotes
^{a} Notation: ≼, ≺, ≽, and ≻ stand for componentwise inequalities.
^{b} The vector s is a subgradient of the function f: \mathbb{R}^n \to \mathbb{R} at x \in \mathbb{R}^n if f(y) \ge f(x) + (y - x)^T s, \;\forall y \in \mathbb{R}^n. If f is differentiable at x, the subgradient s and the gradient ∇f(x) coincide. Otherwise, there exist many subgradients.
^{c} The notation x_j^*(μ^k) stands for the optimal solution of the j-th subproblem given μ^k.
^{d} Notation: ⊙ stands for the vector elementwise product. If a = [a_1, a_2, …, a_N]^T and b = [b_1, b_2, …, b_N]^T, then a ⊙ b = [a_1·b_1, a_2·b_2, …, a_N·b_N]^T.
^{e} Notation: [a]_{m_j}^{d_j} equals a if m_j < a < d_j, m_j if a ≤ m_j, and d_j if a ≥ d_j.
References
Bertsekas D, Nedić A, Ozdaglar A: Convex Analysis and Optimization. Belmont, MA, USA: Athena Scientific; 2003.
Boyd S, Vandenberghe L: Convex Optimization. Cambridge, UK: Cambridge University Press; 2004.
Palomar D, Cioffi J, Lagunas M: Joint Tx-Rx beamforming design for multicarrier MIMO channels: a unified framework for convex optimization. IEEE Trans. Signal Process 2003, 51(9):2381-2401. 10.1109/TSP.2003.815393
Yaïche H, Mazumdar R, Rosenberg C: A game theoretic framework for bandwidth allocation and pricing in broadband networks. IEEE/ACM Trans. Netw 2000, 8(5):667-678. 10.1109/90.879352
Xiao L, Johansson M, Boyd S: Simultaneous routing and resource allocation via dual decomposition. IEEE Trans. Commun 2004, 52(7):1136-1144. 10.1109/TCOMM.2004.831346
Palomar D, Chiang M: Alternative decompositions for distributed maximization of network utility: framework and applications. IEEE Trans. Autom. Control 2007, 52(12):2254-2269.
Tan CW, Palomar D, Chiang M: Distributed optimization of coupled systems with applications to network utility maximization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toulouse; May 2006:981-984.
Liao S, Cheng W, Liu W, Yang Z, Ding Y: Distributed optimization for utility-energy tradeoff in wireless sensor networks. In IEEE International Conference on Communications (ICC). Glasgow; June 2007:3190-3194.
Xiao L, Boyd S: Optimal scaling of a gradient method for distributed resource allocation. J. Optim. Theory Appl. (JOTA) 2006, 129(3):469-488.
Palomar D, Fonollosa J: Practical algorithms for a family of waterfilling solutions. IEEE Trans. Signal Process 2005, 53(2):686-695.
Palomar DP, Bengtsson M, Ottersten B: Minimum BER linear transceivers for MIMO channels via primal decomposition. IEEE Trans. Signal Process 2005, 53(8):2866-2882.
Scaglione A, Barbarossa S, Giannakis GB: Optimal adaptive precoding for frequency-selective Nakagami-m fading channels. In IEEE 52nd Vehicular Technology Conference (VTC Fall 2000). Boston; September 2000:1291-1295.
Marqués AG, Digham FF, Giannakis GB: Optimizing power efficiency of OFDM using quantized channel state information. IEEE J. Sel. Areas Commun 2006, 24(8):1581-1592.
Arkhangel’skii A, Fedorchuk V: The Basic Concepts and Constructions of General Topology. In General Topology, I, Encyclopedia of the Mathematical Sciences. New York: Springer; 1990.
Bertsekas D: Nonlinear Programming. Belmont, MA, USA: Athena Scientific; 1999.
Arrow KJ, Hurwicz L, Uzawa H: Iterative Methods in Concave Programming. In Studies in Linear and Nonlinear Programming. Palo Alto: Stanford University Press; 1958:154-165.
Holmberg K, Kiwiel K: Mean value cross decomposition for nonlinear convex problems. Optim. Methods Softw 2006, 21(3):401-417. 10.1080/10556780500098565
Holmberg K: Primal and Dual Decomposition as Organizational Design: Price and/or Resource Directive Decomposition. In Design Models for Hierarchical Organizations: Computation, Information, and Decentralization. Dordrecht: Kluwer Academic Publishers; 1995:61-92.
Necoara I, Suykens JAK: Application of a smoothing technique to decomposition in convex optimization. IEEE Trans. Autom. Control 2008, 53(11):2674-2679.
Dinh QT, Necoara I, Savorgnan C, Diehl M: An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization. SIAM J. Optim 2013, 23(1):95-125. 10.1137/11085311X
Tseng P, Bertsekas D: On the convergence of the exponential multipliers method for convex programming. Math. Program 1993, 60(1-3):1-19. 10.1007/BF01580598
Polyak R: Primal–dual exterior point method for convex optimization. Optim. Methods Softw 2008, 23(1):141-160. 10.1080/10556780701363065
Akyildiz I, Mohanty S, Xie J: A ubiquitous mobile communication architecture for next-generation heterogeneous wireless systems. IEEE Commun. Mag 2005, 43(6):S29-S36.
Wang D, Miao K, John V, Rungta S, Chan W: Considering wireless mesh network with heterogeneous multiple radios. In IEEE WiCom. Shanghai; September 2007:1681-1684.
Cover T, Thomas J: Elements of Information Theory. New York: Wiley; 1991.
IEEE: Air Interface for Fixed and Mobile Broadband Wireless Access Systems; Amendment 2: Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands and Corrigendum 1. IEEE Standards; 2006.
ETSI: Digital Video Broadcasting (DVB); Interaction Channel for Satellite Distribution Systems. ETSI EN 301 790; 2005.
Acar G, Rosenberg C: Weighted fair bandwidth-on-demand (WFBoD) for geostationary satellite networks with on-board processing. Comput. Netw 2002, 39(1):5-20. 10.1016/S1389-1286(01)00295-X
Mo J, Walrand J: Fair end-to-end window-based congestion control. IEEE/ACM Trans. Netw 2000, 8(5):556-567. 10.1109/90.879343
Seco-Granados G, Vazquez-Castro M, Morell A, Vieira F: Algorithm for fair bandwidth allocation with QoS constraints in DVB-S2/RCS. In Proceedings of the IEEE Global Telecommunication Conference (GLOBECOM). San Francisco, USA; November 2006:1-5.
Acknowledgements
This work is supported by the Spanish Government under project TEC2011-28219 and the Catalan Government under grant 2009 SGR 298.
Author information
Additional information
Competing interests
The authors declare that they have no competing interests.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Morell, A., Vicario, J.L. & Seco-Granados, G. Coupled-decompositions: exploiting primal–dual interactions in convex optimization problems. EURASIP J. Adv. Signal Process. 2013, 41 (2013). https://doi.org/10.1186/1687-6180-2013-41
DOI: https://doi.org/10.1186/1687-6180-2013-41
Keywords
 Power Allocation
 Dual Variable
 Normalized Mean Square Error
 Optimal Power Allocation
 Primal Decomposition