
Coupled-decompositions: exploiting primal–dual interactions in convex optimization problems

Abstract

Decomposition techniques implement the so-called "divide and conquer" approach in convex optimization problems, primal and dual decompositions being the two classical schemes. Although both achieve the goal of splitting the original program into several smaller problems (called the subproblems), these techniques exhibit in general a slow speed of convergence. This is a limiting factor in practice and, in order to circumvent this drawback, we develop in this article the coupled-decompositions method. As a result, the number of iterations can be reduced by more than one order of magnitude. Furthermore, the new technique is self-adjustable, i.e., it does not depend on user-defined parameters, as opposed to what happens with classical strategies. Given that in signal processing applied to communications and networking we usually deal with a variety of problems that exhibit certain coupling structures, our method is useful to design decentralized as well as centralized optimization schemes with advantages over the existing techniques in the literature. In this article, we present three different resource allocation problems where the proposed method is successfully applied.

1 Introduction

Convex optimization theory [1, 2] has provided in the last decades a powerful framework to solve optimization problems in many distinct areas. Besides the numerous applications in the signal processing literature, examples can also be found in topics such as filter design, machine learning, or finance, among others. This great success has been motivated by the facts that (i) convex optimization provides relevant insights into each specific problem, thanks to a mature theoretical framework, (ii) some problems can be solved analytically or semi-analytically applying the so-called Karush–Kuhn–Tucker (KKT) optimality conditions, and (iii) efficient numerical methods, e.g., interior point methods, have been developed to solve generic convex problems in polynomial time.

In many engineering areas, optimization problems with a partially coupled structure arise. In particular, we consider programs where the objective can be expressed as a sum of functions that depend on disjoint sets of variables, which are additionally coupled by the problem constraints (e.g., [3–5]). The optimization of such programs is the topic addressed by decomposition methods [6] and a common strategy is to split the original problem into several smaller subproblems that are somehow coordinated until they reach the optimal solution. Additionally and as a by-product, the resulting methods can deal more naturally with decentralized implementations [7, 8].

However, existing decomposition methods exhibit some drawbacks in practice. Roughly speaking, the speed of convergence of the algorithms is in general slow (this can be appreciated, for instance, in the numerical examples of [6]) and furthermore, it is necessary to manually adjust the step-size used in the successive updates of the algorithms. Since there is no universal rule to do that optimally, the performance of the methods is compromised [9]. In order to overcome these drawbacks, we introduce a novel technique, the coupled-decompositions method (CDM). It can be applied to decentralized implementations and furthermore, due to its superior computational performance in terms of convergence speed, the new technique is also competitive when compared to well-established centralized methods.

In the following, we summarize the main contributions of this article: (i) development of new interactions between the primal and dual domains in convex decomposition problems, (ii) development of a new method based on these novel interactions for problems with a single coupling constraint, (iii) convergence proof of the proposed method, (iv) further analysis of the method when it is applied to a subset of the problems of interest, and (v) presentation of numerical examples that show the benefits of having an unsupervised and efficient solution (in terms of both computational cost and convergence speed).

The remainder of the article is organized as follows. Section 2 formulates the type of problems that we deal with and it also reviews the classical decomposition techniques. Section 3 describes the proposed CDM and proves its convergence to the optimal solution whereas Section 4 provides further analysis on the proposed method when the problem is particularized. Finally, Section 5 presents numerical examples of the proposed method and Section 6 concludes the article.

2 Problem formulation and existing solutions

In this section, we first define the type of problems that we deal with throughout the text. Thereafter, the existing decomposition techniques in the literature are reviewed.

2.1 Problem formulation

Let us consider the following optimization problem,

$$\min_{\{x_j\}} \; \sum_{j=1}^{J} f_j(x_j) \quad \text{s.t.} \quad x_j \in \mathcal{X}_j, \; j = 1, \dots, J, \qquad \sum_{j=1}^{J} h_j(x_j) \le C$$
(1)

with variables $x_j \in \mathbb{R}^{n_j}$. The functions $f_j, h_j : \mathbb{R}^{n_j} \to \mathbb{R}$ are assumed convex and differentiable in the sets $\mathcal{X}_j$, which are also convex and compact. These sets are defined as $\mathcal{X}_j = \{x_j \,|\, g_j(x_j) \preceq 0\}$ with $g_j(x_j) = [g_j^1(x_j), \dots, g_j^{G_j}(x_j)]^T$, where the functions $g_j^k : \mathbb{R}^{n_j} \to \mathbb{R}$ are convex and differentiable. Therefore, Equation (1) defines a convex problem and, if we further assume that its feasible region has a non-empty relative interior, then strong duality holds.

Note that we may interpret (1) as the distribution of a quantity $C$ of resources among $J$ entities, where the $j$th entity aims to set the values of the variables in $x_j$ (constrained to lie in $\mathcal{X}_j$) in order to minimize the global cost function $\sum_{j=1}^{J} f_j(x_j)$ without exceeding the coupling constraint $\sum_{j=1}^{J} h_j(x_j) \le C$. The presented formulation applies, among others, to fair dynamic bandwidth allocation (DBA) in point-to-multipoint networks [4], to problems related to multiple-input multiple-output design [3, 10, 11], or to problems related to OFDM system design [12, 13].

The problem in (1) is suitable for a dual decomposition approach and also for a primal decomposition if it is adequately reformulated. In the next sections, those classical solutions are reviewed.

2.2 Primal decomposition

Let us consider the following modified version of (1),

$$\min_{\{x_j\},\, y} \; \sum_{j=1}^{J} f_j(x_j) \quad \text{s.t.} \quad x_j \in \mathcal{X}_j, \; j = 1, \dots, J, \quad h_j(x_j) \le y_j, \; j = 1, \dots, J, \quad \sum_{j=1}^{J} y_j \le C, \quad y \in \mathcal{Y}, \; \mathcal{Y} = \mathcal{Y}_1 \times \dots \times \mathcal{Y}_J$$
(2)

where we have introduced the coupling variables $y = [y_1, \dots, y_J]^T$. The subsets $\mathcal{Y}_j \subset \mathbb{R}$ are defined as the images of $\mathcal{X}_j$ through the functions $h_j$, i.e., $h_j : \mathcal{X}_j \to \mathcal{Y}_j$. Since the functions $h_j$ are convex over $\mathcal{X}_j$ and thus continuous, the subsets $\mathcal{Y}_j$ are guaranteed to be compact ([14], Th. 5.2.2). Therefore, each $\mathcal{Y}_j$ has both a minimum and a maximum.

In primal decomposition, we assume that the coupling variables are fixed to a given value $y \in \mathcal{Y}$ (more details can be found in [15], Sec. 6.4.2). Then, the problem in (2) is solved as $J$ independent problems in the variables $x_j$. These are called the subproblems and they are expressed as

$$p_j(y_j) = \min_{x_j} \; f_j(x_j) \quad \text{s.t.} \quad h_j(x_j) \le y_j, \quad x_j \in \mathcal{X}_j$$
(3)

Interestingly, we know from ([15], Sec. 5.4.4) that $-\lambda_j$, i.e., minus the Lagrange multiplier associated to the constraint $h_j(x_j) \le y_j$, is in fact a subgradient of $p_j$ at $y_j$.

Having defined the primal subproblems, we can rewrite (2) as

$$\min_{\{y_j\}} \; \sum_{j=1}^{J} p_j(y_j) \quad \text{s.t.} \quad \sum_{j=1}^{J} y_j \le C, \quad y \in \mathcal{Y}$$
(4)

and (4) is referred to as the primal master problem. Note that since the subgradients of the primal subproblems are obtained at no cost, we can use a projected gradient approach ([15], Sec. 2.3) to solve the problem. In other words, the following recursion (k indexes iterations)

$$y^{k+1} = \left[ y^k - \alpha^k s^k \right]_{\mathcal{P}}$$
(5)

with $s^k = -[\lambda_1(y_1^k), \dots, \lambda_J(y_J^k)]^T$ (the subgradient of the objective in (4)) and where $[\,\cdot\,]_{\mathcal{P}}$ is the projection onto the feasible set (i.e., $\{y \,|\, y \in \mathcal{Y}, \sum_{j=1}^{J} y_j \le C\}$). The recursion converges to $y^\star$. The interested reader can find more details about primal decomposition in ([15], Sec. 6.4.2) and also in [6]. However, note that it is necessary to appropriately adjust the value of $\alpha^k$ in order to guarantee convergence to the desired solution [6, 9].
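For illustration purposes, the recursion in (5) can be sketched in a few lines of Python; the subproblem oracle and all names below are ours (hypothetical), and the box constraints $y \in \mathcal{Y}$ are omitted for brevity, so only the halfspace $\sum_j y_j \le C$ is enforced by the projection.

```python
import numpy as np

def primal_master(solve_subproblem, C, y0, iters=100):
    """Projected-subgradient loop for the primal master problem (4).

    solve_subproblem(j, y_j) is a hypothetical oracle that solves (3) and
    returns lambda_j, the multiplier of h_j(x_j) <= y_j, so that
    -lambda_j is a subgradient of p_j at y_j.
    """
    y = np.asarray(y0, dtype=float)
    J = len(y)
    for k in range(1, iters + 1):
        lam = np.array([solve_subproblem(j, y[j]) for j in range(J)])
        alpha = 1.0 / k               # user-defined diminishing step-size
        y = y + alpha * lam           # descent step y - alpha*s with s = -lambda
        if y.sum() > C:               # projection onto {y : sum(y) <= C}
            y -= (y.sum() - C) / J
    return y
```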

2.3 Dual decomposition

Dual decomposition is the dual-domain alternative to primal decomposition. Let us relax only the coupling constraint of (1) and compute the corresponding dual function,

$$q(\mu) = \sum_{j=1}^{J} \min_{x_j \in \mathcal{X}_j} \left\{ f_j(x_j) + \mu\, h_j(x_j) \right\} - \mu C$$
(6)

Clearly, the problem in (6) decouples into J independent problems, called the dual subproblems and defined as

$$q_j(\mu) = \min_{x_j} \; f_j(x_j) + \mu\, h_j(x_j) \quad \text{s.t.} \quad x_j \in \mathcal{X}_j$$
(7)

Note that the dual subproblems are convex programs for $\mu \ge 0$ and that, given a value of $\mu$, the values of the variables in $x_j$ are found after solving the subproblems in (7) for $j = 1, \dots, J$, which can be computed in parallel. In particular, the optimal values of the primal variables, i.e., $\{x_j^\star\}$, are obtained from an optimal value of the dual variable, i.e., $\mu^\star$.

Using the dual subproblems, the dual master problem is written as

$$\max_{\mu} \; \sum_{j=1}^{J} q_j(\mu) - \mu C \quad \text{s.t.} \quad \mu \ge 0$$
(8)

and, as in primal decomposition, a projected gradient approach can be applied ([15], Sec. 6.4.1) to finally obtain $\mu^\star$. The recursion is

$$\mu^{k+1} = \left[ \mu^k + \alpha^k s^k \right]^{+}$$
(9)

where $k$ indexes iterations and $[a]^+ = \max\{0, a\}$. As in primal decomposition, it can be shown that a subgradient of $q_j$ at $\mu^k$ is readily found as $h_j(x_j^\star(\mu^k))$ once the dual subproblems are solved ([15], Sec. 6.1) and therefore, a subgradient of $q$ at $\mu^k$ is given by $s^k = \sum_{j=1}^{J} h_j(x_j^\star(\mu^k)) - C$. Finally, note that a user-defined step-size is also necessary in dual decomposition and, as we discuss later, this is a serious drawback of classical decomposition methods in practice.
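For later comparison with the CDM, the dual recursion (9) admits an equally short sketch; again, the oracle below is hypothetical and is assumed to return $h_j(x_j(\mu))$ after solving (7).

```python
def dual_master(solve_dual_subproblem, J, C, mu0=0.0, iters=100):
    """Projected-subgradient ascent on the dual master problem (8).

    solve_dual_subproblem(j, mu) is a hypothetical oracle returning
    h_j(x_j(mu)) once the j-th dual subproblem (7) is solved.
    """
    mu = mu0
    for k in range(1, iters + 1):
        # subgradient of q at mu: sum_j h_j(x_j(mu)) - C
        s = sum(solve_dual_subproblem(j, mu) for j in range(J)) - C
        alpha = 1.0 / k                # user-defined diminishing step-size
        mu = max(0.0, mu + alpha * s)  # projection [.]^+ onto mu >= 0
    return mu
```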

2.4 Primal–dual techniques

There is a long list of methods in the literature that are termed primal–dual but, to the best of our knowledge, the essentials of our proposed CDM have not been established previously. In general, the reviewed methods suffer from (i) slow convergence to the optimal solution (which restricts the number of practical applications), (ii) no consideration of the separated nature of the problem (i.e., the techniques are not decomposition-based approaches), and/or (iii) no attention to the decentralized implementation of the methods. On the contrary, all these aspects are addressed in the proposed CDM.

A first group of existing primal–dual techniques focuses on iteratively finding a saddle-point of the Lagrangian, which is a convex and concave function of the primal and dual variables, respectively. Although these methods were not originally conceived from a decomposition perspective, they can be applied to the problems of interest in this article (and also implemented in a decentralized manner). Among these techniques, we find the classical work of Arrow et al. [16] or the more recent Mean Value Cross (MVC) decompositions method [17, 18]. However, both techniques need to fix a step-size (explicit, or implicit as in the MVC decompositions method) and, as a consequence, they are penalized in terms of convergence speed in practice.

In a second group of techniques, we include all the possible combinations of classical primal and dual decompositions, as described in [6]. The idea in this case is to solve some parts of the problem with a primal decomposition approach while other parts are tackled by means of a dual decomposition. Therefore, these solutions do not consider full primal–dual interactions as in the proposed CDM, where each part considers both domains simultaneously. Furthermore, they also suffer from slow convergence due, in part, to the manually adjusted step-sizes. However, it is important to remark that in the last decade significant progress has been made in dual-decomposition-based solutions using smoothing [19] or path-following [20] strategies, improving the number of iterations of the classical dual decomposition by an order of magnitude. Notwithstanding, these methods tackle problems with linear constraints and are not designed from a decentralized implementation perspective.

Finally, let us mention the primal–dual interior point methods ([2], Sec. 11.7) and their variants [21, 22] as the third group of primal–dual approaches. In this case, the basic idea is to iteratively solve the KKT conditions of the problem using numerical methods typically applied to the resolution of systems of nonlinear equations, such as the Newton method. These techniques have received great attention in recent years due to their good performance in terms of convergence speed when used in generic convex problems. However, since they were not conceived to exploit the separability of the problem (if it exists), it is not straightforward to derive decentralized solutions from this third group of techniques (one of the goals in this article).

3 The CDM

In order to overcome the detected drawbacks, we design our CDM with the aim to (i) exploit the primal and dual domains in convex optimization problems and (ii) simultaneously benefit from the separability of the problem in order to derive decentralized solutions. Although there are solutions in the literature that exploit both solution domains, as discussed, the development of a fast technique satisfying (i) and (ii) is still pending. In the following, we first describe our proposed CDM and thereafter, we prove that the iterates of the method converge to the optimal solution.

3.1 Description of the method

The proposed method has four building blocks: the primal subproblems, the dual subproblems, the primal projection, and the dual projection. These blocks are connected as depicted in Figure 1 and, in what follows, we describe the actions taken at each step of the method and provide a summary of the technique in algorithmic form. Thereafter, the convergence of the successive updates of the CDM, i.e., $\mu^k$, towards an optimal value of the dual variable, i.e., $\mu^\star$, is proved (and the same holds for the remaining primal and dual variables).

Figure 1. Block diagram of the CDM.

3.1.1 Step 1: dual subproblems

From $\mu^k$, the primal value $y_j^k$ is obtained after solving the following convex optimization problem $d_j$

$$d_j(\mu^k) = \min_{x_j,\, y_j^k} \; f_j(x_j) + \mu^k y_j^k \quad \text{s.t.} \quad h_j(x_j) \le y_j^k, \quad x_j \in \mathcal{X}_j$$
(10)

Note that $d_j(\mu^k)$ coincides with (7) if we substitute $y_j^k$ by $h_j(x_j)$. Note also that $\lambda_j^k$, i.e., the dual variable associated to the constraint $h_j(x_j) \le y_j^k$, always takes the value of $\mu^k$. This can be checked using one of the KKT optimality conditions of the problem as follows

$$\frac{\partial L(x_j, y_j^k, \lambda_j^k, \dots)}{\partial y_j^k} = \mu^k - \lambda_j^k = 0 \;\Rightarrow\; \lambda_j^k = \mu^k$$
(11)

where $L(x_j, y_j^k, \lambda_j^k, \dots)$ stands for the Lagrangian function of the problem. The interested reader can find more details on the Lagrangian function as well as on the KKT optimality conditions of convex problems in ([2], Sec. 5.1, Sec. 5.5).

3.1.2 Step 2: primal projection

In the second step of the method, the values $y_j^k$ from all the subproblems are grouped in $y^k = [y_1^k, \dots, y_J^k]^T$ and projected onto the subset $\mathcal{Y} \cap \{y \,|\, \sum_i y_i = C\}$ if $\mu^k > 0$ and onto the subset $\mathcal{Y} \cap \{y \,|\, \sum_i y_i \le C\}$ otherwise. Note that both projections force the values in $\hat{y}^k$ to be feasible and that the choice of the projection subset depending on $\mu$ is in accordance with the complementary slackness condition $\mu(\sum_{j=1}^{J} y_j - C) = 0$ of the problem.

In the most usual case, that is, for $\mu^k > 0$, the following convex problem has to be solved

$$\min_{\hat{y}^k} \; \| y^k - \hat{y}^k \|^2 \quad \text{s.t.} \quad \sum_{j=1}^{J} \hat{y}_j^k = C, \quad \hat{y}^k \in \mathcal{Y}$$
(12)

which can be done semi-analytically as discussed in “Proof of Proposition 2” in Appendix.
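When each $\mathcal{Y}_j$ is an interval $[m_j, d_j]$ (the setting of Section 4), the KKT conditions of (12) show that the projection is a clipped shift of $y^k$, and the scalar shift can be found by bisection. The following sketch (our own helper, under the stated interval assumption) illustrates this semi-analytic computation.

```python
import numpy as np

def primal_projection(y, C, m, d, tol=1e-10):
    """Euclidean projection of y onto {z : sum(z) = C, m <= z <= d}.

    From the KKT conditions of (12), the solution is clip(y - r, m, d)
    for a scalar r chosen so that the sum equals C; r is found here by
    bisection (assumes sum(m) <= C <= sum(d)).
    """
    y, m, d = (np.asarray(v, dtype=float) for v in (y, m, d))
    lo = (y - d).min()        # clip(y - lo, m, d) = d, so the sum is >= C
    hi = (y - m).max()        # clip(y - hi, m, d) = m, so the sum is <= C
    while hi - lo > tol:
        r = 0.5 * (lo + hi)
        if np.clip(y - r, m, d).sum() > C:
            lo = r
        else:
            hi = r
    return np.clip(y - 0.5 * (lo + hi), m, d)
```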

3.1.3 Step 3: primal subproblems

The j th primal subproblem is defined as

$$p_j(\hat{y}_j^k) = \min_{x_j} \; f_j(x_j) \quad \text{s.t.} \quad h_j(x_j) \le \hat{y}_j^k, \quad x_j \in \mathcal{X}_j$$
(13)

and it can be solved once $\hat{y}_j^k$ is available. In this case, we are interested in the optimal value of the Lagrange multiplier associated to $h_j(x_j) \le \hat{y}_j^k$, that is, $\hat{\lambda}_j^k$. As later discussed in Section 3.4, step 4 of the method uses only the values of $\hat{\lambda}_j^k$ that result from $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$, where $\operatorname{bd} \mathcal{A}$ stands for the boundary of the subset $\mathcal{A}$. The selected values are then grouped in the list $\{\breve{\lambda}_j^k\}$, which is the input of the dual projection. Note that if $\sum_{j=1}^{J} \hat{y}_j^k = C$, then the list is guaranteed to be non-empty, as shown in Proposition 3 (Section 3.4). Besides, it is important to solve the primal subproblem in (13) consistently with its dual version in (10). In other words, if $\hat{y}_j^k$ is fixed in the $j$th primal subproblem, then $\hat{\lambda}_j^k$ (not necessarily unique) is accepted as valid only if $d_j(\hat{\lambda}_j^k)$ gives $y_j^k = \hat{y}_j^k$.

3.1.4 Step 4: dual projection

If $\sum_{j=1}^{J} \hat{y}_j^k = C$, a new update of $\mu$, i.e., $\mu^{k+1}$, is obtained as the solution of the following optimization problem

$$\min_{\mu^{k+1}} \; \| \mu^{k+1} - \mu^k \|^2 \quad \text{s.t.} \quad \mu^{k+1} \in \{\breve{\lambda}_j^k\}$$
(14)

In other words, $\mu^{k+1}$ takes the value in $\{\breve{\lambda}_j^k\}$ that is closest to $\mu^k$. As discussed in Section 3.4, this is equivalent to setting $\mu^{k+1} = \min\{\breve{\lambda}_j^k\}$ if $\mu^k < \mu^\star$ and $\mu^{k+1} = \max\{\breve{\lambda}_j^k\}$ if $\mu^k > \mu^\star$.

If $\sum_{j=1}^{J} \hat{y}_j^k < C$, then $\mu^{k+1}$ is fixed to 0, which is in accordance with the complementary slackness condition $\mu(\sum_{j=1}^{J} y_j - C) = 0$.

3.2 The CDM in algorithmic form

Let us consider, without loss of generality, a decentralized implementation of the proposed method with a controller and $J$ independent participants. Each participant is able to solve the corresponding primal and dual subproblems, whereas the task of the controller is to compute the primal and dual projections. Note that both operations involve simple computations, as discussed in steps 2 and 4 above. The proposed CDM is then summarized in the following algorithm:

Choose an initial value for $\mu^0$ and repeat:

1. The controller sends $\mu^k$ to the participants, which compute $d_j(\mu^k)$ in (10) and return $y_j^k$.

2. With $y^k = [y_1^k, y_2^k, \dots, y_J^k]$, the controller computes $\hat{y}^k$ using the primal projection (step 2 above) and sends $\hat{y}_j^k$ to the participants if $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$.

3. The participants compute $p_j(\hat{y}_j^k)$ in (13) and return $\hat{\lambda}_j^k$ to the controller.

4. The controller fixes $\mu^{k+1}$ to the received value that is closest to $\mu^k$.

Until convergence. A sketch of this loop in code is given below.
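As an illustration only, the loop above can be sketched in Python as follows, assuming interval boxes $\mathcal{Y}_j = [m_j, d_j]$, the primal_projection helper sketched in Section 3.1.2, and hypothetical oracles dual_sub (returning $y_j^k$ from (10)) and primal_sub (returning $\hat{\lambda}_j^k$ from (13)).

```python
import numpy as np

def cdm(dual_sub, primal_sub, C, m, d, mu0=0.0, iters=50, tol=1e-9):
    """Sketch of the CDM main loop (steps 1-4 of Section 3.1)."""
    J, mu = len(m), mu0
    for _ in range(iters):
        # Step 1: dual subproblems at the common price mu
        y = np.array([dual_sub(j, mu) for j in range(J)])
        if mu == 0.0 and y.sum() <= C:
            return mu, y              # decoupled case: already optimal
        # Step 2: primal projection onto {sum(y) = C} within the boxes
        y_hat = primal_projection(y, C, m, d)
        # Step 3: primal subproblems, keeping prices with y_hat_j not in bd(Y_j)
        inner = (y_hat > m) & (y_hat < d)
        lam = np.array([primal_sub(j, y_hat[j]) for j in np.flatnonzero(inner)])
        # Step 4: dual projection -- candidate price closest to mu
        mu_next = lam[np.argmin(np.abs(lam - mu))]
        if abs(mu_next - mu) <= tol:
            return mu_next, y_hat
        mu = mu_next
    return mu, y_hat
```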

3.3 Resource–price interpretation

Often in convex optimization, primal variables are interpreted as resources and dual variables as prices to be paid for them. In the sequel, we revisit the proposed technique under this resource–price perspective. Initially, a global price $\mu^k$ is fixed and sent to the parts. Given that price, the parts estimate the amount of resources they want to buy. Intuitively, there will be a deficit of resources (a total request over $C$) if the price is too low and an excess if it is too high. In both cases, the primal projection corrects the allocation in order to distribute all the available resources among the parts. However, there is no guarantee that the distribution follows a common market law. In order to correct the situation, the primal subproblems estimate the price to be paid for the new resource allocation and, in case the individual prices differ, the dual projection fixes a new common price $\mu^{k+1}$ in order to advance towards a consensus price $\mu^\star$.

3.4 Proof of the method

Before proving that the successive updates of the proposed method converge to the optimal solution, let us establish the relationship between primal and dual variables in the subproblems with the following proposition.

Proposition 1. Take the $j$th primal subproblem $p_j$ in (13) and the $j$th dual subproblem $d_j$ in (10) of the CDM. Then, the following two statements hold: (i) $\hat{\lambda}_j^k(\hat{y}_j^k)$ is non-increasing in $\hat{y}_j^k$ in (13) and (ii) $y_j^k(\mu^k)$ is non-increasing in $\mu^k$ in (10).

Proof. See “Proof of Proposition 1” in Appendix. □

Next, the goal is to verify that the primal and dual projections effectively coordinate the subproblems towards the optimal solution. Let us assume, without loss of generality, that the initial guess is $\mu^0 = 0$, so that $\mu^0 \le \mu^\star$. From that value, the CDM starts by solving the dual subproblems in (10) in order to obtain $y^0$. As a result, there are two possibilities, namely, (i) $\sum_j y_j^0 \le C$ and (ii) $\sum_j y_j^0 > C$. In the first situation, $\mu^0$ as well as $y^0$ and the corresponding values in $\{x_j\}$ are optimal. Note that the subproblems are in this case decoupled and therefore the individual optimization carried out in the dual subproblems is globally optimal, too. For the sake of brevity, we do not discuss here the outputs of the following steps and iterations of the method, but it can be checked that the solution remains unaltered, as expected. In the second case, $\mu^0 = 0$ is clearly non-optimal and in the sequel we show how the successive updates of $\mu^k$ converge to an optimal value of the dual variable, that is, $\mu^\star > 0$.

Let us revisit then a complete iteration of the method starting at the dual subproblems in (10) with $\mu^k < \mu^\star$, which holds at least for $k = 0$. Since $y_j^k$ is a non-increasing function of $\mu^k$ in the $j$th dual subproblem (see Proposition 1), $\mu^k < \mu^\star$ and $y_j^k(\mu^\star) = y_j^\star$, it is true that $y_j^k \ge y_j^\star$. Moreover, if we take into account that $\sum_{j=1}^{J} y_j^\star = C$ (we are considering the case where the optimal solution is coupled), we can establish that $\sum_{j=1}^{J} y_j^k > C$ unless $y^k = y^\star$.

Thereafter, it is verified in the second step of the method (primal projection) that $\hat{y}^k \preceq y^k$ ($\hat{y}_j^k < y_j^k$ for some $j$), according to Proposition 2 below.

Proposition 2. Given the optimization problem in (12) and $y^k \succeq y^\star$ ($y^k \ne y^\star$), its optimal solution can be expressed as $\hat{y}^k = y^k - r$ with $r \succeq 0$ ($r_j > 0$ for some $j$).

Proof. See “Proof of Proposition 2” in Appendix. □

In the third step of the method, the $j$th primal subproblem defined in (13) computes the individual price $\hat{\lambda}_j^k$, and the list of individual prices $\{\hat{\lambda}_j^k\}$ is constructed with the values obtained from the $J$ independent subproblems, indexed by $j = 1, \dots, J$. Note, however, that our main interest is not in the prices $\hat{\lambda}_j^k$ but in finding a global consensus price $\mu^\star$. Fortunately, if we come back to the problem definition in (2), we notice that there is a dependence between the dual variable associated to the constraint $h_j(x_j) \le y_j$, i.e., $\lambda_j$, and the dual variable associated to the constraint $\sum_{j=1}^{J} y_j \le C$, i.e., $\mu$ (in terms of the proposed algorithm, $\hat{y}_j^k$, $\hat{\lambda}_j^k$, and $\mu^k$ play the role of $y_j$, $\lambda_j$, and $\mu$, respectively). This dependence motivates in our algorithm the selection of some of the values in the list $\{\hat{\lambda}_j^k\}$. To be more specific, the value $\hat{\lambda}_j^k$ is chosen if the corresponding primal variable $\hat{y}_j^k$ satisfies $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$, as discussed next.

Let us first write the Lagrangian of the problem in (2), that is

$$L(\{x_j\}, y, \lambda, \mu, \{\xi_j\}, \{\psi_j\}) = \sum_{j=1}^{J} f_j(x_j) + \sum_{j=1}^{J} \xi_j^T g_j(x_j) + \sum_{j=1}^{J} \psi_j^T q_j(y_j) + \sum_{j=1}^{J} \lambda_j \left( h_j(x_j) - y_j \right) + \mu \left( \sum_{j=1}^{J} y_j - C \right)$$
(15)

where the set of convex functions $q_j(y_j)$ with associated Lagrange multipliers $\psi_j$ defines the subset $\mathcal{Y}_j$. From the Lagrangian function we derive some of the KKT optimality conditions of the convex optimization problem, since the optimal values of the primal and dual variables form a saddle-point of the Lagrangian. In particular, let us consider the following condition

$$\frac{\partial L}{\partial y_j} = \mu - \lambda_j + \psi_j^T \nabla_{y_j} q_j(y_j) = 0$$
(16)

that reveals

$$\mu = \lambda_j - \psi_j^T \nabla_{y_j} q_j(y_j)$$
(17)

This equality is not very useful in general, nor from an algorithmic point of view, because the values of the multipliers in $\psi_j$ are unknown. However, we can make use of the following complementary slackness conditions ([2], Sec. 5.5.2) of the problem, compactly written as $\psi_j \odot q_j(y_j) = 0$ (element-wise), and observe that if $y_j \notin \operatorname{bd} \mathcal{Y}_j$ then $q_j(y_j) \prec 0$ and consequently $\psi_j = 0$. In that case, the link between $\mu$ and $\lambda_j$ is clear,

$$\mu = \lambda_j \quad \text{if } y_j \notin \operatorname{bd} \mathcal{Y}_j$$
(18)

Back to the algorithm, this result motivates the use of $\hat{\lambda}_j^k$ only if it is derived from $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$, and so a new list $\{\breve{\lambda}_j^k\}$ that contains all these suitable dual values is constructed. Besides, it is necessary to guarantee that the new list $\{\breve{\lambda}_j^k\}$ is non-empty or, equivalently, that after the primal projection at least one value in $\hat{y}^k$ satisfies $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$. This is the result of Proposition 3 below.

Proposition 3. Let $\hat{y}^k = y^k - r$ ($\hat{y}^k \ne y^\star$) be a primal point resulting from the primal projection of the CDM, with the value of $r \succeq 0$ suitable to fulfill $\sum_{j=1}^{J} \hat{y}_j^k = C$, $\hat{y}^k \in \mathcal{Y}$. Then, at least one value in $\{\hat{y}_j^k\}$ verifies $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$ and also $\hat{y}_j^k > y_j^\star$.

Proof. See “Proof of Proposition 3” in Appendix. □

Finally, we need to prove that the last step of the method, i.e., the dual projection in (14), is able to find an update of the global price $\mu$ from the list $\{\breve{\lambda}_j^k\}$ such that $\mu^k \to \mu^\star$ as $k \to \infty$. Since we have assumed that primal and dual subproblems are reciprocal, in the sense that they agree on the values of the dual variables $\hat{\lambda}_j^k$ and $\lambda_j^k$ when $y_j^k = \hat{y}_j^k$ (see Section 3.1, step 3), a consequence is that $\breve{\lambda}_j^k(y_j^\star)$ computed in (13) equals $\lambda_j^\star = \mu^\star$, just as $y_j^k(\mu^\star) = y_j^\star$ in (10). Note that we have intentionally written $\breve{\lambda}_j^k$ instead of $\hat{\lambda}_j^k$ because our focus is only on the primal subproblems with $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$, which ensures $\lambda_j = \mu$ according to (18). Additionally, the following two claims can be made: (i) all the values in $\{\breve{\lambda}_j^k\}$ satisfy $\breve{\lambda}_j^k \ge \mu^k$ and (ii) at least one value in the list verifies $\breve{\lambda}_j^k \le \mu^\star$. The first statement uses Proposition 1, in particular that $\hat{\lambda}_j^k$ (or $\breve{\lambda}_j^k$ equivalently) is a non-increasing function of $\hat{y}_j^k$ in the primal subproblems. Recalling that $p_j(y_j^k)$ in (13) would produce $\lambda_j^k$ as inner Lagrange multiplier and that $\lambda_j^k = \mu^k$ according to (11), it is true that $\breve{\lambda}_j^k \ge \mu^k$, since $\hat{y}_j^k \le y_j^k$ (as a result of the primal projection). The second statement is verified in a similar manner, taking into account that at least one value in $\{\hat{y}_j^k\}$ verifies $\hat{y}_j^k \notin \operatorname{bd} \mathcal{Y}_j$ and also $\hat{y}_j^k > y_j^\star$ (see Proposition 3). Since $\breve{\lambda}_j^k(y_j^\star) = \lambda_j^\star = \mu^\star$ in the $j$th primal subproblem, Proposition 1 establishes that $\breve{\lambda}_j^k \le \mu^\star$.

Figure 2a illustrates the effects of the three steps of the CDM graphically from the dual-domain point of view. Each bar represents an entity ($J$ in total) and a point on that bar indicates the value of the dual variable $\lambda_j^k$ or $\hat{\lambda}_j^k$; the higher the point, the higher the value. At the beginning of the $k$th iteration, the dual subproblems enforce $\lambda_j^k = \mu^k \; \forall j$ and translate these dual values to the primal variables in $y^k$. Immediately after the primal projection, the corrected values in $\hat{y}^k$ are converted again to dual variables, i.e., $\{\hat{\lambda}_j^k\}$. In the figure, we appreciate the effect of the primal projection on the Lagrange multipliers of interest. In short, we notice that (i) all values increase and (ii) there is at least one value below $\mu^\star$.

Figure 2. Primal and dual projections lead the updates of $\mu$ towards $\mu^\star$.

The role of the dual projection in (14) is then to update $\mu$ to $\mu^{k+1}$ by selecting the closest value to $\mu^k$ from the list $\{\breve{\lambda}_j^k\}$, that is, $\mu^{k+1} = \min\{\breve{\lambda}_j^k\}$ if $\mu^k < \mu^\star$, as depicted in Figure 2b. Together with the previous results, i.e., $\breve{\lambda}_j^k \ge \mu^k$ and $\breve{\lambda}_j^k \le \mu^\star$ for at least one $j$, the new update verifies $\mu^{k+1} \in [\mu^k, \mu^\star]$ and thus our initial hypothesis ($\mu^k < \mu^\star$) is also satisfied in the next iteration unless $\mu^{k+1} = \mu^\star$. Therefore, successive iterations confirm $\mu^k \to \mu^\star$ and, accordingly, $\hat{y}_j^k \to y_j^\star \; \forall j$. This concludes the proof of the proposed method.

4 Convergence rate analysis and stopping criterion

This section provides additional insights into the proposed CDM by means of the following particularization of (1),

$$\min_{\{x_j\},\, y} \; \sum_{j=1}^{J} f_j(x_j) \quad \text{s.t.} \quad x_j \in \mathcal{X}_j, \; j = 1, \dots, J, \quad h_j(x_j) \le y_j, \; j = 1, \dots, J, \quad \sum_{j=1}^{J} y_j \le C, \quad y \in \mathcal{Y}, \; \mathcal{Y} = \mathcal{Y}_1 \times \dots \times \mathcal{Y}_J$$
(19)

where the variables in $\{x_j\}$ as well as the subsets in $\{\mathcal{X}_j\}$ are uni-dimensional. To be precise, not all the problems that can be formulated as in (19) are considered in the following convergence analysis, but only those with the following dependence between the primal variable $y_j$ and the dual variable $\lambda_j$ in the subproblems of the CDM; the analysis is still of interest, since usual problems in the literature exhibit this relationship (see some examples in Section 5),

$$y_j = a_j \lambda_j^{-\alpha} + b_j, \quad \text{for certain values of } \alpha > 0, \; a_j > 0 \text{ and } b_j \in \mathbb{R}$$
(20)

In the general case, the relationship between $y_j$ and $\lambda_j$ can be established, again thanks to the KKT optimality conditions of the problem. Therefore, let us construct the Lagrangian of (19), that is,

$$L(\{x_j\}, y, \{\lambda_j\}, \mu, \{\xi_j\}, \{\psi_j\}) = \sum_{j=1}^{J} f_j(x_j) + \mu \left( \sum_{j=1}^{J} y_j - C \right) + \sum_{j=1}^{J} \lambda_j \left( h_j(x_j) - y_j \right) + \sum_{j=1}^{J} \xi_j^T g_j(x_j) + \sum_{j=1}^{J} \psi_j^T q_j(y_j)$$
(21)

and consider the following optimality condition,

$$\frac{\partial L}{\partial x_j} = \dot{f}_j(x_j) + \lambda_j \dot{h}_j(x_j) + \xi_j^T \nabla_{x_j} g_j(x_j) = 0$$
(22)

where $\dot{f}$ and $\dot{h}$ stand for the first derivatives of the functions $f$ and $h$, respectively. Note that if $x_j \notin \operatorname{bd} \mathcal{X}_j$ then $\xi_j = 0$ due to complementary slackness and

$$\lambda_j = -\frac{\dot{f}_j(x_j)}{\dot{h}_j(x_j)}$$
(23)

Moreover, if the constraint $h_j(x_j) \le y_j$ is satisfied with equality (the usual case, as we consider coupled problems), then $x_j = h_j^{-1}(y_j)$ and the relationship between $\lambda_j$ and $y_j$ is established.

Finally, as we show in Section 5, (20) holds for common functions $f$ and $h$ appearing in usual problems. Furthermore, the convergence rate of the proposed method can be derived assuming (20), and a stopping criterion that enhances the performance of the CDM can be designed. These two issues are developed in the following subsections.

4.1 Convergence rate analysis

In order to find the convergence rate of the proposed method, let us compare the value of $|(\mu^k)^{-\alpha} - (\mu^\star)^{-\alpha}|$ in two successive iterations, i.e., $k$ and $k+1$. First, let us classify the optimal primal variables $\{y_j^\star\}$ into three groups: $\mathcal{I}$ includes the indexes $j$ corresponding to the variables that satisfy $y_j^\star = \inf \mathcal{Y}_j$, $\mathcal{S}$ embraces the indexes where $y_j^\star = \sup \mathcal{Y}_j$ and finally, $\mathcal{A}$ contains the remaining indexes, i.e., those associated to $y_j^\star \notin \operatorname{bd} \mathcal{Y}_j$. Using (20) and recalling the optimality condition $\lambda_j^\star = \mu^\star$ seen in (11), it is true that

$$y_j^\star = \begin{cases} a_j (\mu^\star)^{-\alpha} + b_j & j \in \mathcal{A} \\ m_j & j \in \mathcal{I} \\ d_j & j \in \mathcal{S} \end{cases}$$
(24)

where $m_j = \inf \mathcal{Y}_j$ and $d_j = \sup \mathcal{Y}_j$. Assuming that $\sum_{j=1}^{J} y_j^\star = C$ is fulfilled, we get

$$(\mu^\star)^{-\alpha} = \frac{C - \sum_{j \in \mathcal{A}} b_j - \sum_{j \in \mathcal{I}} m_j - \sum_{j \in \mathcal{S}} d_j}{\sum_{j \in \mathcal{A}} a_j}$$
(25)

For any other value $\mu^k \ne \mu^\star$ we define

$$y_j^k = \begin{cases} a_j (\mu^k)^{-\alpha} + b_j & j \in \mathcal{A}^k \\ m_j & j \in \mathcal{I}^k \\ d_j & j \in \mathcal{S}^k \end{cases}$$
(26)

where the subsets $\mathcal{A}^k$, $\mathcal{I}^k$, and $\mathcal{S}^k$ are defined likewise $\mathcal{A}$, $\mathcal{I}$, and $\mathcal{S}$ but refer to the indexes of the variables in $\{y_j^k\}$.

Let us assume $\mu^k < \mu^\star$ and let us obtain $\{y_j^k\}$ from (26). Clearly, since $(\mu^k)^{-\alpha} > (\mu^\star)^{-\alpha}$, it holds that $y_j^k \ge y_j^\star \; \forall j$. As a result of the primal projection in (12), now with the objective modified by the weighting matrix $W = \operatorname{diag}(1/a_1, \dots, 1/a_J)$, i.e., $\| W^{1/2}(y^k - \hat{y}^k) \|^2$, the corrected values $\hat{y}_j^k$ can be expressed as

$$\hat{y}_j^k = \begin{cases} a_j (\mu^k)^{-\alpha} + b_j - a_j K & j \in \mathcal{A}^k \\ m_j & j \in \mathcal{I}^k \\ d_j - a_j K & j \in \mathcal{S}^k \end{cases}$$
(27)

for a value of $K > 0$ to be determined. The proof is very similar to the case $W = I$ in "Proof of Proposition 2" in Appendix and the convergence of the method is not affected. We use this weighted projection in this particularized version simply because it offers better performance; it was not used before because, in the general case, we have no means to find a better weighting matrix than the identity matrix.

At the third step of the method, i.e., the primal subproblems, the reduced list $\{\breve{\lambda}_j^k\}$ is obtained from the values $\hat{y}_j^k$ in (27) with $j \in \mathcal{A}^k \cup \mathcal{S}^k$. In other words, reversing (20) we find

$$(\breve{\lambda}_j^k)^{-\alpha} = \frac{\hat{y}_j^k - b_j}{a_j} = \begin{cases} (\mu^k)^{-\alpha} - K, & j \in \mathcal{A}^k \\ \dfrac{d_j - b_j}{a_j} - K, & j \in \mathcal{S}^k \end{cases}$$
(28)

Finally, in the dual projection we select the minimum value in $\{\breve{\lambda}_j^k\}$, which is in this case the closest to $\mu^k$ given $\mu^k < \mu^\star$, because $\mu^{k+1} \in [\mu^k, \mu^\star]$ (see Section 3.4),

$$\mu^{k+1} = \min \{\breve{\lambda}_j^k\}$$
(29)

or equivalently,

$$(\mu^{k+1})^{-\alpha} = \max \{(\breve{\lambda}_j^k)^{-\alpha}\} = (\mu^k)^{-\alpha} - K$$
(30)

since $(d_j - b_j)/a_j$ is always lower than $(\mu^k)^{-\alpha}$, as otherwise the corresponding index would belong to $\mathcal{A}^k$. Note in (27) that the definition of the subsets $\mathcal{A}^k$ and $\mathcal{S}^k$ implies $d_j < a_j (\mu^k)^{-\alpha} + b_j$ for $j \in \mathcal{S}^k$.

Using the previous results, we can state that

$$\left| (\mu^{k+1})^{-\alpha} - (\mu^\star)^{-\alpha} \right| = \left| (\mu^k)^{-\alpha} - (\mu^\star)^{-\alpha} - K \right|$$
(31)

This can be further refined if $K$ is developed using (27) and $\sum_{j=1}^{J} \hat{y}_j^k = C$,

$$K = \frac{(\mu^k)^{-\alpha} \sum_{j \in \mathcal{A}^k} a_j + \sum_{j \in \mathcal{A}^k} b_j + \sum_{j \in \mathcal{I}^k} m_j + \sum_{j \in \mathcal{S}^k} d_j - C}{\sum_{j \in \mathcal{A}^k \cup \mathcal{S}^k} a_j} = \frac{\sum_{j \in \mathcal{A}^k} a_j}{\sum_{j \in \mathcal{A}^k \cup \mathcal{S}^k} a_j} (\mu^k)^{-\alpha} - \frac{\sum_{j \in \mathcal{A}^k} a_j}{\sum_{j \in \mathcal{A}^k \cup \mathcal{S}^k} a_j} \left[ \frac{C - \sum_{j \in \mathcal{A}^k} b_j - \sum_{j \in \mathcal{I}^k} m_j - \sum_{j \in \mathcal{S}^k} d_j}{\sum_{j \in \mathcal{A}^k} a_j} \right]$$
(32)

In particular, note that the expression within brackets in (32) is exactly $(\mu^\star)^{-\alpha}$ when the subsets $\mathcal{A}^k$, $\mathcal{I}^k$, and $\mathcal{S}^k$ coincide with the optimal ones. We say that the algorithm is in the optimal zone when the sets $(\mathcal{A}^k, \mathcal{I}^k, \mathcal{S}^k)$ coincide with $(\mathcal{A}, \mathcal{I}, \mathcal{S})$.

Finally, we can conclude that the speed of convergence within the optimal zone obeys the following rule, which is obtained by plugging (32) into (31),

$$\left| (\mu^{k+1})^{-\alpha} - (\mu^\star)^{-\alpha} \right| = \left| (\mu^k)^{-\alpha} - (\mu^\star)^{-\alpha} \right| \times \left( 1 - \frac{\sum_{j \in \mathcal{A}} a_j}{\sum_{j \in \mathcal{A} \cup \mathcal{S}} a_j} \right)$$
(33)

In other words, $(\mu^k)^{-\alpha}$ converges linearly to $(\mu^\star)^{-\alpha}$ except when $\mathcal{S} = \{\}$, in which case the convergence is superlinear. Alternatively, if the initial hypothesis is $\mu^0 > \mu^\star$, the convergence is also linear except for $\mathcal{I} = \{\}$, in which case it is superlinear. Note in both cases that, since (1) and (19) are assumed coupled problems, $\mathcal{A} \ne \{\}$.

4.2 Stopping criterion

The previous convergence rule in (33) can be used to define a stopping criterion for the CDM. It is based on the particular evolution followed by $\mu^k$ inside the optimal zone. For that purpose, let us take three consecutive values of $\mu$, i.e., $\mu^k$, $\mu^{k+1}$, and $\mu^{k+2}$, all of them in the optimal zone. The successive application of (33) leads to

$$\left| (\mu^{k+l})^{-\alpha} - (\mu^\star)^{-\alpha} \right| = \left| (\mu^k)^{-\alpha} - (\mu^\star)^{-\alpha} \right| \times \left( 1 - \frac{\sum_{j \in \mathcal{A}} a_j}{\sum_{j \in \mathcal{A} \cup \mathcal{S}} a_j} \right)^{l}, \quad l \in \{0, 1, 2\}$$
(34)

From (34) it is verified that

$$\frac{(\mu^{k+2})^{-\alpha} - (\mu^{k+1})^{-\alpha}}{(\mu^{k+1})^{-\alpha} - (\mu^k)^{-\alpha}} = 1 - \frac{\sum_{j \in \mathcal{A}} a_j}{\sum_{j \in \mathcal{A} \cup \mathcal{S}} a_j}$$
(35)

and therefore, in the optimal zone, the left-hand side of (35) is a constant number regardless of $k$. From a practical point of view and thanks to this result, we can monitor the evolution of

$$SC^k = \frac{(\mu^{k+2})^{-\alpha} - (\mu^{k+1})^{-\alpha}}{(\mu^{k+1})^{-\alpha} - (\mu^k)^{-\alpha}}, \quad \forall k$$
(36)

and stop the iterations when $SC^k$ stabilizes to a constant value. Afterwards, the optimal solution is readily obtained since, at that point, we know which allocations saturate to either $m_j$ or $d_j$, and the exact value of $\mu^\star$ can be computed by means of (25). A sketch of this monitoring is given below.
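A minimal sketch of this monitoring, assuming that the exponent $\alpha$ in (20) is known for the problem at hand (e.g., $\alpha = 1$ for the water-filling examples of Section 5), could be:

```python
def sc_values(mu_history, alpha):
    """Stopping-criterion sequence SC^k of (36) from successive prices mu^k.

    Works on the transformed iterates (mu^k)^(-alpha); the iterations can
    be stopped once consecutive SC^k values stabilize (see Section 5.3.1).
    """
    t = [mu ** (-alpha) for mu in mu_history]
    return [(t[k + 2] - t[k + 1]) / (t[k + 1] - t[k])
            for k in range(len(t) - 2)]
```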

4.3 Graphical comparison among decomposition techniques

In the sequel, we include a graphical comparison among decomposition techniques; the goal is to highlight the manner in which the different methods operate in essence. We do this with the support of the following toy optimization problem

$$\min_{x_1, x_2, y_1, y_2} \; a_1 (x_1 - c_1)^2 + a_2 (x_2 - c_2)^2 \quad \text{s.t.} \quad x_i \le y_i, \; i = 1, 2, \quad y_1 + y_2 \le C, \quad 0 \le x_i \le x_i^{\max}, \; i = 1, 2, \quad 0 \le y_i \le x_i^{\max}, \; i = 1, 2$$
(37)

where we have included the variables $y_i$ to match the formulation of the proposed CDM and of primal decomposition as well. In Figure 3, we compare our proposed method to the classical decomposition techniques. In all cases, the feasibility region of the problem in terms of the variables $y_1, y_2$ is marked in grey. Also, the contour lines of the objective function (centered at $c = [c_1, c_2]^T$) are represented in the plots (even though the objective function actually depends on $x_1, x_2$ rather than on $y_1, y_2$).

Figure 3. Comparison among decomposition techniques (toy example).

As depicted in the figure, a primal decomposition approach updates $y^k$ by adding the subgradient to the point and, if the result is not feasible, a projection corrects the situation by finding the closest point in the feasible set. In the figure, note that arrows represent subgradients and dashed lines represent projections. In this way, the successive projections tend to the optimal solution, i.e., $y^\star$. Next, let us consider a dual decomposition approach. In order to analyze it from the perspective of the primal variables, we need to establish first the relationship between $y_j$ and $\lambda_j$ and also between $\lambda_j$ and $\mu$. For that purpose, we consider again the Lagrangian of the problem, that is

$$L(x, y, \lambda, \mu, \xi_1, \xi_2, \psi_1, \psi_2) = a_1 (x_1 - c_1)^2 + a_2 (x_2 - c_2)^2 + \lambda^T (x - y) + \mu (y_1 + y_2 - C) - \sum_{i=1}^{2} \xi_{1,i}\, x_i + \sum_{i=1}^{2} \xi_{2,i} (x_i - x_i^{\max}) - \sum_{i=1}^{2} \psi_{1,i}\, y_i + \sum_{i=1}^{2} \psi_{2,i} (y_i - y_i^{\max})$$
(38)

For the case $x_i \in (0, x_i^{\max})$, $i = 1, 2$, and $y_i = x_i$, the dual variables in $\psi_i$ and $\xi_i$ satisfy $\psi_i = \xi_i = 0$, $i = 1, 2$, due to complementary slackness (note that $y_i^{\max} = x_i^{\max}$, $i = 1, 2$). In these conditions, the KKT optimality condition $\partial L / \partial x_i = 0$ forces

$$x_i = c_i - \frac{\lambda_i}{2 a_i}$$
(39)

Furthermore, if we take into account that $y_i = x_i$ in the case of interest and $\lambda_i = \mu$ due to the KKT optimality condition $\partial L / \partial y_i = 0$, then we verify that

$$y_i(\mu) = c_i - \frac{\mu}{2 a_i}$$
(40)

The dashed line in Figure 3 shows all the points that can be obtained by changing the value of $\mu$ in (40). Note in particular that for $\mu = 0$ the unconstrained optimum of (37) is achieved. Using the dual decomposition technique (in the figure we start with $\mu^0 = 0$), the successive updates move along this line (with orientation and modulus defined by the subgradient) until the optimal solution is achieved.

In the two classical decomposition approaches, primal and dual decomposition, a good choice of the step-size that modifies the length of the subgradient plays a central role, and it is recommended to choose a value that diminishes with the iteration number in order to prevent the successive updates from moving indefinitely around the optimal solution without reaching it. This issue is in fact an important practical impairment of both solutions. Notwithstanding, the method we propose in this article avoids the usage of a user-defined step-size. As the reader can appreciate in the figure, once the initial guess $y^0$ derived from $\mu^0 = 0$ is projected onto the feasible subset, the proposed method finds several candidates to update $\mu^0$ to $\mu^1$ (two in this case, i.e., $\breve{\lambda}_1^1$ and $\breve{\lambda}_2^1$). These two candidates provide the possible updates $y^1(\breve{\lambda}_1^1)$ and $y^1(\breve{\lambda}_2^1)$ and the method always chooses the dual candidate that provides the smallest possible update. This operation guarantees that the primal update, i.e., $y^1$ in this case, remains in the same half-space (with respect to the frontier $y_1 + y_2 = C$). This simple example illustrates an interesting feature of the proposed method in comparison with the other techniques, that is, the step is automatically controlled. A small numerical sketch of this toy problem follows.
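The following small Python sketch runs the CDM updates on a hypothetical instance of the toy problem (37), using the closed forms (39) and (40); for brevity, the primal projection is a plain shift, which is exact here because no box constraint activates for this instance.

```python
import numpy as np

# hypothetical instance of the toy problem (37)
a, c = np.array([1.0, 2.0]), np.array([3.0, 4.0])
xmax, C = np.array([5.0, 5.0]), 4.0

mu = 0.0
for _ in range(30):
    y = np.clip(c - mu / (2 * a), 0.0, xmax)        # dual subproblems via (40)
    if mu == 0.0 and y.sum() <= C:
        y_hat = y                                   # decoupled optimum
        break
    y_hat = y - (y.sum() - C) / len(y)              # primal projection (shift)
    inner = (y_hat > 0) & (y_hat < xmax)
    lam = 2 * a[inner] * (c[inner] - y_hat[inner])  # primal subproblems via (39)
    mu = lam[np.argmin(np.abs(lam - mu))]           # dual projection
print(mu, y_hat)  # tends to mu* = 4, y* = [1, 3] for this instance
```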

5 Applications and numerical results

In the sequel, we present three different applications of the proposed method: the first two are related to power allocation problems and the third one deals with DBA in satellite networks. In the first problem, a decentralized solution is required to reduce the amount of signaling information. In the second one, a centralized implementation of the CDM is used to solve a time-varying water-filling problem and the aim is to show the benefits of having an unsupervised method in such changing conditions. Finally, the third example shows the advantages of the proposed technique when a small allocation time is required in order to accommodate a large number of users.

5.1 Decentralized power allocation for cognitive radios

Let us consider a communication device that is able to establish simultaneous communication links by joining several networks or using multiple channels within the same system (e.g., this is possible in IEEE 802.11n). To do so, the device integrates multiple radio transceivers [23, 24] which, in turn, operate over multiple subchannels or subcarriers in order to combat multipath fading (see Figure 4). We assume that the device can sense the wireless channel and determine the non-used subcarriers in each subsystem, as is usual in cognitive scenarios. Furthermore, each transceiver is able to optimally allocate the available power among its subcarriers using the water-filling solution. This is advantageous from the system design point of view because we can employ off-the-shelf radio transceivers and simply balance the device power among them. Finally, there is a central controller that performs the distribution task, the global objective being to maximize the total sum rate capacity. Note that, depending on the signal strength and capacity in each subsystem, some of the transceivers may remain temporarily idle.

Figure 4. Example of a cognitive device.

The problem can be formulated for M radio transceivers as

$$\max_{\{P_i\}} \; \sum_{i=1}^{M} r_i(P_i) \quad \text{s.t.} \quad \sum_{i=1}^{M} P_i \le P_T, \quad P_i \ge 0, \; i = 1, \dots, M$$
(41)

where r i (P i ) is the transmission rate of the i th transceiver when power P i is allocated to it and P T is the total available power. Each of the transmission rates is actually the result of another optimization problem, that is,

$$r_i(P_i) = \max_{\{p_{i,j}\}} \; \sum_{j=1}^{N_s^i} BW_i \log \left( 1 + \frac{p_{i,j} H_{i,j}}{N_0 BW_i} \right) \quad \text{s.t.} \quad p_{i,j} \ge 0, \quad \sum_{j=1}^{N_s^i} p_{i,j} \le P_i$$
(42)

where $N_s^i$ is the number of subcarriers of the $i$th radio, $BW_i$ stands for its subcarrier bandwidth, $N_0$ is the noise power spectral density, and $H_{i,j}$ is the channel gain at the $j$th subcarrier of the $i$th transceiver.

Note that given the separability of the problem, i.e., there are many independent transceivers coupled by a total power constraint, a decentralized optimization method is adequate both from a mathematical and a practical point of view. In this approach, the controller decides the total power per transceiver and each subsystem computes its own optimal allocation. The application of the CDM to solve (41) is briefly detailed next.

Given $\mu^k < \mu^\star$, each transceiver computes the following dual subproblem,

$$\max_{\{p_{i,j}\},\, P_i^k} \; \sum_{j=1}^{N_s^i} BW_i \log \left( 1 + \frac{p_{i,j} H_{i,j}}{N_0 BW_i} \right) - \mu^k P_i^k \quad \text{s.t.} \quad \sum_{j=1}^{N_s^i} p_{i,j} \le P_i^k, \quad p_{i,j} \ge 0$$
(43)

and the application of the KKT conditions gives the solution

$$p_{i,j} = BW_i \left( \frac{1}{\mu^k} - \frac{N_0}{H_{i,j}} \right)^{+}, \qquad P_i^k = \sum_{j=1}^{N_s^i} p_{i,j}$$
(44)
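As an illustration, the per-transceiver computation in (44) is essentially a one-liner; in the sketch below (names are ours), H holds the subcarrier gains $H_{i,j}$ of one transceiver.

```python
import numpy as np

def dual_subproblem_power(mu, H, BW, N0):
    """Solution (44) of the dual subproblem (43) for one transceiver:
    water-filling at water level 1/mu, returning the per-subcarrier
    powers and the total power P_i^k requested at price mu."""
    p = BW * np.maximum(1.0 / mu - N0 / H, 0.0)
    return p, p.sum()
```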

As a result of the dual subproblems we obtain $P^k = [P_1^k, \dots, P_M^k]^T$ and we use it as the input of the primal projection, that is,

$$\min_{\hat{P}^k} \; \| \hat{P}^k - P^k \|^2 \quad \text{s.t.} \quad \sum_{i=1}^{M} \hat{P}_i^k = P_T, \quad \hat{P}_i^k \ge 0$$
(45)

where $\hat{P}^k = [\hat{P}_1^k, \dots, \hat{P}_M^k]^T$.

The corrected values $\hat{P}_i^k$ are then used in the primal subproblems defined in (42) and each transceiver computes the optimal allocation on its own. As a result of the primal subproblems, the Lagrange multipliers associated to the constraints $\sum_{j=1}^{N_s^i} p_{i,j} \le \hat{P}_i^k$ at the $k$th iteration, i.e., $\hat{\lambda}_i^k$, are obtained. Discarding the values that result from $\hat{P}_i^k = 0$, we obtain the reduced list $\{\breve{\lambda}_i^k\}$ and finally, the dual projection updates $\mu^k$ from $\{\breve{\lambda}_i^k\}$ by doing

$$\mu^{k+1} = \min \{\breve{\lambda}_i^k\}$$
(46)

A completely different approach is to gather all the information at the controller and to compute the optimal power allocation there. Afterwards, the result is sent back to the transceivers. Note that this centralized solution has an important drawback in terms of signaling because the powers in all the subcarriers of all the transceivers need to be exchanged. On the contrary, decentralized solutions benefit from transceiver-level signaling. In the numerical results below, we compare the CDM to other approaches.

5.1.1 Numerical results

We consider a device with three different OFDM transmitters. The first transmitter employs 256 subcarriers spanning a total bandwidth of 1.536 MHz (6 kHz per subchannel), the second one has 256 subcarriers as well and 3.072 MHz of bandwidth (12 kHz per subchannel), and the third one manages 128 subcarriers in 1.28 MHz of bandwidth (10 kHz per subchannel), so that a total of 640 subcarriers and 5.888 MHz have to be controlled (see Table 1). We assume frequency-selective Rayleigh-fading channels in all three systems with a channel length of 20 taps and an exponential power delay profile where the delay spread is 1 ms. The mean channel gain is 0 dB in system 1, −10 dB in system 2, and −5 dB in system 3. Moreover, we assume that the noise power spectral density is flat over frequency with $N_0 = \sigma_n^2 / BW_1$, where $BW_1$ is the subcarrier bandwidth in system 1. Initially, we set up a uniform power allocation in all the methods and the total available power is always $P_T(\mathrm{dB}) = \sigma_n^2(\mathrm{dB}) + 10 \log_{10}(640) + 5$.

Table 1 Description of the subsystems

Figure 5 shows a multi-system water-filling allocation example. The plot at the top depicts one channel realization for the three systems whereas the plot at the bottom shows the optimal power allocation. As expected, most of the power is allocated to transceivers 1 and 3, which are the ones that have the best channel condition. On the contrary, subsystem 2 only allocates power to a few subcarriers that have the highest channel gains. Notwithstanding, in absolute terms, transceiver 2 receives quite a large allocation in order to exploit the higher subcarrier bandwidth.

Figure 5. Distributed water-filling example. Top: channel gains. Bottom: power allocation. In $f_1$–$f_3$ we have the initial band frequencies of subsystems 1–3.

Figure 6 shows the evolution of the Normalized Mean Squared Error (NMSE) in the power allocation with respect to the number of messages exchanged between the transmission subsystems and the central controller. The optimal power allocation is computed using the bisection method (relative error below $10^{-5}$). We compare the proposed CDM to the classical primal and dual decomposition techniques, the classical primal–dual algorithm of Arrow et al. [16], and also to a centralized approach. The classical decomposition techniques use $\alpha^k = 1/k$ as step-size and the Arrow–Hurwicz method initializes the dual variable $\mu$ to 0 and the primal variables with a uniform power allocation, with the step-size fixed to 0.1. On the one hand, results show that the proposed CDM is the best option, whereas the remaining alternatives require at least twice the amount of signaling in order to achieve the same allocation error. On the other hand, note that the classical decomposition techniques as well as the primal–dual approach are penalized in terms of convergence speed, even taking into account that we have manually adjusted the step-size of each method in order to achieve the best possible result. Finally, note also that a centralized approach is not efficient at all, since the allocation error becomes small enough only when the entire allocation has been transmitted. This requires 640 messages in our case to send the channel gains to the controller and 640 messages more to return the optimal power allocation values to the radios.

Figure 6. NMSE versus number of signaling messages.

5.2 Power allocation in a conventional OFDM transmission

In the following, we apply our method to a classical water-filling problem where a decentralized solution is not necessary. On this occasion, we are interested in the adaptability of the method in time-varying scenarios.

Let us consider the well-known single-user water-filling solution over parallel Gaussian channels ([25], Sec. 10.4), which provides the optimal power allocation to the subcarriers of an OFDM-based system in order to maximize the mutual information given a total power constraint. Mathematically,

$$\max_{\{p_i\}} \; \sum_{i=1}^{N_s} \log \left( 1 + \frac{p_i}{\sigma_{n_i}^2} \right) \quad \text{s.t.} \quad p_i \ge 0, \quad \sum_{i=1}^{N_s} p_i \le P$$
(47)

where $N_s$ is the total number of subcarriers or parallel channels in the system, $P$ is the total transmission power, $\sigma_{n_i}^2$ is the noise variance in the $i$th subcarrier, and $p_i$ stands for the allocated power. The application of the KKT optimality conditions to (47) leads to the solution

$$p_i = \left( \frac{1}{\mu} - \sigma_{n_i}^2 \right)^{+}$$
(48)

where $(a)^+ = \max\{0, a\}$ and $1/\mu$ is denoted as the water-level, which shall be chosen in order to satisfy the total power constraint. Typically, the bisection method is employed to find $\mu^\star$. However, note that (47) can be rewritten in the form of (2) and also of (19). Therefore, we can apply the proposed CDM as well. Indeed, (48) and the relationship in (20) match if we identify $p_i$ with $y_i$ and $\mu$ with $\lambda_i$ (remember that the required relationship applies only to $y_i \notin \operatorname{bd} \mathcal{Y}_i$, that is, $y_i = p_i > 0$).
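For reference, the baseline bisection approach to (47)–(48), used below for comparison, can be sketched as follows (a plain sketch with our own variable names).

```python
import numpy as np

def waterfill_bisection(sigma2, P, tol=1e-10):
    """Single-user water-filling (48): p_i = (1/mu - sigma_i^2)^+, with the
    water level 1/mu found by bisection so that sum(p_i) = P."""
    sigma2 = np.asarray(sigma2, dtype=float)
    lo, hi = 0.0, sigma2.max() + P     # bracket for the water level 1/mu
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        if np.maximum(level - sigma2, 0.0).sum() > P:
            hi = level
        else:
            lo = level
    return np.maximum(0.5 * (lo + hi) - sigma2, 0.0)
```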

5.2.1 Numerical results

Let us assume $N_s = 512$ subcarriers. The channel is time-varying and frequency selective; it has 20 taps. The power delay profile is assumed exponential with a delay spread of 1 ms and the baseband sampling time is 1 μs. We now compare the proposed CDM to the bisection method and also to the classical primal–dual algorithm in [16]. It is remarkable that the CDM requires no modification at all (it is completely unsupervised) and the same holds for the primal–dual algorithm. On the contrary, the bisection method requires a slight modification to be able to track the time-varying scenario. For that purpose, we introduce the updating factor $\alpha_u$. Initially, the method is applied as usual, that is, having the initial hypotheses $\mu_l^0$ and $\mu_u^0$ (two values that are below and above $\mu^\star$, respectively), we compute $\mu^1 = \frac{1}{2}(\mu_l^0 + \mu_u^0)$ and we update $\mu_l^1$ to $\mu^1$ if $\sum_{i=1}^{N_s} p_i(\mu^1) > P$ or $\mu_u^1$ to $\mu^1$ otherwise. In the subsequent iterations, given that the channel is time-varying, we need to check first whether $\mu_l^k$ and $\mu_u^k$ are still valid. If $\sum_{i=1}^{N_s} p_i(\mu_l^k) > P$ is not accomplished, we update $\mu_l^k$ to $\mu_l^k / \alpha_u$ and repeat this until $\sum_{i=1}^{N_s} p_i(\mu_l^k) > P$. Similarly, if $\sum_{i=1}^{N_s} p_i(\mu_u^k) < P$ is not attained, we modify $\mu_u^k$ to $\alpha_u \mu_u^k$ and repeat this until $\sum_{i=1}^{N_s} p_i(\mu_u^k) < P$. Then, we compute $\mu^{k+1} = \frac{1}{2}(\mu_l^k + \mu_u^k)$ and we update the hypotheses accordingly, as in the normal version of the technique. A sketch of this tracking step is given below.
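One tracking step of this modified bisection can be sketched as follows, with total_power(mu) standing for a hypothetical callable that evaluates $\sum_{i=1}^{N_s} p_i(\mu)$ on the current channel.

```python
def tracked_bisection_step(mu_l, mu_u, total_power, P, alpha_u=1.05):
    """One step of the modified bisection in a time-varying channel.

    The brackets mu_l < mu* < mu_u are first re-validated with the
    updating factor alpha_u; then the usual bisection step is taken.
    """
    while total_power(mu_l) <= P:      # lower hypothesis no longer valid
        mu_l /= alpha_u
    while total_power(mu_u) >= P:      # upper hypothesis no longer valid
        mu_u *= alpha_u
    mu = 0.5 * (mu_l + mu_u)
    if total_power(mu) > P:
        mu_l = mu
    else:
        mu_u = mu
    return mu, mu_l, mu_u
```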

Figure 7 plots the NMSE of the power allocation for the methods as a function of the mean SNR. As in the previous application example, we compute the optimal power allocation using the bisection method (relative error below $10^{-5}$). Moreover, all the algorithms are initialized to the optimal solution for the current channel condition, $\alpha_u = 1.05$ in the bisection technique, the step-size is fixed to 0.001 in the Arrow–Hurwicz method, and we have considered three different channel velocities, namely, (i) $T_{c_1} = 10 \cdot T_{\mathrm{CDM}}$, (ii) $T_{c_2} = 100 \cdot T_{\mathrm{CDM}}$, and (iii) $T_{c_3} = 1000 \cdot T_{\mathrm{CDM}}$, where $T_{\mathrm{CDM}}$ is the time taken by one complete iteration of the CDM and $T_{c_i}$ is the coherence time of the channel in the $i$th scenario. Note that we have manually adjusted $\alpha_u$ in the bisection method and the step-size in the primal–dual algorithm in order to achieve the best possible performance at the worst channel condition, that is, when the channel coherence time is the smallest one, i.e., $T_{c_1}$.

Figure 7. Normalized MSE of the power allocation versus SNR in a time-varying channel.

Results show that the CDM usually outperforms the bisection method and is far better than the primal–dual algorithm. Indeed, it performs worse than the bisection only for $T_{c_1}$ and at low SNR. Note that since the CDM has no user-defined parameter, it automatically adapts to the different channel velocities. On the contrary, this adaptation does not occur in the other two methods. This is reflected in Figure 7, where, for example, the bisection method saturates to an NMSE around $10^{-4}$ for $T_{c_1}$, $T_{c_2}$, and $T_{c_3}$ as the SNR grows.

5.3 Fair DBA

The fair DBA problem arises in many-to-one communication systems [26, 27], where the goal is to fairly distribute the available bandwidth. In many cases, and especially in systems with a huge number of users [28], the computational cost of the techniques plays an important role. Additionally, let us remark that recent works on the topic aim at providing mechanisms for QoS differentiation [4, 29] to modify a plain fair allocation. Therefore, we consider the following network utility maximization (NUM) formulation to solve a fair DBA problem,

$$\max_{\{r_j\}} \; \sum_{j=1}^{N} U_j(r_j; p_j) \quad \text{s.t.} \quad m_j \le r_j \le d_j, \; j = 1, \dots, N, \quad \sum_{j=1}^{N} r_j \le B,$$
(49)

where $B$ is the available bandwidth, $r_j$ is the rate allocated to the $j$th flow, and $U_j$ is the $j$th utility function (the terms bandwidth and rate are used interchangeably). The parameters $m_j$, $d_j$ (with $0 \le m_j < d_j$), and $p_j > 0$ are used to define the QoS requirements of each ongoing connection and they represent the minimum necessary rate, the required (maximum) bandwidth, and the priority of the $j$th flow, respectively. Furthermore, we assume that $\sum_j m_j < B < \sum_j d_j$, i.e., the problem is coupled. As argued before, the utility functions can be adequately chosen in order to achieve a fair distribution of resources in different degrees. The following family of functions parameterized by $\gamma$

$$U_j(r_j; p_j, \gamma) = \begin{cases} p_j \log(r_j), & \gamma = 1 \\ p_j \dfrac{r_j^{(1-\gamma)}}{1-\gamma}, & \gamma \ne 1 \end{cases}$$
(50)

defines different types of fairness, $\gamma \to \infty$ (max–min fairness) and $\gamma = 1$ (proportional fairness) being the most relevant ones [29].

Note that (49) can be rewritten in the form of (19); in particular, the problem is strictly convex and we assume that strong duality holds, i.e., there is at least one strictly feasible point. Therefore, we can apply the KKT optimality conditions to solve (49) semi-analytically. In this case, the optimal rates must verify

$$r_j^\star(\mu) = \left[ \left( \frac{p_j}{\mu} \right)^{1/\gamma} \right]_{m_j}^{d_j}$$
(51)

where $[\,\cdot\,]_{m_j}^{d_j}$ denotes the projection onto the interval $[m_j, d_j]$, and the optimal value of $\mu$ is such that $\sum_{j=1}^{N} r_j^\star(\mu^\star) = B$. The bisection method is a classical technique widely used in the literature to approximate $\mu^\star$ but, alternatively, we can also apply the enhanced version of the CDM. Specifically, by adding the new variables $\{y_j\}$ and identifying $f_j(r_j)$ with $-U_j(r_j)$ and $h_j(r_j) = r_j$, (23) together with $r_j = h_j^{-1}(y_j)$ turns (51) into

$$y_j = \left( \frac{p_j}{\lambda_j} \right)^{1/\gamma}$$
(52)

when $m_j < y_j < d_j$, which has the required form in (20). Therefore, once the subsets $\mathcal{S}$, $\mathcal{I}$, and $\mathcal{A}$ are known, the optimal value of $\mu$ is readily found according to (25) as

$$\mu^\star = \left( \frac{\sum_{i \in \mathcal{A}} p_i^{1/\gamma}}{B - \sum_{i \in \mathcal{I}} m_i - \sum_{i \in \mathcal{S}} d_i} \right)^{\gamma}.$$
(53)
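A compact sketch of this semi-analytic solution (bisection to locate the saturated flows, then (53) for the exact price; a baseline rather than the enhanced CDM itself) could be:

```python
import numpy as np

def fair_dba(p, m, d, B, gamma):
    """Fair-DBA allocation solving (49) through (51) and (53).

    Assumes a coupled problem, i.e., sum(m) < B < sum(d). Bisection on mu
    identifies the sets A, I and S; (53) then gives mu* in closed form.
    """
    p, m, d = (np.asarray(v, dtype=float) for v in (p, m, d))
    rate = lambda mu: np.clip((p / mu) ** (1.0 / gamma), m, d)

    lo, hi = 1e-9, 1.0
    while rate(hi).sum() > B:          # grow hi until demand falls below B
        hi *= 2.0
    for _ in range(60):                # bisection on the coupling price mu
        mu = 0.5 * (lo + hi)
        lo, hi = (mu, hi) if rate(mu).sum() > B else (lo, mu)
    r = rate(hi)

    A = (r > m) & (r < d)              # unsaturated flows (set A)
    mu_star = ((p[A] ** (1.0 / gamma)).sum()
               / (B - m[r <= m].sum() - d[r >= d].sum())) ** gamma
    return np.clip((p / mu_star) ** (1.0 / gamma), m, d), mu_star
```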

5.3.1 Numerical results

Let us draw the values of $m_j$ from an integer uniform distribution between 0 and 10. Each request $d_j$ is obtained by summing $m_j$ and an integer random number between 0 and 100. The priority values $p_j$ are drawn from a uniform distribution that takes values between 0.25 and 5 in steps of 0.25, and $\gamma = 1$. Figure 8 plots the mean allocation time, i.e., execution time, of the CDM when centrally computed in combination with the stopping criterion defined in Section 4.2. The algorithm has been executed on an Intel Core 2 Duo CPU running at 2.2 GHz and programmed in Matlab. We have considered three different values for the total available bandwidth, namely $B_1 = \sum_j m_j + 0.25 \sum_j d_j$, $B_2 = \sum_j m_j + 0.5 \sum_j d_j$, and $B_3 = \sum_j m_j + 0.75 \sum_j d_j$. The results of the CDM have been compared to the classical bisection method and to the hypothesis testing method [30]. Since the allocation time is not sensitive to the available capacity in the latter methods, in Figure 8 we distinguish among $B_1$, $B_2$, and $B_3$ only for the CDM.

Figure 8. Allocation time.

In order to provide a fair comparison among the methods, it is necessary to take into account the accuracy with respect to the optimal solution. The hypothesis testing strategy always achieves the exact optimal solution (see the details in [30]). The bisection method has been adjusted to achieve a relative error in the allocation lower than $10^{-6}$, or in other words,

$$\frac{\| r_{\mathrm{BI}} - r^\star \|}{\| r^\star \|} \le 10^{-6},$$
(54)

where $r^\star$ is the optimal allocation (which can be obtained with the hypothesis testing method) and $r_{\mathrm{BI}}$ is the allocation achieved by the bisection method. Initially, the two hypotheses for the values of $\mu$ are 0 and 10. In the CDM, we stop the iterations when

$$\frac{| SC^{k+1} - SC^k |}{| SC^{k+1} |} \le 10^{-2}.$$
(55)

Note that as the number of users grows, the difference in time between the proposed method and the others also grows, especially when the system is more restricted in terms of capacity, i.e., for $B_1$. In this case, the CDM is able to compute the allocation in half the time required by classical methods. In terms of accuracy of the solution, (55) gives the exact optimal solution for $B_1$, $B_2$ and a relative allocation error lower than $10^{-4}$ for $B_3$. Overall, the solution is good in practice; it is optimal in capacity-constrained scenarios and nearly optimal in less critical situations. In order to illustrate the selection of the threshold for the stopping criterion, we plot in Figure 9 the evolution of the relative error and allocation time as a function of the accuracy of the stopping criterion. Note that a threshold of $10^{-2}$ provides a good trade-off between both performance metrics for the worst scenario, i.e., for $B_3$. Finally, if we consider a higher available bandwidth, e.g., 99% of the whole system demand, this threshold value keeps the allocation time as small as in $B_3$ at the expense of a higher allocation error (around 5%). However, the accuracy degradation appears only in this extreme case and it is not critical in practice, since all the users nearly reach their demands.

Figure 9. Precision adjustment of the stopping criterion.

6 Conclusions and future work

This article has contributed novel decomposition ideas that efficiently intertwine the classical primal and dual decomposition approaches within a single iteration of a new technique, called the CDM. It solves generic convex optimization problems that have one coupling constraint, with the known advantages of decomposition-based approaches, that is, the possibility of implementing decentralized solutions. However, it reduces the number of iterations by more than one order of magnitude with respect to the classical primal and dual decomposition solutions and, furthermore, it is completely unsupervised, that is, no parameter requires manual adjustment. Moreover, when the problem is particularized (but still of interest), additional results regarding the convergence rate of the proposed technique are obtained, and a stopping criterion that enhances the performance of the method (in terms of the number of iterations required to achieve the optimal solution) is derived.

The proposed method has been tested in three different problems, two dealing with power allocation in OFDM-based systems and a third one dealing with DBA. In the first two cases, the goal is to find the well-known water-filling solution in power. In one case, we benefit from a decentralized approach that suits the system architecture, whereas in the other case, the proposed method is applied to a conventional OFDM transmission that deals with a time-varying channel. In both examples, we have compared our solution with other decomposition strategies, and our approach performs significantly better than the available alternatives when a decentralized solution is required. In particular, our results show that the signaling requirements can be reduced by at least a factor of 2. Moreover, in the centralized application of the method, that is, the conventional OFDM transmission, the proposed method benefits from being unsupervised, and the channel variability does not compromise its performance as it does for classical methods. In particular, in the high SNR regime, the difference in the NMSE of the power allocation between the proposed method and bisection is more than two orders of magnitude in all the explored scenarios.

Finally, when applied to a NUM problem and thanks to the enhanced version of the method, the proposed technique reduces the computational time by a factor of 2 with respect to well-established techniques such as the bisection method. This reduction is very important in systems with a large number of users (as happens in satellite communication networks), where the bandwidth allocation has to be computed within a short time interval.

Appendix

Proof of Proposition 1

For the sake of simplicity, in what follows we omit the iteration index k as well as the hat modifier $(\hat{\cdot})$ in $\hat{y}_j^k$, $\hat{\lambda}_j^k$, and $\mu^k$. Let us first consider the dependence of $\lambda_j$ on $y_j$ in the primal subproblems. Interestingly, the function $p_j$ in (13) has already been studied in the convex optimization literature and is known as the primal function ([15], Sec. 5.4.4). We recall two main results related to the primal function: (i) $p_j$ defines a convex function over the set $P_j = \{ y_j \mid p_j(y_j) < \infty \}$, and (ii) the optimal value of the dual variable associated with the constraint $h_j(x_j) \le y_j$, taken with opposite sign, i.e., $-\lambda_j(y_j)$, is a subgradient of $p_j$ at $y_j$. In our case, note that $Y_j \subseteq P_j\ \forall j$, and thus these results can be applied to (13). Furthermore, since $p_j$ is convex in the region of interest, it is guaranteed to be continuous, although not necessarily differentiable. However, given that the objective function as well as all the constraints in the problem (a finite number of them) are differentiable, for each singular point in $Y_j$ we can find an open interval containing it where $p_j$ is differentiable (except, of course, at the singular point itself). Then, taking into account that the first derivative of a convex function is non-decreasing by definition and noting that the subgradient equals the gradient wherever the function is differentiable, we obtain the desired result. In other words, $-\lambda_j(y_j)$ is non-decreasing in the intervals where $p_j$ is differentiable, just because the subgradient and the first derivative coincide, whereas it takes a value between the left and right derivatives at the singular points, thus preserving the non-decreasing property. Finally, by removing the minus sign, we can state that $\lambda_j(y_j)$ is non-increasing in $y_j$.

Second, we prove that $y_j(\mu)$ is non-increasing in $\mu$ in (10). Let us first rewrite (10) as

$$d_j(\mu) = \min_{y_j}\; p_j(y_j) + \mu y_j$$
(56)

Then take any two values of $\mu$, say $\mu_1$ and $\mu_2$, that satisfy: (i) $0 \le \mu_1 < \mu_2$, and (ii) there exist two values in $Y_j$, $y_{j,1}$ and $y_{j,2}$, such that $d_j(\mu_1) = p_j(y_{j,1}) + \mu_1 y_{j,1}$ and $d_j(\mu_2) = p_j(y_{j,2}) + \mu_2 y_{j,2}$. In other words, $y_{j,1}$ and $y_{j,2}$ are minimizers of $d_j(\mu_1)$ and $d_j(\mu_2)$, respectively. Now, since $y_{j,1}$ is not necessarily a minimizer of $d_j(\mu_2)$, we can establish the following inequality,

$$d_j(\mu_2) = p_j(y_{j,2}) + \mu_1 y_{j,2} + (\mu_2 - \mu_1)\, y_{j,2} \le p_j(y_{j,1}) + \mu_1 y_{j,1} + (\mu_2 - \mu_1)\, y_{j,1}$$
(57)

Next, we proceed by contradiction, assuming $y_{j,2} > y_{j,1}$. In this case, it holds that (i) $(\mu_2 - \mu_1) y_{j,2} > (\mu_2 - \mu_1) y_{j,1}$ since $\mu_2 - \mu_1 > 0$, and (ii) $p_j(y_{j,2}) + \mu_1 y_{j,2} \ge p_j(y_{j,1}) + \mu_1 y_{j,1}$ since $y_{j,1}$ is a minimizer of $d_j(\mu_1)$. Observations (i) and (ii) together contradict the inequality in (57), and therefore $y_{j,2}$ must necessarily satisfy $y_{j,2} \le y_{j,1}$. This proves the second statement.
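For clarity, observations (i) and (ii) can be chained into a single strict inequality (a compact restatement of the argument above, ours):

$$\begin{aligned} d_j(\mu_2) &= p_j(y_{j,2}) + \mu_1 y_{j,2} + (\mu_2 - \mu_1)\, y_{j,2} \\ &\ge p_j(y_{j,1}) + \mu_1 y_{j,1} + (\mu_2 - \mu_1)\, y_{j,2} && \text{by (ii)} \\ &> p_j(y_{j,1}) + \mu_1 y_{j,1} + (\mu_2 - \mu_1)\, y_{j,1} && \text{by (i)}, \end{aligned}$$

which strictly exceeds the upper bound established in (57).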

Proof of Proposition 2

Let us apply the KKT optimality conditions corresponding to (12). The Lagrangian is

$$L\left(\hat{\mathbf{y}}^k, \mu, \boldsymbol{\alpha}, \boldsymbol{\beta}\right) = \sum_{j=1}^{J} \left( y_j^k - \hat{y}_j^k \right)^2 + \mu \left( \sum_{j=1}^{J} \hat{y}_j^k - C \right) + \boldsymbol{\alpha}^T \left( \mathbf{y}_{\min} - \hat{\mathbf{y}} \right) + \boldsymbol{\beta}^T \left( \hat{\mathbf{y}} - \mathbf{y}_{\max} \right)$$
(58)

where $\mathbf{y}_{\min} = [\min Y_1, \ldots, \min Y_J]$ and $\mathbf{y}_{\max} = [\max Y_1, \ldots, \max Y_J]$ and, if we look at the optimality condition $\partial L / \partial \hat{y}_j^k = 0$, we get

$$\hat{y}_j^k = y_j^k - \mu - \beta_j + \alpha_j, \quad j = 1, \ldots, J$$
(59)

Therefore, assuming that the $j$-th optimal primal value, i.e., $\hat{y}_j^{k,\star}$, lies inside the interval $(\min Y_j, \max Y_j)$, then $\beta_j = \alpha_j = 0$ due to complementary slackness and $\hat{y}_j^k = y_j^k - \mu$ (with $\mu \in \mathbb{R}$). If this is not the case, then either $\hat{y}_j^{k,\star} = \min Y_j$ or $\hat{y}_j^{k,\star} = \max Y_j$. In both cases, note that we can choose an adequate value of the free dual multiplier $\alpha_j$ or $\beta_j$, respectively, in order to satisfy (59). Finally, taking these results into account, we can conclude that the solution to the primal projection is

$$\hat{y}_j^k = \left[ y_j^k - \mu \right]_{\min Y_j}^{\max Y_j}$$
(60)

where $\mu$ is adjusted so that $\sum_{j=1}^{J} \hat{y}_j^k = C$. Since $\mathbf{y}^k \neq \mathbf{y}^\star$ implies $\sum_j y_j^k > C$ (unless $y_j^k = y_j^\star\ \forall j$), and $\sum_{j=1}^{J} y_j^\star = C$, it is necessary that $\mu > 0$.
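As a numerical companion to (60), the following sketch (ours, not part of the original article) computes the projection by bisecting on $\mu$; it assumes the feasibility condition $\sum_j \min Y_j \le C \le \sum_j \max Y_j$.

```python
import numpy as np

def primal_projection(y, y_min, y_max, C, iters=100):
    """Project y onto {z : sum(z) = C, y_min <= z <= y_max} using (60):
    z_j = clip(y_j - mu, y_min_j, y_max_j) with mu such that sum(z) = C."""
    lo = np.min(y - y_max)   # at this mu every entry saturates at y_max_j
    hi = np.max(y - y_min)   # at this mu every entry saturates at y_min_j
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.clip(y - mu, y_min, y_max).sum() > C:
            lo = mu          # sum still too large: increase mu
        else:
            hi = mu
    mu = 0.5 * (lo + hi)
    return np.clip(y - mu, y_min, y_max), mu
```

In the setting of the proposition, where $\sum_j y_j^k > C$, the returned multiplier is strictly positive, as stated.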

Proof of Proposition 3

From Proposition 2, non-optimal points $\mathbf{y}$ with $\sum_{j=1}^{J} y_j > C$ have their components diminished by the primal projection in order to achieve $\sum_{j=1}^{J} \hat{y}_j = C$, unless $y_j = \min Y_j$, in which case $\hat{y}_j = y_j$. Let us distinguish two subsets of variables: $\mathcal{I}$ contains the indexes of the values $y_j$ that attain $y_j = \min Y_j$, and $\bar{\mathcal{I}}$ the rest. Note that $\bar{\mathcal{I}}$ contains exactly the indexes of the $\{\breve{\lambda}_j\}$ candidates used in the dual projection. Then it is true that

$$\hat{y}_j \le y_j^\star,\ \forall j \in \mathcal{I} \;\Longrightarrow\; \sum_{j \in \mathcal{I}} \hat{y}_j \le \sum_{j \in \mathcal{I}} y_j^\star,$$
(61)

and therefore,

$$\sum_{j \in \bar{\mathcal{I}}} \hat{y}_j > \sum_{j \in \bar{\mathcal{I}}} y_j^\star$$
(62)

since the equality constraints $\sum_{j=1}^{J} \hat{y}_j = \sum_{j=1}^{J} y_j^\star = C$ are always fulfilled. This fact assures that there is at least one value $\hat{y}_j$ with $j \in \bar{\mathcal{I}}$ that attains $\hat{y}_j > y_j^\star$, unless all the values are already optimal.

Endnotes

a. Notation: $\preceq$, $\prec$, $\succeq$, and $\succ$ stand for component-wise inequalities.

b. The vector $\mathbf{s}$ is a subgradient of the function $f: \mathbb{R}^n \to \mathbb{R}$ at $\mathbf{x} \in \mathbb{R}^n$ if $f(\mathbf{y}) \ge f(\mathbf{x}) + (\mathbf{y} - \mathbf{x})^T \mathbf{s},\ \forall \mathbf{y} \in \mathbb{R}^n$. If $f$ is differentiable at $\mathbf{x}$, the subgradient $\mathbf{s}$ and the gradient $\nabla f(\mathbf{x})$ coincide. Otherwise, there exist many subgradients.

c. The notation $x_j^\star(\mu^k)$ stands for the optimal solution of the $j$-th subproblem given $\mu^k$.

d. Notation: $\odot$ stands for the vector element-wise product. If $\mathbf{a} = [a_1, a_2, \ldots, a_N]^T$ and $\mathbf{b} = [b_1, b_2, \ldots, b_N]^T$, then $\mathbf{a} \odot \mathbf{b} = [a_1 b_1, a_2 b_2, \ldots, a_N b_N]^T$.

e. Notation: $[a]_{m_j}^{d_j}$ equals $a$ if $m_j < a < d_j$, $m_j$ if $a \le m_j$, and $d_j$ if $a \ge d_j$.

References

1. Bertsekas D, Nedić A, Ozdaglar A: Convex Analysis and Optimization. Belmont, MA, USA: Athena Scientific; 2003.

2. Boyd S, Vandenberghe L: Convex Optimization. Cambridge, UK: Cambridge University Press; 2004.

3. Palomar D, Cioffi J, Lagunas M: Joint Tx-Rx beamforming design for multicarrier MIMO channels: a unified framework for convex optimization. IEEE Trans. Signal Process. 2003, 51(9):2381-2401.

4. Yaïche H, Mazumdar R, Rosenberg C: A game theoretic framework for bandwidth allocation and pricing in broadband networks. IEEE/ACM Trans. Netw. 2000, 8(5):667-678.

5. Xiao L, Johansson M, Boyd S: Simultaneous routing and resource allocation via dual decomposition. IEEE Trans. Commun. 2004, 52(7):1136-1144.

6. Palomar D, Chiang M: Alternative decompositions for distributed maximization of network utility: framework and applications. IEEE Trans. Autom. Control 2007, 52(12):2254-2269.

7. Tan CW, Palomar D, Chiang M: Distributed optimization of coupled systems with applications to network utility maximization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Toulouse; May 2006:981-984.

8. Liao S, Cheng W, Liu W, Yang Z, Ding Y: Distributed optimization for utility-energy tradeoff in wireless sensor networks. In IEEE International Conference on Communications (ICC). Glasgow; June 2007:3190-3194.

9. Xiao L, Boyd S: Optimal scaling of a gradient method for distributed resource allocation. J. Optim. Theory Appl. 2006, 129(3):469-488.

10. Palomar D, Fonollosa J: Practical algorithms for a family of waterfilling solutions. IEEE Trans. Signal Process. 2005, 53(2):686-695.

11. Palomar DP, Bengtsson M, Ottersten B: Minimum BER linear transceivers for MIMO channels via primal decomposition. IEEE Trans. Signal Process. 2005, 53(8):2866-2882.

12. Scaglione A, Barbarossa S, Giannakis GB: Optimal adaptive precoding for frequency-selective Nakagami-m fading channels. In IEEE 52nd Vehicular Technology Conference (VTC Fall 2000). Boston; September 2000:1291-1295.

13. Marqués AG, Digham FF, Giannakis GB: Optimizing power efficiency of OFDM using quantized channel state information. IEEE J. Sel. Areas Commun. 2006, 24(8):1581-1592.

14. Arkhangel'skii A, Fedorchuk V: The basic concepts and constructions of general topology. In General Topology I, Encyclopaedia of the Mathematical Sciences. New York: Springer; 1990.

15. Bertsekas D: Nonlinear Programming. Belmont, MA, USA: Athena Scientific; 1999.

16. Arrow KJ, Hurwicz L, Uzawa H: Iterative methods in concave programming. In Studies in Linear and Nonlinear Programming. Palo Alto: Stanford University Press; 1958:154-165.

17. Holmberg K, Kiwiel K: Mean value cross decomposition for nonlinear convex problems. Optim. Methods Softw. 2006, 21(3):401-417.

18. Holmberg K: Primal and dual decomposition as organizational design: price and/or resource directive decomposition. In Design Models for Hierarchical Organizations: Computation, Information, and Decentralization. Dordrecht: Kluwer Academic Publishers; 1995:61-92.

19. Necoara I, Suykens JAK: Application of a smoothing technique to decomposition in convex optimization. IEEE Trans. Autom. Control 2008, 53(11):2674-2679.

20. Dinh QT, Necoara I, Savorgnan C, Diehl M: An inexact perturbed path-following method for Lagrangian decomposition in large-scale separable convex optimization. SIAM J. Optim. 2013, 23(1):95-125.

21. Tseng P, Bertsekas D: On the convergence of the exponential multiplier method for convex programming. Math. Program. 1993, 60(1-3):1-19.

22. Polyak R: Primal–dual exterior point method for convex optimization. Optim. Methods Softw. 2008, 23(1):141-160.

23. Akyildiz I, Mohanty S, Xie J: A ubiquitous mobile communication architecture for next-generation heterogeneous wireless systems. IEEE Commun. Mag. 2005, 43(6):S29-S36.

24. Wang D, Miao K, John V, Rungta S, Chan W: Considering wireless mesh network with heterogeneous multiple radios. In IEEE WiCom. Shanghai; September 2007:1681-1684.

25. Cover T, Thomas J: Elements of Information Theory. New York: Wiley; 1991.

26. IEEE: Air Interface for Fixed and Mobile Broadband Wireless Access Systems; Amendment 2: Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands and Corrigendum 1. IEEE Standards; 2006.

27. ETSI: Digital Video Broadcasting (DVB); Interaction Channel for Satellite Distribution Systems. ETSI EN 301 790; 2005.

28. Acar G, Rosenberg C: Weighted fair bandwidth-on-demand (WFBoD) for geostationary satellite networks with on-board processing. Comput. Netw. 2002, 39(1):5-20.

29. Mo J, Walrand J: Fair end-to-end window-based congestion control. IEEE/ACM Trans. Netw. 2000, 8(5):556-567.

30. Seco-Granados G, Vazquez-Castro M, Morell A, Vieira F: Algorithm for fair bandwidth allocation with QoS constraints in DVB-S2/RCS. In Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM). San Francisco, USA; November 2006:1-5.


Acknowledgements

This work is supported by the Spanish Government under project TEC2011-28219 and the Catalan Government under grant 2009 SGR 298.

Author information

Correspondence to Antoni Morell.


Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Morell, A., Vicario, J.L. & Seco-Granados, G. Coupled-decompositions: exploiting primal–dual interactions in convex optimization problems. EURASIP J. Adv. Signal Process. 2013, 41 (2013). https://doi.org/10.1186/1687-6180-2013-41
