The mixed Gibbs sampling (MGS) LS-MIMO detector proposed in [10] is revisited in this subsection; it is motivated by the need to solve the stalling problem present in the conventional GS detector.
To sample the estimated symbol at each position, a target distribution [20] is evaluated, given by:
$$ p\left(\hat{s}_{1},\hat{s}_{2},\dots,\hat{s}_{2K}|\mathbf{y}, \mathbf{H}\right) \propto \exp \left(- \frac{||\mathbf{y} - \mathbf{H} \hat{\mathbf{s}} ||^{2}}{\alpha^{2} \sigma^{2}}\right), $$
(5)
where \(\hat {s}_{i}\) denotes the ith position of the estimated symbol vector \(\hat {\mathbf {s}}\), and α denotes a positive parameter that tunes the mixing time of the Markov chain [20], also called the temperature. The conventional Gibbs sampling detector does not include the α parameter in its sampling process and can thus be viewed as the special case α=1. A larger temperature speeds up the mixing and aims to reduce the higher moments of the number of iterations needed to find the correct solution. However, as stated in [10], the stalling problem persists even with large α.
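For illustration, the unnormalized target in (5) can be evaluated directly, as in the following minimal NumPy sketch (the names target_unnorm, y, H, s_hat, alpha, and sigma2 are ours and map one-to-one to the symbols in (5)):

```python
import numpy as np

def target_unnorm(y, H, s_hat, alpha, sigma2):
    """Unnormalized target of (5): exp(-||y - H*s||^2 / (alpha^2 * sigma^2))."""
    residual = y - H @ s_hat
    return np.exp(-np.linalg.norm(residual) ** 2 / (alpha ** 2 * sigma2))
```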
The MGS detector mixes (a) conventional Gibbs sampling (i.e., α=1) and (b) the infinite-temperature version of (5) (i.e., α=∞), the latter producing a uniformly random sample from all possibilities, called a noisy or random solution in this paper. In this way, the MGS follows a sampling distribution given by:
$$ p\left(\hat{s}_{1},\dots,\hat{s}_{2K}|\mathbf{y}, \mathbf{H}\right) \sim \left(1-q\right) \psi \left(\alpha_{1}\right) + q \psi \left(\alpha_{2}\right) $$
(6)
and
$$ \psi (\alpha) = \exp \left(- \frac{||\mathbf{y} - \mathbf{H} \hat{\mathbf{s}} ||^{2}}{\alpha^{2} \sigma^{2}}\right), $$
(7)
where q denotes the mixing ratio. The MGS detector of [10] adopts the combination α1=1, α2=∞, which achieves near-ML performance, overcomes the stalling problem of the GS, and is also simple to implement. On the other hand, in high-order modulations, such as 64-QAM and 256-QAM, the noisy solution interferes with the algorithm's convergence: since the constellation contains a large number of symbols, a purely random solution in this signal space is likely to lie far from the true solution, causing the algorithm to require more iterations to converge. The proposed d-sMGS detector acts to mitigate this harmful effect.
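In code, the mixed draw in (6) with α1=1 and α2=∞ reduces to a simple branch, as the sketch below illustrates (probs_gibbs stands for the conditional probabilities of (9), evaluated later; rng is a NumPy random generator; all names are ours):

```python
import numpy as np

def sample_coordinate(probs_gibbs, A, q, rng):
    """Mixed draw of (6) with alpha1 = 1 and alpha2 = infinity."""
    if rng.random() < q:
        return rng.choice(A)             # infinite temperature: uniform (noisy) draw
    return rng.choice(A, p=probs_gibbs)  # conventional Gibbs draw (alpha = 1)
```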
Regarding the mixing ratio q, an analysis for low-order QAM constellations is carried out in [10], where a suitable choice is shown to be the inverse of the number of dimensions in the system, i.e., \(q=\frac {1}{2K}\); this value is also employed in the proposed detector in our numerical simulations.
In the MGS algorithm, an initial solution \(\hat {\mathbf {s}}^{(t=0)}\) is considered for the estimated symbol vector, where t represents the current iteration. The initial solution may be chosen either as a random symbol vector or as the output of a low-complexity linear detector, such as zero forcing (ZF) or MMSE. The index i, besides indicating the position in the vector \(\hat {\mathbf {s}}\), also denotes the coordinate of the MGS algorithm, where \(i=1,2,\dots,2K\). Therefore, each iteration requires 2K coordinate updates. At each iteration, the 2K coordinates are updated by sampling from the distributions given by:
$$ \hat{s}_{i}^{(t)} \sim p\left(\hat{s}_{i}|\hat{s}_{1}^{(t)},\dots,\hat{s}_{i-1}^{(t)},\hat{s}_{i+1}^{(t-1)},\dots, \hat{s}_{2K}^{(t-1)},\mathbf{y},\mathbf{H}\right). $$
(8)
Note from (8) that each updated coordinate is fed, within the same iteration, into the update of the next coordinate.
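One iteration of (8) can thus be sketched as a sequential sweep over the 2K coordinates, reusing sample_coordinate from the previous sketch (conditional_probs, which evaluates (9)-(10), is sketched after (10) below):

```python
def mgs_iteration(y, H, s_hat, A, sigma2, q, rng):
    """One MGS iteration per (8): coordinates are updated in sequence, so each
    updated coordinate is immediately used when sampling the next one."""
    for i in range(s_hat.shape[0]):                           # 2K coordinates
        probs = conditional_probs(y, H, s_hat, i, A, sigma2)  # per (9)-(10)
        s_hat[i] = sample_coordinate(probs, A, q, rng)        # mixed draw per (6)
    return s_hat
```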
The probability of the ith symbol assuming the value \(a_{j} \in \mathbb {A}, \forall j = 1,\dots,|\mathbb {A}|\) can be written as:
$$ p\left(\hat{s}_{i}=a_{j}|\hat{\mathbf{s}}_{i-1}, \mathbf{y},\mathbf{H}\right) = \frac{\exp\left({- \frac{||\mathbf{y} - \mathbf{H} \hat{\mathbf{s}}_{i,j} ||^{2}}{\alpha^{2}\sigma^{2}}}\right)}{\sum_{l=1}^{|\mathbb{A}|} \exp\left({- \frac{||\mathbf{y} - \mathbf{H} \hat{\mathbf{s}}_{i,l} ||^{2}}{\alpha^{2}\sigma^{2}}}\right)}, $$
(9)
where the cardinality of set \(\mathbb {A}\) is expressed as \(|\mathbb {A}|\), while \(\hat {\mathbf {s}}_{i,j}\) denotes the vector \(\hat {\mathbf {s}}^{(t)}\) with its ith position changed to the symbol aj.
The sampling process based on (9) can suffer from numerical limitations (overflow/underflow) due to the exponential function. Hence, the implementation is carried out through an intermediate logarithmic step, as:
$$\begin{aligned} \log \left(p\left(\hat{s}_{i}=a_{j}|\hat{\mathbf{s}}_{i-1}, \mathbf{y},\mathbf{H}\right)\right) &= f(i,j) - \left[f^{\text{ord}}_{0} + \log \left(1 + \sum_{m=1}^{|\mathbb{A}|-1} \exp \left(f^{\text{ord}}_{m} - f^{\text{ord}}_{0} \right)\right)\right] \\ &= g(i,j), \end{aligned}$$
(10)
where \(f(i,j) = {- \frac {||\mathbf {y} - \mathbf {H} \hat {\mathbf {s}}_{i,j} ||^{2}}{\alpha ^{2}\sigma ^{2}}}\) and \(f^{\text {ord}}_{i}\) is the ith position of f sorted in descending order, for \(i=0,1,\dots,|\mathbb {A}|-1\). A practical and computationally efficient evaluation of the MGS target function is summarized in Algorithm 1.
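The log-sum-exp evaluation of (9) through (10) can be sketched as follows (subtracting the largest exponent f ord 0 before exponentiating is what prevents the numerical overflow; the function and argument names are ours):

```python
import numpy as np

def conditional_probs(y, H, s_hat, i, A, sigma2, alpha=1.0):
    """Evaluate (9) via the intermediate logarithmic step (10)."""
    f = np.empty(len(A))
    for j, a in enumerate(A):
        s_try = s_hat.copy()
        s_try[i] = a                      # vector with its i-th position set to a_j
        f[j] = -np.linalg.norm(y - H @ s_try) ** 2 / (alpha ** 2 * sigma2)
    f0 = f.max()                          # f^ord_0, the largest exponent
    g = f - (f0 + np.log(np.sum(np.exp(f - f0))))   # g(i,j) in (10)
    return np.exp(g)                      # probabilities of (9), summing to 1
```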
The MGS algorithm ends after a certain number of iterations, and the estimated symbol vector is chosen as the one that presented the lowest ML cost over all iterations. In the next subsections, the additional strategy of multiple restarts (MR) [10] and the stopping criteria for the iterations and the restarts are addressed.
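Putting the pieces together, a minimal sketch of the MGS loop with the lowest-ML-cost bookkeeping described above (it relies on mgs_iteration from the earlier sketch; the fixed iteration count stands in for the stopping criteria of Section 3.2):

```python
import numpy as np

def mgs_detect(y, H, s_init, A, sigma2, q, n_iter, rng):
    """Run MGS for n_iter iterations and return the visited vector with the
    lowest ML cost ||y - H*s||^2 over all iterations."""
    s_hat = s_init.astype(float).copy()
    best, best_cost = s_hat.copy(), np.linalg.norm(y - H @ s_hat) ** 2
    for _ in range(n_iter):
        s_hat = mgs_iteration(y, H, s_hat, A, sigma2, q, rng)
        cost = np.linalg.norm(y - H @ s_hat) ** 2
        if cost < best_cost:              # keep the best solution found so far
            best, best_cost = s_hat.copy(), cost
    return best, best_cost
```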
3.1 Multiple restarts
For medium-order QAM modulations, such as 16-QAM, the mixing strategy of MGS alone is unable to achieve near-optimal performance [21] within a reasonable number of iterations, whereas the MR procedure proposed in [10] has demonstrated promising results, bringing the MGS-MR under 16-QAM to near-optimal performance.
The MR strategy is also incorporated in the aMGS and d-sMGS detectors, yielding the aMGS-MR and d-sMGS-MR detectors. Thus, Algorithms 2 and 3 run up to a maximum number of restarts Rmax, or until limited by a stopping criterion, and the lowest-cost solution found across all restarts is taken as the final solution. As discussed in Section 6, the MR strategy can improve the convergence of the algorithm compared to spending the same number of iterations in a single run, resulting in a better performance-complexity tradeoff.
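A minimal sketch of the MR wrapper, under the simplifying assumption of random-restart initialization and with the stopping criteria of Section 3.2 omitted (mgs_detect is the loop sketched earlier):

```python
import numpy as np

def mgs_mr_detect(y, H, A, sigma2, q, n_iter, R_max, rng):
    """Re-run MGS up to R_max times and keep the lowest-cost solution."""
    best, best_cost = None, np.inf
    for _ in range(R_max):
        s_init = rng.choice(A, size=H.shape[1])  # random-restart initialization
        s_hat, cost = mgs_detect(y, H, s_init, A, sigma2, q, n_iter, rng)
        if cost < best_cost:
            best, best_cost = s_hat, cost
    return best
```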
3.2 Stopping criterion
Given that the mixing strategy provides the ability to escape local minima, the evolution of the cost function across iterations becomes unpredictable, and the optimal solution may be found before the maximum number of iterations \(\mathcal {I}\) is reached [14]. Hence, an efficient stopping criterion is paramount in reducing the complexity of the MGS detector.
Similarly, the decision to trigger a restart requires a criterion, since the optimal solution may already have been found, in which case an extra run of the algorithm is unnecessary. Hence, the MR strategy must be balanced to achieve a better performance-complexity tradeoff.
Several stopping criteria have been proposed in the literature. For instance, in [10], the stopping criterion is based on the difference between the best ML cost found so far and the noise variance; the QAM constellation size can also be taken into account. The main idea in [10] is to stop the detection iterations when a maximum number of iterations \(\mathcal {I}\) is attained or when the number of iterations in stalling mode exceeds a maximum of Θs iterations.
Assume the estimated symbol vector, in the tth iteration, is \(\hat {\mathbf {s}}^{(t)}\). The quality metric of \(\hat {\mathbf {s}}^{(t)}\) is defined as
$$ \phi\left(\hat{\mathbf{s}}^{(t)}\right) = \frac{||\mathbf{y} - \mathbf{H} \hat{\mathbf{s}}^{(t)}||^{2} - N\sigma^{2}}{\sqrt{N}\sigma^{2}}. $$
(11)
Hence, the stalling limit for iterations, Θs, is given by
$$ \Theta_{s}\left(\phi\left(\hat{\mathbf{s}}^{(t)}\right)\right) = c_{s} \cdot e^{\phi\left(\hat{\mathbf{s}}^{(t)}\right)}, $$
(12)
where cs is a constant that depends on the M-QAM constellation size and increases with M. Although (12) is suitable as a stopping criterion, a minimum number of iterations cmin must be defined to ensure the quality of symbol detection. Therefore, Θs can be rewritten as
$$\begin{aligned} \Theta_{s}\left(\phi\left(\hat{\mathbf{s}}^{(t)}\right)\right) &= \left\lceil \max \left(c_{\text{min}},\; c_{s} \cdot e^{\phi\left(\hat{\mathbf{s}}^{(t)}\right)}\right) \right\rceil, \\ \text{with} \quad c_{s} &= c_{1} \log_{2} (M), \end{aligned}$$
(13)
where c1 is a tuning constant that defines the allowed number of iterations in stalling mode.
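Equations (11) and (13) translate directly into code, as the sketch below shows (quality_metric and stalling_limit are our names; N is taken as the length of y):

```python
import numpy as np

def quality_metric(y, H, s_hat, sigma2):
    """Quality metric phi of (11)."""
    N = y.shape[0]
    return (np.linalg.norm(y - H @ s_hat) ** 2 - N * sigma2) / (np.sqrt(N) * sigma2)

def stalling_limit(phi, M, c1, c_min):
    """Stalling limit Theta_s of (13), with c_s = c1 * log2(M)."""
    c_s = c1 * np.log2(M)
    return int(np.ceil(max(c_min, c_s * np.exp(phi))))
```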
For the MR strategy, the criterion sets the allowable number of restarts Θr, which is also based on the quality metric \(\phi \left (\hat {\mathbf {s}}^{(t)}\right)\):
$$\begin{aligned} \Theta_{r}\left(\phi\left(\hat{\mathbf{s}}^{(t)}\right)\right) &= \left\lceil \max \left(0,\; c_{r} \cdot \phi\left(\hat{\mathbf{s}}^{(t)}\right) \right) \right\rceil + 1, \\ \text{with}\quad c_{r} &= c_{2} \log_{2} (M), \end{aligned}$$
(14)
where c2 is a tuning constant that adjusts the maximum number of restarts.
At the end of each restart, Θr is computed and compared with the current number of repetitions: if this number is less than Θr, another run of the algorithm is performed; otherwise, the solution vector with the minimum cost found so far is output as the final solution.
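In code, (14) and the restart check read as follows (a sketch with our own names; phi is the quality metric (11) evaluated at the best solution of the current run):

```python
import numpy as np

def restart_limit(phi, M, c2):
    """Allowable number of restarts Theta_r of (14), with c_r = c2 * log2(M)."""
    c_r = c2 * np.log2(M)
    return int(np.ceil(max(0.0, c_r * phi))) + 1

# At the end of each run: if restarts_done < restart_limit(phi, M, c2),
# run the algorithm again; otherwise output the minimum-cost solution so far.
```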
For the aMGS and d-sMGS detectors presented below, we also adopt the stopping criteria described in this subsection.