Massive machine type communication (mMTC) is a significant scenario in the 5G Internet of Things, whose main characteristics include massive connectivity and sporadic transmissions. Sporadic transmission means that the probability of each user being active is very low, so only a small percentage of users communicate with the base station at the same time. We can therefore exploit this characteristic to model user activity as a sparse signal recovery problem [13], which compressed sensing can solve effectively. In this letter, signals from inactive users are set to zero, and signals from active users are drawn from the constellation set Y, \(Y=\left\{ 1+i,\,1-i,\,-1+i,\,-1-i\right\}\). The traditional CS model considers only the sparsity of the user signal x, so active users can appear anywhere in x. In this letter, we propose a block sparse method that divides users into blocks and represents x as a combination of signal blocks \(x^{[i]}\). Furthermore, the active users in each block share a common support, making the best use of block sparsity: the number and positions of the active users in each block \(x^{[i]}\) are the same. Each block is then recovered by the reconstruction algorithm. In this way, the degrees of freedom of the solution are reduced and the robustness of the system is improved, which yields better recovery performance than traditional algorithms with random sparsity. The specific block sparsity representation method is shown in Fig. 1. For example, we assume that the number of blocks is 3 and the number of active users is 9; each block is represented as \(x^{[1]}\), \(x^{[2]}\), \(x^{[3]}\) using the proposed algorithm. The entire process is depicted in Fig. 2, where x is the original sparse signal and \({\hat{x}}\) is the recovered sparse signal. By comparing x and \({\hat{x}}\), the BER performance of the proposed method is obtained.
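The block-sparse representation described above can be illustrated with a short numerical sketch. The sizes below (K users, d blocks, s active users per block) are hypothetical examples, not values from the letter; the key property is that every block draws its nonzero entries from the QPSK constellation at one common set of positions.

```python
import numpy as np

# Hypothetical sizes for illustration: K users split into d equal blocks,
# with s active users per block at the SAME positions in every block.
rng = np.random.default_rng(0)

K, d, s = 12, 3, 2
block_len = K // d
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # set Y

# one common support shared by all blocks
support = np.sort(rng.choice(block_len, size=s, replace=False))

blocks = []
for _ in range(d):
    xb = np.zeros(block_len, dtype=complex)
    xb[support] = rng.choice(constellation, size=s)  # active-user symbols
    blocks.append(xb)

x = np.concatenate(blocks)  # full sparse signal x = [x^[1]; ...; x^[d]]

# every block has the identical nonzero index set
assert all(np.array_equal(np.nonzero(b)[0], support) for b in blocks)
```

Because every constellation symbol has magnitude \(\sqrt{2}\), the blocks also share the same \(L_2\) norm, matching the equalities stated in (11) and (12) below.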
We define \(G_{x^{[i]}}\) as the index set of the nonzero elements of the \(i\)th block; the relationship among the blocks can be expressed as
$$\begin{aligned} G_{x^{[1]}}= & {} G_{x^{[2]}}=G_{x^{[3]}}=\cdots =G_{x^{[d]}} \end{aligned}$$
(11)
$$\begin{aligned} \left\| x^{[1]}\right\| _2= & {} \left\| x^{[2]}\right\| _2=\left\| x^{[3]}\right\| _2=\cdots =\left\| x^{[d]}\right\| _2 \end{aligned}$$
(12)
where \(\left\| x^{[i]}\right\| _2\) indicates the \(L_2\) norm of \(x^{[i]}\); since every constellation symbol has the same magnitude, equal \(L_2\) norms reflect an equal number of nonzero elements in each block. In this letter, the compressed sensing reconstruction algorithm we use is the ISD algorithm, so \(G_{x^{[i]}}\) can be represented as the support set \(supp(x^{[i]})\). According to (7), the channel matrix \(A^{[i]}\) influences the signal \(y^{[i]}\) received by the base station, so it is important to choose a proper channel matrix. In the theory of compressed sensing, the measurement matrix has to satisfy the Restricted Isometry Property (RIP). In general, a Gaussian random matrix or a Bernoulli random matrix is used as the measurement matrix; here, we instead use a circulant matrix as the measurement matrix to reduce computational complexity and save storage space by utilizing the block sparse structure proposed above. According to (9), the steps to generate a block circulant matrix are listed as follows.

(1)
Channel matrix \(A^{[i]}\) can be divided into K/d subchannel matrices, whose elements are 0 or 1, each generated with probability 1/2. The first subchannel matrix, of length M/d, is set to \(A_{1}^{[i]}\).

(2)
Cyclically shift all elements of the matrix \(A_{1}^{[i]}\) by one place to generate the second matrix \(A_{2}^{[i]}\). Then shift all elements of \(A_{2}^{[i]}\) by one place to generate the third matrix \(A_{3}^{[i]}\), and generate all remaining subchannel matrices in turn in this way.

(3)
Combine all subchannel matrices to get a complete measurement matrix \(A^{[i]}\). The specific circulant matrix structure is as follows.
$$\begin{aligned} A^{[i]}=\begin{bmatrix} g_{11}^{[i]} &\quad g_{12}^{[i]} &\quad \cdots &\quad g_{1 \frac{M}{d}}^{[i]} \\ g_{1 \frac{M}{d}}^{[i]} &\quad g_{11}^{[i]} &\quad \cdots &\quad g_{1(\frac{M}{d}-1)}^{[i]} \\ \vdots &\quad \vdots &\quad \ddots &\quad \vdots \\ g_{1(\frac{M}{d}-\frac{K}{d}+2)}^{[i]} &\quad g_{1(\frac{M}{d}-\frac{K}{d}+3)}^{[i]} &\quad\cdots &\quad g_{1(\frac{M}{d}-\frac{K}{d}+1)}^{[i]} \end{bmatrix} \end{aligned}$$
(13)
where \(A^{[i]}\) contains only 0 and 1. This structure greatly simplifies the measurement matrix, exploits the block sparsity, and still satisfies the RIP. The experimental results confirm that it improves recovery accuracy. However, a circulant matrix, like any other measurement matrix used in compressed sensing, can only ensure that the signal is reconstructed with high probability. For a given reconstruction algorithm, whether there exists a measurement matrix that can perfectly recover the original signal remains an open problem.
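Steps (1)-(3) above can be sketched numerically. The sizes K, M, d below are illustrative assumptions; the construction draws a Bernoulli(1/2) first row over {0, 1} and cyclically shifts it one place per subsequent row, as in (13).

```python
import numpy as np

# Sketch of the block circulant measurement-matrix construction.
# Sizes are hypothetical; each "subchannel matrix" is one row of length
# M/d, and there are K/d rows in total.
rng = np.random.default_rng(1)

K, M, d = 12, 24, 3
n_rows, row_len = K // d, M // d

first_row = rng.integers(0, 2, size=row_len)  # 0/1 entries, p = 1/2
# row r is the first row cyclically shifted r places to the right
A = np.stack([np.roll(first_row, r) for r in range(n_rows)])

# circulant structure holds between consecutive rows
assert np.array_equal(A[1], np.roll(A[0], 1))
```

Only the first row needs to be stored, which is the storage saving the circulant structure provides over a fully random matrix.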
The reconstruction algorithm used in this letter is the ISD algorithm, which differs from traditional greedy algorithms such as OMP in support set detection: the index set of the ISD algorithm is re-estimated after each iteration, while the index set of a greedy algorithm remains unchanged or only grows. Therefore, the ISD algorithm can reconstruct the original signal more accurately than greedy algorithms. Compared with the traditional ISD algorithm, the block sparse ISD algorithm proposed in this letter expresses the original sparse signal x as \(x^{[1]},x^{[2]},\ldots ,x^{[d]}\), which reduces the algorithm complexity and shortens the signal reconstruction time. Recently, the SISD algorithm exploiting structured sparsity was proposed; it improves the ISD algorithm by jointly recovering multiple sparse signals over J consecutive time slots, and its BER decreases as the number of time slots grows. However, only when J is very large does the performance improve significantly, which increases the algorithm complexity; when the number of time slots is small, the performance of the SISD algorithm is similar to that of the traditional ISD algorithm, which shows its limitation. In contrast, our proposed method can recover the signal accurately by fully exploiting the block sparsity among multiple related blocks even when there are few blocks. In summary, the proposed algorithm outperforms the SISD algorithm under low-complexity conditions. Finally, we give the specific steps of the block sparse ISD algorithm in Algorithm 1. The detailed steps can be described as follows.
Before the iteration starts, the original sparse signal x is represented as d signal blocks by the proposed method, \(x^{[1]},x^{[2]},x^{[3]},\ldots ,x^{[d]}\), and each signal block \(x^{[r]}\) is multiplied by the corresponding channel sensing matrix \(A^{[r]}\) to obtain the received signal \(y^{[r]}\). Next, the ISD algorithm recovers each received signal \(y^{[r]}\) into \({\hat{x}}^{[r]}\), and all signal blocks \({\hat{x}}^{[r]}\) are combined into the recovered sparse signal \({\hat{x}}\). The recovery process can be expressed as the following steps.
(\(Step\ 2\)) Set support set \(I^{(0)}=\varnothing\), and calculate the complement set \(T^{(0)}=(I^{(0)})^{C}\).
(\(Step\ 4\)) Obtain the rth signal block \({x^{[r](l)}}\) from the truncated weighted BP model in the lth iteration.
(\(Step\ 5\)) Set a proper threshold \(\epsilon ^{(l)}\). First, the components of \(x^{[r](l)}\) are sorted by absolute value from small to large to obtain a new signal block \(G^{(l)}\). Then the absolute value of each element \(g_{i}^{(l)}\) is subtracted from the absolute value of the following element \(g_{i+1}^{(l)}\) in \(G^{(l)}\), and \(\tau ^{(l)}\) is given preliminarily according to [14, 15]. We then find the minimum index i of adjacent components satisfying the condition \(g_{i+1}^{(l)}-g_i^{(l)}>\tau ^{(l)}\); the component \(g_{i}^{(l)}\) corresponding to this minimum i is the threshold \(\epsilon ^{(l)}\). That is, when a big jump appears in the absolute values of two adjacent components for the first time, the smaller component is selected as the threshold. Then we detect the support set of \(x^{[r](l)}\) and update the new support set \(I^{(l+1)}\).
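The jump rule in Step 5 can be sketched as a small function. The function name `select_threshold` and the value of `tau` are illustrative assumptions; in the letter \(\tau ^{(l)}\) follows [14, 15].

```python
import numpy as np

# Sketch of the Step 5 threshold rule: sort |x| ascending, find the
# FIRST adjacent pair whose gap exceeds tau, and take the smaller
# component of that pair as the threshold epsilon.
def select_threshold(x, tau):
    g = np.sort(np.abs(x))              # ascending absolute values
    gaps = np.diff(g)                   # g[i+1] - g[i]
    jumps = np.nonzero(gaps > tau)[0]   # indices of "big jump" pairs
    if jumps.size == 0:
        return g[-1]                    # no jump found
    return g[jumps[0]]                  # smaller component of first jump

# small false values (~0.05) vs. large true nonzeros (~1.4):
# the first big jump lies between 0.06 and 1.38
x = np.array([0.02, 1.41, 0.05, -1.41, 0.06, 1.38])
eps = select_threshold(x, tau=0.5)
print(eps)  # 0.06: components above eps enter the detected support
```

Components whose magnitude exceeds the returned threshold are kept as the detected support for the next iteration.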
(\(Step\ 9\)) When the support set \(I^{(l)}\) contains enough elements to satisfy the loop termination condition, the recovered sparse signal block \({\hat{x}}^{[r]}\) is returned, and all sparse signal blocks are obtained in turn: \({\hat{x}}^{[1]}, {\hat{x}}^{[2]}, {\hat{x}}^{[3]}, \ldots , {\hat{x}}^{[d]}\).
(\(Step\ 11\)) Combine all d sparse signal blocks into the recovered sparse signal \({\hat{x}}\).
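The split-sense-recover-combine pipeline framing these steps can be sketched end to end. A plain least-squares solve stands in here for the ISD recovery (it is exact only for a noiseless, tall, full-column-rank \(A^{[r]}\), which is assumed below); all sizes and the Gaussian sensing matrices are illustrative, not the letter's 0/1 circulant construction.

```python
import numpy as np

# End-to-end sketch: split x into d blocks, form y[r] = A[r] x[r],
# recover each block, and concatenate the estimates into x_hat.
# Least squares is a placeholder for the block sparse ISD recovery.
rng = np.random.default_rng(2)

d, block_len, m = 3, 4, 6
support = np.array([0, 2])            # common support of all blocks
blocks = []
for _ in range(d):
    xb = np.zeros(block_len)
    xb[support] = rng.choice([-1.0, 1.0], size=support.size)
    blocks.append(xb)

A = [rng.standard_normal((m, block_len)) for _ in range(d)]  # sensing matrices
y = [A[r] @ blocks[r] for r in range(d)]                     # received blocks

# per-block recovery, then recombination into the full estimate
x_hat_blocks = [np.linalg.lstsq(A[r], y[r], rcond=None)[0] for r in range(d)]
x_hat = np.concatenate(x_hat_blocks)

assert np.allclose(x_hat, np.concatenate(blocks))
```

Replacing the least-squares step with the ISD iterations of Steps 2-9 yields the full block sparse ISD procedure.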
In the proposed algorithm, the threshold setting is based on the prior information that the nonzero elements of each sparse signal block have a fast-decaying distribution. According to Step 4, the optimization problem is an L1 minimization problem: the true nonzero values of the sparse signal \(x^{[r](l)}\) are large but few, while the false nonzero values are small but numerous. Consequently, the sum of the true nonzero elements may equal the sum of the false nonzero elements, in which case true nonzero elements would be replaced by false ones in the support set, causing false detection. The jump-based threshold excludes these small false nonzero values from the detected support. In this way, we can obtain the correct positions of the nonzero elements, determine more support set elements in the next iteration, and finally recover the complete support set.
The traditional ISD algorithm and the recently proposed SISD algorithm only consider conventional sparse signals without additional structure. The SISD algorithm can recover multiple sparse signals at the same time by expanding one received signal into a combination of multiple received signals in a joint manner; however, the structure of the original sparse signal is unchanged, and the positions of the nonzero elements remain random. Different from the ISD and SISD algorithms, the sparse signal in our proposed algorithm has a block sparse structure, where the nonzero elements occupy the same positions in every block \(x^{[1]}, x^{[2]}, x^{[3]}, \ldots , x^{[d]}\); as a result, each block has the same support set. By exploiting this structure, the signal blocks can be recovered in a joint manner, which not only increases the probability of signal recovery but also improves its accuracy. Obviously, when \(d=1\), the signal with block sparse structure degenerates into a conventional sparse signal. The computational complexity, which mainly comes from the BP problem in \(Step\ 4\), is an indicator used to measure the performance of the algorithm, and the block method reduces the scale of the received signal in each recovery process, which greatly speeds up the detection of the support set and reduces the overall complexity.
In the grant-free NOMA system, we apply the block sparse ISD algorithm to detect user activity. A sparse signal block can be regarded as a user group, and different signal blocks correspond to different user groups. Within a user group, each user communicates using NOMA technology, and appropriate power is allocated to the users, who are divided into groups according to their channel conditions.
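The per-group power allocation mentioned above can be illustrated with a small sketch. The inverse-gain rule below (weaker channel receives more power, normalized to a total budget) is a common NOMA heuristic assumed for illustration, not a rule specified in this letter; the function name and gain values are hypothetical.

```python
import numpy as np

# Hypothetical power allocation within one user group (= one signal
# block): power is proportional to the inverse of each user's channel
# gain, normalized so the group's powers sum to total_power.
def allocate_power(channel_gains, total_power=1.0):
    gains = np.asarray(channel_gains, dtype=float)
    weights = 1.0 / gains               # weaker channel -> larger weight
    return total_power * weights / weights.sum()

# example group of three users with increasing channel quality
p = allocate_power([0.2, 0.5, 1.0], total_power=1.0)
assert np.isclose(p.sum(), 1.0)         # budget is fully used
assert p[0] > p[1] > p[2]               # weakest user gets the most power
```

Such an ordering of powers is what lets NOMA receivers separate the superimposed signals of users within one group.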