Image fusion algorithm for Integrated Space-Ground-Sea Wireless Networks in B5G

In recent years, the rapid development of image recognition in Space-Ground-Sea Wireless Networks has also promoted the development of image fusion. For example, the content of a single-mode medical image is very limited, while a fused image contains more information and therefore provides a more reliable basis for diagnosis. However, in wireless communication and medical image processing, image fusion often suffers from poor effect and low efficiency. To solve this problem, this paper proposes an image fusion algorithm for wireless communication based on the fast finite shearlet transform (FFST) and a convolutional neural network. The algorithm combines FFST decomposition, dimension reduction in the convolution layers, and reconstruction through the inverse FFST. The experimental results show that the algorithm performs very well on both objective indicators and subjective visual quality, and that it is feasible in wireless communication.


Introduction
In the field of Space-Ground-Sea Wireless Networks, the combination of medical image analysis, environmental detection, and digital photography with computer technology [1,2] has greatly promoted the development of image processing. Image fusion [3] is an important branch of image processing, and the rapid development of remote sensing monitoring, target recognition, and light-and-shadow technology reflects its importance, especially where the accuracy and precision of image information processing matter.
Initially, image fusion was mainly used in military applications. Álvarez et al. proposed the Laplacian pyramid image fusion algorithm for the first time [4]. Then, Akhtarkavan et al. proposed the quantization and threshold method [5], which brought image fusion into a new stage. Image fusion [6,7] refers to collecting image information about the same target through multiple channels and, through information extraction, enhancement, denoising, or other computer techniques, gathering the effective information in the images to finally generate an image with the largest amount of information. For the same target, multiple different source images are fused by an operator or a neural network to form a new image that carries information from all of them.
In recent years, the application of medical imaging technology in clinical practice has become increasingly prominent. At present, the main medical imaging modalities include magnetic resonance imaging, X-ray, computed tomography, and so on [8]. Due to their different imaging mechanisms, medical images of different modalities present different kinds of organ and tissue information. Single-mode medical imaging [9] can no longer serve as the basis for locating important lesions in clinical diagnosis and treatment, because its imaging results are incomplete and of low definition. Multimodal medical image fusion [10] combines source images of different modalities, with their complementary information, into a single visual composite image to help doctors make better diagnoses and decisions. By fusing multimodal medical images, the problem of insufficient information in single-mode medical images can be solved, and the accuracy of diagnosis and the efficiency of treatment can be improved.
Early mainstream image fusion methods include averaging, the IHS transform, principal component analysis, etc. [11]. These methods all decompose and fuse images on a single layer. In the mid-1980s, pyramid decomposition, with its ability to extract image information in multiple directions, emerged, and image fusion methods based on pyramid decomposition began to develop. In the 1990s, the wavelet transform, with its multi-resolution characteristics, provided a new tool for image fusion. The wavelet transform inherits the idea of the short-time Fourier transform [12] and has been successfully applied to image processing, video processing, and other fields. However, because the wavelet transform concentrates on point singularities, it has difficulty handling the texture and details of an image, especially when processing two-dimensional or higher-dimensional images. In recent years, research on multimodal medical image fusion has mainly focused on fusion methods based on multi-scale transformation theory. Such methods generally consist of three steps: decomposition, fusion, and reconstruction.
Among the above approaches, multi-scale analysis algorithms [13] are favored because they can extract the detailed and salient information of an image. At the same time, however, the processing algorithms become more and more complex. With the increasing demand for image clarity, the wavelet transform used in multi-scale analysis cannot optimally represent high-dimensional functions or two-dimensional images with surface singularities [14]. The fast finite shearlet transform (FFST) [15] retains all the advantages of the wavelet transform and can effectively reduce the influence of error, so it is more suitable for image fusion. Building on existing research on medical fusion algorithms, addressing their known problems, and combining the advantages of the convolutional neural network model [16], this paper proposes an improved medical image fusion algorithm based on the fast finite shearlet transform and a convolutional neural network. After the source image is decomposed by FFST, a corresponding fusion strategy is designed according to the characteristics of the different coefficients, and the corresponding experiments are carried out. The superiority of the algorithm is verified by comparing the experimental results on multiple groups of images.

Convolutional neural network
A convolutional neural network (CNN) is a trainable feedforward network [17]. Because a CNN performs representation learning, it maintains translation invariance and scaling invariance for its input to a certain extent.
A CNN has three core ideas: local perception, weight sharing, and pooling [18]. Local perception [19] means that in a CNN the image pixels and hidden nodes are not fully connected; instead, each hidden node is connected only to a small local patch of pixels, which greatly reduces the number of training parameters. Weight sharing [20] means that the same convolution kernel is reused at every position within an image (and across images), so the number of parameters can be reduced and redundant kernels avoided. Pooling [21] refers to downsampling the feature map after convolution to shrink its size, guard against overfitting, and improve computational efficiency. The processing flow of a convolutional neural network is shown in Fig. 1.
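As a minimal illustration of these three ideas (a toy sketch, not the network used in this paper), the following NumPy code applies one shared 3×3 kernel across an image (local perception and weight sharing) and then performs 2×2 max pooling:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared kernel over the image: each output pixel depends
    only on a local patch (local perception), and the same weights are
    reused at every position (weight sharing)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature, k=2):
    """k x k max pooling: downsample the feature map to reduce its size
    and the amount of downstream computation."""
    h, w = feature.shape[0] // k, feature.shape[1] // k
    return feature[:h * k, :w * k].reshape(h, k, w, k).max(axis=(1, 3))

image = np.random.rand(8, 8)                    # toy 8x8 grayscale input
kernel = np.random.randn(3, 3) * 0.1            # one shared 3x3 kernel
feature = np.maximum(conv2d(image, kernel), 0)  # ReLU activation
pooled = max_pool(feature)                      # 6x6 feature map -> 3x3
print(pooled.shape)                             # (3, 3)
```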

Fast finite shearlet transform
The shearlet transform [22] is based on the theory of composite wavelets. As a multi-scale geometric analysis tool, it overcomes the original shortcomings of the wavelet transform and generates shearlet functions with different characteristics through affine transformations such as scaling, shearing, and translation. When an image or video is processed by the shearlet transform, the decomposition proceeds in the following steps: (1) The Laplacian pyramid algorithm is used to decompose the image f_{j-1}^a into a high-pass filtered image f_j^d and a low-pass filtered image f_j^a. (2) The matrix Pf_j^d is obtained by evaluating f_j^d on the pseudo-polar lattice, and then Pf_j^d is processed by the band-pass filter. (3) After redefining the Cartesian sampling coordinates [23], the shearlet coefficients are obtained by applying the inverse two-dimensional FFT or the inverse pseudo-polar FFT to the filtered data. (4) Set j = j + 1 and repeat (1) to (3) until j = L, as shown in Fig. 2.
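Step (1) can be sketched as follows; this is an illustrative single level of the Laplacian pyramid using a simple Gaussian low-pass filter from SciPy, not the FFST implementation itself:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid_level(f_prev):
    """One Laplacian pyramid step: split f_{j-1}^a into a low-pass
    image f_j^a (blurred and downsampled) and a high-pass detail
    image f_j^d (residual after upsampling the low-pass back)."""
    low = gaussian_filter(f_prev, sigma=1.0)[::2, ::2]          # f_j^a
    up = zoom(low, 2, order=1)[:f_prev.shape[0], :f_prev.shape[1]]
    high = f_prev - up                                          # f_j^d
    return low, high

f = np.random.rand(256, 256)
f_a, f_d = laplacian_pyramid_level(f)
print(f_a.shape, f_d.shape)  # (128, 128) (256, 256)
```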
For the decomposition of the two-dimensional shearlet, the definition is given in Eq. (1),
where d = 0, 1 corresponds to the up-down and left-right directions, respectively. For each scale, there are support regions corresponding to different directions to ensure stability, as shown in Fig. 3.
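For reference, Eq. (1) follows the standard discrete shearlet construction; a commonly used form (the paper's exact normalization may differ) is

$$\psi_{j,k,m}^{(d)}(x) = 2^{3j/2}\,\psi^{(d)}\!\big(S_k\,A_{2^j}^{(d)}\,x - m\big), \qquad A_{2^j}^{(0)} = \begin{pmatrix} 4^j & 0 \\ 0 & 2^j \end{pmatrix}, \quad S_k = \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix},$$

where $j \ge 0$ is the scale, $k$ is the shear parameter, $m \in \mathbb{Z}^2$ is the translation, and for $d = 1$ the roles of the two axes in $A$ and $S$ are exchanged.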
Image fusion algorithm in Space-Ground-Sea Wireless Networks

Improved convolutional neural network model
The traditional CNN model obtains the probability distribution of the input through gradient descent and uses multiple convolution and pooling layers to classify the extracted features [24]. The CNN improved in this paper is realized through dimension reduction. In detail, this paper decomposes the source image by the fast finite shearlet transform, improves the convolutional neural network, determines the number of iterations and the optimal weights of the network through experiments, and then fuses the image through the inverse fast finite shearlet transform.
First, the training set $\{(x_i, y_i)\}_{i=1}^{M}$ is used as input, and the output estimation function is

$$\hat{y}_i = W\,\psi(x_i) + b,$$

where $W$ is the weight, $b$ is the offset, $C$ is the number of convolution operations, and $\psi$ is the convolution operation, which is converted into a function mapping through the hidden layer. The objective function is the squared error

$$L(C, W, x_i) = \big[y_i - \big(W\,\psi(x_i) + b\big)\big]^2.$$

Assuming that the input equals the output, the estimation function and its kernel function are defined accordingly. The maximum pooling layer of the CNN is

$$p_{i,j} = \max_{(h, w) \in g(h, w, i, j, v)} \varphi(h, w),$$

where $\varphi$ is the $W \times H \times G$ three-dimensional array of the input image, which is also the feature map of each input; $H$ is the height, $W$ is the width, $G$ is the number of channels, $k = 2$, and $g(h, w, i, j, v)$ denotes the mapping from the region $R$ to the $k \times k$ window at a certain stride $v$. After continuous iteration, with $k = 3$ and $R = 2$, the result maintains the invariance of the feature scale to a great extent; the convolution layer is

$$z^{(l)}_{i,j,u} = \mathrm{ReLU}\big(\theta^{(l)} * \varphi\big)_{i,j,u},$$

where $\theta$ is the kernel weight, $l$ represents the number of outputs, $(i, j, u)$ represents the coordinates $(i, j)$ on scale $u$, and the activation function is ReLU.
The improved CNN greatly reduces the loss of information and effectively extracts the features and details of each image.
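Since the paper does not spell out the exact architecture, the following PyTorch sketch is a hypothetical weight-map generator in this spirit: 3×3 convolutions extract features, a 1×1 convolution performs the dimension reduction, and a sigmoid yields a per-pixel fusion weight in [0, 1]:

```python
import torch
import torch.nn as nn

class WeightMapCNN(nn.Module):
    """Hypothetical weight-map generator: feature extraction followed
    by 1x1-convolution dimension reduction and a sigmoid output."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # stacked A and B
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.reduce = nn.Conv2d(16, 1, kernel_size=1)    # dimension reduction

    def forward(self, a, b):
        x = torch.cat([a, b], dim=1)                     # (N, 2, H, W)
        return torch.sigmoid(self.reduce(self.features(x)))  # (N, 1, H, W)

net = WeightMapCNN()
a = torch.rand(1, 1, 256, 256)   # source image A
b = torch.rand(1, 1, 256, 256)   # source image B
w = net(a, b)                    # per-pixel weight map W
print(w.shape)                   # torch.Size([1, 1, 256, 256])
```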

Integration process
The medical fusion algorithm based on the fast finite shearlet transform and a convolutional neural network consists of four steps.
First, source image A and source image B are decomposed by the fast finite shearlet transform to obtain $L\{A\}_l$ and $L\{B\}_l$, which represent the decompositions of source image A and source image B at layer $l$, respectively.
Then, the weight map of source image A and source image B is generated using the improved convolutional neural network. Second, the regional energy of the layer in which the decomposed coefficients lie is calculated, as shown in Eq. (7).
The weight map W is decomposed by the $l$-layer fast finite shearlet transform to obtain $L\{W\}_l$; the floor function $\lfloor \log_2 \min(H, W) \rfloor$ gives the highest number of decomposition layers for an $H \times W$ source image. Then, formula (8) is used for fusion; common forms of Eqs. (7) and (8) are sketched after the steps.
Finally, the fused image C is reconstructed from $L\{C\}_l$ by the inverse fast finite shearlet transform. The flow chart of the fusion is shown in Fig. 4.
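As a sketch of the regional-energy and fusion rules that Eqs. (7) and (8) refer to, commonly used forms are the following (the paper's exact window and weighting may differ):

$$E_l^{X}(x, y) = \sum_{(m, n) \in \Omega} \big|L\{X\}_l(x + m,\, y + n)\big|^2, \qquad X \in \{A, B\},$$

$$L\{C\}_l(x, y) = W_l(x, y)\, L\{A\}_l(x, y) + \big(1 - W_l(x, y)\big)\, L\{B\}_l(x, y),$$

where $\Omega$ is a small neighborhood (e.g., $3 \times 3$) and $W_l = L\{W\}_l$ is the decomposed weight map.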
The above algorithm reduces the impact of weight allocation on the experiment and overcomes the problems of incomplete and unclear information in the image.
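Putting the four steps together, a minimal Python sketch of the pipeline follows; ffst_decompose, ffst_reconstruct, and weight_net are hypothetical stand-ins (passed in as parameters), and the code is illustrative rather than the authors' implementation:

```python
import numpy as np

def fuse_images(A, B, ffst_decompose, ffst_reconstruct, weight_net):
    """Four-step fusion sketch (hypothetical helper functions assumed):
    1) FFST-decompose both sources, 2) generate a CNN weight map,
    3) FFST-decompose the weight map and fuse the coefficients layer
    by layer, 4) inverse FFST to reconstruct the fused image."""
    H, W = A.shape
    levels = int(np.floor(np.log2(min(H, W))))   # highest layer count
    LA = ffst_decompose(A, levels)               # list of L{A}_l
    LB = ffst_decompose(B, levels)               # list of L{B}_l
    Wmap = weight_net(A, B)                      # per-pixel weights in [0, 1]
    LW = ffst_decompose(Wmap, levels)            # list of L{W}_l
    LC = [lw * la + (1.0 - lw) * lb              # Eq. (8)-style rule
          for la, lb, lw in zip(LA, LB, LW)]
    return ffst_reconstruct(LC)                  # fused image C
```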

Results and discussion
Using brain MRI images from the Harvard database as the data set, this paper selects two types of CT images, multiple cerebral infarction and meningitis, to evaluate the fusion algorithm.
To verify the effectiveness of the algorithm, this paper selects for comparison the multi-focus image fusion (NPF) algorithm based on NSCT and a pulse-coupled neural network (PCNN) [25], the surface wavelet transform (SCT) [26], and the multi-focus image fusion (FGF) algorithm based on the fast finite shearlet transform and guided filtering [27]. Average gradient (AG), spatial frequency (SF), mutual information (MI), and the edge-preserving information transfer factor $Q^{AB/F}$ (a highly weighted evaluation criterion) are used for objective evaluation [28,29].
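For reference, the commonly used definitions of these indices for an $M \times N$ fused image $F$ are

$$AG = \frac{1}{(M-1)(N-1)} \sum_{x=1}^{M-1} \sum_{y=1}^{N-1} \sqrt{\frac{\Delta_x F(x,y)^2 + \Delta_y F(x,y)^2}{2}},$$

$$SF = \sqrt{RF^2 + CF^2}, \qquad RF = \sqrt{\tfrac{1}{MN}\textstyle\sum_{x}\sum_{y}\big(F(x,y) - F(x,y-1)\big)^2}, \qquad CF = \sqrt{\tfrac{1}{MN}\textstyle\sum_{x}\sum_{y}\big(F(x,y) - F(x-1,y)\big)^2},$$

$$MI = MI_{AF} + MI_{BF},$$

where $\Delta_x$ and $\Delta_y$ are the horizontal and vertical differences and $MI_{XF}$ is the mutual information between source $X$ and $F$; $Q^{AB/F}$ measures how much edge information is transferred from the source images into $F$.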
To highlight the advantages of this algorithm, strictly registered multi-focus images (256×256 pixels) are selected as samples, and the experiments are carried out in MATLAB 2016a. The different fusion algorithms are compared experimentally, the above indexes are used for objective evaluation, and the experimental results are analyzed.
According to the different fusion algorithms, this paper selects two groups of medical images as test data and compares them with the three reference algorithms. The results are shown in Figs. 5 and 6. In the NPF algorithm, the source image is decomposed by NSCT, the spatial frequency of each decomposed coefficient region is calculated, and the frequency is used as the input neuron of the PCNN to generate neuron pulses, which then guide the selection of the fusion coefficients. Figure 5 shows the image of stroke processed by the different fusion algorithms. Figure 5a is a CT image; Fig. 5b is the fusion image produced by the NPF algorithm, in which many noise points can clearly be seen and some information is not preserved; Fig. 5c is the image after SCT fusion, which is relatively fuzzy overall, with some details not displayed; in Fig. 5d, although the effect is very good, the edge brightness is still dark, so some information is not preserved; and Fig. 5e is the image processed by the proposed algorithm. Its quality is much higher than that of the other algorithms, in both internal details and edge brightness. The information of the source image is kept to the maximum extent, and the result is close to the ideal image and more in line with the visual characteristics of the human eye. This verifies the superiority of the algorithm.
Figure 6 shows the image of meningioma processed by the different fusion algorithms. Figure 6a is a CT image; Fig. 6b is the fusion image produced by the NPF algorithm, in which the overall effect is dim, the details are basically not reflected, and the soft tissue is unclear; Fig. 6c is the image after SCT fusion, whose brightness is clearly too high, causing some information loss; Fig. 6d has the same problem as Fig. 6c; and Fig. 6e is the image processed by the proposed algorithm. The result is stable and keeps the information of the source image to the greatest extent. The skeleton is clear, the result is more in line with the visual characteristics of the human eye, and the image information is very rich, which further verifies the superiority of the algorithm in this paper.
Figure 7 shows the image of multiple cerebral infarction processed by the different fusion algorithms. Figure 7a is a CT image; Fig. 7b is the fusion image produced by the NPF algorithm, with a darker overall effect and some fuzzy edges; Fig. 7c is the image after SCT fusion, which, similar to Fig. 7d, is relatively gray and dark, with some details not shown; in Fig. 7d, although the effect is very good, the middle part is fuzzy; and Fig. 7e is the image processed by the proposed algorithm. It retains the information of the source image to the greatest extent, and the whole image is bright, close to the ideal image, and more in line with the visual characteristics of the human eye, with some details also preserved.

Table 1 presents the objective evaluation index data of the medical images under the four fusion algorithms. According to Figs. 5, 6, and 7 and the data in Table 1, the image processed by the proposed algorithm has a visual effect rich in information, with clear and continuous edges, and it is superior to the other algorithms in all data indexes. This verifies the superiority of this algorithm in image fusion processing.

Conclusion
In wireless communication, the improvement of image fusion technology is of great significance [30]. For example, medical image processing is often more complex and demands more accuracy than ordinary image processing, and digital medical image processing has even higher requirements. Therefore, image fusion is necessary for medical images before communication. Many researchers have introduced various fusion methods into medical image fusion, and after continuous research, development, and improvement, great progress has been made in the field of image fusion.
The continuous development and improvement of image fusion technology has greatly promoted the field of medical diagnosis and provided technical support for improving people's health. The content of a single-mode medical image is very limited, while a fused image contains more information and therefore provides a more reliable basis for diagnosis. However, in medical image processing, the image fusion effect is often poor and the efficiency low. This paper proposes a medical image fusion algorithm based on the fast finite shearlet transform and a convolutional neural network.
The algorithm decomposes the source image by the fast finite shearlet transform, improves the convolutional neural network, determines the number of iterations and the optimal weights of the network through experiments, and then fuses the image through the inverse fast finite shearlet transform. The algorithm effectively reduces the impact of weight allocation on the experiment and overcomes the problems of incomplete information and unclear details in the image. Through several comparative experiments evaluated by the average gradient (AG), spatial frequency (SF), mutual information (MI), and edge-preserving information transfer factor $Q^{AB/F}$ (a highly weighted evaluation criterion), the algorithm is shown to be superior to the other algorithms in both objective and subjective evaluation and achieves very good results, which gives it certain practical value.

Fig. 2
Fig. 2 Process of image processing for Space-Ground-Sea Wireless Networks

Fig. 3
Fig. 3 Two-dimensional shearlet decomposition image

Fig. 4
Fig. 4 Flow chart of image fusion in Space-Ground-Sea Wireless Networks

Fig. 5
Fig. 5 CT/fusion of stroke. a CT image. b NPF. c SCT. d FGF. e Proposed

Fig. 6
Fig. 6 CT/fusion of meningioma. a CT image. b NPF. c SCT. d FGF. e Proposed

Fig. 7
Fig. 7 CT/fusion of multiple cerebral infarction. a CT image. b NPF. c SCT. d FGF. e Proposed

Table 1
Objective indicators of different fusion algorithms