
Image fusion algorithm in Integrated Space-Ground-Sea Wireless Networks of B5G

Abstract

In recent years, the rapid development of image recognition in Space-Ground-Sea Wireless Networks has also promoted the development of image fusion. For example, a single-mode medical image carries very limited content, whereas a fused image contains more information and thus provides a more reliable basis for diagnosis. However, in wireless communication and medical image processing, image fusion often performs poorly and inefficiently. To solve this problem, this paper proposes an image fusion algorithm for wireless communication based on the fast finite shear wave transform (FFST) and a convolutional neural network. The algorithm combines FFST decomposition, dimension reduction in the convolution layers, and reconstruction by the inverse FFST. The experimental results show that the algorithm performs very well on both objective indicators and subjective visual quality, and that it is feasible for wireless communication.

1 Introduction

In the field of Space-Ground-Sea Wireless Networks, the combination of medical image analysis, environmental detection, and digital photography with computer technology [1, 2] has greatly promoted the development of image processing. Image fusion [3] is an important branch of image processing, and the rapid development of remote sensing monitoring, target recognition, and light and shadow technology reflects its importance, especially for the accuracy and efficiency of image information processing.

At the beginning, image fusion was mainly used in military applications. Álvarez et al. first proposed the Laplacian pyramid image fusion algorithm [4]. Then, Akhtarkavan et al. proposed the quantization and threshold method [5], which brought image fusion into a new stage. Image fusion [6, 7] refers to collecting image information of the same target through multiple channels and then extracting, enhancing, and denoising the effective information with computer techniques to generate an image carrying the largest amount of information. For the same target, multiple different source images are fused by an operator or a neural network to form a new image containing information from all of them.

In recent years, the application of medical imaging technology in clinical practice has become increasingly prominent. At present, the main medical imaging methods include magnetic resonance, X-ray, computed tomography, and so on [8]. Due to different imaging mechanisms, medical images of different modalities present different kinds of organ and tissue information. Single-mode medical imaging [9] can no longer serve as the basis for locating important lesions in clinical diagnosis and treatment, because its results are incomplete and of low definition. Multimodal medical image fusion [10] combines source images of different modalities, carrying complementary information, into a single visual composite image to help doctors make better diagnoses and decisions. By fusing multimodal medical images, the problem of insufficient information in single-mode medical images can be solved, and the accuracy of diagnosis and the efficiency of treatment can be improved.

Early mainstream image fusion methods include averaging, the IHS (intensity-hue-saturation) transform, principal component analysis, etc. [11]. These methods all decompose and fuse images at a single layer. In the mid-1980s, pyramid decomposition, with its ability to extract image features in multiple directions, spurred the development of fusion methods based on it. In the 1990s, the wavelet transform, with its multi-resolution property, provided a new tool for image fusion. The wavelet transform inherits the idea of the short-time Fourier transform [12] and has been successfully applied to image processing, video processing, and other fields. However, because the wavelet transform captures only a limited set of directions, it has difficulty representing image texture and fine detail, especially for two-dimensional or higher-dimensional images. In recent years, research on multimodal medical image fusion has mainly focused on fusion methods based on multi-scale transformation theory, which generally proceed in three steps: decomposition, fusion, and reconstruction.

Among the above approaches, multi-scale analysis [13] is favored because it can extract the detailed and salient information of an image, although the processing becomes more and more complex. With the increasing demand for image clarity, the wavelet transform used in multi-scale analysis cannot optimally represent high-dimensional functions or two-dimensional images with surface singularities [14]. The fast finite shear wave transform [15] retains all the advantages of the wavelet transform and can effectively reduce the influence of error, so it is more suitable for image fusion. Building on existing medical fusion algorithms, addressing their problems, and combining the advantages of the convolutional neural network model [16], an improved medical image fusion algorithm based on the fast finite shear wave transform (FFST) and a convolutional neural network is proposed. After the source image is decomposed by FFST, a fusion strategy is designed for each type of coefficient, and the corresponding experiments are carried out. The superiority of the algorithm is verified by comparing the experimental results on multiple groups of images.

2 Methods

2.1 Convolutional neural network

A convolutional neural network (CNN) is a trainable feedforward network [17]. CNNs perform representation learning, which allows them to maintain translation and scaling invariance for the input to a certain extent.

A CNN rests on three core ideas: local perception, weight sharing, and pooling [18]. Local perception [19] means that the image pixels and the hidden nodes are not fully connected; each hidden node is connected only to a small local patch of pixels, which greatly reduces the number of training parameters. Weight sharing [20] means that the same convolution kernel is applied at every position of an image (or across images), so kernels are reused rather than learned per position, further reducing the parameters and avoiding redundant kernels. Pooling [21] downsamples the feature map produced by convolution to reduce its size, which helps avoid overfitting and improves computational efficiency. The processing flow of a convolutional neural network is shown in Fig. 1.
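As a concrete illustration of the pooling idea, the following MATLAB sketch (ours, not the paper's released code; the feature map is a stand-in) applies 2×2 max pooling:

```matlab
% Minimal 2x2 max-pooling sketch: split the feature map into non-overlapping
% 2x2 blocks and keep the maximum of each, halving both spatial dimensions.
F = magic(8);                          % stand-in 8x8 feature map
[H, W] = size(F);
P = zeros(H/2, W/2);
for i = 1:H/2
    for j = 1:W/2
        blk = F(2*i-1:2*i, 2*j-1:2*j); % local 2x2 window
        P(i, j) = max(blk(:));         % keep the strongest local response
    end
end
```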

Fig. 1 Convolutional neural network processing flow

2.2 Fast finite shear wave transform

The shear wave transform [22] is based on the theory of composite wavelets. As a multi-scale geometric analysis tool, it overcomes the original shortcomings of the wavelet transform and generates shear wave functions with different characteristics through affine operations such as scaling, shearing, and translation. When an image or video is processed by the shear wave transform, the decomposition proceeds in the following steps:

  1. (1)

    The Laplacian pyramid algorithm decomposes the image \( {f}_a^{j-1} \) into a high-pass filtered image \( {f}_d^j \) and a low-pass filtered image \( {f}_a^j \). For \( {f}_a^{j-1}\in {L}^2\left({Z}_{N_{j-1}}^2\right) \), we have \( {f}_a^j\in {L}^2\left({Z}_{N_j}^2\right) \), where \( {N}_j={N}_{j-1}/2 \) (a one-level sketch is given after this list).

  2. (2)

    The matrix \( {pf}_d^j \) is obtained by evaluating \( {f}_d^j \) on the pseudo-polar grid, and then \( {pf}_d^j \) is processed by a band-pass filter.

  3. (3)

    After redefining the Cartesian sampling coordinates [23], the shear wave coefficients are obtained by applying the inverse two-dimensional FFT or the inverse pseudo-polar FFT to the filtered data.

  4. (4)

    Set j = j + 1 and repeat steps (1) to (3) until j = L, as shown in Fig. 2.
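As referenced in step (1), the following MATLAB sketch performs one Laplacian pyramid level under the stated halving relation. The 5-tap binomial filter and the test image cameraman.tif (shipped with the Image Processing Toolbox) are illustrative choices, not the paper's code:

```matlab
% One Laplacian pyramid level, as in step (1) of the FFST decomposition.
% Assumes an even-sized grayscale image.
f_prev = im2double(imread('cameraman.tif'));   % f_a^{j-1}, 256x256 test image
g = [1 4 6 4 1] / 16;                          % 1-D binomial (Gaussian) filter
f_a = conv2(g, g, f_prev, 'same');             % low-pass smoothing
f_a = f_a(1:2:end, 1:2:end);                   % downsample: N_j = N_{j-1}/2
up  = conv2(g, g, kron(f_a, ones(2)), 'same'); % expand back to the previous size
f_d = f_prev - up;                             % high-pass (detail) image f_d^j
```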

Fig. 2 Process of image processing for Space-Ground-Sea Wireless Networks

The two-dimensional discrete shear wave system is defined as in Eq. (1):

$$ {\varphi}_{j,l,k}^{(d)}:j\ge 0,-{2}^j\le l\le {2}^j,k\in {Z}^2 $$
(1)

where d = 0, 1 indexes the two cones (vertical and horizontal, respectively). For each scale, there are support regions corresponding to the different directions, which ensures stability, as shown in Fig. 3.
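A quick MATLAB loop makes the direction count implied by Eq. (1) explicit (illustrative only):

```matlab
% Counting the shear directions of Eq. (1): at scale j the shear index l runs
% over -2^j..2^j within each of the two cones d = 0, 1.
for j = 0:2
    nl = numel(-2^j : 2^j);            % shears per cone at scale j
    fprintf('scale j = %d: %d directions per cone, %d in total\n', j, nl, 2*nl);
end
```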

Fig. 3 Two-dimensional shear wave decomposition image

3 Image fusion algorithm in Space-Ground-Sea Wireless Networks

3.1 Improved convolutional neural network model

The traditional CNN model obtains the probability distribution of the input through gradient descent and uses multiple convolution and pooling layers to classify the extracted features [24]. The CNN improved in this paper relies on dimension reduction. Specifically, the source image is decomposed by the fast finite shear wave transform; the convolutional neural network is improved, with the number of iterations and the optimal weights determined by experiment; and the image is then fused through the inverse fast finite shear wave transform.

First, the training set \( {\left\{{x}_i,{y}_i\right\}}_{i=1}^M \) is used as input, and the output estimation function is:

$$ f\left(C,W,{x}_i\right)={W}^T\psi \left({x}_i\right)+b $$
(2)

where W is the weight, b is the bias, C is the number of convolution operations, and ψ is the mapping realized by the hidden layer. Taking \( L\left[C,W,{x}_i\right]={\left[{y}_i-\left({W}^T\psi \left({x}_i\right)+b\right)\right]}^2 \) as the loss, the objective function is defined as:

$$ P\left(C,W\right)=\sum \limits_{i=1}^ML\left[{y}_i,f\left(C,W,{x}_i\right)\right]+\gamma \frac{{\left\Vert W\right\Vert}^2}{2} $$
(3)
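The objective of Eq. (3) is straightforward to evaluate. In the MATLAB sketch below, ψ is taken as the identity purely for illustration (the paper realizes it through the hidden layers), and all data are random stand-ins:

```matlab
% Evaluating the regularized objective of Eq. (3) for a linear model
% f(x_i) = W'*psi(x_i) + b with squared loss.
M = 50;                                % number of training samples
X = randn(10, M);                      % 10-dimensional inputs x_i (columns)
y = randn(1, M);                       % targets y_i
W = randn(10, 1); b = 0; gamma = 0.1;  % weights, bias, regularization strength
pred = W' * X + b;                     % f(C, W, x_i) for all samples
P = sum((y - pred).^2) + gamma * norm(W)^2 / 2;   % Eq. (3)
```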

Assuming that the input equals the output, the estimation function is as follows:

$$ f(x)=\sum \limits_{i=1}^MK\left( Cx,{Cx}_i\right)+b $$
(4)

where the kernel function is \( K\left( Cx,{Cx}_i\right)=\psi {(Cx)}^T\psi \left({Cx}_i\right) \), i = 1, …, M. The pooling layer of the CNN (a generalized L^p pooling, which reduces to max pooling as p → ∞) is given as follows:

$$ {S}_{i,j,v}\left(\varphi \right)={\left(\sum \limits_{h=-\left\lfloor k/2\right\rfloor}^{\left\lfloor k/2\right\rfloor}\sum \limits_{w=-\left\lfloor k/2\right\rfloor}^{\left\lfloor k/2\right\rfloor }{\left|{\varphi}_{g\left(h,w,i,j,v\right)}\right|}^p\right)}^{1/p} $$
(5)

where φ is the W×H×G three-dimensional array of the input image, i.e., its feature map; H is the height, W is the width, and G is the number of channels. Here k is the pooling window size, and g(h, w, i, j, v) maps the output position (i, j) in channel v, with window offset (h, w), back to the input at a given stride R. After continuous iteration, with k = 3 and stride R = 2, the result maintains the invariance of the feature scale to a great extent.
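A minimal MATLAB sketch of this pooling rule, for one channel with p = 2 (illustrative only; padarray requires the Image Processing Toolbox):

```matlab
% L^p pooling of Eq. (5) over a k x k window with stride R, for one channel;
% the rule approaches max pooling as p grows.
phi = rand(16, 16);                    % one channel of the feature map
k = 3; R = 2; p = 2; r = floor(k/2);
padded = padarray(phi, [r r], 0);      % zero-pad so windows fit at the borders
rows = 1:R:size(phi, 1); cols = 1:R:size(phi, 2);
S = zeros(numel(rows), numel(cols));
for a = 1:numel(rows)
    for b = 1:numel(cols)
        win = padded(rows(a):rows(a)+2*r, cols(b):cols(b)+2*r);
        S(a, b) = sum(abs(win(:)).^p)^(1/p);   % Eq. (5)
    end
end
```

The convolution layer, followed by the ReLU activation, is then: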

$$ {u}_{i,j,l}\left(\varphi \right)=f\left(\sum \limits_{h=-\left\lfloor k/2\right\rfloor}^{\left\lfloor k/2\right\rfloor}\sum \limits_{w=-\left\lfloor k/2\right\rfloor}^{\left\lfloor k/2\right\rfloor}\sum \limits_{v=1}^M{\theta}_{h,w,v,l}\cdot \kern0.5em {\varphi}_{g\left(h,w,i,j,v\right)}\right) $$
(6)

where θ is the kernel weight, l indexes the output channel, (i, j) are the spatial coordinates of the output u, and the activation function f is ReLU.
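The following MATLAB sketch applies Eq. (6) for a single output channel (l = 1); the sizes and weights are illustrative stand-ins:

```matlab
% Convolution layer of Eq. (6): a k x k kernel summed over the M input
% channels, followed by ReLU.
phi   = rand(16, 16, 3);               % input feature map with M = 3 channels
theta = randn(3, 3, 3);                % one 3x3 kernel per input channel
u = zeros(16, 16);
for v = 1:size(phi, 3)
    u = u + conv2(phi(:, :, v), theta(:, :, v), 'same');  % sum over channels
end
u = max(u, 0);                         % ReLU activation
```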

The improved CNN greatly reduces the loss of information and effectively extracts the features and details of each image.

3.2 Integration process

The medical image fusion algorithm based on the fast finite shear wave transform and the convolutional neural network consists of four steps.

First, source images A and B are decomposed by the fast finite shear wave transform to obtain L{A}^l and L{B}^l, where L{A}^l and L{B}^l denote the l-th layer decompositions of source images A and B, respectively.

Second, the weight map W for source images A and B is generated by the improved convolutional neural network.

Third, the regional energies of L{A}^l and L{B}^l are calculated, as shown in Eq. (7).

$$ {\displaystyle \begin{array}{c}{E}_A^l\left(x,y\right)=\sum \limits_m\sum \limits_nL{\left\{A\right\}}^l{\left(x+m,y+n\right)}^2\\ {}{E}_B^l\left(x,y\right)=\sum \limits_m\sum \limits_nL{\left\{B\right\}}^l{\left(x+m,y+n\right)}^2\end{array}} $$
(7)
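These regional energies amount to box-filtering the squared subband coefficients, as the following MATLAB sketch shows (the subbands here are random stand-ins for the FFST outputs):

```matlab
% Regional energy of Eq. (7): sum of squared coefficients over the (m, n)
% neighborhood of each pixel, computed with a 3x3 box filter.
LA  = randn(256, 256);                 % L{A}^l, one FFST subband of image A
LB  = randn(256, 256);                 % L{B}^l, the same subband of image B
win = ones(3);                         % 3x3 window
EA  = conv2(LA.^2, win, 'same');       % E_A^l(x, y)
EB  = conv2(LB.^2, win, 'same');       % E_B^l(x, y)
```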

The weight map W is likewise decomposed by the l-layer fast finite shear wave transform to obtain G{W}^l. For an H×W source image, the floor function ⌊log2 min(H, W)⌋ gives the highest number of decomposition layers. Then, formula (8) is used for fusion.

$$ L{\left\{C\right\}}^l\left(x,y\right)=G{\left\{W\right\}}^l\left(x,y\right)\cdot L{\left\{A\right\}}^l\left(x,y\right)+\left(1-G{\left\{W\right\}}^l\left(x,y\right)\right)\cdot L{\left\{B\right\}}^l\left(x,y\right) $$
(8)

Finally, the fused image C is reconstructed from L{C}^l by the inverse fast finite shear wave transform. The flow chart of the fusion process is shown in Fig. 4.
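The fusion rule of Eq. (8) is a pixel-wise weighted average of the two source subbands. A minimal MATLAB sketch follows; the subbands and weight map are random stand-ins, and the surrounding FFST decomposition and reconstruction are assumed to be available separately:

```matlab
% Fusion of Eq. (8): each fused subband L{C}^l is a weighted average of the
% corresponding subbands of A and B, with weights from the decomposed map W.
GW = rand(256, 256);                   % G{W}^l, weight map at layer l, in [0, 1]
LA = randn(256, 256);                  % L{A}^l
LB = randn(256, 256);                  % L{B}^l
LC = GW .* LA + (1 - GW) .* LB;        % Eq. (8)
```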

Fig. 4 Flow chart of image fusion in Space-Ground-Sea Wireless Networks

The above algorithm can reduce the impact of weight allocation on the experiment, and overcome the problems of incomplete and unclear information in the image.

4 Results and discussion

Using brain images from the Harvard whole-brain database as the data set, this paper selects CT images of cases including stroke, meningioma, and multiple cerebral infarction to test the fusion algorithm.

To verify the effectiveness of the algorithm, this paper compares it with the multi-focus image fusion algorithm based on NSCT and a pulse coupled neural network (NPF) [25], the surfacelet transform (SCT) [26], and the multi-focus image fusion algorithm based on the fast finite shear wave transform and guided filtering (FGF) [27]. The average gradient (AG), spatial frequency (SF), mutual information (MI), and edge-preserving information transfer factor QAB/F (weighted heavily in the evaluation) are used for objective evaluation [28, 29].
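For reference, the first two metrics can be computed from their standard definitions, as in the MATLAB sketch below (illustrative, not the paper's evaluation code; mean2 and the test image require the Image Processing Toolbox):

```matlab
% Average gradient (AG) and spatial frequency (SF) of a grayscale image.
F = im2double(imread('cameraman.tif'));             % stand-in fused image
[gx, gy] = gradient(F);                             % horizontal/vertical gradients
AG = mean2(sqrt((gx.^2 + gy.^2) / 2));              % average gradient
RF = sqrt(mean2((F(:, 2:end) - F(:, 1:end-1)).^2)); % row frequency
CF = sqrt(mean2((F(2:end, :) - F(1:end-1, :)).^2)); % column frequency
SF = sqrt(RF^2 + CF^2);                             % spatial frequency
```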

To highlight the advantages of the proposed algorithm, strictly registered multi-focus images (256×256 pixels) are selected as samples, and the experiments are carried out in MATLAB 2016a. The fusion algorithms are compared experimentally, the above indexes are used for objective evaluation, and the experimental results are analyzed.

This paper selects three groups of medical images as test data and compares the proposed method with the three reference algorithms; the results are shown in Figs. 5, 6, and 7. In the NPF algorithm, the source image is decomposed by NSCT, and the spatial frequency of each decomposed coefficient region is calculated; the frequency drives the input neurons of the PCNN to generate neuron pulses, the coefficient with the longest firing time is taken as the fusion coefficient, and the result is finally reconstructed by the inverse NSCT. In the SCT algorithm, the image is decomposed by the surfacelet transform, the decomposed low-frequency and high-frequency subband coefficients are combined, and the image is reconstructed by the inverse transform. In the FGF algorithm, the source image is decomposed by the fast finite shear wave transform; the low-frequency subband coefficients are fused by a regional NSML rule, the high-frequency subband coefficients are fused by a guided-filter, region-energy-weighted rule, and the fused image is reconstructed by the inverse FFST.

Fig. 5 CT/fusion image of stroke. a CT image. b NPF. c SCT. d FGF. e Proposed

Fig. 6 CT/fusion image of meningioma. a CT image. b NPF. c SCT. d FGF. e Proposed

Figure 5 shows the stroke image processed by the different fusion algorithms. Figure 5a is the CT image. Figure 5b is the fusion image produced by the NPF algorithm; it clearly contains many noise points, and some information is lost. Figure 5c is the image after SCT fusion; it is rather blurry overall, and some details are not visible. In Fig. 5d, although the overall effect is good, the edges remain dark, so some information is not preserved. Figure 5e is the image processed by the proposed algorithm; its quality is much higher than that of the other algorithms in both internal detail and edge brightness. The information of the source image is preserved to the maximum extent, so the result is close to the ideal image and better matches the visual characteristics of the human eye, verifying the superiority of the algorithm.

Figure 6 shows the meningioma image processed by the different fusion algorithms. Figure 6a is the CT image. Figure 6b is the fusion image produced by the NPF algorithm; the overall effect is dim, details are barely reflected, and the soft tissue is hard to discern. Figure 6c is the image after SCT fusion; its brightness is clearly too high, causing some information loss, and Fig. 6d has the same problem. Figure 6e is the image processed by the proposed algorithm; the result is stable and preserves the information of the source image to the greatest extent. The skeleton is clear, the image better matches the visual characteristics of the human eye, and its information is very rich, further verifying the superiority of the proposed algorithm.

Figure 7 shows the multiple cerebral infarction image processed by the different fusion algorithms. Figure 7a is the CT image. Figure 7b is the fusion image produced by the NPF algorithm; the overall effect is darker, and some edges are fuzzy. Figure 7c is the image after SCT fusion; similar to Fig. 7d, it is rather gray and dark, and some details are not shown. In Fig. 7d, although the overall effect is good, the middle part is fuzzy. Figure 7e is the image processed by the proposed algorithm; it retains the information of the source image to the greatest extent, the whole image is bright and close to the ideal image, fine details are preserved, and it better matches the visual characteristics of the human eye.

Fig. 7 CT/fusion images of multiple cerebral infarction. a CT image. b NPF. c SCT. d FGF. e Proposed

Table 1 presents the objective evaluation indexes of the medical images for the four fusion algorithms. From the data in Figs. 5, 6, and 7 and Table 1, it can be seen that the image processed by the proposed algorithm has rich visual information and clear, continuous edges, and it is superior to the other algorithms on all data indexes. This further verifies the superiority of the proposed algorithm for image fusion.

Table 1 Objective indicators of different fusion algorithms

5 Conclusion

In wireless communication, improving image fusion technology is of great significance [30]. For example, medical image processing is often more complex and demands higher accuracy than ordinary image processing, so digital medical images impose stricter requirements; image fusion is therefore necessary before medical images are communicated. Many researchers have introduced various fusion methods into medical image fusion, and after continuous research, development, and improvement, great progress has been made in the field.

The continuous development and improvement of image fusion technology has greatly advanced medical diagnosis and provided technical support for improving people's health. A single-mode medical image carries very limited content, whereas a fused image contains more information and thus provides a more reliable basis for diagnosis. However, in medical image processing, image fusion often performs poorly and inefficiently. This paper proposes a medical image fusion algorithm based on the fast finite shear wave transform and a convolutional neural network. The algorithm decomposes the source image by the fast finite shear wave transform, improves the convolutional neural network, determines the number of iterations and the optimal weights through experiments, and then fuses the image through the inverse fast finite shear wave transform. The algorithm effectively reduces the impact of weight allocation on the experiment and overcomes the problems of incomplete information and unclear details in the image. Several comparative experiments, evaluated by the average gradient (AG), spatial frequency (SF), mutual information (MI), and edge-preserving information transfer factor QAB/F (weighted heavily in the evaluation), show that the algorithm is superior to the other algorithms in both objective and subjective evaluation and achieves very good results with practical value.

Availability of data and materials

The procedures are available as MATLAB source codes.

Abbreviations

FFST:

Fast finite shear wave transform

CNN:

Convolutional neural network

PCNN:

Pulse coupled neural network

SCT:

Surfacelet transform

FGF:

Multi-focus image fusion based on fast finite shear wave transform and guided filtering

AG:

Average gradient

SF:

Spatial frequency

References

  1. N. Saeed, A. Celik, T.Y. Al-Naffouri, M.-S. Alouini, Underwater optical wireless communications, networking, and localization: A survey. Ad Hoc Netw. 94, 101935 (2019)

  2. Y. Li et al., Medical Image Fusion Method by Deep Learning. Int. J. Cogn. Comput. Eng. 2, 21-29 (2021)

  3. G. Wang, W. Li, Y. Huang, Medical image fusion based on hybrid three-layer decomposition model and nuclear norm. Comput. Biol. Med. 129, 104179 (2020)

  4. D. Álvarez, P. González-Rodríguez, M. Kindelan, A local radial basis function method for the Laplace–Beltrami operator. J. Sci. Comput. 86(3) (2021)

  5. E. Akhtarkavan et al., Fragile high capacity data hiding in digital images using integer-to-integer DWT and lattice vector quantization. Multimedia Tools Appl. 79(8), 13427–13447 (2020)

  6. K. Hariharan, N.R. Raajan, Performance enhanced hyperspectral and multispectral image fusion technique using ripplet type-II transform and deep neural networks for multimedia applications. Multimedia Tools Appl. 79, 1–10 (2018)

  7. D.J. Liu, Z.R. Chen, The adaptive finite element method for the P-Laplace problem. Appl. Num. Math. 152, 323–337 (2020)


  8. W. Jian, Y. Ke, R. Ping, Q. Chunxia, Z. Xiufei, Multi-source image fusion algorithm based on fast weighted guided filter. J. Syst. Eng. Electron 30(05), 831–840 (2019)


  9. N. Yoneda et al., Analysis of circular-to-rectangular waveguide T-junction using mode-matching technique. Electron. Commun. Japan 80(7), 37–46 (2015)


  10. D.L. Luong, D.H. Tran, P.T. Nguyen, Optimizing multi-mode time-cost-quality trade-off of construction project using opposition multiple objective difference evolution. Int. J. Construct. Manage. 21(3), 1–13 (2018)

  11. Q. Li et al., Medical image fusion using segment graph filter and sparse representation. Comput. Biol. Med., 104239 (2021)


  12. C. Ma et al., Single image super resolution via wavelet transform fusion and SRFeat network. J. Ambient Intell. Human. Comput. 2 (2020)

  13. F. Saltari, D. Dessi, F. Mastroddi, Mechanical systems virtual sensing by proportional observer and multi-resolution analysis. Mech. Syst. Signal Process 146, 107003 (2021)


  14. P. Yonghao et al., A Multi-scale Inversion Method Based on Convolutional Wavelet Transform Applied in Cross-Hole Resistivity Electrical Tomography. IOP Conf. Series Earth Environ. Sci. 660(1), 012062 (2021)


  15. X. Xu et al., Atrial Fibrillation Beat Identification Using the Combination of Modified Frequency Slice Wavelet Transform and Convolutional Neural Networks. J. Healthcare Eng. 2018, 1–8 (2018)


  16. T. Li et al., Random-drop data augmentation of deep convolutional neural network for mineral prospectivity mapping. Nat. Resources Res. 30, 1–12 (2020)

  17. R. Lan et al., Image denoising via deep residual convolutional neural networks. Signal Image Video Process. 9, 1–8 (2019)

  18. S. Aich et al., Multi-Scale Weight Sharing Network for Image Recognition. Pattern Recogn. Lett. 131, 348–354 (2020)


  19. M. Malekzadeh, Developing new connectivity architectures for local sensing and control IoT systems. Peer-to-Peer Netw. Appl. 4, 609–626 (2020)

  20. H. Louati et al., Deep Convolutional Neural Network Architecture Design as a Bi-level Optimization Problem. Neurocomputing 439, 44-62 (2021)

  21. M. Varshney, P. Singh, Optimizing nonlinear activation function for convolutional neural networks. Signal Image Video Process. 8, 1–8 (2021)


  22. S. Routray et al., A new image denoising framework using bilateral filtering based non-subsampled shearlet transform. Optik – Int. J. Light. Electron. Optics 216, 164903 (2020)


  23. R. Singh, A. Chakraborty, B.S. Manoj, Graph Fourier transform based on directed Laplacian, in International Conference on Signal Processing and Communications (IEEE, 2016), pp. 1–5


  24. M. Li, Y. Wang, Z. Wang, H. Zheng, A deep learning method based on an attention mechanism for wireless network traffic prediction. Ad Hoc Netw. 107, 102258 (2020)

  25. G. Panou, R. Korakitis, The direct geodesic problem and an approximate analytical solution in Cartesian coordinates on a triaxial ellipsoid. J. Appl. Geodesy 14(2), 205–213 (2020)

  26. S. Xueping, Research on image fusion method based on NSCT and PCNN. Dissertation, Tianjin University of Technology (2016)


  27. T. Xiaoqiang, K. Lingfu, K. Deming, C. Yongqiang, Using discrete stationary wavelet transform to improve NURBS quadric surface fitting method. Acta Metrologica Sinica 41(06), 662–668 (2020)


  28. S. Liu, M. Shi, Z. Zhu, J. Zhao, Image fusion based on complex-shearlet domain with guided filtering. Multidimensional Syst. Signal Process. 28(1), 207–224 (2017)

  29. Y. Zhang, An improved algorithm of parameter kernel cutting based on complex fusion image, in Proceedings of the 2019 International Conference on Mathematics, Big Data Analysis and Simulation and Modeling (MBDASM 2019) (Science and Engineering Research Center, 2019), pp. 22–26

  30. H. Fawaz, M. El Helou, S. Lahoud, K. Khawam, A reinforcement learning approach to queue-aware scheduling in full-duplex wireless networks. Comput. Netw. 189, 107893 (2021)



Acknowledgements

Not applicable.

Funding

This research was supported in part by the National Natural Science Foundation of China (61602249).

Author information

Authors and Affiliations

Authors

Contributions

The research and the outcome of this specific publication are the result of a long cooperation between the authors about the fundamentals and applications of the image fusion algorithm in Integrated Space-Ground-Sea Wireless Networks. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Xiaobing Yu.

Ethics declarations

Ethics approval and consent to participate

All procedures performed in this paper were in accordance with the ethical standards of the research community. This paper does not contain any studies with human participants or animals performed by any of the authors.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.



Cite this article

Yu, X., Cui, Y., Wang, X. et al. Image fusion algorithm in Integrated Space-Ground-Sea Wireless Networks of B5G. EURASIP J. Adv. Signal Process. 2021, 55 (2021). https://doi.org/10.1186/s13634-021-00771-1
