
RETRACTED ARTICLE: Implementation of MapReduce parallel computing framework based on multi-data fusion sensors and GPU cluster

This article was retracted on 21 September 2022


Abstract

Nowadays, with the rapid growth of data volume, massive data has become one of the factors hindering the development of enterprises. How to process data effectively and reduce the concurrency pressure of data access has become the driving force behind the continuous development of big data solutions. This article studies a MapReduce parallel computing framework based on multi-data fusion sensors and GPU clusters. The experimental environment is a fully distributed Hadoop cluster, and the MapReduce-based single-source shortest path algorithm is implemented entirely in Java. Eight ordinary physical machines are used to build the fully distributed cluster, and the configuration of each node is essentially the same. The MapReduce framework divides a submitted job into several map tasks and assigns them to different computing nodes. The map phase produces intermediate files that are consistent with the format of the final output; the system then generates several reduce tasks and distributes these files to different cluster nodes for execution. The experiments verify how the running time of the PSON algorithm changes as the size of the test data set gradually increases, while the hardware and software configuration of the Hadoop platform remains unchanged. When the number of computing nodes increases from 2 to 4, the running time drops significantly; as the number of computing nodes continues to increase, the reduction in running time becomes less and less pronounced. The results show that NESTOR can complete the basic MapReduce workflow and simplifies the development of GPU programs for users, yielding a significant speedup for computation-intensive applications.

1 Introduction

With the rapid development of the Internet, network resources contain more and more types of data and information, and the need for big data processing is becoming increasingly urgent. As data volumes expand rapidly, ordinary database systems can no longer meet the growing demand for real-time processing of massive data, whether in terms of storage space, access speed, or network transmission. Faced with applications involving such huge amounts of data, how to manage the data effectively and how to access it efficiently have become key issues that urgently need to be solved.

This paper improves the collaborative filtering algorithm so that it can run on the MapReduce platform. Users are then grouped by a clustering method, and users in the same group are defined as neighbors. During grouping, the central user of each group is marked. Using user-based collaborative filtering, recommendation values within a group are computed with the other users of the group as neighbors, while recommendation values between groups are computed with the central users as nearest neighbors.

With the rapid improvement of GPU programmability, GPUs are no longer limited to graphics rendering. General-purpose GPU (GPGPU) computing, built on top of GPUs, has developed rapidly, making GPUs increasingly important in high-performance computing. Shan believes that in actual industrial applications, health monitoring and fault diagnosis of ball screw pairs still face many challenges. In response, he proposed a new fault diagnosis method for the ball screw pair. First, he proposed a new data segmentation algorithm to obtain uniform vibration signal data. Second, he established a selection criterion for sensitive sensor data based on the failure mechanism of the ball screw and obtained an importance factor for each sensor. Finally, he used a convolutional neural network to classify the weighted data. Although his algorithm shows a certain validity, his research lacks specific experimental steps [1]. Hu JW believes that with the development of sensor fusion technology, much research has been conducted on intelligent ground vehicles, and obstacle detection is one of the key links in vehicle driving. Obstacle detection is a complex task involving the diversity of obstacles, sensor characteristics, and environmental conditions. Because sensors are limited in detection range, signal characteristics, and working conditions, it is difficult for a single type of sensor to meet the needs of obstacle detection, which has prompted researchers and engineers to develop multi-sensor fusion and system integration methods. He aims to summarize the main considerations for on-board multi-sensor configurations of smart ground vehicles in off-road environments and to provide guidance for selecting sensors according to performance requirements and application environments. He reviewed the latest multi-sensor fusion methods and system prototypes, correlated them with the corresponding heterogeneous sensor configurations, and finally discussed emerging technologies and challenges. Although his research is relatively comprehensive, it lacks specific experimental data [2]. Pan D believes that falls are a common phenomenon in the lives of the elderly and one of the top ten causes of serious health injuries and deaths among them. To prevent falls, a real-time fall prediction system installed on wearable smart devices can trigger an alarm in time and reduce accidental injuries. He designed a fall detection system based on multi-sensor data fusion, using 100 volunteers to simulate falls and daily activities, and analyzed the four stages of a fall. He used data fusion to extract three characteristic parameters of human acceleration and posture change, and verified the effectiveness of the multi-sensor data fusion algorithm. To compare the applicability of random forests and support vector machines for wearable smart devices, he built two fall gesture recognition models and compared their training and recognition times. Although support vector machines are more suitable for the development of wearable smart devices, the discussion of the experimental results is lacking [3]. Zhou believes that train operation status identification is used in safety analysis to determine whether a train is operating according to its predetermined operating mechanism. When a train deviates from the scheduled operation mechanism, there is a potential operational risk between trains. He proposed a train movement situation recognition mechanism based on multi-sensor data fusion under a rolling horizon. The recognition process includes defining the frame of discernment (FOD), likelihood and confidence calculation, probability calculation, and decision-making, and it is applied to dynamic process reasoning. He uses rolling-horizon TBM for multi-sensor data fusion, with multiple positioning facilities, namely track circuits, transponders, and the global positioning system, to verify risk prevention performance against train accidents. Although his recognition mechanism can correctly perceive the running status of a train, it lacks the necessary innovation [4].

In this paper, multiple sensors are used to observe the attributes that affect the state, and the observations are integrated into a global sensor observation value. In addition, random set theory is used to describe multi-source heterogeneous information uniformly, so that sensor detection data and the fuzzy information of expert opinions can be fused with the sensor data within the random set framework. This paper designs a parallel computing model that combines the GPU with MapReduce, which is of great significance for further improving the speed of high-performance computing.

2 MapReduce parallel computing framework

2.1 Multiple data fusion sensors

The target system equation/model and measurement equation/model are as follows:

$$\left\{ {\begin{array}{*{20}l} {X(k + 1) = \Phi (k)X(k) + W(k)} \hfill \\ {Z(k) = H(k)X(k) + V(k)} \hfill \\ \end{array} } \right.$$
(1)

where X(k) is the state vector at time k; Z(k) is the observation vector at time k; \(\Phi (k)\) is the state transition matrix; H(k) is the observation matrix; W(k) is zero-mean white noise with covariance matrix Q(k), representing the system noise; and V(k) is zero-mean white noise with covariance matrix R(k), representing the observation noise [5, 6].

In the correlation gate of the i-th track, the difference vector between observation j and track i is defined as the difference between the measured value and the predicted value, that is, the filter residual:

$$e_{ij} (k) = Z_{j} (k) - H(k)\hat{X}_{i} (k/k - 1)$$
(2)

Among them, \(\hat{X}_{i} (k/k - 1)\) is the predicted value of track i at time k [7]. Let \(S_{ij} (k)\) be the covariance matrix of \(e_{ij} (k)\). Then the statistical distance is:

$$d_{ij} (k) = \sqrt {e_{ij} (k)S_{ij}^{ - 1} (k)e_{ij}^{T} (k)}$$
(3)
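As a concrete illustration of Eqs. (2) and (3), the following minimal Java sketch computes the filter residual and the statistical distance for one observation-track pair. It assumes the inverse of the residual covariance \(S_{ij}(k)\) has already been computed, and the matrices and measurements in main() are invented for illustration only.

```java
// Minimal sketch of Eqs. (2)-(3): filter residual and statistical distance.
// Plain double[][] matrices are used, and the inverse of S_ij(k) is assumed
// to be available as input; all numbers in main() are illustrative.
public final class GatingDistance {

    // e_ij(k) = Z_j(k) - H(k) * xPred_i(k|k-1)
    static double[] residual(double[][] H, double[] xPred, double[] z) {
        double[] e = new double[z.length];
        for (int r = 0; r < H.length; r++) {
            double hx = 0.0;
            for (int c = 0; c < H[r].length; c++) hx += H[r][c] * xPred[c];
            e[r] = z[r] - hx;
        }
        return e;
    }

    // d_ij(k) = sqrt( e * S^{-1} * e^T )
    static double statisticalDistance(double[] e, double[][] sInv) {
        double q = 0.0;
        for (int r = 0; r < e.length; r++)
            for (int c = 0; c < e.length; c++)
                q += e[r] * sInv[r][c] * e[c];
        return Math.sqrt(q);
    }

    public static void main(String[] args) {
        double[][] H = {{1, 0, 0, 0}, {0, 1, 0, 0}};   // position-only observation
        double[] xPred = {10.0, 5.0, 1.0, 0.5};        // predicted state (position + velocity)
        double[] z = {10.4, 4.7};                      // measurement Z_j(k)
        double[][] sInv = {{2.0, 0.0}, {0.0, 2.0}};    // precomputed inverse of S_ij(k)
        double[] e = residual(H, xPred, z);
        System.out.println("d = " + statisticalDistance(e, sInv));
    }
}
```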

Denote the feasible events corresponding to the feasible matrices obtained after splitting as \(\theta_{i} ,i = 1,2, \ldots ,L\); then:

$$\beta_{jt} = \sum\limits_{i = 1}^{L} {P\left\{ {\theta_{i} /Z_{k} } \right\}\hat{\omega }_{jt} (\theta_{i} )}$$
(4)

where \(P\left\{ {\theta_{i} /Z_{k} } \right\}\) is the conditional probability of the joint event \(\theta_{i}\), and \(\hat{\omega }_{jt} (\theta_{i} )\) is the corresponding element of the feasible matrix [8].

After minimization, the optimal membership degree and the optimal fuzzy clustering center are obtained as follows:

$$u_{ik} = \frac{1}{{\sum\nolimits_{j = 1}^{T} {\left( {\frac{{d_{ik} }}{{d_{jk} }}} \right)^{2/(m - 1)} } }},\quad \forall i,k$$
(5)
$$V_{i} = \frac{{\sum\nolimits_{k = 1}^{{m_{k} }} {(u_{ik} )^{m} X_{k} } }}{{\sum\nolimits_{k = 1}^{{m_{k} }} {(u_{ik} )^{m} } }},\quad \forall i$$
(6)

In the parallel filtering of multi-sensor systems, information fusion takes place at two levels. First, at the subsystem level, fusion is performed on the subsystem state prediction information and the local sensor measurements to obtain the subsystem state estimate. Then, at the system level, the system filter combines the subsystem state estimates, the subsystem state predictions, and the system state prediction according to the principle that information quantities are additive. Since parallel filtering does not consume the subsystem state prediction information but only "borrows" it, the subsystem state prediction information must be extracted from the subsystem state estimate when fusion is performed in the system filter [9].

The basic model of data fusion is shown in Fig. 1. The data preprocessing stage obtains the data to be processed from the data sources, which often come from multiple origins; its main tasks are to calibrate, standardize, format, and normalize the multi-sensor data. Target state estimation includes target position estimation and identity estimation: position estimation covers target positioning and tracking, while identity estimation corresponds to target recognition. Once the target state has been estimated properly, a preliminary picture of the whole battlefield situation is available; the current military configuration of both sides can be known, and a preliminary estimate of the enemy threat can be made [10]. Situation estimation is mainly divided into two aspects; one is static situation estimation, which includes estimating the forces, deployment, and comprehensive combat effectiveness of both sides. Threat estimation quantitatively assesses, on the basis of the situation assessment, the threat the enemy may pose [11]. Information feedback and correction form a relatively important part of the whole data fusion model. The feedback results help adjust the three preceding levels of fusion processing, and this part allows appropriate manual intervention in the data fusion process, which helps reduce the errors and delays introduced by fusion [12].

Fig. 1 Basic model of data fusion

According to the data sampling sequence \(Y_{t}\), if the difference between \(s_{i}\) and \(s_{j}\) is large, the mutual support between the two data points is low and their authenticity is low; if the difference between \(s_{i}\) and \(s_{j}\) is small, the mutual support between them is high and their authenticity is high [13]. The credibility between \(s_{i}\) and \(s_{j}\) at time t can then be expressed as [14]:

$$s_{ij} (t) = \min \{ s_{i} ,s_{j} \} /\max \{ s_{i} ,s_{j} \} (1 \le i,j \le n)$$
(7)

Then the credibility matrix can be expressed as [15]:

$$S_{n} (t) = \left[ {\begin{array}{*{20}c} {s_{11} (t)} & {s_{12} (t)} & \cdots & {s_{1n} (t)} \\ {s_{21} (t)} & {s_{22} (t)} & \cdots & {s_{2n} (t)} \\ \vdots & \vdots & \ddots & \vdots \\ {s_{n1} (t)} & {s_{n2} (t)} & \cdots & {s_{nn} (t)} \\ \end{array} } \right]$$
(8)

The i-th row of the credibility matrix indicates the degree of mutual support between the measured value \(s_{i}\) of sensor \(S_{i}\) and the measured values of the other sensors, so the mean of the i-th row represents the average mutual support between \(s_{i}\) and the other data [16]. The larger this mean, the higher the credibility of \(s_{i}\) and the smaller its deviation from the true value. To remove the influence of a sensor's support for itself on the average credibility, the average credibility of the i-th row is expressed as follows [17]:

$$r_{i} (t) = \frac{1}{n - 1}\left( {\sum\limits_{j = 1}^{n} {s_{ij} (t)} - s_{ii} (t)} \right)$$
(9)

The variance of the i-th row of the credibility matrix represents the degree of deviation between \(s_{i}\) and the other measured values: the smaller the variance, the higher the credibility of \(s_{i}\); conversely, the larger the variance, the lower the credibility [18]. The variance of the i-th row is expressed as follows:

$$\sigma_{i}^{2} (t) = \frac{1}{n}\sum\limits_{j = 1}^{n} {\left( {s_{ij} (t) - \frac{1}{n}\sum\limits_{l = 1}^{n} {s_{il} (t)} } \right)^{2} }$$
(10)
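The credibility computations of Eqs. (7), (9) and (10) can be illustrated by the minimal sketch below, which builds the credibility matrix from a set of readings, averages each row while excluding the self-support term, and measures the spread of each row. The sensor readings are invented, with the last one deliberately chosen as an outlier.

```java
// Minimal sketch of Eqs. (7), (9) and (10): pairwise credibility, mean credibility
// per sensor, and the spread of each row. Sensor readings are invented.
public final class Credibility {

    // Eq. (7): s_ij = min(s_i, s_j) / max(s_i, s_j)
    static double[][] credibilityMatrix(double[] s) {
        int n = s.length;
        double[][] m = new double[n][n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                m[i][j] = Math.min(s[i], s[j]) / Math.max(s[i], s[j]);
        return m;
    }

    // Eq. (9): mean credibility of row i, excluding the self term s_ii
    static double meanCredibility(double[][] m, int i) {
        double sum = 0.0;
        for (int j = 0; j < m.length; j++) sum += m[i][j];
        return (sum - m[i][i]) / (m.length - 1);
    }

    // Eq. (10): spread of row i around its row mean
    static double rowVariance(double[][] m, int i) {
        int n = m.length;
        double mean = 0.0;
        for (int j = 0; j < n; j++) mean += m[i][j];
        mean /= n;
        double var = 0.0;
        for (int j = 0; j < n; j++) var += (m[i][j] - mean) * (m[i][j] - mean);
        return var / n;
    }

    public static void main(String[] args) {
        double[] readings = {20.1, 20.3, 19.8, 27.5};  // last sensor is an outlier
        double[][] m = credibilityMatrix(readings);
        for (int i = 0; i < readings.length; i++)
            System.out.printf("sensor %d: mean=%.3f var=%.5f%n",
                    i, meanCredibility(m, i), rowVariance(m, i));
    }
}
```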

2.2 GPU cluster

The basic structure of the GPU is shown in Fig. 2. The figure shows 8 clusters containing 16 streaming multiprocessors in total, and each streaming multiprocessor contains 8 streaming processors, so the whole GPU contains 128 streaming processors. The streaming multiprocessor uses a SIMD hardware instruction set: all streaming processors within one multiprocessor execute the same instruction at the same time, and the only difference is that each processor works on different data, i.e., "Single Instruction Multiple Data (SIMD)". At run time, each streaming processor constitutes a hardware thread; these threads are managed by the thread control unit inside the multiprocessor without program intervention, which is the principle behind hardware multithreading [19].

Fig. 2 Basic structure of GPU

After the Hadoop system receives a job, it first divides all the input data of the job into several equal-sized data blocks, and each Map task is responsible for processing one block. All Map tasks execute at the same time, so the data is processed in parallel. The intermediate output of the Map tasks is then sorted, and the system sends the intermediate data to the Reduce tasks for further reduction processing. The JobTracker manages all tasks throughout the MapReduce execution of the job, for example re-executing failed tasks and updating the execution status of the job. The task is the basic unit of parallel computing in the Hadoop MapReduce framework [20, 21].
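To make this split, map, sort and reduce flow concrete, the following is a minimal WordCount-style job written against the Hadoop Java MapReduce API. It is an illustrative sketch rather than the code used in this paper's experiments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Minimal WordCount-style job: each Mapper processes one input split and emits
// intermediate (word, 1) pairs; the framework sorts them by key and the Reducer
// sums the values for each word.
public class WordCount {

    public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(Object key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);          // intermediate key-value pair
            }
        }
    }

    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```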

2.3 MapReduce parallel computing

The MapReduce framework is a distributed processing framework. If task scheduling were handled in a centralized manner, the JobTracker's load would become too high, so MapReduce uses a decentralized, passive task scheduling method [22]. With this approach, the JobTracker does not actively decide which task should be assigned to which node; instead, each TaskTracker decides whether to accept a task based on its own computing capacity. If a TaskTracker has enough spare capacity, it submits a task request to the JobTracker through the heartbeat mechanism, and the JobTracker then selects a suitable task for that TaskTracker according to the principle of data locality. In this process, the JobTracker only needs to assign a task to a requesting node rather than track tasks and push them to nodes it has chosen itself, which greatly improves the JobTracker's operating efficiency [23].

The distributed file system is the basis of the "data locality" and "move computation closer to the data" strategies in the MapReduce computing model. There are two types of nodes in Hadoop's MapReduce framework: the JobTracker and the TaskTracker. The JobTracker is the scheduler of MapReduce jobs for the entire cluster; it is responsible for monitoring the running progress and exceptions of each task as well as for task assignment. TaskTracker nodes are the units that execute MapReduce tasks [24, 25]. A TaskTracker retrieves data from the distributed file system and performs calculations according to the map or reduce function; this is the data locality strategy. Because only running status is communicated rather than large amounts of data, the goal of "moving computation closer to the data" is also achieved [26, 27].
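The pull-style, data-local assignment described above can be illustrated with the purely conceptual sketch below: a node reporting a free slot in its heartbeat receives, if possible, a pending task whose input block is stored locally. The classes and host names are hypothetical stand-ins, not Hadoop's internal JobTracker and TaskTracker implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

// Conceptual sketch of pull-based, data-local task assignment: a TaskTracker-like
// node asks for work when it has a free slot, and the scheduler prefers a pending
// task whose input block is stored on that node. Hypothetical classes only.
public class PullScheduler {

    record PendingTask(String taskId, String preferredHost) {}

    static class Scheduler {
        private final Deque<PendingTask> pending = new ArrayDeque<>();

        Scheduler(List<PendingTask> tasks) { pending.addAll(tasks); }

        // Called when a heartbeat reports a free slot on 'host'.
        PendingTask assign(String host) {
            for (Iterator<PendingTask> it = pending.iterator(); it.hasNext(); ) {
                PendingTask t = it.next();
                if (t.preferredHost().equals(host)) {
                    it.remove();               // data-local task found: hand it out
                    return t;
                }
            }
            return pending.pollFirst();        // otherwise fall back to any pending task
        }
    }

    public static void main(String[] args) {
        Scheduler scheduler = new Scheduler(List.of(
                new PendingTask("map-0001", "node-3"),
                new PendingTask("map-0002", "node-1"),
                new PendingTask("map-0003", "node-1")));

        System.out.println(scheduler.assign("node-1"));  // gets a task stored locally
        System.out.println(scheduler.assign("node-7"));  // no local task: first pending one
    }
}
```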

3 MapReduce parallel computing framework design experiment

3.1 Experimental environment

This experiment uses a fully distributed Hadoop cluster environment, and the MapReduce-based single-source shortest path algorithm is implemented entirely in Java. Eight ordinary physical machines are used to build the fully distributed cluster, and the configuration of each node is essentially the same; it is shown in Table 1. Hardware environment: Intel(R) Pentium(R) 4 CPU with a main frequency of 3.0 GHz, 1 GB RAM, and 80 GB of available hard disk space. Software environment: the operating system is Windows XP with Cygwin simulating a Linux environment, the distributed cluster platform is Hadoop, the programming tools are the JDK and Eclipse, and the programming language is Java.

Table 1 Configuration environment of each node

3.2 Parallel computing

The MapReduce framework performs two steps for each requested job. The first step divides the job into several map tasks and assigns them to different computing nodes. The original input of a map task is the input file; after the map phase, intermediate files consistent with the format of the final output are generated. After all map tasks are completed, the job enters the reduce stage, which merges these intermediate files. By adding this intermediate file generation step, the distributed algorithm greatly improves its flexibility and guarantees distributed scalability; these characteristics give it great potential for massive data processing in the big data era.
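Since the experiments run a MapReduce-based single-source shortest path algorithm written in Java, a minimal sketch of one iteration of such an algorithm is given below as an illustration of the map-then-reduce pattern just described. The tab-separated record format, unit edge weights, and class names are assumptions made for this sketch, and the job driver (analogous to the WordCount driver above) is omitted; this is not the paper's actual implementation.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// One iteration of a single-source shortest path computation in MapReduce.
// Assumed input line format: "nodeId <TAB> distance <TAB> neighbour1,neighbour2,..."
// where distance is Long.MAX_VALUE for nodes not yet reached. Job driver omitted.
public class SsspIteration {

    public static class SsspMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            String[] parts = value.toString().split("\t");
            String node = parts[0];
            long dist = Long.parseLong(parts[1]);
            String adj = parts.length > 2 ? parts[2] : "";

            // Re-emit the node record so the reducer can rebuild the graph structure.
            ctx.write(new Text(node), new Text("NODE\t" + dist + "\t" + adj));

            // Relax edges: each neighbour may be reachable at dist + 1 (unit weights).
            if (dist < Long.MAX_VALUE && !adj.isEmpty()) {
                for (String nb : adj.split(",")) {
                    ctx.write(new Text(nb), new Text("DIST\t" + (dist + 1)));
                }
            }
        }
    }

    public static class SsspReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text key, Iterable<Text> values, Context ctx)
                throws IOException, InterruptedException {
            long best = Long.MAX_VALUE;
            String adj = "";
            for (Text v : values) {
                String[] parts = v.toString().split("\t");
                if (parts[0].equals("NODE")) {
                    adj = parts.length > 2 ? parts[2] : "";
                }
                best = Math.min(best, Long.parseLong(parts[1]));
            }
            // Output keeps the input format so the next iteration can consume it.
            ctx.write(key, new Text(best + "\t" + adj));
        }
    }
}
```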

3.3 Data scheduling

In the NESTOR framework, I/O data is read in two places. One is on the mapper side, where key-value pairs are provided to the map() function: the file reading module reads data from the local disk according to the task requirements, generates key-value pairs, and passes them as parameters to the map() function. The other is on the reducer side, where the system similarly reads data from disk according to the computing task and generates the parameters required by the reduce() function, namely the key and the value list corresponding to that key. This implementation allows multiple processing tasks to be active at the same time; in fact, NESTOR uses the same approach so that DealDataJob and WriteFileJob can both be on standby once the Collector has started.
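A hypothetical sketch of this mapper-side reading path is shown below: a file-reading task turns local records into key-value pairs while several map-style workers stay on standby and consume them concurrently. The class, queue and job names are invented for illustration and are not NESTOR's real API.

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.AbstractMap.SimpleEntry;
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: a reader thread produces key-value pairs from a local file
// while two map-style workers wait on the same queue at the same time.
// Expects the input file path as the first command-line argument.
public class MapperSideReader {

    private static final Map.Entry<String, String> POISON = new SimpleEntry<>("", "");

    public static void main(String[] args) throws Exception {
        BlockingQueue<Map.Entry<String, String>> queue = new ArrayBlockingQueue<>(1024);
        ExecutorService pool = Executors.newFixedThreadPool(3);

        // Reader: produce (lineNumber, line) pairs from the local input file.
        pool.submit(() -> {
            try {
                long lineNo = 0;
                for (String line : Files.readAllLines(Paths.get(args[0]))) {
                    queue.put(new SimpleEntry<>(Long.toString(lineNo++), line));
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                try { queue.put(POISON); queue.put(POISON); } catch (InterruptedException ignored) { }
            }
        });

        // Two map-style workers standing by for key-value pairs at the same time.
        for (int w = 0; w < 2; w++) {
            pool.submit(() -> {
                try {
                    while (true) {
                        Map.Entry<String, String> kv = queue.take();
                        if (kv == POISON) break;
                        // Stand-in for map(key, value).
                        System.out.println(kv.getKey() + " -> " + kv.getValue().length());
                    }
                } catch (InterruptedException ignored) {
                }
            });
        }
        pool.shutdown();
    }
}
```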

3.4 Data scalability experiment

This experiment verifies how the running time of the PSON algorithm changes as the size of the test data set gradually increases, while the hardware and software configuration of the Hadoop platform remains unchanged. Because the experiment requires multiple data sets of different sizes, 10 different test data sets were generated; the number of transactions ranges from 1 million to 50 billion, and the corresponding data set size grows from 4 to 450 GB.

4 Results and discussion

The MAE values on the 100 K, 1 m and 10 m data sets are shown in Fig. 3. In the mixing results on the 100 K data set, the results of each algorithm fluctuate considerably because the training set contains relatively little data. When the three results are mixed, the mixed result is generally better than the three individual results, and the MAE drops by 0.01–0.1. On the 1 m data set, the MAE of the three results before mixing is mainly concentrated between 0.79 and 0.82, and the fluctuation is not very large; after mixing, the MAE lies between 0.72 and 0.76, a reduction of about 0.06 compared with before mixing. When the amount of data increases to 10 m, the item-based result is slightly better than on the 1 m data set and does not fluctuate noticeably, while the two user-based results show no significant decline. When the three results are mixed, the MAE decreases significantly, but compared with the item-based result the decrease is less obvious than on the 1 m data set, at only about 0.04.

Fig. 3 MAE values on the 100 K, 1 m and 10 m data sets
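MAE, the metric reported in Fig. 3, is the mean absolute difference between predicted and actual ratings. The short sketch below computes it over a handful of invented sample ratings.

```java
// Minimal sketch of the MAE metric: the mean absolute difference between
// predicted and actual ratings. Sample ratings are invented.
public final class Mae {

    static double mae(double[] predicted, double[] actual) {
        double sum = 0.0;
        for (int i = 0; i < predicted.length; i++) {
            sum += Math.abs(predicted[i] - actual[i]);
        }
        return sum / predicted.length;
    }

    public static void main(String[] args) {
        double[] predicted = {3.6, 4.1, 2.8, 4.9};
        double[] actual    = {4.0, 4.0, 2.0, 5.0};
        System.out.printf("MAE = %.3f%n", mae(predicted, actual));
    }
}
```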

Figure 4 shows the running time of the three main steps of the QUBIC algorithm after its parallel transformation based on the MapReduce computing model, as the number of computing nodes changes. When the number of computing nodes increases from 2 to 4, the running time drops significantly; as the number of nodes continues to increase, the reduction in running time becomes less and less significant. In other words, as the number of computing nodes grows, the running time curve gradually flattens. This is because, as the number of nodes increases, both the communication between nodes in the Hadoop system and the synchronization and control operations among all nodes increase. Moreover, because the Reducer nodes must wait for all Mapper nodes to finish before starting their computation, increasing the number of nodes also directly delays the start of the Reduce phase. These factors increase the load of the whole system and affect the overall running time.

Fig. 4 Running time

Figure 5 shows the distribution of data blocks under the scoring value strategy. As the number of data blocks grows, the number of blocks obtained by each node increases at a stable rate. Since the performance of DataNode1 is better than that of DataNode2 and DataNode3, its score is higher than that of the other two nodes, so it is given priority when a block is placed and obtains significantly more blocks than the other nodes. With 100, 200 and 300 data blocks, DataNode1 obtains 13, 25 and 44 more blocks than DataNode2, and 7, 15 and 27 more than DataNode3. The blocks obtained by DataNode1 account for 40%, 40% and 42.3% of the total, compared with 27%, 27.5% and 27.6% for DataNode2 and 33%, 32.5% and 33.3% for DataNode3. As DataNode1 accumulates blocks it gradually loses the priority of being selected first, and other nodes replace it as the preferred location for storing blocks. The consequence of this trend is that each node obtains a number of blocks close to a common average, so the performance differences between nodes do not cause a load imbalance that would affect subsequent task execution.

Fig. 5 Data block distribution under the scoring value strategy
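The behaviour described above can be illustrated with the following hypothetical sketch of score-based block placement: each block goes to the node with the highest effective score, and a node's effective score falls as it accumulates blocks, so the fastest node receives the largest share without starving the others. The base scores and penalty factor are invented for illustration and are not the paper's actual parameters.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of score-based block placement: pick the DataNode with
// the highest effective score for each block, where the effective score drops as a
// node accumulates blocks. Base scores and the penalty factor are invented.
public class ScoredPlacement {

    public static void main(String[] args) {
        Map<String, Double> baseScore = new HashMap<>();
        baseScore.put("DataNode1", 1.00);   // best-performing node
        baseScore.put("DataNode2", 0.70);
        baseScore.put("DataNode3", 0.85);

        Map<String, Integer> placed = new HashMap<>();
        double penaltyPerBlock = 0.004;     // illustrative load penalty per stored block

        for (int b = 0; b < 300; b++) {
            String best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (Map.Entry<String, Double> e : baseScore.entrySet()) {
                double eff = e.getValue() - penaltyPerBlock * placed.getOrDefault(e.getKey(), 0);
                if (eff > bestScore) {
                    bestScore = eff;
                    best = e.getKey();
                }
            }
            placed.merge(best, 1, Integer::sum);
        }
        System.out.println(placed);          // DataNode1 receives the largest share
    }
}
```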

The Sort job in static and dynamic network environments is shown in Table 2. On average, Sort jobs run twice as long under a dynamic network as under a static one; in that case MapReduce's data locality mechanism also loses its meaning. Unlike the Wordcount job, the Sort job demands both processor and memory. With 1 processor, increasing the memory from 1 to 2 GB reduces the job running time by 23%; with 1 GB of memory, increasing the number of processors from 1 to 2 reduces it by 28%. However, once there are 2 processors or 2 GB of memory, further increases in processor or memory have no obvious effect on the running time, so the Sort job is both computation-intensive and data-intensive. With 3 processors, adjusting the memory has little effect on the job, the same behaviour as for the Wordcount job, which is caused by MapReduce's static slot mechanism. The virtual machine should therefore be configured with 2 processors and 2 GB of memory; with this setting the Sort job performs best and its running time is shortest.

Table 2 Sort jobs in static and dynamic network environments

The parallel speedup test results are shown in Table 3. The speedup does not necessarily keep increasing as the graph scale grows; instead there is a peak, and once that peak is reached the speedup decreases. This article attributes the decrease to the growing number of Map and Reduce tasks, which increases communication and synchronization between nodes.

Table 3 Parallel speedup test result data
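The parallel speedup reported in Table 3 is the ratio of the serial running time to the parallel running time, S(n) = T(1)/T(n). The sketch below computes it for a set of invented timings.

```java
// Minimal sketch of the parallel speedup: S(n) = T_serial / T_parallel(n).
// All timing values are invented for illustration.
public final class Speedup {

    static double speedup(double serialSeconds, double parallelSeconds) {
        return serialSeconds / parallelSeconds;
    }

    public static void main(String[] args) {
        double serial = 1200.0;                     // hypothetical serial run time (s)
        double[] parallel = {640.0, 350.0, 210.0};  // hypothetical times on 2, 4, 8 nodes
        int nodes = 2;
        for (double t : parallel) {
            System.out.printf("%d nodes: speedup = %.2f%n", nodes, speedup(serial, t));
            nodes *= 2;
        }
    }
}
```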

Figure 6 compares the execution time of the different stages of the execution process with Mars. For applications that require a Map preprocessing pass, such as string matching (SM) and matrix multiplication (MM), the figure shows that preprocessing in Mars can take 7–40% of the total time. Preprocessing accounts for about 7% of the total time in MM, whereas in SM it takes more than 38%, because in the MM workload the size of each Map task's output is fixed and can be obtained quickly, while in SM the output size of each Map task is variable, so the whole file must be traversed during preprocessing to determine it. In Mars, because an array is used, key-value pairs with the same key must be grouped after the Map phase and then handed over to the Reduce phase. For the final data output stage, because zero-copy memory in shared memory is used, GPU and CPU devices can access this space at the same time, eliminating the need to copy data back and forth.

Fig. 6 Execution time of the different stages compared with Mars

Table 4 shows the running time comparison on small data sets. The experimental results show that the parallel algorithm in this paper is not suitable for data sets that are too small: its running time is longer than that of the serial algorithm, whereas as the data set grows the serial algorithm eventually fails to produce results. When the data set is too small, the overhead that the MapReduce framework requires for creating tasks, scheduling tasks, and network transmission is high relative to the computing work, making the parallel algorithm slower than the serial one. This experiment also indirectly illustrates the necessity of parallelizing the attribute reduction algorithm for large data sets.

Table 4 Comparison of running time of small data sets

5 Conclusions

GPUs have attracted more and more attention in general-purpose computing because of their many cores, high parallelism, and high internal bandwidth, and GPU-based big data processing platforms are deployed in major data processing centers around the world. In this paper, by combining the parallel computing features and processing mechanism of MapReduce, the computation over relational data is transformed into the key-value form suitable for MapReduce, so that the high scalability of MapReduce and the characteristics of the parallel computing process are combined, the computing power of the cluster is fully exploited, and the efficiency of aggregation operations is greatly improved. Hadoop is an open source parallel computing platform that implements the MapReduce parallel computing model. Based on the two parallel algorithms we propose, this paper builds a prototype parallel information retrieval system over user log files and verifies, through comprehensive experiments, the correctness and effectiveness of the proposed method for parallelizing serial information retrieval algorithms. The results of the prototype system show that the two kinds of parallel information retrieval algorithms proposed in this paper not only have ideal scalability and speedup performance, but also achieve ideal accuracy and effectiveness when processing large-scale user log files.

Availability of data and materials

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.


Abbreviations

GPU: Graphics processing unit
CPU: Central processing unit
RAM: Random access memory
MAE: Mean absolute error
SM: String matching
MM: Matrix multiplication

References

1. P. Shan, H. Lv, L. Yu et al., A multisensor data fusion method for ball screw fault diagnosis based on convolutional neural network with selected channels. IEEE Sens. J. 20(14), 7896–7905 (2020)
2. J.W. Hu, B.Y. Zheng, C. Wang et al., A survey on multi-sensor fusion based obstacle detection for intelligent ground vehicles in off-road environments. Front. Inf. Technol. Electron. Eng. 21(5), 675–692 (2020)
3. D. Pan, H. Liu, D. Qu et al., Human falling detection algorithm based on multisensor data fusion with SVM. Mob. Inf. Syst. 2020(7), 1–9 (2020)
4. Y. Zhou, X. Tao, Z. Yu et al., Train-movement situation recognition for safety justification using moving-horizon TBM-based multisensor data fusion. Knowl.-Based Syst. 177(1), 117–126 (2019)
5. K.I. Shah, S. Abbas, M.A. Khan et al., Autonomous parking-lots detection with multi-sensor data fusion using machine deep learning techniques. CMC-Tech Sci. Press 66(2), 1595–1612 (2020)
6. T.G. Akshaya, S. Sreeja, Multi-sensor data fusion for aerodynamically controlled vehicle based on FGPM. IFAC-PapersOnLine 53(1), 591–596 (2020)
7. C.M.D. Farias, L. Pirmez, G. Fortino et al., A multi-sensor data fusion technique using data correlations among multiple applications. Future Gener. Comput. Syst. 92, 109–118 (2019)
8. P. Ferrer-Cid, J.M. Barcelo-Ordinas, J. Garcia-Vidal et al., Multisensor data fusion calibration in IoT air pollution platforms. IEEE Internet Things J. 7(4), 3124–3132 (2020)
9. A. Paulino, L. Guimaraes, E.H. Shiguemori, Assessment of noise impact on hybrid adaptive computational intelligence multisensor data fusion applied to real-time UAV autonomous navigation. IEEE Lat. Am. Trans. 18(2), 295–302 (2020)
10. B. Malakar, B.K. Roy, Train localization using an adaptive multisensor data fusion technique. Transport 34(4), 508–516 (2019)
11. Q. Xiao, Y. Zhao, W. Huan, Multi-sensor data fusion for sign language recognition based on dynamic Bayesian network and convolutional neural network. Multimed. Tools Appl. 78(11), 15335–15352 (2019)
12. D. Li, C. Shen, X. Dai et al., Research on data fusion of adaptive weighted multi-source sensor. Comput. Mater. Continua 61(3), 1217–1231 (2019)
13. L. Wang, Y. Chen, Z. Zhao et al., A multi-modal health data fusion and analysis method based on body sensor network. Int. J. Serv. Technol. Manag. 25(5/6), 474–491 (2019)
14. K.O. Min, O.S. Kwon, B.G. Choi et al., A study on the accuracy analysis of land cover classification using fusion method of aerial multi-sensor data in coastal area. J. Korean Soc. Geospatial Inf. Sci. 28(1), 11–24 (2020)
15. X. Du, A. Zare, Multiresolution multimodal sensor fusion for remote sensing data with label uncertainty. IEEE Trans. Geosci. Remote Sens. 58(4), 2755–2769 (2020)
16. Y. Wang, G. Zheng, X. Wang, Development and application of a goaf-safety monitoring system using multi-sensor information fusion. Tunn. Undergr. Space Technol. 94, 103112.1–103112.15 (2019)
17. E. Anastasiou, A. Castrignano, K. Arvanitis et al., A multi-source data fusion approach to assess spatial-temporal variability and delineate homogeneous zones: a use case in a table grape vineyard in Greece. Sci. Total Environ. 684, 155–163 (2019)
18. M.K. Villareal, A.F. Tongco, Multi-sensor fusion workflow for accurate classification and mapping of sugarcane crops. Eng. Technol. Appl. Sci. Res. 9(3), 4085–4091 (2019)
19. I.S. Weon, S.G. Lee, Environment recognition based on multi-sensor fusion for autonomous driving vehicles. J. Inst. Control 25(2), 125–131 (2019)
20. J. Yan, Z. Xu, X. Luo et al., Feedback-based target localization in underwater sensor networks: a multisensor fusion approach. IEEE Trans. Signal Inf. Process. Over Netw. 5(1), 168–180 (2019)
21. R.P. Palanisamy, B.J. Jung, S.H. Sim et al., Quasi real-time and continuous non-stationary strain estimation in bottom-fixed offshore structures by multimetric data fusion. Smart Struct. Syst. 23(1), 61–69 (2019)
22. N. Ghoroghchian, M. Mirmohseni, M. Nasiri-Kenari, Abnormality detection and monitoring in multi-sensor molecular communication. IEEE Trans. Mol. Biol. Multi-Scale Commun. 5(2), 68–83 (2020)
23. J. Qu, C. Wu, Q. Li et al., Human fall detection algorithm design based on sensor fusion and multi-threshold comprehensive judgment. Sens. Mater. 32(4), 1209–1221 (2020)
24. J.H. Aheto, X. Huang, X. Tian et al., Multi-sensor integration approach based on hyperspectral imaging and electronic nose for quantitation of fat and peroxide value of pork meat. Anal. Bioanal. Chem. 412(5), 1169–1179 (2020)
25. J. Gabela, A. Kealy, S. Li et al., The effect of linear approximation and Gaussian noise assumption in multi-sensor positioning through experimental evaluation. IEEE Sens. J. 19(22), 10719–10727 (2019)
26. Z. Zhu, Y. Arezki, N. Cai et al., Data fusion-based method for the assessment of minimum zone for aspheric optics. Comput.-Aided Des. Appl. 18(2), 309–327 (2020)
27. R. Caballero-Aguila, A. Hermoso-Carazo, J. Linares-Perez, Networked distributed fusion estimation under uncertain outputs with random transmission delays, packet losses and multi-packet processing. Signal Process. 156, 71–83 (2019)


Acknowledgements

We thank Google for providing some relevant information for this article.

Funding

This article received no funding.

Author information


Contributions

DC: editing; LL: implementation of the research process; YC: data collection; ZQ: design conception. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Li Li.

Ethics declarations

Ethics approval and consent to participate

This research was conducted ethically, and consent to participate was obtained.

Consent for publication

The image materials quoted in this article are not subject to copyright restrictions, and their sources have been indicated.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article has been retracted. Please see the retraction notice for more detail: https://doi.org/10.1186/s13634-022-00914-y

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article


Cite this article

Chang, D., Li, L., Chang, Y. et al. RETRACTED ARTICLE: Implementation of MapReduce parallel computing framework based on multi-data fusion sensors and GPU cluster. EURASIP J. Adv. Signal Process. 2021, 77 (2021). https://doi.org/10.1186/s13634-021-00787-7
