Spears and shields: attacking and defending deep model co-inference in vehicular crowdsensing networks
EURASIP Journal on Advances in Signal Processing volume 2021, Article number: 114 (2021)
Abstract
Vehicular CrowdSensing (VCS) networks are one of the key scenarios for future 6G ubiquitous artificial intelligence. In a VCS network, vehicles are recruited to collect urban data and perform deep model inference. Due to the limited computing power of vehicles, we deploy a device-edge co-inference paradigm to improve the inference efficiency in the VCS network. Specifically, the vehicular device and the edge server each keep a part of the deep model but work together to perform the inference by sharing intermediate results. Although vehicles keep the raw data locally, privacy issues still exist once attackers obtain the shared intermediate results and recover the raw data in some way. In this paper, we validate this possibility by conducting a systematic study on privacy attack and defense in the co-inference of VCS networks. The main contributions are threefold: (1) We take the road sign classification task as an example to demonstrate how an attacker reconstructs the raw data without any knowledge of the deep model. (2) We propose a model-perturbation defense against such attacks that injects random Laplace noise into the deep model. A theoretical analysis shows that the proposed defense mechanism achieves \(\epsilon\)-differential privacy. (3) We further propose a Stackelberg game-based incentive mechanism that attracts vehicles to participate in the co-inference by compensating their privacy loss in a satisfactory way. The simulation results show that our proposed defense mechanism significantly reduces the effects of the attacks and that the proposed incentive mechanism is very effective.
Introduction
With the development of the Internet of Vehicles (IoV), more and more vehicle-to-everything (V2X) communication technologies emerge, such as IEEE-based dedicated short-range communication (DSRC) technologies and 3GPP-based LTE technologies [1, 2]. These technologies support stable wireless communication between vehicles and roadside infrastructures [3, 4]. Meanwhile, artificial intelligence is becoming more and more popular. In the future 6G vision, there is no doubt that deep neural models will appear everywhere, including in Vehicular CrowdSensing (VCS) networks, one of the key scenarios of future 6G ubiquitous artificial intelligence. In a VCS network, Service Providers (SPs) often require vehicular devices to collect image data of urban regions as the input of deep models and to carry out the model inference [5]. With the inference results, the SPs are able to make better decisions and provide higher-quality services [6,7,8].
However, the existing device-only and edge-only inference paradigms can hardly support the deployment of deep model inference in VCS networks. On the one hand, the vehicular device has to collect the streaming image data quickly and use it to perform the model inference while driving at a high velocity. On the other hand, a more complicated deep model consumes more computation resources and energy. The device-only inference paradigm, which runs the model inference on vehicular devices, struggles to meet these two requirements due to the limited computation resources and battery capacity of the vehicular devices [9, 10]. Meanwhile, the edge-only inference paradigm allows the vehicular devices to upload their collected data and executes the model inference on edge servers, but it brings considerable communication costs because of the transmission of large-volume raw data [11,12,13]. Besides, the risk of privacy disclosure hinders the vehicles from sharing their raw data and from being willing to join the VCS networks.
To overcome the above disadvantages of these model inference paradigms, the device-edge co-inference paradigm was proposed [14]. In this paradigm, a deep model is partitioned into two parts. One part is stored in the vehicular device, while the other part is kept by the edge server. The vehicular device runs the first part of the deep model and uploads the intermediate output. The edge server uses the intermediate data as the input of the rest of the deep model and obtains the final result [15]. Previous works focused on finding an appropriate partition point that yields a small intermediate output and places the model layers with a large computation load on the edge server side [14, 16]. This can largely reduce communication costs and improve model inference efficiency. Besides, sharing the intermediate model output instead of the raw data alleviates the privacy disclosure issues to a certain extent [17].
Nonetheless, the device-edge co-inference paradigm still has privacy issues. An attacker can reconstruct the raw data by obtaining and analyzing the intermediate model output [18]. Thus, designing defense mechanisms against privacy attacks is necessary. The works in [16, 19] chose a deeper layer as the partitioning point, which outputs a smaller and less informative intermediate result. The works in [19, 20] used a dropout mechanism to randomly set some pixels of the input data or intermediate data to zero, which reduces the information carried in the intermediate output. The works in [19,20,21] injected randomly generated noise into the input data or intermediate output, which perturbs the reconstruction performance. These defense mechanisms rely heavily on experimental experience and lack theoretical guidance.
In this paper, we introduce the device-edge co-inference paradigm into VCS networks. Through the collaboration of vehicular devices and edge servers, the execution efficiency of deep model inference applications in VCS networks is significantly improved. Besides, we use a black-box reconstruction attack, which is able to recover the raw input data based only on the intermediate output, to validate the privacy vulnerability of the co-inference. We then design a model-perturbation defense mechanism against such attacks that adds randomly generated noise to perturb the intermediate output. A differential privacy (DP) theoretical analysis is provided to verify that the proposed mechanism guarantees \(\epsilon\)-DP. Compared with the common defense approach that directly adds noise to the intermediate data [19, 21, 22], our proposed mechanism enables a lower privacy budget, i.e., a higher privacy protection level. We further design a Stackelberg game-based incentive mechanism that motivates vehicular devices to join the deep model inference and compensates for their economic loss from potential privacy leakage. The experimental results on the road sign classification dataset demonstrate that our proposed defense mechanism can significantly defend against the reconstruction attack and that the proposed incentive mechanism is effective.
In summary, the main contributions of this paper are as follows.

We introduce the device-edge co-inference paradigm into VCS networks. The vehicular devices and edge servers work together to improve the efficiency of deep model inference and reduce the communication costs in VCS networks.

We adopt a black-box reconstruction attack to recover the input image in the road sign classification task. This demonstrates the privacy vulnerability of the co-inference paradigm, which limits its deployment in VCS networks.

We then propose a model perturbation mechanism that perturbs the model parameters to defend against the reconstruction attack. A DP theoretical analysis is provided as theoretical guidance to alleviate privacy breaches in the co-inference of VCS networks.

We further propose a Stackelberg game-based incentive mechanism. The mechanism quantifies the privacy loss of each vehicle by using DP properties and compensates the vehicles in a satisfactory way, thus attracting them to join the co-inference in VCS networks.
The remainder of this paper is organized as follows. Section 2 introduces the co-inference paradigm for VCS networks and the reconstruction attack upon it. Section 3 describes the proposed model perturbation defense and the related analysis. Section 4 formulates the incentive mechanism design problem as a Stackelberg game. Section 5 provides a detailed game-theoretic analysis. The simulation results and performance evaluation are shown in Sect. 6. Finally, concluding remarks are made in Sect. 7.
Privacy vulnerability of co-inference
In this section, we first present the device-edge co-inference paradigm in VCS networks and then adopt the black-box reconstruction attack to demonstrate its privacy vulnerability.
Co-inference in vehicular crowdsensing networks
Figure 1 gives an overview of the co-inference paradigm of VCS networks over one urban region. We describe the main entities as follows.

Vehicles run across the urban area and are recruited by the SP to execute crowdsensing and deep model inference. Each vehicle is equipped with sensors and a vehicular device. The sensors collect data at a high rate, while the vehicular device has the processing and storage resources to execute a portion of the deep model using the acquired data as input.

Edge Server is rented by the SP to carry out the deep model inference. The edge server has significantly more computational capabilities and capacity than vehicular devices, allowing it to operate the more complex parts of the deep model. The deep model’s final calculation outputs assist the SP in making intelligent decisions.

Deep Model is partitioned by the SP into two parts. The first part, which has a lower computation load, is kept in the vehicular devices, while the remainder, which has a higher computation load, is kept in the edge server. The intermediate data, i.e., the output of the vehicular device's model part, is sent to the edge server, which uses it as the input of its own model part to obtain the final results.
The cooperation between vehicular devices and edge servers can reduce communication costs and deep model inference delay. The vehicular devices do not share raw data in the co-inference paradigm, but they are still vulnerable to privacy leakage, as illustrated by the following privacy attack.
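The data flow above can be sketched in a few lines (the two model parts below are illustrative stand-ins, not the paper's CNN; only the intermediate result crosses the network):

```python
# Minimal sketch of the device-edge split: the vehicular device evaluates
# the first model part on the raw input, and only the intermediate result
# is shared with the edge server.
def device_part(x):            # f_theta1, kept on the vehicular device
    return [2.0 * xi + 1.0 for xi in x]

def edge_part(v):              # f_theta2, kept on the edge server
    return sum(v) / len(v)     # stand-in for the remaining layers and head

raw_image = [0.1, 0.4, 0.7]    # private raw data, never leaves the vehicle
intermediate = device_part(raw_image)   # shared with the edge server
result = edge_part(intermediate)        # final inference output
print(result)
```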
Spears: black-box reconstruction attack
As shown in Fig. 2, the vehicular device stores the first part of the layers, \(f_{\theta 1}\), while the edge server keeps the remainder, \(f_{\theta 2}\). The vehicular device inputs the raw image data \(x_0\) and obtains the intermediate output \(v_0 = f_{\theta 1}(x_0)\). The attacker acts as an eavesdropper in VCS networks and intercepts the vehicular device's shared \(v_0\). We consider a black-box setting in which the attacker does not know the structure or parameters of the deep model \(f_{\theta 1}\), but can query the model, i.e., use arbitrary data X as input to run the model and observe the intermediate outputs \(V = f_{\theta 1}(X)\). This assumption holds when the SP releases its APIs to other users. The black-box setting is more realistic than the white-box setting, in which the structure and parameters of the deep model \(f_{\theta 1}\) are accessible, and it is harder for the attacker to reconstruct the image data under the black-box setting than under the white-box setting [18]. To this end, the attacker can train an inverse model \(g_{\omega } = f^{-1}_{\theta 1}\) to learn the inverse mapping from the intermediate output V to the original input X.
The detailed attack process is shown in Algorithm 1 and includes three phases. In the observation phase, the attacker uses a set of samples \(X=\{x_1,\dots ,x_n\}\) to query \(f_{\theta 1}\) and gets \(V=\{f_{\theta 1}(x_1),\dots ,f_{\theta 1}(x_n)\}\). Here we consider that X follows the same distribution as \(x_0\). In the learning phase, the attacker trains the inverse model \(g_{\omega }\) with V as inputs and X as targets. The loss function is given as

$$L(\omega ) = \frac{1}{n}\sum _{i=1}^{n}\left\Vert g_{\omega }\left( f_{\theta 1}(x_i)\right) - x_i\right\Vert _2^2.$$
Note that the structure of \(g_{\omega }\) need not be related to that of \(f_{\theta 1}\). In our experiment, we use a totally different structure. In the reconstruction phase, the attacker inputs \(v_0\) into the trained inverse model and obtains the recovered image \(x^{\prime}_0=g_{\omega }(v_0)\).
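The three phases can be sketched end to end on a toy black box. The linear `f_theta1` and the least-squares inverse below are illustrative stand-ins for the paper's CNN and trained inverse network:

```python
import random

# Hypothetical stand-in for the on-device model part f_theta1: the attacker
# treats it as a black box and may only query it.
def f_theta1(x):
    return 3.0 * x + 1.0  # unknown to the attacker

# Observation phase: query the black box with samples drawn from the same
# distribution as the private input x0.
random.seed(0)
X = [random.uniform(-1.0, 1.0) for _ in range(200)]
V = [f_theta1(x) for x in X]

# Learning phase: fit a linear inverse model g_omega(v) = a*v + b by
# ordinary least squares with V as inputs and X as targets.
n = len(V)
mean_v = sum(V) / n
mean_x = sum(X) / n
a = sum((v - mean_v) * (x - mean_x) for v, x in zip(V, X)) / \
    sum((v - mean_v) ** 2 for v in V)
b = mean_x - a * mean_v

# Reconstruction phase: invert the intercepted intermediate output v0.
x0 = 0.42                      # private input (unknown to the attacker)
v0 = f_theta1(x0)              # intercepted intermediate output
x0_rec = a * v0 + b            # recovered input
print(abs(x0_rec - x0))        # near zero: the inverse mapping was learned
```

Because the toy black box is exactly linear, the fitted inverse recovers the private input almost perfectly; for a CNN, the same pipeline trains a deconvolutional inverse network instead.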
Differential privacy method
In this section, we first introduce preliminaries on the DP method and then describe the proposed model perturbation defense mechanism. A theoretical analysis is given to show that the proposed defense mechanism provides \(\epsilon\)-DP.
Preliminaries on differential privacy
DP is a statistical framework for measuring privacy risk. The definition of \(\epsilon\)-DP is as follows [23].
Definition 1
(\(\epsilon\)-DP) Given two neighboring inputs X and \(X^{\prime}\) which differ in a single sample, a randomized mechanism f provides \(\epsilon\)-DP if

$$\Pr [f(X) \in S] \le e^{\epsilon } \Pr [f(X^{\prime}) \in S]$$

for any set of outputs S.
According to the above definition, given any neighboring inputs X and \(X^{\prime}\) to the mechanism f, the probability of their outputs falling in the same range S is characterized by \(\epsilon\). The parameter \(\epsilon\) denotes the privacy budget. A smaller \(\epsilon\) leads to better privacy protection for the vehicular device. That is to say, given any output, the attacker cannot tell whether it was generated by inputting X or \(X^{\prime}\).
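To make the definition concrete, the sketch below evaluates, for a Laplace-noised query (a standard \(\epsilon\)-DP mechanism), the worst-case output-density ratio between two neighboring inputs; it never exceeds \(e^{\epsilon }\). The query values, sensitivity, and \(\epsilon\) are illustrative:

```python
import math

def laplace_pdf(x, mean, scale):
    # Density of Lap(mean, scale) at x.
    return math.exp(-abs(x - mean) / scale) / (2.0 * scale)

epsilon = 0.5
sensitivity = 1.0              # neighboring inputs change the query by <= 1
scale = sensitivity / epsilon  # Laplace noise scale

f_X, f_X_prime = 10.0, 11.0    # query answers on two neighboring inputs
# Scan a grid of possible outputs v: the density ratio between the two
# noised answers is bounded by exp(epsilon) everywhere, which is exactly
# the epsilon-DP inequality of Definition 1.
worst = max(laplace_pdf(v / 10.0, f_X, scale) /
            laplace_pdf(v / 10.0, f_X_prime, scale)
            for v in range(0, 220))
print(worst, math.exp(epsilon))  # worst-case ratio equals e^epsilon here
```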
A common defense approach is to introduce randomly generated noise drawn from some specific probability distribution into the output \(f(\cdot )\) [24]. One distribution widely used in DP is the Laplace distribution, denoted by \(Lap(0,\sigma )\), where 0 is the mean and \(\sigma\) is the scale. The Laplace Mechanism [20] is defined by

$$\mathcal {M}(X) = f(X) + Lap(0,\sigma ).$$
It provides \(\epsilon\)-DP when the added noise is sampled from \(Lap(0,\sigma )\) with \(\sigma \ge \frac{\bigtriangleup f}{\epsilon }\). Here \(\bigtriangleup f\) is the global sensitivity, i.e., the maximum difference \(\Vert f(X)-f(X^{\prime})\Vert _1\) over any pair of neighboring inputs X and \(X^{\prime}\).
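A pure-Python sketch of the Laplace Mechanism (the counting query, its sensitivity of 1, and \(\epsilon =0.5\) are illustrative choices):

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling: if u ~ Uniform(-1/2, 1/2), then
    # -scale * sign(u) * ln(1 - 2|u|) ~ Lap(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(f_value, sensitivity, epsilon, rng):
    # Adding Lap(0, sensitivity / epsilon) noise to f's output yields
    # epsilon-DP for a query with the given global (L1) sensitivity.
    return f_value + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(0)
# Example: a counting query (sensitivity 1) released with epsilon = 0.5.
noisy = [laplace_mechanism(100.0, 1.0, 0.5, rng) for _ in range(20000)]
avg = sum(noisy) / len(noisy)
print(avg)  # repeated releases concentrate around the true value 100
```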
Shields: model perturbation defense
Instead of adding noise directly into the intermediate output \(f_{\theta 1}(X)\) [19, 21, 22], we introduce noise into the deep model parameters \(\theta 1\), as shown in Fig. 2. This avoids drastic changes of the intermediate output and reduces the negative effect on the subsequent inference. The challenge is that the sensitivity is difficult to calculate. Hence, we limit the maximum magnitude of the parameters to a fixed bound G so that the sensitivity can be calculated. The clipping operation is carried out during the deep model training [23, 24]. Here, the parameters \(\theta 1\) are scaled to \(\theta 1/\max \left( 1,\frac{\Vert \theta 1\Vert _{\infty }}{G} \right)\), which means that no parameter magnitude exceeds G. Thus, the sensitivity can be approximately calculated as

$$\bigtriangleup f = \max _{\theta 1,\theta 1^{\prime}} \Vert \theta 1 - \theta 1^{\prime}\Vert \le 2G,$$

since every clipped parameter lies in \([-G,G]\).
Next, we add noise randomly sampled from the Laplace distribution \(Lap(0,\frac{2G}{\epsilon })\) to the bounded parameters. The process of the defense mechanism is shown in Algorithm 2. We show that Algorithm 2 gives the \(\epsilon\)-DP guarantee in Theorem 1.
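A pure-Python sketch of this clipping-then-noising step (the parameter values and bound G are illustrative; a real deployment would operate on the actual layer weights):

```python
import math
import random

def perturb_parameters(theta, G, epsilon, rng):
    # Norm clipping as in the defense: scale theta by max(1, ||theta||_inf / G)
    # so that every parameter magnitude is at most G.
    max_abs = max(abs(w) for w in theta)
    clipped = [w / max(1.0, max_abs / G) for w in theta]
    # With each clipped parameter in [-G, G], the sensitivity is 2G, so
    # Lap(0, 2G/epsilon) noise gives the epsilon-DP guarantee of Theorem 1.
    scale = 2.0 * G / epsilon
    perturbed = []
    for w in clipped:
        u = rng.random() - 0.5   # inverse-CDF Laplace sampling
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        perturbed.append(w + noise)
    return perturbed

rng = random.Random(42)
theta1 = [0.3, -1.7, 0.05, 2.4]           # toy layer parameters
theta1_dp = perturb_parameters(theta1, G=1.0, epsilon=5.0, rng=rng)
print(theta1_dp)                          # clipped and noised parameters
```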
Theorem 1
Given the sensitive data X and the deep model \(f_{\theta 1}\), Algorithm 2 satisfies \(\epsilon\)-DP when its injected Laplace noise \(Lap(0,\sigma )\) is chosen with \(\sigma = \frac{2G}{\epsilon }\).
Proof
Given any adjacent inputs X and \(X^{\prime}\),
According to Definition 1, we have \(\epsilon = \frac{2G}{\sigma }\) and Algorithm 2 satisfies \(\epsilon\)-DP. The proof is now completed. \(\square\)
Incentive mechanism for co-inference
In this section, we first describe the utility functions of the vehicular devices and the edge server. Then, we formulate the incentive mechanism design problem as a two-stage Stackelberg game problem. We theoretically prove that the game has a unique equilibrium.
Utility of vehicle and edge server
We consider a set of vehicles N running across the urban area and recruited by the SP for collecting data of targets. Each vehicle i runs at a constant speed \(v_i \in [20,60]\) km/h. According to [5], a slower vehicle can stay in an area longer and capture more image data. According to [25], the quality of the image data captured by a slower vehicle is also higher, i.e., the images suffer less blur caused by sensor shake. In other words, the data collected by a vehicle with a lower \(v_i\) has higher quality and quantity. The vehicles then feed the collected data into the deep model on their vehicular devices.
As aforementioned, the model perturbation defense mechanism provides privacy protection for vehicular devices. In practice, many deep model inference scenarios need to protect the privacy of vehicular devices. For example, in target recognition scenarios, such as recognizing vehicle license plates, pedestrians, and road signs, the images contain sensitive information that may expose the drivers' driving habits or the vehicles' moving paths. Without privacy protection, this may cause economic loss to the drivers. In this paper, we consider deep model inference for the road sign classification scenario and conduct experiments to measure the inference performance under the defense mechanism. The result in Fig. 5 shows that the inference accuracy increases with a larger privacy budget \(\epsilon\). We fit the inference accuracy curve as
which is used to measure a vehicular device’s inference performance with its chosen \(\epsilon _i\).
We can see that the SP expects the vehicles to choose a higher \(\epsilon _i\) for a higher inference accuracy. Thus, the SP designs a reward R to compensate the privacy loss of the vehicles. Given the reward from the SP, each vehicle's profit is related to its contribution characterized by \(\epsilon _i\) and \(v_i\). Similar to [5, 26], the profit of i is denoted as \(R\left( \frac{\epsilon _i}{v_i}/\sum _{j \in N} \frac{\epsilon _j}{v_j} \right)\). The cost of i is defined as its potential privacy loss. Based on the DP analysis, a lower \(\epsilon\) means a higher privacy protection level. If the privacy is breached, the economic loss of i is denoted as \(c_i+\frac{e}{v_i}\), where \(\frac{e}{v_i}\) is the expense of executing crowdsensing tasks at driving speed \(v_i\), e is the unit expense, and \(c_i\) is i's estimated value of its collected data. Thus, the utility function of i is given as

$$U_i = R\frac{\epsilon _i/v_i}{\sum _{j \in N} \epsilon _j/v_j} - \left( c_i+\frac{e}{v_i}\right) \epsilon _i.$$
The SP needs to aggregate the inference results from all the vehicles to alleviate individual errors from crowdsensing. Here we consider that the aggregated inference performance is the weighted sum over all the vehicles, which is given as
where \(\frac{1}{v_i}\) is the weight. The SP puts higher weight on the slower vehicles since they usually collect data with higher quality and quantity for model inference [5, 25]. Thus, the utility function of the SP is given as
where \(\lambda\) is the conversion factor from inference performance to profits, and R is the reward that the SP offers to the vehicles.
Game formulation
A Stackelberg game is a decision-making tool that contains a leader player and several follower players [26]. Each player is rational and wants only to maximize its own utility. The follower players can observe the decision made by the leader player and choose their strategies accordingly [27]. A non-cooperative game is a decision-making tool in which several rational players compete with each other [28]; they make their decisions simultaneously.
In this paper, the problem is how the SP designs the reward R to compensate the privacy loss of the vehicles, while each vehicle chooses its privacy budget \(\epsilon _i\) to complete the co-inference tasks of VCS networks under privacy protection. We formulate the problem as a Stackelberg game, where the SP is the leader player and the vehicles are the follower players. Each vehicle has to decide its optimal response \(\epsilon _i^*\) given R and the other vehicles' privacy strategies. Mathematically, the problem is written as Problem 1:
The value of \(\epsilon _i\) affects both the inference accuracy and the privacy protection level. A higher \(\epsilon _i\) brings a higher inference accuracy but also a higher risk of privacy leakage. Here, \(\epsilon _{\min }\) ensures that the inference accuracy under the model perturbation defense is acceptable, and \(\epsilon _{\max }\) ensures that the minimum privacy protection requirement is satisfied. The SP can steer the expected inference performance by controlling R and aims to find the optimal reward \(R^*\) that balances the profit gained from deep model inference against the expense of rewarding. Mathematically, the problem is written as Problem 2:
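In summary, the two coupled problems take the following form (a sketch in the text's notation; the constraint sets follow the discussion above):

```latex
% Stage II -- each vehicle i, given R and the other vehicles' strategies:
\textbf{Problem 1:}\quad
\max_{\epsilon_i \in [\epsilon_{\min},\, \epsilon_{\max}]}
  U_i\!\left(\epsilon_i, \pmb{\epsilon}_{-i}; R\right)

% Stage I -- the SP, anticipating the vehicles' best responses:
\textbf{Problem 2:}\quad
\max_{R \ge 0}\; U_S\!\left(R, \pmb{\epsilon}^{*}(R)\right)
```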
The Stackelberg game is made up of Problems 1 and 2. The objective of this game is to find a Stackelberg Equilibrium (SE) point from which neither the SP nor the vehicles have any motivation to deviate. The definition of an SE is as follows [28].
Definition 2
Let \(\epsilon _i^*\) be the optimal solution for Problem 1 and \(R^*\) be the optimal solution for Problem 2. The point \((R^*, \pmb {\epsilon ^{*}})\) is an SE for the proposed Stackelberg game if it satisfies
where \(\pmb {\epsilon ^*}\) with entry \(\epsilon _i^*\) is the set of best responses of the vehicles.
The vehicles compete with each other for the reward and thus form a non-cooperative subgame. There may exist a Nash Equilibrium (NE) point where no vehicle can enhance its utility by changing its strategy unilaterally. The definition of an NE is as follows [28].
Definition 3
Let \((\epsilon _i^*,\pmb {\epsilon _{-i}^*} )\) be the solution for Problem 1, where \(\pmb {\epsilon _{-i}^*}\) is the set of the best responses of the vehicles except i. The point \((\epsilon _i^*,\pmb {\epsilon _{-i}^*} )\) is an NE point for the proposed non-cooperative subgame if it satisfies
Game theory method
In this section, we use the backward induction method of game theory to analyze the two games and find the NE and the SE.
Subgame Nash equilibrium
We use the backward induction method to analyze the existence and uniqueness of the NE in the subgame.
Theorem 2
There exists an NE point in the non-cooperative subgame among the vehicles.
Proof
The strategy space of each vehicle is nonempty, convex, and compact. From Eq. (7), \(U_i\) is continuous with respect to \(\epsilon _i\) in \([\epsilon _{\min },\epsilon _{\max }]\). We take the first and second derivatives of \(U_i\) with respect to \(\epsilon _i\) and obtain

$$\frac{\partial U_i}{\partial \epsilon _i} = \frac{R a_i}{v_i\left( a_i+\frac{\epsilon _i}{v_i}\right) ^2} - k_i, \qquad \frac{\partial ^2 U_i}{\partial \epsilon _i^2} = -\frac{2R a_i}{v_i^2\left( a_i+\frac{\epsilon _i}{v_i}\right) ^3} < 0,$$

where \(a_i = {\sum _{j \in N \backslash \{ i \}}} \frac{\epsilon _j}{v_j}\) and \(k_i = c_i+\frac{e}{v_i}\).
We prove that \(U_i\) is strictly concave with respect to \(\epsilon _i\). Thus, the NE point exists. The proof is now completed. \(\square\)
Let \(\frac{\partial U_i}{\partial \epsilon _i}=0\) and we get the best response function of i as

$$\epsilon _i^* = \left\{ \begin{array}{ll} \epsilon _{\min }, &{} R \le {\underline{R}}, \\ v_i\left( \sqrt{\frac{R a_i}{v_i k_i}} - a_i\right) , &{} {\underline{R}}< R < {\overline{R}}, \\ \epsilon _{\max }, &{} R \ge {\overline{R}}, \end{array}\right.$$
where \(a_i = {\sum _{j \in N \backslash \{ i \}}} \frac{\epsilon _j}{v_j}\), \(k_i = c_i+\frac{e}{v_i}\), \({\underline{R}} = \frac{k_iv_i\left( a_i+\frac{\epsilon _{\min }}{v_i} \right) ^2}{a_i}\), and \({\overline{R}} = \frac{k_iv_i\left( a_i+\frac{\epsilon _{\max }}{v_i} \right) ^2}{a_i}\).
Theorem 3
At the NE point of the non-cooperative subgame among the vehicles, the best response of i has a closed-form expression given by

$$\epsilon _i^* = \frac{(N-1)R v_i}{\sum _{j \in N} v_j k_j}\left( 1 - \frac{(N-1)v_i k_i}{\sum _{j \in N} v_j k_j}\right) .$$
Proof
According to Eq. (15), we have

$$\left( \sum _{j \in N} \frac{\epsilon _j}{v_j}\right) ^2 = \frac{R a_i}{v_i k_i}, \quad \text {i.e.,} \quad a_i = \frac{v_i k_i}{R}\left( \sum _{j \in N} \frac{\epsilon _j}{v_j}\right) ^2.$$
By computing the summation of this expression over all the vehicles, we obtain

$$(N-1)\sum _{j \in N} \frac{\epsilon _j}{v_j} = \frac{\sum _{i \in N} v_i k_i}{R}\left( \sum _{j \in N} \frac{\epsilon _j}{v_j}\right) ^2, \quad \text {i.e.,} \quad \sum _{j \in N} \frac{\epsilon _j}{v_j} = \frac{(N-1)R}{\sum _{i \in N} v_i k_i}.$$
We substitute Eq. (18) into Eq. (17) and get

$$\frac{\epsilon _i}{v_i} = \sum _{j \in N} \frac{\epsilon _j}{v_j} - a_i = \frac{(N-1)R}{\sum _{j \in N} v_j k_j}\left( 1 - \frac{(N-1)v_i k_i}{\sum _{j \in N} v_j k_j}\right) ,$$
which can be rewritten as Eq. (16). The proof is now completed. \(\square\)
Theorem 4
The NE of the non-cooperative subgame is unique if the following condition is satisfied.
Proof
According to Eqs. (17) and (18), we have
Given R offered by the SP and the privacy strategies \(\pmb {\epsilon _{-i}}\) offered by the other vehicles, the best response function in Eq. (15) is denoted as \(\epsilon _i^* = B_i(\pmb {\epsilon _{-i}},R)\). The NE is unique if \(B(\pmb {\epsilon },R) = (B_1,B_2,\dots ,B_N)\) can be proved to be a standard function, which meets the following conditions [5, 29].

Positivity: \(B(\pmb {\epsilon },R) > 0\),

Monotonicity: For all \(\pmb {\epsilon }\) and \(\pmb {\epsilon }^{\prime}\), \(B(\pmb {\epsilon },R) \ge B(\pmb {\epsilon }^{\prime},R)\) if \(\pmb {\epsilon } \ge \pmb {\epsilon }^{\prime}\),

Scalability: For all \(\mu > 1\), \(\mu B(\pmb {\epsilon },R) > B(\mu \pmb {\epsilon },R)\).
We first analyze the positivity. According to Eq. (20), we have \(\frac{N-1}{\sum _{i \in N} v_i k_i} < \frac{1}{2 v_i k_i}\), and thus conclude that
We further conclude that \({\sum _{j \in N \backslash \{ i \}}} \frac{\epsilon _j}{v_j} < \sqrt{\frac{R}{v_i k_i} {\sum _{j \in N \backslash \{ i \}}} \frac{\epsilon _j}{v_j} }\). Thus, we have
which satisfies the positivity condition.
We then analyze the monotonicity. Taking the first derivative of \(B_i(\pmb {\epsilon _{-i}},R)\) with respect to \(\epsilon _j\), \(j \in N \backslash \{ i \}\), we have
According to Eq. (22), which states that \({\sum _{j \in N \backslash \{ i \}}} \frac{\epsilon _j}{v_j} < \frac{R}{4 v_i k_i}\), we have \(\frac{1}{2} \sqrt{\frac{R}{v_i k_i} \cdot \frac{1}{{\sum _{j \in N \backslash \{ i \}}} \frac{\epsilon _j}{v_j}}} - 1 > 0\). Thus, the monotonicity condition is satisfied.
Finally we analyze the scalability. We have
Therefore, \(\mu B_i(\pmb {\epsilon _{-i}},R) > B_i(\mu \pmb {\epsilon _{-i}},R)\) is always satisfied for \(\mu > 1\), so the scalability condition holds. \(B(\pmb {\epsilon },R)\) meets the three conditions and is a standard function. Thus, the uniqueness of the NE is proved. The proof is now completed.
\(\square\)
Generally, we can obtain the NE point by using the best response dynamics [29]. Problem 1 is thus resolved, and we analyze the SE in the following.
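As a sanity check, the best response dynamics can be simulated on a toy instance, assuming each vehicle's utility takes the form \(U_i = R\,(\epsilon _i/v_i)/\sum _j(\epsilon _j/v_j) - k_i\epsilon _i\) with \(k_i = c_i + e/v_i\) (all numbers below are illustrative and chosen so that the NE is interior and positive):

```python
import math

# Toy instance: speeds v_i, data values c_i, unit expense e, posted reward R.
v = [5.0, 8.0, 10.0]
c = [6.0, 3.75, 3.0]
e, R = 10.0, 1800.0
k = [ci + e / vi for ci, vi in zip(c, v)]   # k_i = c_i + e / v_i
N = len(v)

def best_response(i, eps):
    # First-order condition of U_i with a_i = sum_{j != i} eps_j / v_j:
    #   eps_i = v_i * (sqrt(R * a_i / (v_i * k_i)) - a_i)
    a_i = sum(eps[j] / v[j] for j in range(N) if j != i)
    return v[i] * (math.sqrt(R * a_i / (v[i] * k[i])) - a_i)

# Best response dynamics: vehicles update sequentially until convergence.
eps = [1.0] * N
for _ in range(500):
    for i in range(N):
        eps[i] = best_response(i, eps)

# Closed-form NE of Theorem 3: with S = sum_j v_j * k_j,
#   eps_i* = (N-1) * R * v_i / S * (1 - (N-1) * v_i * k_i / S)
S = sum(vi * ki for vi, ki in zip(v, k))
eps_star = [(N - 1) * R * vi / S * (1 - (N - 1) * vi * ki / S)
            for vi, ki in zip(v, k)]
print(eps)   # converges to eps_star
```

With these numbers, the sequential updates settle at \(\pmb {\epsilon }^* = (50, 80, 100)\), matching the closed form of Theorem 3.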
Stackelberg equilibrium
We substitute \(\epsilon _i^*\) into the objective function of Problem 2 and have
where \(h_i = \frac{b v_i(N-1)}{\sum _{i \in N} v_i k_i} \left( 1 - \frac{v_i k_i(N-1)}{\sum _{i \in N} v_i k_i} \right)\).
Theorem 5
There exists a unique SE for the proposed Stackelberg game among the SP and the vehicles.
Proof
The strategy space of the SP is nonempty, convex, and compact. \(U_S\) is continuous with respect to R in \([0,+\infty )\). We take the second derivative of Eq. (26) with respect to R and get
Thus, \(U_S\) is strictly concave with respect to R and the SP has a unique optimal strategy \(R^*\) in maximizing its utility. According to Theorem 4, given any reward from the SP, the vehicles always choose a unique set of best responses \(\epsilon ^*\) to reach the NE. Therefore, when the SP chooses \(R^*\), all players determine their optimal strategies. This satisfies the condition in Definition 2 that there exists a unique SE point. The proof is now completed. \(\square\)
The objective function of Problem 2 is a concave function and can be solved by existing convex optimization algorithms (e.g., the dual decomposition algorithm [30]). If the SP has global information, such as \(c_i\), it can find \(R^*\) in a centralized manner. However, to protect the privacy of each vehicle, [31] inspires us to design a distributed algorithm that performs the optimization without any private information. The proposed incentive mechanism is carried out cyclically. At each cycle, the SP and the vehicles reach an agreement via Algorithm 3. Under the agreement, the vehicles finish the co-inference tasks by choosing a privacy budget and obtaining the corresponding rewards. In Algorithm 3, the SP updates the reward value by using a gradient-assisted search, i.e., Eq. (28), and offers it to the vehicles. Each vehicle receives the reward value, determines its privacy budget based on Eq. (15), and returns its strategy to the SP. The iterations continue until the change of the updated reward value is less than a preset threshold. Note that the communication delay is negligible due to the small size of the shared information. The frequency of updates, i.e., the number of iterations to reach convergence, depends on the learning rate and the threshold. When executing the algorithm, the vehicles communicate wirelessly with an access point (AP). Each vehicular node uploads its strategy information, i.e., its privacy budget \(\epsilon _i\), to the nearest AP, and other vehicular nodes can query this strategy information with negligible delay.
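A sketch of this distributed loop (the accuracy curve below is a hypothetical stand-in for the fitted curve of Fig. 5, and all instance numbers are illustrative; the SP only ever observes the reported \(\epsilon _i\), never \(c_i\) or \(k_i\)):

```python
import math

# Toy instance: speeds, data values, unit expense, profit coefficient lambda.
v = [5.0, 8.0, 10.0]
c = [6.0, 3.75, 3.0]
e, lam = 10.0, 400.0
k = [ci + e / vi for ci, vi in zip(c, v)]
N = len(v)
S = sum(vi * ki for vi, ki in zip(v, k))

def vehicle_responses(R):
    # Each vehicle privately evaluates its NE best response to the posted
    # reward (closed form of Theorem 3) and returns only eps_i.
    return [(N - 1) * R * vi / S * (1 - (N - 1) * vi * ki / S)
            for vi, ki in zip(v, k)]

def accuracy(eps):
    # Hypothetical stand-in for the fitted accuracy curve of Fig. 5
    # (increasing and saturating in the privacy budget).
    return 1.0 - math.exp(-eps / 2.0)

def sp_utility(R):
    eps = vehicle_responses(R)
    return lam * sum(accuracy(ei) / vi for ei, vi in zip(eps, v)) - R

# Gradient-assisted search (Algorithm 3 sketch): nudge R along a
# finite-difference gradient estimate until the update falls below a threshold.
R, lr, delta = 10.0, 50.0, 1e-4
for _ in range(10000):
    grad = (sp_utility(R + delta) - sp_utility(R - delta)) / (2 * delta)
    R_new = max(0.0, R + lr * grad)
    if abs(R_new - R) < 1e-8:
        break
    R = R_new
print(R)   # the agreed reward R*
```

Because the SP's objective is concave in R, the search settles at a unique reward; each round mimics one SP-to-vehicles message exchange of Algorithm 3.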
Results and discussion
In this section, we conduct experiments to evaluate the performance of the black-box reconstruction attack and the proposed model perturbation defense mechanism. We also conduct simulations to evaluate the performance of the proposed incentive mechanism.
Attack and defense evaluation
Experimental setup
We conduct experiments on the GTSRB dataset for road sign recognition, which consists of 39,208 training samples and 12,630 testing samples. We adopt a Convolutional Neural Network (CNN) as the deep model, with 6 convolution layers and 2 fully connected layers. Each convolution layer has 64 channels and a kernel size of 3. There is a max-pooling layer after every two convolution layers. The model is partitioned at the 2nd, 4th, and 6th convolution layers. We use ADAM as the optimizer and set the learning rate to 0.001. The adopted inverse model consists of two deconvolution layers with one ReLU layer between them. Each deconvolution layer has 64 channels and a kernel size of 3.
Measurement metrics
We use three metrics to measure the attack and defense performance. Mean-Square Error (MSE) measures pixel-wise similarity. Peak Signal-to-Noise Ratio (PSNR) quantifies the pixel-level reconstruction quality of the images. Structural Similarity Index (SSIM) reflects the human perceptual similarity of two images according to their luminance, contrast, and structure; it ranges within [0, 1], where 1 denotes the most similar.
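MSE and PSNR are straightforward to compute (the pixel lists below are illustrative; SSIM involves local luminance, contrast, and structure statistics and is omitted here):

```python
import math

def mse(img_a, img_b):
    # Mean-square error over flat pixel lists (values in [0, 255]).
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_val=255.0):
    # Peak signal-to-noise ratio in dB; higher means a closer reconstruction.
    m = mse(img_a, img_b)
    if m == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / m)

original = [52, 120, 200, 33]    # toy 4-pixel "image"
recovered = [50, 118, 205, 30]   # toy reconstruction
print(mse(original, recovered), psnr(original, recovered))
```

Note the inverse relation between the two: as MSE grows, PSNR falls, which is why a harder attack shows up as higher MSE and lower PSNR at once.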
Attack performance
Figure 3 and Table 1 show the reconstruction performance of the black-box reconstruction attack. As shown in Fig. 3, when the deep model is split at a shallower layer, the reconstructed images have high fidelity. When the deep model is split at a deeper layer, the reconstructed images lose some details and become blurry. Even if the split point is at the 6th layer, the details of the road signs can still be clearly identified. Table 1 shows that the reconstructed images have higher MSE and lower PSNR and SSIM when the deep model is split at a deeper layer. Thus, with a deeper split layer, the black-box reconstruction attack becomes harder.
Defense performance
Figure 4 and Table 2 show the reconstruction performance under the model perturbation defense mechanism with different privacy budgets \(\epsilon\). We set the split point at the 4th layer and randomly sample noise with privacy budget \(\epsilon\) set to 5, 10, and 500, respectively. As shown in Fig. 4, with a lower \(\epsilon\), the reconstructed images become blurrier and lose more details. When \(\epsilon =5\), the details of the road signs are hard to identify. Table 2 shows that the images recovered under defense with a lower \(\epsilon\) have higher MSE and lower PSNR and SSIM. The deep model inference accuracy decreases when \(\epsilon\) becomes smaller, because the injected noise also perturbs the inference results. Generally, the model perturbation defense mechanism reduces the quality of image reconstruction while only slightly decreasing the inference performance. These results offer an intuitive guide for the SP and the vehicles in balancing inference performance and privacy protection.
Inference performance
To better characterize the influence of the model perturbation mechanism on the inference performance, we set different privacy budgets and observe the inference accuracy degradation. Figure 5 shows that the inference accuracy drops with the decrease of \(\epsilon\). We also fit a curve to the observed results.
Incentive mechanism performance
Simulation setup
We consider 5–30 vehicles being recruited for executing VCS and co-inference. The driving speed is randomly chosen in the range of [5, 15] m/s. The profit coefficient is \(\lambda \in [1000,1250]\). The estimated value of privacy is \(c_i \in [5,10]\), and the expense for joining the crowdsensing task is \(e=10\).
Performance comparison
Figure 6 shows the performance comparison among the centralized approach, the distributed approach, and the linear approach. The centralized approach assumes that the SP knows each vehicle's estimated value of its collected data, so the SP can use a convex optimization algorithm to directly calculate \(R^*\) in a centralized manner. Our proposed distributed algorithm allows the SP to approach the SE point in a distributed manner without any private information. The linear approach also assumes that the SP has no knowledge of the vehicles' private information, but the given rewards are linear in the privacy budgets of the vehicles. As shown in Fig. 6, with the centralized approach, the SP obtains the highest utility, because knowing the estimated value of the collected data lets it directly find the optimal solution. With the linear approach, the SP obtains the lowest utility while the vehicles obtain the highest utilities, because each vehicle's reward is linear in its own privacy budget, without relation to the other vehicles' strategies. The performance of our proposed distributed algorithm is much better than that of the linear approach and only slightly worse than that of the centralized approach, with an average gap of \(0.05\%\). In general, our distributed algorithm enables the SP to obtain the highest utility when it has no knowledge of the vehicles' private information. In addition, the results show that the utilities of both the SP and the vehicles increase with a growing profit coefficient \(\lambda\), because a higher \(\lambda\) allows the SP to gain more profit from deep model inference, so the SP provides a higher reward.
The impact of privacy value
Figure 7 shows the performance of the incentive mechanism with respect to the estimated value of privacy \(c_i\). When the estimated privacy value is higher, the utilities of both the SP and the vehicles decrease. The reason is that a vehicle that places a higher value on its privacy chooses a smaller \(\epsilon\) to protect it, and the profit of the SP becomes lower accordingly.
The impact of the number of vehicles
Figure 8 shows the performance of the incentive mechanism with respect to the number of vehicles. As the number of vehicles grows, \(U_S\) increases while \(U_i\) decreases. The reason is that more vehicles can collect more data for the SP's deep model inference, but the larger number of vehicles also intensifies the competition among them.
The impact of velocity
Figure 9 shows the performance of the incentive mechanism with respect to the velocity of the vehicles. When the vehicles drive faster, the utilities of both the SP and the vehicles decrease. The reason is that driving at a higher speed reduces a vehicle's weight in the deep model inference, so the SP obtains a lower profit and in turn gives lower rewards to the vehicles.
Conclusion
In this paper, we adopted the device-edge co-inference paradigm to improve the inference efficiency in VCS networks and studied its privacy preservation. We evaluated the black-box reconstruction attack, which recovers the input data of the vehicular devices, and proposed a model-perturbation defense mechanism based on DP theory against the attack. We designed a Stackelberg game-based incentive mechanism that encourages the vehicular devices to participate in the co-inference by compensating their privacy loss. Experimental results demonstrated the effectiveness of our proposed defense mechanism and incentive mechanism.
Abbreviations
VCS: Vehicular CrowdSensing
SP: Service Provider
DP: Differential Privacy
References
1. H. Zhou, W. Xu, J. Chen, W. Wang, Evolutionary V2X technologies toward the Internet of Vehicles: challenges and opportunities. Proc. IEEE 108(2), 308–323 (2020)
2. X. Liu, X. Zhang, NOMA-based resource allocation for cluster-based cognitive industrial Internet of Things. IEEE Trans. Ind. Inf. 16(8), 5379–5388 (2019)
3. X. Liu, X. Zhang, Rate and energy efficiency improvements for 5G-based IoT with simultaneous transfer. IEEE Internet Things J. 6(4), 5971–5980 (2018)
4. X. Liu, X. Zhang, M. Jia, L. Fan, W. Lu, X. Zhai, 5G-based green broadband communication system design with simultaneous wireless information and power transfer. Phys. Commun. 28, 130–137 (2018)
5. M. Wu, X. Huang, B. Tan, R. Yu, Hybrid sensor network with edge computing for AI applications of connected vehicles. J. Internet Technol. 21(5), 1503–1516 (2020)
6. X. Huang, P. Li, R. Yu, Y. Wu, K. Xie, S. Xie, FedParking: a federated learning based parking space estimation with parked vehicle assisted edge computing. IEEE Trans. Veh. Technol. 70(9), 9355–9368 (2021)
7. L. He, K. He, Towards optimally efficient search with deep learning for large-scale MIMO systems. IEEE Trans. Commun. PP(99), 1–12 (2022)
8. S. Tang, L. Chen, Computational intelligence and deep learning for next-generation edge-enabled industrial IoT. IEEE Trans. Netw. Sci. Eng. PP(99), 1–12 (2022)
9. X. Huang, R. Yu, D. Ye, L. Shu, S. Xie, Efficient workload allocation and user-centric utility maximization for task scheduling in collaborative vehicular edge computing. IEEE Trans. Veh. Technol. 70(4), 3773–3787 (2021)
10. L. Chen, Physical-layer security on mobile edge computing for emerging cyber-physical systems. Comput. Commun. PP(99), 1–12 (2022)
11. J. Xia, D. Deng, D. Fan, A note on implementation methodologies of deep learning-based signal detection for conventional MIMO transmitters. IEEE Trans. Broadcast. 66(3), 744–745 (2020)
12. K. He, Ultra-reliable MU-MIMO detector based on deep learning for 5G/B5G-enabled IoT. Phys. Commun. 43, 1–7 (2020)
13. J. Xia, L. Fan, W. Xu, X. Lei, X. Chen, G.K. Karagiannidis, A. Nallanathan, Secure cache-aided multi-relay networks in the presence of multiple eavesdroppers. IEEE Trans. Commun. 67(11), 7672–7685 (2019)
14. Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, L. Tang, Neurosurgeon: collaborative intelligence between the cloud and mobile edge. ACM SIGARCH Comput. Archit. News 45(1), 615–629 (2017)
15. E. Li, L. Zeng, Z. Zhou, X. Chen, Edge AI: on-demand accelerating deep neural network inference via edge computing. IEEE Trans. Wireless Commun. 19(1), 447–457 (2019)
16. C. Shi, L. Chen, C. Shen, L. Song, J. Xu, Privacy-aware edge computing based on adaptive DNN partitioning, in 2019 IEEE Global Communications Conference (GLOBECOM), pp. 1–6 (2019)
17. M. Wu, X. Zhang, J. Ding, H. Nguyen, R. Yu, M. Pan, S.T. Wong, Evaluation of inference attack models for deep learning on medical data. arXiv preprint arXiv:2011.00177 (2020)
18. Z. He, T. Zhang, R.B. Lee, Model inversion attacks against collaborative inference, in Proceedings of the 35th Annual Computer Security Applications Conference, pp. 148–162 (2019)
19. Z. He, T. Zhang, R.B. Lee, Attacking and protecting data privacy in edge-cloud collaborative inference systems. IEEE Internet Things J. 8(12), 9706–9716 (2020)
20. J. Wang, J. Zhang, W. Bao, X. Zhu, B. Cao, P.S. Yu, Not just privacy: improving performance of private deep learning in mobile cloud, in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2407–2416 (2018)
21. T. Titcombe, A.J. Hall, P. Papadopoulos, D. Romanini, Practical defences against model inversion attacks for split neural networks. arXiv preprint arXiv:2104.05743 (2021)
22. J. Ryu, Y. Zheng, Y. Gao, S. Abuadbba, J. Kim, D. Won, S. Nepal, H. Kim, C. Wang, Can differential privacy practically protect collaborative deep learning inference for the Internet of Things? arXiv preprint arXiv:2104.03813 (2021)
23. M. Abadi, A. Chu, I. Goodfellow, H.B. McMahan, I. Mironov, K. Talwar, L. Zhang, Deep learning with differential privacy, in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pp. 308–318 (2016)
24. M. Wu, D. Ye, J. Ding, Y. Guo, R. Yu, M. Pan, Incentivizing differentially private federated learning: a multi-dimensional contract approach. IEEE Internet Things J. 8(13), 10639–10651 (2021)
25. D. Ye, R. Yu, M. Pan, Z. Han, Federated learning in vehicular edge computing: a selective model aggregation approach. IEEE Access 8, 23920–23935 (2020)
26. D. Yang, G. Xue, X. Fang, J. Tang, Incentive mechanisms for crowdsensing: crowdsourcing with smartphones. IEEE/ACM Trans. Netw. 24(3), 1732–1744 (2015)
27. X. Kang, S. Sun, J. Yang, Incentive mechanisms for motivating mobile data offloading in heterogeneous networks: a salary-plus-bonus approach. arXiv preprint arXiv:1802.02954 (2018)
28. Z. Xiong, S. Feng, D. Niyato, P. Wang, Z. Han, Edge computing resource management and pricing for mobile blockchain. arXiv preprint arXiv:1710.01567 (2017)
29. J. Lee, J. Guo, J.K. Choi, M. Zukerman, Distributed energy trading in microgrids: a game-theoretic model and its equilibrium analysis. IEEE Trans. Ind. Electron. 62(6), 3524–3533 (2015)
30. S. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)
31. W. Tushar, B. Chai, C. Yuen, D.B. Smith, K.L. Wood, Z. Yang, H.V. Poor, Three-party energy management with distributed energy resources in smart grid. IEEE Trans. Ind. Electron. 62(4), 2487–2498 (2014)
Acknowledgements
Not applicable
Funding
The work is supported in part by National Key R&D Program of China (No. 2020YFB1807802, 2020YFB1807800), National Natural Science Foundation of China (No. 61971148), Guangxi Natural Science Foundation, China (No. 2018GXNSFDA281013), and Foundation for Science and Technology Project of Guilin City (No. 201902143).
Author information
Contributions
MW and DY designed the incentive mechanism and conducted the simulations; MW designed the defense mechanisms, performed the privacy attack and defense experiments, and then wrote the manuscript. All authors discussed the results and revised the manuscript. The authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Wu, M., Ye, D., Zhang, C. et al. Spears and shields: attacking and defending deep model co-inference in vehicular crowdsensing networks. EURASIP J. Adv. Signal Process. 2021, 114 (2021). https://doi.org/10.1186/s13634-021-00822-7
Keywords
 Deep model co-inference
 Differential privacy
 Vehicular crowdsensing network
 Stackelberg game