Spears and shields: attacking and defending deep model co-inference in vehicular crowdsensing networks

Vehicular CrowdSensing (VCS) networks are a key scenario for future 6G ubiquitous artificial intelligence. In a VCS network, vehicles are recruited to collect urban data and perform deep model inference. Because the computing power of vehicles is limited, we deploy a device-edge co-inference paradigm to improve inference efficiency in the VCS network. Specifically, the vehicular device and the edge server each keep a part of the deep model but cooperate to perform the inference by sharing intermediate results. Although vehicles keep their raw data locally, privacy issues remain once attackers obtain the shared intermediate results and recover the raw data from them. In this paper, we validate this possibility by conducting a systematic study of privacy attack and defense in the co-inference of VCS networks. The main contributions are threefold: (1) We take the road sign classification task as an example to demonstrate how an attacker reconstructs the raw data without any knowledge of the deep model. (2) We propose a model-perturbation defense against such attacks that injects random Laplace noise into the deep model. A theoretical analysis shows that the proposed defense mechanism achieves ǫ-differential privacy. (3) We further propose a Stackelberg game-based incentive mechanism that attracts vehicles to participate in the co-inference by compensating their privacy loss in a satisfactory way. The simulation results show that our proposed defense mechanism significantly reduces the effect of the attacks and that the proposed incentive mechanism is effective.

Vehicular CrowdSensing (VCS) networks play a key role in the future 6G ubiquitous artificial intelligence. In a VCS network, Service Providers (SPs) typically require vehicular devices to collect image data of urban regions as the input of deep models and to carry out the model inference [5]. With the inference results, the SPs are able to make better decisions and provide higher-quality services [6][7][8].
However, the existing device-only and edge-only inference paradigms can hardly support the deployment of deep model inference in VCS networks. On the one hand, a vehicular device driving at high velocity has to collect streaming image data quickly and use them to perform model inference. On the other hand, a more complicated deep model consumes more computation resources and energy. The device-only inference paradigm, which runs the model inference entirely on vehicular devices, struggles to meet these two requirements due to the limited computation resources and battery capacity of vehicular devices [9,10]. Meanwhile, the edge-only inference paradigm allows vehicular devices to upload their collected data and executes the model inference on edge servers, but it incurs considerable communication costs because of the transmission of large-volume raw data [11][12][13]. Besides, the risk of privacy disclosure discourages vehicles from sharing their raw data and from joining VCS networks.
To overcome these disadvantages of existing model inference paradigms, the device-edge co-inference paradigm was proposed [14]. In this paradigm, a deep model is partitioned into two parts: one part is stored on the vehicular device, while the other is kept by the edge server. The vehicular device runs the first part of the deep model and uploads the intermediate output. The edge server uses the intermediate data as the input of the remaining part of the deep model and obtains the final result [15]. Previous works focused on finding an appropriate partition point that yields a small-size intermediate output and places the model layers with a heavy computation load on the edge server side [14,16]. This largely reduces communication costs and improves model inference efficiency. Moreover, sharing the intermediate model output instead of the raw data alleviates privacy disclosure issues to a certain extent [17].
Nonetheless, the device-edge co-inference paradigm still suffers from privacy issues. An attacker can reconstruct the raw data by obtaining and analyzing the intermediate model output [18]. Thus, designing defense mechanisms against privacy attacks is necessary. The work in [16,19] chose a deeper layer as the partitioning point, which outputs a smaller, less informative intermediate result. The work in [19,20] used a dropout mechanism to randomly set some pixels of the input data or intermediate data to zero, which reduces the information carried in the intermediate output. The work in [19][20][21] injected randomly generated noise into the input data or intermediate output, which degrades the reconstruction performance. However, these defense mechanisms rely heavily on experimental experience and lack theoretical guidance.
In this paper, we introduce the device-edge co-inference paradigm into VCS networks. Through the collaboration of vehicular devices and edge servers, the execution efficiency of deep model inference applications in VCS networks is significantly improved. Moreover, we use a black-box reconstruction attack, which recovers the raw input data based only on the intermediate output, to validate the privacy vulnerability of the co-inference. We then design a model-perturbation defense mechanism against such attacks that adds randomly generated noise to perturb the intermediate output. A differential privacy (DP) theoretical analysis verifies that the proposed mechanism guarantees ǫ-DP. Compared with the common defense approach that directly adds noise to the intermediate data [19,21,22], our proposed mechanism enables a lower privacy budget, i.e., a higher privacy protection level. We further design a Stackelberg game-based incentive mechanism that motivates vehicular devices to join the deep model inference and compensates them for the economic loss caused by potential privacy leakage. The experimental results on a road sign classification dataset demonstrate that our proposed defense mechanism can effectively defend against the reconstruction attack and that the proposed incentive mechanism is effective.
In summary, the main contributions of this paper are as follows.
• We introduce the device-edge co-inference paradigm into VCS networks. The vehicular devices and edge servers work together to improve the efficiency of deep model inference and reduce the communication costs in VCS networks.
• We adopt a black-box reconstruction attack to recover the input image in the road sign classification task. This demonstrates the privacy vulnerability of the co-inference paradigm, which limits its deployment in VCS networks.
• We then propose a model perturbation mechanism that perturbs the model parameters to defend against the reconstruction attack. A DP theoretical analysis is provided as theoretical guidance for alleviating privacy breaches in the co-inference of VCS networks.
• We further propose a Stackelberg game-based incentive mechanism. The mechanism quantifies the privacy loss of each vehicle by using DP properties and compensates vehicles in a satisfactory way, thus attracting them to join the co-inference in VCS networks.
The remainder of this paper is organized as follows. Section 2 introduces the co-inference paradigm for VCS networks and the reconstruction attack upon it. Section 3 describes the proposed model perturbation defense and the related analysis. Section 4 formulates the incentive mechanism design problem as a Stackelberg game. Section 5 presents the game-theoretic analysis. The simulation results and performance evaluation are shown in Sect. 6. Finally, concluding remarks are given in Sect. 7.

Privacy vulnerability of co-inference
In this section, we first present the device-edge co-inference paradigm in VCS networks and then adopt the black-box reconstruction attack to demonstrate its privacy vulnerability. Figure 1 gives an overview of a co-inference paradigm of VCS networks over one urban region. We describe the main entities as follows.

Co-inference in vehicular crowdsensing networks
• Vehicles run across the urban area and are recruited by the SP to execute crowdsensing and deep model inference. Each vehicle is equipped with sensors and a vehicular device. The sensors collect data at a high rate, while the vehicular device has the processing and storage resources to execute a portion of the deep model using the acquired data as input.
• Edge Server is rented by the SP to carry out the deep model inference. The edge server has significantly more computational capability and capacity than the vehicular devices, allowing it to run the more complex part of the deep model. The deep model's final outputs assist the SP in making intelligent decisions.
• Deep Model is partitioned by the SP into two parts. The first part, which has a lower computation load, is kept on the vehicular devices, while the remainder, which has a higher computation load, is kept on the edge server. The intermediate data, i.e., the output of the vehicular device's part of the deep model, is sent to the edge server, which uses it as input to run its part of the model and obtain the final results.
The cooperation between vehicular devices and edge servers can reduce communication costs and deep model inference delay. The vehicular devices do not share raw data in the co-inference paradigm, but they are still vulnerable to privacy leakage, as illustrated by the following privacy attack.

Spears: black-box reconstruction attack
As shown in Fig. 2, the vehicular device stores the first part of layers f_θ1, while the edge server keeps the remainder f_θ2. The vehicular device inputs the raw image data x_0 and obtains the intermediate output v_0 = f_θ1(x_0). The attacker acts as an eavesdropper in VCS networks and intercepts the vehicular device's shared v_0. We consider a black-box setting in which the attacker knows neither the structure nor the parameters of the deep model f_θ1. However, the attacker can query the model, i.e., use arbitrary data X as input to run the model and observe the intermediate outputs V = f_θ1(X). This assumption holds when the SP releases its APIs to other users. The black-box setting is more realistic than the white-box setting, in which the structure and parameters of the deep model f_θ1 are accessible, and it is harder for the attacker to reconstruct the image data under the black-box setting than under the white-box setting [18]. To this end, the attacker can train an inverse model g_ω that approximately inverts f_θ1. The detailed attack process is shown in Algorithm 1 and includes three phases. In the observation phase, the attacker uses a set of samples X = {x_1, ..., x_n} to query f_θ1 and gets V = {v_1, ..., v_n} with v_i = f_θ1(x_i). Here we consider that X follows the same distribution as x_0. In the learning phase, the attacker trains the inverse model g_ω with V as inputs and X as targets. The loss function is given as L(ω) = (1/n) Σ_{i=1}^{n} ||g_ω(v_i) − x_i||_2^2. Note that the structure of g_ω need not be related to that of f_θ1; in our experiment, we use a totally different structure. In the reconstruction phase, the attacker inputs v_0 into the trained inverse model and obtains the recovered image x̂_0 = g_ω(v_0).
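The three phases can be sketched in PyTorch as follows. This is an illustrative toy rather than the paper's setup: the architectures of f_θ1 and g_ω, the image size, and the number of training steps are placeholder assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the on-device model part f_theta1: in the black-box
# setting, the attacker can only query it, never inspect it.
f_theta1 = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1),
)

# Inverse model g_omega: its architecture is unrelated to f_theta1.
g_omega = nn.Sequential(
    nn.ConvTranspose2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 3, padding=1),
)

opt = torch.optim.Adam(g_omega.parameters(), lr=1e-3)

# Observation phase: query f_theta1 with surrogate images X.
X = torch.rand(16, 3, 32, 32)
with torch.no_grad():
    V = f_theta1(X)

# Learning phase: train g_omega to map intermediate outputs back to images.
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(g_omega(V), X)
    loss.backward()
    opt.step()

# Reconstruction phase: invert an intercepted intermediate output v0.
x0 = torch.rand(1, 3, 32, 32)
with torch.no_grad():
    v0 = f_theta1(x0)
    x0_rec = g_omega(v0)
```

In a real attack, X would be drawn from a public dataset of the same distribution as x_0 and g_ω trained to convergence; the structure of the loop is unchanged.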

Differential privacy method
In this section, we first introduce preliminaries on the DP method and then describe the proposed model perturbation defense mechanism. A theoretical analysis shows that the proposed defense mechanism provides ǫ-DP.

Preliminaries on differential privacy
DP is a statistical framework for measuring the privacy risk. The definition of ǫ-DP is as follows [23].
Definition 1 (ǫ-DP) Given two neighboring inputs X and X′ that differ in at most one sample, a randomized mechanism f provides ǫ-DP if

Pr[f(X) ∈ S] ≤ e^ǫ · Pr[f(X′) ∈ S]

for any set of outputs S. According to the above definition, for any neighboring inputs X and X′ to the mechanism f, the probability of their outputs falling in the same range S is characterized by ǫ. The parameter ǫ denotes the privacy budget. A smaller ǫ leads to better privacy protection for the vehicular device. That is to say, given any output, the attacker cannot tell whether it was generated by inputting X or X′.
A common defense approach is to introduce randomly generated noise drawn from some specific probability distribution into the output f(·) [24]. One distribution widely used in DP is the Laplace distribution, denoted by Lap(0, σ), where 0 is the mean and σ is the scale. The Laplace mechanism [20] is defined by

M(X) = f(X) + Lap(0, σ).

It provides ǫ-DP when the added noise is sampled from Lap(0, σ) with σ ≥ Δf/ǫ. Here Δf is the global sensitivity, i.e., the maximum difference between the outputs, ||f(X) − f(X′)||_1, over any pair of neighboring inputs X and X′.
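As a concrete illustration, the Laplace mechanism can be sketched in a few lines of NumPy. The query (a bounded mean), its sensitivity, and the data are hypothetical examples, not taken from the paper:

```python
import numpy as np

def laplace_mechanism(f_output, sensitivity, epsilon, rng):
    """Add Laplace noise with scale sigma = sensitivity / epsilon."""
    sigma = sensitivity / epsilon
    return f_output + rng.laplace(loc=0.0, scale=sigma, size=np.shape(f_output))

rng = np.random.default_rng(0)

# Hypothetical query: the mean of a dataset whose samples lie in [0, 1].
data = np.array([0.2, 0.9, 0.5, 0.7])
true_mean = data.mean()

# Changing one sample out of n moves the mean by at most 1/n.
sensitivity = 1.0 / len(data)

noisy_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0, rng=rng)
```

A smaller ǫ enlarges the noise scale σ and thus hides the true value better, at the cost of utility.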

Shields: model perturbation defense
Instead of adding noise directly to the intermediate output f_θ1(X) [19,21,22], we introduce noise into the deep model parameters θ1, as shown in Fig. 2. This avoids drastic changes of the intermediate output and reduces the negative effect on the subsequent inference. The challenge is that the sensitivity is difficult to calculate. Hence, we limit the maximum value of the parameters within a fixed bound G to make the sensitivity calculable. The clipping operation is carried out during deep model training [23,24].
Here, the parameters θ1 are bounded by θ1 ← θ1 / max(1, ||θ1||_∞ / G), which ensures that no parameter value is larger than G. Thus, the sensitivity can be approximately calculated as Δθ = 2G. Next, we add noise randomly sampled from the Laplace distribution Lap(0, 2G/ǫ) to the bounded parameters. The process of the defense mechanism is shown in Algorithm 2. We show that Algorithm 2 gives the ǫ-DP guarantee in Theorem 1.
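A minimal NumPy sketch of the model-perturbation defense; the clipping bound, privacy budget, and parameter array are illustrative placeholders:

```python
import numpy as np

def perturb_parameters(theta1, G, epsilon, rng):
    """Clip parameters into the infinity-norm ball of radius G,
    then add Laplace noise with scale 2G / epsilon."""
    # Clipping: theta1 <- theta1 / max(1, ||theta1||_inf / G)
    inf_norm = np.max(np.abs(theta1))
    theta1_clipped = theta1 / max(1.0, inf_norm / G)
    # With every parameter bounded by G, the sensitivity is at most 2G.
    scale = 2.0 * G / epsilon
    noise = rng.laplace(loc=0.0, scale=scale, size=theta1.shape)
    return theta1_clipped + noise

rng = np.random.default_rng(0)
theta1 = rng.normal(size=(64, 3, 3))   # hypothetical conv-layer weights
theta1_noisy = perturb_parameters(theta1, G=1.0, epsilon=10.0, rng=rng)
```

The perturbed parameters replace θ1 on the vehicular device before it computes and shares any intermediate output.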

Theorem 1 Given the sensitive data X and the deep model parameters θ1 bounded by G, Algorithm 2 satisfies ǫ-DP with ǫ = 2G/σ.

Proof Given any adjacent inputs X and X′, the corresponding bounded parameters satisfy ||θ1(X) − θ1(X′)||_1 ≤ 2G. For any output z of the Laplace-perturbed parameters, the ratio of the output probabilities is bounded as Pr[M(X) = z]/Pr[M(X′) = z] ≤ exp(||θ1(X) − θ1(X′)||_1/σ) ≤ exp(2G/σ). According to Definition 1, we have ǫ = 2G/σ, and Algorithm 2 satisfies ǫ-DP. The proof is now completed.
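The density-ratio bound used in this proof can be checked numerically. The following sanity check uses arbitrary example values for G and σ and two parameter values separated by the maximum distance 2G:

```python
import numpy as np

def laplace_pdf(z, mu, scale):
    # Density of Lap(mu, scale) evaluated at z.
    return np.exp(-np.abs(z - mu) / scale) / (2.0 * scale)

G, sigma = 1.0, 0.5
epsilon = 2.0 * G / sigma          # privacy budget claimed by Theorem 1

# Two clipped parameter values that differ by the maximum amount 2G.
theta, theta_prime = G, -G

z = np.linspace(-20, 20, 100001)
ratio = laplace_pdf(z, theta, sigma) / laplace_pdf(z, theta_prime, sigma)

# The ratio never exceeds exp(|theta - theta_prime| / sigma) = exp(epsilon).
max_ratio = ratio.max()
```

Over the whole grid the ratio stays at or below exp(ǫ), matching the bound in the proof.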

Incentive mechanism for co-inference
In this section, we first describe the utility functions of the vehicular devices and the edge server. Then, we formulate the incentive mechanism design problem as a two-stage Stackelberg game. We theoretically prove that the game has a unique equilibrium.

Utility of vehicle and edge server
We consider a set of vehicles N running across the urban area, recruited by the SP to collect data of targets. Each vehicle i runs at a constant speed v_i ∈ [20, 60] km/h. According to [5], a slower vehicle can stay in an area longer and capture more image data. According to [25], the quality of the image data captured by a slower vehicle is higher, i.e., the images suffer less blurring caused by sensor shake. In other words, the data collected by a vehicle with a lower v_i has higher quality and quantity. The vehicles then input the collected data to run the deep model on their vehicular devices.
As mentioned above, the model perturbation defense mechanism provides privacy protection for vehicular devices. In practice, many deep model inference scenarios need to protect the privacy of vehicular devices. For example, in target recognition scenarios, such as vehicle license plate, pedestrian, or road sign recognition, the images contain sensitive information that may expose the drivers' habits or the vehicles' routes, which could cause economic loss to the drivers without privacy protection. In this paper, we consider deep model inference for the road sign classification scenario and conduct experiments to measure the inference performance under the defense mechanism. The result in Fig. 5 shows that the inference accuracy increases with a larger privacy budget ǫ. We fit the inference accuracy curve to the observed results and use the fitted function to measure a vehicular device's inference performance given its chosen ǫ_i.
We can see that the SP expects the vehicles to choose a higher ǫ_i for higher inference accuracy. Thus, the SP designs a reward R to compensate the privacy loss of the vehicles. Given the reward from the SP, each vehicle's profit is related to its contribution, which is characterized by ǫ_i and v_i; similar to [5,26], vehicle i receives a share of R proportional to its contribution. The cost of i is defined as its potential privacy loss. Based on the DP analysis, a lower ǫ means a higher privacy protection level. If the privacy is breached, the economic loss of i is denoted as c_i + e·v_i, where e·v_i is the expense of executing crowdsensing tasks at driving speed v_i, e is the unit expense, and c_i is vehicle i's estimated value of its collected data. Thus, the utility function of i is its profit from the reward minus its expected privacy loss.
The SP needs to aggregate the inference results from all the vehicles to alleviate individual errors from crowdsensing. Here we consider the aggregated inference performance to be the weighted sum over all the vehicles, where 1/v_i is the weight of vehicle i. The SP puts higher weight on the slower vehicles since they usually collect data of higher quality and quantity for model inference [5,25]. Thus, the utility function of the SP is its profit, i.e., the aggregated inference performance scaled by a conversion factor from inference performance to profit, minus the reward R that the SP offers to the vehicles.

Game formulation
A Stackelberg game is a decision-making tool that contains a leader player and several follower players [26]. Each player is rational and only wants to maximize its utility. The follower players can observe the decision made by the leader player and choose their strategies accordingly [27]. A non-cooperative game is a decision-making tool that contains several rational players competing with each other [28]. They make the decisions at the same time.
In this paper, the problem is how the SP designs the reward R to compensate the privacy loss of the vehicles while each vehicle chooses its privacy budget ǫ_i to complete the co-inference tasks of VCS networks under privacy protection. We formulate the problem as a Stackelberg game in which the SP is the leader and the vehicles are the followers. Each vehicle has to decide its optimal response ǫ*_i given R and the other vehicles' privacy strategies; this constitutes Problem 1. The value of ǫ_i affects both the inference accuracy and the privacy protection level: a higher ǫ_i brings higher inference accuracy but also a higher risk of privacy leakage. Here, ǫ_min ensures that the inference accuracy under the model perturbation defense is acceptable, and ǫ_max ensures that the minimum privacy protection requirement is satisfied. The SP can steer the expected inference performance by controlling R and aims to find the optimal reward R* that balances the profit gained from deep model inference and the expense of rewarding; this constitutes Problem 2. The Stackelberg game is made up of Problems 1 and 2. The objective of this game is to find a Stackelberg Equilibrium (SE) point from which neither the SP nor the vehicles have any motivation to deviate. The definition of SE is as follows [28].

Definition 2 Let ǫ*_i be the optimal solution of Problem 1 and R* the optimal solution of Problem 2. The point (R*, ǫ*), where ǫ* with entries ǫ*_i is the vector of the vehicles' best responses, is an SE of the proposed Stackelberg game if it satisfies

U_S(R*, ǫ*) ≥ U_S(R, ǫ*) and U_i(ǫ*_i, ǫ*_{−i}, R*) ≥ U_i(ǫ_i, ǫ*_{−i}, R*)

for any R ≥ 0 and any ǫ_i ∈ [ǫ_min, ǫ_max].
The vehicles compete with each other for the reward and thus form a non-cooperative subgame. There may exist a Nash Equilibrium (NE) point at which no vehicle can enhance its utility by changing its strategy unilaterally. The definition of NE is as follows [28].

Definition 3 Given the reward R, a strategy profile ǫ* is an NE of the subgame among the vehicles if, for every vehicle i, U_i(ǫ*_i, ǫ*_{−i}, R) ≥ U_i(ǫ_i, ǫ*_{−i}, R) for any ǫ_i ∈ [ǫ_min, ǫ_max].

Game theory method
In this section, we use the backward induction method of game theory to analyze the two games and find the NE and the SE.

Subgame Nash equilibrium
We use the backward induction method to analyze the existence and uniqueness of the NE in the subgame.

Theorem 2 Given the reward R, there exists an NE point in the non-cooperative subgame among the vehicles.

Proof
The strategy space of each vehicle is non-empty, convex, and compact. From Eq. (7), U_i is continuous with respect to ǫ_i in [ǫ_min, ǫ_max]. Taking the first and second derivatives of U_i with respect to ǫ_i, we find that the second derivative is negative, i.e., U_i is strictly concave with respect to ǫ_i. Thus, an NE point exists. The proof is now completed.
Letting ∂U_i/∂ǫ_i = 0, we obtain the best response function B_i(ǫ_{−i}, R) of vehicle i.

Theorem 4 The NE point of the subgame among the vehicles is unique.

Proof
Given the reward R offered by the SP and the privacy strategies ǫ_{−i} of the other vehicles, the best response function obtained from Eq. (15) is denoted as B_i(ǫ_{−i}, R). The NE is unique if the vector B = (B_1, B_2, ..., B_N) is a standard function, i.e., if it satisfies the positivity, monotonicity, and scalability conditions [5,29].

We first analyze the positivity. According to Eqs. (17) and (18), we conclude that B_i(ǫ_{−i}, R) > 0 for every vehicle i, so the positivity condition is satisfied.

We then analyze the monotonicity. Taking the first derivative of B_i(ǫ_{−i}, R) with respect to ǫ_j, j ∈ N\{i}, we find from Eq. (22) that ∂B_i/∂ǫ_j ≥ 0, i.e., B_i(ǫ_{−i}, R) is non-decreasing in ǫ_{−i}, so the monotonicity condition is satisfied.

Finally, we analyze the scalability. For any µ > 1, we have µB_i(ǫ_{−i}, R) > B_i(µǫ_{−i}, R), so the scalability condition is satisfied.

B(ǫ_{−i}, R) meets the three conditions and is a standard function. Thus, the uniqueness of the NE is proved. The proof is now completed.
Generally, we can obtain the NE point by using best response dynamics [29]. Problem 1 is thus resolved, and we analyze the SE in the following.

Stackelberg equilibrium
We substitute ǫ*_i into the objective function of Problem 2 and obtain the SP's utility as a function of R alone, i.e., Eq. (26).

Theorem 5 There exists a unique SE for the proposed Stackelberg game between the SP and the vehicles.

Proof
The strategy space of the SP is non-empty, convex, and compact. U_S is continuous with respect to R in [0, +∞). We take the second derivative of Eq. (26) with respect to R and find that it is negative.
Thus, U S is strictly concave with respect to R and the SP has a unique optimal strategy R * in maximizing its utility. According to Theorem 4, given any reward from the SP, the vehicles always choose a unique set of best responses ǫ * to reach the NE. Therefore, when the SP chooses R * , all players determine their optimal strategies. This satisfies the condition in Definition 2 that there exists a unique SE point. The proof is now completed.
The objective function of Problem 2 is concave and can be solved with standard convex optimization algorithms (e.g., the dual decomposition algorithm [30]). If the SP has global information, such as the c_i, it can find R* in a centralized manner. However, to protect the privacy of each vehicle, [31] inspires us to design a distributed algorithm that performs the optimization without any private information. The proposed incentive mechanism is carried out cyclically. In each cycle, the SP and the vehicles reach an agreement via Algorithm 3, under which the vehicles finish the co-inference tasks by choosing a privacy budget and obtaining the corresponding rewards. In Algorithm 3, the SP updates the reward value using a gradient-assisted search, i.e., Eq. (28), and offers it to the vehicles. Each vehicle receives the reward value, determines its privacy budget based on Eq. (15), and returns its strategy to the SP. The iterations continue until the change in the updated reward value falls below a preset threshold. Note that the communication delay is negligible due to the small size of the shared information. The update frequency, i.e., the number of iterations needed to reach convergence, depends on the learning rate and the threshold. When executing the algorithm, the vehicles communicate wirelessly with an access point (AP): each vehicular node uploads its strategy information, i.e., its privacy budget ǫ_i, to the nearest AP, and other vehicular nodes can query this information with negligible delay.
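The loop structure of the distributed mechanism can be sketched as follows. The utility functions here are assumptions, since the paper's exact Eqs. (7), (15), (26), and (28) are not reproduced in this text: we assume a proportional-share vehicle utility with a linear privacy cost and a saturating accuracy curve for the SP. The sketch illustrates the interaction pattern (SP gradient search over R, vehicles' best responses), not the paper's exact equations.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
v = rng.uniform(5, 15, N)           # vehicle speeds (m/s)
c = rng.uniform(5, 10, N)           # estimated privacy values c_i
e = 10.0                            # unit expense
lam = 1000.0                        # assumed profit-conversion coefficient
eps_min, eps_max = 0.1, 10.0
w = 1.0 / v                         # SP weights favor slower vehicles
kappa = c + e * v                   # per-unit privacy loss of vehicle i

def best_responses(R, eps, iters=50):
    """Best-response dynamics for the vehicles' subgame, assuming
    U_i = R * w_i*eps_i / sum_j(w_j*eps_j) - kappa_i * eps_i."""
    eps = eps.copy()
    for _ in range(iters):
        for i in range(N):
            s_other = np.sum(w * eps) - w[i] * eps[i]
            # Closed-form best response from the first-order condition.
            s_i = np.sqrt(max(R * w[i] * s_other / kappa[i], 0.0)) - s_other
            eps[i] = np.clip(s_i / w[i], eps_min, eps_max)
    return eps

def sp_utility(R, eps):
    """Assumed SP utility: lam * weighted accuracy - R, with a
    hypothetical fitted accuracy curve Acc(eps) = 1 - exp(-eps)."""
    acc = 1.0 - np.exp(-eps)
    return lam * np.sum(w * acc) - R

# Gradient-assisted search on R with a finite-difference gradient.
R, lr, tol = 100.0, 5.0, 1e-3
eps = np.full(N, eps_min)
for _ in range(500):
    eps = best_responses(R, eps)
    grad = (sp_utility(R + 1.0, best_responses(R + 1.0, eps))
            - sp_utility(R - 1.0, best_responses(R - 1.0, eps))) / 2.0
    R_new = max(R + lr * grad, 0.0)
    if abs(R_new - R) < tol:
        R = R_new
        break
    R = R_new
```

Only R and the ǫ_i travel between the SP and the vehicles; the private values c_i never leave the vehicles, which is the point of the distributed design.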

Results and discussion
In this section, we conduct experiments to evaluate the performance of the black-box reconstruction attack and the proposed model perturbation defense mechanism. We also conduct simulations to evaluate the performance of the proposed incentive mechanism.

Experimental setup
We conduct experiments on the GTSRB road sign recognition dataset, which consists of 39,208 training samples and 12,630 testing samples. We adopt a Convolutional Neural Network (CNN) as the deep model, with 6 convolution layers and 2 fully connected layers. Each convolution layer has 64 channels and a kernel size of 3, and there is a max-pooling layer after every two convolution layers. The model is partitioned at the 2nd, 4th, or 6th convolution layer. We use Adam as the optimizer and set the learning rate to 0.001. The adopted inverse model consists of two deconvolution layers with one ReLU layer between them; each deconvolution layer has 64 channels and a kernel size of 3.
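The described architecture and its device-edge split can be sketched in PyTorch as follows. This is a plausible reading of the setup; the 32x32 input resolution and the fully connected widths are assumptions not stated above.

```python
import torch
import torch.nn as nn

# Six 64-channel 3x3 conv layers, with max-pooling after every two.
conv_layers = []
in_ch = 3
for i in range(6):
    conv_layers += [nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU()]
    in_ch = 64
    if i % 2 == 1:                      # pooling after every two convs
        conv_layers.append(nn.MaxPool2d(2))

# Two fully connected layers on top (widths assumed).
classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
    nn.Linear(256, 43),                 # GTSRB has 43 road-sign classes
)

# Device-edge partition at the 4th conv layer: the device runs layers
# up to conv4 (including its pooling), the edge server runs the rest.
split = 10                              # list index right after conv4's pool
device_part = nn.Sequential(*conv_layers[:split])
edge_part = nn.Sequential(*conv_layers[split:], *classifier)

x = torch.rand(1, 3, 32, 32)
v = device_part(x)                      # intermediate output shared with edge
y = edge_part(v)
```

Changing `split` moves the partition to the 2nd or 6th conv layer and changes the size of the intermediate tensor that must be transmitted.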

Measurement metrics
We use three metrics to measure the attack and defense performance. Mean-Square Error (MSE) measures pixel-wise similarity. Peak Signal-to-Noise Ratio (PSNR) quantifies the pixel-level reconstruction quality of the images. Structural Similarity Index (SSIM) reflects the human-perceived similarity of two images according to their luminance, contrast, and structure; it ranges in [0, 1], where 1 denotes the most similar.

Figure 3 and Table 1 show the reconstruction performance of the black-box reconstruction attack. As shown in Fig. 3, when the deep model is split at a shallower layer, the reconstructed images have high fidelity. When the deep model is split at a deeper layer, the reconstructed images lose some details and become blurry. Even when the split point is at the 6th layer, the details of the road signs can still be clearly identified. Table 1 shows that the reconstructed images have higher MSE, lower PSNR, and lower SSIM when the deep model is split at a deeper layer. Thus, with a deeper split layer, the black-box reconstruction attack becomes harder.

Figure 4 and Table 2 show the reconstruction performance under the model perturbation defense mechanism with different privacy budgets ǫ. We set the split point at the 4th layer and randomly sample noise with privacy budget ǫ set to 5, 10, and 500, respectively. As shown in Fig. 4, with a lower ǫ, the reconstructed images become blurrier and lose more details. When ǫ = 5, the details of the road signs are hard to identify. Table 2 shows that the images recovered under defense with a lower ǫ have higher MSE, lower PSNR, and lower SSIM. The deep model inference accuracy decreases as ǫ becomes smaller because the injected noise also perturbs the inference results. In general, the model perturbation defense mechanism reduces the quality of image reconstruction while only slightly decreasing the inference performance. These results offer an intuitive guide for the SP and the vehicles in balancing inference performance and privacy protection.
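The pixel-level metrics can be computed as follows; SSIM is typically taken from an image-processing library such as scikit-image, so only MSE and PSNR are sketched here, on synthetic placeholder images:

```python
import numpy as np

def mse(a, b):
    """Mean-square error between two images with pixel values in [0, 1]."""
    return float(np.mean((a - b) ** 2))

def psnr(a, b, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    m = mse(a, b)
    return float('inf') if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

original = np.zeros((32, 32, 3))
reconstructed = np.full((32, 32, 3), 0.1)   # a uniformly offset "recovery"

print(mse(original, reconstructed))   # ≈ 0.01
print(psnr(original, reconstructed))  # ≈ 20.0 dB
```

Note the inverse relationship: a higher MSE always yields a lower PSNR, which is why the two metrics move in opposite directions in Tables 1 and 2.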

Inference performance
To better characterize the influence of the model perturbation mechanism on the inference performance, we set different privacy budgets and observe the resulting degradation in inference accuracy. Figure 5 shows that the inference accuracy drops as ǫ decreases. We also fit a curve to the observed results.

Simulation setup
We consider that 5-30 vehicles are recruited for executing VCS and co-inference. The driving speed is randomly chosen in the range of [5, 15] m/s. The profit coefficient is chosen in [1000, 1250]. The estimated value of privacy is c_i ∈ [5, 10], and the expense for joining the crowdsensing task is e = 10.

Figure 6 shows the performance comparison among the centralized approach, the distributed approach, and the linear approach. The centralized approach assumes that the SP knows the estimated value of the data collected by each vehicle, so the SP can use a convex algorithm to directly calculate R* in a centralized manner. Our proposed distributed algorithm allows the SP to approach the SE point in a distributed manner without any private information. The linear approach also assumes that the SP has no knowledge of the vehicles' private information, but the rewards it gives are linear in the privacy budgets of the vehicles. As shown in Fig. 6, with the centralized approach, the SP obtains the highest utility, because it knows the estimated value of the collected data and can directly find the optimal solution. With the linear approach, the SP obtains the lowest utility, while the vehicles obtain the highest utilities, because each vehicle's reward is linear in its own privacy budget, independently of the other vehicles' strategies. The performance of our proposed distributed algorithm is much better than that of the linear approach and only slightly worse than that of the centralized approach, with an average gap of 0.05%. In general, our distributed algorithm enables the SP to obtain the highest utility when it has no knowledge of the vehicles' private information. In addition, the results also show that the utilities of both the SP, U_S, and the vehicles increase with a growing profit coefficient.
This is because a higher profit coefficient allows the SP to gain more profit from deep model inference, so the SP provides a higher reward.

Figure 7 shows the performance of the incentive mechanism with respect to the estimated value of privacy c_i. When the estimated privacy value is higher, the utilities of both the SP and the vehicles decrease. The reason is that when a vehicle estimates a higher value for its privacy, it chooses a lower ǫ to protect its privacy, and the profit of the SP becomes lower accordingly.

Figure 8 shows the performance of the incentive mechanism with respect to the number of vehicles. When the number of vehicles grows, U_S increases while U_i decreases. The reason is that more vehicles can collect more data to perform deep model inference for the SP, but the increasing number of vehicles also brings stricter competition among them.

Figure 9 shows the performance of the incentive mechanism with respect to the velocity of the vehicles. When the velocity of the vehicles becomes higher, the utilities of both the SP and the vehicles decrease. The reason is that when the vehicles drive at a high speed, their weight in the deep model inference decreases; the SP then obtains a lower profit and gives lower rewards to the vehicles.

Conclusion
In this paper, we adopted the device-edge co-inference paradigm to improve the inference efficiency in VCS networks and studied its privacy preservation. We evaluated the black-box reconstruction attack, which recovers the input data of the vehicular devices, and proposed a model perturbation defense mechanism based on DP theory against the attack. We designed a Stackelberg game-based incentive mechanism that encourages the vehicular devices to participate in the co-inference by compensating their privacy loss. Experimental results demonstrated the effectiveness of our proposed defense mechanism and incentive mechanism.