Privatized graph federated learning
EURASIP Journal on Advances in Signal Processing volume 2023, Article number: 87 (2023)
Abstract
Federated learning is a semi-distributed algorithm in which a server communicates with multiple dispersed clients to learn a global model. The federated architecture is not robust and is sensitive to communication and computational overloads due to its one-master multi-client structure. It can also be subject to privacy attacks targeting personal information on the communication links. In this work, we introduce graph federated learning, which consists of multiple federated units connected by a graph. We then show how graph-homomorphic perturbations can be used to ensure the algorithm is differentially private at the server level. At the client level, we show that the differentially private federated learning algorithm can be improved by adding random noise to the updates, as opposed to the models. We conduct both convergence and privacy theoretical analyses and illustrate performance by means of computer simulations.
1 Introduction
Federated learning (FL) [1] is one particular distributed structure where users no longer need to send their data to a server for training. Instead, data remains local, and training happens in collaboration between the different clients and the server. Compared to a fully decentralized solution, communication occurs between the server and the clients (or agents), instead of directly between the agents themselves. Such a solution is advantageous in the sense that users no longer need to worry about sharing their data with an unknown party, and the high cost of sending all their raw data is eliminated. In this way, the data stays locally safe on a user’s device, and no extra communication cost is incurred for transferring the data remotely. However, such a distributed architecture is not robust to communication failures and computational overloads, nor is it immune to privacy attacks when agents are required to share their local updates. In standard FL, millions of users can be connected to one server at a time. This means one server will be responsible for the communication with all clients, with a significant computational burden, thus rendering the system susceptible to communication failures. Furthermore, whether clients send their gradient updates or their local models, information about their data can be inferred from the exchanges and leaked [2,3,4,5]. Consider for instance the logistic risk; the gradient of the loss function is a constant multiple of the feature vector. Thus, even though the actual data samples are not sent to the server, information about them can still be inferred from the gradient updates or the models.
These considerations motivate us to propose an architecture for federated learning with privacy guarantees. In particular, we introduce the graph federated architecture, which consists of multiple servers, and we privatize the algorithm by ensuring the communication occurring between the servers and the clients is secure. Graph-homomorphic perturbations, which were initially introduced in [6], focus on the communication between servers. They are based on adding correlated noise to the messages sent between servers, such that the noise cancels out if we were to take the average of all messages across all servers. As for the privatization between the clients and their servers, we share noisy updates as opposed to noisy models. Together, the two protocols reduce the effect of the added noise.
Other works have also contributed to addressing the challenges we consider in this work, albeit differently. For example, the work [7] introduces a hierarchical architecture, where it is assumed there are multiple servers connected in a tree structure. Such a solution still has one main server and thus faces the same robustness problem as FL. The graph federated learning architecture in this work (which appeared in the earlier conference publication [8]) is a more general structure. The work [9] generalizes the standard distributed learning framework to include local updates, while [10] has an architecture similar to the GFL architecture proposed earlier in [8]; it nevertheless does not deal with privacy, and employs different objective functions and a different learning algorithm based on the alternating direction method of multipliers. Likewise, a plethora of solutions exist that relate to privacy issues. These methods may be split into two subgroups: those using random perturbations to ensure a certain level of differential privacy [11,12,13,14,15,16,17,18,19,20], and those that rely on cryptographic methods [21,22,23,24,25]. Both have their advantages and disadvantages. While differential privacy is easy to implement, it hinders the performance of the algorithm by reducing the model utility. As for cryptographic methods, they are generally harder to implement since they require more computational and communication power [26, 27]. Furthermore, they restrict the number of participating users. In what follows, we focus on differentially private methods.
The main contribution of this work is threefold. We introduce a new, generalized, and more realistic architecture for the federated setting, where we now consider multiple servers connected by some graph structure. Furthermore, many earlier works have proposed adding Laplacian noise sources to the information shared among agents in order to ensure some level of privacy. However, these works have largely ignored the fact that these noises degrade the mean-square error (MSE) performance of the network from \(O(\mu )\) down to \(O(\mu ^{-1})\), where \(\mu\) is the small learning parameter. To resolve this issue, we define a new noise generation scheme that maintains the MSE at O(1) while ensuring privacy. Although the work [20] proposed a noisy distributed consensus strategy, this reference lacks a useful construction method for the perturbations. In this work, we devise a construction scheme. Therefore, the main difference between our proposed method and previous works is that we devise a noise construction scheme that ensures the total sum of the added noise cancels out centrally. This results in the improved MSE bound of O(1). Finally, we prove that clients sharing noisy updates, as opposed to noisy models, leads to improved performance relative to what is commonly done in the prior literature. Moreover, we do not assume bounded gradients, as is commonly done in previous works [12, 15, 16], since this condition does not actually hold in most situations in practice. Note, for instance, that even quadratic risks do not have bounded gradients. For this reason, we will not rely on this condition, and will instead show that our noise construction ensures differential privacy with high probability for most cases of interest. The main results of this work are as follows:

1. Privatized GFL under graph-homomorphic perturbations converges in the MSE sense to an O(1) neighbourhood of the true model \(w^o\), as opposed to \(O(\mu ^{-1})\) when random perturbations are used instead.

2. Privatized FL under perturbed gradients converges in the MSE sense to an \(O(\mu )\) neighbourhood of the true model \(w^o\), as opposed to \(O(\mu ^{-1})\) when perturbed models are shared instead.

3. GFL with graph-homomorphic perturbations and perturbed gradients is \(\epsilon (i)\)-differentially private with high probability.
2 Graph federated architecture
In the graph federated architecture, which we initially introduced in [8], we consider P federated units connected by a graph structure. Each federated unit consists of a server and a set of K agents. Thus, the overall architecture can be represented by the graph depicted in Fig. 1. We denote the combination matrix connecting the servers by \(A \in {\mathbb {R}}^{P\times P }\), and we write \(a_{mp}\) to refer to the elements of A. We assume each agent of every server has its own dataset \(\{x_{p,k,n}\}_{n=1}^{N_{p,k}}\) that is non-iid when compared to the other agents. The subscript p refers to the federated unit, k to the agent, and n to the data sample. We note the difference between our proposed architecture and a fully distributed setting. The graph federated architecture consists of a network of federated units, while a fully distributed network removes the need for servers and assumes clients are connected to each other based on some graph structure. Such an architecture is an improvement on the original federated architecture and not necessarily on the fully distributed architecture. Instead of clients communicating with the same server, we split the load among multiple servers.
With this architecture, we associate a convex optimization problem that will take into account the cost function at each federated unit. Thus, the optimization goal is to find the optimal global model \(w^o\) that minimizes an average empirical risk:
where each individual cost is an empirical risk defined over the local loss functions \(Q_{p,k}(\cdot ;\cdot )\):
To solve problem (1), each federated unit p runs the standard federated averaging (FedAvg) algorithm [1]. An iteration i of the algorithm consists of the server p selecting a subset of L participating agents \({\mathcal {L}}_{p,i}\). Then, in parallel, each agent runs a series of stochastic gradient descent (SGD) steps. We call these local steps epochs, and denote an epoch by the letter e and the total number of epochs by \(E_{p,k}\). The sampled data point at an agent k in the federated unit p during the \(e^{th}\) epoch of iteration i is denoted by b. Thus, during an iteration i, each participating agent \(k \in {\mathcal {L}}_{p,i}\) updates the last model \({\varvec{w}}_{p,i-1}\) and sends its new model \({\varvec{w}}_{p,k,E_{p,k}}\) to the server after \(E_{p,k}\) epochs. During a single epoch e, the agent updates its current local model \({\varvec{w}}_{p,k,e-1}\) by running a single SGD step. Thus, an agent repeats the following adaptation step for \(e=1,2,\ldots , E_{p,k}\):
with \({\varvec{x}}_{p,k,b}\) denoting the sampled data of agent k in federated unit p, and \({\varvec{w}}_{p,k,0} = {\varvec{w}}_{p,i-1}\). After all the participating agents \(k \in {\mathcal {L}}_{p,i}\) run all their epochs, the server aggregates their final models \({\varvec{w}}_{p,k,E_{p,k}}\), which we rename as \({\varvec{w}}_{p,k,i}\) since it is the final local model at iteration i:
Next, at the server level, these estimates are combined across neighbourhoods using a diffusion-type strategy, where we consider the previous steps (3) and (4) as the adaptation step and the following step as the combination step:
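As a concrete illustration, the adaptation and combination steps (3)–(5) can be sketched in a few lines of NumPy. This is a minimal simulation under assumed names (`local_sgd`, `gfl_iteration`, `grad_fn`) that are not part of the paper, with uniform epoch counts and batch sizes only to keep the sketch short:

```python
import numpy as np

def local_sgd(w_init, grad_fn, data, mu, epochs, batch, seed=0):
    """Epochs of SGD at one agent, starting from the unit model (step (3))."""
    w = w_init.copy()
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        idx = rng.choice(len(data), size=batch, replace=False)  # sample a mini-batch
        w = w - mu * grad_fn(w, data[idx])
    return w

def gfl_iteration(W, A, agents_data, grad_fn, mu, L, epochs, batch, seed=1):
    """One GFL iteration: local SGD at L sampled agents per unit,
    unit-level averaging (step (4)), then diffusion combination (step (5))."""
    P = len(W)
    rng = np.random.default_rng(seed)
    psi = np.zeros_like(W)                        # intermediate unit models
    for p in range(P):
        sampled = rng.choice(len(agents_data[p]), size=L, replace=False)
        models = [local_sgd(W[p], grad_fn, agents_data[p][k], mu, epochs, batch)
                  for k in sampled]
        psi[p] = np.mean(models, axis=0)          # server p aggregates final models
    return A @ psi                                # w_p = sum_m a_mp psi_m (A symmetric)
```

Here `grad_fn(w, batch)` stands for any stochastic-gradient oracle; row p of `W` holds the model of unit p.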
To introduce privacy, the models communicated at each round between the agents and the servers need to be encrypted in some way. We could either apply secure multiparty computation (SMC) tools, like secret sharing, or use differential privacy. We focus on differential privacy, or masking tools that can be represented by added noise. Thus, we let agent 1 in federated unit 2 add a noise component \({\varvec{g}}_{2,1,i}\) to its final model \({\varvec{w}}_{2,1,i}\) at iteration i, and then let server 2 add \({\varvec{g}}_{12,i}\) to the message \({\varvec{\psi }}_{2,i}\) it sends to server 1. More generally, we denote by \({\varvec{g}}_{pm,i}\) the noise added to the message sent by server m to server p at iteration i. Similarly, we denote by \({\varvec{g}}_{p,k,i}\) the noise added to the model sent by agent k to server p during the ith iteration. We use unseparated subscripts pm for the inter-server noise components to point out their ability to be combined into a matrix structure. In contrast, the agent-server noise components’ subscripts are separated by a comma to highlight the hierarchical structure. Thus, the privatized algorithm can be written as a client update step (6), a server aggregation step (7), and a server combination step (8):
The client update step (6) follows from (3) by combining the multiple epochs for \(e=1,2,\ldots , E_{p,k}\) into one update step, with \({\varvec{w}}_{p,k,i} = {\varvec{w}}_{p,k,E_{p,k}}\) and \({\varvec{w}}_{p,k,0} = {\varvec{w}}_{p,i-1}\), namely:
3 Performance analysis
In this section, we show a list of results on the performance of the algorithm. We study the convergence of the privatized algorithm (6)–(8), and examine the effect of privatization on performance.
3.1 Modeling conditions
To go forward with our analysis, we require certain reasonable assumptions on the graph structure and cost functions.
Assumption 1
(Combination matrix) The combination matrix A describing the graph is symmetric and doubly-stochastic, i.e.:
Furthermore, the graph is strongly-connected and A satisfies:
\(\square\)
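A combination matrix satisfying Assumption 1 can be constructed from any connected undirected graph using the standard Metropolis-Hastings weights; the following is only an illustrative sketch (the function name is ours, not the paper's):

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly-stochastic combination matrix A built from a 0/1
    adjacency matrix of an undirected graph (Metropolis-Hastings rule)."""
    P = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((P, P))
    for p in range(P):
        for m in range(P):
            if m != p and adj[p, m]:
                A[p, m] = 1.0 / (1 + max(deg[p], deg[m]))  # a_mp = a_pm by symmetry
        A[p, p] = 1.0 - A[p].sum()                         # self-weight keeps rows stochastic
    return A
```

Symmetry plus row-stochasticity gives column-stochasticity for free, and for a connected graph the spectral condition in Assumption 1 holds.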
Assumption 2
(Convexity and smoothness) The empirical risks \(J_{p,k}(\cdot )\) are \(\nu\)-strongly convex, and the loss functions \(Q_{p,k}(\cdot ;\cdot )\) are convex, namely for \(\nu > 0\):
Furthermore, the loss functions have \(\delta\)-Lipschitz continuous gradients, meaning there exists \(\delta >0\) such that for any data point \(x_{p,n}\):
\(\square\)
We also require a bound on the difference between the global optimal model \(w^o\) and the local optimal models \(w^o_{p,k}\) that optimize \(J_{p,k}(\cdot )\). This assumption is used to bound the gradient noise and the incremental noise defined further ahead. It is not a restrictive assumption, and it imposes a condition on when collaboration is sensible among different agents. In other words, since the agents have non-iid data, their optimal models are sometimes too different, and collaboration would hurt their individual performance. For example, in recommender systems, people in the same country are more likely to be recommended the same movie than people in different countries. That is, users within one country may have different but relatively close models, contrary to users across countries.
Assumption 3
(Model drifts) The distance of each local model \(w_{p,k}^o\) to the global model \(w^o\) is uniformly bounded, i.e., there exists \(\xi \ge 0\) such that \(\Vert w^o - w_{p,k}^o\Vert \le \xi\).
3.2 Network centroid convergence
We study the convergence of the algorithm from the network centroid’s \({\varvec{w}}_{c,i}\) perspective:
We write the central recursion as:
Next, we define the model error as \({\widetilde{{\varvec{w}}}}_{c,i} \,\overset{\Delta }{=}\,w^o - {\varvec{w}}_{c,i}\) and the average gradient noise:
with the per-unit gradient noise \({\varvec{s}}_{p,i}\):
and
We introduce the average incremental noise \({\varvec{q}}_i\) and the local incremental noise \({\varvec{q}}_{p,i}\), which capture the error introduced by the multiple local update steps:
We then arrive at the following error recursion:
where \({\varvec{g}}_{i}\) is the total added noise at iteration i:
We estimate the first- and second-order moments of the gradient noise in the following lemma. To do so, we use the fact, shown in previous work (Lemma 1 in [28]), that the individual gradient noise is zero-mean with a bounded second-order moment:
where the constants are defined as:
and \({\mathcal {F}}_{i1}\) is the filtration defined over the randomness introduced by all the past subsampling of the data for the calculation of the stochastic gradient. Using Assumption 3, we can guarantee that \(\sigma _{s,p}^2\) is bounded by bounding:
Lemma 1
(Estimation of first- and second-order moments of the gradient noise) The gradient noise defined in (17) is zero-mean and has a bounded second-order moment:
where the constants \(\beta _s^2\) and \(\sigma _s^2\) are given by:
Proof
The above result follows from applying Jensen’s inequality and the bounds on the per-unit gradient noise \({\varvec{s}}_{p,i}\). \(\square\)
The new term appearing in the bound on the gradient noise is what we call the network disagreement:
It captures the difference in the path taken by the individual models versus the network centroid. We bound this difference in Lemma 3. However, before doing so, we show that the second-order moment of the incremental noise is on the order of \(O(\mu )\). From Lemma 5 in [28], we can bound the individual incremental noise:
where the constants are given by:
The following result follows.
Lemma 2
(Estimation of the second-order moment of the incremental noise) The incremental noise defined in (20) has a bounded second-order moment:
where the constant \(\sigma _q^2\) is the average of \(\sigma _{q,p,k}^2\):
Proof
The above result follows from applying Jensen’s inequality and the bounds on the per-unit incremental noise \({\varvec{q}}_{p,i}\). Furthermore, \(a = O(\mu ^{-1})\), \(b_k = O(\mu ^{-1})\), and \(c_k = O(1)\) reduce the expression to (37). \(\square\)
We now bound the network disagreement. To do so, we first introduce the eigendecomposition of \(A = QH Q^\textsf{T}\):
where \(H_{\theta }\) is a diagonal matrix that contains the last \((P-1)\) eigenvalues of A, and \(Q_{\theta }\) their corresponding eigenvectors.
Lemma 3
(Network disagreement) The average deviation from the centroid is bounded during each iteration i:
where \({\varvec{{ {\mathcal {W}}}}}_{0} \,\overset{\Delta }{=}\,\text{ col }\left\{ {\varvec{w}}_{p,0}\right\} _{p=1}^P\) and \(\lambda _p \,\overset{\Delta }{=}\,\sqrt{1-2\nu \mu + \delta ^2\mu ^2} + \beta _{s,p}^2 \mu ^2 + O(\mu ^2) \in (0,1)\). Then, in the limit:
Proof
See “Appendix 2”. \(\square\)
Thus, from the above lemma, we see that the individual models gravitate towards the centroid model, with an error introduced by the added privatization. The effect of the added noise overpowers that of the gradient and incremental noise, since the latter are on the order of the step-size.
Then, using the above result, we can establish the convergence of the centroid model to a neighbourhood of the true optimal model \(w^o\) in the mean-square-error (MSE) sense.
Theorem 1
(Centroid MSE convergence) Under Assumptions 1, 2 and 3, the network centroid converges to the optimal point \(w^o\) exponentially fast for a sufficiently small step-size \(\mu\):
where \(\lambda _c = \sqrt{1-2\nu \mu + \delta ^2\mu ^2} +\beta _s^2\mu ^2 + O(\mu ^{2}) \in (0,1)\). Then, letting i tend to infinity, we get:
Proof
See “Appendix 3”. \(\square\)
The main term in the above bound is the variance of the added noise, with a dominating factor of \(\mu ^{-1}\), since:
which allows us to rewrite the bound as follows:
with \({\mathbb {E}}\Vert {\varvec{g}}\Vert ^2\) representing the variance of the total added noise, independent of time. While in general decreasing the step-size improves performance, the above result shows that this need not be the case with privatization. Thus, since the added noise impacts the model utility negatively, it is important to choose a privatization scheme that reduces this effect. In what follows, we look closely at such a scheme.
3.3 Graph-homomorphic perturbations
We consider a specific privatization scheme and specialize the above results. The goal of the scheme is to remove the \(O(\mu ^{-1})\) term from the MSE bounds. Thus, focusing on the centroid model expression (16), we wish to cancel out the total added noise amongst servers, i.e.,
To achieve this, we introduce graph-homomorphic perturbations, defined as follows [6]. We assume each server p draws a sample \({\varvec{g}}_{p,i}\) independently from the Laplace distribution \(Lap(0,\sigma _g/\sqrt{2})\) with variance \(\sigma _{g}^2\). Server p then sets the noise \({\varvec{g}}_{mp,i}\) added to the message sent to its neighbour m as:
With such a construction, condition (46) is satisfied:
Thus, with such a scheme, the noise components proportional to \(O(\mu ^{-1})\), which result from the noise added between the servers, cancel out in the error recursions. However, since gradients are evaluated at the local models \({\varvec{w}}_{p,i}\) and not at the centroid \({\varvec{w}}_{c,i}\), the effect of the noise is still evident. Yet, this remaining error is controlled by the step-size, so its effect can be mitigated by using a smaller step-size. In the next corollary, we show that if no noise is added amongst the clients and graph-homomorphic perturbations are used amongst servers, then the error converges to \(O(1)\sigma _g^2\).
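The cancellation property can be verified numerically. In the sketch below (scalar noise per server for brevity; in practice one sample per model entry), each server p draws one Laplace sample, sends it unchanged to every neighbour, and scales its self-loop noise by \(-(1-a_{pp})/a_{pp}\), which assumes \(a_{pp} > 0\); the function name is ours:

```python
import numpy as np

def graph_homomorphic_noise(A, sigma_g, rng):
    """Noise matrix G with G[m, p] = perturbation that server p adds to the
    message it sends to server m, following the construction in [6]."""
    P = A.shape[0]
    g = rng.laplace(0.0, sigma_g / np.sqrt(2), size=P)  # one Laplace sample per server
    G = np.tile(g, (P, 1))                              # g_{mp} = g_p for every m != p
    for p in range(P):
        G[p, p] = -(1.0 - A[p, p]) / A[p, p] * g[p]     # self-loop noise cancels the rest
    return G
```

Since A is column-stochastic, \(\sum_m a_{mp}\,G[m,p] = (1-a_{pp})g_p - (1-a_{pp})g_p = 0\) for every p, which is exactly the cancellation condition on the total added noise.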
Corollary 1
(Centroid MSE convergence under graph-homomorphic perturbations) Under Assumptions 1, 2 and 3, the network centroid with graph-homomorphic perturbations converges to the optimal point \(w^o\) exponentially fast for a sufficiently small step-size \(\mu\):
Then, letting i tend to infinity, we get:
Proof
Starting from (43) and setting \({\mathbb {E}} \Vert {\varvec{g}}\Vert ^2 = 0\), since the total noise \({\varvec{g}}_{i} = 0\) under graph-homomorphic perturbations, we get the final result. \(\square\)
3.4 Sharing gradients as opposed to weight estimates
We next show that, under added noise, sharing gradients rather than models is better for performance. In the remainder of this section, and for the sake of simplicity, we illustrate this conclusion by considering one federated unit, say for \(p=1\). Thus, if we were to introduce differential privacy to federated learning, then a random Laplacian noise should be added to each model by the client before aggregation by the server, and the new privatized aggregation step becomes:
However, if we were to study the MSE convergence of this privatized algorithm, we would notice a new \(O(\mu ^{-1})\sigma _g^2\) term in the bound (Theorem 1). To address this degradation, we now describe an alternative implementation that shares gradients as opposed to weight estimates. Note first that the FL algorithm can be expressed in a single step taken from the server’s perspective:
This suggests that, instead of sharing its final model \({\varvec{w}}_{1,k,i}\), each agent could share the total update:
The server then aggregates the updates from all participating agents and updates the previous model \({\varvec{w}}_{1,i-1}\). In this case, if we were to privatize this new version of the algorithm, we would add random noise to the updates, which are then scaled by the step-size:
We show in the following theorem the effect of the added noise on the new FL algorithm. It turns out that the noise introduces an \(O(\mu )\) error instead of \(O(\mu ^{-1})\).
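The difference between the two privatization points can be seen in a toy single-server step. This is a sketch with hypothetical helper names; with identical noise draws, the deviation of the update-sharing step from the noiseless average is exactly \(\mu\) times that of the model-sharing step:

```python
import numpy as np

def private_fl_step_models(local_models, sigma_g, rng):
    """Clients perturb their models; the noise enters the average at O(1)."""
    noisy = [w + rng.laplace(0.0, sigma_g / np.sqrt(2), size=w.shape)
             for w in local_models]
    return np.mean(noisy, axis=0)

def private_fl_step_updates(w_prev, local_models, mu, sigma_g, rng):
    """Clients perturb their accumulated updates instead; the server rescales
    by the step-size mu, so the injected noise is damped to O(mu)."""
    updates = [(w_prev - w) / mu for w in local_models]   # accumulated local gradients
    noisy = [u + rng.laplace(0.0, sigma_g / np.sqrt(2), size=u.shape)
             for u in updates]
    return w_prev - mu * np.mean(noisy, axis=0)
```

Without noise, both steps return the plain average of the local models; with noise, only the second keeps the perturbation proportional to \(\mu\).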
Theorem 2
(MSE convergence of privatized FL) Under Assumptions 2 and 3, the privatized FL algorithm (54)–(55) converges exponentially fast, for a small enough step-size, to a neighbourhood of the optimal model:
where \(\lambda = \sqrt{1-2\nu \mu + (\beta _{s,1}^2+\delta ^2)\mu ^2} + O(\mu ^2) \in (0,1)\). Then, in the limit:
Proof
See “Appendix 6”. \(\square\)
Thus, sharing the updates instead of the models is advantageous, since the effect of the added noise on the performance is reduced. The \(O(\mu )\) factor allows us to increase the value of the noise variance while ensuring the model utility does not deteriorate significantly. Therefore, to guarantee an \(\epsilon (i)\)-DP algorithm, we let the added noise be a zero-mean Laplacian random variable with \(\sigma _{g}^2\) variance.
4 Privacy analysis
We study the privacy of the algorithm (6)–(8) in terms of differential privacy. We focus on graph-homomorphic perturbations and show that the adopted scheme is differentially private. To do so, we first define what it means for an algorithm to be \(\epsilon\)-differentially private. Without loss of generality, assume agent 1 in federated unit 1 decides not to participate, and its data samples \(x_{1,1}\) are replaced by a new set \(x'_{1,1}\) with a different distribution. Then, with the new data, the algorithm takes a different path. We denote the new models by \({\varvec{w}}'_{p,k,i}\). The idea behind differential privacy is that an outside observer should not be able to distinguish between the two trajectories \({\varvec{w}}_{p,k,i}\) and \({\varvec{w}}'_{p,k,i}\) and conclude whether agent 1 participated in the training. More formally, differential privacy is defined below.
Definition 1
(\(\epsilon (i)\)-Differential Privacy) We say that the algorithm given in (6)–(8) is \(\epsilon (i)\)-differentially private for server p at time i if the following condition holds on the joint distribution \(f(\cdot )\):
\(\square\)
Thus, the above definition states that minimally varied trajectories have comparable probabilities. In addition, the smaller the value of \(\epsilon\), the higher the privacy guarantee. Thus, the goal is to decrease \(\epsilon\) as long as the model utility is not strongly affected.
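Definition 1 can be made concrete with the scalar Laplace mechanism, a standard textbook example rather than the paper's full algorithm: a query whose output shifts by at most \(\Delta\) between adjacent datasets, perturbed with \(Lap(0, b)\) noise, keeps the density ratio of the two output distributions below \(e^{\Delta/b}\):

```python
import numpy as np

def laplace_density(x, loc, b):
    """Density of the Laplace distribution with mean loc and scale b."""
    return np.exp(-np.abs(x - loc) / b) / (2.0 * b)

# Two adjacent datasets shift the query output by the sensitivity Delta.
# Releasing output + Lap(0, b) then satisfies the definition with eps = Delta / b.
Delta, b = 0.5, 1.0
xs = np.linspace(-5.0, 5.0, 1001)
ratio = laplace_density(xs, 0.0, b) / laplace_density(xs, Delta, b)
assert np.all(ratio <= np.exp(Delta / b) + 1e-12)   # ratio never exceeds e^{eps}
```

The bound follows from the triangle inequality, \(|x-\Delta| - |x| \le \Delta\), and is tight for \(x \le 0\).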
Next, in order to show that the algorithm is differentially private, we require the sensitivity of the algorithm to be bounded. The sensitivity at time i is defined as:
It measures the distance between the original and perturbed weight vectors. It is shown in “Appendix 4” that \(\Delta (i)\) can be bounded as follows:
for constants B and \(B'\) chosen by the designer. Moreover, the above bound holds with high probability given by:
This result shows that the sensitivity can be bounded with high probability, which in turn is dependent on the values chosen for B and \(B'\). Larger values for these constants increase the probability, but nevertheless lead to a looser bound for privacy (as shown in Theorem 3). Therefore, the choice of B and \(B'\) needs to be balanced judiciously to ensure the desired level of privacy.
Using the bound on the sensitivity and from the definition of differential privacy, we can finally show that the algorithm is differentially private with high probability.
Theorem 3
(Privacy of GFL algorithm) If the algorithm (6)–(8) adopts graph-homomorphic perturbations, then it is \(\epsilon (i)\)-differentially private with high probability at time i, for a standard deviation of \(\sigma _g = \sqrt{2}(B+B'\sqrt{P}\Vert w^o-w'^o\Vert )(i+1) / \epsilon (i)\).
Proof
See “Appendix 5”. \(\square\)
Thus, the above theorem suggests that if we wish the algorithm to be \(\epsilon (i)\)-differentially private, we need to choose the noise variance accordingly. The larger the variance, the more private the algorithm. However, the longer the algorithm runs, the larger the noise variance required to maintain the same privacy guarantee. Said differently, if we fix the added noise, then as time passes the algorithm becomes less private and more information is leaked. With graph-homomorphic perturbations, however, we can afford to increase the variance, and thus decrease the leakage, since its effect on the MSE is constant.
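The noise schedule implied by Theorem 3 can be evaluated directly. The helper below (our naming, not the paper's) computes \(\sigma_g = \sqrt{2}\,(B + B'\sqrt{P}\,\Vert w^o - w'^o\Vert)(i+1)/\epsilon(i)\):

```python
import numpy as np

def sigma_for_privacy(eps_i, i, B, B_prime, P, model_gap):
    """Standard deviation of the Laplace perturbations required by Theorem 3
    for epsilon(i)-differential privacy at iteration i.
    model_gap stands for ||w^o - w'^o||."""
    return np.sqrt(2.0) * (B + B_prime * np.sqrt(P) * model_gap) * (i + 1) / eps_i
```

The linear growth in i makes the trade-off explicit: at a fixed \(\epsilon\), later iterations require proportionally larger noise, which graph-homomorphic perturbations can absorb without inflating the MSE.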
Moreover, we study the effect of the model drift on the privacy of the algorithm. If we examine closely the probability that the sensitivity is bounded, the model drift \(\xi\) appears in the \(O(\mu )\) term. The smaller the model drift, the higher the probability that the sensitivity is bounded. This in turn implies that the algorithm is differentially private with higher probability. Furthermore, if we study the average \(\epsilon (i)\), we see that:
as the model drift decreases, so does \(\epsilon (i)\) on average. Therefore, with smaller model drift we can achieve higher privacy with more certainty.
5 Experimental analysis
We conduct a series of experiments to study the influence of privatization on the GFL algorithm. The aim of the experiments is to show the superior performance of graph-homomorphic perturbations relative to random perturbations, and of perturbing gradients rather than models, as well as to study the effect of different parameters on the performance of the algorithm.
5.1 Regression
We first start by studying a regression problem on simulated data. We do so for the tractability of the problem. We consider the quadratic loss, which has a closed-form solution, i.e., a formal expression for the true model \(w^o\) is known, which makes the calculation of the mean-square error feasible and more accurate.
Therefore, consider a streaming feature vector \({\varvec{u}}_{p,k,n} \in {\mathbb {R}}^M\) with output variable \({\varvec{d}}_{p,k}(n) \in {\mathbb {R}}\) given by:
where \(w^{\star }\in {\mathbb {R}}^M\) is some generating model, and \({\varvec{v}}_{p,k}(n)\) is some zeromean Guassian random variable with \(\sigma _{v_{p,k}}^2\) variance and independent of \({\varvec{u}}_{p,k,n}\). Then, the optimal model that solves the following problem:
is found to be:
where \({\widehat{R}}_u\) and \({\widehat{r}}_{uv}\) are defined as:
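Under the data model (63), this closed-form solution can be computed directly from the empirical moments. The sketch below assumes a ridge-regularized quadratic risk with regularizer \(\rho\); the paper's exact normalization may differ:

```python
import numpy as np

def closed_form_model(U, d, rho):
    """Minimizer of (1/N) * sum_n (d(n) - u_n^T w)^2 + rho * ||w||^2,
    i.e. w^o = (R_u + rho * I)^{-1} r_du with empirical moments."""
    N, M = U.shape
    R_u = U.T @ U / N            # empirical covariance \hat{R}_u
    r_du = U.T @ d / N           # empirical cross-correlation
    return np.linalg.solve(R_u + rho * np.eye(M), r_du)
```

For vanishing noise and regularization, the recovered model approaches the generating model \(w^{\star}\), which is what makes the MSE computable in closed form.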
We consider \(P = 10\) units, each with \(K= 100\) total agents. We assume \(N_{p,k}=100\) for each agent. We randomly generate two-dimensional feature vectors \({\varvec{u}}_{p,k}(n)\) from a zero-mean Gaussian random vector with a randomly generated covariance matrix \(R_{u_{p,k}}\). We then calculate the corresponding outputs according to (63). To make the data non-iid across agents, we assume the covariance matrix \(R_{u_{p,k}}\) is different for each agent, as well as the variance \(\sigma _{v_{p,k}}^2\) of the added noise. When running the algorithm, we assume each unit samples at random \(L = 11\) agents, and each agent runs \(E_{p,k} \in [1,10]\) epochs and uses a minibatch of \(B_{p,k} \in [5,10]\) samples.
We compare three algorithms: the standard GFL algorithm, the privatized GFL algorithm with random perturbations, and the privatized GFL algorithm with homomorphic perturbations. We do not add noise between the clients and their server, in order to focus on the effect of the perturbations between the servers. In the first set of simulations, we fix the step-size \(\mu =0.7\) and the regularization parameter \(\rho = 0.1\). We fix the variance of the added noise for privatization in both schemes to \(\sigma _g^2 = 0.1\). We then plot the mean-square deviation (MSD) at each time step for the centroid model:
as seen in Fig. 2. We observe that the privatized GFL with random perturbations has lower performance compared to the other two algorithms, while using homomorphic perturbations does not result in such a decay in performance. Thus, our suggested scheme does a good job of tracking the performance of the original GFL algorithm without compromising the privacy level.
We next study the extent of the effect of the noise on the model utility. Thus, we run a series of experiments with varying added noise \(\sigma _g^2 = \{0.001, 0.01, 0.1,1,2,10\}\) for the two privatized GFL algorithms. We plot the resulting MSD curves in Fig. 3a. We observe that, for a fixed step-size, as we increase the variance, the MSD of the algorithm with random perturbations increases significantly, as opposed to the algorithm with homomorphic perturbations. Thus, we conclude that the algorithm with random perturbations is more sensitive to the variance of the added noise. In fact, for some sufficiently large variance, the algorithm with random perturbations breaks down, while graph-homomorphic perturbations delay that effect to much larger variances. In addition, as long as the step-size is small enough, we can always control the effect of the graph-homomorphic perturbations.
However, if we were to look at the individual MSD for one federated unit, we would discover that the performance of the algorithm decays as the noise variance is increased. Nonetheless, it is not to the extent of random perturbations. We plot in Fig. 3b the average individual MSD for the varying noise variance:
We observe that, for a fixed noise variance, homomorphic perturbations result in better performance. Furthermore, as we increase the noise variance, the network disagreement increases for both schemes. This comes as no surprise and is in accordance with Lemma 3. Furthermore, as previously mentioned, graph-homomorphic perturbations have the added value of not being negatively affected by a decrease in the step-size. In addition, even though the improvement does not seem significant, the source of the error in the two schemes is different. Indeed, information about the true model remains distributed in the network and can be retrieved by running a consensus-type step at the end of the learning algorithm. At that point, the local models no longer contain information about the local data, and thus agents can safely share their models. However, when random perturbations are used, reconstruction is not possible, since the information has been lost in the network due to the added perturbations.
We next fix the noise variance \(\sigma _{g}^2 = 0.1\) and vary the stepsize \(\mu \in \{0.1, 0.5, 1, 5 \}\). According to Theorem 4, the MSD resulting from random perturbations includes an \(O(\mu ^{-1})\) term, which is not the case when using graph-homomorphic perturbations. Thus, we expect that a decrease in the stepsize will not significantly affect the privatized algorithm with graph-homomorphic perturbations, as opposed to random perturbations. Indeed, as seen in Fig. 4, as \(\mu\) is increased, the final MSD increases; this is probably due to the \(O(\mu )\sigma _s^2\) term in the bound. In contrast, for significantly small or large \(\mu\), the performance of the privatized algorithm with random perturbations degrades. In addition, for both privacy schemes, the rate of convergence slows down as we decrease the stepsize. Thus, there exists an optimal stepsize that achieves a good compromise between fast convergence and low MSD.
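To make the mechanism concrete, the following NumPy sketch generates graph-homomorphic perturbations in the spirit of [6]. The function name, the Laplace scale `b`, and the small combination matrix `A` are our own illustrative choices, not the paper's code: each server sends one common noise sample to its neighbors and adds a compensating term locally, so that the noise entering the network centroid cancels exactly when `A` is doubly stochastic.

```python
import numpy as np

def graph_homomorphic_noise(A, b, dim, rng):
    """Noise G[p, m] that server p adds to the message it sends to server m.
    Each server draws one Laplace sample g_p, sends it to every neighbor, and
    adds -(1 - a_pp)/a_pp * g_p to its own update. With A doubly stochastic
    (and a_pp > 0), the combination-weighted noise sums to zero network-wide."""
    P = A.shape[0]
    g = rng.laplace(0.0, b, size=(P, dim))
    G = np.zeros((P, P, dim))
    for p in range(P):
        for m in range(P):
            if A[p, m] > 0:
                G[p, m] = g[p] if m != p else -(1 - A[p, p]) / A[p, p] * g[p]
    return G

# Sanity check: the noise injected into the network centroid cancels.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])  # doubly stochastic combination matrix
G = graph_homomorphic_noise(A, b=1.0, dim=4, rng=rng)
centroid_noise = sum(A[p, m] * G[p, m] for p in range(3) for m in range(3)) / 3
```

Individual servers still receive noisy messages (hence the privacy guarantee), but the centroid recursion is noise-free, which is consistent with the observation that the MSD is largely insensitive to \(\sigma_g^2\) for small stepsizes.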
5.2 Privatized federated learning
We focus on the single-server FL setting (i.e., \(P = 1\)), where we assume we have \(K=1000\) agents, of which we choose \(L=30\) at a time. We generate non-iid datasets of varying size for each agent, as in the previous section. We allow each agent to run a varying number of epochs \(E_k \in [1,10]\) during an iteration of the algorithm. We set the stepsize \(\mu = 0.2\), \(\rho = 0.007\), and \(\sigma _g^2 = 0.02\). We compare three algorithms: the standard FL algorithm, the privatized FL algorithm with sharing of models, and the privatized FL algorithm with sharing of updates. We plot the average MSD curves after repeating the experiment 100 times. As expected, the effect of the added noise is worse when models are shared (yellow curve in Fig. 5) than when updates are shared (red curve in Fig. 5).
We next study the effect of the stepsize on the MSD of the privatized FL algorithm. We expect that, as \(\mu\) is increased, the MSD increases for the FL algorithm when updates are shared. When models are shared, however, since the gradient-noise variance is tuned by \(\mu\) and the added-noise variance by \(\mu ^{-1}\), we expect to observe a tradeoff: as \(\mu\) is increased, the effect of the gradient noise grows while that of the added noise diminishes; as \(\mu\) is decreased, the effect of the added noise overpowers that of the gradient noise. Indeed, we observe this phenomenon in (a) and (b) of Fig. 6.
Finally, we study the effect of the variance of the added noise. We fix the stepsize at \(\mu =0.2\) and vary the noise variance \(\sigma _g^2 \in \{0.01, 0.05, 0.1, 0.5\}\). In both cases, as we increase \(\sigma _g^2\), the performance degrades ((c), (d) of Fig. 6). However, larger values of the added-noise variance affect the perturbed models more than the perturbed gradients: the algorithm diverges for lower values of \(\sigma _g^2\) when models are shared than when gradients are shared. Thus, sharing updates can handle larger values of \(\sigma _g^2\) before the algorithm diverges. In addition, since the variance is tuned by the stepsize, we can always find a suitable \(\mu\) to decrease its effect.
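The two privatized FL variants differ only in where the noise enters the shared quantity. A minimal sketch of one round (the function name, Laplace noise, and aggregation scheme are our own illustrative simplifications of the paper's recursion):

```python
import numpy as np

def privatized_fl_round(w, local_grads, mu, b, mode, rng):
    """One round of privatized federated averaging (illustrative).
    mode="models":  agents share psi + noise, so noise enters with O(1) weight.
    mode="updates": agents perturb the update, so noise is scaled by mu."""
    shared = []
    for grad in local_grads:
        noise = rng.laplace(0.0, b, size=w.shape) if b > 0 else np.zeros_like(w)
        if mode == "models":
            shared.append(w - mu * grad + noise)
        else:  # "updates"
            shared.append(w - mu * (grad + noise))
    return np.mean(shared, axis=0)  # server aggregates the shared models

# With zero noise, both variants reduce to standard FedAvg.
w0 = np.zeros(2)
grads = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
rng = np.random.default_rng(1)
w_models = privatized_fl_round(w0, grads, mu=0.1, b=0.0, mode="models", rng=rng)
w_updates = privatized_fl_round(w0, grads, mu=0.1, b=0.0, mode="updates", rng=rng)
```

In the "updates" variant the injected noise is multiplied by \(\mu\), so its contribution to the aggregate scales with \(\mu^2\) times the noise variance, which is the origin of the tradeoff observed above.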
5.3 Classification
We now focus on a classification problem applied to a dataset on click-rate prediction of ads. We consider the Avazu click-through dataset [29]. We split the 5101 data samples unequally among a total of 50 agents. We assume there are \(P = 5\) units, each with \(K = 10\) agents. We add non-iid noise to the data at each agent to change their distributions. We again compare three algorithms: standard GFL, privatized GFL with graph-homomorphic perturbations, and privatized GFL with random perturbations. We use a regularized logistic risk with regularization parameter \(\rho = 0.03\) and set the stepsize \(\mu = 0.5\). We repeat the algorithms for multiple levels of privacy and settle on a noise variance \(\sigma _g^2 = 0.6\) for which the privatized algorithm with random perturbations still converges. We plot in Fig. 7 the testing error on a set of 256 clean samples that were not perturbed with noise to change their distributions. We use the centroid model learned during each iteration to calculate the corresponding testing error. We observe that graph-homomorphic perturbations do not hinder the performance of the privatized model, whereas random perturbations significantly reduce the utility of the learned model.
6 Conclusion
In this work, we introduced graph federated learning and implemented an algorithm that guarantees privacy of the data in a differential-privacy sense. We showed that generic privatization based on adding random perturbations to updates in federated learning has a negative effect on the performance of the algorithm: random perturbations drive the algorithm farther away from the true optimal model. However, we showed that by adding graph-homomorphic perturbations, which exploit the graph structure, performance can be recovered with guaranteed privacy. We also showed that using dependent perturbations does not result in the same tradeoff between privacy and efficiency. In federated learning, we proved that sharing perturbed gradients rather than perturbed models significantly reduces the effect of the added noise on the model utility. Thus, we no longer have to choose what to prioritize; instead, we can have both a highly privatized algorithm and good model utility.
Availability of data and materials
The generated data are not available since they are randomly generated every time the experiment is run. The Avazu dataset used as a real-world example can be found at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/.
Abbreviations
GFL: Graph Federated Learning
FL: Federated Learning
FedAvg: Federated Averaging
SGD: Stochastic Gradient Descent
SMC: Secure Multiparty Computation
MSE: Mean-square error
MSD: Mean-square deviation
References
1. H.B. McMahan, E. Moore, D. Ramage, S. Hampson, Communication-efficient learning of deep networks from decentralized data, in Proceedings of the International Conference on Artificial Intelligence and Statistics, vol. 54 (2017), pp. 1273–1282
2. B. Hitaj, G. Ateniese, F. Perez-Cruz, Deep models under the GAN: information leakage from collaborative deep learning, in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, New York, NY, USA (2017), pp. 603–618
3. L. Melis, C. Song, E. De Cristofaro, V. Shmatikov, Exploiting unintended feature leakage in collaborative learning, in IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA (2019), pp. 691–706
4. M. Nasr, R. Shokri, A. Houmansadr, Comprehensive privacy analysis of deep learning: passive and active white-box inference attacks against centralized and federated learning, in IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA (2019), pp. 739–753
5. L. Zhu, S. Han, Deep leakage from gradients, in Advances in Neural Information Processing Systems, Vancouver, Canada (2019), pp. 17–31
6. S. Vlaski, A.H. Sayed, Graph-homomorphic perturbations for private decentralized learning, in Proceedings of the ICASSP, Toronto, Canada (2021), pp. 1–5
7. L. Liu, J. Zhang, S.H. Song, K.B. Letaief, Client-edge-cloud hierarchical federated learning, in IEEE International Conference on Communications (ICC) (2020), pp. 1–6
8. E. Rizk, A.H. Sayed, A graph federated architecture with privacy preserving learning, in IEEE International Workshop on Signal Processing Advances in Wireless Communications, Lucca, Italy (2021), pp. 1–5. arXiv:2104.13215
9. W. Liu, L. Chen, W. Zhang, Decentralized federated learning: balancing communication and computing costs. IEEE Trans. Signal Inf. Process. Over Netw. 8, 131–143 (2022)
10. B. Wang, J. Fang, H. Li, X. Yuan, Q. Ling, Confederated learning: federated learning with decentralized edge servers. arXiv:2205.14905 (2022)
11. R.C. Geyer, T. Klein, M. Nabi, Differentially private federated learning: a client level perspective. arXiv:1712.07557 (2017)
12. R. Hu, Y. Guo, H. Li, Q. Pei, Y. Gong, Personalized federated learning with differential privacy. IEEE Internet Things J. 7(10), 9530–9539 (2020)
13. A. Triastcyn, B. Faltings, Federated learning with Bayesian differential privacy, in IEEE International Conference on Big Data, Los Angeles, CA, USA (2019), pp. 2587–2596
14. S. Truex, L. Liu, K.H. Chow, M.E. Gursoy, W. Wei, LDP-Fed: federated learning with local differential privacy, in Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking (2020), pp. 61–66
15. K. Wei, J. Li, M. Ding, C. Ma, H.H. Yang, F. Farokhi, S. Jin, T.Q. Quek, H.V. Poor, Federated learning with differential privacy: algorithms and performance analysis. IEEE Trans. Inf. Forensics Secur. 15, 3454–3469 (2020)
16. B. Jayaraman, L. Wang, D. Evans, Q. Gu, Distributed learning without distress: privacy-preserving empirical risk minimization, in Advances in Neural Information Processing Systems, vol. 31, Montreal, Canada (2018)
17. C. Li, P. Zhou, L. Xiong, Q. Wang, T. Wang, Differentially private distributed online learning. IEEE Trans. Knowl. Data Eng. 30(8), 1440–1453 (2018)
18. J. Zhu, C. Xu, J. Guan, D.O. Wu, Differentially private distributed online algorithms over time-varying directed networks. IEEE Trans. Signal Inf. Process. Over Netw. 4(1), 4–17 (2018)
19. M.A. Pathak, S. Rane, B. Raj, Multiparty differential privacy via aggregation of locally trained classifiers, in Advances in Neural Information Processing Systems, Vancouver, Canada (2010), pp. 1876–1884
20. S. Gade, N.H. Vaidya, Private learning on networks. arXiv:1612.05236 (2016)
21. K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H.B. McMahan, S. Patel, D. Ramage, A. Segal, K. Seth, Practical secure aggregation for privacy-preserving machine learning, in Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, New York, USA (2017), pp. 1175–1191
22. A. Gascón, P. Schoppmann, B. Balle, M. Raykova, J. Doerner, S. Zahur, D. Evans, Privacy-preserving distributed linear regression on high-dimensional data. Proc. Priv. Enhanc. Technol. 2017(4), 345–364 (2017)
23. P. Mohassel, Y. Zhang, SecureML: a system for scalable privacy-preserving machine learning, in IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA (2017), pp. 19–38
24. V. Nikolaenko, U. Weinsberg, S. Ioannidis, M. Joye, D. Boneh, N. Taft, Privacy-preserving ridge regression on hundreds of millions of records, in IEEE Symposium on Security and Privacy, Berkeley, CA, USA (2013), pp. 334–348
25. W. Zheng, R.A. Popa, J.E. Gonzalez, I. Stoica, Helen: maliciously secure coopetitive learning for linear models, in IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA (2019), pp. 724–738
26. Y. Ishai, E. Kushilevitz, R. Ostrovsky, A. Sahai, Cryptography with constant computational overhead, in Proceedings of the Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada (2008), pp. 433–442
27. I. Damgård, Y. Ishai, M. Krøigaard, Perfectly secure multiparty computation and the computational overhead of cryptography, in Annual International Conference on the Theory and Applications of Cryptographic Techniques, France (2010), pp. 445–465
28. E. Rizk, S. Vlaski, A.H. Sayed, Federated learning under importance sampling. arXiv:2012.07383 (2020)
29. Avazu, Avazu's Click-Through Rate Prediction (2014). http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/
Acknowledgements
Not applicable.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
ER is the first author and the corresponding author of this paper. Her main contributions include the mathematical analysis, derivation, simulations and writing. SV is the second author, and his contribution is conceptualization and reviewing. AHS is the third author, and his contribution is conceptualization, reviewing and supervision. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix 1: Secondary result on individual MSE performance
We first introduce the following theorem, which will be used to bound the network disagreement. We loosely bound the individual MSE for each federated unit; a tighter bound can be found, but it is not needed here.
Theorem 4
(Individual MSE convergence) Under Assumptions 1, 2 and 3, the individual models converge to the optimal model \(w^o\) exponentially fast for a sufficiently small stepsize:
where \(\preceq\) is the elementwise comparison, \(\Lambda\) is a diagonal matrix with the \(p^{th}\) entry given by \(\lambda _p = \sqrt{1-2\nu \mu + \delta ^2\mu ^2} + \beta _{s,p}^2\mu ^2 + O(\mu ^2)\in (0,1)\), \(\sigma _{q,p}^2\) is the average of \(\sigma _{q,p,k}^2\), and \(\sigma _{g,p}^2\) is the total variance introduced by the noise added at server p. Then, taking the limit as i goes to infinity:
Proof
Focusing on the error of a single server p, we can verify that:
where we define \(\sigma _{q,m}^2\) to be the average of \(\sigma _{q,m,k}^2\). Step (a) follows from the independence of the random variables and the zero-mean of the gradient noise and the added noise, (b) from Jensen's inequality and the bounds on the gradient noise (24) and the incremental noise (37), and (c) from \(\nu\)-strong convexity and \(\delta\)-Lipschitz continuity. Then, choosing \(\alpha = \sqrt{1-2\nu \mu + \delta ^2 \mu ^2} = 1-O(\mu )\) and taking the expectation over the filtration, we get:
where we introduce the constants \(\lambda _m\) and \(\sigma _{g,m}^2\):
Next, taking the column vector of every local meansquareerror, we get the following bound in which we drop the indexing from the column vectors:
where we define the diagonal matrix \(\Lambda\) with the \(\lambda _p\) as its diagonal entries. Then, choosing \(\mu\) small enough that \(\lambda _p < 1\) for every p, the limit of \(\Lambda ^i\) as i goes to infinity is zero. Furthermore, since the eigenvalues of \(\Lambda\) are less than 1, the geometric series converges to \((I - \Lambda )^{-1}\). Thus, we get the desired result. \(\square\)
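The final limiting step is the standard matrix geometric-series argument; a sketch, with \(\Lambda\) as defined above:

```latex
% Since every eigenvalue \lambda_p of \Lambda satisfies 0 < \lambda_p < 1,
\lim_{i\to\infty} \Lambda^{i} = 0,
\qquad
\sum_{j=0}^{\infty} \Lambda^{j} = (I - \Lambda)^{-1},
```

so the steady-state MSE vector is obtained by applying \((I-\Lambda)^{-1}\) to the constant driving terms.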
Appendix 2: Proof of Lemma 3
Consider the aggregate model vector, i.e., \({\varvec{{ {\mathcal {W}}}}}_{i} \,\overset{\Delta }{=}\,\text{ col }\left\{ {\varvec{w}}_{p,i}\right\} _{p=1}^P\), for which we write the model recursion as:
where \({\varvec{{\mathcal {G}}}}_i\) is a matrix whose entries are the noise \({\varvec{g}}_{pm,i}\), and the diag\((\cdot )\) function extracts the diagonal entries of a matrix and transforms them into a column vector.
Since A is doubly stochastic, it admits an eigendecomposition of the form \(A = QH Q^\textsf{T}\), with the first eigenvalue equal to 1 and its corresponding eigenvector equal to \(\mathbbm {1}/\sqrt{P}\).
Next, we define the extended centroid model \({\varvec{{ {\mathcal {W}}}}}_{c,i} \,\overset{\Delta }{=}\,\left( \frac{1}{P} \mathbbm {1}\mathbbm {1}^\textsf{T}\otimes I\right) {\varvec{{ {\mathcal {W}}}}}_{i}\), and write:
Then, taking the conditional expectation of \(\Vert (Q_{\epsilon } \otimes I){\varvec{{ {\mathcal {W}}}}}_{i}\Vert ^2\) given the past models, we can split the gradient noise and the added privacy noise from the model and the true gradient. Taking again the expectation over the past data, and then using the submultiplicative property of the norm followed by Jensen's inequality, we have:
Next, we focus on each individual term. Using Jensen's inequality with some constant \(\alpha\), and then the Lipschitz condition and the bound on the incremental noise, we can bound the norm as follows:
Using the bound on the gradient noise (24), we get another \({\mathbb {E}} \Vert {\widetilde{{\varvec{w}}}}_{p,i1}\Vert ^2\) term, which can be bounded by the result in Theorem 4. Thus, we write:
The noise term can be written more compactly as \( \Vert Q_{\epsilon } \otimes I \Vert ^2 \sum \limits _{p=1}^P \sigma _{g,p}^2\). Thus, putting everything together, we get:
Going back to the network disagreement, it is bounded by the above bound multiplied by \(\Vert Q_{\epsilon }^\textsf{T}\otimes I\Vert ^2/P\). Driving i to infinity, since \(\Vert H_{\epsilon }\Vert = \iota _2< 1\), with \(\iota _2\) being the second eigenvalue of A, and choosing \(\alpha = \iota _2\), we have:
Appendix 3: Proof of Theorem 1
First, taking the conditional expectation of the \(\ell _2\)-norm of the centroid error given the past models splits it into three independent terms: the centralized recursion, the gradient noise, and the added noise. Then, taking the expectation again, we get:
where inequality (a) follows from Jensen's inequality with constant \(\alpha \in (0,1)\) and Lipschitz continuity, and (b) from applying Lemma 1. Then, choosing \(\alpha = \root 4 \of {1-2\nu \mu + \delta ^2 \mu ^2} = 1 - O(\mu )\), the bound becomes:
Finally, using the result on the network disagreement, recursively bounding the error, and taking the limit over i, we get the final result:
Appendix 4: Secondary result involving a bound on \({\mathbb {E}}\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}_{i}\Vert ^2\)
To show the sensitivity of the algorithm is bounded with high probability, we require a bound on \({\mathbb {E}}\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}_{i}\Vert ^2\) and \({\mathbb {E}}\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}'_{i}\Vert ^2\). From Theorem 4 we can bound the individual errors by:
where \(\lambda _{\max } = \max _p \lambda _p\). Then, \({\mathbb {E}}\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}_i\Vert ^2\) can be bounded as follows:
It follows that, for some constants B and \(B'\), the probability that \(\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}_i\Vert\) and \(\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}'_i\Vert\) exceed these constants can be bounded using Markov's inequality by:
and similarly for \({\mathbb {P}}(\Vert {\widetilde{{\varvec{{ {\mathcal {W}}}}}}}'_i\Vert \ge B')\).
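For concreteness, the Markov step has the standard form below, applied to the squared norm; the second moment on the right is the one bounded above via Theorem 4:

```latex
{\mathbb{P}}\left( \Vert \widetilde{{\mathcal{W}}}_i \Vert \ge B \right)
\;\le\;
\frac{{\mathbb{E}}\, \Vert \widetilde{{\mathcal{W}}}_i \Vert^2}{B^2}.
```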
Appendix 5: Proof of Theorem 3
To evaluate the probability distribution in Definition 1, we note that the randomness of the models \({\varvec{\psi }}_{p,j}\) arises from the subsampling of the data for the calculation of the stochastic gradient at each iteration. Thus, given the subsampled dataset, the models are now deterministic and since the added noises \({\varvec{g}}_{pm,j}\) are Laplacian random variables, the distribution of the added noise over the neighbourhood of agent p and over the iterations is given by:
where \({\varvec{y}}_{j} = \left\{ {\varvec{\psi }}_{p,j} + {\varvec{g}}_{pm,j} \right\} _{m\in {\mathcal {N}}_p {\setminus } \{p\} }\) and the ratio in Definition 1 is bounded with high probability:
where the inequalities follow from the triangle inequality and the bound on the sensitivity of the algorithm.
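The last bound is the standard Laplace-mechanism ratio; a sketch under the assumption of a Laplace scale parameter \(b\) and a sensitivity bound \(\Delta\) (our notation, which may differ from the paper's constants):

```latex
\frac{f\!\left( y - \psi \right)}{f\!\left( y - \psi' \right)}
= \exp\!\left( \frac{\Vert y - \psi' \Vert_1 - \Vert y - \psi \Vert_1}{b} \right)
\le \exp\!\left( \frac{\Vert \psi - \psi' \Vert_1}{b} \right)
\le \exp\!\left( \frac{\Delta}{b} \right),
```

where the first inequality is the triangle inequality and the second uses the high-probability bound on the sensitivity.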
Appendix 6: Proof of Theorem 2
We start by writing the error recursion:
where we introduce the gradient noise \({\varvec{s}}_{1,i}\) and the incremental noise \({\varvec{q}}_{1,i}\):
We have already shown in previous work that the gradient noise is zero-mean and has a bounded second-order moment [28, Lemma 1], while the incremental noise has a bounded second-order moment [28, Lemma 5]:
where the constants \(\beta _{s,1}^2, \sigma _{s,1}^2, \sigma _{q,1}^2\) are given by:
Taking the conditional mean of the \(\ell _2\)norm of the error, we can split the noise term from the rest and then apply Jensen’s inequality with some constant \(\alpha \in (0,1)\):
Using strong convexity and Lipschitz continuity of the functions we can bound the first term as:
Then, taking the expectations again over the past models and the selected agents, and using the bound on the gradient noise and incremental noise:
Then, recursively bounding the error with \(\alpha = \sqrt{1-2\nu \mu + (\beta _{s,1}^2+\delta ^2)\mu ^2}\):
and taking the limit of i:
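The recursive-bounding step follows the usual pattern: if the per-iteration bound has the form \(a_i \le \lambda a_{i-1} + c\) with \(\lambda \in (0,1)\) (here \(\lambda\) is the contraction factor induced by the choice of \(\alpha\), and \(c\) collects the noise terms; generic symbols, not the paper's exact constants), then iterating gives:

```latex
a_i \;\le\; \lambda^{i} a_0 + c \sum_{j=0}^{i-1} \lambda^{j}
\;\xrightarrow{\; i \to \infty \;}\; \frac{c}{1-\lambda}.
```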
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Rizk, E., Vlaski, S. & Sayed, A.H. Privatized graph federated learning. EURASIP J. Adv. Signal Process. 2023, 87 (2023). https://doi.org/10.1186/s13634-023-01049-4
DOI: https://doi.org/10.1186/s13634-023-01049-4