Published January 1, 2023 | Version v1
Journal article Open

A security-friendly privacy-preserving solution for federated learning

  • 1. Ericsson Res, TR-34367 Istanbul, Turkiye
  • 2. Katholieke Univ Leuven, B-3000 Leuven, Belgium
  • 3. Ericsson Prod Secur, S-16483 Stockholm, Sweden

Description

Federated learning is a privacy-aware collaborative machine learning method in which clients collaborate on constructing a global model by performing local model training on their own data and sending the resulting local model updates to the server. Although it enhances privacy by letting clients collaborate without sharing their training data, it is still prone to sophisticated privacy attacks because of possible information leakage from the local model updates sent to the server. To prevent such attacks, secure aggregation protocols are generally proposed so that the server can access only the aggregated result, not the individual local model updates. However, such secure aggregation approaches may not allow the execution of security mechanisms against certain attacks on model training, such as poisoning and backdoor attacks, because the server cannot access the individual local model updates and, therefore, cannot analyze them to detect anomalies resulting from these attacks. Thus, federated learning needs solutions that satisfy privacy and security at the same time, or new privacy-preserving solutions that allow the server to analyze the local model updates without violating privacy. In this paper, we introduce a novel security-friendly privacy solution for federated learning based on multi-hop communication to hide clients' identities. Our solution ensures that the forwardee clients on the path between the source client and the server cannot execute malicious activities by altering model updates or by contributing more than one local model update to the global model construction in a single FL round. We then propose two different approaches to also make the solution robust against possible malicious packet-drop behaviors by the forwardee clients.
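As a rough illustration of the round structure the abstract describes, the sketch below shows clients training locally and relaying their updates through forwardee clients toward the server, which then averages what it receives without learning which client originated each update. All names (`local_update`, `forward_via_hops`, `fl_round`) and the relay logic are hypothetical simplifications, not the paper's actual protocol, which additionally enforces integrity and one-update-per-client guarantees.

```python
import random

def local_update(client_id, global_model):
    # Hypothetical local training step: perturb the global model slightly.
    return [w + random.uniform(-0.1, 0.1) for w in global_model]

def forward_via_hops(update, path):
    # Forwardee clients relay the update toward the server. In the paper's
    # scheme, safeguards prevent a forwardee from altering the update or
    # injecting extra updates of its own; here the relay is a no-op.
    for _hop in path:
        pass  # each hop forwards without learning the originator's identity
    return update

def fl_round(global_model, clients, choose_path):
    # One federated learning round: collect relayed updates and average them.
    updates = []
    for cid in clients:
        u = local_update(cid, global_model)
        updates.append(forward_via_hops(u, choose_path(cid)))
    n = len(updates)
    # The server sees only anonymized updates arriving via multi-hop paths.
    return [sum(ws) / n for ws in zip(*updates)]
```

In the actual solution, the multi-hop path hides the source client's identity from the server, while the aggregation itself can remain in the clear so the server can still analyze individual (but anonymized) updates for poisoning anomalies.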

Files

bib-1c983614-0746-463a-a849-280951cc54e3.txt
