PFLM: Privacy-preserving federated learning with membership proof
【Author】 Jiang, Changsong; Xu, Chunxiang; Zhang, Yuan
【Source】INFORMATION SCIENCES
【Impact Factor】8.233
【Abstract】Privacy-preserving federated learning is distributed machine learning in which multiple collaborators train a model through protected gradients. To achieve robustness against users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guarantee security: the threshold t must be greater than half the number of users. This assumption is so rigorous that the schemes may be inappropriate in some scenarios. Motivated by this issue, we first introduce membership proof for federated learning, which leverages cryptographic accumulators to generate membership proofs by accumulating user IDs. The proofs are published on a public blockchain for users to verify. With membership proof, we propose a privacy-preserving federated learning scheme called PFLM. PFLM removes the threshold assumption while maintaining the security guarantees. Additionally, we design a result verification algorithm based on a variant of ElGamal encryption to verify the correctness of aggregated results from the cloud server; the verification algorithm is integrated into PFLM. Security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries. The implementation of PFLM and experiments demonstrate its performance in terms of computation and communication. (c) 2021 Elsevier Inc. All rights reserved.
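The accumulator-based membership proof described in the abstract can be illustrated with a generic RSA-accumulator sketch. This is not PFLM's exact construction (the paper's parameters and ID encoding are not given here); the modulus below is tiny and insecure, and all names are illustrative. In a real deployment the RSA modulus must have an unknown factorization, and the accumulator value and witnesses would be what gets published for verification.

```python
import hashlib

# Toy RSA accumulator for membership proofs (illustrative sketch only;
# NOT PFLM's exact scheme, and the parameters below are insecure).
N = 10007 * 10009   # demo modulus: product of two small primes
g = 2               # accumulator base

def _is_prime(n: int) -> bool:
    # Trial division, adequate for the small demo primes used here.
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def id_to_prime(uid: str) -> int:
    """Deterministically map a user ID to a prime representative."""
    h = int.from_bytes(hashlib.sha256(uid.encode()).digest()[:2], "big")
    while not _is_prime(h):
        h += 1
    return h

def accumulate(ids) -> int:
    """Accumulate all user IDs: A = g^(product of their primes) mod N."""
    acc = g
    for uid in ids:
        acc = pow(acc, id_to_prime(uid), N)
    return acc

def membership_witness(ids, uid) -> int:
    """Witness = g raised to the product of every *other* member's prime."""
    w = g
    for other in ids:
        if other != uid:
            w = pow(w, id_to_prime(other), N)
    return w

def verify(acc: int, uid: str, w: int) -> bool:
    """A user proves membership by showing w^(own prime) == A mod N."""
    return pow(w, id_to_prime(uid), N) == acc

users = ["alice", "bob", "carol"]   # hypothetical user IDs
A = accumulate(users)
w_bob = membership_witness(users, "bob")
print(verify(A, "bob", w_bob))      # True: bob proves membership in A
```

Because verification only needs the public accumulator value A and the user's own witness, publishing A on a blockchain lets any user check membership without trusting the aggregator, which is the role membership proofs play in the abstract's design.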
【Keywords】Privacy-preserving; Federated learning; Machine learning; Membership proof
【Publication Date】2021 OCT
【Indexed Date】2022-01-01
【Document Type】
【Subject Category】
--
Comments