【Author】Jiang, Changsong; Xu, Chunxiang; Zhang, Yuan
【Source】INFORMATION SCIENCES
【Abstract】Privacy-preserving federated learning is distributed machine learning in which multiple collaborators train a model through protected gradients. To achieve robustness against users dropping out, existing practical privacy-preserving federated learning schemes are based on (t, N)-threshold secret sharing. Such schemes rely on a strong assumption to guarantee security: the threshold t must be greater than half of the number of users. This assumption is so strict that the schemes may be unsuitable in some scenarios. Motivated by this issue, we first introduce membership proofs for federated learning, which leverage cryptographic accumulators to generate membership proofs by accumulating user IDs. The proofs are published on a public blockchain for users to verify. With membership proofs, we propose a privacy-preserving federated learning scheme called PFLM. PFLM removes the threshold assumption while maintaining the security guarantees. Additionally, we design a result verification algorithm based on a variant of ElGamal encryption to verify the correctness of aggregated results from the cloud server, and integrate it into PFLM. Security analysis in the random oracle model shows that PFLM guarantees privacy against active adversaries. Our implementation and experiments demonstrate PFLM's performance in terms of computation and communication. (c) 2021 Elsevier Inc. All rights reserved.
【Keywords】Privacy-preserving; Federated learning; Machine learning; Membership proof
【Title】PFLM: Privacy-preserving federated learning with membership proof
【Publication Year】2021
【Date Indexed】2022-07-06
【Document Type】Article
【Primary Topic】Blockchain-based federated learning
【Secondary Topic】Federated learning as the primary focus
【Impact Factor】8.233
【Translator】石东瑛
Comments
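The abstract describes membership proofs built from a cryptographic accumulator over user IDs, with the accumulated value published on a public blockchain so that users can verify their membership. The sketch below is a minimal toy illustration of that general idea using an RSA-style accumulator in Python; the parameters, the hash-to-prime encoding, and the function names are illustrative assumptions, not PFLM's actual construction.

```python
# Illustrative sketch (not PFLM's actual construction): an RSA-style cryptographic
# accumulator over user IDs, yielding per-user membership witnesses.
import hashlib
import secrets

# Toy modulus with known factorization -- for illustration only. A real accumulator
# needs an RSA modulus whose factorization is unknown to all parties.
P_TOY, Q_TOY = 1000003, 1000033
N = P_TOY * Q_TOY
G = 65537  # fixed base, assumed coprime to N


def _is_probable_prime(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin probabilistic primality test."""
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True


def hash_to_prime(user_id: str) -> int:
    """Deterministically map a user ID to an odd prime (toy encoding)."""
    candidate = int.from_bytes(hashlib.sha256(user_id.encode()).digest(), "big") | 1
    while not _is_probable_prime(candidate):
        candidate += 2
    return candidate


def accumulate(user_ids):
    """A = G^(product of all users' primes) mod N, published for everyone to check."""
    acc = G
    for uid in user_ids:
        acc = pow(acc, hash_to_prime(uid), N)
    return acc


def membership_witness(user_ids, uid):
    """Witness for uid: accumulate every *other* user's prime."""
    wit = G
    for other in user_ids:
        if other != uid:
            wit = pow(wit, hash_to_prime(other), N)
    return wit


def verify_membership(acc, uid, wit):
    """Anyone can check: wit^prime(uid) must equal the published accumulator value."""
    return pow(wit, hash_to_prime(uid), N) == acc


users = ["user-01", "user-02", "user-03"]  # hypothetical user IDs
A = accumulate(users)                      # in PFLM's setting this value is posted on a blockchain
w = membership_witness(users, "user-02")
assert verify_membership(A, "user-02", w)
```

The abstract also mentions a result verification algorithm based on a variant of ElGamal encryption. One common additively homomorphic variant is lifted (exponential) ElGamal; the sketch below uses it to show how a server's claimed aggregate can be checked against users' encrypted contributions. The choice of variant, the parameters, and the single-key decryption are assumptions for readability, not PFLM's exact algorithm.

```python
# Illustrative sketch (assumed flavor, not PFLM's exact algorithm): lifted (exponential)
# ElGamal, an additively homomorphic ElGamal variant, used to check a claimed aggregate.
import secrets

P = 2**127 - 1   # toy prime modulus (a Mersenne prime); not a secure parameter choice
G = 3            # toy base


def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)


def encrypt(pk, m):
    """Encode m in the exponent so that multiplying ciphertexts adds plaintexts."""
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), (pow(G, m, P) * pow(pk, r, P)) % P


def aggregate(ciphertexts):
    """Componentwise product of ciphertexts encrypts the sum of the plaintexts."""
    c1, c2 = 1, 1
    for a, b in ciphertexts:
        c1, c2 = (c1 * a) % P, (c2 * b) % P
    return c1, c2


def verify_sum(sk, agg_ct, claimed_sum):
    """Decrypt to G^(true sum) and compare with G^(claimed sum)."""
    c1, c2 = agg_ct
    g_sum = (c2 * pow(c1, P - 1 - sk, P)) % P   # c2 / c1^sk
    return g_sum == pow(G, claimed_sum, P)


sk, pk = keygen()
shares = [5, 7, 11]                     # toy integer-encoded gradient contributions
cts = [encrypt(pk, m) for m in shares]
assert verify_sum(sk, aggregate(cts), sum(shares))          # honest aggregate passes
assert not verify_sum(sk, aggregate(cts), sum(shares) + 1)  # tampered aggregate fails
```

Both sketches use toy-sized group parameters for readability; a real deployment would need properly generated parameters and, for the accumulator, a modulus whose factorization nobody knows.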