【Author】 Zhang, Zhebin; Dong, Dajie; Ma, Yuhang; Ying, Yilong; Jiang, Dawei; Chen, Ke; Shou, Lidan; Chen, Gang
【Source】PROCEEDINGS OF THE VLDB ENDOWMENT
【Abstract】Modern mobile applications often produce decentralized data, i.e., a huge amount of privacy-sensitive data distributed over a large number of mobile devices. Techniques for learning models from decentralized data must properly handle two characteristics of such data, namely privacy and massive engagement. Federated learning (FL) is a promising approach for such a learning task, since it learns models from data without exposing private information. However, traditional FL methods assume that the participating mobile devices are honest volunteers. This assumption makes traditional FL methods unsuitable for applications that engage two kinds of participants: 1) self-interested participants who, without economic stimulus, are reluctant to contribute their computing resources unconditionally, and 2) malicious participants who send corrupt updates to disrupt the learning process. This paper proposes Refiner, a reliable federated learning system that tackles the challenges introduced by the massive engagement of self-interested and malicious participants. Refiner is built upon Ethereum, a public blockchain platform. To engage self-interested participants, we introduce an incentive mechanism that rewards each participant according to the amount of its training data and the performance of its local updates. To handle malicious participants, we propose an audit scheme in which a committee of randomly chosen validators denies them rewards and precludes corrupt updates from the global model. The proposed incentive mechanism and audit scheme are implemented with cryptocurrency and smart contracts, two primitives offered by Ethereum. This paper demonstrates the main features of Refiner by training a digit classification model on the MNIST dataset.
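The abstract describes two on-chain mechanisms: an incentive scheme that rewards each participant according to its amount of training data and the performance of its local update, and an audit scheme in which a randomly chosen committee of validators rejects corrupt updates and denies their senders any reward. The sketch below is a minimal Python illustration of that logic, not the paper's actual design: the reward rule (data size × measured performance), the majority-vote quorum, and all function and field names are assumptions made here for illustration. In Refiner itself, this bookkeeping is carried out by Ethereum smart contracts and rewards are paid in cryptocurrency.

```python
import random
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Update:
    participant: str
    data_size: int          # amount of local training data reported by the participant
    performance: float      # e.g., accuracy of the local update on a held-out set

def select_committee(validators: List[str], k: int, seed: int) -> List[str]:
    """Pick k validators at random (a seeded sample stands in for on-chain randomness)."""
    rng = random.Random(seed)
    return rng.sample(validators, k)

def audit(updates: List[Update], committee: List[str],
          vote: Callable[[str, Update], bool], quorum: float = 0.5) -> List[Update]:
    """Keep an update only if a majority of committee members approve it;
    rejected (corrupt) updates are excluded from aggregation and earn nothing."""
    kept = []
    for u in updates:
        approvals = sum(vote(v, u) for v in committee)
        if approvals / len(committee) > quorum:
            kept.append(u)
    return kept

def distribute_rewards(accepted: List[Update], budget: float) -> Dict[str, float]:
    """Split the round's reward budget in proportion to data size x update performance."""
    contrib = {u.participant: u.data_size * u.performance for u in accepted}
    total = sum(contrib.values())
    if total == 0:
        return {p: 0.0 for p in contrib}
    return {p: budget * c / total for p, c in contrib.items()}

if __name__ == "__main__":
    validators = [f"v{i}" for i in range(10)]
    committee = select_committee(validators, k=5, seed=42)
    updates = [Update("alice", data_size=600, performance=0.93),
               Update("bob", data_size=200, performance=0.10)]   # bob's update looks corrupt
    naive_vote = lambda v, u: u.performance > 0.5                # stand-in validation rule
    accepted = audit(updates, committee, naive_vote)
    print(distribute_rewards(accepted, budget=100.0))            # alice receives the full budget
```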
【Keywords】None
【Title】Refiner: A Reliable, Incentive-Driven Federated Learning System Based on Blockchain
【Publication Year】2021
【Date Added】2022-07-06
【Document Type】Article; Proceedings Paper
【Primary Topic】Blockchain-based federated learning
【Subtopic】Federated learning as the core component
【Impact Factor】3.557
【Translator】石东瑛