【Author】 Li, Jun; Shao, Yumeng; Wei, Kang; Ding, Ming; Ma, Chuan; Shi, Long; Han, Zhu; Poor, H. Vincent
【Source】IEEE Transactions on Parallel and Distributed Systems
【Abstract】Federated learning (FL), as a distributed machine learning paradigm, preserves personal privacy by keeping data processing local to each client. However, because standard FL relies on a centralized server for model aggregation, it is vulnerable to server malfunctions, untrustworthy servers, and external attacks. To address these issues, we propose a decentralized FL framework that integrates blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL). In each round of the proposed BLADE-FL, every client broadcasts its trained model to the other clients, aggregates its own model with the received ones, and then competes to generate a block before starting local training for the next round. We evaluate the learning performance of BLADE-FL and develop an upper bound on the global loss function. We then verify that this bound is convex with respect to the total number of aggregation rounds K, and optimize the computing resource allocation to minimize the upper bound. We also identify a critical problem of training deficiency caused by lazy clients, who plagiarize others' trained models and add artificial noise to disguise their cheating. Focusing on this problem, we explore the impact of lazy clients on the learning performance of BLADE-FL and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients. Experiments on the MNIST and Fashion-MNIST datasets are consistent with the analysis: the gap between the developed upper bound and the experimental results is below 5%, and the K optimized from the upper bound effectively minimizes the loss function.
【Keywords】Federated learning; blockchain; lazy client; computing resource allocation
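The round structure described in the abstract (local training, broadcast, FedAvg-style aggregation) and the lazy-client attack (copying a neighbor's model and adding artificial noise) can be illustrated with a minimal sketch. All names, shapes, and update rules below are illustrative assumptions, not taken from the paper; the block-generation (mining) competition is omitted.

```python
# Hypothetical sketch of one BLADE-FL aggregation round. The toy "model" is a
# flat parameter vector; local_train, lazy_update, and the victim-selection
# rule are illustrative assumptions, not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLIENTS = 5
DIM = 4  # toy model size

def local_train(model, client_id):
    """Stand-in for local SGD: a small deterministic pull toward a
    client-specific optimum (here, the constant vector client_id)."""
    return model - 0.1 * (model - client_id)

def lazy_update(copied_model, noise_std=0.01):
    """Lazy client: plagiarize another client's model and add artificial
    Gaussian noise to disguise the cheating behavior."""
    return copied_model + rng.normal(0.0, noise_std, size=copied_model.shape)

def blade_fl_round(models, lazy_ids=()):
    """One round: honest clients train, lazy clients copy-and-perturb,
    everyone broadcasts, and each client averages all received models.
    The block-generation (PoW) competition is omitted from this sketch."""
    trained = []
    for cid, model in enumerate(models):
        if cid in lazy_ids:
            victim = (cid + 1) % len(models)  # copy a neighbor's model
            trained.append(lazy_update(models[victim]))
        else:
            trained.append(local_train(model, cid))
    aggregate = np.mean(trained, axis=0)  # FedAvg-style mean
    return [aggregate.copy() for _ in models]

models = [np.zeros(DIM) for _ in range(NUM_CLIENTS)]
for _ in range(10):  # K = 10 aggregation rounds, with client 4 lazy
    models = blade_fl_round(models, lazy_ids={4})
```

Because the lazy client contributes a stale, noise-perturbed copy instead of a genuinely trained update, the averaged model drifts away from what all-honest training would produce, which is the training deficiency the abstract analyzes.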
【Title】Blockchain-Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation
【Publication Year】2022
【Date Indexed】2022-07-06
【Document Type】Article
【Main Topic】Blockchain-based federated learning
【Subtopic】Federated learning as the primary focus
【Impact Factor】3.757
【Translator】石东瑛