【Author】
Shayan, Muhammad; Fung, Clement; Yoon, Chris J. M.; Beschastnikh, Ivan
【Source】IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
【Abstract】Federated Learning is the current state of the art in secure multi-party machine learning (ML): data remains on the owner's device, and updates to the model are aggregated through a secure protocol. However, this process assumes a trusted centralized infrastructure for coordination, and clients must trust that the central service does not use the byproducts of client data. In addition, a group of malicious clients could harm the model's performance by carrying out a poisoning attack. In response, we propose Biscotti: a fully decentralized peer-to-peer (P2P) approach to multi-party ML that uses blockchain and cryptographic primitives to coordinate a privacy-preserving ML process among peering clients. Our evaluation demonstrates that Biscotti is scalable, fault tolerant, and defends against known attacks. For example, Biscotti both protects the privacy of an individual client's update and maintains the performance of the global model at scale when 30 percent of the clients in the system are adversarial.
【Keywords】Peer-to-peer computing; Data models; Collaborative work; Training; Privacy; Machine learning; Training data; Distributed machine learning; blockchain; privacy; security