Paper Title
Threats to Federated Learning: A Survey
Paper Authors
Abstract
With the emergence of data silos and growing privacy awareness, the traditional centralized approach of training artificial intelligence (AI) models is facing strong challenges. Federated learning (FL) has recently emerged as a promising solution under this new reality. Existing FL protocol designs have been shown to exhibit vulnerabilities which can be exploited by adversaries both within and outside the system to compromise data privacy. It is thus of paramount importance to make FL system designers aware of the implications of future FL algorithm design for privacy preservation. Currently, there is no survey on this topic. In this paper, we bridge this important gap in the FL literature. By providing a concise introduction to the concept of FL and a unique taxonomy covering threat models and the two major classes of attacks on FL, 1) poisoning attacks and 2) inference attacks, this paper provides an accessible review of this important topic. We highlight the intuitions, key techniques, and fundamental assumptions adopted by various attacks, and discuss promising future research directions towards more robust privacy preservation in FL.
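To make the attack surface discussed above concrete, the following is a minimal sketch of the federated averaging protocol with a single model-poisoning client. The toy linear-regression setup, function names, and the crude poisoning strategy are illustrative assumptions for this sketch, not the survey's own algorithms or experiments.

```python
import numpy as np

def local_update(global_weights, client_data, lr=0.1):
    """One local gradient-descent step on a toy linear model."""
    X, y = client_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def fed_avg(client_updates):
    """Server aggregation: plain averaging of client models."""
    return np.mean(client_updates, axis=0)

# Toy federation: 5 honest clients whose data fit the same true model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

# Honest run: the averaged model converges towards true_w.
w = np.zeros(2)
for _ in range(50):
    w = fed_avg([local_update(w, c) for c in clients])
honest_w = w.copy()

# Poisoned run: one client ignores its data and submits an arbitrary
# malicious update each round, dragging the average off course.
w = np.zeros(2)
for _ in range(50):
    updates = [local_update(w, c) for c in clients]
    updates[0] = np.array([100.0, 100.0])  # malicious model update
    w = fed_avg(updates)
poisoned_w = w
```

Because the server averages updates without inspecting them, even a single adversarial participant can bias the global model; this is the intuition behind the poisoning attacks the survey categorizes, and the local updates themselves are what inference attacks exploit.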