Paper Title
Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Paper Authors
Paper Abstract
Backdoors and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be visually detected and weaken the efficacy of the attacks. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling. These attacks enable manipulating images such that their content changes when they are scaled to a specific resolution. By combining poisoning and image-scaling attacks, we can conceal the trigger of backdoors as well as hide the overlays of clean-label poisoning. Furthermore, we consider the detection of image-scaling attacks and derive an adaptive attack. In an empirical evaluation, we demonstrate the effectiveness of our strategy. First, we show that backdoors and poisoning work equally well when combined with image-scaling attacks. Second, we demonstrate that current detection defenses against image-scaling attacks are insufficient to uncover our manipulations. Overall, our work provides a novel means for hiding traces of manipulations that is applicable to different poisoning approaches.
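The underlying idea of an image-scaling attack can be illustrated with a toy sketch. The snippet below is not the paper's actual method: it assumes a naive stride-based nearest-neighbor downscaler, whereas the attacks referenced in the abstract solve an optimization problem against the scaling routines of real imaging libraries. All function names here are hypothetical, chosen only for illustration.

```python
import numpy as np

def downscale_nearest(img, factor):
    # Minimal stride-based nearest-neighbor downscaler (toy assumption):
    # it samples only one pixel per factor x factor block.
    return img[::factor, ::factor]

def scaling_attack_nearest(src, target, factor):
    """Craft an attack image that resembles `src` at full resolution
    but becomes `target` after nearest-neighbor downscaling by `factor`.

    Only the pixels actually sampled by the scaler are overwritten;
    all other pixels keep the benign source content, so the hidden
    image (e.g., a backdoor trigger) is invisible at full resolution.
    """
    attack = src.copy()
    attack[::factor, ::factor] = target
    return attack

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # benign-looking image
    target = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # content to hide
    adv = scaling_attack_nearest(src, target, factor=4)

    # The scaled attack image is exactly the hidden target ...
    assert np.array_equal(downscale_nearest(adv, 4), target)
    # ... yet only 1/16 of the pixels differ from the source.
    print("fraction of changed pixels:", np.mean(adv != src))
```

Real scaling functions interpolate over neighborhoods rather than sampling single pixels, which is why the published attacks formulate the manipulation as an optimization problem (keep the attack image close to the source while forcing the scaled output toward the target); the stride trick above only conveys why so few pixel changes can suffice.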