Title
Adversarial Attacks on ASR Systems: An Overview
Authors
Abstract
With the development of hardware and algorithms, ASR (Automatic Speech Recognition) systems have evolved considerably. As models become simpler and development and deployment become easier, ASR systems are moving ever closer to our daily lives. On the one hand, we often use ASR apps or APIs to generate subtitles and transcribe meetings. On the other hand, smart speakers and self-driving cars rely on ASR systems to control AIoT devices. In the past few years, there have been many works on adversarial example attacks against ASR systems: by adding a small perturbation to a waveform, an attacker can change the recognition result dramatically. In this paper, we describe the development of ASR systems, the different attack assumptions, and how to evaluate these attacks. We then survey current works on adversarial example attacks under two attack assumptions: white-box attacks and black-box attacks. Unlike other surveys, we pay more attention to the layer of the ASR system at which each attack perturbs the waveform, the relationships between these attacks, and their implementation methods. We focus on the effectiveness of these works.