Paper Title

Adversarial defense for deep speaker recognition using hybrid adversarial training

Authors

Monisankha Pal, Arindam Jati, Raghuveer Peri, Chin-Cheng Hsu, Wael AbdAlmageed, Shrikanth Narayanan

Abstract


Deep neural network based speaker recognition systems can easily be deceived by an adversary using minuscule, imperceptible perturbations to the input speech samples. These adversarial attacks pose serious security threats to speaker recognition systems that use speech biometrics. To address this concern, in this work, we propose a new defense mechanism based on a hybrid adversarial training (HAT) setup. In contrast to existing works on countermeasures against adversarial attacks in deep speaker recognition, which use only class-boundary information via the supervised cross-entropy (CE) loss, we propose to exploit additional information from supervised and unsupervised cues to craft diverse and stronger perturbations for adversarial training. Specifically, we employ multi-task objectives using CE, feature-scattering (FS), and margin losses to create adversarial perturbations and include them in adversarial training to enhance the robustness of the model. We conduct speaker recognition experiments on the Librispeech dataset and compare the performance with state-of-the-art projected gradient descent (PGD)-based adversarial training, which employs only the CE objective. The proposed HAT improves adversarial accuracy by an absolute 3.29% and 3.18% for PGD and Carlini-Wagner (CW) attacks respectively, while retaining high accuracy on benign examples.
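The core building block the abstract describes is PGD-style adversarial perturbation crafting: iteratively ascend the gradient of a loss with respect to the input and project back into an L-infinity ball. Below is a minimal NumPy sketch of that inner loop using only the CE objective on a toy linear classifier standing in for the deep speaker network; the function name, hyperparameters, and model are illustrative assumptions, not the paper's implementation (the paper's HAT additionally mixes in feature-scattering and margin losses).

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def pgd_attack(x, y, W, eps=0.1, alpha=0.02, steps=10):
    """L-inf PGD sketch: maximize the CE loss of a linear classifier
    (a toy stand-in for the deep speaker model in the paper)."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)                   # class probabilities
        p[y] -= 1.0                              # dCE/dlogits = p - onehot(y)
        grad = W.T @ p                           # dCE/dx for logits = W @ x
        x_adv = x_adv + alpha * np.sign(grad)    # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps) # project into the eps-ball
    return x_adv
```

In adversarial training, each minibatch would be replaced (or augmented) by `pgd_attack` outputs before the usual supervised update; HAT's contribution is generating those perturbations from several objectives rather than CE alone.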
