Paper Title
Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots
Paper Authors
Paper Abstract
Chatbots are used in many applications, e.g., automated agents, smart home assistants, interactive characters in online games, etc. Therefore, it is crucial to ensure they do not behave in undesired ways, e.g., by providing offensive or toxic responses to users. This is not a trivial task, as state-of-the-art chatbot models are trained on large, public datasets openly collected from the Internet. This paper presents a first-of-its-kind, large-scale measurement of toxicity in chatbots. We show that publicly available chatbots are prone to providing toxic responses when fed toxic queries. Even more worryingly, some non-toxic queries can trigger toxic responses too. We then set out to design and experiment with an attack, ToxicBuddy, which relies on fine-tuning GPT-2 to generate non-toxic queries that make chatbots respond in a toxic manner. Our extensive experimental evaluation demonstrates that our attack is effective against public chatbot models and outperforms manually crafted malicious queries proposed by previous work. We also evaluate three defense mechanisms against ToxicBuddy, showing that they either reduce the attack performance at the cost of degrading the chatbot's utility or are only effective at mitigating a portion of the attack. This highlights the need for more research from the computer security and online safety communities to ensure that chatbot models do not harm their users. Overall, we are confident that ToxicBuddy can be used as an auditing tool and that our work will pave the way toward designing more effective defenses for chatbot safety.
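The abstract only outlines the attack loop (sample candidate queries from a fine-tuned GPT-2, keep the non-toxic ones, and test whether they elicit toxic chatbot replies). Below is a minimal sketch of what such a pipeline could look like, not the paper's actual implementation: the model names (stock "gpt2", "microsoft/DialoGPT-medium" as the target), the use of the Detoxify classifier as the toxicity scorer, and the 0.5 threshold are all illustrative assumptions, since the abstract does not specify the fine-tuning data, target chatbots, or classifier used.

```python
# Hypothetical sketch of a ToxicBuddy-style audit loop. All model choices,
# the Detoxify scorer, and the 0.5 toxicity threshold are assumptions for
# illustration; the paper's real configuration is not given in the abstract.
from transformers import AutoModelForCausalLM, AutoTokenizer
from detoxify import Detoxify

gen_tok = AutoTokenizer.from_pretrained("gpt2")
generator = AutoModelForCausalLM.from_pretrained("gpt2")  # the paper fine-tunes this

chat_tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")  # example target
chatbot = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

scorer = Detoxify("original")  # predict() returns a dict with a 'toxicity' score

def sample_queries(n=16, max_new_tokens=25):
    """Sample candidate queries from the (ideally fine-tuned) generator."""
    prompt = gen_tok(gen_tok.bos_token, return_tensors="pt").input_ids
    out = generator.generate(
        prompt, do_sample=True, top_p=0.9, max_new_tokens=max_new_tokens,
        num_return_sequences=n, pad_token_id=gen_tok.eos_token_id,
    )
    return [gen_tok.decode(seq, skip_special_tokens=True).strip() for seq in out]

def chatbot_reply(query, max_new_tokens=40):
    """One-turn reply from the target chatbot (standard DialoGPT usage)."""
    ids = chat_tok.encode(query + chat_tok.eos_token, return_tensors="pt")
    out = chatbot.generate(ids, max_new_tokens=max_new_tokens,
                           pad_token_id=chat_tok.eos_token_id)
    return chat_tok.decode(out[0, ids.shape[-1]:], skip_special_tokens=True)

# Keep only non-toxic queries that nonetheless elicit a toxic reply.
successes = []
for query in sample_queries():
    if not query or scorer.predict(query)["toxicity"] >= 0.5:
        continue  # discard empty or toxic candidates
    reply = chatbot_reply(query)
    if scorer.predict(reply)["toxicity"] >= 0.5:
        successes.append((query, reply))

print(f"{len(successes)} non-toxic queries triggered a toxic reply")
```

The (query, reply) pairs collected this way would correspond to the attack's successes and, as the abstract suggests, could double as audit evidence when evaluating a chatbot's safety before deployment.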