Paper Title
Adversarial Machine Learning in Network Intrusion Detection Systems
Paper Authors
Paper Abstract
Adversarial examples are inputs to a machine learning system intentionally crafted by an attacker to fool the model into producing an incorrect output. Such examples have achieved considerable success in several domains, such as image recognition, speech recognition, and spam detection. In this paper, we study the nature of the adversarial problem in Network Intrusion Detection Systems (NIDS). We focus on the attack perspective, which includes techniques for generating adversarial examples capable of evading a variety of machine learning models. More specifically, we explore the use of evolutionary computation (particle swarm optimization and genetic algorithms) and deep learning (generative adversarial networks) as tools for adversarial example generation. To assess how well these algorithms evade a NIDS, we apply them to two publicly available datasets, namely NSL-KDD and UNSW-NB15, and contrast them with a baseline perturbation method: Monte Carlo simulation. The results show that our adversarial example generation techniques cause high misclassification rates in eleven different machine learning models, as well as in a voting classifier. Our work highlights the vulnerability of machine learning-based NIDS in the face of adversarial perturbations.
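To make the baseline concrete, the following is a minimal sketch of what a Monte Carlo perturbation baseline for NIDS evasion could look like: random noise is repeatedly sampled onto the attacker-modifiable features of a malicious flow record until the classifier mislabels it as benign, or the sampling budget runs out. The function and parameter names (monte_carlo_perturb, mask, epsilon, n_samples) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def monte_carlo_perturb(x, model_predict, mask, n_samples=1000, epsilon=0.1, rng=None):
    """Randomly perturb the modifiable features of a malicious record `x`
    until the classifier labels it benign, or the budget is exhausted.

    x             : 1-D feature vector, normalized to [0, 1]
    model_predict : callable returning 1 for "attack", 0 for "benign"
    mask          : boolean array marking features the attacker may change
    """
    rng = rng or np.random.default_rng()
    for _ in range(n_samples):
        # Sample uniform noise, restricted to the modifiable features.
        noise = rng.uniform(-epsilon, epsilon, size=x.shape) * mask
        candidate = np.clip(x + noise, 0.0, 1.0)
        if model_predict(candidate) == 0:  # evasion succeeded
            return candidate
    return None  # no evasive example found within the budget

# Toy demo with a hypothetical threshold classifier on the first feature.
toy_predict = lambda v: int(v[0] > 0.5)
x = np.array([0.55, 0.2, 0.9])
mask = np.array([True, False, False])
adv = monte_carlo_perturb(x, toy_predict, mask)
```

The evolutionary methods in the paper (particle swarm optimization and genetic algorithms) replace this undirected sampling with fitness-guided search, which is why they are expected to find evasive examples more efficiently than the Monte Carlo baseline.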