Paper Title
Legal Risks of Adversarial Machine Learning Research
Paper Authors
Paper Abstract
Adversarial Machine Learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system, and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.