Paper Title
EyeDAS: Securing Perception of Autonomous Cars Against the Stereoblindness Syndrome
Paper Authors
Paper Abstract
The ability to detect whether an object is a 2D or 3D object is extremely important in autonomous driving, since a detection error can have life-threatening consequences, endangering the safety of the driver, passengers, pedestrians, and others on the road. Methods proposed to distinguish between 2D and 3D objects (e.g., liveness detection methods) are not suitable for autonomous driving, because they are object-dependent or do not consider the constraints associated with autonomous driving (e.g., the need for real-time decision-making while the vehicle is moving). In this paper, we present EyeDAS, a novel few-shot learning-based method aimed at securing an object detector (OD) against the threat posed by the stereoblindness syndrome (i.e., the inability to distinguish between 2D and 3D objects). We evaluate EyeDAS's real-time performance using 2,000 objects extracted from seven YouTube video recordings of street views taken by a dash cam from the driver's seat perspective. When applying EyeDAS to seven state-of-the-art ODs as a countermeasure, EyeDAS was able to reduce the 2D misclassification rate from 71.42-100% to 2.4% with a 3D misclassification rate of 0% (TPR of 1.0). We also show that EyeDAS outperforms the baseline method and achieves an AUC of over 0.999 and a TPR of 1.0 with an FPR of 0.024.
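The abstract describes EyeDAS as a post-hoc verification layer applied on top of an object detector's output. The paper does not give implementation details here, so the following is only a minimal sketch of that integration pattern: a filter that passes each detection to a 2D/3D verifier and discards objects judged to be flat. The `Detection` type, the `filter_2d_objects` helper, and the toy rule-based verifier are all hypothetical names standing in for the learned model described in the paper.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Detection:
    """Hypothetical container for one object returned by an OD."""
    label: str
    bbox: Tuple[int, int, int, int]  # (x, y, w, h) in pixels


def filter_2d_objects(
    detections: List[Detection],
    is_3d: Callable[[Detection], bool],
) -> List[Detection]:
    """Keep only detections the 2D/3D verifier judges to be real (3D) objects.

    `is_3d` stands in for EyeDAS's learned verdict; in the paper it would
    operate on image data, not just the detection metadata used here.
    """
    return [d for d in detections if is_3d(d)]


# Toy verifier for illustration only: pretend anything labelled
# "billboard_person" is a flat image printed on an advertisement.
def toy_verifier(d: Detection) -> bool:
    return d.label != "billboard_person"


dets = [
    Detection("pedestrian", (10, 20, 30, 60)),
    Detection("billboard_person", (200, 40, 25, 50)),
]
kept = filter_2d_objects(dets, toy_verifier)
print([d.label for d in kept])  # ['pedestrian']
```

The design point illustrated is that the verifier is detector-agnostic: the same filter wraps any of the seven ODs mentioned in the abstract, since it consumes detections rather than detector internals.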