Title

Traffic Sign Detection With Event Cameras and DCNN

Authors

Piotr Wzorek, Tomasz Kryjak

Abstract


In recent years, event cameras (DVS - Dynamic Vision Sensors) have been used in vision systems as an alternative or supplement to traditional cameras. They are characterised by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions -- parameters that are particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyse different representations of the event data: event frame, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as a detector. For particular representations, we obtain a detection accuracy in the range of 86.9-88.9% mAP@0.5. The use of a fusion of the considered representations allows us to obtain a detector with higher accuracy of 89.9% mAP@0.5. In comparison, the detector for the frames reconstructed with FireNet is characterised by an accuracy of 72.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as standalone sensors or in close cooperation with typical frame-based cameras.
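The three event-data representations named in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the input format (an (N, 4) array of x, y, timestamp, polarity), the time constant `tau`, and the exact weighting conventions are all assumptions, and published variants differ in normalisation and polarity handling.

```python
import numpy as np

def event_representations(events, height, width, tau=0.05):
    """Build three common event representations from a batch of events.

    `events` is assumed to be an (N, 4) float array of rows
    (x, y, t, polarity) with t in seconds; `tau` is an illustrative
    decay constant for the time surface (an assumption, not a value
    from the paper).
    """
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = np.where(events[:, 3] > 0, 1.0, -1.0)

    # Event frame: binary map marking pixels that fired at least once.
    frame = np.zeros((height, width))
    frame[y, x] = 1.0

    # Event frequency: per-pixel event count (often normalised later).
    freq = np.zeros((height, width))
    np.add.at(freq, (y, x), 1.0)  # unbuffered add handles repeated pixels

    # Exponentially decaying time surface: each pixel keeps the signed,
    # exponentially decayed weight of its most recent event.
    t_ref = t.max()
    surface = np.zeros((height, width))
    for xi, yi, ti, pi in zip(x, y, t, p):
        surface[yi, xi] = pi * np.exp(-(t_ref - ti) / tau)

    return frame, freq, surface
```

Each representation collapses the asynchronous event stream into a dense 2-D map that a frame-based detector such as YOLOv4 can consume; the paper's fusion result suggests the three maps carry complementary information.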
