Paper Title
LDNet: End-to-End Lane Marking Detection Approach Using a Dynamic Vision Sensor
Paper Authors
Paper Abstract
Modern vehicles are equipped with various driver-assistance systems, including automatic lane keeping, which prevents unintended lane departures. Traditional lane detection methods rely on handcrafted or deep-learning-based features followed by postprocessing techniques for lane extraction using frame-based RGB cameras. The use of frame-based RGB cameras for lane detection is prone to illumination variations, sun glare, and motion blur, which limits the performance of lane detection methods. Incorporating an event camera for lane detection into the perception stack of autonomous driving is one of the most promising solutions for mitigating the challenges encountered by frame-based RGB cameras. The main contribution of this work is the design of a lane marking detection model that employs a dynamic vision sensor. This paper explores the novel application of lane marking detection with an event camera by designing a convolutional encoder followed by an attention-guided decoder. The spatial resolution of the encoded features is retained by a dense atrous spatial pyramid pooling (ASPP) block. An additive attention mechanism in the decoder improves performance on high-dimensional encoded features, promoting lane localization and reducing postprocessing computation. The efficacy of the proposed work is evaluated on the DVS dataset for lane extraction (DET). The experimental results show significant improvements of $5.54\%$ and $5.03\%$ in $F1$ scores on the multiclass and binary-class lane marking detection tasks, respectively. Additionally, the intersection over union ($IoU$) scores of the proposed method surpass those of the best-performing state-of-the-art method by $6.50\%$ and $9.37\%$ in the multiclass and binary-class tasks, respectively.
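The abstract does not give implementation details of the attention-guided decoder. As a rough illustration only, the PyTorch-style sketch below shows how an additive attention gate could re-weight encoder skip features inside such a decoder; the module, parameter names, and channel layout are assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdditiveAttentionGate(nn.Module):
    """Minimal sketch of an additive attention gate on a decoder skip connection.

    Assumption: `x` is an encoder skip feature map and `g` is the gating signal
    from the coarser decoder stage; the gate produces per-pixel coefficients
    that re-weight `x` before it is fused with the upsampled decoder feature.
    """

    def __init__(self, x_channels: int, g_channels: int, inter_channels: int):
        super().__init__()
        # 1x1 projections of both inputs to a shared intermediate dimension.
        self.theta_x = nn.Conv2d(x_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(g_channels, inter_channels, kernel_size=1)
        # Collapse to a single-channel attention map.
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # Bring the gating signal to the spatial size of the skip feature.
        g_up = F.interpolate(g, size=x.shape[2:], mode="bilinear",
                             align_corners=False)
        # Additive attention: project, sum, non-linearity, then squash to [0, 1].
        att = torch.relu(self.theta_x(x) + self.phi_g(g_up))
        alpha = torch.sigmoid(self.psi(att))
        # Re-weighted skip feature, ready to be concatenated in the decoder.
        return x * alpha


if __name__ == "__main__":
    # Toy shapes: skip feature at 64x64 with 64 channels, gating at 32x32 with 128.
    gate = AdditiveAttentionGate(x_channels=64, g_channels=128, inter_channels=32)
    x = torch.randn(1, 64, 64, 64)
    g = torch.randn(1, 128, 32, 32)
    print(gate(x, g).shape)  # torch.Size([1, 64, 64, 64])
```

The design intent, as described in the abstract, is that such per-pixel re-weighting helps the decoder localize lane markings and reduces the need for heavy postprocessing.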