Paper Title

Resource-efficient Deep Neural Networks for Automotive Radar Interference Mitigation

Authors

Johanna Rock, Wolfgang Roth, Mate Toth, Paul Meissner, Franz Pernkopf

Abstract

Radar sensors are crucial for environment perception of driver assistance systems as well as autonomous vehicles. With a rising number of radar sensors and the so far unregulated automotive radar frequency band, mutual interference is inevitable and must be dealt with. Algorithms and models operating on radar data are required to run the early processing steps on specialized radar sensor hardware. This specialized hardware typically has strict resource constraints, i.e., a low memory capacity and low computational power. Convolutional Neural Network (CNN)-based approaches for denoising and interference mitigation yield promising results for radar processing in terms of performance. Regarding resource constraints, however, CNNs typically exceed the hardware's capacities by far. In this paper, we investigate quantization techniques for CNN-based denoising and interference mitigation of radar signals. We analyze the quantization of (i) weights and (ii) activations of different CNN-based model architectures. This quantization results in reduced memory requirements for model storage and during inference. We compare models with fixed and learned bit-widths and contrast two different methodologies for training quantized CNNs, i.e., the straight-through gradient estimator and training distributions over discrete weights. We illustrate the importance of structurally small real-valued base models for quantization and show that learned bit-widths yield the smallest models. We achieve a memory reduction of around 80% compared to the real-valued baseline. Due to practical reasons, however, we recommend the use of 8 bits for weights and activations, which results in models that require only 0.2 megabytes of memory.
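The weight quantization the abstract describes can be illustrated with a minimal NumPy sketch of symmetric uniform quantization, the forward pass used with the straight-through gradient estimator (in training, the backward pass would simply treat the rounding as the identity). This is not the authors' implementation; the function name, the symmetric scaling scheme, and the parameter count in the memory comment are illustrative assumptions.

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Symmetric uniform quantization of a weight tensor to num_bits.

    Returns the dequantized weights (used in the forward pass) and the
    integer codes (what would actually be stored on the sensor hardware).
    With the straight-through estimator, gradients flow through this
    operation as if it were the identity.
    """
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    qmin = -(2 ** (num_bits - 1))           # e.g. -128 for 8 bits
    max_abs = np.max(np.abs(w))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    codes = np.clip(np.round(w / scale), qmin, qmax)
    return codes * scale, codes.astype(np.int8)

# Memory back-of-the-envelope: a model with ~200k parameters stored at
# 8 bits needs roughly 200_000 bytes = 0.2 MB, vs ~0.8 MB at 32-bit float,
# consistent with the ~80% reduction reported in the abstract.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)
w_hat, codes = quantize_uniform(w, num_bits=8)
print("max quantization error:", np.max(np.abs(w - w_hat)))
```

The rounding error per weight is bounded by half the quantization step (scale / 2), which is why a structurally small, well-scaled real-valued base model matters: the larger the dynamic range of the weights, the coarser the 8-bit grid becomes.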
