Paper Title
Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering
Paper Authors
Paper Abstract
Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this paper, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images modeling plausible causes of error. Finally, the clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective not to require changes, or even access, to the DNN internals, thereby facilitating adoption. Experimental results show the superior ability of SAFE in identifying different root causes of DNN errors based on case studies in the automotive domain. It also yields significant improvements in DNN accuracy after retraining, while saving substantial execution time and memory compared to alternatives.
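The abstract outlines a two-step pipeline: feature extraction from error-inducing images with an ImageNet-pretrained model, followed by density-based clustering, where each cluster models a plausible root cause of error. The sketch below illustrates that pipeline under assumed, illustrative choices: a VGG16 backbone and scikit-learn's DBSCAN with placeholder hyperparameters. The abstract does not specify which pretrained model or clustering algorithm SAFE uses, so these are stand-ins, not the paper's implementation.

```python
# Minimal sketch of a SAFE-style root-cause analysis pipeline:
# (1) extract features from error-inducing images with an ImageNet-pretrained
#     backbone, (2) group them into arbitrarily shaped clusters with a
#     density-based clustering algorithm. VGG16 and DBSCAN are illustrative
# assumptions, not necessarily the choices made in the paper.
import numpy as np
import torch
from torchvision import models, transforms
from PIL import Image
from sklearn.cluster import DBSCAN

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Frozen pretrained backbone used purely as a feature extractor; the DNN
# under analysis is never inspected, only its error-inducing inputs.
backbone = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
backbone.classifier = backbone.classifier[:-1]  # drop the final classification layer
backbone.eval()

def extract_features(image_paths):
    """Return one feature vector per error-inducing image."""
    feats = []
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(backbone(img).squeeze(0).numpy())
    return np.stack(feats)

def cluster_root_causes(features, eps=0.5, min_samples=5):
    """Density-based clustering: each cluster models a plausible root cause.

    eps and min_samples are placeholder hyperparameters; -1 labels mark
    noise images not assigned to any cluster.
    """
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

# Example usage with a hypothetical list of error-inducing images:
# labels = cluster_root_causes(extract_features(["err_001.png", "err_002.png"]))
```

The resulting cluster labels can then drive the last step described in the abstract: selecting images per cluster to augment the training set and retrain the DNN on its identified weaknesses.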