Paper Title

Controlled Dropout for Uncertainty Estimation

Paper Authors

Mehedi Hasan, Abbas Khosravi, Ibrahim Hossain, Ashikur Rahman, Saeid Nahavandi

Paper Abstract

Uncertainty quantification in neural networks is one of the most discussed topics for safety-critical applications. Though Neural Networks (NNs) have achieved state-of-the-art performance in many applications, they still provide unreliable point predictions that lack information about uncertainty estimates. Among the various methods that enable neural networks to estimate uncertainty, Monte Carlo (MC) dropout has gained popularity in a short period due to its simplicity. In this study, we present a new version of the traditional dropout layer in which the number of dropout configurations can be fixed. Each layer can then adopt this new dropout layer, applied within the MC method to quantify the uncertainty associated with NN predictions. We conduct experiments on both toy and realistic datasets and compare the results with the MC method using the traditional dropout layer. Performance analysis using uncertainty evaluation metrics corroborates that our dropout layer offers better performance in most cases.
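The core idea described above, fixing the number of dropout configurations and cycling through them during Monte Carlo sampling, can be sketched in a few lines. This is a minimal pure-Python illustration under our own assumptions, not the authors' implementation: the function names `make_masks` and `mc_predict`, the single-layer linear model, and the mean/std uncertainty summary are all hypothetical simplifications.

```python
import random
import statistics

def make_masks(n_units, drop_prob, n_configs, seed=0):
    """Pre-generate a fixed pool of dropout masks ("controlled" dropout):
    only n_configs distinct configurations are ever used, unlike traditional
    MC dropout, which draws a fresh random mask on every forward pass."""
    rng = random.Random(seed)
    return [
        [0.0 if rng.random() < drop_prob else 1.0 for _ in range(n_units)]
        for _ in range(n_configs)
    ]

def mc_predict(x, weights, masks, n_samples):
    """Run n_samples stochastic forward passes of a toy one-layer linear
    model, cycling through the fixed mask pool, and return the mean and
    standard deviation of the predictions as an uncertainty estimate."""
    preds = []
    for i in range(n_samples):
        mask = masks[i % len(masks)]  # fixed, repeatable configuration
        preds.append(sum(xi * w * m for xi, w, m in zip(x, weights, mask)))
    return statistics.mean(preds), statistics.pstdev(preds)
```

Because the mask pool is finite and deterministic given the seed, the set of stochastic predictions, and hence the uncertainty estimate, is repeatable across runs, which is the practical distinction from the traditional dropout layer sketched here.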
