Paper Title

Neural Bootstrapper

Paper Authors

Minsuk Shin, Hyungjoo Cho, Hyun-seok Min, Sungbin Lim

Paper Abstract

Bootstrapping has been a primary tool for ensembling and uncertainty quantification in machine learning and statistics. However, because it requires repeated resampling and training, bootstrapping deep neural networks is computationally burdensome; hence it is difficult to apply in practice to uncertainty estimation and related tasks. To overcome this computational bottleneck, we propose a novel approach called Neural Bootstrapper (NeuBoots), which learns to generate bootstrapped neural networks through a single model training. NeuBoots injects the bootstrap weights into the high-level feature layers of the backbone network and outputs bootstrapped predictions of the target, without additional parameters or repeated computation from scratch. We apply NeuBoots to various machine learning tasks related to uncertainty quantification, including prediction calibration in image classification and semantic segmentation, active learning, and detection of out-of-distribution samples. Our empirical results show that NeuBoots outperforms other bagging-based methods at a much lower computational cost without losing the validity of bootstrapping.
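The mechanism the abstract describes — sampling bootstrap weights and injecting them into a high-level feature layer so that a single trained network can emit many bootstrap predictions — can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the paper's implementation: the tiny backbone, the layer sizes, and the multinomial weight-sampling scheme are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bootstrap_weights(batch_size, n_groups, rng):
    # Multinomial bootstrap counts over n_groups resampling groups
    # (an assumed scheme; it mimics classical nonparametric bootstrap weights).
    return rng.multinomial(n_groups, np.full(n_groups, 1.0 / n_groups),
                           size=batch_size).astype(float)

def forward(x, w_boot, params):
    # Toy backbone: one hidden layer, then bootstrap weights are
    # concatenated into the high-level feature vector before the head.
    W1, b1, W2, b2 = params
    h = np.maximum(x @ W1 + b1, 0.0)             # high-level features
    h_aug = np.concatenate([h, w_boot], axis=1)  # inject bootstrap weights
    return h_aug @ W2 + b2

d_in, d_h, n_groups, d_out = 4, 8, 5, 3
params = (rng.normal(size=(d_in, d_h)), np.zeros(d_h),
          rng.normal(size=(d_h + n_groups, d_out)), np.zeros(d_out))

x = rng.normal(size=(2, d_in))

# At inference, drawing different bootstrap-weight vectors from the SAME
# trained parameters yields an ensemble of bootstrapped predictions.
preds = np.array([forward(x, sample_bootstrap_weights(2, n_groups, rng), params)
                  for _ in range(10)])
mean_pred = preds.mean(axis=0)  # ensemble prediction
uncertainty = preds.std(axis=0) # spread across bootstrap draws
```

The point of the sketch is the cost profile: the backbone is evaluated with different weight draws rather than retrained per bootstrap replicate, which is why the approach avoids the repeated from-scratch training of classical bootstrapping.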
