Paper Title
Non-Negative Bregman Divergence Minimization for Deep Direct Density Ratio Estimation
Paper Authors
Paper Abstract
Density ratio estimation (DRE) is at the core of various machine learning tasks such as anomaly detection and domain adaptation. Among existing approaches to DRE, methods based on Bregman divergence (BD) minimization have been studied extensively. However, when applied with highly flexible models, such as deep neural networks, BD minimization tends to suffer from what we call train-loss hacking, a source of overfitting caused by a typical characteristic of empirical BD estimators. In this paper, to mitigate train-loss hacking, we propose a non-negative correction for empirical BD estimators. Theoretically, we confirm the soundness of the proposed method through a generalization error bound. In our experiments, the proposed method shows favorable performance in inlier-based outlier detection.
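To illustrate the core idea, here is a toy sketch (not the authors' exact estimator) of train-loss hacking and a non-negative-style correction. It uses the least-squares (LSIF-type) instance of BD minimization, where the empirical loss contains a term `-mean(r_nu)` that a flexible model can drive to negative infinity by inflating its outputs on numerator training points; the sketch lower-bounds that term by a constant `C`, a hypothetical hyperparameter introduced here purely for illustration.

```python
import numpy as np

def lsif_loss(r_nu, r_de):
    """Empirical least-squares BD loss for a density-ratio model.

    r_nu: model outputs r(x) on numerator samples.
    r_de: model outputs r(x) on denominator samples.
    """
    return 0.5 * np.mean(r_de ** 2) - np.mean(r_nu)

def corrected_lsif_loss(r_nu, r_de, C=1.0):
    """Toy bounded correction (illustrative, not the paper's formulation).

    The term -mean(r_nu) is clipped from below at -C, so a flexible
    model can no longer push the training loss to -infinity.
    """
    return 0.5 * np.mean(r_de ** 2) + max(-C, -np.mean(r_nu))

# Train-loss hacking: huge outputs on numerator points wreck the plain loss,
# while the corrected loss stays bounded below.
r_nu_hacked = np.full(10, 1e6)
r_de = np.ones(10)
plain = lsif_loss(r_nu_hacked, r_de)          # astronomically negative
bounded = corrected_lsif_loss(r_nu_hacked, r_de)  # stays near -0.5
```

The clipping threshold plays the role that the non-negativity constraint plays in the paper's estimator: it removes the direction in which overfitting makes the empirical loss arbitrarily small without improving the true divergence.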