Paper Title
Informative Dropout for Robust Representation Learning: A Shape-bias Perspective
Paper Authors
Paper Abstract
Convolutional Neural Networks (CNNs) are known to rely more on local texture than on global shape when making decisions. Recent work also indicates a close relationship between a CNN's texture bias and its robustness against distribution shift, adversarial perturbation, random corruption, etc. In this work, we attempt to improve various kinds of robustness universally by alleviating the CNN's texture bias. Inspired by the human visual system, we propose a lightweight, model-agnostic method, namely Informative Dropout (InfoDrop), to improve interpretability and reduce texture bias. Specifically, we discriminate texture from shape based on local self-information in an image, and adopt a Dropout-like algorithm to decorrelate the model output from the local texture. Through extensive experiments, we observe enhanced robustness under various scenarios (domain generalization, few-shot classification, image corruption, and adversarial perturbation). To the best of our knowledge, this work is one of the earliest attempts to improve different kinds of robustness within a unified model, shedding new light on the relationship between shape bias and robustness, as well as on new approaches to trustworthy machine learning algorithms. Code is available at https://github.com/bfshi/InfoDrop.
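As a rough illustration of the mechanism the abstract describes, the sketch below (PyTorch) estimates per-pixel self-information from how often each image patch recurs in its local neighbourhood, and then applies a Dropout-like mask that zeroes feature-map locations in low-information (texture or flat) regions more aggressively than in high-information (shape-bearing) regions. All function names, hyperparameters (patch_size, radius, bandwidth, drop_rate, temperature) and the exact density and drop-probability formulas here are illustrative assumptions, not the paper's implementation; the authors' actual code is in the repository linked above.

```python
import torch
import torch.nn.functional as F


def local_self_information(image, patch_size=3, radius=2, bandwidth=1.0):
    # Estimate per-pixel self-information -log p(patch) for an image of shape
    # (B, C, H, W). p(patch) is approximated by a kernel-density estimate over
    # the patches in a (2*radius+1)^2 neighbourhood, so flat or repetitive
    # texture regions (whose patch recurs nearby) receive low information,
    # while edges and contours receive high information. (Illustrative only.)
    B, C, H, W = image.shape
    pad = patch_size // 2
    padded = F.pad(image, (pad, pad, pad, pad), mode='reflect')
    # One flattened patch per pixel: (B, C * patch_size**2, H, W)
    patches = F.unfold(padded, kernel_size=patch_size).reshape(B, -1, H, W)

    density = torch.zeros(B, H, W, device=image.device)
    count = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # Neighbouring patch at offset (dy, dx); torch.roll wraps at the
            # border, a rough but harmless approximation for this sketch.
            neighbour = torch.roll(patches, shifts=(dy, dx), dims=(2, 3))
            dist2 = ((patches - neighbour) ** 2).sum(dim=1)
            density = density + torch.exp(-dist2 / (2 * bandwidth ** 2))
            count += 1
    density = density / count + 1e-8
    return -torch.log(density)  # (B, H, W); higher = more informative


def infodrop(features, image, drop_rate=0.5, temperature=0.1, **kwargs):
    # Dropout-like masking of a feature map (B, C, h, w): locations whose
    # corresponding image region carries little self-information are zeroed
    # with higher probability, so shape-bearing regions dominate the
    # surviving activations.
    info = local_self_information(image, **kwargs)               # (B, H, W)
    info = F.interpolate(info.unsqueeze(1), size=features.shape[-2:],
                         mode='bilinear', align_corners=False)    # (B, 1, h, w)
    # Low information -> high drop probability (heuristic mapping).
    keep_prob = (1.0 - drop_rate * torch.exp(-info / temperature)).clamp(0.0, 1.0)
    mask = torch.bernoulli(keep_prob)                             # broadcast over channels
    return features * mask / keep_prob.clamp_min(1e-6)            # inverted-dropout rescaling
```

As with standard Dropout, masking of this kind would normally be active only during training and disabled at inference time.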