Paper Title


Bounding boxes for weakly supervised segmentation: Global constraints get close to full supervision

Authors

Hoel Kervadec, Jose Dolz, Shanshan Wang, Eric Granger, Ismail Ben Ayed

Abstract


We propose a novel weakly supervised segmentation method based on several global constraints derived from box annotations. In particular, we leverage a classical tightness prior in a deep learning setting by imposing a set of constraints on the network outputs. Such a powerful topological prior prevents solutions from excessive shrinking by enforcing every horizontal or vertical line within the bounding box to contain at least one pixel of the foreground region. Furthermore, we integrate our deep tightness prior with a global background emptiness constraint, guiding training with information outside the bounding box. We demonstrate experimentally that such a global constraint is much more powerful than the standard cross-entropy for the background class. Our optimization problem is challenging, as it takes the form of a large set of inequality constraints on the outputs of deep networks. We solve it with a sequence of unconstrained losses based on a recent, powerful extension of the log-barrier method, which is well known in the context of interior-point methods. This accommodates standard stochastic gradient descent (SGD) for training deep networks, while avoiding computationally expensive and unstable Lagrangian dual steps and projections. Extensive experiments on two public data sets and applications (prostate and brain lesions) demonstrate that the synergy between our global tightness and emptiness priors yields very competitive performance, approaching full supervision and significantly outperforming DeepCut. Furthermore, our approach removes the need for computationally expensive proposal generation. Our code is shared anonymously.
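The constraints described in the abstract can be sketched in a few lines. The following is a minimal NumPy illustration under our own assumptions, not the authors' released implementation: the function names are hypothetical, tightness is encoded as "per-row and per-column foreground mass inside the box ≥ 1", emptiness as "foreground mass outside the box ≤ 0", and each inequality `z ≤ 0` is penalized with a C1 extension of the standard log-barrier (finite and differentiable even when the constraint is violated, which is what makes plain SGD applicable).

```python
import numpy as np

def extended_log_barrier(z, t=5.0):
    """Penalty for the inequality constraint z <= 0.
    Uses the standard barrier -(1/t) log(-z) where it is well defined
    (z <= -1/t**2), and a linear C1 extension elsewhere, so infeasible
    points get a finite, smoothly increasing penalty instead of +inf.
    """
    z = np.asarray(z, dtype=float)
    threshold = -1.0 / t ** 2
    return np.where(
        z <= threshold,
        -np.log(np.clip(-z, 1e-12, None)) / t,          # interior branch
        t * z - np.log(1.0 / t ** 2) / t + 1.0 / t,     # linear extension
    )

def tightness_emptiness_loss(probs, box, t=5.0):
    """probs: (H, W) softmax foreground probabilities.
    box: (y0, y1, x0, x1) bounding box, half-open indexing.
    Tightness: every horizontal/vertical line inside the box should
    contain at least one foreground pixel, i.e. 1 - line_sum <= 0.
    Emptiness: total foreground mass outside the box should be ~0.
    """
    y0, y1, x0, x1 = box
    inside = probs[y0:y1, x0:x1]
    row_sums = inside.sum(axis=1)   # one constraint per horizontal line
    col_sums = inside.sum(axis=0)   # one constraint per vertical line
    constraints = np.concatenate([1.0 - row_sums, 1.0 - col_sums])

    outside_mask = np.ones_like(probs, dtype=bool)
    outside_mask[y0:y1, x0:x1] = False
    outside_mass = probs[outside_mask].sum()  # want outside_mass <= 0

    return (extended_log_barrier(constraints, t).sum()
            + extended_log_barrier(np.array([outside_mass]), t).sum())
```

In training, `t` would typically be increased over epochs so the soft barrier approaches a hard constraint; a prediction that shrinks to nothing inside the box, or leaks outside it, incurs a sharply larger penalty than one that fills the box.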
