Paper Title

Robust Learning Through Cross-Task Consistency

Paper Authors

Amir Zamir, Alexander Sax, Teresa Yeo, Oğuzhan Kar, Nikhil Cheerla, Rohan Suri, Zhangjie Cao, Jitendra Malik, Leonidas Guibas

Paper Abstract

Visual perception entails solving a wide set of tasks, e.g., object detection, depth estimation, etc. The predictions made for multiple tasks from the same image are not independent, and therefore, are expected to be consistent. We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency. The proposed formulation is based on inference-path invariance over a graph of arbitrary tasks. We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs. This framework also leads to an informative unsupervised quantity, called Consistency Energy, based on measuring the intrinsic consistency of the system. Consistency Energy correlates well with the supervised error (r=0.67), thus it can be employed as an unsupervised confidence metric as well as for detection of out-of-distribution inputs (ROC-AUC=0.95). The evaluations are performed on multiple datasets, including Taskonomy, Replica, CocoDoom, and ApolloScape, and they benchmark cross-task consistency versus various baselines including conventional multi-task learning, cycle consistency, and analytical consistency.
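The core idea in the abstract can be illustrated with a small sketch. Below is a minimal, hypothetical example (not the paper's actual implementation) of the "triangle" form of cross-task consistency: the direct prediction path x → y1 is trained to agree with the cross-task path x → y2 → y1, and the residual disagreement between the two paths serves as an unsupervised Consistency Energy. The nn.Linear stand-in networks, tensor shapes, and function names (consistency_augmented_loss, consistency_energy) are illustrative assumptions; the paper uses image-to-image networks over tasks such as depth and surface normals.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in per-task networks (the actual paper uses image-to-image architectures).
f_x_to_y1  = nn.Linear(64, 32)  # direct predictor: input domain x -> target task y1
f_x_to_y2  = nn.Linear(64, 16)  # predictor for an intermediate task y2
f_y2_to_y1 = nn.Linear(16, 32)  # cross-task function: y2 -> y1

def consistency_augmented_loss(x, y1_gt):
    """Supervised loss on the direct path plus a penalty when the two
    inference paths x -> y1 and x -> y2 -> y1 disagree (path invariance)."""
    y1_direct = f_x_to_y1(x)
    y1_via_y2 = f_y2_to_y1(f_x_to_y2(x))
    supervised  = F.l1_loss(y1_direct, y1_gt)
    consistency = F.l1_loss(y1_direct, y1_via_y2)
    return supervised + consistency

def consistency_energy(x):
    """Unsupervised disagreement between inference paths; in the paper this
    quantity correlates with prediction error and flags out-of-distribution inputs."""
    with torch.no_grad():
        y1_direct = f_x_to_y1(x)
        y1_via_y2 = f_y2_to_y1(f_x_to_y2(x))
        return F.l1_loss(y1_direct, y1_via_y2).item()

x     = torch.randn(8, 64)   # dummy input batch
y1_gt = torch.randn(8, 32)   # dummy labels for task y1
consistency_augmented_loss(x, y1_gt).backward()
print("consistency energy:", consistency_energy(x))
```

In the paper, this path-invariance idea is applied over a graph of arbitrary tasks rather than a single triangle, and the resulting Consistency Energy is reported to correlate with supervised error (r=0.67) and to separate out-of-distribution inputs (ROC-AUC=0.95).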
