Paper Title
VReBERT: A Simple and Flexible Transformer for Visual Relationship Detection
Authors
Abstract
Visual Relationship Detection (VRD) impels a computer vision model to 'see' beyond an individual object instance and 'understand' how different objects in a scene are related. The traditional approach to VRD first detects the objects in an image and then separately predicts the relationships between the detected object instances. Such a disjoint approach is prone to predicting redundant relationship tags (i.e., predicates) with similar semantic meanings for the same object pair, or tags that are close in meaning to the ground truth yet semantically incorrect. To remedy this, we propose to jointly train a VRD model on visual object features and semantic relationship features. To this end, we propose VReBERT, a BERT-like transformer model for Visual Relationship Detection with a multi-stage training strategy to jointly process visual and semantic features. We show that our simple BERT-like model is able to outperform the state-of-the-art VRD models in predicate prediction. Furthermore, we show that by using the pre-trained VReBERT model, our model pushes the state-of-the-art in zero-shot predicate prediction by a significant margin (+8.49 R@50 and +8.99 R@100).
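
A minimal PyTorch sketch of the joint visual-semantic idea the abstract describes: a BERT-like encoder that attends over visual object features and semantic (class-label) features in one token sequence, then classifies the predicate from a [CLS]-style token. The class name, token layout, dimensions, and fusion scheme below are illustrative assumptions, not the paper's actual VReBERT architecture.

    # Sketch only: a BERT-like encoder over visual + semantic tokens for
    # predicate prediction. All hyperparameters here are assumed, not from
    # the paper.
    import torch
    import torch.nn as nn

    class VRDTransformerSketch(nn.Module):
        def __init__(self, visual_dim=2048, semantic_vocab=100,
                     d_model=768, n_heads=12, n_layers=6, n_predicates=70):
            super().__init__()
            self.visual_proj = nn.Linear(visual_dim, d_model)          # project detector features
            self.label_embed = nn.Embedding(semantic_vocab, d_model)   # embed object class labels
            self.cls_token = nn.Parameter(torch.zeros(1, 1, d_model))  # [CLS]-style summary token
            self.type_embed = nn.Embedding(3, d_model)                 # token type: cls / visual / semantic
            layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.head = nn.Linear(d_model, n_predicates)               # predicate classifier

        def forward(self, subj_feat, obj_feat, subj_label, obj_label):
            # subj_feat, obj_feat: (B, visual_dim); subj_label, obj_label: (B,) int64
            B = subj_feat.size(0)
            vis = self.visual_proj(torch.stack([subj_feat, obj_feat], dim=1))    # (B, 2, d)
            sem = self.label_embed(torch.stack([subj_label, obj_label], dim=1))  # (B, 2, d)
            cls = self.cls_token.expand(B, -1, -1)                               # (B, 1, d)
            tokens = torch.cat([cls, vis, sem], dim=1)                           # (B, 5, d)
            types = torch.tensor([0, 1, 1, 2, 2], device=tokens.device)
            tokens = tokens + self.type_embed(types)
            encoded = self.encoder(tokens)          # joint attention over both modalities
            return self.head(encoded[:, 0])         # predicate logits from the [CLS] position

    # Usage with dummy inputs for one batch of subject-object pairs:
    model = VRDTransformerSketch()
    logits = model(torch.randn(4, 2048), torch.randn(4, 2048),
                   torch.randint(0, 100, (4,)), torch.randint(0, 100, (4,)))
    # logits: (4, 70) predicate scores

Because the visual and semantic tokens share one attention stack, the predicate prediction is conditioned on both modalities at once, which is the property the abstract contrasts with disjoint detect-then-predict pipelines.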