Paper Title
VIPHY: Probing "Visible" Physical Commonsense Knowledge
Paper Authors
Paper Abstract
In recent years, vision-language models (VLMs) have shown remarkable performance on visual reasoning tasks (e.g., attributes, location). While such tasks measure the requisite knowledge to ground and reason over a given visual instance, they do not, however, measure the ability of VLMs to retain and generalize such knowledge. In this work, we evaluate their ability to acquire "visible" physical knowledge -- the information that is easily accessible from images of static scenes, particularly across the dimensions of object color, size, and space. We build an automatic pipeline to derive a comprehensive knowledge resource for calibrating and probing these models. Our results indicate a severe gap between model and human performance across all three tasks. Furthermore, our caption-pretrained baseline (CapBERT) significantly outperforms VLMs on both size and spatial tasks -- highlighting that despite sufficient access to ground language in the visual modality, they struggle to retain such knowledge. The dataset and code are available at https://github.com/Axe--/ViPhy.
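To make the probing setup concrete, below is a minimal sketch of a cloze-style query about object color against a text-only masked language model, roughly in the spirit of the text baselines described above. The model name (bert-base-uncased) and the prompt templates are illustrative assumptions, not the paper's exact templates or evaluation protocol.

```python
# Illustrative sketch only: cloze-style probing of a masked language model
# for "visible" physical knowledge (here, object color). The model and the
# prompts are assumptions for demonstration, not the paper's actual setup.
from transformers import pipeline

# Load a standard fill-mask pipeline with a text-only BERT model.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical cloze prompts for the color dimension.
prompts = [
    "the color of a ripe banana is [MASK].",
    "the color of grass is usually [MASK].",
]

for prompt in prompts:
    # Retrieve the top-3 predicted fillers for the masked slot.
    predictions = fill_mask(prompt, top_k=3)
    top_tokens = [p["token_str"] for p in predictions]
    print(f"{prompt} -> {top_tokens}")
```

A similar pattern could in principle be applied to size or spatial prompts, or adapted to query a VLM, though scoring predictions against a curated knowledge resource (as the paper does) requires additional evaluation logic not shown here.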