Paper Title

Testing Relational Understanding in Text-Guided Image Generation

Paper Authors

Colin Conwell, Tomer Ullman

Paper Abstract

Relations are basic building blocks of human cognition. Classic and recent work suggests that many relations are early developing, and quickly perceived. Machine models that aspire to human-level perception and reasoning should reflect the ability to recognize and reason generatively about relations. We report a systematic empirical examination of a recent text-guided image generation model (DALL-E 2), using a set of 15 basic physical and social relations studied or proposed in the literature, and judgements from human participants (N = 169). Overall, we find that only ~22% of images matched basic relation prompts. Based on a quantitative examination of people's judgments, we suggest that current image generation models do not yet have a grasp of even basic relations involving simple objects and agents. We examine reasons for model successes and failures, and suggest possible improvements based on computations observed in biological intelligence.
