Paper Title
Disentangling visual and written concepts in CLIP
Paper Authors
Paper Abstract
The CLIP network measures the similarity between natural text and images; in this work, we investigate the entanglement of the representation of word images and natural images in its image encoder. First, we find that the image encoder has an ability to match word images with natural images of scenes described by those words. This is consistent with previous research that suggests that the meaning and the spelling of a word might be entangled deep within the network. On the other hand, we also find that CLIP has a strong ability to match nonsense words, suggesting that processing of letters is separated from processing of their meaning. To explicitly determine whether the spelling capability of CLIP is separable, we devise a procedure for identifying representation subspaces that selectively isolate or eliminate spelling capabilities. We benchmark our methods against a range of retrieval tasks, and we also test them by measuring the appearance of text in CLIP-guided generated images. We find that our methods are able to cleanly separate spelling capabilities of CLIP from the visual processing of natural images.