Paper Title

Robust Sound-Guided Image Manipulation

Paper Authors

Seung Hyun Lee, Gyeongrok Oh, Wonmin Byeon, Sang Ho Yoon, Jinkyu Kim, Sangpil Kim

Paper Abstract

Recent successes suggest that an image can be manipulated by a text prompt, e.g., a landscape scene on a sunny day is manipulated into the same scene on a rainy day driven by a text input "raining". These approaches often utilize a StyleCLIP-based image generator, which leverages multi-modal (text and image) embedding space. However, we observe that such text inputs are often bottlenecked in providing and synthesizing rich semantic cues, e.g., differentiating heavy rain from rain with thunderstorms. To address this issue, we advocate leveraging an additional modality, sound, which has notable advantages in image manipulation as it can convey more diverse semantic cues (vivid emotions or dynamic expressions of the natural world) than texts. In this paper, we propose a novel approach that first extends the image-text joint embedding space with sound and applies a direct latent optimization method to manipulate a given image based on audio input, e.g., the sound of rain. Our extensive experiments show that our sound-guided image manipulation approach produces semantically and visually more plausible manipulation results than the state-of-the-art text and sound-guided image manipulation methods, which are further confirmed by our human evaluations. Our downstream task evaluations also show that our learned image-text-sound joint embedding space effectively encodes sound inputs.
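The abstract outlines two components: extending the CLIP image-text embedding space with a sound encoder, and a direct latent optimization that edits an image according to an audio embedding. Below is a minimal sketch of the latent optimization step only, not the authors' implementation; the names `stylegan_generator`, `clip_image_encoder`, and `audio_emb`, as well as the loss weights and step counts, are illustrative assumptions (a pretrained StyleGAN generator, a CLIP image encoder, and an audio embedding already aligned with the joint space).

```python
# Minimal sketch (assumed interfaces, not the paper's code): optimize a StyleGAN
# latent code so the generated image's CLIP embedding moves toward a target audio
# embedding, while a regularizer keeps the result close to the source image.
import torch
import torch.nn.functional as F

def manipulate_latent(w_init, audio_emb, stylegan_generator, clip_image_encoder,
                      steps=200, lr=0.05, lambda_reg=0.01):
    w_init = w_init.detach()
    w = w_init.clone().requires_grad_(True)
    audio_emb = F.normalize(audio_emb, dim=-1)        # target cue in the joint space
    optimizer = torch.optim.Adam([w], lr=lr)

    for _ in range(steps):
        image = stylegan_generator(w)                 # synthesize image from latent code
        img_emb = F.normalize(clip_image_encoder(image), dim=-1)
        # cosine distance between image embedding and audio embedding
        sim_loss = 1.0 - (img_emb * audio_emb).sum(dim=-1).mean()
        # penalize drift from the source latent to preserve the original content
        reg_loss = lambda_reg * (w - w_init).pow(2).mean()
        loss = sim_loss + reg_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return w.detach()
```

In this sketch the audio embedding plays the same role a CLIP text embedding plays in text-driven latent optimization, which is how a richer cue such as the sound of rain can steer the edit.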
