Paper Title

Accuracy vs. Complexity: A Trade-off in Visual Question Answering Models

Authors

Farazi, Moshiur R., Khan, Salman H., Barnes, Nick

Abstract

Visual Question Answering (VQA) has emerged as a Visual Turing Test to validate the reasoning ability of AI agents. The pivot of existing VQA models is the joint embedding that is learned by combining the visual features from an image and the semantic features from a given question. Consequently, a large body of literature has focused on developing complex joint embedding strategies coupled with visual attention mechanisms to effectively capture the interplay between these two modalities. However, modelling the visual and semantic features in a high-dimensional (joint embedding) space is computationally expensive, and more complex models often yield only marginal improvements in VQA accuracy. In this work, we systematically study the trade-off between model complexity and performance on the VQA task. VQA models have a diverse architecture comprising pre-processing, feature extraction, multimodal fusion, attention and final classification stages. We specifically focus on the effect of "multi-modal fusion" in VQA models, which is typically the most expensive step in a VQA pipeline. Our thorough experimental evaluation leads us to two proposals, one optimized for minimal complexity and the other optimized for state-of-the-art VQA performance.
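To make the complexity argument concrete, the sketch below compares the parameter counts of two common multimodal fusion strategies: a cheap element-wise (Hadamard) fusion of projected features versus a full bilinear fusion over all pairwise feature interactions. This is an illustrative NumPy sketch with toy dimensions, not the authors' models; the function names and dimension values are assumptions for demonstration only.

```python
import numpy as np

def hadamard_fusion(v, q, W_v, W_q):
    """Element-wise (Hadamard) fusion: project each modality into a
    common d-dim space, then multiply. Parameter cost: d*d_v + d*d_q."""
    return (W_v @ v) * (W_q @ q)

def bilinear_fusion(v, q, W):
    """Full bilinear fusion: one d_v x d_q interaction matrix per output
    unit, capturing all pairwise interactions. Parameter cost: d*d_v*d_q."""
    return np.array([v @ W_k @ q for W_k in W])

# Toy feature sizes (assumed); real VQA features are far larger,
# which makes the bilinear blow-up even more pronounced.
d_v, d_q, d = 64, 32, 16
rng = np.random.default_rng(0)
v = rng.standard_normal(d_v)   # visual feature (e.g. from a CNN)
q = rng.standard_normal(d_q)   # question feature (e.g. from an LSTM)

W_v = rng.standard_normal((d, d_v))
W_q = rng.standard_normal((d, d_q))
W = rng.standard_normal((d, d_v, d_q))

print("Hadamard params:", W_v.size + W_q.size)  # 16*64 + 16*32 = 1536
print("Bilinear params:", W.size)               # 16*64*32 = 32768
```

Both fusions output a d-dimensional joint embedding, but the bilinear variant's parameter count grows multiplicatively with both input dimensions, which is why heavier fusion often dominates a VQA pipeline's cost.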
