Paper Title

ManyModalQA: Modality Disambiguation and QA over Diverse Inputs

Authors

Darryl Hannan, Akshay Jain, Mohit Bansal

Abstract

We present a new multimodal question answering challenge, ManyModalQA, in which an agent must answer a question by considering three distinct modalities: text, images, and tables. We collect our data by scraping Wikipedia and then utilize crowdsourcing to collect question-answer pairs. Our questions are ambiguous, in that the modality that contains the answer is not easily determined based solely upon the question. To demonstrate this ambiguity, we construct a modality selector (or disambiguator) network, and this model gets substantially lower accuracy on our challenge set, compared to existing datasets, indicating that our questions are more ambiguous. By analyzing this model, we investigate which words in the question are indicative of the modality. Next, we construct a simple baseline ManyModalQA model, which, based on the prediction from the modality selector, fires a corresponding pre-trained state-of-the-art unimodal QA model. We focus on providing the community with a new manymodal evaluation set and only provide a fine-tuning set, with the expectation that existing datasets and approaches will be transferred for most of the training, to encourage low-resource generalization without large, monolithic training sets for each new task. There is a significant gap between our baseline models and human performance; therefore, we hope that this challenge encourages research in end-to-end modality disambiguation and multimodal QA models, as well as transfer learning. Code and data available at: https://github.com/hannandarryl/ManyModalQA
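
The baseline described above follows a two-stage design: a modality selector first predicts whether the answer lives in the text, a table, or an image, and the question is then routed to the corresponding pre-trained unimodal QA model. Below is a minimal Python sketch of that dispatch logic, not the authors' implementation; the selector here is a trivial keyword heuristic and all function and variable names are hypothetical placeholders.

```python
from typing import Callable, Dict

MODALITIES = ("text", "table", "image")

def select_modality(question: str) -> str:
    """Hypothetical stand-in for the trained modality selector network.
    A real selector would be a learned classifier over the question;
    here a keyword heuristic is used purely for illustration."""
    q = question.lower()
    if any(w in q for w in ("picture", "photo", "shown", "color")):
        return "image"
    if any(w in q for w in ("how many", "year", "rank", "total")):
        return "table"
    return "text"

def answer(question: str,
           context: Dict[str, object],
           qa_models: Dict[str, Callable[[str, object], str]]) -> str:
    """Route the question to the unimodal QA model chosen by the selector."""
    modality = select_modality(question)
    return qa_models[modality](question, context[modality])

# Usage sketch: plug in whichever pre-trained unimodal QA models are available,
# e.g. a text reading-comprehension model, a table QA model, and a VQA model.
qa_models = {
    "text":  lambda q, ctx: "answer from text QA model",
    "table": lambda q, ctx: "answer from table QA model",
    "image": lambda q, ctx: "answer from VQA model",
}
context = {
    "text": "article body ...",
    "table": [["Year", "Title"], ["2001", "Example"]],
    "image": "page_image.jpg",
}
print(answer("In what year was the album released?", context, qa_models))
```

Because the pipeline is modular, each unimodal component can be swapped for a stronger off-the-shelf model without retraining the selector, which matches the paper's emphasis on transfer from existing datasets rather than a large monolithic training set.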
