Paper Title

Multimodal Lecture Presentations Dataset: Understanding Multimodality in Educational Slides

Paper Authors

Dong Won Lee, Chaitanya Ahuja, Paul Pu Liang, Sanika Natu, Louis-Philippe Morency

Paper Abstract

Lecture slide presentations, a sequence of pages that contain text and figures accompanied by speech, are constructed and presented carefully in order to optimally transfer knowledge to students. Previous studies in multimedia and psychology attribute the effectiveness of lecture presentations to their multimodal nature. As a step toward developing AI to aid in student learning as intelligent teacher assistants, we introduce the Multimodal Lecture Presentations dataset as a large-scale benchmark testing the capabilities of machine learning models in multimodal understanding of educational content. Our dataset contains aligned slides and spoken language, for 180+ hours of video and 9000+ slides, with 10 lecturers from various subjects (e.g., computer science, dentistry, biology). We introduce two research tasks which are designed as stepping stones towards AI agents that can explain (automatically captioning a lecture presentation) and illustrate (synthesizing visual figures to accompany spoken explanations) educational content. We provide manual annotations to help implement these two research tasks and evaluate state-of-the-art models on them. Comparing baselines and human student performances, we find that current models struggle with (1) weak crossmodal alignment between slides and spoken text, (2) learning novel visual mediums, (3) technical language, and (4) long-range sequences. Towards addressing these challenges, we also introduce PolyViLT, a multimodal transformer trained with a multi-instance learning loss that is more effective than current approaches. We conclude by shedding light on the challenges and opportunities in multimodal understanding of educational presentations.
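
The abstract notes that PolyViLT is trained with a multi-instance learning loss to align slides with spoken language. Below is a minimal sketch of one common formulation of such a loss, assuming a contrastive setup in which each slide is paired with a "bag" of spoken segments and only the best-matching segment in the bag contributes to the positive score. The function name, tensor shapes, and max-pooling choice are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a multiple-instance contrastive loss for slide-speech
# alignment. All names and the exact formulation are illustrative assumptions;
# the actual PolyViLT objective may differ.
import torch
import torch.nn.functional as F


def mil_contrastive_loss(slide_emb, speech_emb, bag_ids, temperature=0.07):
    """Align each slide with the best-matching spoken segment in its bag.

    slide_emb:  (num_slides, d)   one embedding per slide
    speech_emb: (num_segments, d) one embedding per spoken segment
    bag_ids:    (num_segments,)   index of the slide each segment belongs to
    """
    slide_emb = F.normalize(slide_emb, dim=-1)
    speech_emb = F.normalize(speech_emb, dim=-1)

    # Similarity between every slide and every spoken segment.
    sim = slide_emb @ speech_emb.t() / temperature  # (num_slides, num_segments)

    # MIL pooling: for each slide-bag pair, keep only the most similar
    # segment in that bag. The diagonal entries are the positives.
    num_slides = slide_emb.size(0)
    bag_sims = torch.full((num_slides, num_slides), float("-inf"),
                          device=sim.device)
    for bag in range(num_slides):
        mask = bag_ids == bag
        if mask.any():
            bag_sims[:, bag] = sim[:, mask].max(dim=1).values

    # Standard InfoNCE over the pooled bag-level similarities.
    targets = torch.arange(num_slides, device=sim.device)
    return F.cross_entropy(bag_sims, targets)


if __name__ == "__main__":
    # Toy example: 4 slides, 10 spoken segments assigned to those slides.
    slides = torch.randn(4, 256)
    segments = torch.randn(10, 256)
    bags = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])
    print(mil_contrastive_loss(slides, segments, bags))
```

The max-over-bag pooling reflects the multi-instance assumption that only some spoken segments actually describe the slide's content; other pooling choices (mean, attention) are equally plausible readings of the abstract.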
