Paper Title
The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation
Paper Authors
Paper Abstract
Crossmodal knowledge distillation (KD) extends traditional knowledge distillation to the area of multimodal learning and has demonstrated great success in various applications. To achieve knowledge transfer across modalities, a network pretrained on one modality is adopted as the teacher to provide supervision signals to a student network learning from another modality. Despite the empirical success reported in prior works, the working mechanism of crossmodal KD remains a mystery. In this paper, we present a thorough understanding of crossmodal KD. We begin with two case studies and demonstrate that KD is not a universal cure in crossmodal knowledge transfer. We then present the modality Venn diagram to understand modality relationships, and the modality focusing hypothesis, which reveals the decisive factor in the efficacy of crossmodal KD. Experimental results on six multimodal datasets help justify our hypothesis, diagnose failure cases, and point out directions to improve crossmodal knowledge transfer in the future.
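The teacher-student setup described above (a teacher pretrained on one modality supervising a student on another) builds on the standard soft-label distillation objective. A minimal NumPy sketch of that objective follows; it is illustrative only, not the paper's specific method, and the toy logits and function names are hypothetical:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # stabilize against overflow
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(teacher_logits, student_logits, T=2.0):
    """Soft-label KD loss: KL(teacher || student) at temperature T,
    scaled by T^2 as in standard distillation."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher modality (e.g. RGB)
    q = softmax(student_logits, T)  # predictions from the student modality (e.g. depth)
    return float(np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean() * T * T)

# Toy example: the teacher's logits come from one modality,
# the student's from another; the loss drives the student toward
# the teacher's soft label distribution.
teacher_logits = np.array([[2.0, 0.5, -1.0]])
student_logits = np.array([[1.5, 0.7, -0.5]])
loss = kd_loss(teacher_logits, student_logits)
```

In practice this term is typically combined with a hard-label cross-entropy loss on the student modality; the abstract's point is that this transfer is not guaranteed to help, which motivates the modality focusing hypothesis.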