Paper Title
Multi-Representation Knowledge Distillation For Audio Classification
Paper Authors
Paper Abstract
As an important component of multimedia analysis tasks, audio classification aims to discriminate between different types of audio signals and has received intensive attention due to its wide range of applications. Generally speaking, a raw signal can be transformed into various representations (such as the Short-Time Fourier Transform and Mel-Frequency Cepstral Coefficients), and the information implied in different representations can be complementary. Ensembling models trained on different representations can greatly boost classification performance; however, running inference with many models is cumbersome and computationally expensive. In this paper, we propose a novel end-to-end collaborative learning framework for the audio classification task. The framework takes multiple representations as input and trains the models in parallel. The complementary information provided by the different representations is shared through knowledge distillation. Consequently, the performance of each model can be significantly improved without increasing the computational overhead at the inference stage. Extensive experimental results demonstrate that the proposed approach improves classification performance and achieves state-of-the-art results on both acoustic scene classification tasks and general audio tagging tasks.
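The collaborative objective described above can be illustrated with a short sketch. The following PyTorch code shows one plausible instantiation, in the style of deep mutual learning: two branches each consume their own representation of the same clip and are trained with a cross-entropy term plus a KL term that distills from the peer branch's softened predictions. The branch names (model_stft, model_mfcc), the temperature T, and the weight alpha are illustrative assumptions, not details taken from the paper.

    # A minimal sketch of collaborative knowledge distillation between two
    # representation branches (deep-mutual-learning style). All names and
    # hyperparameters here (model_stft, model_mfcc, T, alpha) are
    # illustrative assumptions, not details from the paper.
    import torch.nn.functional as F

    def collaborative_kd_loss(logits_a, logits_b, targets, T=2.0, alpha=0.5):
        # Supervised term: standard cross-entropy against the hard labels.
        ce = F.cross_entropy(logits_a, targets)
        # Distillation term: KL divergence to the peer branch's softened
        # predictions; the peer's logits are detached so it acts as a teacher.
        kd = F.kl_div(
            F.log_softmax(logits_a / T, dim=1),
            F.softmax(logits_b.detach() / T, dim=1),
            reduction="batchmean",
        ) * (T * T)  # usual temperature rescaling of the KD gradient
        return (1 - alpha) * ce + alpha * kd

    def train_step(model_stft, model_mfcc, stft_batch, mfcc_batch, targets,
                   opt_stft, opt_mfcc):
        # Each branch sees its own representation of the same audio clips.
        logits_s = model_stft(stft_batch)
        logits_m = model_mfcc(mfcc_batch)

        # Symmetric losses: each branch distills from the other's output.
        loss_s = collaborative_kd_loss(logits_s, logits_m, targets)
        loss_m = collaborative_kd_loss(logits_m, logits_s, targets)

        opt_stft.zero_grad(); loss_s.backward(); opt_stft.step()
        opt_mfcc.zero_grad(); loss_m.backward(); opt_mfcc.step()

Because the branches exchange only soft predictions during training, any single branch can be deployed on its own at test time, which is consistent with the abstract's claim that inference cost does not grow relative to a single model.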