Paper Title
MAST: Multiscale Audio Spectrogram Transformers
Paper Authors
Paper Abstract
We present the Multiscale Audio Spectrogram Transformer (MAST) for audio classification, which brings the concept of multiscale feature hierarchies to the Audio Spectrogram Transformer (AST). Given an input audio spectrogram, we first patchify and project it into an initial temporal resolution and embedding dimension, after which the multiple stages in MAST progressively expand the embedding dimension while reducing the temporal resolution of the input. We use a pyramid structure that allows the early layers of MAST, operating at a high temporal resolution but with a low embedding dimension, to model simple low-level acoustic information, and the deeper, temporally coarse layers to model high-level acoustic information with high-dimensional embeddings. We also extend our approach to present a new Self-Supervised Learning (SSL) method called SS-MAST, which computes a symmetric contrastive loss between latent representations from a student and a teacher encoder, leveraging patch-drop, a novel audio augmentation approach that we introduce. In practice, MAST significantly outperforms AST by an average of 3.4% in accuracy across 8 speech and non-speech tasks from the LAPE Benchmark, achieving state-of-the-art results on keyword spotting in Speech Commands. Additionally, our proposed SS-MAST achieves an absolute average improvement of 2.6% over the previously proposed SSAST.
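The abstract does not specify MAST's exact configuration, so the following is only a minimal PyTorch sketch of the pyramid idea it describes: patchify a spectrogram, project it to an initial embedding dimension, then run stages that widen the embedding while pooling the temporal resolution. The class names (`ToyMAST`, `TransformerStage`), stage depths, dimensions (96→192→384), and pooling choice are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn


class TransformerStage(nn.Module):
    """One pyramid stage: Transformer blocks at a fixed temporal
    resolution, then a projection that widens the embedding dimension
    and an average-pool that halves the sequence length (assumed)."""
    def __init__(self, dim, out_dim, depth=2, heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)
        self.expand = nn.Linear(dim, out_dim)    # widen embeddings
        self.pool = nn.AvgPool1d(kernel_size=2)  # halve temporal length

    def forward(self, x):                  # x: (batch, time, dim)
        x = self.blocks(x)
        x = self.expand(x)                 # (batch, time, out_dim)
        return self.pool(x.transpose(1, 2)).transpose(1, 2)


class ToyMAST(nn.Module):
    """Hypothetical miniature of the multiscale pyramid described in
    the abstract; early stages are temporally fine with narrow
    embeddings, deeper stages are coarse with wide embeddings."""
    def __init__(self, n_mels=128, patch=16, dims=(96, 192, 384), n_classes=35):
        super().__init__()
        self.patch = patch
        # Patchify: non-overlapping patches along time, flattened per patch.
        self.proj = nn.Linear(n_mels * patch, dims[0])
        self.stages = nn.ModuleList(
            TransformerStage(d, d2) for d, d2 in zip(dims[:-1], dims[1:]))
        self.head = nn.Linear(dims[-1], n_classes)

    def forward(self, spec):               # spec: (batch, n_mels, time)
        b, m, _ = spec.shape
        x = spec.unfold(2, self.patch, self.patch)       # (b, m, T, patch)
        x = x.permute(0, 2, 1, 3).reshape(b, -1, m * self.patch)
        x = self.proj(x)                   # (b, T, dims[0])
        for stage in self.stages:
            x = stage(x)                   # time halves, dim grows
        return self.head(x.mean(dim=1))    # pooled classification logits


logits = ToyMAST()(torch.randn(2, 128, 100))  # e.g. 1 s of 128-bin mels
```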
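Similarly, the abstract describes SS-MAST's objective only at a high level. Below is a hedged sketch of one plausible reading: an InfoNCE-style symmetric contrastive loss between pooled student and teacher embeddings, with patch-drop implemented as randomly discarding a fraction of patch embeddings. The drop ratio, temperature, and function names are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def patch_drop(patches, drop_ratio=0.5):
    """Patch-drop augmentation (as sketched in the abstract): randomly
    keep only a subset of patch embeddings. The 50% ratio is an
    illustrative choice, not the paper's value."""
    b, t, d = patches.shape
    keep = max(1, int(t * (1.0 - drop_ratio)))
    idx = torch.rand(b, t).argsort(dim=1)[:, :keep]     # random subset per clip
    return patches.gather(1, idx.unsqueeze(-1).expand(-1, -1, d))


def symmetric_contrastive_loss(student, teacher, temperature=0.1):
    """InfoNCE-style symmetric contrastive loss between pooled student
    and teacher clip embeddings: each student embedding should match
    its own teacher embedding against the rest of the batch, and
    vice versa (hence the symmetric average of both directions)."""
    s = F.normalize(student, dim=-1)
    z = F.normalize(teacher, dim=-1)
    logits = s @ z.t() / temperature                    # (b, b) similarities
    targets = torch.arange(s.size(0), device=s.device)  # positives: diagonal
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))


# student/teacher would be pooled outputs of the two encoders on
# patch-dropped views of the same clips, e.g. shape (batch, dim):
loss = symmetric_contrastive_loss(torch.randn(8, 384), torch.randn(8, 384))
```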