Paper Title


Towards Data-efficient Modeling for Wake Word Spotting

Authors

Yixin Gao, Yuriy Mishchenko, Anish Shah, Spyros Matsoukas, Shiv Vitaladevuni

Abstract


Wake word (WW) spotting is challenging in far-field settings, not only because of interference in signal transmission but also because of the complexity of acoustic environments. Traditional WW model training requires a large amount of in-domain WW-specific data with substantial human annotation; it is therefore hard to build WW models without such data. In this paper we present data-efficient solutions to address the challenges in WW modeling, such as domain mismatch, noisy conditions, and limited annotation. Our proposed system is composed of a multi-condition training pipeline with stratified data augmentation, which improves model robustness to a variety of predefined acoustic conditions, together with a semi-supervised learning pipeline to accurately extract WW and confusable examples from untranscribed speech corpora. Starting from only 10 hours of domain-mismatched WW audio, we are able to enlarge and enrich the training dataset by 20-100 times to capture the acoustic complexity. Our experiments on real user data show that the proposed solutions can achieve performance comparable to a production-grade model while saving 97% of the WW-specific data collection and 86% of the annotation bandwidth.
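The stratified augmentation idea from the abstract — replicating each WW utterance under every predefined acoustic condition so the enlarged dataset covers all of them evenly — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stratum names, SNR ranges, and the 4-strata split are hypothetical assumptions, and actual audio mixing (noise addition, reverberation) is represented only by metadata records.

```python
import random

# Hypothetical strata of predefined acoustic conditions; the paper's exact
# condition definitions are not given in the abstract.
STRATA = {
    "clean":        {"snr_db": (20, 30), "reverb": False},
    "noisy":        {"snr_db": (5, 15),  "reverb": False},
    "reverberant":  {"snr_db": (20, 30), "reverb": True},
    "noisy_reverb": {"snr_db": (5, 15),  "reverb": True},
}

def stratified_augment(utterances, copies_per_stratum):
    """Replicate each utterance under every stratum, drawing a random SNR
    within that stratum's range, so each predefined condition is equally
    represented in the enlarged training set."""
    augmented = []
    for utt in utterances:
        for name, cond in STRATA.items():
            for _ in range(copies_per_stratum):
                snr = random.uniform(*cond["snr_db"])
                augmented.append({
                    "source": utt,          # original WW recording
                    "stratum": name,        # acoustic condition label
                    "snr_db": round(snr, 1),
                    "reverb": cond["reverb"],
                })
    return augmented

# 1 utterance x 4 strata x 5 copies = 20x enlargement,
# matching the 20-100x growth range mentioned in the abstract.
aug = stratified_augment(["ww_0001.wav"], copies_per_stratum=5)
print(len(aug))  # → 20
```

With more strata or more copies per stratum, the same loop reaches the upper end of the 20-100x range; the stratification guarantees that rare conditions (e.g. low-SNR reverberant audio) are not underrepresented, which is the point of stratifying rather than sampling conditions at random.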
