Paper Title
MMNet: Muscle motion-guided network for micro-expression recognition
Paper Authors
Paper Abstract
Facial micro-expressions (MEs) are involuntary facial motions that reveal people's genuine feelings and play an important role in early intervention for mental illness, national security, and many human-computer interaction systems. However, existing micro-expression datasets are limited in size and usually pose challenges for training good classifiers. To model subtle facial muscle motions, we propose a robust micro-expression recognition (MER) framework, namely the muscle motion-guided network (MMNet). Specifically, a continuous attention (CA) block is introduced to focus on modeling local, subtle muscle motion patterns that carry little identity information, which differs from most previous methods that directly extract features, laden with identity information, from complete video frames. In addition, we design a position calibration (PC) module based on the vision transformer. By adding the face position embeddings generated by the PC module at the end of the two branches, the PC module injects position information into the facial muscle motion pattern features for MER. Extensive experiments on three public micro-expression datasets demonstrate that our approach outperforms state-of-the-art methods by a large margin.
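The abstract outlines a two-branch design: a motion branch that applies continuous attention to subtle muscle movements, and a ViT-based position calibration branch whose position embeddings are added to the motion features before classification. The PyTorch sketch below is a minimal illustration of that structure under stated assumptions: the module internals, the input convention (onset and apex frames whose difference approximates muscle motion), and all names, patch sizes, and dimensions are illustrative guesses, not the authors' actual implementation.

# Minimal sketch of the two-branch idea described in the abstract.
# All module names, internals, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class ContinuousAttentionBlock(nn.Module):
    """Hypothetical CA block: spatial attention over motion features,
    intended to emphasize local muscle-motion patterns."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feat = torch.relu(self.conv(x))
        return feat * self.attn(feat)  # re-weight by a spatial attention map

class PositionCalibration(nn.Module):
    """Hypothetical PC module: a tiny ViT-style encoder that turns a face
    frame into per-patch position features."""
    def __init__(self, dim=64, patch=16, img=128):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)  # patchify
        self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, frame):
        tokens = self.embed(frame).flatten(2).transpose(1, 2) + self.pos
        return self.encoder(tokens)  # (B, N, dim) position embeddings

class MMNetSketch(nn.Module):
    def __init__(self, dim=64, num_classes=3):
        super().__init__()
        self.motion_in = nn.Conv2d(3, dim, 16, stride=16)  # match token grid
        self.ca = ContinuousAttentionBlock(dim)
        self.pc = PositionCalibration(dim=dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, onset, apex):
        # Muscle-motion branch: frame difference as a crude motion proxy.
        motion = self.ca(self.motion_in(apex - onset))
        motion = motion.flatten(2).transpose(1, 2)  # (B, N, dim)
        # Add the PC branch's position embeddings at the end, as the
        # abstract describes, then classify the ME category.
        fused = motion + self.pc(onset)
        return self.head(fused.mean(dim=1))

A forward pass in this sketch would look like logits = MMNetSketch()(onset, apex) with onset and apex batches of shape (B, 3, 128, 128); the frame difference stands in for the muscle-motion cue that the CA block is meant to attend to, while the onset frame supplies the face layout for position calibration.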