Paper Title

HiCOMEX: Facial Action Unit Recognition Based on Hierarchy Intensity Distribution and COMEX Relation Learning

Paper Authors

Ziqiang Shi, Liu Liu, Zhongling Liu, Rujie Liu, Xiaoyu Mi, and Kentaro Murase

Paper Abstract

The detection of facial action units (AUs) has been widely studied owing to its wide-ranging applications. In this paper, we propose a novel framework for AU detection from a single input image by grasping the co-occurrence and mutual exclusion (COMEX) relations as well as the intensity distribution among AUs. Our algorithm uses facial landmarks to extract the features of local AUs. These features are fed into a bidirectional long short-term memory (BiLSTM) layer for learning the intensity distribution. Afterwards, the new AU features are passed successively through a self-attention encoding layer and a continuous-state modern Hopfield layer to learn the COMEX relationships. Our experiments on the challenging BP4D and DISFA benchmarks, without any external data or pre-trained models, yield F1-scores of 63.7% and 61.8% respectively, which shows that our proposed network can lead to performance improvements in the AU detection task.
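
As a rough illustration of the pipeline described in the abstract, below is a minimal PyTorch sketch, not the authors' implementation. The layer sizes, the stand-in linear map for the landmark-based local AU feature extractor, and the class name HiCOMEXSketch are all assumptions; the continuous-state modern Hopfield step is written out directly in its softmax-retrieval form rather than taken from any released code.

```python
# Minimal sketch of the HiCOMEX-style pipeline described in the abstract.
# All shapes and layer sizes are hypothetical; the local AU feature
# extractor is abstracted as a plain linear map.
import torch
import torch.nn as nn


class HiCOMEXSketch(nn.Module):
    def __init__(self, num_aus=12, feat_dim=64, hidden_dim=64, beta=2.0):
        super().__init__()
        # Stand-in for the landmark-based local AU feature extractor.
        self.local_feat = nn.Linear(feat_dim, hidden_dim)
        # BiLSTM over the sequence of per-AU features to model the
        # intensity distribution across AUs.
        self.bilstm = nn.LSTM(hidden_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # Self-attention encoding layer over the AU features.
        self.self_attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=4,
                                               batch_first=True)
        # Projections for a continuous modern Hopfield update
        # (softmax(beta * Q K^T) V), used here to model COMEX relations.
        self.q = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.k = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.v = nn.Linear(2 * hidden_dim, 2 * hidden_dim)
        self.beta = beta
        # Per-AU binary occurrence prediction.
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, local_au_feats):
        # local_au_feats: (batch, num_aus, feat_dim) per-AU landmark features.
        x = self.local_feat(local_au_feats)
        x, _ = self.bilstm(x)            # intensity-distribution modelling
        x, _ = self.self_attn(x, x, x)   # self-attention encoding
        # Continuous modern Hopfield retrieval step over the AU "patterns".
        attn = torch.softmax(
            self.beta * self.q(x) @ self.k(x).transpose(1, 2), dim=-1)
        x = attn @ self.v(x)             # COMEX relation learning
        return torch.sigmoid(self.classifier(x)).squeeze(-1)  # (batch, num_aus)


if __name__ == "__main__":
    model = HiCOMEXSketch()
    dummy = torch.randn(2, 12, 64)   # 2 images, 12 AUs, 64-dim local features
    print(model(dummy).shape)        # torch.Size([2, 12])
```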
