Paper Title


Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language

Paper Authors

Alaghband, Marie, Yousefi, Niloofar, Garibay, Ivan

Paper Abstract


Facial expressions are important parts of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language remain scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over $3000$ facial images extracted from the daily news and weather forecast of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset, we consider primary, secondary, and tertiary dyads of seven basic emotions: "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also consider a "None" class for images whose facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has wider applications in gesture recognition and Human Computer Interaction (HCI) systems.
