Paper Title
Leveraging Multimodal Behavioral Analytics for Automated Job Interview Performance Assessment and Feedback
Paper Authors
Paper Abstract
Behavioral cues play a significant role in human communication and cognitive perception. In most professional domains, employee recruitment policies are framed so that both professional skills and personality traits are adequately assessed. Hiring interviews are structured to comprehensively evaluate a potential employee's suitability for the position: their professional qualifications, interpersonal skills, and ability to perform in critical and stressful situations under time and resource constraints. Candidates therefore need to be aware of their positive and negative attributes, and mindful of behavioral cues that might adversely affect their success. We propose a multimodal analytical framework that analyzes a candidate in an interview scenario and provides feedback on predefined labels such as engagement, speaking rate, and eye contact. We perform a comprehensive analysis of the interviewee's facial expressions, speech, and prosodic information, using the video, audio, and text transcript obtained from the recorded interview. From these multimodal data sources we construct a composite representation, which is used to train machine learning classifiers to predict the class labels. This analysis is then used to give the interviewee constructive feedback on their behavioral cues and body language. Experimental validation shows that the proposed methodology achieves promising results.
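The pipeline described in the abstract, building a composite representation from per-modality features and training a classifier on it, can be illustrated with a minimal sketch. This is not the paper's implementation: the feature names, dimensions, labels, and the use of synthetic data and logistic regression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200  # number of recorded interview segments (synthetic)

# Hypothetical per-modality feature vectors; real systems would extract these
# from the video (facial expressions), audio (prosody), and text transcript.
facial = rng.normal(size=(n, 8))    # e.g. facial action unit intensities
prosodic = rng.normal(size=(n, 5))  # e.g. pitch/energy statistics
lexical = rng.normal(size=(n, 6))   # e.g. transcript-derived features

# Composite representation: feature-level (early) fusion by concatenation.
X = np.hstack([facial, prosodic, lexical])

# Synthetic binary label standing in for one predefined tag, e.g. "engaged".
y = (X[:, 0] + X[:, 8] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Per-label predictions like these would drive the feedback to the interviewee.
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```

In practice one classifier would be trained per feedback label (engagement, speaking rate, eye contact, etc.), with the composite feature vector shared across them.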