Paper Title

FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment

Paper Authors

Peña, Alejandro, Serna, Ignacio, Morales, Aythami, Fierrez, Julian

Paper Abstract

With the aim of studying how current multimodal AI algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data, this demonstrator experiments on an automated recruitment testbed based on Curriculum Vitae: FairCVtest. The presence of decision-making algorithms in society is rapidly increasing nowadays, while concerns about their transparency and the possibility of these algorithms becoming new sources of discrimination are arising. This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data and exploit it, in combination with data biases, in undesirable (unfair) ways. Additionally, the demo includes a new algorithm (SensitiveNets) for discrimination-aware learning, which eliminates sensitive information in our multimodal AI framework.
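To make the idea of "eliminating sensitive information from a learned representation" concrete, the sketch below shows one simple, generic approach: projecting the direction most correlated with a sensitive attribute out of the embedding space. This is only an illustration of the general concept, not the SensitiveNets algorithm itself (which uses a learned, adversarial formulation); the toy data, dimensions, and variable names here are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "multimodal" embeddings: 200 samples, 8 dimensions.
# A binary sensitive attribute (e.g. gender) leaks into dimension 0.
n, d = 200, 8
sensitive = rng.integers(0, 2, n)        # 0/1 group labels
emb = rng.normal(size=(n, d))
emb[:, 0] += 3.0 * sensitive             # inject the leakage

# Estimate the "sensitive direction" as the normalized difference
# of the two group means, then remove that component from every
# embedding via linear projection.
direction = emb[sensitive == 1].mean(0) - emb[sensitive == 0].mean(0)
direction /= np.linalg.norm(direction)
emb_clean = emb - np.outer(emb @ direction, direction)

# After projection, the group means coincide: a linear probe can no
# longer separate the groups along the removed direction.
gap_before = np.abs(
    emb[sensitive == 1].mean(0) - emb[sensitive == 0].mean(0)
).max()
gap_after = np.abs(
    emb_clean[sensitive == 1].mean(0) - emb_clean[sensitive == 0].mean(0)
).max()
print(gap_before, gap_after)
```

After the projection, the maximum per-dimension gap between group means drops from roughly the injected offset to numerical zero. Real discrimination-aware methods such as SensitiveNets go further, learning the suppression jointly with the task objective rather than relying on a single linear direction.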
