Title


Suspect AI: Vibraimage, Emotion Recognition Technology, and Algorithmic Opacity

Author

Wright, James

Abstract


Vibraimage is a digital system that quantifies a subject's mental and emotional state by analysing video footage of the movements of their head. Vibraimage is used by police, nuclear power station operators, airport security and psychiatrists in Russia, China, Japan and South Korea, and has been deployed at an Olympic Games, FIFA World Cup, and G7 Summit. Yet there is no reliable evidence that the technology is actually effective; indeed, many claims made about its effects seem unprovable. What exactly does vibraimage measure, and how has it acquired the power to penetrate the highest profile and most sensitive security infrastructure across Russia and Asia? I first trace the development of the emotion recognition industry, before examining attempts by vibraimage's developers and affiliates scientifically to legitimate the technology, concluding that the disciplining power and corporate value of vibraimage is generated through its very opacity, in contrast to increasing demands across the social sciences for transparency. I propose the term 'suspect AI' to describe the growing number of systems like vibraimage that algorithmically classify suspects / non-suspects, yet are themselves deeply suspect. Popularising this term may help resist such technologies' reductivist approaches to 'reading' -- and exerting authority over -- emotion, intentionality and agency.
