Paper Title
De-biasing facial detection system using VAE
Paper Authors
Paper Abstract
Bias in AI/ML-based systems is a ubiquitous problem, and bias in such systems can negatively impact society. There are many reasons a system may be biased: the bias can stem from the algorithm used for the problem, or from the dataset, in which some features are over-represented. In facial detection systems, bias caused by the dataset is the most common. Models sometimes learn only the features that are over-represented in the data and ignore rare features, which makes them biased toward the over-represented features. In real-world use, such biased systems are dangerous to society. The proposed approach uses a generative model, which is well suited to learning the underlying features (latent variables) of a dataset, and uses these learned features to reduce the threats posed by bias in the system. With the help of the algorithm, the bias present in the dataset can be removed. We then train models on the two datasets and compare the results.
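To make the latent-variable idea concrete, the sketch below shows one way such a VAE-based de-biasing scheme could look in PyTorch: a small VAE learns a latent representation of each face, and faces whose latent features fall in low-density regions receive higher sampling weight during training. This is a minimal illustrative sketch, not the paper's actual implementation; the names (FaceVAE, debias_weights), the 64x64 input size, and the histogram-based density estimate are all assumptions.

```python
# Illustrative sketch of latent-variable de-biasing; all names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np

class FaceVAE(nn.Module):
    """Minimal VAE: the encoder maps a face image to a latent mean/log-variance,
    and the decoder reconstructs the image from a sampled latent vector."""
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 2 * latent_dim),            # outputs [mu, logvar]
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        recon = self.decoder(z).view_as(x)
        return recon, mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

def debias_weights(mu, bins=10, alpha=0.01):
    """Give higher sampling probability to faces whose latent features are rare,
    so under-represented attributes are seen more often during training."""
    mu = mu.detach().cpu().numpy()
    weights = np.ones(mu.shape[0])
    for d in range(mu.shape[1]):
        hist, edges = np.histogram(mu[:, d], bins=bins, density=True)
        idx = np.clip(np.digitize(mu[:, d], edges[:-1]) - 1, 0, bins - 1)
        weights *= 1.0 / (hist[idx] + alpha)   # inverse of estimated latent density
    return weights / weights.sum()
```

In this sketch the weights returned by debias_weights would drive a weighted sampler over the training set, so that faces with rare latent features (e.g., under-represented skin tones or poses) are drawn more often, counteracting the over-representation described above.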