Paper Title
FairFuse: Interactive Visual Support for Fair Consensus Ranking
Paper Authors
Paper Abstract
Fair consensus building combines the preferences of multiple rankers into a single consensus ranking, while ensuring that any group defined by a protected attribute (such as race or gender) is not disadvantaged compared to other groups. Manually generating a fair consensus ranking is time-consuming and impractical -- even for a fairly small number of candidates. While algorithmic approaches for auditing and generating fair consensus rankings have been developed, these have not been operationalized in interactive systems. To bridge this gap, we introduce FairFuse, a visualization system for generating, analyzing, and auditing fair consensus rankings. We construct a data model that includes base rankings entered by rankers, augmented with measures of group fairness, and algorithms for generating consensus rankings with varying degrees of fairness. We design novel visualizations that encode these measures in a parallel-coordinates-style rank visualization, with interactions for generating and exploring fair consensus rankings. We describe use cases in which FairFuse supports a decision-maker in ranking scenarios where fairness is important, and discuss emerging challenges for future efforts supporting fairness-oriented rank analysis. Code and demo videos are available at https://osf.io/hd639/.
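The abstract does not spell out the specific consensus algorithm or group-fairness measures used in FairFuse, so the following is only a minimal illustrative sketch of the two ingredients it mentions: combining base rankings into a consensus (here approximated with a simple Borda count) and auditing a ranking with a group-fairness measure (here a hypothetical top-k representation ratio for a protected group). Function names, the group labels, and the example data are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: Borda-count consensus and a top-k parity measure.
# FairFuse's actual data model and fairness-aware algorithms are not given in
# the abstract; the names and formulas below are assumptions.

from collections import defaultdict


def borda_consensus(base_rankings):
    """Combine base rankings into one consensus ranking via Borda count.

    base_rankings: list of rankings, each a list of candidate ids ordered
    from best to worst.
    """
    scores = defaultdict(int)
    for ranking in base_rankings:
        n = len(ranking)
        for position, candidate in enumerate(ranking):
            scores[candidate] += n - position  # higher score = better rank
    # Sort candidates by descending Borda score.
    return sorted(scores, key=scores.get, reverse=True)


def topk_parity(ranking, groups, k):
    """Protected-group share in the top k relative to its overall share.

    ranking: list of candidate ids, best first.
    groups: dict mapping candidate id -> group label ("A" = protected here).
    Returns 1.0 when the protected group is proportionally represented in
    the top k; values below 1.0 indicate under-representation.
    """
    protected = {c for c, g in groups.items() if g == "A"}
    top = ranking[:k]
    share_top = sum(c in protected for c in top) / k
    share_all = len(protected) / len(ranking)
    return share_top / share_all if share_all else float("nan")


if __name__ == "__main__":
    # Three hypothetical rankers' base rankings over four candidates.
    base_rankings = [
        ["c1", "c2", "c3", "c4"],
        ["c2", "c1", "c4", "c3"],
        ["c1", "c4", "c2", "c3"],
    ]
    groups = {"c1": "B", "c2": "A", "c3": "A", "c4": "B"}
    consensus = borda_consensus(base_rankings)
    print("consensus:", consensus)
    print("top-2 parity:", topk_parity(consensus, groups, k=2))
```

In a system like the one the abstract describes, measures of this kind would be computed per base ranking and per candidate consensus ranking, then encoded visually so a decision-maker can trade off fairness against agreement with the rankers' inputs.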