Paper Title

Undesirable Biases in NLP: Addressing Challenges of Measurement

Authors

van der Wal, Oskar, Bachmann, Dominik, Leidinger, Alina, van Maanen, Leendert, Zuidema, Willem, Schulz, Katrin

Abstract

As Large Language Models and Natural Language Processing (NLP) technology rapidly develop and spread into daily life, it becomes crucial to anticipate how their use could harm people. One problem that has received a lot of attention in recent years is that this technology has displayed harmful biases, from generating derogatory stereotypes to producing disparate outcomes for different social groups. Although a lot of effort has been invested in assessing and mitigating these biases, our methods of measuring the biases of NLP models have serious problems and it is often unclear what they actually measure. In this paper, we provide an interdisciplinary approach to discussing the issue of NLP model bias by adopting the lens of psychometrics -- a field specialized in the measurement of concepts like bias that are not directly observable. In particular, we will explore two central notions from psychometrics, the construct validity and the reliability of measurement tools, and discuss how they can be applied in the context of measuring model bias. Our goal is to provide NLP practitioners with methodological tools for designing better bias measures, and to inspire them more generally to explore tools from psychometrics when working on bias measurement tools.
