Title
On Transferability of Bias Mitigation Effects in Language Model Fine-Tuning
Authors
Abstract
Fine-tuned language models have been shown to exhibit biases against protected groups in a host of modeling tasks such as text classification and coreference resolution. Previous works focus on detecting these biases, reducing bias in data representations, and using auxiliary training objectives to mitigate bias during fine-tuning. Although these techniques achieve bias reduction for the task and domain at hand, the effects of bias mitigation may not directly transfer to new tasks, requiring additional data collection, customized annotation of sensitive attributes, and re-evaluation of appropriate fairness metrics. We explore the feasibility and benefits of upstream bias mitigation (UBM) for reducing bias on downstream tasks, by first applying bias mitigation to an upstream model through fine-tuning and subsequently using it for downstream fine-tuning. We find, in extensive experiments across hate speech detection, toxicity detection, occupation prediction, and coreference resolution tasks over various bias factors, that the effects of UBM are indeed transferable to new downstream tasks or domains via fine-tuning, creating less biased downstream models than directly fine-tuning on the downstream task or transferring from a vanilla upstream model. Though challenges remain, we show that UBM promises more efficient and accessible bias mitigation in LM fine-tuning.
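The UBM procedure described above is two-stage: fine-tune an upstream model with a bias-mitigation objective, then reuse its weights as the starting point for ordinary downstream fine-tuning. A minimal sketch of that structure, using a toy linear "encoder" in place of an LM, synthetic data, and an illustrative demographic-parity-style penalty standing in for the paper's actual mitigation objective (all names, data, and hyperparameters here are assumptions for illustration, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 8, 4                        # input dim, "encoder" hidden dim
X_up = rng.normal(size=(200, D))   # upstream task inputs (synthetic)
g = rng.integers(0, 2, size=200)   # protected-group labels (0/1)
y_up = (X_up[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, W_enc, w_head, groups=None, lam=0.0, lr=0.1, steps=300):
    """Gradient-descent fine-tuning of a shared encoder + task head.
    If `groups` is given, add a squared demographic-parity-gap penalty
    (a stand-in for a bias-mitigation training objective)."""
    for _ in range(steps):
        hidden = X @ W_enc                  # (N, H) shared representation
        p = sigmoid(hidden @ w_head)        # (N,) predicted probabilities
        err = p - y                         # BCE gradient w.r.t. logits
        if groups is not None:
            gap = p[groups == 1].mean() - p[groups == 0].mean()
            s = p * (1 - p)                 # sigmoid derivative
            gmask = np.where(groups == 1, 1.0 / (groups == 1).sum(),
                             -1.0 / (groups == 0).sum())
            # gradient of lam * gap^2 w.r.t. logits (len(y) cancels the
            # averaging applied below)
            err = err + 2 * lam * gap * s * gmask * len(y)
        grad_head = hidden.T @ err / len(y)
        grad_enc = X.T @ np.outer(err, w_head) / len(y)
        w_head -= lr * grad_head
        W_enc -= lr * grad_enc
    return W_enc, w_head

# Stage 1: upstream fine-tuning WITH bias mitigation (the "UBM" step).
W_enc = rng.normal(scale=0.1, size=(D, H))
w_up = rng.normal(scale=0.1, size=H)
W_enc, w_up = train(X_up, y_up, W_enc, w_up, groups=g, lam=5.0)

# Stage 2: transfer the mitigated encoder and fine-tune a fresh head on
# a synthetic downstream task -- no group annotations needed here.
X_dn = rng.normal(size=(100, D))
y_dn = (X_dn[:, 1] > 0).astype(float)
w_dn = rng.normal(scale=0.1, size=H)
W_enc, w_dn = train(X_dn, y_dn, W_enc, w_dn)

p_dn = sigmoid(X_dn @ W_enc @ w_dn)
acc = ((p_dn > 0.5) == y_dn).mean()
print(f"downstream accuracy: {acc:.2f}")
```

The point of the sketch is only the control flow: the downstream stage starts from the mitigated upstream weights rather than from scratch, which is the transfer mechanism the abstract evaluates.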