Paper Title

Reinforcement Online Learning to Rank with Unbiased Reward Shaping

Paper Authors

Shengyao Zhuang, Zhihao Qiao, Guido Zuccon

Paper Abstract

Online learning to rank (OLTR) aims to learn a ranker directly from implicit feedback derived from users' interactions, such as clicks. Clicks, however, are a biased signal: specifically, top-ranked documents are likely to attract more clicks than documents further down the ranking (position bias). In this paper, we propose a novel learning algorithm for OLTR that uses reinforcement learning to optimize rankers: Reinforcement Online Learning to Rank (ROLTR). In ROLTR, the gradients of the ranker are estimated based on the rewards assigned to clicked and unclicked documents. In order to remove the users' position bias contained in the reward signals, we introduce unbiased reward shaping functions that exploit inverse propensity scoring for clicked and unclicked documents. The fact that our method can also model unclicked documents provides a further advantage, in that fewer user interactions are required to effectively train a ranker, thus providing gains in efficiency. Empirical evaluation on standard OLTR datasets shows that ROLTR achieves state-of-the-art performance and provides a significantly better user experience than other OLTR approaches. To facilitate the reproducibility of our experiments, we make all experiment code available at https://github.com/ielab/OLTR.
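To make the abstract's two ingredients more concrete, the minimal Python sketch below illustrates (a) assigning inverse-propensity-weighted rewards to clicked and unclicked documents and (b) a REINFORCE-style update of a ranking policy. This is not the authors' ROLTR algorithm: the function names (ips_shaped_rewards, policy_gradient_step), the specific reward values, the rank-based propensity model, and the linear softmax policy are all illustrative assumptions; see the paper and its repository for the actual method.

```python
# Minimal sketch of IPS-weighted reward shaping and a REINFORCE-style ranker
# update for OLTR. All reward values, propensities, and the policy form are
# assumptions for illustration, not the ROLTR algorithm itself.
import numpy as np

def ips_shaped_rewards(clicks, propensities, r_click=1.0, r_unclick=-1.0):
    """Assign de-biased rewards to clicked and unclicked documents.

    clicks:       binary array, 1 if the document at that rank was clicked.
    propensities: assumed examination probability per rank (position bias).
    """
    return np.where(clicks == 1,
                    r_click / propensities,           # up-weight clicks at low-propensity ranks
                    r_unclick / (1.0 - propensities))  # analogous weighting for non-clicks (assumed form)

def policy_gradient_step(theta, doc_features, ranked_idx, rewards, lr=0.01):
    """One REINFORCE-style update for a linear softmax ranking policy (simplified)."""
    scores = doc_features @ theta
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    grad = np.zeros_like(theta)
    for rank, d in enumerate(ranked_idx):
        # Reward-weighted grad of log pi(d) under a softmax over all candidates
        # (ignores the sequential, without-replacement nature of ranking).
        grad += rewards[rank] * (doc_features[d] - probs @ doc_features)
    return theta + lr * grad

# Toy usage: 5 candidate documents with 3 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
theta = np.zeros(3)
ranking = np.argsort(-(X @ theta))       # current ranking by score
clicks = np.array([1, 0, 0, 1, 0])       # simulated user clicks on that ranking
prop = 1.0 / np.arange(2, 7)             # assumed rank-based examination propensities
theta = policy_gradient_step(theta, X, ranking, ips_shaped_rewards(clicks, prop))
```

In this sketch, both clicked and unclicked documents contribute to the gradient, which mirrors the abstract's point that modelling unclicked documents extracts more signal from each user interaction.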
