Paper Title


Self-Guided Learning to Denoise for Robust Recommendation

Authors

Gao, Yunjun, Du, Yuntao, Hu, Yujia, Chen, Lu, Zhu, Xinjun, Fang, Ziquan, Zheng, Baihua

Abstract


The ubiquity of implicit feedback makes them the default choice to build modern recommender systems. Generally speaking, observed interactions are considered as positive samples, while unobserved interactions are considered as negative ones. However, implicit feedback is inherently noisy because of the ubiquitous presence of noisy-positive and noisy-negative interactions. Recently, some studies have noticed the importance of denoising implicit feedback for recommendations, and enhanced the robustness of recommendation models to some extent. Nonetheless, they typically fail to (1) capture the hard yet clean interactions for learning comprehensive user preference, and (2) provide a universal denoising solution that can be applied to various kinds of recommendation models. In this paper, we thoroughly investigate the memorization effect of recommendation models, and propose a new denoising paradigm, i.e., Self-Guided Denoising Learning (SGDL), which is able to collect memorized interactions at the early stage of the training (i.e., "noise-resistant" period), and leverage those data as denoising signals to guide the following training (i.e., "noise-sensitive" period) of the model in a meta-learning manner. Besides, our method can automatically switch its learning phase at the memorization point from memorization to self-guided learning, and select clean and informative memorized data via a novel adaptive denoising scheduler to improve the robustness. We incorporate SGDL with four representative recommendation models (i.e., NeuMF, CDAE, NGCF and LightGCN) and different loss functions (i.e., binary cross-entropy and BPR loss). The experimental results on three benchmark datasets demonstrate the effectiveness of SGDL over the state-of-the-art denoising methods like T-CE, IR, DeCA, and even state-of-the-art robust graph-based methods like SGCN and SGL.
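The abstract describes a two-phase training paradigm: collect interactions the model memorizes during the early "noise-resistant" period, then use them as denoising signals during the later "noise-sensitive" period. The toy NumPy sketch below illustrates only this high-level idea, not the paper's actual algorithm: the loss threshold (0.5), the streak length (3), and the fixed down-weight (0.1) are invented for the example, and SGDL's meta-learned weighting and adaptive denoising scheduler are simplified here to a hard reweighting of non-memorized interactions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 20, 30, 8

# Toy implicit feedback: labels come from a hidden low-rank preference
# structure, then 10% are flipped to simulate noisy positives/negatives.
true_P = rng.standard_normal((n_users, dim))
true_Q = rng.standard_normal((n_items, dim))
pairs = [(int(rng.integers(n_users)), int(rng.integers(n_items))) for _ in range(200)]
labels = [1 if true_P[u] @ true_Q[i] > 0 else 0 for u, i in pairs]
for k in rng.choice(len(pairs), size=20, replace=False):
    labels[k] = 1 - labels[k]

P = 0.1 * rng.standard_normal((n_users, dim))  # learned user embeddings
Q = 0.1 * rng.standard_normal((n_items, dim))  # learned item embeddings

def bce_loss(u, i, y):
    s = 1.0 / (1.0 + np.exp(-(P[u] @ Q[i])))
    return -(y * np.log(s + 1e-9) + (1 - y) * np.log(1 - s + 1e-9))

def sgd_step(u, i, y, lr=0.05, weight=1.0):
    s = 1.0 / (1.0 + np.exp(-(P[u] @ Q[i])))
    g = weight * (s - y)                # gradient of the weighted BCE loss
    P[u], Q[i] = P[u] - lr * g * Q[i], Q[i] - lr * g * P[u]

# Phase 1 ("noise-resistant" period): train normally and record which
# interactions keep a low loss for several consecutive epochs.
streak = np.zeros(len(pairs), dtype=int)
for epoch in range(15):
    for k, (u, i) in enumerate(pairs):
        streak[k] = streak[k] + 1 if bce_loss(u, i, labels[k]) < 0.5 else 0
        sgd_step(u, i, labels[k])
memorized = streak >= 3                 # treated as clean "memorized" data

# Phase 2 ("noise-sensitive" period): keep training, but down-weight
# interactions that were never memorized, treating them as likely noise.
weights = np.where(memorized, 1.0, 0.1)
for epoch in range(5):
    for k, (u, i) in enumerate(pairs):
        sgd_step(u, i, labels[k], weight=weights[k])

print(f"{memorized.sum()} of {len(pairs)} interactions memorized")
```

The same skeleton applies to any of the base models the paper plugs into (NeuMF, CDAE, NGCF, LightGCN): only the scoring function and loss change, while the memorization tracking and phase switch sit on top of training.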
