Paper Title

S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization

Paper Authors

Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen

Abstract

Recently, significant progress has been made in sequential recommendation with deep learning. Existing neural sequential recommendation models usually rely on the item prediction loss to learn model parameters or data representations. However, models trained with this loss are prone to suffer from the data sparsity problem. Since it overemphasizes the final performance, the association or fusion between context data and sequence data has not been well captured and utilized for sequential recommendation. To tackle this problem, we propose the model S^3-Rec, which stands for Self-Supervised learning for Sequential Recommendation, based on the self-attentive neural architecture. The main idea of our approach is to utilize the intrinsic data correlation to derive self-supervision signals and enhance the data representations via pre-training methods for improving sequential recommendation. For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence by utilizing the mutual information maximization (MIM) principle. MIM provides a unified way to characterize the correlation between different types of data, which is particularly suitable in our scenario. Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods, especially when only limited training data is available. Besides, we extend our self-supervised learning method to other recommendation models, which also improves their performance.
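MIM objectives of the kind described in the abstract are commonly realized with an InfoNCE-style contrastive loss, which lower-bounds the mutual information between two views of the data (e.g., an item and its attributes, or a sequence and its subsequence). The following is a minimal NumPy sketch of that general idea, not the authors' implementation; the function name, temperature value, and toy data are illustrative:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=1.0):
    """InfoNCE contrastive loss: each anchor's positive is the same-index
    row of `positives`; all other rows act as in-batch negatives."""
    # Normalize so the dot product is a cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Log-softmax over each row; the diagonal holds the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Minimizing this loss maximizes a lower bound on mutual information.
    return -np.mean(np.diag(log_probs))

# Toy check: truly correlated pairs should incur a lower loss than
# randomly paired representations.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_matched = info_nce_loss(x, x)                          # perfect pairs
loss_random = info_nce_loss(x, rng.normal(size=(8, 16)))    # unrelated pairs
print(loss_matched, loss_random)
```

In S^3-Rec's setting, the two inputs would come from different views produced by the self-attentive encoder (attribute vs. item, masked item vs. context, subsequence vs. sequence), giving the four pre-training objectives a single contrastive form.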
