Paper Title
CCL4Rec: Contrast over Contrastive Learning for Micro-video Recommendation
Paper Authors
Paper Abstract
Micro-video recommender systems suffer from the ubiquitous noise in users' behaviors, which can render the learned user representations indiscriminative and lead to trivial recommendations (e.g., popular items) or even odd ones far beyond users' interests. Contrastive learning is an emerging technique for learning discriminative representations via random data augmentations. However, because it neglects the noise in user behaviors and treats all augmented samples equally, the existing contrastive learning framework is insufficient for learning discriminative user representations in recommendation. To bridge this research gap, we propose the Contrast over Contrastive Learning framework for training recommender models, named CCL4Rec, which models the nuances of different augmented views by further contrasting augmented positives/negatives with adaptive pulling/pushing strengths, i.e., a contrast over (vanilla) contrastive learning. To accommodate these contrasts, we devise hardness-aware augmentations that track the importance of the behaviors being replaced in the query user and the relatedness of their substitutes, and thus determine the quality of the augmented positives/negatives. The hardness-aware augmentation also permits controllable contrastive learning, leading to performance gains and robust training. In this way, CCL4Rec captures the nuances of a given user's historical behaviors, explicitly shielding the learned user representation from the effects of noisy behaviors. We conduct extensive experiments on two micro-video recommendation benchmarks, which demonstrate that CCL4Rec with far fewer model parameters achieves performance comparable to existing state-of-the-art methods and improves training/inference speed by several orders of magnitude.
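The core idea of weighting augmented positives/negatives by their quality, rather than treating all views equally as in vanilla contrastive learning, can be illustrated with a weighted InfoNCE-style loss. The sketch below is a minimal illustration of that general idea only: the function name, the dot-product similarity, and the per-sample weighting scheme are assumptions for exposition, not CCL4Rec's actual formulation.

```python
import math

def weighted_info_nce(query, positives, negatives, pos_weights, neg_weights, tau=0.1):
    """Sketch of a contrastive loss with per-view hardness weights.

    Vanilla InfoNCE treats every augmented view equally; here each
    positive/negative similarity is scaled by a weight (hypothetically
    derived from augmentation hardness), so higher-quality augmentations
    pull/push the query representation more strongly.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Weighted, temperature-scaled exponentiated similarities.
    pos_terms = [w * math.exp(dot(query, p) / tau)
                 for p, w in zip(positives, pos_weights)]
    neg_terms = [w * math.exp(dot(query, n) / tau)
                 for n, w in zip(negatives, neg_weights)]

    # Negative log of the weighted positive mass over the total mass;
    # up-weighting a negative increases the loss, i.e., pushes harder.
    return -math.log(sum(pos_terms) / (sum(pos_terms) + sum(neg_terms)))
```

For example, doubling the weight of a negative view raises the loss for the same embeddings, mimicking a stronger "push" for harder or more reliable negatives.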