Paper Title

Continuous-Time User Preference Modelling for Temporal Sets Prediction

Paper Authors

Le Yu, Zihang Liu, Leilei Sun, Bowen Du, Chuanren Liu, Weifeng Lv

Paper Abstract

Given a sequence of sets, where each set has a timestamp and contains an arbitrary number of elements, temporal sets prediction aims to predict the elements in the subsequent set. Previous studies on temporal sets prediction mainly focus on the modelling of elements and implicitly represent each user's preference based on his/her interacted elements. However, user preferences are often continuously evolving, and this evolutionary trend cannot be fully captured with such an indirect learning paradigm. To this end, we propose a continuous-time user preference modelling framework for temporal sets prediction, which explicitly models the evolving preference of each user by maintaining a memory bank that stores the states of all users and elements. Specifically, we first construct a universal sequence by arranging all the user-set interactions in non-descending temporal order, and then chronologically learn from each user-set interaction. For each interaction, we continuously update the memories of the related user and elements based on their currently encoded messages and past memories. Moreover, we present a personalized user behavior learning module to discover user-specific characteristics based on each user's historical sequence, which aggregates the previously interacted elements from dual perspectives according to the user and elements. Finally, we develop a set-batch algorithm to improve model efficiency, which creates time-consistent batches in advance and achieves 3.5x and 3.0x average speedups in the training and evaluation process, respectively. Experiments on four real-world datasets demonstrate the superiority of our approach over state-of-the-art methods under both transductive and inductive settings. We also demonstrate the good interpretability of our method.
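The core idea in the abstract can be illustrated with a minimal sketch: a memory bank holds a state vector per user and per element, and interactions are processed in non-descending temporal order, with each interaction updating the related memories from an encoded message. All names, the toy encoder, and the blending update rule below are illustrative assumptions, not the paper's actual learned architecture.

```python
# Hypothetical sketch of the continuous-time memory update described in the
# abstract. The blending update and the toy encoder are stand-ins for the
# paper's learned message-encoding and memory-update functions.
from collections import defaultdict

DIM = 4  # illustrative memory dimensionality


class MemoryBank:
    """Stores an evolving state vector for every user and element."""

    def __init__(self, dim=DIM):
        self.user_mem = defaultdict(lambda: [0.0] * dim)
        self.elem_mem = defaultdict(lambda: [0.0] * dim)

    def update(self, user, elements, message, alpha=0.5):
        # Blend the past memory with the currently encoded message;
        # the real model would use a learned recurrent update instead.
        self.user_mem[user] = [
            (1 - alpha) * m + alpha * x
            for m, x in zip(self.user_mem[user], message)
        ]
        for e in elements:
            self.elem_mem[e] = [
                (1 - alpha) * m + alpha * x
                for m, x in zip(self.elem_mem[e], message)
            ]


def encode(timestamp, user, elements):
    # Toy message encoder: a constant vector scaled by set size.
    return [len(elements) / 10.0] * DIM


def process(interactions, bank):
    # Build the universal sequence: all user-set interactions sorted in
    # non-descending temporal order, then learned from chronologically.
    for t, user, elements in sorted(interactions, key=lambda x: x[0]):
        message = encode(t, user, elements)
        bank.update(user, elements, message)
```

A user who interacts twice ends up with a memory that reflects both interactions, with the more recent one weighted in by the update; elements shared across sets accumulate state the same way.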
