Paper Title
Momentum-based Gradient Methods in Multi-Objective Recommendation
Paper Authors
Abstract
Multi-objective gradient methods are becoming the standard for solving multi-objective problems. Among others, they show promising results in developing multi-objective recommender systems with both correlated and conflicting objectives. Classic multi-gradient descent usually relies on a combination of the gradients and does not compute the first and second moments of the gradients. This leads to brittle behavior and misses important areas of the solution space. In this work, we create a multi-objective, model-agnostic Adamize method that leverages the benefits of the Adam optimizer in single-objective problems. It corrects and stabilizes the gradients of every objective before calculating a common gradient descent vector that optimizes all the objectives simultaneously. We evaluate the benefits of Multi-objective Adamize on two multi-objective recommender systems and for three different objective combinations, both correlated and conflicting. We report significant improvements, measured with three different Pareto front metrics: hypervolume, coverage, and spacing. Finally, we show that the Adamized Pareto front strictly dominates the previous one on multiple objective pairs.
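The core idea described in the abstract — apply Adam's bias-corrected first- and second-moment estimates to each objective's gradient separately, then combine the adapted gradients into one common descent direction — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names (`adamize`, `init_state`) are made up here, and the combination step uses a plain average as a stand-in for whatever multi-gradient-descent rule the paper's systems use.

```python
import numpy as np

def init_state(n_objectives, dim):
    # Separate Adam moment buffers per objective, as the abstract describes
    # correcting every objective's gradient individually.
    return {"t": 0,
            "m": [np.zeros(dim) for _ in range(n_objectives)],
            "v": [np.zeros(dim) for _ in range(n_objectives)]}

def adamize(grads, state, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam-correct each objective's gradient, then combine them into a
    common descent direction (here: a simple average, an assumption)."""
    state["t"] += 1
    t = state["t"]
    adapted = []
    for i, g in enumerate(grads):
        m = beta1 * state["m"][i] + (1 - beta1) * g       # first moment
        v = beta2 * state["v"][i] + (1 - beta2) * g * g   # second moment
        state["m"][i], state["v"][i] = m, v
        m_hat = m / (1 - beta1 ** t)                      # bias correction
        v_hat = v / (1 - beta2 ** t)
        adapted.append(m_hat / (np.sqrt(v_hat) + eps))    # stabilized gradient
    return np.mean(adapted, axis=0)

# Usage sketch: descend two conflicting 1-D objectives,
# f1(x) = (x - 1)^2 and f2(x) = (x + 1)^2.
state = init_state(n_objectives=2, dim=1)
x = np.array([3.0])
for _ in range(200):
    grads = [2 * (x - 1), 2 * (x + 1)]
    x = x - 0.1 * adamize(grads, state)
```

Because Adam normalizes each objective's gradient by its second-moment estimate, no single objective's raw gradient magnitude can dominate the combined direction, which is the stabilizing effect the abstract attributes to the method.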