Paper Title

RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression

Paper Authors

Yu Gong, Greg Mori, Frederick Tung

Paper Abstract

Data imbalance, in which a plurality of the data samples come from a small proportion of labels, poses a challenge in training deep neural networks. Unlike classification, in regression the labels are continuous, potentially boundless, and form a natural ordering. These distinct features of regression call for new techniques that leverage the additional information encoded in label-space relationships. This paper presents the RankSim (ranking similarity) regularizer for deep imbalanced regression, which encodes an inductive bias that samples that are closer in label space should also be closer in feature space. In contrast to recent distribution smoothing based approaches, RankSim captures both nearby and distant relationships: for a given data sample, RankSim encourages the sorted list of its neighbors in label space to match the sorted list of its neighbors in feature space. RankSim is complementary to conventional imbalanced learning techniques, including re-weighting, two-stage training, and distribution smoothing, and lifts the state-of-the-art performance on three imbalanced regression benchmarks: IMDB-WIKI-DIR, AgeDB-DIR, and STS-B-DIR.
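
The core idea lends itself to a compact sketch. The following is an illustrative sketch only, not the authors' released implementation: it builds pairwise similarity matrices in feature space (cosine similarity, an assumed choice) and in label space (negative absolute label difference), ranks each sample's neighbors under both, and penalizes the mismatch between the two rankings with a mean-squared error. The function names `ranksim_sketch` and `rank_of` are hypothetical, and plain `argsort` blocks gradients, so an actual training setup would need a differentiable or blackbox-differentiated ranking step.

```python
# Illustrative sketch only (not the authors' released implementation):
# a RankSim-style penalty that compares, for each sample in a batch, the
# ranking of its neighbors in label space with the ranking of its
# neighbors in feature space.

import torch
import torch.nn.functional as F


def rank_of(scores: torch.Tensor) -> torch.Tensor:
    """Rank of each entry along the last dim (0 = most similar)."""
    # argsort of argsort turns raw scores into ranks.
    return scores.argsort(dim=-1, descending=True).argsort(dim=-1).float()


def ranksim_sketch(features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """
    features: (B, D) feature vectors, e.g. from the network's penultimate layer.
    labels:   (B,)   continuous regression targets.
    Returns a scalar penalty that is zero when every sample's neighbors are
    ordered identically in feature space and in label space.
    """
    # Pairwise similarity in feature space (cosine similarity, an assumed choice).
    feat_sim = F.cosine_similarity(features.unsqueeze(1), features.unsqueeze(0), dim=-1)
    # Pairwise similarity in label space (negative absolute label difference).
    label_sim = -(labels.unsqueeze(1) - labels.unsqueeze(0)).abs().float()

    # Penalize the mismatch between the two per-sample neighbor rankings.
    # NOTE: argsort is not differentiable, so a real training setup needs a
    # differentiable (or blackbox-differentiated) ranking operation here.
    return F.mse_loss(rank_of(feat_sim), rank_of(label_sim))
```

In use, the penalty would be added to the task loss, e.g. `loss = F.mse_loss(preds, labels) + gamma * ranksim_sketch(features, labels)`, where `gamma` is a hypothetical balancing weight.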
