Paper Title
A First Step Towards Distribution Invariant Regression Metrics
Authors
Abstract
Regression evaluation has been performed for decades. Some metrics have been identified as robust against shifting and scaling of the data, but accounting for different data distributions is much harder to address (the imbalance problem), even though it largely impacts the comparability of evaluations across different datasets. In classification, it has been stated repeatedly that performance metrics like the F-measure and accuracy are highly dependent on the class distribution and that comparisons between datasets with different distributions are impossible. We show that the same problem exists in regression. The distribution of odometry parameters in robotic applications can, for example, vary greatly between different recording sessions. Here, we need regression algorithms that either perform equally well for all function values or that focus on certain boundary regions such as high speed. This has to be reflected in the evaluation metric. We propose modifying established regression metrics by weighting with the inverse distribution of function values $Y$ or samples $X$, using an automatically tuned Gaussian kernel density estimator. We show on synthetic and robotic data, in reproducible experiments, that classical metrics behave incorrectly, whereas our new metrics are less sensitive to changing distributions, especially when correcting by the marginal distribution in $X$. Our new evaluation concept enables the comparison of results across datasets with different distributions. Furthermore, it can reveal overfitting of a regression algorithm to overrepresented target values. As an outcome, non-overfitting regression algorithms are more likely to be chosen due to our corrected metrics.
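To make the weighting idea concrete, the following is a minimal sketch, not the authors' implementation: a mean squared error where each sample is weighted by the inverse of the estimated density of its target value. The function names `kde_density` and `density_weighted_mse` are hypothetical, and Silverman's rule of thumb stands in for whatever automatic bandwidth tuning the paper uses.

```python
import numpy as np


def kde_density(samples, eval_points):
    """1-D Gaussian KDE; bandwidth via Silverman's rule of thumb (an assumption,
    standing in for the paper's automatically tuned bandwidth)."""
    samples = np.asarray(samples, dtype=float)
    eval_points = np.asarray(eval_points, dtype=float)
    n = len(samples)
    h = 1.06 * np.std(samples) * n ** (-1 / 5)
    # Evaluate a Gaussian kernel around every sample at every evaluation point.
    diffs = (eval_points[:, None] - samples[None, :]) / h
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2 * np.pi)
    return kernels.mean(axis=1) / h


def density_weighted_mse(y_true, y_pred):
    """MSE with per-sample weights proportional to 1/density of the target
    values, so overrepresented target regions no longer dominate the score."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    density = kde_density(y_true, y_true)
    weights = 1.0 / density
    weights /= weights.sum()  # normalize so the weights sum to 1
    return float(np.sum(weights * (y_true - y_pred) ** 2))
```

Weighting by the marginal distribution in $X$ instead of $Y$ would follow the same pattern, with the density estimated on the samples rather than the targets. Note that when every sample has the same squared error, the normalized weights leave the score equal to the plain MSE, as expected.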