Title
Quantifying Inherent Randomness in Machine Learning Algorithms
Authors
Abstract
Most machine learning (ML) algorithms have several stochastic elements, and their performance is affected by these sources of randomness. This paper uses an empirical study to systematically examine the effects of two such sources: randomness in model training and randomness in partitioning a dataset into training and test subsets. We quantify and compare the magnitude of the variation in predictive performance for the following ML algorithms: Random Forests (RFs), Gradient Boosting Machines (GBMs), and Feedforward Neural Networks (FFNNs). Across these algorithms, randomness in model training causes larger variation for FFNNs than for the tree-based methods. This is to be expected, as FFNNs have more stochastic elements as part of their model initialization and training. We also find that random splitting of a dataset leads to greater variation than the inherent randomness of model training. The variation from data splitting can be a major issue if the original dataset has considerable heterogeneity.

Keywords: Model Training, Reproducibility, Variation
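To make the two sources of randomness concrete, the sketch below (not the paper's actual experimental protocol; the synthetic dataset, seed ranges, and the choice of a Random Forest are illustrative assumptions) isolates each source by holding one seed fixed while varying the other, then compares the spread in test accuracy.

```python
# Minimal sketch of the two randomness sources, assuming scikit-learn
# and a synthetic dataset; the paper's actual datasets and settings differ.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

def test_accuracy(model_seed, split_seed):
    """Train an RF under the given seeds and return test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=split_seed)
    clf = RandomForestClassifier(random_state=model_seed).fit(X_tr, y_tr)
    return clf.score(X_te, y_te)

# Source 1: randomness in model training (data split held fixed).
train_scores = [test_accuracy(model_seed=s, split_seed=0) for s in range(30)]
# Source 2: randomness in the train/test split (model seed held fixed).
split_scores = [test_accuracy(model_seed=0, split_seed=s) for s in range(30)]

print("std from model training:", np.std(train_scores))
print("std from data splitting:", np.std(split_scores))
```

Under this setup, the standard deviation across seeds serves as the measure of performance variation; swapping in GradientBoostingClassifier or MLPClassifier would extend the same comparison to the other two algorithm families studied.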