Paper Title
MP-Boost: Minipatch Boosting via Adaptive Feature and Observation Sampling
Paper Authors
Paper Abstract
Boosting methods are among the best general-purpose and off-the-shelf machine learning approaches, gaining widespread popularity. In this paper, we seek to develop a boosting method that yields comparable accuracy to popular AdaBoost and gradient boosting methods, yet is faster computationally and whose solution is more interpretable. We achieve this by developing MP-Boost, an algorithm loosely based on AdaBoost that learns by adaptively selecting small subsets of instances and features, or what we term minipatches (MP), at each iteration. By sequentially learning on tiny subsets of the data, our approach is computationally faster than other classic boosting algorithms. Also, as it progresses, MP-Boost adaptively learns probability distributions on the features and instances that upweight the most important features and most challenging instances, hence adaptively selecting the most relevant minipatches for learning. These learned probability distributions also aid in the interpretation of our method. We empirically demonstrate the interpretability, comparative accuracy, and computational time of our approach on a variety of binary classification tasks.
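To make the minipatch idea concrete, below is a minimal Python sketch of the kind of loop the abstract describes: at each iteration a small subset of observations and features is drawn from adaptively updated sampling distributions, a weak learner is fit on that minipatch, and the distributions are updated to favor hard-to-classify instances and informative features. The function and parameter names (n_iter, m_obs, m_feat), the 1.5x upweighting factor, and the decision-stump weak learner are illustrative assumptions, not the exact MP-Boost procedure from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def minipatch_boost(X, y, n_iter=100, m_obs=50, m_feat=10, rng=None):
    """Sketch of minipatch boosting: fit an ensemble of weak learners,
    each trained on a tiny 'minipatch' (subset of observations x subset
    of features) drawn from adaptively updated sampling probabilities.
    Assumes binary labels y in {0, 1}."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    obs_prob = np.full(n, 1.0 / n)    # instance-sampling distribution
    feat_prob = np.full(p, 1.0 / p)   # feature-sampling distribution
    ensemble = []

    for _ in range(n_iter):
        # Sample a minipatch: m_obs instances and m_feat features.
        rows = rng.choice(n, size=m_obs, replace=False, p=obs_prob)
        cols = rng.choice(p, size=m_feat, replace=False, p=feat_prob)

        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X[np.ix_(rows, cols)], y[rows])
        ensemble.append((stump, cols))

        # Adaptive updates (illustrative choices): upweight misclassified
        # instances and features the weak learner found informative.
        pred = stump.predict(X[:, cols])
        obs_prob = obs_prob * np.where(pred != y, 1.5, 1.0)
        obs_prob /= obs_prob.sum()
        feat_prob[cols] += stump.feature_importances_
        feat_prob /= feat_prob.sum()

    return ensemble, obs_prob, feat_prob

def predict(ensemble, X):
    """Majority vote over the weak learners in the ensemble."""
    votes = np.array([stump.predict(X[:, cols]) for stump, cols in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

As the abstract notes, the returned obs_prob and feat_prob distributions are themselves useful for interpretation: after a call such as `ensemble, obs_prob, feat_prob = minipatch_boost(X_train, y_train)`, large entries of feat_prob point to influential features and large entries of obs_prob to challenging instances.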