Paper Title
Efficient Construction of Nonlinear Models over Normalized Data
Paper Authors
Paper Abstract
Machine Learning (ML) applications are proliferating in the enterprise. Relational data, which are prevalent in enterprise applications, are typically normalized; as a result, data have to be denormalized via primary/foreign-key joins to be provided as input to ML algorithms. In this paper, we study the implementation of popular nonlinear ML models, Gaussian Mixture Models (GMM) and Neural Networks (NN), over normalized data, addressing both binary and multi-way joins over normalized relations. For the case of GMM, we show how it is possible to decompose the computation in a systematic way, for both binary and multi-way joins, to construct mixture models. We demonstrate that by factorizing the computation, one can conduct the training of the models much faster compared to other applicable approaches, without any loss in accuracy. For the case of NN, we propose algorithms to train the network taking normalized data as input. Similarly, we present algorithms that can conduct the training of the network in a factorized way and offer performance advantages. The redundancy introduced by denormalization can be exploited for certain types of activation functions. However, we demonstrate that attempting to exploit this redundancy is helpful only up to a certain point; exploiting redundancy at the higher layers of the network will always result in increased costs and is not recommended. We present the results of a thorough experimental evaluation, varying several parameters of the input relations involved, and demonstrate that our proposals for the training of GMM and NN yield drastic performance improvements, typically starting at 100% and becoming increasingly higher as parameters of the underlying data vary, without any loss in accuracy.
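To make the factorization idea concrete, the following is a minimal sketch (not the paper's actual GMM or NN algorithms) of pushing a linear map through a primary/foreign-key join: because the dimension-table features repeat once per matching fact-table row after denormalization, their contribution can be computed once per dimension row and looked up via the foreign key instead of being recomputed on the materialized join. All table, feature, and weight names below are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy schema (names are illustrative, not from the paper):
#   fact table R: one row per example, features XR and a foreign key fk into S
#   dim  table S: one row per key, features XS
# Denormalizing repeats each S row once per matching R row.

rng = np.random.default_rng(0)
n_R, n_S, d_R, d_S, d_out = 1000, 10, 4, 6, 3

XR = rng.normal(size=(n_R, d_R))        # fact-table features
XS = rng.normal(size=(n_S, d_S))        # dimension-table features
fk = rng.integers(0, n_S, size=n_R)     # foreign key: R row -> S row

W_R = rng.normal(size=(d_R, d_out))     # weights on fact features
W_S = rng.normal(size=(d_S, d_out))     # weights on dimension features

# Naive approach: materialize the join, then apply the linear map.
X_joined = np.hstack([XR, XS[fk]])      # (n_R, d_R + d_S), repeated XS rows
naive = X_joined @ np.vstack([W_R, W_S])

# Factorized approach: compute XS @ W_S once per dimension row and
# gather the result through the foreign key, avoiding the repeated work.
factorized = XR @ W_R + (XS @ W_S)[fk]

assert np.allclose(naive, factorized)
```

The same reuse argument underlies why redundancy helps only for the early, linear part of the computation: once a nonlinearity mixes fact and dimension contributions, per-dimension-row results can no longer simply be shared across repeated rows.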