Paper Title


Learning with Optimized Random Features: Exponential Speedup by Quantum Machine Learning without Sparsity and Low-Rank Assumptions

Paper Authors

Hayata Yamasaki, Sathyawageeswar Subramanian, Sho Sonoda, Masato Koashi

Paper Abstract


Kernel methods augmented with random features give scalable algorithms for learning from big data. But it has been computationally hard to sample random features according to a probability distribution that is optimized for the data, so as to minimize the number of features required to achieve a desired learning accuracy. Here, we develop a quantum algorithm for sampling from this optimized distribution over features, in runtime $O(D)$ that is linear in the dimension $D$ of the input data. Our algorithm achieves an exponential speedup in $D$ compared to any known classical algorithm for this sampling task. In contrast to existing quantum machine learning algorithms, our algorithm circumvents sparsity and low-rank assumptions and thus has wide applicability. We also show that the sampled features can be combined with regression by stochastic gradient descent to perform the learning without canceling out our exponential speedup. Our algorithm based on sampling optimized random features leads to an accelerated framework for machine learning that takes advantage of quantum computers.
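For context, the classical pipeline the paper accelerates can be sketched as follows: approximate a kernel with random Fourier features, then fit the feature weights by stochastic gradient descent. This is a minimal illustration only; it samples frequencies from the Gaussian kernel's spectral density rather than from the data-optimized distribution, since that optimized sampling step is precisely what requires the quantum algorithm. The toy target function, bandwidth, learning rate, and feature count are all illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn f(x) = sin(2*pi*x) on D-dimensional inputs (D = 1 here).
D, M, N = 1, 200, 500                 # input dim, number of features, samples
X = rng.uniform(-1.0, 1.0, (N, D))
y = np.sin(2 * np.pi * X[:, 0])

# Random Fourier features for a Gaussian kernel: sample frequencies from the
# kernel's spectral density (a Gaussian). The paper's quantum algorithm would
# instead sample from a distribution optimized for the data.
W = rng.normal(0.0, 5.0, (M, D))      # frequencies; bandwidth 5.0 is an assumption
b = rng.uniform(0.0, 2 * np.pi, M)    # random phases

def features(x):
    """Map inputs to the M-dimensional random-feature space."""
    return np.sqrt(2.0 / M) * np.cos(x @ W.T + b)

# Linear regression on the sampled features via stochastic gradient descent.
theta = np.zeros(M)
lr = 0.1
for epoch in range(200):
    for i in rng.permutation(N):
        phi = features(X[i:i + 1])[0]
        err = phi @ theta - y[i]
        theta -= lr * err * phi       # SGD step on squared error

mse = np.mean((features(X) @ theta - y) ** 2)
```

With enough features, the random-feature regressor approximates kernel ridge regression at a fraction of the cost; the paper's contribution is making the *optimized* choice of feature distribution tractable, which shrinks the number of features M needed for a given accuracy.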
