Paper Title
On the learnability of quantum neural networks
Paper Authors
Paper Abstract
We consider the learnability of quantum neural networks (QNNs) built on the variational hybrid quantum-classical scheme, which remains largely unknown due to the non-convex optimization landscape, measurement errors, and the unavoidable gate errors introduced by noisy intermediate-scale quantum (NISQ) machines. Our contributions in this paper are multi-fold. First, we derive utility bounds for QNNs towards empirical risk minimization, and show that large gate noise, few quantum measurements, and deep circuits lead to poor utility bounds. This result also applies to variational quantum circuits with gradient-based classical optimization, and may be of independent interest. Second, we prove that a QNN can be treated as a differentially private (DP) model. Third, we show that if a concept class can be efficiently learned by a QNN, then it can also be efficiently learned by a QNN even with gate noise. This result implies that the learnability of QNNs is the same whether they are implemented on noiseless or noisy quantum machines. Last, we show that the quantum statistical query (QSQ) model can be efficiently simulated by noisy QNNs. Since the QSQ model can tackle certain tasks with a runtime speedup, our result suggests that the modified QNN implemented on NISQ devices retains the quantum advantage. Numerical simulations support the theoretical results.
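To make the setting concrete, the following is a minimal sketch (not from the paper) of the variational hybrid quantum-classical loop the abstract refers to: a single-qubit parameterized circuit whose expectation value is estimated from a finite number of measurement shots under a simple depolarizing gate-noise model, trained with the parameter-shift rule and classical gradient descent. The noise model, shot counts, and learning rate are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_expectation(theta, shots=200, noise=0.05):
    """Estimate <Z> of RY(theta)|0> from finite shots under depolarizing noise.

    Illustrative noise model: the ideal expectation cos(theta) is shrunk by
    a factor (1 - noise), mimicking a depolarizing gate error.
    """
    exact = (1 - noise) * np.cos(theta)
    p0 = (1 + exact) / 2                 # probability of measuring outcome |0>
    samples = rng.random(shots) < p0     # simulate `shots` projective measurements
    return 2 * samples.mean() - 1        # empirical estimate of <Z>

def parameter_shift_grad(theta, **kw):
    """Gradient of <Z> w.r.t. theta via the parameter-shift rule."""
    return 0.5 * (noisy_expectation(theta + np.pi / 2, **kw)
                  - noisy_expectation(theta - np.pi / 2, **kw))

# Classical optimizer minimizes the measured <Z>, driving the qubit
# toward |1> (where <Z> = -1, i.e. theta near pi).
theta, lr = 0.3, 0.4
for _ in range(60):
    theta -= lr * parameter_shift_grad(theta)

print(theta)  # converges near pi despite shot and gate noise
```

Fewer shots or stronger noise make the gradient estimates more variable, which is the regime where the paper's utility bounds degrade.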