Paper Title
Toward Trainability of Quantum Neural Networks
Paper Authors
Paper Abstract
Quantum Neural Networks (QNNs) have recently been proposed as generalizations of classical neural networks that aim to achieve quantum speed-ups. Despite their potential to outperform classical models, serious bottlenecks exist for training QNNs: QNNs with random structures have poor trainability due to gradients that vanish at a rate exponential in the number of input qubits. This vanishing gradient severely limits the applicability of large-scale QNNs. In this work, we provide a viable solution with theoretical guarantees. Specifically, we prove that QNNs with tree tensor and step controlled architectures have gradients that vanish no faster than polynomially in the qubit number. We numerically demonstrate QNNs with tree tensor and step controlled structures on binary classification tasks. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
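The vanishing-gradient (barren plateau) phenomenon the abstract describes can be observed in a small simulation. Below is a minimal NumPy sketch, not the authors' code: it builds a random hardware-efficient ansatz of RY rotations and CZ entanglers (an illustrative stand-in for the "random structures" mentioned above, not the tree tensor or step controlled architecture), estimates the gradient of the first angle with the parameter-shift rule, and shows that the sample variance of that gradient shrinks rapidly as the qubit count grows. All function names and circuit choices here are assumptions made for illustration.

```python
import numpy as np

def apply_ry(state, theta, qubit, n):
    """Apply an RY(theta) rotation to `qubit` of an n-qubit statevector (in place)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0)
    a, b = psi[0].copy(), psi[1].copy()
    psi[0] = c * a - s * b
    psi[1] = s * a + c * b
    return state

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z gate between qubits q1 and q2 (in place)."""
    psi = state.reshape([2] * n)
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    psi[tuple(idx)] *= -1
    return state

def expectation(thetas, n, layers):
    """<Z_0> after `layers` rounds of per-qubit RY rotations plus a CZ ladder."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    k = 0
    for _ in range(layers):
        for q in range(n):
            state = apply_ry(state, thetas[k], q, n)
            k += 1
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    # Probabilities split by the value of qubit 0 give <Z> on that qubit.
    probs = np.abs(state.reshape(2, -1)) ** 2
    return float(probs[0].sum() - probs[1].sum())

def grad_first_angle(thetas, n, layers):
    """Parameter-shift gradient of <Z_0> w.r.t. the first rotation angle."""
    shift = np.zeros_like(thetas)
    shift[0] = np.pi / 2
    return 0.5 * (expectation(thetas + shift, n, layers)
                  - expectation(thetas - shift, n, layers))

# Estimate the gradient variance over random parameter draws for growing qubit counts.
rng = np.random.default_rng(0)
layers, samples = 8, 200
variances = {}
for n in (2, 4, 6):
    grads = [grad_first_angle(rng.uniform(0, 2 * np.pi, layers * n), n, layers)
             for _ in range(samples)]
    variances[n] = float(np.var(grads))
    print(f"n={n}: Var[dE/dtheta_0] ~ {variances[n]:.2e}")
```

The decaying variance is the trainability obstacle the paper targets; the tree tensor and step controlled architectures it proposes are designed so that this variance shrinks only polynomially, rather than exponentially, in the qubit number.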