Paper Title
Neural Architecture Search on Efficient Transformers and Beyond
Paper Authors
Paper Abstract
Recently, numerous efficient Transformers have been proposed to reduce the quadratic computational complexity that the Softmax attention imposes on standard Transformers. However, most of them simply swap Softmax for an efficient attention mechanism without considering architectures customized for that efficient attention. In this paper, we argue that the handcrafted vanilla Transformer architecture, designed for Softmax attention, may not be suitable for efficient Transformers. To address this issue, we propose a new framework that finds optimal architectures for efficient Transformers with the neural architecture search (NAS) technique. The proposed method is validated on popular machine translation and image classification tasks. We observe that the optimal architecture of the efficient Transformer requires less computation than that of the standard Transformer, but its overall accuracy is lower. This indicates that Softmax attention and efficient attention each have their own strengths, but neither can balance accuracy and efficiency well on its own. This motivates us to mix the two types of attention to reduce the performance imbalance. Besides the search spaces commonly used in existing NAS approaches for Transformers, we propose a new search space that allows the NAS algorithm to automatically search the attention variants along with the architectures. Extensive experiments on WMT'14 En-De and CIFAR-10 demonstrate that our searched architecture maintains comparable accuracy to the standard Transformer while notably improving computational efficiency.
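The abstract contrasts quadratic Softmax attention with linear-complexity efficient attention, and proposes searching over both. The sketch below is not the paper's implementation; it is a minimal illustration of the complexity gap, assuming single-head tensors of shape (batch, length n, head dim d) and using an elu + 1 feature map as one common linearization.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    """Standard attention: the (n x n) score matrix makes the cost O(n^2 * d)."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    """One common 'efficient attention': replace Softmax with a feature map
    (elu + 1 here) and reassociate the matrix products, giving O(n * d^2) cost."""
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-2, -1) @ v                      # (d x d), independent of n
    normalizer = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps
    return (q @ kv) / normalizer

# Hypothetical usage: both attentions share one interface, so a NAS search space
# can treat the attention type as just another searchable choice per layer.
q = k = v = torch.randn(2, 1024, 64)                  # (batch, n, d)
out_soft = softmax_attention(q, k, v)
out_linear = linear_attention(q, k, v)
```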