Paper Title

Neural Arithmetic Units

Paper Authors

Andreas Madsen, Alexander Rosenberg Johansen

Paper Abstract

Neural networks can approximate complex functions, but they struggle to perform exact arithmetic operations over real numbers. The lack of inductive bias for arithmetic operations leaves neural networks without the underlying logic necessary to extrapolate on tasks such as addition, subtraction, and multiplication. We present two new neural network components: the Neural Addition Unit (NAU), which can learn exact addition and subtraction; and the Neural Multiplication Unit (NMU) that can multiply subsets of a vector. The NMU is, to our knowledge, the first arithmetic neural network component that can learn to multiply elements from a vector, when the hidden size is large. The two new components draw inspiration from a theoretical analysis of recently proposed arithmetic components. We find that careful initialization, restricting parameter space, and regularizing for sparsity is important when optimizing the NAU and NMU. Our proposed units NAU and NMU, compared with previous neural units, converge more consistently, have fewer parameters, learn faster, can converge for larger hidden sizes, obtain sparse and meaningful weights, and can extrapolate to negative and small values.
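To make the two components concrete, below is a minimal PyTorch sketch of their forward passes, assuming the unit definitions from the paper: the NAU is a linear map z = W x with each weight restricted to [-1, 1] (so weights of -1, 0, and 1 express subtraction, omission, and addition of an input element), and the NMU computes z_j = prod_i (W_ij * x_i + 1 - W_ij) with weights restricted to [0, 1] (a weight near 1 includes x_i in the product, a weight near 0 gates it out). The class names mirror the abstract, but the initialization values are placeholders and the sparsity regularization mentioned in the abstract is omitted; this is an illustrative sketch, not the authors' implementation.

import torch
import torch.nn as nn

class NAU(nn.Module):
    # Neural Addition Unit: z = W x with each weight clamped to [-1, 1],
    # so a converged weight of -1, 0, or 1 expresses exact subtraction,
    # omission, or addition of the corresponding input element.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.xavier_uniform_(self.W)  # placeholder init, not the paper's scheme

    def forward(self, x):
        return x @ torch.clamp(self.W, -1.0, 1.0).t()

class NMU(nn.Module):
    # Neural Multiplication Unit: z_j = prod_i (W_ij * x_i + 1 - W_ij) with
    # weights clamped to [0, 1]; W_ij near 1 includes x_i in the product,
    # W_ij near 0 replaces it with the multiplicative identity 1.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = nn.Parameter(torch.empty(out_features, in_features))
        nn.init.uniform_(self.W, 0.25, 0.75)  # placeholder init around 0.5

    def forward(self, x):
        W = torch.clamp(self.W, 0.0, 1.0)
        # broadcast (batch, 1, in) against (out, in), then take the product
        # over the input axis to get (batch, out)
        return (W * x.unsqueeze(1) + 1.0 - W).prod(dim=-1)

As a quick sanity check, an NMU row with weights (1, 1, 0) maps the input (2, 3, 5) to 2 * 3 * 1 = 6, and the same frozen weights keep returning the exact product for inputs far outside any training range, which is the extrapolation behaviour the abstract emphasizes.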
