Title
The Neuro-Symbolic Brain
Authors
Abstract
Neural networks promote a distributed representation with no clear place for symbols. Despite this, we propose that symbols can be manufactured simply by training sparse random noise as a self-sustaining attractor in a feedback spiking neural network. This way, we can generate many of what we shall call prime attractors, and the networks that support them act like registers holding a symbolic value, so we call them registers. Like symbols, prime attractors are atomic and devoid of any internal structure. Moreover, the winner-take-all mechanism naturally implemented by spiking neurons enables registers to recover a prime attractor from a noisy signal. Using this faculty, given two connected registers, an input one and an output one, a Hebbian rule can bind, in one shot, the attractor active on the output to the attractor active on the input. Thus, whenever an attractor is active on the input, it induces its bound attractor on the output; although the signal gets blurrier as more bindings accumulate, the winner-take-all filtering can still recover the bound prime attractor. However, the capacity remains limited. It is also possible to unbind in one shot, restoring the capacity taken by that binding. This mechanism serves as a basis for working memory, turning prime attractors into variables. We also use a random second-order network to amalgamate the prime attractors held by two registers, binding the prime attractor held by a third register to them in one shot, de facto implementing a hash table. Furthermore, we introduce the register switch box, composed of registers, to move the content of one register to another. Then, we use spiking neurons to build a toy symbolic computer based on the above components. The techniques used suggest ways to design extrapolating, reusable, sample-efficient deep learning networks at the cost of structural priors.
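For concreteness, the following is a minimal sketch of the one-shot Hebbian binding and winner-take-all recovery described in the abstract. It is not the paper's implementation: sparse binary rate vectors stand in for the spiking prime attractors, and all sizes, names, and the NumPy realization are illustrative assumptions.

# Minimal sketch: one-shot Hebbian binding between an input and an output
# register, with winner-take-all cleanup of the recalled prime attractor.
# Sparse binary vectors replace spiking attractors; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 1000      # neurons per register
K = 20        # active neurons per prime attractor (sparse random pattern)

def prime_attractor():
    """Sparse random binary pattern playing the role of a prime attractor."""
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = 1.0
    return x

def winner_take_all(signal, k=K):
    """Keep the k most active units and silence the rest (cleanup step)."""
    out = np.zeros_like(signal)
    out[np.argsort(signal)[-k:]] = 1.0
    return out

# Each register holds a repertoire of prime attractors.
inputs  = [prime_attractor() for _ in range(50)]
outputs = [prime_attractor() for _ in range(50)]

# One-shot Hebbian binding: add the outer product of the co-active pair.
W = np.zeros((N, N))
for x, y in zip(inputs, outputs):
    W += np.outer(y, x)

# Recall: the input attractor drives a blurry signal on the output register;
# winner-take-all recovers the bound prime attractor despite interference.
recalled = winner_take_all(W @ inputs[7])
print("recall overlap:", recalled @ outputs[7] / K)   # ~1.0 when recovered

# One-shot unbinding frees the capacity taken by that particular binding.
W -= np.outer(outputs[7], inputs[7])

In this toy setting the matching output pattern receives a contribution of roughly K from its own binding, while each unrelated binding adds only the small random overlap between sparse patterns, which is why the winner-take-all cleanup can still pick out the bound attractor, and why subtracting the same outer product restores the used capacity.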