Paper Title

Variable Binding for Sparse Distributed Representations: Theory and Applications

Authors

E. Paxon Frady, Denis Kleyko, Friedrich T. Sommer

Abstract

Symbolic reasoning and neural networks are often considered incompatible approaches. Connectionist models known as Vector Symbolic Architectures (VSAs) can potentially bridge this gap. However, classical VSAs and neural networks are still considered incompatible. VSAs encode symbols by dense pseudo-random vectors, where information is distributed throughout the entire neuron population. Neural networks encode features locally, often forming sparse vectors of neural activation. Following Rachkovskij (2001) and Laiho et al. (2015), we explore symbolic reasoning with sparse distributed representations. The core operations in VSAs are dyadic operations between vectors to express variable binding and the representation of sets. Thus, algebraic manipulations enable VSAs to represent and process data structures in a vector space of fixed dimensionality. Using techniques from compressed sensing, we first show that variable binding between dense vectors in VSAs is mathematically equivalent to tensor product binding between sparse vectors, an operation which increases dimensionality. This result implies that dimensionality-preserving binding for general sparse vectors must include a reduction of the tensor matrix into a single sparse vector. Two options for sparsity-preserving variable binding are investigated. One binding method for general sparse vectors extends earlier proposals to reduce the tensor product into a vector, such as circular convolution. The other method, block-wise circular convolution, is defined only for sparse block-codes. Our experiments reveal that variable binding for block-codes has ideal properties, whereas binding for general sparse vectors also works, but is lossy, similar to previous proposals. We demonstrate a VSA with sparse block-codes in example applications, cognitive reasoning and classification, and discuss its relevance for neuroscience and neural networks.
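To make the binding operations named in the abstract concrete, the following minimal NumPy sketch (illustrative only, not the authors' code; all function names, dimensions, and parameters are our own choices) implements HRR-style circular convolution on dense vectors, verifies that it equals a reduction of the tensor (outer) product over wrapped anti-diagonals, and implements block-wise circular convolution for sparse block-codes, checking that sparsity is preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256  # vector dimensionality (illustrative choice)

def circular_convolution(a, b):
    # HRR-style binding of two dense vectors, computed in the Fourier domain.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Dense pseudo-random vectors, as in classical VSAs.
x = rng.standard_normal(n) / np.sqrt(n)
y = rng.standard_normal(n) / np.sqrt(n)
z = circular_convolution(x, y)

# Circular convolution is one reduction of the tensor (outer) product:
# z[k] = sum_i x[i] * y[(k - i) % n], i.e. a sum over wrapped
# anti-diagonals of the n x n tensor matrix, collapsing it to length n.
T = np.outer(x, y)
rows = np.arange(n)[:, None]
cols = (np.arange(n)[None, :] - np.arange(n)[:, None]) % n
assert np.allclose(z, T[rows, cols].sum(axis=0))

def random_block_code(n_blocks, block_size):
    # Sparse block-code: exactly one active unit per block.
    v = np.zeros((n_blocks, block_size))
    v[np.arange(n_blocks), rng.integers(block_size, size=n_blocks)] = 1.0
    return v.reshape(-1)

def blockwise_circular_convolution(a, b, block_size):
    # Binding for sparse block-codes: circular convolution applied
    # independently within each block. For one-hot blocks this amounts
    # to modular addition of the active indices, so the result is
    # again a valid block-code, i.e. sparsity is preserved exactly.
    A = a.reshape(-1, block_size)
    B = b.reshape(-1, block_size)
    C = np.real(np.fft.ifft(np.fft.fft(A, axis=1) * np.fft.fft(B, axis=1), axis=1))
    return C.reshape(-1)

a = random_block_code(n_blocks=16, block_size=16)
b = random_block_code(n_blocks=16, block_size=16)
c = blockwise_circular_convolution(a, b, block_size=16)
assert np.allclose(c.reshape(16, 16).sum(axis=1), 1.0)  # still one active unit per block
```

The final assertion illustrates why the abstract calls binding for block-codes "ideal": the bound vector is itself a valid block-code with exactly one active unit per block, whereas reducing the tensor product of general sparse vectors (e.g. by plain circular convolution) smears activity across components and is therefore lossy.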
