Paper Title
Why do networks have inhibitory/negative connections?
Paper Authors
Paper Abstract
Why do brains have inhibitory connections? Why do deep networks have negative weights? We propose an answer from the perspective of representation capacity. We believe representing functions is the primary role of both (i) the brain in natural intelligence, and (ii) deep networks in artificial intelligence. Our answer to why there are inhibitory/negative weights is: to learn more functions. We prove that, in the absence of negative weights, neural networks with non-decreasing activation functions are not universal approximators. While this may be an intuitive result to some, to the best of our knowledge, there is no formal theory, in either machine learning or neuroscience, that demonstrates why negative weights are crucial in the context of representation capacity. Further, we provide insights on the geometric properties of the representation space that non-negative deep networks cannot represent. We expect these insights will yield a deeper understanding of more sophisticated inductive priors imposed on the distribution of weights that lead to more efficient biological and machine learning.
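To make the core obstruction concrete, here is a minimal sketch (ours, not from the paper) of the monotonicity argument behind the non-universality claim: with non-negative weights and a non-decreasing activation (ReLU is used here for illustration), every layer is coordinatewise non-decreasing in its input, so the whole network is too, and decreasing targets such as f(x) = -x are unrepresentable. The function name `nonneg_mlp` and the layer sizes are illustrative assumptions.

```python
# Minimal sketch: a ReLU MLP with all-non-negative weights is
# coordinatewise non-decreasing in its input, so it cannot be a
# universal approximator (e.g. it can never fit f(x) = -x).
import numpy as np

rng = np.random.default_rng(0)

def nonneg_mlp(x, weights, biases):
    """Forward pass of an MLP whose weight matrices are all non-negative.

    Biases may take any sign; monotonicity comes from the weights alone,
    because each pre-activation is a non-negative combination of the
    previous layer's outputs and ReLU is non-decreasing.
    """
    h = x
    for W, b in zip(weights, biases):
        h = np.maximum(0.0, h @ W + b)  # ReLU, a non-decreasing activation
    return h

# Random non-negative weights for a 1 -> 8 -> 8 -> 1 network.
dims = [1, 8, 8, 1]
weights = [np.abs(rng.normal(size=(m, n))) for m, n in zip(dims, dims[1:])]
biases = [rng.normal(size=n) for n in dims[1:]]

# Empirical check: the output never decreases as the input increases.
xs = np.linspace(-3.0, 3.0, 200).reshape(-1, 1)
ys = nonneg_mlp(xs, weights, biases).ravel()
assert np.all(np.diff(ys) >= 0), "output should be non-decreasing in x"
print("f(-3) =", ys[0], "<= f(3) =", ys[-1])
```

Since every such network is monotone non-decreasing in each input coordinate, the entire function class misses all decreasing (and more generally non-monotone) continuous targets, which is consistent with the paper's claim that non-negative networks with non-decreasing activations are not universal approximators.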