Paper Title
Multi-Scale Weight Sharing Network for Image Recognition
Paper Authors
Paper Abstract
In this paper, we explore the idea of weight sharing over multiple scales in convolutional networks. Inspired by traditional computer vision approaches, we share the weights of convolution kernels over different scales within the same layers of the network. Although multi-scale feature aggregation and feature sharing inside convolutional networks are common in practice, no previous work addresses convolutional weight sharing across scales. We evaluate our weight sharing scheme on two heterogeneous image recognition datasets: ImageNet (object recognition) and Places365-Standard (scene classification). With approximately 25% fewer parameters, our shared-weight ResNet models deliver performance comparable to the baseline ResNets. The shared-weight models are further validated via transfer learning experiments on four additional image recognition datasets: Caltech256 and Stanford 40 Actions (object-centric), and SUN397 and MIT Indoor67 (scene-centric). Experimental results demonstrate significant parameter redundancy in vanilla implementations of deeper networks, and also indicate that shifting towards a larger receptive field per parameter may improve future convolutional network architectures.
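The abstract does not specify the exact sharing mechanism, so the following is only a minimal PyTorch sketch of one plausible realization: a single convolution whose weights are reused at several input scales. The class name `MultiScaleSharedConv`, the choice of scales, and the downsample-convolve-upsample-sum aggregation are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSharedConv(nn.Module):
    """Hypothetical layer: one set of conv weights applied at several scales.

    The input is downsampled to each scale, convolved with the *same*
    kernel, upsampled back to the input resolution, and the per-scale
    responses are summed. This is one possible reading of "weight sharing
    over multiple scales", not the paper's confirmed architecture.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, scales=(1, 2, 4)):
        super().__init__()
        # A single conv whose weight tensor serves every scale, so adding
        # scales adds receptive field but no parameters.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.scales = scales

    def forward(self, x):
        h, w = x.shape[-2:]
        out = 0
        for s in self.scales:
            xs = x if s == 1 else F.avg_pool2d(x, kernel_size=s)
            ys = self.conv(xs)  # shared weights at every scale
            if s != 1:
                ys = F.interpolate(ys, size=(h, w), mode='bilinear',
                                   align_corners=False)
            out = out + ys
        return out

# Usage: a drop-in replacement for a 3x3 conv inside a ResNet block.
layer = MultiScaleSharedConv(64, 64, scales=(1, 2))
y = layer(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Because the same weight tensor is reused at every scale, this layer's parameter count equals that of a single convolution no matter how many scales are added; savings of this kind are what could, at the network level, account for a reduction on the order of the reported 25%.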