Paper Title


A Cross-Residual Learning for Image Recognition

Authors

Jun Liang, Songsen Yu, Huan Yang

Abstract


ResNets and their variants play an important role in many areas of image recognition. This paper presents another variant of ResNets, a family of cross-residual learning networks called C-ResNets, which require less computation and fewer parameters than ResNets. C-ResNets increase the information interaction between modules by densifying the jumpers (skip connections) and enriching their role. In addition, careful design of the jumpers and channel counts further reduces the resource consumption of C-ResNets and improves their classification performance. To test the effectiveness of C-ResNets, we use the same hyperparameter settings as fine-tuned ResNets in our experiments. We evaluate C-ResNets on the MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, Caltech-101, and SVHN datasets. Compared with fine-tuned ResNets, C-ResNets not only maintain classification performance but also greatly reduce the amount of computation and the number of parameters, substantially lowering the utilization of GPUs and GPU memory. C-ResNets are therefore a competitive and viable alternative to ResNets in a variety of scenarios. Code is available at https://github.com/liangjunhello/C-ResNet
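The abstract only sketches the idea of densified jumpers; the exact C-ResNet wiring is defined in the paper and the linked repository. As a purely illustrative, hypothetical sketch (all function names and the extra-skip rule below are assumptions, not the authors' formulation): a plain residual chain computes y_i = F_i(x_i) + x_i, while a densified-jumper chain additionally feeds an earlier activation across blocks, e.g. y_i = F_i(x_i) + x_i + x_{i-1}. A minimal NumPy toy version:

```python
import numpy as np

def block(x, w):
    """Toy stand-in for a conv block: ReLU(w * x)."""
    return np.maximum(w * x, 0.0)

def resnet_forward(x, weights):
    """Plain residual chain: each block adds back only its own input."""
    for w in weights:
        x = block(x, w) + x
    return x

def cross_resnet_forward(x, weights):
    """Hypothetical densified-jumper chain: each block also receives the
    previous block's input via an extra cross skip connection.
    (Illustration only; not the paper's exact C-ResNet design.)"""
    prev = x  # input to the previous block
    for w in weights:
        out = block(x, w) + x + prev
        prev, x = x, out
    return x

x = np.ones(4)
weights = [0.5, -1.0, 0.25]
print(resnet_forward(x, weights))        # plain residual output
print(cross_resnet_forward(x, weights))  # denser-skip output
```

The extra skip terms add no parameters, which is consistent with the abstract's claim that richer jumper connectivity, rather than more weights, drives the improvement.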
