Paper Title

Problem-dependent attention and effort in neural networks with applications to image resolution and model selection

Authors

Rohlfs, Chris

Abstract

This paper introduces two new ensemble-based methods to reduce the data and computation costs of image classification. They can be used with any set of classifiers and do not require additional training. In the first approach, data usage is reduced by only analyzing a full-sized image if the model has low confidence in classifying a low-resolution pixelated version. When applied to the best-performing classifiers considered here, data usage is reduced by 61.2% on MNIST, 69.6% on KMNIST, 56.3% on FashionMNIST, 84.6% on SVHN, 40.6% on ImageNet, and 27.6% on ImageNet-V2, all with a less than 5% reduction in accuracy. However, for CIFAR-10, the pixelated data are not particularly informative, and the ensemble approach increases data usage while reducing accuracy. In the second approach, compute costs are reduced by only using a complex model if a simpler model has low confidence in its classification. Computation cost is reduced by 82.1% on MNIST, 47.6% on KMNIST, 72.3% on FashionMNIST, 86.9% on SVHN, 89.2% on ImageNet, and 81.5% on ImageNet-V2, all with a less than 5% reduction in accuracy; for CIFAR-10 the corresponding improvement is smaller at 13.5%. When cost is no object, choosing the projection from the most confident model for each observation increases validation accuracy to 81.0% from 79.3% for ImageNet and to 69.4% from 67.5% for ImageNet-V2.
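Both approaches in the abstract share the same confidence gate: classify cheaply first (on a pixelated image, or with a simpler model) and escalate to the expensive path only when the cheap prediction is unconfident. The sketch below illustrates that gating logic in plain Python; it is a minimal illustration assuming confidence is the top class probability, and the model callables (`cheap_model`, `full_model`) are hypothetical stand-ins, not the paper's code.

```python
def confidence(probs):
    # Confidence here is taken as the top predicted class probability;
    # the paper's exact confidence measure may differ.
    return max(probs)

def argmax(probs):
    # Index of the highest-probability class.
    return max(range(len(probs)), key=probs.__getitem__)

def cascaded_predict(cheap_model, full_model, x, threshold=0.9):
    # Confidence-gated cascade: run the cheap classifier first
    # (e.g. a small model, or any model on a pixelated image) and
    # escalate to the full-cost path only when confidence is low.
    probs = cheap_model(x)
    if confidence(probs) >= threshold:
        return argmax(probs)       # accept the cheap prediction
    return argmax(full_model(x))   # escalate: pay the full cost

def most_confident_predict(models, x):
    # "Cost is no object" variant: query every model and keep the
    # prediction from whichever is most confident on this input.
    best_probs = max((m(x) for m in models), key=confidence)
    return argmax(best_probs)
```

The reported savings come from the fraction of inputs the cheap path handles alone; the accuracy/cost trade-off is controlled entirely by `threshold`.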
