Paper title
Fixing the train-test resolution discrepancy: FixEfficientNet
Paper authors
Paper abstract
This paper provides an extensive analysis of the performance of the EfficientNet image classifiers with several recent training procedures, in particular one that corrects the discrepancy between train and test images. The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters. For instance, our FixEfficientNet-B0 trained without additional training data achieves 79.3% top-1 accuracy on ImageNet with 5.3M parameters. This is a +0.5% absolute improvement over the Noisy Student EfficientNet-B0 trained with 300M unlabeled images. An EfficientNet-L2 pre-trained with weak supervision on 300M unlabeled images and further optimized with FixRes achieves 88.5% top-1 accuracy (top-5: 98.7%), which establishes a new state of the art for ImageNet with a single crop. These improvements are thoroughly evaluated with cleaner protocols than those usually employed for ImageNet; in particular, we show that our improvement persists in the experimental setting of ImageNet-v2, which is less prone to overfitting, and with ImageNet Real Labels. In both cases we also establish a new state of the art.
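The train-test discrepancy the abstract refers to arises because train-time random resized cropping makes objects appear larger, on average, than they do under the standard resize-and-center-crop test pipeline. The FixRes idea is to compensate by raising the test resolution (and then fine-tuning at that resolution). A minimal sketch of the resolution-adjustment intuition, with a hypothetical helper name and a simplified averaging formula that is not the paper's exact procedure:

```python
import math

def adjusted_test_resolution(train_res, crop_scale_low=0.08, crop_scale_high=1.0):
    """Hypothetical helper illustrating the FixRes intuition.

    RandomResizedCrop samples an area fraction sigma from
    [crop_scale_low, crop_scale_high]; cropping a fraction sigma of the
    image and resizing it back magnifies objects by roughly
    1 / sqrt(sigma). Raising the test resolution by the average
    magnification approximately matches apparent object sizes at test
    time. Here sigma is crudely approximated by the midpoint of its
    sampling range.
    """
    expected_sigma = (crop_scale_low + crop_scale_high) / 2
    return round(train_res / math.sqrt(expected_sigma))

# A network trained at 224x224 would be evaluated at a noticeably
# higher resolution under this heuristic.
print(adjusted_test_resolution(224))  # prints 305
```

In the actual FixRes procedure the final classifier layers (and batch-norm statistics) are also fine-tuned at the new test resolution, which is what "further optimized with FixRes" refers to in the abstract.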