Title

On the Transferability of Visual Features in Generalized Zero-Shot Learning

Authors

Paola Cascante-Bonilla, Leonid Karlinsky, James Seale Smith, Yanjun Qi, Vicente Ordonez

Abstract


Generalized Zero-Shot Learning (GZSL) aims to train a classifier that can generalize to unseen classes, using a set of attributes as auxiliary information, and the visual features extracted from a pre-trained convolutional neural network. While recent GZSL methods have explored various techniques to leverage the capacity of these features, there has been an extensive growth of representation learning techniques that remain under-explored. In this work, we investigate the utility of different GZSL methods when using different feature extractors, and examine how these models' pre-training objectives, datasets, and architecture design affect their feature representation ability. Our results indicate that 1) methods using generative components for GZSL provide more advantages when using recent feature extractors; 2) feature extractors pre-trained using self-supervised learning objectives and knowledge distillation provide better feature representations, increasing up to 15% performance when used with recent GZSL techniques; 3) specific feature extractors pre-trained with larger datasets do not necessarily boost the performance of GZSL methods. In addition, we investigate how GZSL methods fare against CLIP, a more recent multi-modal pre-trained model with strong zero-shot performance. We found that GZSL tasks still benefit from generative-based GZSL methods along with CLIP's internet-scale pre-training to achieve state-of-the-art performance in fine-grained datasets. We release a modular framework for analyzing representation learning issues in GZSL here: https://github.com/uvavision/TV-GZSL
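To make the setup concrete, the following is a minimal sketch of the attribute-based zero-shot classification pipeline the abstract describes: pre-extracted visual features are mapped into the class attribute space, and a test image is assigned to the class (seen or unseen) whose attribute vector is most similar. All names, dimensions, and the random projection here are illustrative assumptions, not the paper's actual method or code.

```python
# Hypothetical sketch of attribute-based zero-shot classification.
# A learned linear map W projects visual features into attribute space;
# prediction is a cosine-similarity nearest neighbor over per-class
# attribute vectors, which is what lets unseen classes be predicted.
import numpy as np

rng = np.random.default_rng(0)

n_feat, n_attr = 512, 85        # e.g. CNN feature dim, AwA2-style attribute dim
W = rng.normal(size=(n_attr, n_feat)) * 0.01  # stands in for a trained map

# One attribute vector per class; in GZSL this covers seen + unseen classes.
class_attrs = rng.random(size=(10, n_attr))   # 10 classes in this toy example

def predict(visual_feature: np.ndarray) -> int:
    """Return the index of the class whose attribute vector best matches."""
    a = W @ visual_feature                    # project feature -> attribute space
    a = a / np.linalg.norm(a)
    c = class_attrs / np.linalg.norm(class_attrs, axis=1, keepdims=True)
    return int(np.argmax(c @ a))              # cosine-similarity argmax

x = rng.normal(size=n_feat)                   # a pre-extracted visual feature
pred = predict(x)
```

The paper's point is that everything upstream of `W` matters: swapping the feature extractor (supervised CNN, self-supervised ViT, CLIP) changes how well this simple compatibility scheme, or a generative GZSL method, can separate unseen classes.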
