Paper Title

Generative Adversarial Zero-shot Learning via Knowledge Graphs

Paper Authors

Yuxia Geng, Jiaoyan Chen, Zhuo Chen, Zhiquan Ye, Zonggang Yuan, Yantao Jia, Huajun Chen

Paper Abstract

Zero-shot learning (ZSL) handles the prediction of unseen classes that have no labeled training data. Recently, generative methods such as Generative Adversarial Networks (GANs) have been widely investigated for ZSL due to their high accuracy and strong generalization capability. However, the side information of classes used so far is limited to text descriptions and attribute annotations, which fall short of capturing the semantics of the classes. In this paper, we introduce a new generative ZSL method named KG-GAN, which incorporates the rich semantics of a knowledge graph (KG) into GANs. Specifically, we build upon Graph Neural Networks and encode the KG from two views, the class view and the attribute view, to account for the different semantics in the KG. With well-learned semantic embeddings for each node (representing a visual category), we leverage GANs to synthesize compelling visual features for unseen classes. According to our evaluation on multiple image classification datasets, KG-GAN achieves better performance than the state-of-the-art baselines.
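
The following is a minimal, hypothetical sketch of the pipeline the abstract describes: a graph encoder produces a semantic embedding for each class node of the KG, and a conditional generator maps noise plus that embedding to a synthetic visual feature vector for an unseen class. The single-layer GCN propagation rule, all layer sizes, and the toy data are illustrative assumptions, not the authors' actual two-view encoder or architecture.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One propagation step h' = ReLU(A_hat @ h @ W) over a normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj_norm, h):
        # adj_norm: (N, N) normalized adjacency of the KG; h: (N, in_dim) node features
        return torch.relu(adj_norm @ self.linear(h))

class FeatureGenerator(nn.Module):
    """Conditional generator: maps [noise ; class embedding] to a visual feature vector."""
    def __init__(self, noise_dim, embed_dim, feat_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + embed_dim, 1024),
            nn.LeakyReLU(0.2),
            nn.Linear(1024, feat_dim),
            nn.ReLU(),  # CNN feature activations are non-negative
        )

    def forward(self, noise, class_embed):
        return self.net(torch.cat([noise, class_embed], dim=-1))

# Toy usage with random data (all sizes are hypothetical).
num_nodes, in_dim, embed_dim = 10, 32, 16
noise_dim, feat_dim = 64, 2048

adj_norm = torch.eye(num_nodes)                # placeholder normalized adjacency of the KG
node_feats = torch.randn(num_nodes, in_dim)    # initial node features, e.g. word vectors

encoder = SimpleGCNLayer(in_dim, embed_dim)
generator = FeatureGenerator(noise_dim, embed_dim, feat_dim)

class_embeds = encoder(adj_norm, node_feats)   # one semantic embedding per class node
z = torch.randn(4, noise_dim)                  # one noise vector per synthetic sample
unseen_embed = class_embeds[:1].expand(4, -1)  # embedding of one (unseen) class, repeated
fake_feats = generator(z, unseen_embed)        # synthetic visual features for that class
print(fake_feats.shape)                        # torch.Size([4, 2048])
```

A full GAN-based ZSL pipeline would also include a discriminator trained against real visual features of seen classes and a classifier trained on the synthesized unseen-class features; those components are omitted from this sketch.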
