Title


ViTPose: Simple Vision Transformer Baselines for Human Pose Estimation

Authors

Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao

Abstract


Although no specific domain knowledge is considered in the design, plain vision transformers have shown excellent performance in visual recognition tasks. However, little effort has been made to reveal the potential of such simple structures for pose estimation tasks. In this paper, we show the surprisingly good capabilities of plain vision transformers for pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model called ViTPose. Specifically, ViTPose employs plain and non-hierarchical vision transformers as backbones to extract features for a given person instance, and a lightweight decoder for pose estimation. It can be scaled up from 100M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of transformers, setting a new Pareto front between throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, pre-training and finetuning strategies, as well as dealing with multiple pose tasks. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our basic ViTPose model outperforms representative methods on the challenging MS COCO Keypoint Detection benchmark, while the largest model sets a new state-of-the-art. The code and models are available at https://github.com/ViTAE-Transformer/ViTPose.
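The pipeline the abstract describes, a plain ViT backbone that turns a cropped person image into a grid of patch features, followed by a lightweight decoder that upsamples them into per-keypoint heatmaps, can be sketched at the shape level as below. This is a minimal illustration with dummy tensors, not the authors' implementation; the input resolution (256x192), patch size (16), ViT-Base embedding dimension (768), the two 2x upsampling steps, and the 17 COCO keypoints are all assumptions about a typical top-down configuration.

```python
import numpy as np

# Shape-level sketch (assumed configuration, not the released ViTPose code):
# backbone: image -> non-hierarchical (H/16, W/16, C) patch-feature grid
# decoder:  features -> (H/4, W/4, K) keypoint heatmaps via two 2x upsamplings

IMG_H, IMG_W = 256, 192   # common top-down crop size (assumption)
PATCH = 16                # ViT patch size (assumption)
EMBED = 768               # ViT-Base embedding dimension (assumption)
KPTS = 17                 # COCO keypoint count

def backbone(image):
    """Stand-in for the plain ViT backbone: returns a dummy feature grid
    with one EMBED-dim vector per 16x16 patch."""
    h, w = image.shape[0] // PATCH, image.shape[1] // PATCH
    return np.zeros((h, w, EMBED))

def decoder(feats):
    """Stand-in for the lightweight decoder: two 2x upsamplings followed by
    a 1x1 projection to one heatmap per keypoint (here just a shape change)."""
    up = np.repeat(np.repeat(feats, 2, axis=0), 2, axis=1)  # 2x
    up = np.repeat(np.repeat(up, 2, axis=0), 2, axis=1)     # 4x total
    return np.zeros((up.shape[0], up.shape[1], KPTS))

image = np.zeros((IMG_H, IMG_W, 3))
heatmaps = decoder(backbone(image))
print(heatmaps.shape)  # (64, 48, 17)
```

The point of the sketch is the abstract's simplicity claim: because the backbone is non-hierarchical, the decoder only has to upsample a single feature map, with no multi-scale fusion.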
