SHViT: Single-Head Vision Transformer with Memory Efficient Macro Design
CoRR (2024)
Abstract
Recently, efficient Vision Transformers have shown great performance with low
latency on resource-constrained devices. Conventionally, they use 4x4 patch
embeddings and a 4-stage structure at the macro level, while utilizing
sophisticated attention with multi-head configuration at the micro level. This
paper aims to address computational redundancy at all design levels in a
memory-efficient manner. We discover that using a larger-stride patchify stem not
only reduces memory access costs but also achieves competitive performance by
leveraging token representations with reduced spatial redundancy from the early
stages. Furthermore, our preliminary analyses suggest that attention layers in
the early stages can be substituted with convolutions, and several attention
heads in the latter stages are computationally redundant. To handle this, we
introduce a single-head attention module that inherently prevents head
redundancy and simultaneously boosts accuracy by combining global and local
information in parallel. Building upon our solutions, we introduce SHViT, a
Single-Head Vision Transformer that obtains the state-of-the-art speed-accuracy
tradeoff. For example, on ImageNet-1k, our SHViT-S4 is 3.3x, 8.1x, and 2.4x
faster than MobileViTv2 x1.0 on GPU, CPU, and iPhone 12 mobile device,
respectively, while being 1.3% more accurate. For object detection and instance
segmentation on MS COCO using a Mask-RCNN head, our model achieves performance
comparable to FastViT-SA12 while exhibiting 3.8x and 2.0x lower backbone
latency on GPU and mobile device, respectively.
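To make the abstract's two main ideas concrete, here is a minimal PyTorch sketch, not the authors' released implementation. The module name SingleHeadAttention, the attn_ratio and qk_dim values, and the use of a single stride-16 convolution as the "larger-stride patchify stem" are illustrative assumptions; the sketch shows single-head attention applied to a fraction of the channels (global information) recombined with the untouched remaining channels (local information).

import torch
import torch.nn as nn

class SingleHeadAttention(nn.Module):
    # Single-head self-attention over a fraction of the channels (global
    # branch); the remaining channels bypass attention (local branch), and
    # both are recombined with a 1x1 projection. attn_ratio and qk_dim are
    # assumed values for illustration, not the paper's configuration.
    def __init__(self, dim, attn_ratio=0.25, qk_dim=16):
        super().__init__()
        self.attn_dim = int(dim * attn_ratio)  # channels routed through attention
        self.rest_dim = dim - self.attn_dim    # channels passed through unchanged
        self.qk_dim = qk_dim
        self.scale = qk_dim ** -0.5
        # Fused Q/K/V projection, applied only to the attended channel slice.
        self.qkv = nn.Conv2d(self.attn_dim, 2 * qk_dim + self.attn_dim, kernel_size=1)
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x):
        B, C, H, W = x.shape
        x_attn, x_pass = torch.split(x, [self.attn_dim, self.rest_dim], dim=1)
        q, k, v = self.qkv(x_attn).split([self.qk_dim, self.qk_dim, self.attn_dim], dim=1)
        q, k, v = q.flatten(2), k.flatten(2), v.flatten(2)   # (B, c, N), N = H*W
        attn = (q.transpose(1, 2) @ k) * self.scale          # (B, N, N), one head only
        attn = attn.softmax(dim=-1)
        out = (v @ attn.transpose(1, 2)).reshape(B, self.attn_dim, H, W)
        return self.proj(torch.cat([out, x_pass], dim=1))

# A larger-stride patchify stem, sketched here as one stride-16 conv: a
# 224x224 image becomes a 14x14 token grid, i.e. 16x fewer tokens than a
# conventional 4x4 stem produces at the first stage.
stem = nn.Conv2d(3, 256, kernel_size=16, stride=16)

x = stem(torch.randn(1, 3, 224, 224))  # (1, 256, 14, 14)
y = SingleHeadAttention(256)(x)
print(y.shape)                          # torch.Size([1, 256, 14, 14])

Because only one head operates on a slice of the channels, the module avoids the per-head projections and memory traffic of multi-head attention while the bypassed channels preserve local detail, matching the redundancy argument made above.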