SpineNet: Learning Scale-Permuted Backbone for Recognition and Localization

CVPR 2020

Abstract
Convolutional neural networks typically encode an input image into a series of intermediate features with decreasing resolutions. While this structure is suited to classification tasks, it does not perform well for tasks requiring simultaneous recognition and localization (e.g., object detection). Encoder-decoder architectures have been proposed to resolve this by applying a decoder network on top of a backbone model designed for classification tasks. In this paper, we argue that the encoder-decoder architecture is ineffective at generating strong multi-scale features because of the scale-decreased backbone. We propose SpineNet, a backbone with scale-permuted intermediate features and cross-scale connections that is learned on an object detection task by Neural Architecture Search. SpineNet achieves state-of-the-art one-stage object detection performance on COCO with 60% less computation, and outperforms ResNet-FPN counterparts by 6% AP. The SpineNet architecture can also transfer to classification tasks, achieving a 6% top-1 accuracy improvement on the challenging iNaturalist fine-grained dataset.
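As a rough, self-contained sketch of the idea described in the abstract (not the authors' released implementation or the learned SpineNet architecture), the snippet below builds a toy scale-permuted backbone: the block specification list (feature level, parent blocks) is hypothetical, and each cross-scale connection is modeled as nearest-neighbor resampling of the parent features to the block's scale followed by summation.

```python
# Toy scale-permuted backbone sketch (hypothetical block specs, not the NAS-learned SpineNet).
import numpy as np

def resample(feat, target_hw):
    """Nearest-neighbor resize of a (H, W, C) feature map to target_hw."""
    h, w, _ = feat.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return feat[rows][:, cols]

def merge_block(parents, target_hw):
    """Cross-scale connection: resample each parent to the block's scale and sum."""
    return sum(resample(p, target_hw) for p in parents)

# (output feature level, indices of parent blocks); levels alternate between
# coarse and fine instead of monotonically decreasing as in a classification backbone.
BLOCK_SPECS = [
    (2, []),        # stem output at level 2 (stride 4)
    (4, [0]),
    (3, [0, 1]),
    (5, [1, 2]),
    (4, [2, 3]),
]

def build_features(image_hw=(256, 256), channels=8):
    feats = []
    for level, parent_ids in BLOCK_SPECS:
        hw = (image_hw[0] >> level, image_hw[1] >> level)
        if not parent_ids:  # stem block: placeholder feature map
            feats.append(np.ones(hw + (channels,)))
        else:
            feats.append(merge_block([feats[i] for i in parent_ids], hw))
    return feats

if __name__ == "__main__":
    for i, f in enumerate(build_features()):
        print(f"block {i}: shape {f.shape}")
```

In the actual paper, the ordering of blocks and the choice of parent connections are not fixed by hand as above but searched by NAS on the detection task; the sketch only illustrates how a permuted block order with cross-scale resampling differs from a scale-decreased backbone.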
Keywords
convolutional neural networks, input image, decreasing resolutions, classification tasks, simultaneous recognition, simultaneous localization, encoder-decoder architecture, decoder network, backbone model, multi-scale features, scale-decreased backbone, scale-permuted intermediate features, cross-scale connections, neural architecture search, ResNet-FPN models, FLOPs, SpineNet-190, single-model object detection, scale-permuted backbone learning, test-time augmentation