Can Weight Sharing Outperform Random Architecture Search? An Investigation With TuNAS

CVPR (2020)

Cited by 145 | Views 486
Abstract
Efficient Neural Architecture Search methods based on weight sharing have shown promise in democratizing Neural Architecture Search for computer vision models. There is, however, an ongoing debate over whether these efficient methods are significantly better than random search. Here we perform a thorough comparison between efficient and random search methods on a family of progressively larger and more challenging search spaces for image classification and detection on ImageNet and COCO. While the efficacies of both methods are problem-dependent, our experiments demonstrate that there are large, realistic tasks where efficient search methods can provide substantial gains over random search. In addition, we propose and evaluate techniques which improve the quality of searched architectures and reduce the need for manual hyper-parameter tuning. Source code and experiment data are available at https://github.com/google-research/google-research/tree/master/tunas
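To illustrate the contrast the abstract draws, the toy sketch below (not the TuNAS algorithm; all names and the scoring setup are invented for illustration) compares plain random search, which evaluates each sampled architecture independently, against a weight-sharing-style search, where a single shared score table accumulates evidence from every sampled architecture:

```python
import random

# Toy search space: 3 layers, each choosing one of 4 candidate ops.
# The hidden "true" quality of an architecture is the sum of per-op
# scores, which the searcher can only observe through noisy rewards.
random.seed(0)
NUM_LAYERS, NUM_OPS = 3, 4
true_score = [[random.random() for _ in range(NUM_OPS)]
              for _ in range(NUM_LAYERS)]

def evaluate(arch):
    """Noisy reward for an architecture (a tuple of op indices per layer)."""
    return sum(true_score[l][op] for l, op in enumerate(arch)) \
        + random.gauss(0, 0.1)

def random_search(budget):
    """Baseline: sample architectures independently, keep the best seen."""
    best, best_r = None, float("-inf")
    for _ in range(budget):
        arch = tuple(random.randrange(NUM_OPS) for _ in range(NUM_LAYERS))
        r = evaluate(arch)
        if r > best_r:
            best, best_r = arch, r
    return best

def shared_search(budget, lr=0.1):
    """Weight-sharing flavour: one shared per-op score table is updated by
    every sampled architecture, so evidence accumulates across samples
    instead of being discarded after each evaluation."""
    shared = [[0.0] * NUM_OPS for _ in range(NUM_LAYERS)]
    for _ in range(budget):
        arch = tuple(random.randrange(NUM_OPS) for _ in range(NUM_LAYERS))
        r = evaluate(arch)
        for l, op in enumerate(arch):
            shared[l][op] += lr * (r - shared[l][op])  # running estimate
    # Final architecture: the best-scoring op in each layer.
    return tuple(max(range(NUM_OPS), key=lambda o, l=l: shared[l][o])
                 for l in range(NUM_LAYERS))

print("random search pick :", random_search(200))
print("shared search pick :", shared_search(200))
```

Under the same evaluation budget, the shared table reuses information across samples, which is the intuition behind why weight-sharing methods can beat random search on large spaces; the paper's experiments test whether this advantage holds at realistic scale.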
Keywords
TuNAS,manual hyper-parameter tuning,COCO,ImageNet,random architecture search,efficient neural architecture search methods,progressively larger search spaces,computer vision models,weight sharing,searched architectures,efficient search methods