High-Level Parallelism and Nested Features for Dynamic Inference Cost and Top-Down Attention

André Peter Kelm, Niels Hannemann, Bruno Heberle, Lucas Schmidt, Tim Rolff, Christian Wilms, Ehsan Yaghoubi, Simone Frintrop

arXiv (2023)

Abstract
This paper introduces a novel network topology that seamlessly integrates dynamic inference cost with a top-down attention mechanism, addressing two significant gaps in traditional deep learning models. Drawing inspiration from human perception, we combine sequential processing of generic low-level features with parallelism and nesting of high-level features. This design not only reflects a finding from recent neuroscience research regarding spatially and contextually distinct neural activations in the human cortex, but also introduces a novel "cutout" technique: the ability to selectively activate only task-relevant categories, optimizing inference cost and eliminating the need for re-training. We believe this paves the way for future network designs that are lightweight and adaptable, making them suitable for a wide range of applications, from compact edge devices to large-scale clouds. Our proposed topology also comes with a built-in top-down attention mechanism, which allows processing to be directly influenced by enhancing or inhibiting category-specific high-level features, drawing parallels to the selective attention mechanism observed in human cognition. Using targeted external signals, we experimentally enhanced predictions across all tested models. In terms of dynamic inference cost, our methodology can exclude up to 73.48% of parameters and reduce giga multiply-accumulate (GMAC) operations by up to 84.41%; analysis against comparative baselines shows an average reduction of 40% in parameters and 8% in GMACs across the evaluated cases.
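To make the topology concrete, the following is a minimal PyTorch sketch of the idea described in the abstract: a shared sequential low-level stem feeding parallel, category-specific high-level branches, where deactivating branches ("cutout") skips their computation and per-category gain signals act as a simple top-down attention mechanism. All names (ParallelNestedNet, stem, branches), layer choices, and the gain/activation interface are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch, not the paper's code: sequential low-level stem shared by
# parallel per-category high-level branches, with "cutout" (skip inactive
# branches) and top-down gains (enhance/inhibit category-specific outputs).
import torch
import torch.nn as nn


class ParallelNestedNet(nn.Module):
    def __init__(self, num_categories: int, stem_channels: int = 64, branch_dim: int = 128):
        super().__init__()
        # Generic low-level features, processed sequentially and shared by all branches.
        self.stem = nn.Sequential(
            nn.Conv2d(3, stem_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(stem_channels, stem_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # One independent high-level branch per category (parallel, category-specific).
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(stem_channels, branch_dim, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(branch_dim, 1),  # per-category evidence (logit)
            )
            for _ in range(num_categories)
        )

    def forward(self, x, active=None, gains=None):
        # active: optional list of category indices to keep ("cutout" of the rest).
        # gains:  optional per-category scalars acting as top-down enhancement/inhibition.
        feats = self.stem(x)
        n = len(self.branches)
        active = range(n) if active is None else active
        logits = x.new_full((x.size(0), n), float("-inf"))  # excluded categories stay -inf
        for i in active:  # only active branches are computed, saving parameters/GMACs
            out = self.branches[i](feats).squeeze(-1)
            if gains is not None:
                out = out * gains[i]
            logits[:, i] = out
        return logits


# Usage: activate only task-relevant categories and boost one of them top-down.
model = ParallelNestedNet(num_categories=10)
x = torch.randn(2, 3, 64, 64)
logits = model(x, active=[1, 4, 7], gains={1: 1.0, 4: 1.5, 7: 0.5})
```

Because each high-level branch is self-contained, dropping a branch removes its parameters and multiply-accumulates from the forward pass without any re-training, which is the property the reported parameter and GMAC reductions refer to.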