Frontiers in AI Acceleration: From Approximate Computing to FeFET Monolithic 3D Integration

2023 IFIP/IEEE 31st International Conference on Very Large Scale Integration (VLSI-SoC)

Abstract
With the rapidly expanding applications of artificial intelligence (AI), the quest for hardware acceleration to foster high-speed and energy-efficient AI computation has become ever more important. In this work, we first explore the performance and energy advantages of classical AI acceleration with conventional systolic multiply-accumulate (MAC) arrays. We then highlight the growing importance of monolithic 3D integration as a transformative hardware acceleration strategy that moves beyond the constraints of classical von Neumann architectures. We also discuss how brain-inspired hyperdimensional computing (HDC) offers a promising avenue for overcoming the power-hungry MAC arrays that are inevitable in deep learning hardware. To address the limitations of von Neumann architectures, we present the potential of monolithic 3D integration to enable ultra-dense Processing-in-Memory (PiM) layers stacked on top of high-performance CMOS logic, an approach that promises to enhance computational performance. Recognizing the need for compatibility with low thermal budgets, we identify ferroelectric thin-film transistors (FeTFT) as a promising candidate for back-end-of-line (BEOL) fabrication. We highlight recent advances in BEOL FeTFT technology and demonstrate how technology/algorithm co-optimization plays a crucial role in the successful realization of reliable brain-inspired HDC on potentially unreliable FeTFT-based PiM layers. Our results showcase the potential of these innovations for the development of next-generation, energy-efficient AI hardware.
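The abstract's claim that HDC can remain reliable on unreliable FeTFT-based PiM layers rests on a well-known property of hyperdimensional computing: with high-dimensional bipolar vectors, classification by similarity tolerates a substantial fraction of component errors. The sketch below is not from the paper; it is a minimal illustration of that general principle, assuming a dimensionality of 4096 and a 10% component-flip rate as stand-ins for device parameters.

```python
import random

D = 4096  # hypervector dimensionality (assumed value, not from the paper)

def rand_hv():
    # random bipolar hypervector with components in {-1, +1}
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    # elementwise multiply: associates two hypervectors
    return [x * y for x, y in zip(a, b)]

def bundle(vs):
    # componentwise majority vote: superimposes several hypervectors
    return [1 if sum(col) >= 0 else -1 for col in zip(*vs)]

def sim(a, b):
    # normalized dot product (cosine similarity for bipolar vectors)
    return sum(x * y for x, y in zip(a, b)) / D

def flip(hv, p):
    # model unreliable memory cells by flipping each component with probability p
    return [-x if random.random() < p else x for x in hv]

random.seed(0)
classes = {c: rand_hv() for c in "ABC"}   # stored class prototypes
query = flip(classes["B"], 0.10)          # read back with a 10% bit-error rate
best = max(classes, key=lambda c: sim(classes[c], query))
# high dimensionality keeps the corrupted query far closer to its own
# prototype (similarity near 0.8) than to unrelated ones (near 0)
```

Because unrelated random hypervectors are nearly orthogonal (similarity on the order of 1/sqrt(D)), even heavily corrupted reads still match the correct prototype, which is the algorithmic slack that technology/algorithm co-optimization can trade against device reliability.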