FeFET-based Process-in-Memory Architecture for Low-Power DNN Training

2021 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH)

Abstract
Although deep neural networks (DNNs) have become the cornerstone of Artificial Intelligence, the current training of DNNs still requires dozens of CPU hours. Prior works created various customized hardware accelerators for DNNs; however, most of these accelerators are designed to accelerate DNN inference and lack basic support for the complex compute phases and sophisticated data dependency involved i...
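The truncated abstract contrasts inference-oriented accelerators with the extra demands of training. As a rough illustration of that gap (not taken from the paper), the NumPy sketch below compares a single inference pass with one training step for a one-layer network; all names, shapes, and the loss function are hypothetical.

```python
import numpy as np

# Illustrative only: a single dense layer with a squared-error loss, showing why
# a training step has more compute phases (and more data dependencies) than
# inference. Shapes and names are hypothetical, not from the paper.

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))       # input batch
W = rng.standard_normal((8, 3)) * 0.1  # layer weights
t = rng.standard_normal((4, 3))       # training targets
lr = 0.01

# Inference: a single forward phase.
y = x @ W

# Training step: forward, loss, backward, and weight update. The backward phase
# must reuse the forward activations (x), a data dependency an inference-only
# accelerator never needs to buffer, and the update rewrites W in place.
y = x @ W                             # forward (activations kept for backward)
loss = 0.5 * np.mean((y - t) ** 2)    # loss
grad_y = (y - t) / y.size             # backward through the loss
grad_W = x.T @ grad_y                 # backward through the layer (needs x)
W -= lr * grad_W                      # weight update (read-modify-write on W)
```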
Keywords
Training, Technological innovation, Power demand, Memory management, Pipelines, Nanoscale devices, Generators