20.5 C-Transformer: A 2.6–18.1μJ/Token Homogeneous DNN-Transformer/Spiking-Transformer Processor with Big-Little Network and Implicit Weight Generation for Large Language Models

2024 IEEE International Solid-State Circuits Conference (ISSCC)

Keywords
Language Model, Large Language Models, Energy Consumption, Power Consumption, Sparsity, Input Values, Random Values, Question Answering, Input Channels, Spiking Neural Networks, Perplexity, Least Significant Bit, Real-time Response, Increase Energy Efficiency, Datapath, Position Embedding, Output Spike, Area Overhead