D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models
NeurIPS 2024
Keywords: Large Language Models, Dynamic Inference, Inference Acceleration, Adaptive Computing Resource Allocation