
D-LLM: A Token Adaptive Computing Resource Allocation Strategy for Large Language Models

Yikun Jiang, Huanyu Wang, Lei Xie, Hanbin Zhao, Chao Zhang, Hui Qian, John C.S. Lui

NeurIPS 2024

Keywords
Large Language Models, Dynamic Inference, Inference Acceleration, Adaptive Computing Resource Allocation