Mixture of Experts for Network Optimization: A Large Language Model-enabled Approach

arXiv (Cornell University), 2024

Abstract
Optimizing various wireless user tasks poses a significant challenge for networking systems because of the expanding range of user requirements. Despite advancements in Deep Reinforcement Learning (DRL), the need for customized optimization tasks for individual users complicates developing and applying numerous DRL models, leading to substantial computation resource and energy consumption as well as potentially inconsistent outcomes. To address this issue, we propose a novel approach utilizing a Mixture of Experts (MoE) framework, augmented with Large Language Models (LLMs), to analyze user objectives and constraints effectively, select specialized DRL experts, and weigh each decision from the participating experts. Specifically, we develop a gate network to oversee the expert models, allowing a collective of experts to tackle a wide array of new tasks. Furthermore, we innovatively substitute the traditional gate network with an LLM, leveraging its advanced reasoning capabilities to manage expert model selection for joint decisions. Our proposed method reduces the need to train new DRL models for each unique optimization problem, decreasing energy consumption and AI model implementation costs. The LLM-enabled MoE approach is validated through a general maze navigation task and a specific network service provider utility maximization task, demonstrating its effectiveness and practical applicability in optimizing complex networking systems.
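The abstract describes a gate that reads the user's objective and constraints, selects among pre-trained DRL experts, and combines their decisions by weight. The sketch below is a minimal, hypothetical illustration of that flow only: the names (DRLExpert, llm_select_experts, moe_decision), the keyword heuristic standing in for the LLM gate, and the toy action scores are all assumptions for illustration, not the authors' implementation or API.

```python
# Illustrative sketch of an LLM-gated Mixture of Experts decision step.
# All identifiers are hypothetical; the LLM gate is stubbed with a simple
# keyword heuristic, and each "expert" returns placeholder action scores.

from dataclasses import dataclass
from typing import Dict, List
import random


@dataclass
class DRLExpert:
    """A pre-trained DRL expert specialized for one optimization objective."""
    name: str
    specialty: str

    def action_scores(self, state: List[float]) -> Dict[str, float]:
        # Placeholder policy: a real expert would return its trained model's
        # value estimates for each candidate network action.
        random.seed(hash((self.name, tuple(state))) % (2**32))
        return {a: random.random()
                for a in ("increase_power", "reallocate_bandwidth", "hold")}


def llm_select_experts(task_description: str,
                       experts: List[DRLExpert]) -> Dict[str, float]:
    """Stand-in for the LLM gate: map the user's objective/constraints to a
    weight per expert. Here a keyword match replaces the LLM's reasoning."""
    weights = {e.name: (1.0 if e.specialty in task_description.lower() else 0.1)
               for e in experts}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}


def moe_decision(task_description: str, state: List[float],
                 experts: List[DRLExpert]) -> str:
    """Weigh each participating expert's decision by the gate's weights and
    return the action with the highest combined score."""
    gate_weights = llm_select_experts(task_description, experts)
    combined: Dict[str, float] = {}
    for expert in experts:
        w = gate_weights[expert.name]
        for action, score in expert.action_scores(state).items():
            combined[action] = combined.get(action, 0.0) + w * score
    return max(combined, key=combined.get)


if __name__ == "__main__":
    experts = [
        DRLExpert("E1", specialty="latency"),
        DRLExpert("E2", specialty="throughput"),
        DRLExpert("E3", specialty="energy"),
    ]
    task = "Maximize throughput for the service provider under an energy budget"
    print(moe_decision(task, state=[0.4, 0.7, 0.1], experts=experts))
```

Under these assumptions, swapping the heuristic gate for an LLM only changes how the per-expert weights are produced; the weighted combination of expert decisions stays the same, which is what lets one pool of experts serve many new tasks without retraining.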
Keywords
Generative AI (GAI), large language model, mixture of experts, network optimization