Is Temperature Sample Efficient for Softmax Gaussian Mixture of Experts?
CoRR (2024)
Abstract
Dense-to-sparse gating mixture of experts (MoE) has recently become an
effective alternative to the well-known sparse MoE. Rather than fixing the
number of activated experts as in the latter model, which could limit the
investigation of potential experts, the former model utilizes the temperature
to control the softmax weight distribution and the sparsity of the MoE during
training in order to stabilize expert specialization. Nevertheless, while
previous attempts have been made to theoretically comprehend the sparse MoE, a
comprehensive analysis of the dense-to-sparse gating MoE has remained elusive.
Therefore, in this paper we aim to explore the impacts of the dense-to-sparse
gate on maximum likelihood estimation under the Gaussian MoE. We demonstrate
that, due to interactions between the temperature and other model parameters
via some partial differential equations, the convergence rates of parameter
estimation are slower than any polynomial rate, and can be as slow as
𝒪(1/log(n)), where n denotes the sample size. To address this issue, we
propose a novel activation dense-to-sparse gate, which routes the output of a
linear layer through an activation function before delivering it to the
softmax function. By imposing linear independence conditions on the activation
function and its derivatives, we show that the parameter estimation rates are
significantly improved to polynomial rates.
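
To make the gating mechanisms concrete, the following is a minimal NumPy
sketch (not the authors' implementation) of a temperature-scaled softmax gate
and of the proposed activation dense-to-sparse gate; the function names, the
choice of tanh as the activation, and the toy mixing of scalar expert outputs
are illustrative assumptions.

import numpy as np

def dense_to_sparse_gate(x, W, temperature):
    # Temperature-scaled softmax gate over k experts.
    # x: (d,) input, W: (d, k) gating weights.
    # A large temperature spreads weight over all experts (dense); as the
    # temperature is annealed toward zero, the weights concentrate on a few
    # experts (sparse).
    logits = (x @ W) / temperature
    logits = logits - logits.max()          # numerical stability
    weights = np.exp(logits)
    return weights / weights.sum()

def activation_dense_to_sparse_gate(x, W, temperature, activation=np.tanh):
    # Proposed variant: the linear layer's output is passed through an
    # activation function before the temperature-scaled softmax.
    logits = activation(x @ W) / temperature
    logits = logits - logits.max()
    weights = np.exp(logits)
    return weights / weights.sum()

# Toy usage: mix k scalar expert outputs with the gate weights while
# annealing the temperature.
rng = np.random.default_rng(0)
d, k = 4, 3
x = rng.normal(size=d)
W = rng.normal(size=(d, k))
expert_outputs = rng.normal(size=k)
for tau in (5.0, 1.0, 0.1):
    g = dense_to_sparse_gate(x, W, tau)
    h = activation_dense_to_sparse_gate(x, W, tau)
    print(f"tau={tau}: softmax gate {np.round(g, 3)}, "
          f"activation gate {np.round(h, 3)}, "
          f"mixed output {g @ expert_outputs:.3f}")

Annealing the temperature from a large value toward zero moves either gate
from a nearly uniform (dense) weight distribution toward a nearly one-hot
(sparse) one, which is the dense-to-sparse behavior described above.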