Learning to Rank under Multinomial Logit Choice

Journal of Machine Learning Research (2023)

Abstract
Learning the optimal ordering of content is an important challenge in website design. The learning to rank (LTR) framework models this problem as a sequential problem of selecting lists of content and observing where users decide to click. Most previous work on LTR assumes that the user considers each item in the list in isolation, and makes a binary choice to click or not on each. We introduce a multinomial logit (MNL) choice model to the LTR framework, which captures the behaviour of users who consider the ordered list of items as a whole and make a single choice among all the items and a no-click option. Under the MNL model, the user favours items which are either inherently more attractive, or placed in a preferable position within the list. We propose upper confidence bound (UCB) algorithms to minimise regret in two settings: where the position-dependent parameters are known, and where they are unknown. We present theoretical analysis leading to an $\Omega(\sqrt{JT})$ lower bound for the problem, an $\tilde{O}(\sqrt{JT})$ upper bound on the regret of the UCB algorithm in the known-parameter setting, and an $\tilde{O}(K^2\sqrt{JT})$ upper bound on regret, the first such bound, in the more challenging unknown-position-parameter setting. Our analyses are based on tight new concentration results for Geometric random variables, and novel functional inequalities for maximum likelihood estimators computed on discrete data.
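To make the choice model concrete, the sketch below computes the single-choice probabilities implied by an MNL model with position-dependent weights. It assumes the common form in which each displayed item contributes a weight equal to its attractiveness times a position bias, and the no-click option has weight normalised to 1; the function name and argument names are illustrative and not the paper's exact notation.

```python
import numpy as np

def mnl_click_probabilities(attractiveness, position_bias):
    """Sketch of MNL single-choice probabilities over an ordered list.

    attractiveness: attractiveness parameters of the K displayed items.
    position_bias: position-dependent parameters of the K list slots.

    Returns (click_probs, no_click_prob): the user makes one choice among
    the K items and a no-click option whose weight is normalised to 1.
    """
    weights = np.asarray(attractiveness) * np.asarray(position_bias)
    denom = 1.0 + weights.sum()  # 1.0 is the no-click option's weight
    return weights / denom, 1.0 / denom

# Example: three items; earlier positions carry larger position bias,
# so the same item is more likely to be clicked when placed higher.
click_probs, no_click = mnl_click_probabilities(
    attractiveness=[0.8, 0.5, 0.3],
    position_bias=[1.0, 0.7, 0.4],
)
print(click_probs, no_click)
```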
Keywords
Learning to rank, Multinomial Logit choice model, Multi-armed Bandits, Upper Confidence Bound, Concentration Inequalities.