Influencing Bandits: Arm Selection for Preference Shaping
CoRR (2024)
Abstract
We consider a non-stationary multi-armed bandit in which the population preferences are positively and negatively reinforced by the observed rewards. The objective of the algorithm is to shape the population preferences to maximize the fraction of the population favouring a predetermined arm. For the case of binary opinions, two types of opinion dynamics are considered: decreasing elasticity (modeled as a Pólya urn with an increasing number of balls) and constant elasticity (using the voter model). For the first case, we describe an Explore-then-commit policy and a Thompson sampling policy and analyse the regret of each of these policies. We then show that these algorithms and their analyses carry over to the constant-elasticity case. We also describe a Thompson-sampling-based algorithm for the case when more than two types of opinions are present. Finally, we discuss the case where the presence of multiple recommendation systems gives rise to a trade-off between their popularity and opinion-shaping objectives.
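As a rough illustration of the decreasing-elasticity setting, the sketch below simulates a Pólya urn over binary opinions driven by a Thompson-sampling arm-selection rule. Everything concrete here (Bernoulli rewards, the reinforcement rule in which a success adds a ball of the played arm's colour and a failure adds a ball of the other colour, and the gain criterion) is an assumption for illustration; it does not reproduce the paper's exact dynamics or regret analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): two arms with unknown
# Bernoulli reward probabilities; the arm to promote is arm 0.
true_p = np.array([0.7, 0.5])  # unknown to the policy
TARGET = 0

# Pólya urn: one ball per opinion type to start. The urn only grows,
# so each new ball shifts preferences less (decreasing elasticity).
urn = np.array([1.0, 1.0])

# Beta(1, 1) priors for Thompson sampling on each arm's reward rate.
alpha = np.ones(2)
beta = np.ones(2)

for t in range(10_000):
    # Thompson sampling: draw a plausible reward rate for each arm.
    samples = rng.beta(alpha, beta)

    # Assumed reinforcement rule: a success on an arm adds a ball of
    # that arm's colour; a failure adds a ball of the other colour.
    # Expected target-ball gain is therefore p0 for arm 0 and 1 - p1
    # for arm 1; play the arm maximizing the sampled gain.
    gain = np.array([samples[0], 1.0 - samples[1]])
    arm = int(np.argmax(gain))

    # Observe the reward and update the posterior for the played arm.
    reward = rng.random() < true_p[arm]
    alpha[arm] += reward
    beta[arm] += 1 - reward

    # Reinforce the urn according to the assumed rule above.
    urn[arm if reward else 1 - arm] += 1

print(f"fraction favouring target arm: {urn[TARGET] / urn.sum():.3f}")
```

Note the design choice: the policy maximizes the sampled probability of adding a target-coloured ball rather than the sampled reward itself, which is what separates a shaping objective from standard reward maximization.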