Online Network Revenue Management Using Thompson Sampling

Operations Research (2018)

Abstract
Thompson sampling is a randomized Bayesian machine learning method, whose original motivation was to sequentially evaluate treatments in clinical trials. In recent years, this method has drawn wide attention, as Internet companies have successfully implemented it for online ad display. In "Online network revenue management using Thompson sampling," K. Ferreira, D. Simchi-Levi, and H. Wang propose using Thompson sampling for a revenue management problem where the demand function is unknown. A main challenge in adopting Thompson sampling for revenue management is that the original method does not incorporate inventory constraints. However, the authors show that Thompson sampling can be naturally combined with a linear programming formulation to include inventory constraints. The result is a dynamic pricing algorithm that incorporates domain knowledge and has strong theoretical performance guarantees as well as promising numerical performance results. Interestingly, the authors demonstrate that Thompson sampling achieves poor performance when it does not take domain knowledge into account. Finally, the proposed dynamic pricing algorithm is highly flexible and is applicable in a range of industries, from airlines and internet advertising all the way to online retailing.

We consider a price-based network revenue management problem in which a retailer aims to maximize revenue from multiple products with limited inventory over a finite selling season. As is common in practice, we assume the demand function contains unknown parameters that must be learned from sales data. In the presence of these unknown demand parameters, the retailer faces a trade-off commonly referred to as the "exploration-exploitation trade-off." Toward the beginning of the selling season, the retailer may offer several different prices to try to learn demand at each price ("exploration" objective). Over time, the retailer can use this knowledge to set a price that maximizes revenue throughout the remainder of the selling season ("exploitation" objective). We propose a class of dynamic pricing algorithms that builds on the simple, yet powerful, machine learning technique known as "Thompson sampling" to address the challenge of balancing the exploration-exploitation trade-off in the presence of inventory constraints. Our algorithms have both strong theoretical performance guarantees and promising numerical performance results when compared with other algorithms developed for similar settings. Moreover, we show how our algorithms can be extended for use in general multi-armed bandit problems with resource constraints as well as in applications in other revenue management settings and beyond. The online appendix is available at https://doi.org/10.1287/opre.2018.1755.
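To make the exploration-exploitation mechanism concrete, here is a minimal single-product sketch of Thompson sampling for dynamic pricing with a Bernoulli purchase model and Beta priors. This is an illustration only, not the authors' algorithm: the paper handles multiple products and resources by combining the sampled demand parameters with a linear program, whereas this sketch uses a greedy price choice and a single inventory counter. The class name `TSPricer` and all demand rates below are hypothetical.

```python
import random


class TSPricer:
    """Illustrative Thompson-sampling pricer for one product whose demand at
    each candidate price is Bernoulli with an unknown purchase probability.
    Simplification of the paper's setting: no linear program, one resource."""

    def __init__(self, prices, inventory):
        self.prices = prices
        self.inventory = inventory
        # Beta(1, 1) prior on the purchase probability at each price.
        self.alpha = [1.0] * len(prices)
        self.beta = [1.0] * len(prices)

    def choose_price(self):
        # Exploration: draw one plausible demand rate per price from
        # its posterior, rather than using the posterior mean.
        sampled = [random.betavariate(a, b)
                   for a, b in zip(self.alpha, self.beta)]
        # Exploitation: offer the price that maximizes sampled revenue rate.
        revenue_rates = [p * d for p, d in zip(self.prices, sampled)]
        return max(range(len(self.prices)), key=revenue_rates.__getitem__)

    def update(self, idx, sold):
        # Conjugate Bayesian update of the posterior for the offered price.
        if sold:
            self.alpha[idx] += 1
            self.inventory -= 1
        else:
            self.beta[idx] += 1


# Simulated selling season against hypothetical true demand rates.
random.seed(42)
true_demand = {10.0: 0.6, 20.0: 0.25}
pricer = TSPricer(prices=[10.0, 20.0], inventory=20)
revenue = 0.0
for _ in range(100):
    if pricer.inventory == 0:  # stop when the season's stock runs out
        break
    i = pricer.choose_price()
    price = pricer.prices[i]
    sold = random.random() < true_demand[price]
    pricer.update(i, sold)
    revenue += price * sold
```

Because each round samples from the posterior, prices with uncertain demand are still tried occasionally, while posteriors concentrating on low revenue rates are offered less and less often; this is the self-correcting exploration the abstract describes, minus the inventory-aware LP reoptimization that the paper adds.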
Keywords
multi-armed bandit, Thompson sampling, revenue management