When Fairness meets Bias: a Debiased Framework for Fairness aware Top-N Recommendation

Proceedings of the 17th ACM Conference on Recommender Systems, RecSys 2023 (2023)

Abstract
Fairness in recommendation has recently attracted increasing attention due to growing concerns about algorithmic discrimination and ethics. While recent years have witnessed many promising fairness-aware recommender models, an important problem has been largely ignored: the fairness estimate can itself be biased by users' personalized selection tendencies and non-uniform item exposure probabilities. To study this problem, we formally define a novel task named unbiased fairness-aware Top-N recommendation. To solve it, we first define an ideal loss function over all user-item pairs. Since only a small fraction of user-item interactions can be observed in real-world datasets, we then approximate this ideal loss with a more tractable objective based on the inverse propensity score (IPS). Because recommendation datasets can be noisy and quite sparse, which makes it difficult to estimate the IPS accurately, we propose to optimize the objective over an IPS range instead of at a specific point, which improves the model's fault tolerance. To make our model more applicable to the commonly studied Top-N recommendation setting, we soften ranking metrics such as Precision, Hit-Ratio, and NDCG to derive a fully differentiable framework. We conduct extensive experiments on four real-world datasets to demonstrate the effectiveness of our model.
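The abstract does not spell out how the ranking metrics are softened; the sketch below is a minimal, assumed illustration of the general idea, using a sigmoid-relaxed rank to obtain a smoothed (differentiable) NDCG. The function name soft_ndcg, the temperature tau, and this particular relaxation are illustrative assumptions, not the paper's actual formulation.

import numpy as np

def soft_ndcg(scores, relevance, tau=1.0):
    """Smoothed NDCG surrogate for one user (assumed illustration).

    Replaces the hard rank of each item with a sigmoid-relaxed rank so the
    metric becomes a smooth function of the predicted scores. Smaller `tau`
    makes the relaxation closer to the discrete metric.
    """
    scores = np.asarray(scores, dtype=float)
    relevance = np.asarray(relevance, dtype=float)

    # Relaxed rank of item i: 1 + sum_{j != i} sigmoid((s_j - s_i) / tau).
    # The self term contributes sigmoid(0) = 0.5, hence the 0.5 offset below.
    diff = (scores[None, :] - scores[:, None]) / tau
    soft_rank = 0.5 + (1.0 / (1.0 + np.exp(-diff))).sum(axis=1)

    # Smoothed DCG: relevance discounted by the relaxed rank.
    dcg = np.sum(relevance / np.log2(1.0 + soft_rank))

    # Ideal DCG uses the exact ranks of relevance sorted in decreasing order.
    ideal = np.sort(relevance)[::-1]
    idcg = np.sum(ideal / np.log2(2.0 + np.arange(len(ideal))))
    return dcg / idcg if idcg > 0 else 0.0

# Toy check: with a sharp temperature, ranking the single relevant item
# first yields a soft NDCG close to 1.
print(soft_ndcg(scores=[2.0, 0.5, 1.0], relevance=[1, 0, 0], tau=0.1))  # ~1.0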
Keywords
recommendation system, fairness-aware recommendation