User Strategization and Trustworthy Algorithms
CoRR (2023)
Abstract
Many human-facing algorithms – including those that power recommender
systems or hiring decision tools – are trained on data provided by their
users. The developers of these algorithms commonly adopt the assumption that
the data generating process is exogenous: that is, how a user reacts to a given
prompt (e.g., a recommendation or hiring suggestion) depends on the prompt and
not on the algorithm that generated it. For example, the assumption that a
person's behavior follows a ground-truth distribution is an exogeneity
assumption. In practice, when algorithms interact with humans, this assumption
rarely holds because users can be strategic. Recent studies document, for
example, TikTok users changing their scrolling behavior after learning that
TikTok uses it to curate their feed, and Uber drivers changing how they accept
and cancel rides in response to changes in Uber's algorithm.
Our work studies the implications of this strategic behavior by modeling the
interactions between a user and their data-driven platform as a repeated,
two-player game. We first find that user strategization can actually help
platforms in the short term. We then show that it corrupts platforms' data and
ultimately hurts their ability to make counterfactual decisions. We connect
this phenomenon to user trust, showing that designing trustworthy algorithms
can go hand in hand with accurate estimation. Finally, we provide a
formalization of trustworthiness that inspires potential interventions.