Changing distributions and preferences in learning systems

AIES(2023)

Abstract
In this talk, I’ll describe some recent work outlining how distribution shifts are fundamental to working with human-centric data. Some of these shifts come from attempting to "join" datasets gathered in different contexts; others may be the result of people’s preferences affecting which data they provide to which systems; and still more can arise when people’s preferences are themselves shaped by ML systems’ recommendations. Each of these types of shift requires different modeling and analysis to more accurately predict the behavior of ML pipelines deployed in settings where they interact repeatedly with people who care about their predictions.
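The last kind of shift mentioned above, where recommendations reshape the preferences that generated the training data, can be illustrated with a toy simulation. The dynamics below (a user whose preference distribution drifts toward whatever item is recommended, with drift rate `alpha`) are a hypothetical sketch for intuition, not a model from the talk:

```python
def step(pref, alpha=0.2):
    """One round of the feedback loop: recommend the currently
    most-preferred item, then let exposure pull the user's
    preference distribution toward that item."""
    item = max(range(len(pref)), key=lambda i: pref[i])
    new_pref = [p * (1 - alpha) for p in pref]  # decay all preferences
    new_pref[item] += alpha                     # boost the recommended item
    return new_pref, item

# A user who is nearly indifferent among three items.
pref = [0.34, 0.33, 0.33]
for _ in range(20):
    pref, recommended = step(pref)

# After repeated interaction, preference mass concentrates on the item
# that started with a tiny initial edge; the data distribution the
# system now observes differs sharply from the one it was trained on.
print(pref)
```

Even this simple loop shows why a pipeline evaluated on the pre-deployment distribution can mispredict its own post-deployment behavior.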