
Incremental AI

Asian Journal of Law and Economics (2023)

Abstract
The usual narrative around artificial intelligence (AI) is one of backlash. A recent study found that giving judges algorithmic decision support ended up increasing disparities, not because the algorithm was biased (in fact, following the algorithm would have lowered disparities), but because the judges paid attention to it selectively, which resulted in greater disparities. This article argues for an incremental approach that leverages recent theoretical insights from the economics of social preferences. The core insight is that judges are moral decision-makers (their rulings declare people right or wrong, good or bad), and to understand what motivates such decision-makers, one can turn to self-image motives, a topic of active behavioral research in recent years. Each stage of the proposed framework leverages a motive related to the self: self-image, self-improvement, self-understanding, and ego. In Stage 1, people use AI as a support tool that speeds up existing processes (for example, by prefilling forms). Once accustomed to this, they can more easily accept an added functionality (Stage 2) in which the AI becomes a choice monitor, pointing out choice inconsistencies and reminding the human of her prior choices in similar situations. Stage 3 elevates the AI to the role of a more general coach, providing outcome feedback on choices and highlighting decision patterns. Then, in Stage 4, the AI brings in other people's decision histories and patterns, serving as a platform for a community of experts. This framework contrasts with the current paradigm, in which the AI simply recommends an optimal decision.
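To make the Stage 2 "choice monitor" concrete, here is a minimal illustrative sketch, not from the paper itself: all names (`PastCase`, `flag_inconsistency`), the feature representation, and the similarity rule are hypothetical assumptions. It flags a new ruling when the same judge decided differently in sufficiently similar past cases, which is the kind of inconsistency reminder the abstract describes.

```python
from dataclasses import dataclass


@dataclass
class PastCase:
    features: tuple[float, ...]  # hypothetical case attributes (e.g. severity, prior record)
    decision: str                # the judge's earlier ruling, e.g. "release" or "detain"


def distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    """Euclidean distance between two case-feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


def flag_inconsistency(history: list[PastCase],
                       features: tuple[float, ...],
                       decision: str,
                       radius: float = 1.0) -> list[PastCase]:
    """Return the judge's own past cases that lie within `radius` of the new
    case but were decided differently. An empty list means the new decision
    is consistent with the judge's prior choices in similar situations."""
    return [c for c in history
            if distance(c.features, features) <= radius and c.decision != decision]


history = [
    PastCase((2.0, 1.0), "release"),
    PastCase((2.2, 0.9), "release"),
    PastCase((8.0, 5.0), "detain"),
]
# A new case very close to the first two, but ruled the other way:
conflicts = flag_inconsistency(history, (2.1, 1.0), "detain")
print(len(conflicts))  # → 2: two similar past cases where the judge released
```

A real monitor would of course need a defensible notion of case similarity; the fixed-radius nearest-neighbor rule here only stands in for that choice.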
Keywords
judicial analytics, causal inference, behavioral judging