Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector
arXiv (2024)
Abstract
AI algorithms used in the public sector, e.g., for allocating social benefits
or predicting fraud, often involve multiple public and private stakeholders at
various phases of the algorithm's life-cycle. Communication issues between
these diverse stakeholders can lead to misinterpretation and misuse of
algorithms. We investigate the communication processes for AI fairness-related
decisions by conducting interviews with practitioners working on algorithmic
systems in the public sector. By applying qualitative coding analysis, we
identify key elements of communication processes that underlie fairness-related
human decisions. We analyze the division of roles, tasks, skills, and
challenges perceived by stakeholders. We formalize the underlying communication
issues within a conceptual framework that (i) represents the communication
patterns and (ii) outlines missing elements, such as actors who lack the skills
for their tasks. The framework is used to describe and analyze key
organizational issues for fairness-related decisions. Three general patterns
emerge from the analysis: 1. Policy-makers, civil servants, and domain experts
are less involved than developers throughout a system's life-cycle. As a
result, developers take on extra roles, such as advisor, while potentially
lacking the required skills and guidance from domain experts. 2.
End-users and policy-makers often lack the technical skills to interpret a
system's limitations, and rely on developers to make decisions concerning
fairness issues. 3. Citizens are structurally absent throughout a
system's life-cycle, which may lead to decisions that do not include relevant
considerations from impacted stakeholders.