The role of explainable AI in the context of the AI Act.

FAccT (2023)

Abstract
The proposed EU regulation for Artificial Intelligence (AI), the AI Act, has sparked some debate about the role of explainable AI (XAI) in high-risk AI systems. Some argue that black-box AI models will have to be replaced with transparent ones; others argue that using XAI techniques might help in achieving compliance. This work aims to bring some clarity regarding XAI in the context of the AI Act and focuses in particular on the AI Act requirements for transparency and human oversight. After outlining key points of the debate and describing the current limitations of XAI techniques, this paper carries out an interdisciplinary analysis of how the AI Act addresses the issue of opaque AI systems. In particular, we argue that the AI Act neither mandates a requirement for XAI, which is the subject of intense scientific research and is not without technical limitations, nor bans the use of black-box AI systems. Instead, the AI Act aims to achieve its stated policy objectives through a focus on transparency (including documentation) and human oversight. Finally, in order to concretely illustrate our findings and conclusions, a use case on AI-based proctoring is presented.
Keywords
explainable artificial intelligence, XAI, AI Act, EU regulation, trustworthy AI, transparency, human oversight