A Trust Framework for Government Use of Artificial Intelligence and Automated Decision Making
CoRR (2022)
Abstract
This paper identifies the current challenges of the mechanisation, digitisation and automation of public sector systems and processes, and proposes a modern and practical framework to ensure and assure ethical, high-veracity Artificial Intelligence (AI) and Automated Decision Making (ADM) systems in public institutions. This framework is designed for the specific context of the public sector, in the jurisdictional and constitutional context of Australia, but is extendable to other jurisdictions and to the private sector. The goals of the framework are to: 1) earn public trust and grow public confidence in government systems; 2) ensure the unique responsibilities and accountabilities (including to the public) of public institutions under Administrative Law are met effectively; and 3) assure a positive human, societal and ethical impact from the adoption of such systems. The framework could be extended to assure positive environmental or other impacts, but this paper focuses on human/societal outcomes and public trust. This paper is meant to complement principles-based frameworks like Australia's Artificial Intelligence Ethics Framework and the EU Assessment List for Trustworthy AI. In many countries, COVID created a bubble of improved trust, a bubble which has arguably already popped. In an era of unprecedented mistrust of public institutions (but even in times of high trust), it is not enough that a service is faster or more cost-effective. This paper proposes recommendations for government systems (technology platforms, operations, culture, governance, engagement, etc.) that would help to improve public confidence and trust in public institutions, policies and services, whilst meeting the special obligations and responsibilities of the public sector.
Keywords
trust framework, artificial intelligence, automated decision, government use