Purpose Limitation By Design As A Counter To Function Creep And System Insecurity In Police Artificial Intelligence

EngRN: Computer Engineering (Topic) (2020)

Abstract
AI’s dual nature makes it both a threat to and a means of protecting human rights and information technology systems. Issues such as the opacity of algorithmic processes, the potential biases embedded in them, and the inherent security vulnerabilities of such applications reveal a tension between these technological pitfalls and the aptness of current regulatory frameworks. As a consequence, normative concepts may need to be reconsidered so as to support the development of fair AI. This paper reflects on the importance of the purpose limitation principle and its role in the design phase in mitigating the adverse impact of AI on human rights and the security of information systems. Defining, elaborating, and ‘manufacturing’ the purpose for which AI is deployed is critical for mitigating its intrusive impact on human rights. However, the inevitable uncertainty in the formulation of these objectives may lead to scenarios in which machines do what we ask them to do, but not necessarily what we intend. Moreover, the continuous development of a system’s capabilities may allow for uses far beyond the scope of its originally envisaged deployment and purpose. In an AI context, deployment beyond a system’s originally specified, explicit and legitimate purposes can lead to function creep and exacerbate security incidents. For example, AI systems intended for specific crime-prevention goals might gradually be repurposed for unwarranted surveillance activities not originally considered. Furthermore, the lack of a defined purpose, in combination with the inherent security vulnerabilities of AI technology, calls into question the suitability of machine learning tools in complex information technology systems. In data protection law, the principle of purpose limitation requires the purposes for which data is processed to be specified, and subsequent use limited thereto (OECD, 1981).
This paper seeks to determine whether this principle can address the consequences of function creep by exploring the use cases of predictive policing and information systems security. It is argued that, although this core principle can improve the security of AI systems and their alignment with human rights, in practice it often fails to do so. We propose that a more incisive assessment of the envisioned purposes take place during the design phase, to improve both the security of AI systems and their alignment with human rights.
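Purely as an illustrative sketch (the paper itself is a legal analysis and prescribes no implementation; all names here, such as `Dataset` and `process`, are hypothetical), purpose limitation "by design" can be read as a fail-closed gate: the purposes specified at collection time are attached to the data, and any later processing request for an undeclared purpose is rejected rather than silently served, which is the function-creep scenario the abstract describes.

```python
from dataclasses import dataclass


class PurposeLimitationError(Exception):
    """Raised when data is requested for a purpose outside those declared at collection."""


@dataclass(frozen=True)
class Dataset:
    name: str
    # Purposes must be specified and explicit at collection time (cf. OECD, 1981).
    allowed_purposes: frozenset


def process(dataset: Dataset, purpose: str) -> str:
    # Purpose limitation by design: the check runs before any processing,
    # so repurposing the system (function creep) fails closed.
    if purpose not in dataset.allowed_purposes:
        raise PurposeLimitationError(
            f"'{purpose}' is not among the declared purposes of '{dataset.name}'"
        )
    return f"processed {dataset.name} for {purpose}"


# A predictive-policing dataset collected for one explicit crime-prevention goal.
crime_stats = Dataset("crime_stats", frozenset({"burglary-hotspot-prediction"}))
```

Under this sketch, `process(crime_stats, "burglary-hotspot-prediction")` succeeds, while a request for an undeclared purpose such as `"mass-surveillance"` raises `PurposeLimitationError`, mirroring the repurposing example in the abstract.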