Data Privacy Threat Modelling for Autonomous Systems: A Survey From the GDPR's Perspective

IEEE Transactions on Big Data (2023)

Abstract
Artificial Intelligence-based applications are increasingly deployed in every field of life, including smart homes, smart cities, healthcare services, and autonomous systems, where personal data is collected across heterogeneous sources and processed using "black-box" algorithms on opaque centralised servers. As a consequence, preserving the data privacy and security of these applications is of utmost importance. In this respect, a modelling technique for identifying potential data privacy threats and specifying countermeasures to mitigate the related vulnerabilities in such AI-based systems plays a significant role in preserving and securing personal data. Various threat modelling techniques have been proposed, such as STRIDE, LINDDUN, and PASTA, but none of them is sufficient to model the data privacy threats in autonomous systems. Furthermore, they are not designed to model compliance with data protection legislation like the EU/UK General Data Protection Regulation (GDPR), which is fundamental to protecting data owners' privacy as well as to preventing personal data from privacy-related attacks. In this article, we survey the existing threat modelling techniques for data privacy threats in autonomous systems and then analyse such techniques from the viewpoint of GDPR compliance. Following the analysis, we employ STRIDE and LINDDUN in autonomous cars, a specific use-case of autonomous systems, to scrutinise the challenges and gaps of the existing techniques when modelling data privacy threats. Prospective research directions for refining data privacy threat and GDPR-compliance modelling techniques for autonomous systems are also presented.
Keywords
Autonomous systems, data privacy, General Data Protection Regulation, GDPR, threat modelling technique