Explainability in AI-based Behavioral Malware Detection Systems

Computers & Security (2024)

Abstract
Nowadays, our security and privacy are strongly threatened by malware programs that aim to steal confidential data and render systems inoperable, among other things. While traditional signature-based malware detection and statistical analysis have proven ineffective and time-consuming, data-driven Artificial Intelligence (AI) techniques, i.e. Machine Learning (ML) and Deep Learning (DL) approaches, have recently been applied successfully by leveraging the behaviour of malware in terms of API calls, achieving promising performance. However, their black-box nature leads to a lack of explainability, which prevents their application in real-world scenarios. In light of this, eXplainable Artificial Intelligence (XAI) methodologies and tools can be embedded within an AI-based malware detection process to make the produced results more understandable. In this paper, we propose an XAI framework for behavioral malware detection problems and evaluate the usefulness of four XAI methods (SHAP, LIME, LRP and the Attention mechanism) on three datasets with different sizes, sequence lengths and numbers of classes. This allows us to assess the strengths and weaknesses, in terms of effectiveness and efficiency, of recurrent deep architectures (i.e. Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models) and their applicability in modern Cyber Security (CS) scenarios.
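The abstract gives no implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: a recurrent classifier (here a Keras GRU) over encoded API-call traces, explained post hoc with model-agnostic KernelSHAP. The vocabulary size, sequence length, layer sizes and synthetic data below are illustrative assumptions, not the authors' actual setup.

# Minimal sketch (assumed setup): GRU over API-call token ids + SHAP.
import numpy as np
import tensorflow as tf
import shap

VOCAB_SIZE, SEQ_LEN, N_CLASSES = 300, 100, 2  # assumed dataset parameters

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 32),   # embed API-call tokens
    tf.keras.layers.GRU(64),                     # summarize the call sequence
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Synthetic stand-in for encoded API-call traces (integer token ids).
X = np.random.randint(0, VOCAB_SIZE, size=(256, SEQ_LEN))
y = np.random.randint(0, N_CLASSES, size=(256,))
model.fit(X, y, epochs=1, verbose=0)

# Model-agnostic SHAP attribution: which sequence positions push the
# prediction toward "malware" vs "benign".
explainer = shap.KernelExplainer(
    lambda x: model.predict(x, verbose=0), shap.sample(X, 20)
)
shap_values = explainer.shap_values(X[:5], nsamples=100)

In practice the attributions would be mapped back from token ids to API-call names so an analyst can see which calls drove a detection; LIME, LRP and attention weights would plug into the same model in place of KernelSHAP.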
Keywords
Behavioral Malware Detection, eXplainable Artificial Intelligence, Data-driven Cyber Security, Long Short-Term Memory