A Malware Detection and Extraction Method for the Related Information Using the ViT Attention Mechanism on Android Operating System

Cited by: 11
Authors
Jo, Jeonggeun [1 ]
Cho, Jaeik [2 ]
Moon, Jongsub [1 ]
Affiliations
[1] Korea Univ, Dept Informat Secur, Seoul 02841, South Korea
[2] Lewis Univ, Dept Comp Sci, Romeoville, IL 60446 USA
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 11
Keywords
explainable artificial intelligence (XAI); deep learning; cybersecurity; mobile malware; malware detection; visualization;
DOI
10.3390/app13116839
Chinese Library Classification (CLC)
O6 [Chemistry];
Discipline Classification Code
0703;
Abstract
Artificial intelligence (AI) is increasingly being utilized in cybersecurity, particularly for detecting malicious applications. However, the black-box nature of AI models presents a significant challenge: their lack of transparency makes the results difficult to understand and trust. Addressing this requires incorporating explainability into the detection model, yet there has been insufficient research on explaining why applications are detected as malicious or on describing their behavior. In this paper, we propose a Vision Transformer (ViT)-based malware detection model together with a method for extracting malicious behavior from an attention map, achieving both high detection accuracy and high interpretability. The detection model takes as input an image converted from an application. ViT is well suited to this task because its attention mechanism enables robust interpretation and understanding of the intricate patterns within the images. An attention map is generated from the attention values produced during detection and is used to identify the factors the model deems important; class and method names are then extracted and reported based on these factors. Detection performance was validated on real-world datasets. The malware detection accuracy was 80.27%, which is high compared with other models used for image-based malware detection. Interpretability was measured in the same way as the F1-score, yielding an interpretability score of 0.70, which is superior to existing interpretable machine learning (ML)-based methods such as Drebin, LIME, and XMal. By analyzing malicious applications, we also confirmed that the extracted classes and methods are related to malicious behavior. With the proposed method, security experts can understand the reason behind the model's detection and the behavior of malicious applications. Given the growing importance of explainable artificial intelligence in cybersecurity, this method is expected to make a significant contribution to the field.
Pages: 22
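
The abstract outlines a pipeline in which the per-layer attention values of a ViT are fused into an attention map and the highest-attention patches of the application image are traced back to locations in the underlying application. The following is a minimal NumPy sketch of that idea, using attention rollout and a hypothetical patch-to-byte-offset mapping; the function names, the row-major one-byte-per-pixel image layout, and the rollout formulation are illustrative assumptions, not the authors' exact method.

import numpy as np

def attention_rollout(attn_layers, residual_alpha=0.5):
    # Fuse per-layer, head-averaged ViT attention matrices (token 0 = [CLS])
    # into a single map via attention rollout.
    rollout = np.eye(attn_layers[0].shape[0])
    for attn in attn_layers:
        # Account for the residual connection, then renormalize rows.
        attn = residual_alpha * attn + (1.0 - residual_alpha) * np.eye(attn.shape[0])
        attn = attn / attn.sum(axis=-1, keepdims=True)
        rollout = attn @ rollout
    # Attention flowing from [CLS] to each image-patch token.
    return rollout[0, 1:]

def patches_to_byte_ranges(patch_scores, image_side, patch_size, bytes_per_pixel=1, top_k=5):
    # Map the highest-scoring patches back to byte offsets of a row-major app image
    # (hypothetical mapping: each pixel corresponds to bytes_per_pixel bytes).
    patches_per_row = image_side // patch_size
    ranges = []
    for idx in np.argsort(patch_scores)[::-1][:top_k]:
        row, col = divmod(int(idx), patches_per_row)
        first_pixel = row * patch_size * image_side + col * patch_size
        last_pixel = (row * patch_size + patch_size - 1) * image_side + col * patch_size + patch_size - 1
        # Approximate span: the patch occupies patch_size row strips within this range.
        ranges.append((first_pixel * bytes_per_pixel, (last_pixel + 1) * bytes_per_pixel))
    return ranges

# Toy usage: 12 layers of random attention over 1 [CLS] + 196 patch tokens (224x224 image, 16x16 patches).
rng = np.random.default_rng(0)
layers = [rng.random((197, 197)) for _ in range(12)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]
scores = attention_rollout(layers)
print(patches_to_byte_ranges(scores, image_side=224, patch_size=16))

In practice the random matrices would be replaced by the detection model's actual attention weights, and the recovered byte ranges would be resolved to class and method names in the application, as the abstract describes.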