Toward Learning Model-Agnostic Explanations for Deep Learning-Based Signal Modulation Classifiers

Cited by: 1
Authors
Tian, Yunzhe [1 ]
Xu, Dongyue [1 ]
Tong, Endong [1 ]
Sun, Rui [1 ]
Chen, Kang [1 ]
Li, Yike [1 ]
Baker, Thar [2 ]
Niu, Wenjia [1 ]
Liu, Jiqiang [1 ]
Affiliations
[1] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Transp, Beijing 100044, Peoples R China
[2] Univ Brighton, Sch Architecture Technol & Engn, Brighton BN2 4GJ, England
Keywords
Black-box model; deep learning (DL); explainable AI; interpretability; model reliability; modulation classification; CLASSIFICATION; RECOGNITION; NETWORK;
DOI
10.1109/TR.2024.3367780
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812
Abstract
Recent advances in deep learning (DL) have brought tremendous gains in signal modulation classification. However, DL-based classifiers lack transparency and interpretability, which raises concerns about model reliability and hinders wide deployment in real-world applications. While explainable methods have recently emerged, little has been done to explain DL-based signal modulation classifiers. In this work, we propose a novel model-agnostic explainer, the Model-Agnostic Signal modulation classification Explainer (MASE), which explains the predictions of black-box modulation classifiers. Using a subsequence-based interpretable signal representation and in-distribution local signal sampling, MASE learns a local linear surrogate model to derive a class activation vector that assigns importance values to the timesteps of a signal instance. In addition, constellation-based explanation visualization is adopted to spotlight the signal features most relevant to the model's prediction. We further propose the first generic quantitative evaluation framework for explanations in signal modulation classification, which automatically measures the faithfulness, sensitivity, robustness, and efficiency of explanations. Extensive experiments are conducted on two real-world datasets with four black-box signal modulation classifiers. The quantitative results indicate that MASE outperforms two state-of-the-art methods, with a 44.7% improvement in faithfulness, a 30.6% improvement in robustness, and a 44.1% decrease in sensitivity. Through qualitative visualizations, we further demonstrate that MASE's explanations are more human-interpretable and provide better insight into the reliability of black-box model decisions.
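The abstract describes a LIME-style pipeline: segment the signal into subsequences, sample local perturbations, query the black-box classifier, and fit a weighted linear surrogate whose coefficients become a per-timestep class activation vector. The sketch below illustrates that general idea only; it is not the authors' implementation. The function name `mase_sketch`, the mean-value baseline used to occlude segments, and the exponential proximity kernel are all hypothetical simplifications, and the paper's in-distribution sampling and constellation visualization are not reproduced.

```python
import numpy as np

def mase_sketch(signal, predict_fn, target_class, n_segments=8,
                n_samples=200, seed=0):
    """Toy local linear surrogate for a 1-D signal classifier.

    signal:     1-D array of timesteps.
    predict_fn: maps a signal to class probabilities (the black box).
    Returns a per-timestep importance vector for target_class.
    """
    rng = np.random.default_rng(seed)
    T = signal.shape[0]
    seg_ids = np.array_split(np.arange(T), n_segments)
    baseline = signal.mean()  # crude stand-in for in-distribution sampling

    # Binary masks over segments: 1 = keep original, 0 = occlude.
    masks = rng.integers(0, 2, size=(n_samples, n_segments))
    masks[0] = 1  # include the unperturbed instance
    probs = np.empty(n_samples)
    for i, m in enumerate(masks):
        x = signal.copy()
        for seg, keep in zip(seg_ids, m):
            if not keep:
                x[seg] = baseline
        probs[i] = predict_fn(x)[target_class]

    # Weight samples by proximity to the original instance.
    w = np.exp(-(1.0 - masks.mean(axis=1)))
    X = np.hstack([masks.astype(float), np.ones((n_samples, 1))])  # + bias
    coef, *_ = np.linalg.lstsq(X * w[:, None], probs * w, rcond=None)

    # Class activation vector: broadcast segment weights to timesteps.
    cav = np.empty(T)
    for seg, c in zip(seg_ids, coef[:n_segments]):
        cav[seg] = c
    return cav
```

Because the surrogate is linear in the segment masks, a large coefficient marks a subsequence whose occlusion strongly changes the target-class probability, which is the role the class activation vector plays in the abstract.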
Pages: 1529-1543
Page count: 15