Class Information-Guided Reconstruction for Automatic Modulation Open-Set Recognition

Cited by: 0
Authors
Zhang, Ziwei [1 ]
Zhu, Mengtao [2 ]
Liu, Jiabin [2 ]
Li, Yunjie [2 ]
Wang, Shafei [1 ,3 ]
Affiliations
[1] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing 100081, Peoples R China
[2] Beijing Inst Technol, Sch Informat & Elect, Beijing 100081, Peoples R China
[3] Lab Electromagnet Space Cognit & Intelligent Contr, Beijing 100191, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Modulation; Image reconstruction; Training; Time-frequency analysis; Vectors; Electromagnetics; Semantics; Automatic modulation recognition; open-set recognition; reconstruction model; mutual information;
DOI
Not available
Chinese Library Classification
TN [Electronic technology; communication technology];
Discipline Code
0809;
Abstract
Automatic Modulation Recognition (AMR) is vital for radar and communication systems. Traditional AMR operates under closed-set scenarios where all modulation types are pre-defined. In practical settings, however, unknown modulation types may emerge as technology advances. Closed-set training risks misclassifying unknown modulations into existing known classes, with serious consequences for situation awareness and threat assessment. To tackle this challenge, this paper presents a Class Information guided Reconstruction (CIR) framework that simultaneously achieves Known Class Classification (KCC) and Unknown Class Identification (UCI). The CIR leverages reconstruction losses to differentiate between known and unknown classes, utilizing Class Conditional Vectors (CCVs) and a Mutual Information (MI) loss function to fully exploit class information. The CCVs offer class-specific guidance for the reconstruction process, ensuring accurate reconstruction of known samples while producing subpar results for unknown ones. Moreover, to enhance distinguishability, an MI loss function is introduced to capture class-discriminative semantics in the latent space, enabling closer alignment with the CCVs during reconstruction. The synergy between CCVs and MI yields strong UCI performance without compromising KCC accuracy. The CIR is evaluated on simulated, public, and real-world datasets, demonstrating its effectiveness and robustness, particularly under low SNR and a high prevalence of unknown classes.
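The abstract's core decision rule can be illustrated with a minimal sketch: reconstruct the input once per known class under class-specific conditioning, then use the minimum reconstruction error both to classify known samples (KCC) and to reject unknowns (UCI). Note this is an assumption-laden toy, not the paper's architecture: the fixed class prototypes and the threshold `tau` below merely stand in for the CCV-guided reconstruction network and its learned rejection score.

```python
import numpy as np

def cir_decision(x, prototypes, tau):
    """Return the predicted known-class index, or -1 for 'unknown'.

    Each entry of `prototypes` stands in for one class-conditional
    reconstructor (in the paper, a decoder guided by a CCV): known
    samples reconstruct well under their own class's conditioning,
    while unknown samples reconstruct poorly under every known class.
    """
    errs = np.array([np.mean((x - p) ** 2) for p in prototypes])
    k = int(np.argmin(errs))
    return k if errs[k] <= tau else -1

# Toy demo with two known classes in a 4-dimensional feature space.
prototypes = [np.zeros(4), np.ones(4)]
print(cir_decision(0.05 * np.ones(4), prototypes, tau=0.1))  # near class 0 -> 0
print(cir_decision(3.0 * np.ones(4), prototypes, tau=0.1))   # far from all -> -1
```

The same threshold-on-minimum-error structure underlies most reconstruction-based open-set recognizers; CIR's contribution lies in shaping that error gap via CCVs and the MI loss rather than in the decision rule itself.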
Pages: 1103-1118
Page count: 16