Self-supervised and semi-supervised learning for few-shot specific emitter identification using CNN-transformer with virtual adversarial training

Times Cited: 0
Authors
Sun, Minhong [1 ]
Wei, Liang [1 ]
Yu, Chunlai [2 ]
Qiu, Zhaoyang [1 ]
Teng, Jiazhong [1 ]
Affiliations
[1] Hangzhou Dianzi Univ, Hangzhou 310018, Zhejiang, Peoples R China
[2] Air Force Early Warning Acad, Wuhan 430019, Hubei, Peoples R China
Keywords
Specific emitter identification; Self-supervised learning; Semi-supervised learning; Contrast learning; CNN-Transformer; NEURAL-NETWORKS;
DOI
10.1007/s10489-025-06645-5
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Specific emitter identification (SEI) is the process of extracting features from received signals to identify individual emitters, playing a crucial role in enhancing the security of wireless systems. Conventional deep learning-based SEI approaches rely heavily on large-scale datasets, and their performance degrades significantly under few-shot conditions. Existing few-shot SEI methods also face challenges, such as insufficient feature representation learning. In this paper, we propose a novel CNN-Transformer-based framework, FCR-CT (Feature Contrastive Reconstruction with CNN-Transformer), combined with virtual adversarial training (VAT) to improve SEI performance under few-shot conditions. During the pretraining phase, self-supervised learning is employed to optimize the encoder parameters, using a cascade of CNN and Transformer to construct an encoder-decoder structure that reconstructs unlabeled signals. By introducing a feature contrastive loss, the model enhances intra-class compactness and inter-class separability in the feature space, improving its representation learning capabilities. In the semi-supervised phase, the decoder is replaced with a classifier, and VAT is applied to refine the feature boundaries, further boosting classification accuracy in few-shot scenarios. Experimental results on the open-source ADS-B dataset demonstrate that the proposed FCR-CT(VAT) method achieves a 90.52% average recognition rate across 10 categories, a 1.92% improvement over the model without VAT. For 30 categories with 20 samples each, the recognition rate reaches 68.65%, surpassing existing methods such as CVCNN, CNN-MAT, and SA-CNN by more than 5%. These results confirm the effectiveness and robustness of our approach in addressing the challenges of few-shot SEI in practical applications. The code is publicly available at: https://github.com/egglion/FCR-CT-_VAT.
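The contrastive pretraining objective described in the abstract can be illustrated with a minimal NT-Xent-style loss over two augmented views of the same signal batch. This is a generic NumPy sketch, not the paper's exact feature contrastive loss: the function name, the temperature value, and the use of augmentation-based positive pairs are illustrative assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent-style contrastive loss for two views of the same batch.

    z1, z2: (N, D) embeddings of two augmented views of N signals;
    row i of z1 and row i of z2 form a positive pair, all other rows
    in the combined 2N-sample batch act as negatives.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit vectors -> cosine similarity
    sim = z @ z.T / temperature                         # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = z1.shape[0]
    # positive index for row i is its counterpart in the other view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy: -log softmax(sim)[i, pos[i]], averaged over the batch
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Minimizing this loss pulls each signal's two views together (intra-class compactness) while pushing them away from all other samples in the batch (inter-class separability); in FCR-CT, z1 and z2 would be the CNN-Transformer encoder's features for augmented views of the same unlabeled signal.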
Pages: 18
References
(27 total)
[1]   Transfer learning-assisted multi-resolution breast cancer histopathological images classification [J].
Ahmad, Nouman ;
Asghar, Sohail ;
Gillani, Saira Andleeb .
VISUAL COMPUTER, 2022, 38 (08) :2751-2770
[2]   Deep Learning for Large-Scale Real-World ACARS and ADS-B Radio Signal Classification [J].
Chen, Shichuan ;
Zheng, Shilian ;
Yang, Lifeng ;
Yang, Xiaoniu .
IEEE ACCESS, 2019, 7 :89256-89264
[3]  
Danev B, 2009, 2009 INTERNATIONAL CONFERENCE ON INFORMATION PROCESSING IN SENSOR NETWORKS (IPSN 2009), P25
[4]   Specific Emitter Identification via Convolutional Neural Networks [J].
Ding, Lida ;
Wang, Shilian ;
Wang, Fanggang ;
Zhang, Wei .
IEEE COMMUNICATIONS LETTERS, 2018, 22 (12) :2591-2594
[5]   Semi-Supervised Specific Emitter Identification Method Using Metric-Adversarial Training [J].
Fu, Xue ;
Peng, Yang ;
Liu, Yuchao ;
Lin, Yun ;
Gui, Guan ;
Gacanin, Haris ;
Adachi, Fumiyuki .
IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (12) :10778-10789
[6]   Deep-Learning for Radar: A Survey [J].
Geng, Zhe ;
Yan, He ;
Zhang, Jindong ;
Zhu, Daiyin .
IEEE ACCESS, 2021, 9 :141800-141818
[7]   A Two-Stage Model Based on a Complex-Valued Separate Residual Network for Cross-Domain IIoT Devices Identification [J].
Han, Guangjie ;
Xu, Zhengwei ;
Zhu, Hongbo ;
Ge, Yunlu ;
Peng, Jinlin .
IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (02) :2589-2599
[8]  
Hippenstiel RD, 1996, ISSPA 96 - FOURTH INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND ITS APPLICATIONS, PROCEEDINGS, VOLS 1 AND 2, P740
[9]  
Huang K., 2022, IEEE Wirel. Commun. Lett, V21, DOI 10.1109/LWC.2022.3184674
[10]   An end-to-end deep convolutional neural network-based data-driven fusion framework for identification of human induced pluripotent stem cell-derived endothelial cells in photomicrographs [J].
Iqbal, Imran ;
Ullah, Imran ;
Peng, Tingying ;
Wang, Weiwei ;
Ma, Nan .
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2025, 139