Specific Emitter Identification Model Based on Improved BYOL Self-Supervised Learning

Cited: 10
Authors
Zhao, Dongxing [1 ]
Yang, Junan [1 ]
Liu, Hui [1 ]
Huang, Keju [1 ]
Institutions
[1] National University of Defense Technology, College of Electronic Engineering, Hefei 230000, People's Republic of China
Keywords
specific emitter identification; self-supervised learning; small samples; deep learning; signal processing; representation; classification
DOI
10.3390/electronics11213485
CLC classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Specific emitter identification (SEI) extracts features from received radio signals to determine which individual emitter generated them. Although deep-learning-based methods have been applied effectively to SEI, their performance declines dramatically when labeled training samples are scarce and noise is significant. To address this issue, we propose an improved Bootstrap Your Own Latent (BYOL) self-supervised learning scheme that fully exploits unlabeled samples; it comprises a pretext task adopting the contrastive-learning concept and a downstream task. For the pretext task, we designed three data augmentation methods optimized for communication signals to serve the contrastive concept. We built two neural networks, an online network and a target network, which interact and learn from each other. The proposed scheme generalizes across both small-sample and sufficient-sample cases, with 10 to 400 labeled samples per group. The experiments also show promising accuracy and robustness, with recognition accuracy increasing by 3-8% at signal-to-noise ratios (SNRs) from 3 to 7. Our scheme can accurately identify individual emitters in a complicated electromagnetic environment.
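The abstract describes BYOL's core mechanics: two augmented views of the same signal, an online network trained against a target network, and the target tracking the online weights by exponential moving average. A minimal numpy sketch of those mechanics follows; the toy linear "encoder", the specific augmentations, and all parameter values are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize each row so the dot product is a cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Toy linear "encoders". Online and target start identical; in a full
# implementation only the online network receives gradient updates.
W_online = rng.normal(scale=0.1, size=(128, 32))
W_target = W_online.copy()

def byol_loss(view_a, view_b):
    """Negative-cosine-style BYOL loss between the online projection of
    one augmented view and the target projection of the other."""
    p = normalize(view_a @ W_online)   # online branch
    z = normalize(view_b @ W_target)   # target branch (held fixed)
    return 2.0 - 2.0 * np.mean(np.sum(p * z, axis=-1))  # in [0, 4]

def ema_update(tau=0.99):
    """Target weights track online weights by exponential moving average."""
    global W_target
    W_target = tau * W_target + (1.0 - tau) * W_online

# Two augmented "views" of the same batch of raw signal segments
# (hypothetical stand-ins for the paper's three signal augmentations).
x = rng.normal(size=(8, 128))
view_a = x + 0.05 * rng.normal(size=x.shape)   # additive noise
view_b = np.roll(x, 3, axis=1)                 # circular time shift

loss = byol_loss(view_a, view_b)
ema_update()
```

Minimizing this loss pulls the two views' representations together without any labels, which is why the pretext task can exploit the unlabeled samples before the downstream SEI classifier is trained.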
Pages: 14