Carrier-Free UWB Sensor Small-Sample Terrain Recognition Based on Improved ACGAN With Self-Attention

Cited by: 6
Authors
Li, Xiaoxiong [1]
Xiao, Zelong [1]
Zhu, Yuying [1]
Zhang, Shuning [1]
Chen, Si [1]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Sensors; Training; Time-frequency analysis; Noise reduction; Sensor phenomena and characterization; Generative adversarial networks; Convolution; Carrier-free UWB sensor; terrain recognition; ACGAN; self-attention; SPWVD; DCNN; EMPIRICAL MODE DECOMPOSITION;
DOI
10.1109/JSEN.2022.3157894
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology & Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
The carrier-free UWB sensor offers high range resolution and strong interference immunity. It is largely unaffected by weather and lighting conditions, and its received echoes carry detailed structural information about the target. This paper proposes a small-sample terrain recognition framework based on the carrier-free UWB sensor, in which time-frequency feature maps of the terrain echo signals are used for classification. Because insufficient samples make the classifier prone to overfitting, we propose an Improved Auxiliary Classifier Generative Adversarial Network (IACGAN) for data augmentation. First, a self-attention mechanism and multi-scale convolution are added to the ACGAN network structure to improve the extraction of features from the time-frequency images of the echo signals. Second, the discriminator's real/fake criterion is changed from the Jensen-Shannon divergence to the Wasserstein distance with a gradient penalty, which improves training stability. Third, the discriminator no longer performs label classification on generated samples, which further improves the quality of the generated images. With the Inception Score (IS) and Fréchet Inception Distance (FID) as quality criteria, experiments show that IACGAN improves the quality of the generated images, and k-fold cross-validation shows that IACGAN-based data augmentation raises the recognition rate of the CNN classifier. The experiments also show that using the discriminator of the trained IACGAN directly as the classifier achieves more than 97% accuracy; this requires no additional classifier training on the expanded training set and is therefore an efficient, low-cost alternative.
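The first change the abstract describes is adding a self-attention mechanism to the ACGAN generator and discriminator. The following is a minimal PyTorch sketch of a SAGAN-style 2-D self-attention block of the kind typically used for this purpose; the class name, the channel-reduction factor of 8, and the zero-initialized residual weight are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    """Self-attention over the spatial positions of a (B, C, H, W) feature map."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolutions project the feature map into query/key/value spaces.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attention branch

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW) attention weights
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                     # residual connection

Such a block is usually inserted after a mid-level convolutional layer so the attention map spans a manageable number of spatial positions; the multi-scale convolution mentioned in the abstract (parallel kernels of different sizes whose outputs are concatenated) would be a separate module and is not shown here.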
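The second and third changes, replacing the Jensen-Shannon criterion with the Wasserstein distance plus a gradient penalty and dropping the label loss on generated samples, can be summarized in a discriminator objective such as the sketch below. It assumes a discriminator D that returns a (critic score, class logits) pair and a conventional penalty weight of 10; both are assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def gradient_penalty(D, real, fake, device):
    # WGAN-GP: penalize the critic's gradient norm on random interpolates.
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    critic_out, _ = D(interp)  # assumed interface: (critic score, class logits)
    grads = torch.autograd.grad(outputs=critic_out.sum(), inputs=interp,
                                create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1.0) ** 2).mean()

def discriminator_loss(D, real, labels, fake, lambda_gp=10.0, lambda_cls=1.0):
    real_score, real_logits = D(real)
    fake_score, _ = D(fake.detach())
    wasserstein = fake_score.mean() - real_score.mean()          # critic (real/fake) term
    gp = gradient_penalty(D, real, fake.detach(), real.device)   # training-stability term
    cls = F.cross_entropy(real_logits, labels)                   # label loss on real samples only
    return wasserstein + lambda_gp * gp + lambda_cls * cls

Computing the auxiliary cross-entropy only on real images keeps the classification head from being trained on imperfect generated samples, which is consistent with the mechanism the abstract credits for the further improvement in generated-image quality.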
Pages: 8050-8058
Page count: 9