Influence of Autoencoder-Based Data Augmentation on Deep Learning-Based Wireless Communication

Cited by: 8
Authors
Li, Linyu [1,2]
Zhang, Zhengming [1,2]
Yang, Luxi [1,2]
Affiliations
[1] Southeast University, School of Information Science and Engineering, National Mobile Communications Research Laboratory, Nanjing 210096, People's Republic of China
[2] Pervasive Communication Research Center, Purple Mountain Laboratories, Nanjing 211111, People's Republic of China
Funding
National Natural Science Foundation of China
Keywords
Task analysis; Training; Wireless communication; Resource management; Channel estimation; Decoding; Data models; Deep learning; autoencoder; mixup; channel estimation; power allocation
DOI
10.1109/LWC.2021.3092716
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Deep learning (DL) has gradually been applied to wireless communication and has achieved remarkable results. However, training a DL model requires a large amount of data, and an insufficient training dataset causes serious overfitting and reduces model accuracy. Data augmentation (DA) is one of the techniques commonly used to address this problem. It is widely used to improve the performance of image and text classification tasks, but its impact on DL-based wireless communication has not been fully explored. Inspired by the emerging deep autoencoder (AE) generative model, we propose an AE-based DA method to improve the generalization performance of DL-based wireless communication models. We perform experimental verification on two tasks: a channel estimation model for the wireless physical layer and a massive multiple-input multiple-output (MIMO) power allocation optimization model. Experimental results show that the AE-based DA method can improve generalization performance, but the improvement depends on the training dataset size. When the training dataset is smaller than a threshold, the AE improves model performance by generating additional relevant training data; when it is larger than this threshold, the AE reduces model performance. We also show that a straight-line intersection method can be used to roughly determine this threshold. Furthermore, we propose a mixup-based method to address the case in which the AE cannot improve model performance because the training dataset size exceeds the threshold.
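The abstract describes two augmentation strategies: generating extra samples with a trained autoencoder and, once the training set exceeds the threshold, interpolating sample pairs with mixup. The snippet below is a minimal Python/PyTorch sketch of both ideas under stated assumptions; the network sizes, the latent-space jitter scale, and the synthetic placeholder data are illustrative choices and are not taken from the letter.

```python
# Hypothetical sketch of AE-based data augmentation and mixup.
# Shapes, layer sizes, and the random "channel" data are assumptions.
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

class DenseAE(nn.Module):
    """Small fully connected autoencoder over flattened feature vectors."""
    def __init__(self, dim: int, latent: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def ae_augment(x_train: torch.Tensor, epochs: int = 200) -> torch.Tensor:
    """Train the AE on the available data, then append decoded samples
    drawn from lightly perturbed latent codes as extra training data."""
    ae = DenseAE(x_train.shape[1])
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(ae(x_train), x_train)
        loss.backward()
        opt.step()
    with torch.no_grad():
        z = ae.encoder(x_train)
        z = z + 0.05 * torch.randn_like(z)   # assumed jitter scale
        x_new = ae.decoder(z)
    return torch.cat([x_train, x_new], dim=0)

def mixup(x: torch.Tensor, y: torch.Tensor, alpha: float = 0.2):
    """Standard mixup: convex combinations of shuffled sample/label pairs."""
    lam = float(np.random.beta(alpha, alpha))
    idx = torch.randperm(x.shape[0])
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]

if __name__ == "__main__":
    # Placeholder inputs; real features would be channel estimates or
    # the power-allocation inputs studied in the letter.
    x = torch.randn(200, 32)
    y = torch.randn(200, 4)
    x_aug = ae_augment(x)
    print("training set grows from", x.shape[0], "to", x_aug.shape[0])
    xm, ym = mixup(x, y)
    print("mixup batch:", xm.shape, ym.shape)
```

Following the abstract's finding, one would apply the AE-based augmentation only while the measured dataset is below the intersection-estimated threshold and switch to mixup-style interpolation once it grows beyond it.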
Pages: 2090-2093
Number of pages: 4