CRissNet: An Efficient and Lightweight Network for CSI Feedback in Massive MIMO Systems

Cited by: 0
Authors
Wang, Binghui [1 ]
Teng, Yinglei [1 ]
Zhao, Yangliu [1 ]
Yu, Yaxin [2 ]
Lau, Vincent K. N. [3 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, Beijing Key Lab Work Safety Intelligent Monitoring, Beijing 100876, Peoples R China
[2] China Mobile Jiangsu Co Ltd, Dept Digital Intelligence Support, Nanjing 210029, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Elect & Commun Engn, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Correlation; Downlink; Training; Computational modeling; Attention mechanisms; Uplink; Wireless communication; Image reconstruction; Discrete Fourier transforms; Delays; Massive MIMO; CSI feedback; deep learning; Criss-Cross attention;
DOI
Not available
CLC Classification
TN [Electronic Technology, Communication Technology];
Subject Classification
0809;
Abstract
The attractive high spectral and energy efficiency of massive multiple-input multiple-output (MIMO) relies heavily on the availability of channel state information (CSI). In frequency-division duplexing (FDD) systems, the high dimensionality of the CSI matrices incurs significant feedback overhead, while deep learning (DL)-based data compression has shown great promise under limited communication resources. In this paper, exploiting the spatial-temporal correlation of CSI matrices, we propose a low-parameter neural network named CRissNet, which uses Criss-Cross attention to comprehensively extract the essence of CSI matrices. A new criss-cross correlation matrix (CCCM) is devised to measure information importance in the angular-delay domain and to provide additional explanatory insight into the applicability of the proposed attention scheme. Additionally, to improve performance under the weak-correlation conditions found outdoors, we design an extended Criss-Cross attention algorithm, named Criss-Cross attention+, which augments the self-attention with side-view observations of adjacent antennas. Training methods such as combining the real and imaginary parts and variable-size designs are also employed, and these are not limited to the specific feedback neural networks. Simulations show the superiority of the proposed feedback scheme over state-of-the-art methods in both indoor and outdoor scenes, especially at high compression ratios. The open-source code is available at https://github.com/CRissNet/CRissNet.git.
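To make the core idea concrete, the following is a minimal NumPy sketch of criss-cross attention as the abstract describes it: each position of a 2-D feature map attends only to the positions in its own row and column, rather than to all positions as in full self-attention. The function name, shapes, and the double counting of the query position itself are illustrative assumptions, not the paper's exact formulation (CCNet-style operators deduplicate the intersecting position and apply the operation recurrently).

```python
import numpy as np

def criss_cross_attention(Q, K, V):
    """Criss-cross attention over a 2-D feature map (illustrative sketch).

    Q, K, V: arrays of shape (H, W, d). Each query at position (i, j)
    attends only to keys on its "criss-cross path" -- row i and column j
    -- so the attention cost per position is O(H + W) instead of O(H * W).
    """
    H, W, d = Q.shape
    out = np.zeros_like(V)
    for i in range(H):
        for j in range(W):
            # Keys/values on the criss-cross path of (i, j): row i plus
            # column j. (Position (i, j) appears twice in this sketch.)
            k_path = np.concatenate([K[i, :, :], K[:, j, :]], axis=0)  # (H+W, d)
            v_path = np.concatenate([V[i, :, :], V[:, j, :]], axis=0)  # (H+W, d)
            # Scaled dot-product scores and a numerically stable softmax.
            scores = k_path @ Q[i, j] / np.sqrt(d)                     # (H+W,)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            # Weighted sum of the path values.
            out[i, j] = weights @ v_path
    return out
```

For a CSI matrix in the angular-delay domain, the rows and columns correspond to angle and delay bins, which is why restricting attention to the criss-cross path can still capture the dominant correlation structure at a fraction of the cost of full self-attention.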
Pages: 1452-1465
Page count: 14