Deep Learning-based Carrier Frequency Offset Estimation with One-Bit ADCs

Cited by: 22
Authors
Dreifuerst, Ryan M. [1 ]
Heath, Robert W., Jr. [1 ]
Kulkarni, Mandar N. [2 ]
Zhang, Jianzhong Charlie [2 ]
Affiliations
[1] University of Texas at Austin, Department of Electrical and Computer Engineering, Austin, TX 78712 USA
[2] Samsung Research America, Standards & Mobility Innovation Lab, Mountain View, CA USA
Source
PROCEEDINGS OF THE 21ST IEEE INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (IEEE SPAWC 2020) | 2020
Keywords
Carrier frequency offset; millimeter wave; MIMO; deep learning; one-bit receivers
DOI
10.1109/spawc48557.2020.9154214
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification
0808; 0809
Abstract
Low-resolution architectures are a power-efficient solution for high-bandwidth communication at millimeter-wave and terahertz frequencies. In such systems, carrier synchronization is important yet has received little attention. In this paper, we develop and analyze deep learning architectures for estimating the carrier frequency of a complex sinusoid in noise from 1-bit samples of its in-phase and quadrature components. Carrier frequency offset estimation from a sinusoid is used in GSM and is a first step toward a more comprehensive solution for other kinds of signals. We train four deep learning architectures, each on eight datasets that represent different training considerations. Specifically, we examine how the signal-to-noise ratio (SNR), quantization, and sequence length of the training data affect estimation error. Further, we analyze each architecture in terms of scalability to MIMO receivers. In simulations, we compare execution time and mean squared error (MSE) against classical signal processing techniques. We demonstrate that training with quantized data drawn from signals with SNRs between 0 and 10 dB tends to improve estimator performance across the entire SNR range of interest. We conclude that convolutional models perform best while also requiring shorter execution time than FFT-based methods. Our approach accurately estimates carrier frequencies from 1-bit quantized data with fewer pilots and at lower SNRs than traditional signal processing methods.
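As a concrete illustration of the measurement model, the sketch below (Python/NumPy, not the authors' implementation; all function names and parameter values are illustrative) generates a complex sinusoid in AWGN, sign-quantizes its in-phase and quadrature components, and estimates the frequency offset with the classical FFT-peak baseline the paper compares against.

import numpy as np

def one_bit_quantize(x):
    # Sign-quantize the in-phase and quadrature components separately.
    return np.sign(x.real) + 1j * np.sign(x.imag)

def generate_pilot(f_offset, n_samples, snr_db, rng):
    # Complex sinusoid exp(j*2*pi*f*n) in circularly symmetric AWGN at the given SNR.
    n = np.arange(n_samples)
    tone = np.exp(2j * np.pi * f_offset * n)
    noise_power = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_power / 2.0) * (rng.standard_normal(n_samples)
                                          + 1j * rng.standard_normal(n_samples))
    return tone + noise

def fft_cfo_estimate(samples, n_fft=4096):
    # Classical baseline: the frequency offset is the location of the FFT peak.
    k = np.argmax(np.abs(np.fft.fft(samples, n_fft)))
    f = k / n_fft
    return f if f < 0.5 else f - 1.0  # map to the interval (-0.5, 0.5]

rng = np.random.default_rng(0)
f_true = 0.0173  # normalized frequency offset, cycles per sample
pilot = generate_pilot(f_true, n_samples=256, snr_db=5.0, rng=rng)
f_hat = fft_cfo_estimate(one_bit_quantize(pilot))
print(f"true offset: {f_true:.4f}, FFT estimate from 1-bit samples: {f_hat:.4f}")

Even after 1-bit quantization the tone remains detectable at moderate SNR, since the quantization error is spread across the spectrum; per the abstract, the deep learning estimators studied in the paper target the regime where this baseline degrades, i.e., short pilot sequences and low SNR.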
Pages: 5