Multi-modal fusion for sensing-aided beam tracking in mmWave communications

Cited: 0
Authors
Bian, Yijie [1 ]
Yang, Jie [2 ,3 ]
Dai, Lingyun [1 ]
Lin, Xi [4 ]
Cheng, Xinyao [4 ]
Que, Hang [4 ,5 ]
Liang, Le [3 ,4 ,5 ]
Jin, Shi [3 ,4 ,5 ]
Affiliations
[1] Southeast Univ, CHIEN SHIUNG WU Coll, Nanjing, Peoples R China
[2] Key Lab Measurement & Control Complex Syst Engn, Nanjing, Peoples R China
[3] Frontiers Sci Ctr Mobile Informat Commun & Secur, Nanjing, Peoples R China
[4] Sch Informat Sci & Technol, Nanjing, Peoples R China
[5] Natl Mobile Commun Res Lab, Nanjing, Peoples R China
Keywords
mmWave communications; Deep learning; Beam training and tracking; Multi-modal data; Decision-level fusion; MILLIMETER-WAVE;
DOI
10.1016/j.phycom.2024.102514
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Subject Classification Codes
0808 ; 0809 ;
Abstract
Millimeter wave (mmWave) communication has attracted extensive attention and research due to its wide bandwidth and abundant spectrum resources. Effective and fast beam tracking is a critical challenge for the practical deployment of mmWave communications. Existing studies demonstrate the potential of sensing-aided beam tracking. However, most studies focus on single-modal data assistance without considering multi-modal calibration or the impact of the inference latency of different sub-modules. Thus, in this study, we design a decision-level multi-modal (mmWave received signal power vector, RGB image, and GPS data) fusion for sensing-aided beam tracking (DMBT) method. The proposed DMBT method includes three designed mechanisms, namely the normal prediction process, beam misalignment alert, and beam tracking correction. The normal prediction process conducts partial beam training instead of exhaustive beam training, which greatly reduces beam training overhead. It also comprehensively selects prediction results from multi-modal data to enhance the robustness of the DMBT method to noise. The beam misalignment alert, based on RGB image and GPS data, detects whether beam misalignment has occurred and also predicts the optimal beam. The beam tracking correction is designed to capture the optimal beam if misalignment happens, by reusing certain blocks in the normal prediction process and possibly outdated prediction results. Finally, we evaluate the proposed DMBT method in a vehicle-to-infrastructure scenario based on a real-world dataset. The results show that the method is capable of self-correction and mitigates the negative effect of relative inference latency. Moreover, 75%-93% of beam training overhead can be saved while maintaining reliable communication, even when faced with considerable noise in the measurement data.
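The decision-level fusion described in the abstract can be illustrated with a minimal sketch: each modality (received power vector, RGB image, GPS) produces a score vector over a beam codebook, and the fused decision selects the beam proposed by the most confident modality. All function names and the confidence-based selection rule below are illustrative assumptions, not the exact DMBT design from the paper.

```python
import numpy as np

def fuse_beam_predictions(modality_scores):
    """Decision-level fusion sketch (hypothetical, not the paper's exact rule).

    modality_scores: list of 1-D arrays, one score vector per modality,
    each of length N_beams. Returns the index of the fused beam choice.
    """
    best_beam, best_conf = None, -np.inf
    for scores in modality_scores:
        # Softmax over the beam codebook gives a per-modality distribution.
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        conf = probs.max()  # peak probability as a confidence proxy
        if conf > best_conf:
            best_conf = conf
            best_beam = int(probs.argmax())
    return best_beam

# Example: three modalities scoring a 4-beam codebook.
power = np.array([0.1, 0.2, 2.5, 0.3])  # sharply peaked: confident in beam 2
image = np.array([0.5, 0.6, 0.4, 0.7])  # nearly flat: low confidence
gps   = np.array([0.2, 0.9, 0.8, 0.1])  # moderate confidence in beam 1
print(fuse_beam_predictions([power, image, gps]))  # → 2
```

Here the power-vector modality dominates because its distribution is the most peaked; a noisy modality with a flat score vector contributes little, which mirrors the robustness-to-noise motivation in the abstract.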
Pages: 15
Related papers
50 in total
  • [31] Robust Beam-Tracking for mmWave Mobile Communications
    Jayaprakasam, Suhanya
    Ma, Xiaoxue
    Choi, Jun Won
    Kim, Sunwoo
    IEEE COMMUNICATIONS LETTERS, 2017, 21 (12) : 2654 - 2657
  • [32] Diffraction Characteristics Aided Blockage and Beam Prediction for mmWave Communications
    Li, Xiaogang
    Yu, Li
    Zhang, Yuxiang
    Zhang, Jianhua
    Liu, Baoling
    Jiang, Tao
    Xia, Liang
    2022 IEEE 95TH VEHICULAR TECHNOLOGY CONFERENCE (VTC2022-SPRING), 2022
  • [33] A Multi-Modal Gaze Tracking Algorithm
    Su, Haiming
    Hou, Zhenjie
    Huan, Juan
    Yan, Ke
    Ding, Hao
    2019 INTERNATIONAL CONFERENCE ON INTERNET OF THINGS (ITHINGS) AND IEEE GREEN COMPUTING AND COMMUNICATIONS (GREENCOM) AND IEEE CYBER, PHYSICAL AND SOCIAL COMPUTING (CPSCOM) AND IEEE SMART DATA (SMARTDATA), 2019, : 655 - 660
  • [34] Vision-Aided Beam Allocation for Indoor mmWave Communications
    Sarker, Md Abdul Latif
    Orikumhi, Igbafe
    Kang, Jeongwan
    Jwa, Hye-Kyung
    Na, Jee-Hyeon
    Kim, Sunwoo
    12TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE (ICTC 2021): BEYOND THE PANDEMIC ERA WITH ICT CONVERGENCE INNOVATION, 2021, : 1403 - 1408
  • [35] Soft multi-modal data fusion
    Coppock, S
    Mazack, L
    PROCEEDINGS OF THE 12TH IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS 1 AND 2, 2003, : 636 - 641
  • [36] Multi-Modal Fusion Object Tracking Based on Fully Convolutional Siamese Network
    Qi, Ke
    Chen, Liji
    Zhou, Yicong
    Qi, Yutao
    2023 2ND ASIA CONFERENCE ON ALGORITHMS, COMPUTING AND MACHINE LEARNING, CACML 2023, 2023, : 440 - 444
  • [37] A Quantitative Validation of Multi-Modal Image Fusion and Segmentation for Object Detection and Tracking
    LaHaye, Nicholas
    Garay, Michael J.
    Bue, Brian D.
    El-Askary, Hesham
    Linstead, Erik
    REMOTE SENSING, 2021, 13 (12)
  • [38] Multi-Modal Fusion for End-to-End RGB-T Tracking
    Zhang, Lichao
    Danelljan, Martin
    Gonzalez-Garcia, Abel
    van de Weijer, Joost
    Khan, Fahad Shahbaz
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW), 2019, : 2252 - 2261
  • [39] Multi-modal fusion for video understanding
    Hoogs, A
    Mundy, J
    Cross, G
    30TH APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP, PROCEEDINGS: ANALYSIS AND UNDERSTANDING OF TIME VARYING IMAGERY, 2001, : 103 - 108
  • [40] Multi-modal data fusion: A description
    Coppock, S
    Mazlack, LJ
    KNOWLEDGE-BASED INTELLIGENT INFORMATION AND ENGINEERING SYSTEMS, PT 2, PROCEEDINGS, 2004, 3214 : 1136 - 1142