Task-Oriented Multi-User Semantic Communications

Cited by: 201
Authors
Xie, Huiqiang [1 ]
Qin, Zhijin [1 ]
Tao, Xiaoming [2 ]
Letaief, Khaled B. [3 ,4 ]
Affiliations
[1] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Dept Elect Engn, Beijing 100084, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[4] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Task analysis; Transmitters; Transformers; Receivers; Image retrieval; Machine translation; Deep learning; semantic communications; multimodal fusion; multi-user communications; transformer; WIRELESS COMMUNICATIONS; INTERNET;
DOI
10.1109/JSAC.2022.3191326
CLC classification
TM [Electrical technology]; TN [Electronic technology, communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
While semantic communications have shown potential in single-modal, single-user settings, their application to multi-user scenarios remains limited. In this paper, we investigate deep learning (DL) based multi-user semantic communication systems for transmitting single-modal and multimodal data, respectively. We adopt three intelligent tasks, including image retrieval, machine translation, and visual question answering (VQA), as the transmission goals of the semantic communication systems. We propose a Transformer-based framework to unify the transmitter structure across different tasks. For the single-modal multi-user system, we propose two Transformer-based models, named DeepSC-IR and DeepSC-MT, to perform image retrieval and machine translation, respectively. DeepSC-IR is trained to optimize the distance between images in the embedding space, while DeepSC-MT is trained to minimize semantic errors by recovering the semantic meaning of sentences. For the multimodal multi-user system, we develop a Transformer-enabled model, named DeepSC-VQA, for the VQA task, which extracts text and image information at the transmitters and fuses it at the receiver. In particular, a novel layer-wise Transformer is designed to aid multimodal fusion by adding connections between each pair of encoder and decoder layers. Numerical results show that the proposed models outperform traditional communications in terms of robustness to channel conditions, computational complexity, transmission delay, and task-execution performance across various task-specific metrics.
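The layer-wise Transformer described in the abstract connects each decoder layer to the output of the corresponding encoder layer, rather than feeding only the final encoder output into every decoder layer. A minimal NumPy sketch of this connectivity pattern (the function names and the toy single-head attention are illustrative assumptions, not the paper's actual DeepSC-VQA implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # toy single-head scaled dot-product attention
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def encode(x, weights):
    # return the output of EVERY encoder layer, not just the last one;
    # tanh(x @ W) stands in for a full Transformer encoder layer
    outs, h = [], x
    for W in weights:
        h = np.tanh(h @ W)
        outs.append(h)
    return outs

def layerwise_decode(y, enc_outs, weights):
    # decoder layer i cross-attends to encoder layer i's output,
    # i.e. a connection between each pair of encoder/decoder layers
    h = y
    for enc_h, W in zip(enc_outs, weights):
        h = attention(np.tanh(h @ W), enc_h, enc_h)
    return h

rng = np.random.default_rng(0)
d = 8
img_feats = rng.normal(size=(5, d))   # e.g. image-token features
txt_feats = rng.normal(size=(3, d))   # e.g. question-token features
enc_ws = [rng.normal(size=(d, d)) for _ in range(3)]
dec_ws = [rng.normal(size=(d, d)) for _ in range(3)]

enc_outs = encode(img_feats, enc_ws)
fused = layerwise_decode(txt_feats, enc_outs, dec_ws)
```

The point of the sketch is the data flow: a standard Transformer decoder would cross-attend only to `enc_outs[-1]`, whereas the layer-wise design exposes intermediate encoder representations to the matching decoder depth.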
Pages: 2584-2597
Page count: 14
References
53 references in total
[1]   Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering [J].
Anderson, Peter ;
He, Xiaodong ;
Buehler, Chris ;
Teney, Damien ;
Johnson, Mark ;
Gould, Stephen ;
Zhang, Lei .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :6077-6086
[2]   Aggregating Deep Convolutional Features for Image Retrieval [J].
Babenko, Artem ;
Lempitsky, Victor .
2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, :1269-1277
[3]  
Bahdanau D., 2016, arXiv, DOI arXiv:1409.0473
[4]   Deep Joint Source-Channel Coding for Wireless Image Transmission [J].
Bourtsoulatze, Eirina ;
Kurka, David Burth ;
Gunduz, Deniz .
IEEE TRANSACTIONS ON COGNITIVE COMMUNICATIONS AND NETWORKING, 2019, 5 (03) :567-579
[5]  
Wah C., 2011, Tech. Rep. CNS-TR-2011-001, Caltech
[6]  
Chen W., 2022, arXiv, DOI 10.48550/arXiv.2101.11282
[7]  
Cho K., 2014, COMPUT SCI
[8]  
Farsad N, 2018, 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), P2326, DOI 10.1109/ICASSP.2018.8461983
[9]  
Goldsmith A., 2005, Wireless Communications
[10]   Deep Image Retrieval: Learning Global Representations for Image Search [J].
Gordo, Albert ;
Almazan, Jon ;
Revaud, Jerome ;
Larlus, Diane .
COMPUTER VISION - ECCV 2016, PT VI, 2016, 9910 :241-257