Task-Oriented Multi-User Semantic Communications

Cited by: 199
Authors
Xie, Huiqiang [1 ]
Qin, Zhijin [1 ]
Tao, Xiaoming [2 ]
Letaief, Khaled B. [3 ,4 ]
Affiliations
[1] Queen Mary Univ London, Sch Elect Engn & Comp Sci, London E1 4NS, England
[2] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol, Dept Elect Engn, Beijing 100084, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Elect & Comp Engn, Hong Kong, Peoples R China
[4] Peng Cheng Lab, Shenzhen 518066, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Semantics; Task analysis; Transmitters; Transformers; Receivers; Image retrieval; Machine translation; Deep learning; semantic communications; multimodal fusion; multi-user communications; transformer; WIRELESS COMMUNICATIONS; INTERNET;
DOI
10.1109/JSAC.2022.3191326
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
While semantic communications have shown potential in the single-modal, single-user case, their application to multi-user scenarios remains limited. In this paper, we investigate deep learning (DL) based multi-user semantic communication systems for transmitting single-modal and multimodal data, respectively. We adopt three intelligent tasks, namely image retrieval, machine translation, and visual question answering (VQA), as the transmission goals of the semantic communication systems. We propose a Transformer based framework to unify the transmitter structure across different tasks. For the single-modal multi-user system, we propose two Transformer based models, named DeepSC-IR and DeepSC-MT, to perform image retrieval and machine translation, respectively. DeepSC-IR is trained to optimize the distance between images in the embedding space, while DeepSC-MT is trained to minimize semantic errors by recovering the semantic meaning of sentences. For the multimodal multi-user system, we develop a Transformer enabled model, named DeepSC-VQA, for the VQA task, which extracts text-image information at the transmitters and fuses it at the receiver. In particular, a novel layer-wise Transformer is designed to help fuse the multimodal data by adding connections between each pair of encoder and decoder layers. Numerical results show that the proposed models outperform traditional communications in terms of robustness to channel variations, computational complexity, transmission delay, and task-execution performance under various task-specific metrics.
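The layer-wise fusion described in the abstract can be pictured with a minimal sketch. The PyTorch example below is a hypothetical illustration (not the authors' DeepSC-VQA implementation): the i-th text decoder layer cross-attends to the output of the i-th image encoder layer, so every encoder layer is connected to its corresponding decoder layer rather than only to the final encoder output. Module names, dimensions, and layer counts are assumptions made for the sketch.

```python
# Illustrative sketch of layer-wise Transformer fusion (hypothetical, not the paper's code).
import torch
import torch.nn as nn

class LayerWiseFusion(nn.Module):
    def __init__(self, d_model=128, nhead=8, num_layers=4):
        super().__init__()
        # One image encoder layer and one text decoder layer per depth level.
        self.img_layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)])
        self.txt_layers = nn.ModuleList(
            [nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
             for _ in range(num_layers)])

    def forward(self, img_tokens, txt_tokens):
        # img_tokens: (batch, num_patches, d_model) -- recovered image semantics
        # txt_tokens: (batch, num_words,   d_model) -- recovered question semantics
        fused = txt_tokens
        for img_layer, txt_layer in zip(self.img_layers, self.txt_layers):
            img_tokens = img_layer(img_tokens)    # output of the i-th image encoder layer
            fused = txt_layer(fused, img_tokens)  # i-th decoder layer cross-attends to it
        return fused                              # would feed an answer classifier

# Usage: fuse 49 image-patch tokens with a 12-word question.
fusion = LayerWiseFusion()
out = fusion(torch.randn(2, 49, 128), torch.randn(2, 12, 128))
print(out.shape)  # torch.Size([2, 12, 128])
```

The design intent, under these assumptions, is that intermediate image features (not just the last encoder output) guide every stage of the text decoding, which is the "connection between each encoder and decoder layer" the abstract refers to.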
Pages: 2584-2597
Number of pages: 14