Deep Reinforcement Learning Based Task-Oriented Communication in Multi-Agent Systems

Cited by: 10
Authors
He, Guojun [1 ,2 ,3 ]
Feng, Mingjie [1 ,2 ,3 ]
Zhang, Yu [1 ,2 ,3 ]
Liu, Guanghua [1 ,2 ,3 ]
Dai, Yueyue [1 ,2 ,3 ]
Jiang, Tao [1 ,2 ,3 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Res Ctr 6G Mobile Commun, Wuhan, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Cyber Sci & Engn, Wuhan, Peoples R China
[3] Huazhong Univ Sci & Technol, Wuhan Natl Lab Optoelect, Wuhan, Peoples R China
Keywords
Deep learning; Reinforcement learning; Cooperative systems; Information retrieval; Data communication; Task analysis; Multi-agent systems;
DOI
10.1109/MWC.003.2200469
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Driven by the increasing demand for executing intelligent tasks in various fields, multi-agent systems (MASs) have drawn significant attention recently. An MAS relies on efficient communication between agents to exchange task-relevant information, so as to support cooperative operation. Meanwhile, traditional communication systems are bit-oriented, neglecting the content and task relevance of the transmitted data. Thus, if bit-oriented communication patterns are applied in an MAS, a significant amount of task-irrelevant data would be transmitted, leading to wasted communication resources and low operational efficiency. Considering that many emerging MASs are data-intensive and delay-sensitive, traditional modes of communication are ill-suited to them. Task-oriented communication is a promising solution to this issue, but its application in MASs still faces various challenges. In this article, we propose a task-oriented communication framework for MASs, aiming to support efficient cooperation among agents. This framework specifies the collection, transmission, and processing of task-relevant information, in which task relevance is fully exploited to enhance communication efficiency. Based on the proposed framework, we then apply deep reinforcement learning (DRL) to implement task-oriented communication, proposing both a modular design and an end-to-end design for information extraction, data transmission, and task execution. Finally, open problems for future research are discussed.
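To make the modular design named in the abstract concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of its three stages: information extraction, data transmission, and task execution. All function names, the fixed set of relevant features, and the toy decision rule are assumptions for illustration only; in the paper these stages would be learned with DRL.

```python
import random

def extract_task_relevant(observation, relevant_idx):
    """Information extraction: keep only the features the task needs."""
    return [observation[i] for i in relevant_idx]

def transmit(message, noise_std, rng):
    """Data transmission: send the compact message over a noisy channel."""
    return [x + rng.gauss(0.0, noise_std) for x in message]

def execute_task(received):
    """Task execution: a toy decision rule on the received features."""
    return 1 if sum(received) / len(received) > 0.0 else 0

rng = random.Random(42)
observation = [rng.gauss(0.0, 1.0) for _ in range(100)]  # full sensor reading
relevant_idx = range(5)          # assume only 5 features matter to the task
message = extract_task_relevant(observation, relevant_idx)
received = transmit(message, noise_std=0.01, rng=rng)
action = execute_task(received)
print(len(message), action)      # 5 values transmitted instead of 100
```

The point of the sketch is the interface between stages: only the task-relevant message crosses the channel, which is what distinguishes task-oriented from bit-oriented communication.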
Pages: 112-119 (8 pages)