Resource Allocation for Metaverse Experience Optimization: A Multi-Objective Multi-Agent Evolutionary Reinforcement Learning Approach

Cited by: 1
Authors
Feng, Lei [1 ]
Jiang, Xiaoyi [1 ]
Sun, Yao [2 ]
Niyato, Dusit [3 ]
Zhou, Yu [1 ]
Gu, Shiyi [1 ]
Yang, Zhixiang [1 ]
Yang, Yang [1 ]
Zhou, Fanqin [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun BUPT, State Key Lab Networking & Switching Technol, Beijing, Peoples R China
[2] Univ Glasgow, James Watt Sch Engn, Glasgow City, Scotland
[3] Nanyang Technol Univ, Coll Comp & Data Sci, Singapore City, Singapore
Funding
National Natural Science Foundation of China; National Research Foundation of Singapore;
Keywords
Metaverse; Quality of experience; Delays; Resource management; Optimization; Heuristic algorithms; Rendering (computer graphics); Wireless communication; Energy consumption; Costs; Metaverse experience; multi-objective optimization; resource allocation; Meta-Immersion; energy consumption; VIRTUAL-REALITY; LATENCY;
DOI
10.1109/TMC.2024.3509680
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
In the Metaverse, real-time, concurrent services such as virtual classrooms and immersive gaming require local graphic rendering to maintain low latency. However, the limited processing power and battery capacity of user devices make it challenging to balance Quality of Experience (QoE) and terminal energy consumption. In this paper, we formulate power control and rendering capacity allocation as a multi-objective optimization problem (MOP) that aims to minimize energy consumption while maximizing Meta-Immersion (MI), a metric that integrates objective network performance with subjective user perception. To solve this problem, we propose a Multi-Objective Multi-Agent Evolutionary Reinforcement Learning with User-Object-Attention (M2ERL-UOA) algorithm. The algorithm employs a prediction-driven evolutionary learning mechanism for multiple agents, coupled with optimized rendering capacity decisions for virtual objects, and can yield a superior Pareto front that attains a Nash equilibrium. Simulation results demonstrate that the proposed algorithm generates Pareto fronts, adapts effectively to dynamic user preferences, and significantly reduces decision-making time compared to several benchmarks.
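The MI-versus-energy trade-off described in the abstract yields a Pareto front of non-dominated allocations. A minimal sketch of what "non-dominated" means for these two objectives (maximize MI, minimize energy); the candidate values and the dominance test below are illustrative assumptions, not the paper's algorithm:

```python
def pareto_front(solutions):
    """Return the non-dominated subset of (mi, energy) pairs.

    Solution b dominates a if b has MI >= a's and energy <= a's,
    with the two solutions not identical (so at least one is strict
    for distinct points).
    """
    front = []
    for a in solutions:
        dominated = any(
            b != a and b[0] >= a[0] and b[1] <= a[1]
            for b in solutions
        )
        if not dominated:
            front.append(a)
    return front

# Hypothetical (MI, energy) outcomes of four allocation policies:
candidates = [(0.9, 5.0), (0.8, 3.0), (0.7, 3.5), (0.6, 1.0)]
print(pareto_front(candidates))  # (0.7, 3.5) is dominated by (0.8, 3.0)
```

An M2ERL-style solver would evolve the candidate set itself; this sketch only shows the filtering step that defines the front.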
Pages: 3473-3488 (16 pages)