Large Language Models (LLMs) Inference Offloading and Resource Allocation in Cloud-Edge Computing: An Active Inference Approach
Cited by: 3
Authors:
He, Ying [1]; Fang, Jingcheng [1]; Yu, F. Richard [1,2]; Leung, Victor C. [3]
Affiliations:
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[2] Carleton Univ, Sch Informat Technol, Ottawa, ON K1S 5B6, Canada
[3] Univ British Columbia, Dept Elect & Comp Engn, Vancouver, BC V6T 1Z4, Canada
Funding:
National Natural Science Foundation of China;
Keywords:
Task analysis;
Computational modeling;
Cloud computing;
Resource management;
Edge computing;
Artificial neural networks;
Predictive models;
Active inference;
cloud-edge computing;
large language model;
reinforcement learning;
resource allocation;
task offloading;
DOI:
10.1109/TMC.2024.3415661
Chinese Library Classification: TP [Automation Technology, Computer Technology];
Subject Classification Code: 0812;
Abstract:
With the growing popularity of and demand for large language model (LLM) applications on mobile devices, resource-limited mobile terminals struggle to run large-model inference tasks efficiently. Traditional deep reinforcement learning (DRL) based approaches have been used to offload LLM inference tasks to servers. However, existing DRL solutions suffer from data inefficiency, insensitivity to latency requirements, and poor adaptability to task load variations, which degrades LLM serving performance. In this paper, we propose a novel approach based on active inference for LLM inference task offloading and resource allocation in cloud-edge computing. Extensive simulation results show that the proposed method outperforms mainstream DRL algorithms, improves data utilization efficiency, and adapts better to changing task load scenarios.
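The record itself gives no implementation details, but as a rough illustration of the kind of decision rule an active-inference offloader could use, the sketch below scores each offload target by its expected free energy (risk plus ambiguity) under a toy discrete generative model and picks the minimizer. Everything here is a hypothetical assumption for illustration (the two-level server-load state, the likelihood matrices, the latency preferences, and the local/edge/cloud action set), not the authors' model.

```python
import numpy as np

# Hypothetical sketch: active-inference offload-target selection via
# expected free energy minimization. All quantities are illustrative
# assumptions, not the method from the paper.

ACTIONS = ["local", "edge", "cloud"]  # candidate offload targets
# Outcome bins: observed latency, roughly {low ~50ms, mid ~150ms, high ~400ms}.

# Current belief over the hidden server-load state: [low load, high load].
belief = np.array([0.6, 0.4])

# Likelihood P(latency_bin | load_state, action), shape (actions, states, bins).
A = np.array([
    [[0.1, 0.6, 0.3], [0.1, 0.6, 0.3]],   # local: unaffected by server load
    [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]],   # edge: fast only when load is low
    [[0.3, 0.5, 0.2], [0.3, 0.5, 0.2]],   # cloud: stable but rarely fast
])

# Log-preferences over latency outcomes (the agent "prefers" low latency).
log_C = np.log(np.array([0.7, 0.25, 0.05]))

def expected_free_energy(a: int) -> float:
    """G(a) = risk + ambiguity for action a under the current belief."""
    q_o = belief @ A[a]  # predicted distribution over latency outcomes
    # Risk: KL divergence between predicted and preferred outcomes.
    risk = np.sum(q_o * (np.log(q_o + 1e-12) - log_C))
    # Ambiguity: expected entropy of the outcome likelihood over states.
    ambiguity = -np.sum(belief * np.sum(A[a] * np.log(A[a] + 1e-12), axis=1))
    return risk + ambiguity

G = np.array([expected_free_energy(a) for a in range(len(ACTIONS))])
print("Chosen target:", ACTIONS[int(np.argmin(G))], "G =", np.round(G, 3))
```

In a full active-inference loop, the observed latency would then update the belief over server load via Bayes' rule before the next offloading decision; acting to resolve that uncertainty, rather than purely to maximize reward, is what distinguishes this family of methods from standard DRL policies.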
Pages: 11253-11264
Page count: 12