Optimizing Microservice Deployment in Edge Computing with Large Language Models: Integrating Retrieval Augmented Generation and Chain of Thought Techniques

Cited by: 0
Authors
Feng, Kan [1 ]
Luo, Lijun [1 ]
Xia, Yongjun [2 ]
Luo, Bin [2 ]
He, Xingfeng [1 ]
Li, Kaihong [3 ]
Zha, Zhiyong [4 ]
Xu, Bo [1 ,5 ]
Peng, Kai [1 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Hubei Key Lab Smart Internet Technol, Wuhan 430074, Peoples R China
[2] Hubei Huazhong Elect Power Technol Dev Co Ltd, Wuhan 430079, Peoples R China
[3] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
[4] State Grid Informat Telecommun Co Ltd, Wuhan 430048, Peoples R China
[5] Hubei ChuTianYun Co Ltd, Wuhan 430076, Peoples R China
Source
SYMMETRY-BASEL | 2024, Vol. 16, Issue 11
Keywords
large language models; retrieval augmented generation; microservice deployment; mobile edge computing;
DOI
10.3390/sym16111470
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline Classification Codes
07; 0710; 09;
Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in automatically generating code from natural language instructions. We observed that, in edge-computing microservice models, the deployment latency optimization problem can be formulated as an NP-hard mathematical optimization problem. In the real world, however, deployment strategies at the edge often require immediate updates, while human-engineered code tends to lag behind. To bridge this gap, we integrated LLMs into the decision-making process for microservice deployment. We first constructed a private Retrieval Augmented Generation (RAG) database containing prior knowledge. We then employed carefully designed step-by-step inductive instructions and the chain-of-thought (CoT) technique to enable the LLM to learn, reason, reflect, and regenerate. We decomposed the microservice deployment latency optimization problem into a collection of granular sub-problems (described in natural language) and progressively instructed the fine-tuned LLM to generate the corresponding code blocks, which were then integrated and assessed for consistency. For comparison, we also prompted the LLM to generate code without the RAG database. We executed the generated code and the comparison algorithms under identical operational environments and simulation parameters and analyzed the results rigorously. Compared with traditional algorithms, our fine-tuned model reduced latency by 22.8% when handling surges in request flows, by 37.8% when managing complex microservice types, and by 39.5% when processing increased numbers of network nodes. Our approach also showed marked latency improvements over both LLMs that do not use RAG and reinforcement learning algorithms reported in other literature. The use of LLMs further highlights the concept of symmetry: the symmetrical structure of input-output relationships in microservice deployment models aligns with the LLM's inherent ability to process and generate balanced, optimized code. Symmetry in this context allows more efficient resource allocation and reduces redundant operations, further enhancing the model's effectiveness. We believe that LLMs hold substantial potential for optimizing microservice deployment models.
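The record stops at the abstract, so the paper's formal model is not shown here. Purely to make the NP-hardness claim concrete, the following is a minimal sketch of a generic latency-minimization placement formulation of the kind the abstract alludes to; every symbol (S, N, E, x_{s,n}, d_{s,n}, c_{u,v}, r_s, R_n) is an assumption, not the authors' notation.

```latex
% Hypothetical sketch; the paper's actual formulation is not given in this
% record. Assumed symbols: S = microservices, N = edge nodes, E = pairs of
% communicating services, x_{s,n} = 1 iff service s is placed on node n,
% d_{s,n} = processing delay, c_{u,v} = link delay, r_s = resource demand,
% R_n = node capacity.
\begin{aligned}
\min_{x}\quad & \sum_{s\in S}\sum_{n\in N} x_{s,n}\,d_{s,n}
  \;+\; \sum_{(s,t)\in E}\,\sum_{u,v\in N} x_{s,u}\,x_{t,v}\,c_{u,v} \\
\text{s.t.}\quad & \sum_{n\in N} x_{s,n} = 1 \quad \forall s\in S,
  \qquad \sum_{s\in S} r_s\,x_{s,n} \le R_n \quad \forall n\in N,
  \qquad x_{s,n}\in\{0,1\}.
\end{aligned}
```

The quadratic cross term x_{s,u} x_{t,v} c_{u,v} couples placement decisions across communicating services, giving the problem a quadratic-assignment flavor; this is why exact solutions do not scale and heuristic or learned approaches such as the one the abstract describes are attractive.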
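The authors' prompting pipeline and generated code are likewise not reproduced in this record. The following Python sketch illustrates the workflow the abstract describes: retrieve prior knowledge from a private RAG store, walk a fine-tuned LLM through natural-language sub-problems with step-by-step CoT instructions, and collect the generated code blocks for later integration and consistency checking. The sub-problem list, the `retrieve_fn`/`llm_fn` interfaces, and all names are hypothetical stand-ins, not the authors' implementation.

```python
"""Hypothetical sketch of the RAG + chain-of-thought (CoT) code-generation
loop described in the abstract. Nothing here reproduces the authors' code;
`retrieve_fn` and `llm_fn` stand in for a private RAG retriever and a
fine-tuned LLM endpoint, both assumed."""

from typing import Callable, List

# The abstract decomposes latency optimization into granular sub-problems,
# each described in natural language; these three are illustrative only.
SUB_PROBLEMS: List[str] = [
    "Model the edge network: nodes, link delays, and node capacities.",
    "Encode the microservice placement variables and latency objective.",
    "Implement a heuristic that searches placements to minimize latency.",
]

COT_TEMPLATE = (
    "You are optimizing microservice deployment latency at the edge.\n"
    "Prior knowledge retrieved from the knowledge base:\n{context}\n\n"
    "Code generated for earlier sub-problems:\n{previous}\n\n"
    "Sub-problem: {task}\n"
    "Think step by step: restate the sub-problem, reason about the "
    "approach, reflect on pitfalls, then output one Python code block."
)


def generate_deployment_code(
    retrieve_fn: Callable[[str], str],  # query -> concatenated RAG passages
    llm_fn: Callable[[str], str],       # prompt -> model completion
) -> List[str]:
    """Iterate over sub-problems, prompting the LLM with RAG context and
    previously generated blocks so each step builds on the last."""
    blocks: List[str] = []
    for task in SUB_PROBLEMS:
        context = retrieve_fn(task)                  # RAG lookup
        prompt = COT_TEMPLATE.format(
            context=context,
            previous="\n\n".join(blocks) or "(none yet)",
            task=task,
        )
        blocks.append(llm_fn(prompt))                # CoT generation step
    return blocks                                    # integrate + check next


if __name__ == "__main__":
    # Stub dependencies so the sketch runs without a real RAG store or LLM.
    demo = generate_deployment_code(
        retrieve_fn=lambda q: f"[passages relevant to: {q}]",
        llm_fn=lambda p: "# <generated code block>",
    )
    print(f"collected {len(demo)} code blocks for integration")
```

Feeding each step's output back into the next prompt mirrors the abstract's "learn, reason, reflect, and regenerate" loop; in the paper's comparison condition, `retrieve_fn` would simply be dropped so the LLM generates code without RAG context.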
Pages: 22