Adaptive In-Network Collaborative Caching for Enhanced Ensemble Deep Learning at Edge

Cited by: 2
Authors
Qin, Yana [1 ,2 ]
Wu, Danye [3 ]
Xu, Zhiwei [1 ,2 ]
Tian, Jie [4 ]
Zhang, Yujun [2 ]
Affiliations
[1] Inner Mongolia Univ Technol, Coll Data Sci & Applicat, Hohhot 100080, Peoples R China
[2] Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
[3] Samsung R&D Inst China, Beijing, Peoples R China
[4] New Jersey Inst Technol, Dept Comp Sci, 323 Dr Martin Luther King Jr Blvd, Newark, NJ 07102 USA
Funding
US National Science Foundation;
Keywords
ALLOCATION;
DOI
10.1155/2021/9285802
CLC Classification
T [Industrial Technology];
Subject Classification
08;
Abstract
To enhance the quality and speed of data processing and to protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources of edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, so that the limited resources of edge devices can sustain high-performance ensemble learning solutions. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record the data cached at different nodes. In addition, we design a collaboration scheme that helps edge nodes cache valuable data for local ensemble learning by scheduling local caching according to a summarization of the data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces both the learning latency and the transmission overhead.
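The abstract describes a per-node data representation that is summarized and exchanged so each edge node can schedule what it caches locally. The record does not specify that structure, so the following is only a minimal sketch of one plausible instantiation, assuming a Bloom-filter-style cache summary and a hypothetical should_cache_locally policy that prefers items not already covered by neighbouring nodes; it is not the paper's actual scheme.

```python
# Hypothetical sketch: a compact summary of the item IDs cached at one edge node,
# exchanged among nodes so that a node can skip caching data a neighbour likely holds.
import hashlib


class CacheSummary:
    """Bloom-filter-style bit array summarizing the items cached at one edge node."""

    def __init__(self, num_bits: int = 1024, num_hashes: int = 3) -> None:
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item_id: str):
        # Derive num_hashes bit positions from salted SHA-256 digests of the item ID.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item_id}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item_id: str) -> None:
        for pos in self._positions(item_id):
            self.bits[pos] = True

    def probably_contains(self, item_id: str) -> bool:
        # Bloom-filter semantics: no false negatives, occasional false positives.
        return all(self.bits[pos] for pos in self._positions(item_id))


def should_cache_locally(item_id: str, neighbour_summaries) -> bool:
    """Cache an item locally only if no neighbour summary already claims it.

    This mirrors the collaborative idea in the abstract (schedule local caching
    from a summarization of other nodes' representations), not the paper's
    actual scheduling policy.
    """
    return not any(s.probably_contains(item_id) for s in neighbour_summaries)


if __name__ == "__main__":
    neighbour = CacheSummary()
    neighbour.add("sensor-batch-42")

    print(should_cache_locally("sensor-batch-42", [neighbour]))  # False: likely cached nearby
    print(should_cache_locally("sensor-batch-99", [neighbour]))  # True: cache it locally
```

In such a design, exchanging the small bit arrays instead of full item lists keeps the coordination overhead low, which is consistent with the transmission-overhead reduction the abstract reports.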
Pages: 14