A Cross-Layer Optimization Framework for Distributed Computing in IoT Networks

Cited by: 2
Authors
Shang, Bodong [1 ]
Liu, Shiya [1 ]
Lu, Sidi [2 ]
Yi, Yang [1 ]
Shi, Weisong [2 ]
Liu, Lingjia [1 ]
Affiliations
[1] Virginia Tech, Bradley Dept Elect & Comp Engn, Blacksburg, VA 24061 USA
[2] Wayne State Univ, Dept Comp Sci, Detroit, MI 48202 USA
Source
2020 IEEE/ACM SYMPOSIUM ON EDGE COMPUTING (SEC 2020) | 2020
Keywords
Distributed computing; machine learning; federated learning; neuromorphic computing; PROGRAM DEPENDENCE GRAPH;
DOI
10.1109/SEC50012.2020.00067
CLC Number
TP [Automation technology; computer technology];
Discipline Code
0812 ;
Abstract
In Internet-of-Things (IoT) networks, an enormous number of low-power IoT devices execute latency-sensitive yet computation-intensive machine learning tasks. However, energy is usually scarce for IoT devices, especially for battery-less devices that rely on solar power or other renewable energy sources. In this paper, we introduce a cross-layer optimization framework for distributed computing among low-power IoT devices. Specifically, a programming-layer design for distributed IoT networks is presented that addresses application partitioning, task scheduling, and communication-overhead mitigation. Furthermore, the associated federated learning and local differential privacy schemes are developed in the communication layer to enable distributed machine learning with privacy preservation. In addition, we illustrate a three-dimensional network architecture with various network components to facilitate efficient and reliable information exchange among IoT devices. Moreover, a model-quantization design for IoT devices is illustrated to reduce the cost of information exchange. Finally, a parallel and scalable neuromorphic computing system for IoT devices is established to achieve an energy-efficient distributed computing platform in the hardware layer. Based on the introduced cross-layer optimization framework, IoT devices can execute their machine learning tasks in an energy-efficient way while guaranteeing data privacy and reducing communication costs.
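The communication-layer ideas summarized in the abstract (federated learning with local differential privacy, plus model quantization to cut the cost of exchanging updates) can be sketched as one federated round. This is a minimal illustration, not the paper's actual algorithm: the L1 clipping bound, the Laplace-mechanism noise scale, and the uniform 8-bit quantizer are all assumptions chosen for the example.

```python
import random
import math

def clip_l1(update, clip):
    """Clip an update vector to L1 norm <= clip, bounding its sensitivity."""
    norm = sum(abs(x) for x in update)
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    return [x * scale for x in update]

def add_laplace_noise(update, clip, epsilon):
    """epsilon-LDP via the Laplace mechanism: the L1 sensitivity of a
    clipped update is 2*clip, so each coordinate gets Laplace noise
    with scale b = 2*clip/epsilon (inverse-CDF sampling)."""
    b = 2.0 * clip / epsilon
    noisy = []
    for x in update:
        u = random.random() - 0.5
        noisy.append(x - b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u)))
    return noisy

def quantize_uint8(update, lo, hi):
    """Uniform 8-bit quantization: ~4x less traffic than float32 updates."""
    step = (hi - lo) / 255.0
    return [max(0, min(255, round((x - lo) / step))) for x in update]

def dequantize_uint8(q, lo, hi):
    step = (hi - lo) / 255.0
    return [lo + qi * step for qi in q]

# One federated round over simulated per-device model updates.
random.seed(0)
devices = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(8)]
clip, eps, lo, hi = 1.0, 2.0, -3.0, 3.0

received = []
for upd in devices:
    noisy = add_laplace_noise(clip_l1(upd, clip), clip, eps)
    # Each device uploads only 8-bit codes; the server dequantizes.
    received.append(dequantize_uint8(quantize_uint8(noisy, lo, hi), lo, hi))

# Server-side federated averaging of the privatized, quantized updates.
avg = [sum(col) / len(received) for col in zip(*received)]
print(avg)
```

With many devices, the zero-mean Laplace noise averages out, so the aggregate update stays usable even though no single device's update is revealed in the clear.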
Pages: 440-444
Number of pages: 5
Related Papers
24 records in total
  • [1] A Dynamic Application-Partitioning Algorithm with Improved Offloading Mechanism for Fog Cloud Networks
    Abro, Adeel
    Deng, Zhongliang
    Memon, Kamran Ali
    Laghari, Asif Ali
    Mohammadani, Khalid Hussain
    ul Ain, Noor
    [J]. FUTURE INTERNET, 2019, 11 (07):
  • [2] Augonnet C, 2009, LECT NOTES COMPUT SC, V5415, P174
  • [3] Differential Privacy Preserving of Training Model in Wireless Big Data with Edge Computing
    Du, Miao
    Wang, Kun
    Xia, Zhuoqun
    Zhang, Yan
    [J]. IEEE TRANSACTIONS ON BIG DATA, 2020, 6 (02) : 283 - 295
  • [4] THE PROGRAM DEPENDENCE GRAPH AND ITS USE IN OPTIMIZATION
    FERRANTE, J
    OTTENSTEIN, KJ
    WARREN, JD
    [J]. ACM TRANSACTIONS ON PROGRAMMING LANGUAGES AND SYSTEMS, 1987, 9 (03): : 319 - 349
  • [5] Hinton G., 2012, NEURAL NETWORKS MACH, P31
  • [6] Hubara M., 2016, NeurIPS, arXiv:1609.07061
  • [7] Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference
    Jacob, Benoit
    Kligys, Skirmantas
    Chen, Bo
    Zhu, Menglong
    Tang, Matthew
    Howard, Andrew
    Adam, Hartwig
    Kalenichenko, Dmitry
    [J]. 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 2704 - 2713
  • [8] Jere S., 2020, arXiv:2007.08030
  • [9] Kang YP, 2017, TWENTY-SECOND INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS (ASPLOS XXII), P615, DOI 10.1145/3037697.3037698
  • [10] Krishnamoorthi R., 2018, arXiv:1806.08342