Energy-Efficient DNN Training Processors on Micro-AI Systems

Cited by: 4
Authors
Han, Donghyeon [1 ]
Kang, Sanghoon [1 ]
Kim, Sangyeob [1 ]
Lee, Juhyoung [1 ]
Yoo, Hoi-Jun [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Elect Engn, Daejeon 34141, South Korea
Source
IEEE OPEN JOURNAL OF THE SOLID-STATE CIRCUITS SOCIETY, 2022, Vol. 2
Keywords
Training; Program processors; Artificial intelligence; System-on-chip; Solid state circuits; Performance evaluation; Energy efficiency; Backpropagation (BP); backward unlocking (BU); bit-precision optimization; deep neural network (DNN) training; reading transposed weight; sparsity exploitation; FACE RECOGNITION PROCESSOR; LEARNING PROCESSOR; LOW-POWER; ACCELERATOR; TUTORIAL;
DOI
10.1109/OJSSCS.2022.3219034
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Many edge/mobile devices can now run deep neural networks (DNNs) thanks to the development of mobile DNN accelerators. These accelerators overcame the limits of scarce computing resources and battery capacity by realizing energy-efficient inference. However, their inference-only, passive behavior makes it difficult for a DNN to actively adapt to individual users or to its service environment. On-chip training is therefore becoming increasingly important as a way to keep DNN processors interacting with ever-changing surroundings and conditions. Despite these advantages, DNN training imposes far more constraints than inference, so it was long considered impractical on mobile/edge devices. Recently, many attempts to realize mobile DNN training have appeared, and this article summarizes a number of these prior works. First, it lays out the new challenges that training functionality introduces into a DNN accelerator and discusses the hardware features that address them. Second, it explains algorithm-hardware co-optimization methods and why they have become mainstream in mobile DNN training research. Third, it compares the main differences between conventional inference accelerators and recent training processors. Finally, it concludes by proposing future directions for DNN training processors in micro-AI systems.
Pages: 259-275
Page count: 17
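The keywords and abstract above name backpropagation (BP), reading transposed weight, and sparsity exploitation as the main pressures that separate a training processor from an inference accelerator, but do not show where they arise. The minimal NumPy sketch below is our own illustration, not code from the surveyed processors; the layer sizes, variable names, ReLU activation, and plain SGD update are all assumptions made for clarity. It traces one fully connected layer through the forward pass, error propagation, and weight-gradient generation, and marks where the transposed reads and the ReLU-induced zeros appear.

# Minimal illustrative sketch (assumptions noted above): one fully connected
# layer trained with plain SGD, showing why training hardware needs transposed
# weight/activation reads and where exploitable sparsity comes from.
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal((4, 8))          # batch of 4 input activations (assumed sizes)
W = rng.standard_normal((8, 16)) * 0.1   # weight matrix laid out for the forward pass
b = np.zeros(16)

# Forward pass: the only dataflow an inference accelerator must support.
z = x @ W + b
a = np.maximum(z, 0.0)                   # ReLU -> many exact zeros in the activations

# Backward pass: the extra dataflows a training processor must add.
grad_a = rng.standard_normal(a.shape)    # stand-in for dL/da arriving from the next layer
grad_z = grad_a * (z > 0)                # ReLU mask -> a sparse error map
grad_x = grad_z @ W.T                    # error propagation reads the *transposed* weight
grad_W = x.T @ grad_z                    # weight-gradient generation reads transposed activations
grad_b = grad_z.sum(axis=0)

# Sparsity exploitation: zeros in `a` and `grad_z` make many of the
# multiply-accumulates above guaranteed-zero, so hardware can skip them.
print("activation sparsity:", float((a == 0).mean()))
print("error-map sparsity: ", float((grad_z == 0).mean()))

# Weight update ties the three passes together on-chip (assumed learning rate).
lr = 1e-2
W -= lr * grad_W
b -= lr * grad_b

The grad_z @ W.T step illustrates the "reading transposed weight" issue: a training accelerator needs either a transposing read path or a weight memory layout that serves both W and its transpose efficiently, and the zero fractions printed at the end are the kind of activation/error sparsity that training processors exploit to skip computation.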
Related papers
(50 in total)
  • [1] Enabling Energy-Efficient DNN Training on Hybrid GPU-FPGA Accelerators. He, Xin; Liu, Jiawen; Xie, Zhen; Chen, Hao; Chen, Guoyang; Zhang, Weifeng; Li, Dong. PROCEEDINGS OF THE 2021 ACM INTERNATIONAL CONFERENCE ON SUPERCOMPUTING, ICS 2021, 2021: 227-241.
  • [2] Leveraging AI for energy-efficient manufacturing systems: Review and future prospectives. Abadi, Mohammad Mehdi Keramati Feyz; Liu, Chao; Zhang, Ming; Hu, Youxi; Xu, Yuchun. JOURNAL OF MANUFACTURING SYSTEMS, 2025, 78: 153-177.
  • [3] Energy-Efficient Federated Training on Mobile Device. Zhang, Qiyang; Zhu, Zuo; Zhou, Ao; Sun, Qibo; Dustdar, Schahram; Wang, Shangguang. IEEE NETWORK, 2024, 38(1): 180-186.
  • [4] Design of Processing-in-Memory With Triple Computational Path and Sparsity Handling for Energy-Efficient DNN Training. Han, Wontak; Heo, Jaehoon; Kim, Junsoo; Lim, Sukbin; Kim, Joo-Young. IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2022, 12(2): 354-366.
  • [5] The Hardware and Algorithm Co-Design for Energy-Efficient DNN Processor on Edge/Mobile Devices. Lee, Jinsu; Kang, Sanghoon; Lee, Jinmook; Shin, Dongjoo; Han, Donghyeon; Yoo, Hoi-Jun. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2020, 67(10): 3458-3470.
  • [6] AI-Based MPC Controller for Energy-Efficient HVAC Systems. Alosta, Mahmud; Abobakr, Saad; El Kaouachi, Amine; Sboui, Lokman. 2024 IEEE CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING, CCECE 2024, 2024: 113-114.
  • [7] ELEMENT: Energy-Efficient Multi-NoP Architecture for IMC-Based 2.5-D Accelerator for DNN Training. Neethu, K.; Shahana, K. C. Sharin; James, Rekha K.; Jose, John; Mandal, Sumit K. IEEE DESIGN & TEST, 2023, 40(6): 51-63.
  • [8] Posit Process Element for Using in Energy-Efficient DNN Accelerators. Zolfagharinejad, Mohamadreza; Kamal, Mehdi; Afzali-Khusha, Ali; Pedram, Massoud. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2022, 30(6): 844-848.
  • [9] EFFECT-DNN: Energy-efficient Edge Framework for Real-time DNN Inference. Zhang, Xiaojie; Mounesan, Motahare; Debroy, Saptarshi. 2023 IEEE 24TH INTERNATIONAL SYMPOSIUM ON A WORLD OF WIRELESS, MOBILE AND MULTIMEDIA NETWORKS, WOWMOM, 2023: 10-20.
  • [10] MCAIMem: A Mixed SRAM and eDRAM Cell for Area and Energy-Efficient On-Chip AI Memory. Nguyen, Duy-Thanh; Bhattacharjee, Abhiroop; Moitra, Abhishek; Panda, Priyadarshini. IEEE TRANSACTIONS ON VERY LARGE SCALE INTEGRATION (VLSI) SYSTEMS, 2024, 32(11): 2023-2036.