Stochastic Neural Networks with Layer-Wise Adjustable Sequence Length

Cited by: 0
Authors
Wang, Ziheng [1 ]
Reviriego, Pedro [2 ]
Niknia, Farzad [1 ]
Liu, Shanshan [3 ]
Gao, Zhen [4 ]
Lombardi, Fabrizio [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Univ Politecn Madrid, ETSI Telecomunicac, Madrid 28040, Spain
[3] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[4] Tianjin Univ, Sch Informat & Engn, Tianjin, Peoples R China
Source
2024 IEEE 24TH INTERNATIONAL CONFERENCE ON NANOTECHNOLOGY, NANO 2024 | 2024
Keywords
INTERNET; CIRCUITS
DOI
10.1109/NANO61778.2024.10628894
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics & Communications Technology]
Subject Classification Codes
0808; 0809
Abstract
The implementation of Neural Networks (NNs) on resource-limited devices poses significant challenges, and Stochastic Computing (SC) offers a solution for efficient execution. By representing values as sequences of bits that are processed serially, SC implementations significantly reduce the energy dissipation of NNs. However, as NNs grow in size, SC can also become energy-intensive, prompting a need for greater efficiency. This paper introduces Adjustable Sequence Length (ASL), a method that employs different sequence lengths across NN layers to reduce energy and latency overheads with negligible impact on performance. The feasibility of truncating sequences on a per-layer basis is assessed; simulation results show that ASL achieves savings of up to 40.41% in energy and up to 54.74% in latency compared with conventional SC implementations.
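To make the idea concrete, the Python sketch below (not the paper's implementation; the function names and per-layer lengths are hypothetical choices for illustration) encodes a value in [0, 1] as a unipolar stochastic bitstream, shows that a bitwise AND of two independent streams approximates their product, and mimics the ASL idea by using shorter streams in later layers, trading a small accuracy loss for fewer bit operations.

    import numpy as np

    def encode(value, seq_len, rng):
        # Unipolar SC encoding: each bit is 1 with probability equal to the
        # value, so the mean of the stream approximates the value.
        return (rng.random(seq_len) < value).astype(np.uint8)

    def decode(bits):
        # Recover the encoded value as the fraction of ones in the stream.
        return bits.mean()

    rng = np.random.default_rng(seed=1)

    # SC multiplication: ANDing two independent unipolar streams yields a
    # stream whose mean approximates the product a * b.
    a, b = 0.6, 0.8
    sa, sb = encode(a, 1024, rng), encode(b, 1024, rng)
    print(f"a*b = {a * b:.3f}, SC estimate = {decode(sa & sb):.3f}")

    # ASL-style layer-wise lengths (hypothetical values): later layers use
    # shorter streams, cutting bit operations at a small cost in accuracy.
    for layer, n in enumerate([1024, 512, 128]):
        est = decode(encode(0.7, n, rng))
        print(f"layer {layer}: seq_len={n:4d}, "
              f"estimate={est:.3f}, error={abs(est - 0.7):.3f}")

Since each SC operation processes one bit per cycle, shortening a layer's stream directly shortens its latency and switching activity, which is the mechanism behind the energy and latency savings that ASL targets.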
Pages: 436-441
Page count: 6