Novel Analog Implementation of a Hyperbolic Tangent Neuron in Artificial Neural Networks

Cited by: 45
Authors
Shakiba, Fatemeh Mohammadi [1 ,2 ]
Zhou, MengChu [3 ,4 ,5 ]
Affiliations
[1] Southern Illinois Univ Carbondale, Carbondale, IL USA
[2] New Jersey Inst Technol, Newark, NJ USA
[3] Nanjing Univ Sci & Technol, Nanjing, Peoples R China
[4] Beijing Inst Technol, Beijing, Peoples R China
[5] Rensselaer Polytech Inst, Troy, NY USA
Keywords
Neurons; Hardware; Multi-layer neural network; Biological neural networks; Computer architecture; Memristors; Power demand; Artificial neural network (ANN); activation function; hardware implementation; machine learning; memristive neural network (MNN); neuromorphic architecture; hyperbolic tangent; HARDWARE IMPLEMENTATION; SIGMOID FUNCTION; ACTIVATION FUNCTIONS; DESIGN; APPROXIMATION; GENERATORS
DOI
10.1109/TIE.2020.3034856
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The recent growth of enormous datasets has placed power dissipation and area usage at the heart of artificial neural network (ANN) design. Considering the significant role of activation functions in neurons and the growth of hardware-based neural networks such as memristive neural networks, this work proposes a novel design for a hyperbolic tangent activation function (Tanh) to be used in memristive neuromorphic architectures. The purpose of implementing a CMOS-based design for Tanh is to decrease power dissipation and area usage. The design also increases the overall speed of computation in ANNs while keeping accuracy within an acceptable range. The proposed design is one of the first analog designs for the hyperbolic tangent, and its performance is analyzed on two well-known datasets, the Modified National Institute of Standards and Technology (MNIST) dataset and Fashion-MNIST. The direct implementation of the proposed Tanh design is investigated via software and hardware modeling.
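To make the software-level role of the Tanh neuron concrete, the following minimal NumPy sketch contrasts an ideal tanh activation with a crude saturating stand-in of the kind a constrained hardware activation only approximates. It is an illustrative assumption, not a model of the paper's CMOS or memristive circuit; the function names, breakpoint, and input size are hypothetical.

import numpy as np

def tanh_ideal(x):
    # Ideal hyperbolic tangent: tanh(x) = (e^x - e^-x) / (e^x + e^-x)
    return np.tanh(x)

def tanh_saturating(x, knee=1.0):
    # Hypothetical three-segment stand-in: linear near zero, clipped to +/-1.
    # Real analog realizations differ; this only mimics the saturating shape.
    return np.clip(x / knee, -1.0, 1.0)

def neuron(x, w, b, act):
    # Classic artificial neuron: weighted sum followed by a nonlinear activation.
    return act(np.dot(w, x) + b)

rng = np.random.default_rng(0)
x = rng.normal(size=784)              # e.g., a flattened 28x28 MNIST-sized input
w = rng.normal(scale=0.05, size=784)  # random weights for illustration only
b = 0.1
print("ideal tanh output :", neuron(x, w, b, tanh_ideal))
print("approx tanh output:", neuron(x, w, b, tanh_saturating))

Any deviation between the two outputs propagates through subsequent layers, which is why the abstract stresses keeping accuracy in an acceptable range while reducing power, area, and computation time.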
Pages: 10856-10867
Page count: 12