A probability-inspired normalization for fixed-precision Hyper-Dimensional Computing

Cited by: 1
Authors
Datta, Sohum [1]
Rabaey, Jan M. [1]
Affiliations
[1] Univ Calif Berkeley, Elect Engn & Comp Sci, Berkeley, CA 94720 USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA | 2022
Keywords
Hyper-dimensional computing; energy efficiency; Internet-of-Things; probability; machine learning
DOI
10.1109/AICAS54282.2022.9869986
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Hyper-Dimensional Computing (HDC), a promising nano-scalable paradigm for low-energy predictions and lightweight learned models, has seen a surge of interest from the hardware accelerator community. However, the classical single-bit-per-vector-element approach for HDC seldom achieves higher classification accuracy than multi-bit alternatives, and is inadequate to support the rapidly growing application space. A great challenge for multi-bit HDC hardware is to negotiate the enormous increase in logic vis-a-vis the single-bit hardware. Key to minimizing this cost is to limit bits per vector element, which is potentially unbounded without transformation, and can be very large for some applications. This work proposes a hardware-friendly numerical transformation on an HDC vector where the result has fixed bits per element. Under a reasonable assumption on the vector's distribution, it is proven that the transformation guarantees at most a small, known error in associative search. Verification experiments indicate the theoretical guarantee is very pessimistic; the actual error is less than 18% of the theoretical upper bound. Estimates predict 3.8X hardware savings with a 0.04% accuracy drop. We believe emerging stochastic approaches like HDC offer exciting new opportunities for employing high-dimensional probability theory for accelerator design.
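The abstract's core idea, bounding the bits per element of a bundled hypervector while nearly preserving associative-search results, can be illustrated with a toy sketch. The paper's actual transformation is not given in the abstract, so the saturating clip below (`clip_to_bits`), along with all names, dimensions, and parameters, is a hypothetical stand-in, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000        # hypervector dimensionality
n_classes = 10
n_bundled = 50    # item vectors bundled into each class prototype

# Bipolar (+1/-1) random hypervectors; bundling by elementwise sum makes
# prototype entries grow like sqrt(n_bundled), so the bits needed per
# element are unbounded without some fixed-precision transformation.
items = rng.choice([-1, 1], size=(n_classes, n_bundled, D))
prototypes = items.sum(axis=1)    # integer entries, roughly N(0, n_bundled)

def clip_to_bits(v, b):
    """Hypothetical fixed-precision map: saturate entries to b signed bits."""
    lim = 2 ** (b - 1) - 1
    return np.clip(v, -lim, lim)

prototypes_clipped = clip_to_bits(prototypes, b=4)   # entries now in [-7, 7]

# Associative search: a noisy copy of one stored item should still
# retrieve its class prototype after the fixed-precision transformation.
query = items[3, 0].copy()
flip = rng.random(D) < 0.2        # flip 20% of components
query[flip] *= -1

scores_full = prototypes @ query
scores_clipped = prototypes_clipped @ query
print(np.argmax(scores_full), np.argmax(scores_clipped))  # both retrieve class 3
```

Because the query's overlap with the correct prototype (about 0.6·D here) dwarfs the random cross-talk from other classes, saturation to 4 bits leaves the argmax of the associative search unchanged in this toy setting; the paper's contribution is a transformation with a provable bound on such search error.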
Pages: 21-24
Page count: 4