A Lightweight Deep Compressive Model for Large-Scale Spike Compression

Cited by: 0
Authors
Wu, Tong [1 ]
Zhao, Wenfeng [1 ]
Keefer, Edward [2 ]
Yang, Zhi [1 ]
Affiliations
[1] Univ Minnesota, Biomed Engn, Minneapolis, MN 55455 USA
[2] Nerves Inc, Dallas, TX USA
Source
2018 IEEE BIOMEDICAL CIRCUITS AND SYSTEMS CONFERENCE (BIOCAS): ADVANCED SYSTEMS FOR ENHANCING HUMAN HEALTH | 2018
Keywords
neural signal processing; data compression; vector quantization; deep learning;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
In this paper, we develop a deep learning-based compression model to reduce the data rate of multichannel action potentials in neural recording experiments. The proposed compression model is built upon a deep compressive autoencoder (CAE) with discrete latent embeddings. The encoder network of the CAE is equipped with residual transformations to extract representative features from spikes, which are mapped into the latent embedding space and updated via vector quantization (VQ). The indices of the VQ codebook are further entropy coded as the compressed signals. The decoder network reconstructs spikes with high quality from the latent embeddings. Experimental results on both synthetic and in vivo datasets show that the proposed model consistently outperforms conventional methods that rely on hand-crafted features and/or signal-agnostic transformations, achieving much higher compression ratios (20-500x) with better or comparable signal reconstruction accuracy. Furthermore, we estimate the hardware cost of the CAE model and show the feasibility of its on-chip integration with neural recording circuits. The proposed model can reduce the required data transmission bandwidth in large-scale recording experiments while maintaining good signal quality, which will help in designing power-efficient and lightweight wireless neural interfaces.
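The compression pipeline in the abstract can be illustrated with a minimal NumPy sketch of the vector-quantization and entropy-coding stages. The encoder and decoder networks are omitted; `latents` stands in for encoder outputs and `codebook` for the learned VQ embedding table, and all names, sizes, and the raw-spike bit budget below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

codebook = rng.normal(size=(64, 8))    # 64 codewords, 8-dim latent embeddings
latents = rng.normal(size=(1000, 8))   # stand-in encoder outputs for 1000 spikes

# VQ step: map each latent vector to the index of its nearest codeword.
dists = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
indices = dists.argmin(axis=1)         # only these indices need to be transmitted

# The empirical entropy of the index stream lower-bounds the
# entropy-coded rate in bits per spike.
counts = np.bincount(indices, minlength=len(codebook))
probs = counts[counts > 0] / len(indices)
bits_per_spike = -(probs * np.log2(probs)).sum()

# Decoder side: look up codewords to recover the latent embeddings,
# which the decoder network would then map back to spike waveforms.
reconstructed = codebook[indices]

# Illustrative compression ratio vs. raw spikes
# (assuming, say, 48 samples x 16 bits per spike snippet).
raw_bits = 48 * 16
print(f"rate ~ {bits_per_spike:.2f} bits/spike, "
      f"CR ~ {raw_bits / bits_per_spike:.0f}x")
```

With a 64-entry codebook the index stream costs at most 6 bits per spike before entropy coding, which is how large compression ratios become possible once the per-spike latent replaces the raw waveform.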
Pages: 207-210
Page count: 4
Related Papers
50 in total
  • [1] Deep compressive autoencoder for action potential compression in large-scale neural recording
    Wu, Tong
    Zhao, Wenfeng
    Keefer, Edward
    Yang, Zhi
    JOURNAL OF NEURAL ENGINEERING, 2018, 15 (06)
  • [2] EfficientFi: Toward Large-Scale Lightweight WiFi Sensing via CSI Compression
    Yang, Jianfei
    Chen, Xinyan
    Zou, Han
    Wang, Dazhuo
    Xu, Qianwen
    Xie, Lihua
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (15) : 13086 - 13095
  • [3] Lightweight Deep Learning Based Channel Estimation for Extremely Large-Scale Massive MIMO Systems
    Gao, Shen
    Dong, Peihao
    Pan, Zhiwen
    You, Xiaohu
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2024, 73 (07) : 10750 - 10754
  • [4] Compression strategies for large-scale electrophysiology data
    Buccino, Alessio P.
    Winter, Olivier
    Bryant, David
    Feng, David
    Svoboda, Karel
    Siegle, Joshua H.
    JOURNAL OF NEURAL ENGINEERING, 2023, 20 (05)
  • [5] Large-scale Deep Learning at Baidu
    Yu, Kai
    PROCEEDINGS OF THE 22ND ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT (CIKM'13), 2013, : 2211 - 2211
  • [6] An Efficient Large-Scale Volume Data Compression Algorithm
    Xiao, Degui
    Zhao, Liping
    Yang, Lei
    Li, Zhiyong
    Li, Kenli
    ADVANCES IN NEURAL NETWORKS - ISNN 2009, PT 3, PROCEEDINGS, 2009, 5553 : 567 - 575
  • [7] Hybrid Deep Learning Ensemble Model for Improved Large-Scale Car Recognition
    Verma, Abhishek
    Liu, Yu
    2017 IEEE SMARTWORLD, UBIQUITOUS INTELLIGENCE & COMPUTING, ADVANCED & TRUSTED COMPUTED, SCALABLE COMPUTING & COMMUNICATIONS, CLOUD & BIG DATA COMPUTING, INTERNET OF PEOPLE AND SMART CITY INNOVATION (SMARTWORLD/SCALCOM/UIC/ATC/CBDCOM/IOP/SCI), 2017,
  • [8] Large-scale Pollen Recognition with Deep Learning
    de Geus, Andre R.
    Barcelos, Celia A. Z.
    Batista, Marcos A.
    da Silva, Sergio F.
    2019 27TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2019,
  • [9] Deep Learning on Large-scale Muticore Clusters
    Sakiyama, Kazumasa
    Kato, Shinpei
    Ishikawa, Yutaka
    Hori, Atsushi
    Monrroy, Abraham
    2018 30TH INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD 2018), 2018, : 314 - 321
  • [10] Efficient Use of GPU Memory for Large-Scale Deep Learning Model Training
    Choi, Hyeonseong
    Lee, Jaehwan
    APPLIED SCIENCES-BASEL, 2021, 11 (21):