Task-Projected Hyperdimensional Computing for Multi-task Learning

Cited: 1
Authors:
Chang, Cheng-Yang [1]
Chuang, Yu-Chuan [1]
Wu, An-Yeu [1]
Affiliations:
[1] Natl Taiwan Univ, Grad Inst Elect Engn, Taipei, Taiwan
Source:
ARTIFICIAL INTELLIGENCE APPLICATIONS AND INNOVATIONS, AIAI 2020, PT I | 2020, Vol. 583
Keywords:
Hyperdimensional Computing; Multi-task learning; Redundant dimensionality;
DOI:
10.1007/978-3-030-49161-1_21
CLC Classification:
TP18 [Artificial Intelligence Theory];
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
Brain-inspired Hyperdimensional (HD) computing is an emerging technique for cognitive tasks in the field of low-power design. As an energy-efficient and fast-learning computational paradigm, HD computing has shown great success in many real-world applications. However, an HD model incrementally trained on multiple tasks suffers from catastrophic forgetting: the model forgets the knowledge learned from previous tasks and focuses only on the current one. To the best of our knowledge, no study has investigated the feasibility of applying multi-task learning to HD computing. In this paper, we propose Task-Projected Hyperdimensional Computing (TP-HDC), which enables an HD model to support multiple tasks simultaneously by exploiting the redundant dimensionality of the hyperspace. To mitigate interference between different tasks, we project each task into a separate subspace for learning. Compared with the baseline method, our approach efficiently utilizes the unused capacity of the hyperspace and achieves a 12.8% improvement in average accuracy with negligible memory overhead.
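The paper's exact projection scheme is in the full text; as a generic illustration of the underlying idea (mapping each task into its own near-orthogonal subspace of the hyperspace, here via the standard HD technique of binding class prototypes with a random task-key hypervector), a minimal sketch is given below. All names (`task_keys`, `classify`, the two-task/two-class setup) are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality

def random_hv():
    # Random bipolar hypervector in {-1, +1}^D.
    return rng.choice([-1, 1], size=D)

def similarity(a, b):
    # Normalized dot product (cosine similarity for bipolar vectors).
    return float(a @ b) / D

# Hypothetical setup: two tasks, each with two classes; each class is
# represented by a prototype hypervector.
task_keys = {"task_A": random_hv(), "task_B": random_hv()}
prototypes = {t: {c: random_hv() for c in (0, 1)} for t in task_keys}

# Binding each prototype with its task key (elementwise multiply) maps it
# into a task-specific subspace: bound vectors from different tasks are
# near-orthogonal in high dimensions, so one shared memory can hold all
# tasks with little interference.
memory = {(t, c): task_keys[t] * p
          for t in task_keys for c, p in prototypes[t].items()}

def classify(sample_hv, task):
    # Bind the query with its task key; because binding is self-inverse
    # (key * key = 1 elementwise), similarities within the correct task
    # are preserved while other tasks' entries stay near-orthogonal.
    bound = task_keys[task] * sample_hv
    scores = {tc: similarity(bound, m) for tc, m in memory.items()}
    return max(scores, key=scores.get)

# Query: a noisy copy of task_A's class-1 prototype (20% of bits flipped).
flip = rng.random(D) < 0.2
noisy = prototypes["task_A"][1] * np.where(flip, -1, 1)
print(classify(noisy, "task_A"))  # -> ('task_A', 1)
```

Even with substantial noise, the query stays far closer to its own bound prototype (cosine about 0.6) than to any entry from the other task (cosine near 0), which is the capacity-sharing effect the abstract attributes to redundant dimensionality.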
Pages: 241-251
Page count: 11
Related Papers
50 records in total
  • [41] Multi-Task Learning with Capsule Networks
    Lei, Kai
    Fu, Qiuai
    Liang, Yuzhi
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [42] Multi-task Learning by Pareto Optimality
    Dyankov, Deyan
    Riccio, Salvatore Danilo
    Di Fatta, Giuseppe
    Nicosia, Giuseppe
    MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE, 2019, 11943 : 605 - 618
  • [43] A Simple Approach to Balance Task Loss in Multi-Task Learning
    Liang, Sicong
    Deng, Chang
    Zhang, Yu
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 812 - 823
  • [44] Multi-Task Network Representation Learning
    Xie, Yu
    Jin, Peixuan
    Gong, Maoguo
    Zhang, Chen
    Yu, Bin
    FRONTIERS IN NEUROSCIENCE, 2020, 14
  • [45] Modeling Trajectories with Multi-task Learning
    Liu, Kaijun
    Ruan, Sijie
    Xu, Qianxiong
    Long, Cheng
    Xiao, Nan
    Hu, Nan
    Yu, Liang
    Pan, Sinno Jialin
    2022 23RD IEEE INTERNATIONAL CONFERENCE ON MOBILE DATA MANAGEMENT (MDM 2022), 2022, : 208 - 213
  • [46] Pareto Multi-task Deep Learning
    Riccio, Salvatore D.
    Dyankov, Deyan
    Jansen, Giorgio
    Di Fatta, Giuseppe
    Nicosia, Giuseppe
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, ICANN 2020, PT II, 2020, 12397 : 132 - 141
  • [47] Network Clustering for Multi-task Learning
    Mu, Zhiying
    Gao, Dehong
    Guo, Sensen
    NEURAL PROCESSING LETTERS, 2025, 57 (01)
  • [48] Multi-Task Learning for Relation Extraction
    Zhou, Kai
    Luo, Xiangfeng
    Wang, Hao
    Xu, Richard
    2019 IEEE 31ST INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2019), 2019, : 1480 - 1487
  • [49] Multi-task learning for gland segmentation
    Rezazadeh, Iman
    Duygulu, Pinar
    SIGNAL, IMAGE AND VIDEO PROCESSING, 2023, 17 : 1 - 9
  • [50] Multi-task Learning with Modular Reinforcement Learning
    Xue, Jianyong
    Alexandre, Frederic
    FROM ANIMALS TO ANIMATS 16, 2022, 13499 : 127 - 138