Personalized multi-head self-attention network for news recommendation

Cited by: 1
Authors
Zheng, Cong [1 ]
Song, Yixuan [2 ]
Affiliations
[1] Global Energy Interconnect Grp Co Ltd, 8 Xuanwumennei St, Beijing, Peoples R China
[2] Baidu Online Network Technol Beijing Co Ltd, Baidu Campus,10 Shangdi 10th St, Beijing, Peoples R China
Keywords
News recommendation; Natural language processing; Multi-head self-attention; Neural networks; Embedding
DOI
10.1016/j.neunet.2024.106824
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
With the rapid growth of online news and the user population, personalized news recommender systems have proved to be an efficient way of alleviating information overload by suggesting content that matches users' tastes. Exploring relationships among words and news is critical for structurally modeling users' latent tastes, including the domains they are interested in, while selecting informative words and news directly reflects users' interests. Most current studies do not provide an effective framework that systematically combines distilling users' latent interest spaces with their explicit points of interest. Moreover, introducing ever more advanced techniques merely to chase accuracy has become a common practice. In this study, we design a Personalized Multi-head Self-attention Network (PMSN) for news recommendation, which combines a multi-head self-attention network with a personalized attention mechanism at both the word and news levels. The multi-head self-attention mechanism models interactions among words and among news items to explore latent interests. The personalized attention mechanism embeds users' IDs to highlight informative words and news, which enhances the interpretability of personalization. Comprehensive experiments on two real-world datasets demonstrate that PMSN outperforms state-of-the-art methods in recommendation accuracy, without complicated structural designs or the consumption of external resources. Furthermore, a visualized case study validates that the attention mechanism indeed improves interpretability.
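The abstract describes the architecture only at a high level. The sketch below illustrates the general idea in PyTorch: multi-head self-attention over words and over clicked news, with an additive attention pooling layer whose query is derived from the user-ID embedding, and a dot-product click scorer. All class names, dimensions, head counts, and the scoring function are illustrative assumptions, not the authors' published configuration.

    # Minimal sketch of the PMSN idea, assuming PyTorch and illustrative dimensions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PersonalizedAttention(nn.Module):
        """Additive attention whose query vector is projected from the user embedding."""
        def __init__(self, input_dim, user_dim, query_dim=200):
            super().__init__()
            self.user_proj = nn.Linear(user_dim, query_dim)  # user ID embedding -> attention query
            self.key_proj = nn.Linear(input_dim, query_dim)

        def forward(self, seq, user_vec):
            # seq: (batch, length, input_dim); user_vec: (batch, user_dim)
            query = torch.tanh(self.user_proj(user_vec)).unsqueeze(-1)   # (B, Q, 1)
            keys = torch.tanh(self.key_proj(seq))                        # (B, L, Q)
            weights = F.softmax(torch.bmm(keys, query).squeeze(-1), -1)  # (B, L)
            return torch.bmm(weights.unsqueeze(1), seq).squeeze(1)       # (B, input_dim)

    class PMSNSketch(nn.Module):
        def __init__(self, vocab_size, n_users, word_dim=300, user_dim=50, n_heads=10):
            super().__init__()
            self.word_emb = nn.Embedding(vocab_size, word_dim, padding_idx=0)
            self.user_emb = nn.Embedding(n_users, user_dim)
            # Word-level and news-level multi-head self-attention.
            self.word_mhsa = nn.MultiheadAttention(word_dim, n_heads, batch_first=True)
            self.news_mhsa = nn.MultiheadAttention(word_dim, n_heads, batch_first=True)
            # Personalized attention pools words into a news vector and news into a user vector.
            self.word_pool = PersonalizedAttention(word_dim, user_dim)
            self.news_pool = PersonalizedAttention(word_dim, user_dim)

        def encode_news(self, titles, user_vec):
            # titles: (batch, n_news, n_words) word IDs of each title.
            b, n, w = titles.shape
            words = self.word_emb(titles.view(b * n, w))              # (B*N, W, D)
            words, _ = self.word_mhsa(words, words, words)            # interactions among words
            user_rep = user_vec.repeat_interleave(n, dim=0)           # align with flattened batch
            return self.word_pool(words, user_rep).view(b, n, -1)     # (B, N, D)

        def forward(self, user_ids, clicked_titles, candidate_titles):
            user_vec = self.user_emb(user_ids)                        # (B, user_dim)
            clicked = self.encode_news(clicked_titles, user_vec)      # (B, N_hist, D)
            clicked, _ = self.news_mhsa(clicked, clicked, clicked)    # interactions among news
            user_interest = self.news_pool(clicked, user_vec)         # (B, D)
            candidates = self.encode_news(candidate_titles, user_vec) # (B, N_cand, D)
            # Dot-product click score between the user interest vector and each candidate.
            return torch.bmm(candidates, user_interest.unsqueeze(-1)).squeeze(-1)

    if __name__ == "__main__":
        model = PMSNSketch(vocab_size=30000, n_users=5000)
        scores = model(torch.randint(0, 5000, (2,)),
                       torch.randint(0, 30000, (2, 15, 20)),  # 15 clicked titles, 20 words each
                       torch.randint(0, 30000, (2, 5, 20)))   # 5 candidate titles
        print(scores.shape)  # torch.Size([2, 5])

In this sketch the same user-conditioned query highlights informative words when building each news vector and informative news when building the user vector, which is the mechanism the abstract credits for interpretability.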
Pages: 14
Related Papers
50 records in total
  • [41] Transformative reconstruction of missing acoustic well logs using multi-head self-attention BiRNNs
    Fan, Xiangyu
    Meng, Fan
    Deng, Juan
    Semnani, Amir
    Zhao, Pengfei
    Zhang, Qiangui
    GEOENERGY SCIENCE AND ENGINEERING, 2025, 245
  • [42] The effect of the head number for multi-head self-attention in remaining useful life prediction of rolling bearing and interpretability
    Zhao, Qiwu
    Zhang, Xiaoli
    Wang, Fangzhen
    Fan, Panfeng
    Mbeka, Erick
    NEUROCOMPUTING, 2025, 616
  • [43] Fine-Grained Deep Knowledge-Aware Network for News Recommendation with Self-Attention
    Gao, Jie
    Xin, Xin
    Liu, Junshuai
    Wang, Rui
    Lu, Jing
    Li, Biao
    Fan, Xin
    Guo, Ping
    2018 IEEE/WIC/ACM INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE (WI 2018), 2018, : 81 - 88
  • [44] DMOIT: denoised multi-omics integration approach based on transformer multi-head self-attention mechanism
    Liu, Zhe
    Park, Taesung
    FRONTIERS IN GENETICS, 2024, 15
  • [45] Multi-fidelity fusion for soil classification via LSTM and multi-head self-attention CNN model
    Zhou, Xiaoqi
    Sheil, Brian
    Suryasentana, Stephen
    Shi, Peixin
    ADVANCED ENGINEERING INFORMATICS, 2024, 62
  • [46] Motorcyclist helmet detection in single images: a dual-detection framework with multi-head self-attention
    Li, Chun-Hong
    Huang, Dong
    Zhang, Guang-Yu
    Cui, Jinrong
    SOFT COMPUTING, 2024, 28 (05) : 4321 - 4333
  • [48] EPSViTs: A hybrid architecture for image classification based on parameter-shared multi-head self-attention
    Liao, Huixian
    Li, Xiaosen
    Qin, Xiao
    Wang, Wenji
    He, Guodui
    Huang, Haojie
    Guo, Xu
    Chun, Xin
    Zhang, Jinyong
    Fu, Yunqin
    Qin, Zhengyou
    IMAGE AND VISION COMPUTING, 2024, 149
  • [49] SMGformer: integrating STL and multi-head self-attention in deep learning model for multi-step runoff forecasting
    Wang, Wen-chuan
    Gu, Miao
    Hong, Yang-hao
    Hu, Xiao-xue
    Zang, Hong-fei
    Chen, Xiao-nan
    Jin, Yan-guo
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [50] On the diversity of multi-head attention
    Li, Jian
    Wang, Xing
    Tu, Zhaopeng
    Lyu, Michael R.
    NEUROCOMPUTING, 2021, 454 : 14 - 24