Fusing heterogeneous information for multi-modal attributed network embedding

Cited by: 2
Authors
Yang, Jieyi [1 ]
Zhu, Feng [1 ]
Dong, Yihong [1 ]
Qian, Jiangbo [1 ]
Affiliations
[1] Ningbo Univ, Fac Elect Engn & Comp Sci, Ningbo 315211, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multimodal attributed network; Heterogeneous network; Graph embedding; Graph neural network;
DOI
10.1007/s10489-023-04675-5
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the real world, networks with many types of nodes and edges are complex, forming heterogeneous networks. For instance, a film network contains different node types, such as directors, films, and actors, as well as different edge types and multimodal attributes. Most existing attributed network embedding algorithms cannot flexibly capture the impact of multimodal attributes on the network topology. Premature fusion of multimodal features entangles the different attribute information in the representation embedding, while a late fusion strategy ignores the interactions between modalities; both affect the quality of the graph embedding. To solve this problem, we propose a multimodal attributed network representation learning algorithm based on heterogeneous information fusion, named FHIANE. It extracts features from multimodal information sources through deep heterogeneous convolutional networks and projects them into a consistent semantic space while preserving structural information. In addition, we design a modality fusion network based on an extended attention mechanism that takes full advantage of the consistency and complementarity of multimodal information. We evaluate FHIANE on several real-world datasets through challenging tasks such as link prediction and node classification. The experimental results show that FHIANE outperforms other baselines.
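The attention-based modality fusion the abstract describes can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes each modality has already been projected into a shared semantic space, and the parameter names (`w`, `b`) and the single-layer scoring function are hypothetical simplifications of the paper's extended attention mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(modalities, w, b=0.0):
    """Fuse per-node modality embeddings with learned attention weights.

    modalities: (num_modalities, num_nodes, dim) array; each modality is
        assumed to be already projected into a consistent semantic space.
    w, b: parameters of a hypothetical linear scoring function.
    Returns fused embeddings of shape (num_nodes, dim).
    """
    scores = modalities @ w + b                # (M, N): one score per modality per node
    alpha = softmax(scores, axis=0)            # normalize across modalities
    return (alpha[..., None] * modalities).sum(axis=0)

rng = np.random.default_rng(0)
text_feat  = rng.normal(size=(5, 8))   # e.g. projected text attributes of 5 nodes
image_feat = rng.normal(size=(5, 8))   # e.g. projected image attributes of 5 nodes
fused = attention_fuse(np.stack([text_feat, image_feat]), rng.normal(size=8))
print(fused.shape)  # (5, 8)
```

Because the attention weights are computed per node, a node whose image attribute is uninformative can lean on its text attribute, which is one way the consistency and complementarity of modalities can both be exploited.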
Pages: 22328-22347
Number of pages: 20
Related papers
50 results in total
  • [1] Fusing heterogeneous information for multi-modal attributed network embedding
    Yang Jieyi
    Zhu Feng
    Dong Yihong
    Qian Jiangbo
    Applied Intelligence, 2023, 53 : 22328 - 22347
  • [2] Efficient and Effective Multi-Modal Queries Through Heterogeneous Network Embedding
    Chi Thang Duong
    Thanh Tam Nguyen
    Yin, Hongzhi
    Weidlich, Matthias
    Mai, Thai Son
    Aberer, Karl
    Quoc Viet Hung Nguyen
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2022, 34 (11) : 5307 - 5320
  • [3] Graph2GO: a multi-modal attributed network embedding method for inferring protein functions
    Fan, Kunjie
    Guan, Yuanfang
    Zhang, Yan
    GIGASCIENCE, 2020, 9 (08):
  • [4] Dynamic heterogeneous attributed network embedding
    Li, Hongbo
    Zheng, Wenli
    Tang, Feilong
    Song, Yitong
    Yao, Bin
    Zhu, Yanmin
    INFORMATION SCIENCES, 2024, 662
  • [5] Fusing Multi-modal Features for Gesture Recognition
    Wu, Jiaxiang
    Cheng, Jian
    Zhao, Chaoyang
    Lu, Hanqing
    ICMI'13: PROCEEDINGS OF THE 2013 ACM INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, 2013: 453 - 459
  • [6] Fusing attributed and topological global-relations for network embedding
    Sun, Xin
    Yu, Yongbo
    Liang, Yao
    Dong, Junyu
    Plant, Claudia
    Bohm, Christian
    INFORMATION SCIENCES, 2021, 558 : 76 - 90
  • [7] Hash Embedding for Attributed Multiplex Heterogeneous Network
    Su, Huimin
    Li, Qian
    Guo, Hongyu
    Liu, Yulong
    Computer Engineering and Applications, 60 (24): 131 - 139
  • [8] Fast Attributed Multiplex Heterogeneous Network Embedding
    Liu, Zhijun
    Huang, Chao
    Yu, Yanwei
    Fan, Baode
    Dong, Junyu
    CIKM '20: PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON INFORMATION & KNOWLEDGE MANAGEMENT, 2020: 995 - 1004
  • [9] Multi-Modal Emotion Recognition Fusing Video and Audio
    Xu, Chao
    Du, Pufeng
    Feng, Zhiyong
    Meng, Zhaopeng
    Cao, Tianyi
    Dong, Caichao
    APPLIED MATHEMATICS & INFORMATION SCIENCES, 2013, 7 (02): 455 - 462
  • [10] Multi-source and Multi-modal Deep Network Embedding for Cross-network Node Classification
    Yang, Hongwei
    He, Hui
    Zhang, Weizhe
    Wang, Yan
    Jing, Lin
    ACM TRANSACTIONS ON KNOWLEDGE DISCOVERY FROM DATA, 2024, 18 (06)