MGKGR: Multimodal Semantic Fusion for Geographic Knowledge Graph Representation

Cited: 0
Authors
Zhang, Jianqiang [1 ]
Chen, Renyao [1 ]
Li, Shengwen [1 ,2 ,3 ]
Li, Tailong [4 ]
Yao, Hong [1 ,2 ,3 ,4 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Peoples R China
[2] China Univ Geosci, State Key Lab Biogeol & Environm Geol, Wuhan 430074, Peoples R China
[3] China Univ Geosci, Hubei Key Lab Intelligent Geoinformat Proc, Wuhan 430078, Peoples R China
[4] China Univ Geosci, Sch Future Technol, Wuhan 430074, Peoples R China
Keywords
multimodal; geographic knowledge graph; knowledge graph representation;
DOI
10.3390/a17120593
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Geographic knowledge graph representation learning embeds the entities and relationships of geographic knowledge graphs into a low-dimensional continuous vector space, serving as a basic method that bridges geographic knowledge graphs and geographic applications. Previous geographic knowledge graph representation methods primarily learn entity and relationship vectors from spatial attributes and relationships, ignoring the diverse semantics of entities and thus yielding poor embeddings. This study proposes a two-stage multimodal geographic knowledge graph representation (MGKGR) model that integrates multiple kinds of semantics to improve embedding learning. Specifically, in the first stage, a spatial feature fusion method for modality enhancement is proposed to combine the structural features of geographic knowledge graphs with two modal semantic features. In the second stage, a multi-level modality feature fusion method is proposed to integrate heterogeneous features from different modalities. By fusing the semantics of text and images, the performance of geographic knowledge graph representation is improved, providing accurate representations for downstream geographic intelligence tasks. Extensive experiments on two datasets show that the proposed MGKGR model outperforms the baselines. Moreover, the results demonstrate that integrating textual and image data into geographic knowledge graphs effectively enhances the model's performance.
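The two-stage pipeline the abstract describes can be illustrated with a minimal sketch. Note that the function names, element-wise addition in stage one, and the weighted merge in stage two are illustrative assumptions, not the authors' actual MGKGR implementation.

```python
# Hypothetical sketch of a two-stage multimodal fusion, loosely following
# the abstract: stage 1 enhances each modality with the KG's structural
# features; stage 2 merges the modality-enhanced embeddings.
# All names and the weighting scheme are assumptions for illustration.

def fuse_modality(structure, modal):
    # Stage 1 (modality enhancement): element-wise combination of the
    # structural embedding with one modality's semantic features.
    return [s + m for s, m in zip(structure, modal)]

def multilevel_fusion(text_enhanced, image_enhanced, alpha=0.5):
    # Stage 2 (multi-level fusion): weighted merge of the two
    # modality-enhanced embeddings into one entity representation.
    return [alpha * t + (1.0 - alpha) * i
            for t, i in zip(text_enhanced, image_enhanced)]

structure = [0.1, 0.2, 0.3]   # structural embedding of one entity
text_feat = [0.4, 0.0, 0.1]   # text-derived semantic features
img_feat  = [0.0, 0.2, 0.5]   # image-derived semantic features

entity_vec = multilevel_fusion(fuse_modality(structure, text_feat),
                               fuse_modality(structure, img_feat))
print(entity_vec)
```

In a real model the element-wise sums and the fixed weight `alpha` would be replaced by learned projections and attention, but the sketch shows where the structural and modal signals enter the pipeline.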
Pages: 16
Related Papers
(50 in total)
  • [31] Learning Joint Multimodal Representation Based on Multi-fusion Deep Neural Networks
    Gu, Zepeng
    Lang, Bo
    Yue, Tongyu
    Huang, Lei
    NEURAL INFORMATION PROCESSING (ICONIP 2017), PT II, 2017, 10635 : 276 - 285
  • [32] A multimodal fusion method for Alzheimer's disease based on DCT convolutional sparse representation
    Zhang, Guo
    Nie, Xixi
    Liu, Bangtao
    Yuan, Hong
    Li, Jin
    Sun, Weiwei
    Huang, Shixin
    FRONTIERS IN NEUROSCIENCE, 2023, 16
  • [33] Predicting circRNA-Drug Resistance Associations Based on a Multimodal Graph Representation Learning Framework
    Liu, Ziqiang
    Dai, Qiguo
    Yu, Xianhai
    Duan, Xiaodong
    Wang, Chunyu
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2025, 29 (03) : 1838 - 1848
  • [34] Large Models and Multimodal: A Survey of Cutting-Edge Approaches to Knowledge Graph Completion
    Wu, Minxin
    Gong, Yufei
    Lu, Heping
    Li, Baofeng
    Wang, Kai
    Zhou, Yanquan
    Li, Lei
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024, 2024, 14878 : 163 - 174
  • [35] Representation-Based Completion of Knowledge Graph with Open-World Data
    Yue, Kun
    Wang, Jiahui
    Li, Xinbai
    Hu, Kuang
    2020 5TH INTERNATIONAL CONFERENCE ON COMPUTER AND COMMUNICATION SYSTEMS (ICCCS 2020), 2020, : 1 - 8
  • [36] A Text Generation Method Based on a Multimodal Knowledge Graph for Fault Diagnosis of Consumer Electronics
    Wu, Yuezhong
    Sun, Yuxuan
    Chen, Lingjiao
    Zhang, Xuanang
    Liu, Qiang
    APPLIED SCIENCES-BASEL, 2024, 14 (21)
  • [37] A Model of Text-Enhanced Knowledge Graph Representation Learning With Mutual Attention
    Wang, Yashen
    Zhang, Huanhuan
    Shi, Ge
    Liu, Zhirun
    Zhou, Qiang
    IEEE ACCESS, 2020, 8 : 52895 - 52905
  • [38] An adaptive multi-graph neural network with multimodal feature fusion learning for MDD detection
    Xing, Tao
    Dou, Yutao
    Chen, Xianliang
    Zhou, Jiansong
    Xie, Xiaolan
    Peng, Shaoliang
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [39] Multimodal prediction of student performance: A fusion of signed graph neural networks and large language models
    Wang, Sijie
    Ni, Lin
    Zhang, Zeyu
    Li, Xiaoxuan
    Zheng, Xianda
    Liu, Jiamou
    PATTERN RECOGNITION LETTERS, 2024, 181 : 1 - 8
  • [40] Interactive Visual Analysis of COVID-19 Epidemic Situation Using Geographic Knowledge Graph
    Jiang B.
    You X.
    Li K.
    Zhou X.
    Wen H.
    You, Xiong, 1600, Wuhan University (45): 836 - 845