MGKGR: Multimodal Semantic Fusion for Geographic Knowledge Graph Representation

Cited by: 0
Authors
Zhang, Jianqiang [1 ]
Chen, Renyao [1 ]
Li, Shengwen [1 ,2 ,3 ]
Li, Tailong [4 ]
Yao, Hong [1 ,2 ,3 ,4 ]
Affiliations
[1] China Univ Geosci, Sch Comp Sci, Wuhan 430074, Peoples R China
[2] China Univ Geosci, State Key Lab Biogeol & Environm Geol, Wuhan 430074, Peoples R China
[3] China Univ Geosci, Hubei Key Lab Intelligent Geoinformat Proc, Wuhan 430078, Peoples R China
[4] China Univ Geosci, Sch Future Technol, Wuhan 430074, Peoples R China
Keywords
multimodal; geographic knowledge graph; knowledge graph representation
DOI
10.3390/a17120593
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Geographic knowledge graph representation learning embeds the entities and relationships of a geographic knowledge graph into a low-dimensional continuous vector space, serving as a foundational method that bridges geographic knowledge graphs and geographic applications. Previous methods learn entity and relationship vectors primarily from spatial attributes and relational structure, ignoring the rich semantics of entities and thus producing poor embeddings. This study proposes a two-stage multimodal geographic knowledge graph representation (MGKGR) model that integrates multiple kinds of semantics to improve embedding learning. In the first stage, a spatial feature fusion method for modality enhancement combines the structural features of the geographic knowledge graph with semantic features from two modalities. In the second stage, a multi-level modality feature fusion method integrates the heterogeneous features of the different modalities. By fusing textual and image semantics, MGKGR improves the quality of geographic knowledge graph representations, providing accurate inputs for downstream geographic intelligence tasks. Extensive experiments on two datasets show that MGKGR outperforms the baselines, and the results demonstrate that integrating textual and image data into geographic knowledge graphs effectively enhances model performance.
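The abstract only sketches the two-stage pipeline at a high level; the snippet below is a minimal, purely illustrative toy of that idea, not the authors' architecture. The function names, the element-wise fusion operators, and the fixed weight `alpha` are all assumptions made for illustration.

```python
# Hypothetical sketch of a two-stage multimodal fusion, assuming (not taken
# from the paper) that stage 1 adds structural features to each modality's
# features and stage 2 merges modalities with a convex combination.

def stage1_modality_enhancement(structural, modal):
    """Stage 1 (assumed): enhance one modality's features with the
    graph's structural features by element-wise addition."""
    return [s + m for s, m in zip(structural, modal)]

def stage2_multilevel_fusion(text_feat, image_feat, alpha=0.5):
    """Stage 2 (assumed): fuse the heterogeneous modal features with a
    simple weighted combination controlled by alpha."""
    return [alpha * t + (1 - alpha) * v for t, v in zip(text_feat, image_feat)]

# Toy 4-dimensional embeddings for a single geographic entity.
structural = [0.1, 0.2, 0.3, 0.4]
text_sem   = [0.5, 0.1, 0.0, 0.2]
image_sem  = [0.0, 0.3, 0.6, 0.1]

text_enh  = stage1_modality_enhancement(structural, text_sem)
image_enh = stage1_modality_enhancement(structural, image_sem)
entity_embedding = stage2_multilevel_fusion(text_enh, image_enh)
print(entity_embedding)  # fused low-dimensional entity representation
```

In the actual model, both stages would presumably be learned (e.g., attention-based) rather than fixed element-wise operations; the sketch only conveys the data flow of structure-enhanced modalities followed by cross-modal fusion.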
Pages: 16
Related Papers
50 records in total
  • [1] Geographic Knowledge Graph (GeoKG): A Formalized Geographic Knowledge Representation
    Wang, Shu; Zhang, Xueying; Ye, Peng; Du, Mi; Lu, Yanxu; Xue, Haonan
    ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2019, 8 (04)
  • [2] Geographic knowledge graph-guided remote sensing image semantic segmentation
    Li, Y.; Wu, K.; Ouyang, S.; Yang, K.; Li, H.; Zhang, Y.
    National Remote Sensing Bulletin, 2024, 28 (02): 455-469
  • [3] HGeoKG: A Hierarchical Geographic Knowledge Graph for Geographic Knowledge Reasoning
    Li, Tailong; Chen, Renyao; Duan, Yilin; Yao, Hong; Li, Shengwen; Li, Xinchuan
    ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2025, 14 (01)
  • [4] Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion
    Chen, Xiang; Zhang, Ningyu; Li, Lei; Deng, Shumin; Tan, Chuanqi; Xu, Changliang; Huang, Fei; Si, Luo; Chen, Huajun
    PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22), 2022: 904-915
  • [5] Knowledge graph representation method for semantic 3D modeling of Chinese grottoes
    Yang, Su; Hou, Miaole
    HERITAGE SCIENCE, 2023, 11 (01)
  • [6] Negative Sample Generation for Geographic Knowledge Graph Embedding via Joint Entity Semantic Similarity and Clustering
    Qiu, Qinjun; Lu, Siqi; Ma, Kai; Zhu, Yunqiang; Huang, Zehua; Xie, Zhong; Tao, Liufeng; Wang, Shu
    TRANSACTIONS IN GIS, 2025, 29 (02)
  • [7] AugGKG: a grid-augmented geographic knowledge graph representation and spatio-temporal query model
    Han, Bing; Qu, Tengteng; Tong, Xiaochong; Wang, Haipeng; Liu, Hao; Huo, Yuhao; Cheng, Chengqi
    INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2023, 16 (02): 4934-4957
  • [8] SMTDKD: A Semantic-Aware Multimodal Transformer Fusion Decoupled Knowledge Distillation Method for Action Recognition
    Quan, Zhenzhen; Chen, Qingshan; Wang, Wei; Zhang, Moyan; Li, Xiang; Li, Yujun; Liu, Zhi
    IEEE SENSORS JOURNAL, 2024, 24 (02): 2289-2304
  • [9] Deep multimodal fusion for semantic image segmentation: A survey
    Zhang, Yifei; Sidibe, Desire; Morel, Olivier; Meriaudeau, Fabrice
    IMAGE AND VISION COMPUTING, 2021, 105