Multi-modal Representation Learning for Social Post Location Inference

Times Cited: 0
Authors
Dai, RuiTing [1 ]
Luo, Jiayi [1 ]
Luo, Xucheng [1 ]
Mo, Lisi [1 ]
Ma, Wanlun [2 ]
Zhou, Fan [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu 610054, Peoples R China
[2] Swinburne Univ Technol, Melbourne, Vic, Australia
Source
ICC 2023 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS | 2023
Keywords
Social geographic location; multi-modal social post dataset; multi-modal representation learning; multi-head attention mechanism; PREDICTION;
DOI
10.1109/ICC45041.2023.10279649
Chinese Library Classification (CLC)
TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0809;
Abstract
Inferring geographic locations from social posts is essential for many practical location-based applications such as product marketing, point-of-interest recommendation, and infection tracking for COVID-19. Unlike image-based location retrieval or text-embedding-based location inference, the combined effect of multi-modal information (i.e., post images, text, and hashtags) on social post positioning has received less attention. In this work, we collect real datasets of social posts with images, texts, and hashtags from Instagram and propose a novel Multi-modal Representation Learning Framework (MRLF) capable of fusing different modalities of social posts for location inference. MRLF integrates a multi-head attention mechanism to enhance location-salient information extraction, significantly improving location inference compared with single-domain methods. To overcome noisy user-generated textual content, we introduce a novel attention-based character-aware module that models the relative dependencies between characters of social post texts and hashtags for flexible multi-modal information fusion. The experimental results show that MRLF makes accurate location predictions and opens a new door to understanding the multi-modal data of social posts for online inference tasks.
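For intuition only, the following is a minimal PyTorch sketch of how multi-head attention can fuse image, text, and hashtag embeddings for location classification. The module names, feature dimensions, and classifier head are illustrative assumptions; this is not the authors' MRLF implementation and omits the character-aware module described in the abstract.

```python
# Minimal sketch (assumed dimensions and layers), illustrating multi-head
# attention fusion of image, text, and hashtag features for location
# classification. Not the authors' MRLF implementation.
import torch
import torch.nn as nn


class MultiModalFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4, num_locations: int = 100):
        super().__init__()
        # Project each modality into a shared embedding space (sizes assumed).
        self.img_proj = nn.Linear(2048, dim)   # e.g., CNN image features
        self.txt_proj = nn.Linear(768, dim)    # e.g., character-level text features
        self.tag_proj = nn.Linear(768, dim)    # e.g., hashtag features
        # Multi-head self-attention over the three modality tokens.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_locations)

    def forward(self, img_feat, txt_feat, tag_feat):
        # Stack the projected modalities as a 3-token sequence: (batch, 3, dim).
        tokens = torch.stack(
            [self.img_proj(img_feat), self.txt_proj(txt_feat), self.tag_proj(tag_feat)],
            dim=1,
        )
        # Self-attention lets each modality attend to the others.
        fused, _ = self.attn(tokens, tokens, tokens)
        # Mean-pool the attended tokens and predict a location class.
        return self.classifier(fused.mean(dim=1))


if __name__ == "__main__":
    model = MultiModalFusion()
    logits = model(torch.randn(2, 2048), torch.randn(2, 768), torch.randn(2, 768))
    print(logits.shape)  # torch.Size([2, 100])
```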
Pages: 6331 - 6336
Page count: 6
Related Papers
50 records in total
  • [31] Joint detection and clinical score prediction in Parkinson's disease via multi-modal sparse learning
    Lei, Haijun
    Huang, Zhongwei
    Zhang, Jian
    Yang, Zhang
    Tan, Ee-Leng
    Zhou, Feng
    Lei, Baiying
    EXPERT SYSTEMS WITH APPLICATIONS, 2017, 80 : 284 - 296
  • [32] Joint detection and clinical score prediction in Parkinson's disease via multi-modal sparse learning
    Lei, Haijun
    Zhang, Jian
    Yang, Zhang
    Tan, Ee-leng
    Lei, Baiying
    Luo, Qiuming
    2017 IEEE 14TH INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (ISBI 2017), 2017, : 1231 - 1234
  • [33] Deep learning-based multi-modal approach for predicting brain radionecrosis after proton therapy
    Seetha, Sithin Thulasi
    Fontana, Giulia
    Bazani, Alessia
    Riva, Giulia
    Molinelli, Silvia
    Goodyear, Christina Amanda
    Ciccone, Lucia Pia
    Iannalfi, Alberto
    Orlandi, Ester
    RADIOTHERAPY AND ONCOLOGY, 2024, 194 : S5027 - S5030
  • [34] Artificial intelligence accelerates multi-modal biomedical process: A Survey
    Li, Jiajia
    Han, Xue
    Qin, Yiming
    Tan, Feng
    Chen, Yulong
    Wang, Zikai
    Song, Haitao
    Zhou, Xi
    Zhang, Yuan
    Hu, Lun
    Hu, Pengwei
    NEUROCOMPUTING, 2023, 558
  • [35] MultiMediate: Multi-modal Group Behaviour Analysis for Artificial Mediation
    Mueller, Philipp
    Dietz, Michael
    Schiller, Dominik
    Thomas, Dominike
    Zhang, Guanhua
    Gebhard, Patrick
    Andre, Elisabeth
    Bulling, Andreas
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 4878 - 4882
  • [36] A Hybrid Degradation Modeling and Prognostic Method for the Multi-Modal System
    Peng, Jun
    Wang, Shengnan
    Gao, Dianzhu
    Zhang, Xiaoyong
    Chen, Bin
    Cheng, Yijun
    Yang, Yingze
    Yu, Wentao
    Huang, Zhiwu
    APPLIED SCIENCES-BASEL, 2020, 10 (04):
  • [37] Multi-modal feature fusion for better understanding of human personality traits in social human-robot interaction
    Shen, Zhihao
    Elibol, Armagan
    Chong, Nak Young
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2021, 146
  • [38] Integrating multi-modal deep learning on knowledge graph for the discovery of synergistic drug combinations against infectious diseases
    Ye, Qing
    Xu, Ruolan
    Li, Dan
    Kang, Yu
    Deng, Yafeng
    Zhu, Feng
    Chen, Jiming
    He, Shibo
    Hsieh, Chang-Yu
    Hou, Tingjun
    CELL REPORTS PHYSICAL SCIENCE, 2023, 4 (08):
  • [39] Prediction of Progressive Mild Cognitive Impairment by Multi-Modal Neuroimaging Biomarkers
    Xu, Lele
    Wu, Xia
    Li, Rui
    Chen, Kewei
    Long, Zhiying
    Zhang, Jiacai
    Guo, Xiaojuan
    Yao, Li
    JOURNAL OF ALZHEIMERS DISEASE, 2016, 51 (04) : 1045 - 1056
  • [40] Research on Satellite Fault Diagnosis and Prediction Using Multi-modal Reasoning
    Yang, Tianshe (Xi'an Satellite Control Center of China)
    ENGINEERING SCIENCES, 2004, (02): 48 - 51