Interactive Multimodal Learning for Venue Recommendation

Cited by: 17
Authors
Zahalka, Jan [1 ]
Rudinac, Stevan [1 ]
Worring, Marcel [1 ]
Affiliations
[1] Univ Amsterdam, Inst Informat, NL-1098 Amsterdam, Netherlands
Keywords
Deep nets; interactive city exploration; location-based social networks; semantic concept detectors; topic models; user-centered design;
D O I
10.1109/TMM.2015.2480007
Chinese Library Classification (CLC)
TP [automation technology; computer technology]
Discipline code
0812
Abstract
In this paper, we propose City Melange, an interactive and multimodal content-based venue explorer. Our framework matches the interacting user to the users of social media platforms exhibiting similar taste. The data collection integrates location-based social networks such as Foursquare with general multimedia sharing platforms such as Flickr or Picasa. In City Melange, the user interacts with a set of images and thus implicitly with the underlying semantics. The semantic information is captured through convolutional deep net features in the visual domain and latent topics extracted using latent Dirichlet allocation (LDA) in the text domain. These are further clustered to provide representative user and venue topics. A linear SVM model learns the interacting user's preferences and determines similar users. The experiments show that our content-based approach outperforms the user-activity-based and popular-vote baselines even from the early phases of interaction, while also being able to recommend mainstream venues to mainstream users and off-the-beaten-track venues to aficionados. City Melange is shown to be a well-performing venue exploration approach.
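The abstract's pipeline (textual topics via LDA, then a linear SVM that learns the interacting user's preferences from which items they select or skip) can be sketched as follows. This is a minimal illustration with made-up venue descriptions and scikit-learn defaults, not the authors' actual features, data, or parameters:

```python
# Hedged sketch of the text side of the City Melange pipeline:
# LDA topics over venue descriptions, a linear SVM fit to the
# interacting user's feedback. All data below is illustrative.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy venue descriptions (stand-ins for harvested social media text).
venue_texts = [
    "espresso coffee pastry cozy cafe",
    "coffee latte brunch cafe quiet",
    "techno club dancing late night",
    "live music club dj cocktails",
]

# Bag-of-words counts, then per-venue topic distributions from LDA.
counts = CountVectorizer().fit_transform(venue_texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(counts)  # rows are doc-topic distributions

# Simulated interaction: the user liked the two cafes, skipped the clubs.
labels = np.array([1, 1, 0, 0])

# Linear SVM over topic features models the user's taste; its decision
# scores rank venues (and, in the paper, identify similar users).
svm = LinearSVC(C=1.0).fit(topics, labels)
scores = svm.decision_function(topics)  # higher = closer to user's taste
```

In the full system these topic features would be concatenated or clustered with the convolutional visual features before the SVM step; here only the text branch is shown.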
Pages: 2235-2244 (10 pages)
Related papers (50 items)
  • [1] Semantic-Based Location Recommendation With Multimodal Venue Semantics
    Wang, Xiangyu
    Zhao, Yi-Liang
    Nie, Liqiang
    Gao, Yue
    Nie, Weizhi
    Zha, Zheng-Jun
    Chua, Tat-Seng
    IEEE TRANSACTIONS ON MULTIMEDIA, 2015, 17 (03) : 409 - 419
  • [2] Multimodal Interactive Network for Sequential Recommendation
    Han, Teng-Yue
    Wang, Peng-Fei
    Niu, Shao-Zhang
    JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY, 2023, 38 (04) : 911 - 926
  • [3] Interactive Interior Design Recommendation via Coarse-to-fine Multimodal Reinforcement Learning
    Zhang, He
    Sun, Ying
    Guo, Weiyu
    Liu, Yafei
    Lu, Haonan
    Lin, Xiaodong
    Xiong, Hui
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 6472 - 6480
  • [4] MIFNet: multimodal interactive fusion network for medication recommendation
    Huo, Jiazhen
    Hong, Zhikai
    Chen, Mingzhou
    Duan, Yongrui
    JOURNAL OF SUPERCOMPUTING, 2024, 80 (09) : 12313 - 12345
  • [5] Interactive multimodal learning environments
    Moreno, Roxana
    Mayer, Richard
    EDUCATIONAL PSYCHOLOGY REVIEW, 2007, 19 (03) : 309 - 326
  • [6] Scalable Multimodal Learning and Multimedia Recommendation
    Shen, Jialie
    Morrison, Marie
    Li, Zhu
    2023 IEEE 9TH INTERNATIONAL CONFERENCE ON COLLABORATION AND INTERNET COMPUTING, CIC, 2023, : 121 - 124
  • [7] Disentangled Multimodal Representation Learning for Recommendation
    Liu, Fan
    Chen, Huilin
    Cheng, Zhiyong
    Liu, Anan
    Nie, Liqiang
    Kankanhalli, Mohan
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 7149 - 7159
  • [8] Effects of webcams on multimodal interactive learning
    Codreanu, Tatiana
    Celik, Christelle Combe
    RECALL, 2013, 25 : 30 - 47
  • [9] Variational Invariant Representation Learning for Multimodal Recommendation
    Yang, Wei
    Zhang, Haoran
    Zhang, Li
    PROCEEDINGS OF THE 2024 SIAM INTERNATIONAL CONFERENCE ON DATA MINING, SDM, 2024, : 752 - 760