TAP with ease: a generic recommendation system for trigger-action programming based on multi-modal representation learning

Cited by: 0
Authors
Wu, Gang [1 ]
Wang, Ming [1 ]
Wang, Feng [1 ]
Affiliations
[1] Jilin University, College of Computer Science and Technology, Changchun 130012, Jilin, People's Republic of China
Keywords
Trigger-action programming; Knowledge graph embedding; Natural language processing; Multi-modal representation learning
DOI
10.1016/j.asoc.2024.112163
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
The escalating popularity of smart devices has given rise to an increasing trend wherein users leverage customized trigger-action programming (TAP) rules within the Internet of Things (IoT) to automate various aspects of their lives. This article addresses the challenge of effectively combining the functions provided by many smart devices and online services by introducing a novel multi-modal representation learning model called TAP-TAG. The model integrates both the textual and the graph structures inherent in TAP rules, offering a holistic approach to rule recommendation. TAP-TAG comprises two branches: a knowledge graph embedding model, which projects triplets extracted from the TAP dataset into embeddings, and convolutional neural networks, which extract semantic features from the textual content of TAP rules. Extensive experiments are conducted on real-world TAP datasets to evaluate the model's ability to recommend relevant rules based on user preferences. The results show that TAP-TAG outperforms the state-of-the-art method by 5% in Precision@5, indicating that it is highly effective in providing accurate and diverse recommendations for TAP rules.
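The abstract describes a two-branch design: a knowledge graph embedding branch for triplets extracted from TAP rules and a convolutional branch for the rules' textual descriptions, fused for recommendation scoring. The PyTorch sketch below illustrates that general shape only; the TransE-style triplet encoding, layer sizes, fusion by concatenation, and all class and parameter names (e.g. TapTagSketch) are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a two-branch TAP rule encoder (assumed design, not the
# paper's exact model): a TransE-style embedding branch for
# (trigger, relation, action) triplets and a text-CNN branch for the rule
# description, fused into a single relevance score.
import torch
import torch.nn as nn
import torch.nn.functional as F


class KGEBranch(nn.Module):
    """Embeds the head entity, relation, and tail entity of a TAP triplet."""

    def __init__(self, n_entities, n_relations, dim=64):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, head, rel, tail):
        # Concatenated embeddings serve as the structural feature of a rule.
        return torch.cat([self.ent(head), self.rel(rel), self.ent(tail)], dim=-1)


class TextCNNBranch(nn.Module):
    """1-D convolutions over word embeddings of the rule's textual description."""

    def __init__(self, vocab_size, emb_dim=64, n_filters=32, kernel_sizes=(2, 3, 4)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes]
        )

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)            # (batch, emb_dim, seq_len)
        feats = [F.relu(conv(x)).max(dim=-1).values for conv in self.convs]
        return torch.cat(feats, dim=-1)                       # (batch, n_filters * 3)


class TapTagSketch(nn.Module):
    """Fuses structural (KGE) and semantic (text-CNN) features into a rule score."""

    def __init__(self, n_entities, n_relations, vocab_size):
        super().__init__()
        self.kge = KGEBranch(n_entities, n_relations)
        self.txt = TextCNNBranch(vocab_size)
        self.score = nn.Linear(64 * 3 + 32 * 3, 1)

    def forward(self, head, rel, tail, token_ids):
        fused = torch.cat([self.kge(head, rel, tail), self.txt(token_ids)], dim=-1)
        return self.score(fused).squeeze(-1)                  # higher = more relevant


if __name__ == "__main__":
    model = TapTagSketch(n_entities=100, n_relations=10, vocab_size=5000)
    head, rel, tail = torch.tensor([1, 2]), torch.tensor([0, 3]), torch.tensor([7, 9])
    tokens = torch.randint(1, 5000, (2, 20))                  # two tokenized rule texts
    print(model(head, rel, tail, tokens).shape)                # torch.Size([2])
```

Ranking candidate rules by this score for a given user context would yield the Precision@5-style recommendation evaluation mentioned in the abstract; the fusion and scoring head here are placeholders for whatever the paper actually uses.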
Pages: 11