Multi-perspective contrastive learning framework guided by sememe knowledge and label information for sarcasm detection

Cited by: 7
Authors
Wen, Zhiyuan [1,3]
Wang, Rui [1,3]
Luo, Xuan [1,3]
Wang, Qianlong [1,3]
Liang, Bin [1,3]
Du, Jiachen [1,3]
Yu, Xiaoqi [5]
Gui, Lin [2]
Xu, Ruifeng [1,3,4]
Affiliations
[1] Harbin Inst Technol Shenzhen, Joint Lab HITSZ CMS, Shenzhen 518055, Guangdong, Peoples R China
[2] Kings Coll London, London, England
[3] Guangdong Prov Key Lab Novel Secur Intelligence T, Shenzhen 518000, Guangdong, Peoples R China
[4] Peng Cheng Lab, Shenzhen 518000, Guangdong, Peoples R China
[5] China Merchants Secur Co Ltd, Shenzhen 518000, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sarcasm detection; Contrastive learning; Sememe knowledge; Deep learning; IRONY; MODEL;
DOI
10.1007/s13042-023-01884-9
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Sarcasm is a prevailing rhetorical device that intentionally uses words whose literal meaning is the opposite of the intended meaning. Because of this deliberate ambiguity, accurately detecting sarcasm helps reveal users' real intentions, making sarcasm detection a critical and challenging task in sentiment analysis. In previous research, neural network-based models have generally performed unsatisfactorily on complex sarcastic expressions. To ameliorate this situation, we propose SLGC, a multi-perspective contrastive learning framework for sarcasm detection that is guided by sememe knowledge and label information and built on a pre-trained neural model. From the in-instance perspective, we leverage sememes, the minimum units of meaning, to guide contrastive learning toward high-quality sentence representations. From the between-instance perspective, we use label information to guide contrastive learning to mine potential interaction relationships among sarcastic expressions. Experiments on two public benchmark sarcasm detection datasets demonstrate that our approach significantly outperforms the current state-of-the-art models.
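The between-instance, label-guided objective described in the abstract follows the general pattern of supervised contrastive learning: sentence representations that share a sarcasm label are pulled together while differently labeled ones are pushed apart. The minimal PyTorch sketch below illustrates that general pattern only; the function name, temperature value, and toy batch are assumptions for illustration and are not taken from the paper, and the exact SLGC loss may differ.

import torch
import torch.nn.functional as F

def label_guided_contrastive_loss(embeddings, labels, temperature=0.1):
    # Generic supervised contrastive loss (Khosla et al., 2020 style);
    # illustrative sketch only, not the exact SLGC objective.
    z = F.normalize(embeddings, dim=1)               # (N, d) unit-length sentence vectors
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))  # exclude self-comparisons
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)     # guard against anchors with no positives
    loss = -(log_prob * pos_mask.float()).sum(dim=1) / pos_count
    return loss.mean()

# Toy usage: four sentence embeddings (e.g. [CLS] vectors) with sarcasm labels
emb = torch.randn(4, 768)
lab = torch.tensor([1, 0, 1, 0])
print(label_guided_contrastive_loss(emb, lab).item())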
Pages: 4119-4134
Page count: 16