SEQ2SEQ-SC: END-TO-END SEMANTIC COMMUNICATION SYSTEMS WITH PRE-TRAINED LANGUAGE MODEL

Cited by: 3
Authors
Lee, Ju-Hyung [1 ]
Lee, Dong-Ho [1 ]
Sheen, Eunsoo [1 ]
Choi, Thomas [1 ]
Pujara, Jay [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
Keywords
Semantic communication; natural language processing (NLP); link-level simulation
DOI
10.1109/IEEECONF59524.2023.10476895
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
In this work, we propose a realistic semantic communication network, seq2seq-SC, designed to be compatible with 5G NR and to work with generalized text datasets using a pre-trained language model. The goal is to achieve unprecedented communication efficiency by focusing on the meaning of messages. We evaluate performance with two similarity metrics: BLEU for lexical similarity and SBERT for semantic similarity. Our findings demonstrate that seq2seq-SC outperforms previous models in extracting semantically meaningful information while maintaining superior performance. This study paves the way for continued advances in semantic communication and its prospective integration with future 6G wireless systems.
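The abstract names the two evaluation metrics without showing how they are computed. As a minimal illustration, the Python sketch below scores a receiver-reconstructed sentence against the transmitted reference using sacrebleu for sentence-level BLEU and the sentence-transformers library for SBERT embedding cosine similarity. The library choices and the "all-MiniLM-L6-v2" checkpoint are assumptions made for this sketch, not details taken from the paper.

    # Similarity-scoring sketch (assumed libraries: sacrebleu, sentence-transformers).
    # pip install sacrebleu sentence-transformers
    import sacrebleu
    from sentence_transformers import SentenceTransformer, util

    reference = "the weather is nice today"        # transmitted sentence
    hypothesis = "today the weather is pleasant"   # sentence recovered by the receiver

    # Lexical similarity: sentence-level BLEU, on a 0-100 scale.
    bleu = sacrebleu.sentence_bleu(hypothesis, [reference]).score

    # Semantic similarity: cosine similarity of SBERT sentence embeddings.
    # The checkpoint name is an assumption; the record does not specify one.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode([reference, hypothesis], convert_to_tensor=True)
    sbert_sim = util.cos_sim(emb[0], emb[1]).item()

    print(f"BLEU: {bleu:.1f}  SBERT similarity: {sbert_sim:.3f}")

A paraphrase like the one above typically yields a modest BLEU score but a high SBERT similarity, which is precisely the regime semantic communication targets: the receiver may reword the message while preserving its meaning.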
Pages: 260-264 (5 pages)
Related Papers (items [31]-[40] of 50)
  • [31] BERTIVITS: The Posterior Encoder Fusion of Pre-Trained Models and Residual Skip Connections for End-to-End Speech Synthesis
    Wang, Zirui
    Song, Minqi
    Zhou, Dongbo
    APPLIED SCIENCES-BASEL, 2024, 14 (12)
  • [32] Self-Supervised Pre-Trained Speech Representation Based End-to-End Mispronunciation Detection and Diagnosis of Mandarin
    Shen, Yunfei
    Liu, Qingqing
    Fan, Zhixing
    Liu, Jiajun
    Wumaier, Aishan
    IEEE ACCESS, 2022, 10: 106451-106462
  • [33] Model-Free Training of End-to-End Communication Systems
    Aoudia, Faycal Ait
    Hoydis, Jakob
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2019, 37 (11): 2503-2516
  • [34] FG2SEQ: EFFECTIVELY ENCODING KNOWLEDGE FOR END-TO-END TASK-ORIENTED DIALOG
    He, Zhenhao
    He, Yuhong
    Wu, Qingyao
    Chen, Jian
    2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, 2020: 8029-8033
  • [35] Speech Model Pre-training for End-to-End Spoken Language Understanding
    Lugosch, Loren
    Ravanelli, Mirco
    Ignoto, Patrick
    Tomar, Vikrant Singh
    Bengio, Yoshua
    INTERSPEECH 2019, 2019: 814-818
  • [36] Auto-EM: End-to-end Fuzzy Entity-Matching using Pre-trained Deep Models and Transfer Learning
    Zhao, Chen
    He, Yeye
    WEB CONFERENCE 2019: PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE (WWW 2019), 2019: 2413-2424
  • [37] Towards semantic versioning of open pre-trained language model releases on hugging face
    Ajibode, Adekunle
    Bangash, Abdul Ali
    Cogo, Filipe R.
    Adams, Bram
    Hassan, Ahmed E.
    EMPIRICAL SOFTWARE ENGINEERING, 2025, 30 (03)
  • [38] Seq2KG: An End-to-End Neural Model for Domain Agnostic Knowledge Graph (not Text Graph) Construction from Text
    Stewart, Michael
    Liu, Wei
    KR2020: PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PRINCIPLES OF KNOWLEDGE REPRESENTATION AND REASONING, 2020: 748-757
  • [39] Soft cosine and extended cosine adaptation for pre-trained language model semantic vector analysis
    Ijebu, Funebi Francis
    Liu, Yuanchao
    Sun, Chengjie
    Usip, Patience Usoro
    APPLIED SOFT COMPUTING, 2025, 169
  • [40] Prediction of Single-Mutation Effects for Fluorescent Immunosensor Engineering with an End-to-End Trained Protein Language Model
    Inoue, Akihito
    Zhu, Bo
    Mizutani, Keisuke
    Kobayashi, Ken
    Yasuda, Takanobu
    Wellner, Alon
    Liu, Chang C.
    Kitaguchi, Tetsuya
    JACS AU, 2025, 5 (02): 955-964