SEQ2SEQ-SC: END-TO-END SEMANTIC COMMUNICATION SYSTEMS WITH PRE-TRAINED LANGUAGE MODEL

Cited by: 3
Authors
Lee, Ju-Hyung [1 ]
Lee, Dong-Ho [1 ]
Sheen, Eunsoo [1 ]
Choi, Thomas [1 ]
Pujara, Jay [1 ]
Affiliations
[1] Univ Southern Calif, Los Angeles, CA 90007 USA
Keywords
Semantic communication; natural language processing (NLP); link-level simulation
DOI
10.1109/IEEECONF59524.2023.10476895
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
In this work, we propose a realistic semantic communication network called seq2seq-SC, designed to be compatible with 5G NR and to work with generalized text datasets using a pre-trained language model. The goal is to achieve unprecedented communication efficiency by focusing on the meaning of messages in semantic communication. We evaluate performance with two similarity metrics: BLEU for lexical similarity and SBERT for semantic similarity. Our findings demonstrate that seq2seq-SC outperforms previous models in extracting semantically meaningful information while maintaining superior performance. This study paves the way for continued advancements in semantic communication and its prospective integration into future 6G wireless systems.
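The two evaluation metrics named in the abstract can be sketched in code. The snippet below is an illustrative, simplified stand-in, not the paper's evaluation pipeline: `bleu` is a single-reference sentence BLEU (modified n-gram precisions with a brevity penalty, crudely smoothed), and `cosine` shows the similarity that is typically applied to SBERT sentence embeddings; the actual work presumably uses a standard BLEU implementation and the sentence-transformers library.

```python
import math
from collections import Counter

def bleu(reference: str, candidate: str, max_n: int = 4) -> float:
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n) times a brevity penalty."""
    ref, cand = reference.split(), candidate.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(cand_ngrams.values()), 1)
        # Crude smoothing so a zero higher-order precision does not give log(0).
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

def cosine(u, v) -> float:
    """Cosine similarity, as applied to SBERT sentence embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# A decoded sentence identical to the transmitted one scores 1.0;
# a paraphrase scores lower on BLEU even when its meaning is preserved,
# which is why the paper pairs BLEU with an embedding-based measure.
identical = bleu("the cat sat on the mat", "the cat sat on the mat")
paraphrase = bleu("the cat sat on the mat", "a cat rested on the mat")
```

The gap between `identical` and `paraphrase` illustrates the motivation for SBERT: lexical overlap penalizes semantically faithful reconstructions that use different words.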
Pages: 260-264
Page count: 5
Related papers
50 records in total
  • [21] Seq2science: an end-to-end workflow for functional genomics analysis
    van der Sande, Maarten
    Frolich, Siebren
    Schafers, Tilman
    Smits, Jos G. A.
    Snabel, Rebecca R.
    Rinzema, Sybren
    van Heeringen, Simon J.
    PEERJ, 2023, 11
  • [22] FINE-TUNING OF PRE-TRAINED END-TO-END SPEECH RECOGNITION WITH GENERATIVE ADVERSARIAL NETWORKS
    Haidar, Md Akmal
    Rezagholizadeh, Mehdi
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 6204 - 6208
  • [23] On the Uses of Large Language Models to Design End-to-end Learning Semantic Communication
    Wang, Ying
    Sun, Zhuo
    Fan, Jinpo
    Ma, Hao
    2024 IEEE WIRELESS COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC 2024, 2024,
  • [24] Prosperous Human Gait Recognition: an end-to-end system based on pre-trained CNN features selection
    Mehmood, Asif
    Khan, Muhammad Attique
    Sharif, Muhammad
    Khan, Sajid Ali
    Shaheen, Muhammad
    Saba, Tanzila
    Riaz, Naveed
    Ashraf, Imran
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (05) : 14979 - 14999
  • [25] Reusing Monolingual Pre-Trained Models by Cross-Connecting Seq2seq Models for Machine Translation
    Oh, Jiun
    Choi, Yong-Suk
    APPLIED SCIENCES-BASEL, 2021, 11 (18):
  • [27] Three-Module Modeling For End-to-End Spoken Language Understanding Using Pre-trained DNN-HMM-Based Acoustic-Phonetic Model
    Wang, Nick J. C.
    Wang, Lu
    Sun, Yandan
    Kang, Haimei
    Zhang, Dejun
    INTERSPEECH 2021, 2021, : 4718 - 4722
  • [28] Continually adapting pre-trained language model to universal annotation of single-cell RNA-seq data
    Wan, Hui
    Yuan, Musu
    Fu, Yiwei
    Deng, Minghua
    BRIEFINGS IN BIOINFORMATICS, 2024, 25 (02)
  • [29] DIA-BERT: pre-trained end-to-end transformer models for enhanced DIA proteomics data analysis
    Liu, Zhiwei
    Liu, Pu
    Sun, Yingying
    Nie, Zongxiang
    Zhang, Xiaofan
    Zhang, Yuqi
    Chen, Yi
    Guo, Tiannan
    NATURE COMMUNICATIONS, 16 (1)
  • [30] Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems
    Madotto, Andrea
    Wu, Chien-Sheng
    Fung, Pascale
    PROCEEDINGS OF THE 56TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL), VOL 1, 2018, : 1468 - 1478