SEQ2SEQ-SC: END-TO-END SEMANTIC COMMUNICATION SYSTEMS WITH PRE-TRAINED LANGUAGE MODEL

Cited by: 3
Authors
Lee, Ju-Hyung [1 ]
Lee, Dong-Ho [1 ]
Sheen, Eunsoo [1 ]
Choi, Thomas [1 ]
Pujara, Jay [1 ]
Affiliations
[1] University of Southern California, Los Angeles, CA 90007 USA
Keywords
Semantic communication; natural language processing (NLP); link-level simulation
DOI
10.1109/IEEECONF59524.2023.10476895
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this work, we propose a realistic semantic communication network called seq2seq-SC, designed to be compatible with 5G NR and to work with generalized text datasets using a pre-trained language model. The goal is to achieve unprecedented communication efficiency by focusing on the meaning of messages rather than on their exact bit-level reconstruction. Performance is evaluated with two metrics: BLEU, which measures lexical similarity, and SBERT embedding similarity, which measures semantic similarity. Our findings demonstrate that seq2seq-SC outperforms previous models at extracting semantically meaningful information from received signals. This study paves the way for continued advances in semantic communication and its prospective incorporation into future 6G wireless systems.
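The two evaluation metrics named in the abstract are standard and straightforward to reproduce. Below is a minimal sketch of scoring a decoded message against the transmitted reference, assuming the sacrebleu and sentence-transformers packages; the SBERT checkpoint name is an illustrative assumption, not necessarily the one used in the paper.

```python
# Minimal sketch: score a decoded sentence against the transmitted reference
# with BLEU (lexical similarity) and SBERT cosine similarity (semantic
# similarity). Assumes `pip install sacrebleu sentence-transformers`.
import sacrebleu
from sentence_transformers import SentenceTransformer, util

def similarity_scores(reference: str, decoded: str) -> dict:
    # Sentence-level BLEU; sacrebleu reports scores on a 0-100 scale.
    bleu = sacrebleu.sentence_bleu(decoded, [reference]).score

    # Cosine similarity between SBERT sentence embeddings, in [-1, 1].
    # "all-MiniLM-L6-v2" is an illustrative checkpoint (an assumption).
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode([reference, decoded], convert_to_tensor=True)
    sbert = util.cos_sim(emb[0], emb[1]).item()

    return {"bleu": bleu, "sbert": sbert}

# A paraphrase scores low on BLEU but high on SBERT, which is exactly the
# gap that motivates semantic (rather than purely lexical) evaluation.
print(similarity_scores(
    "the weather in los angeles is sunny today",
    "today it is sunny in los angeles",
))
```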
Pages: 260-264 (5 pages)
Related Papers (50 in total)
  • [1] Grounding End-to-End Pre-trained architectures for Semantic Role Labeling in multiple languages
    Hromei, Claudiu D.
    Croce, Danilo
    Basili, Roberto
    INTELLIGENZA ARTIFICIALE, 2023, 17 (02) : 173 - 191
  • [2] Pre-trained multimodal end-to-end network for spoken language assessment incorporating prompts
    Lin, Binghuai
    Wang, Liyuan
    PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022, : 1394 - 1398
  • [3] End-to-end speech topic classification based on pre-trained model Wavlm
    Cao, Tengfei
    He, Liang
    Niu, Fangjing
    2022 13TH INTERNATIONAL SYMPOSIUM ON CHINESE SPOKEN LANGUAGE PROCESSING (ISCSLP), 2022, : 369 - 373
  • [4] Disambiguation of Chinese Polyphones in an End-to-End Framework with Semantic Features Extracted by Pre-trained BERT
    Dai, Dongyang
    Wu, Zhiyong
    Kang, Shiyin
    Wu, Xixin
    Jia, Jia
    Su, Dan
    Yu, Dong
    Meng, Helen
    INTERSPEECH 2019, 2019, : 2090 - 2094
  • [5] End-to-End Pre-trained Dialogue System for Automatic Diagnosis
    Wang, Yuan
    Li, Zekun
    Zeng, Leilei
    Zhao, Tingting
    CCKS 2021 - EVALUATION TRACK, 2022, 1553 : 82 - 91
  • [6] End-to-End Visual Editing with a Generatively Pre-trained Artist
    Brown, Andrew
    Fu, Cheng-Yang
    Parkhi, Omkar
    Berg, Tamara L.
    Vedaldi, Andrea
    COMPUTER VISION - ECCV 2022, PT XV, 2022, 13675 : 18 - 35
  • [7] Transfer Learning from Pre-trained Language Models Improves End-to-End Speech Summarization
    Matsuura, Kohei
    Ashihara, Takanori
    Moriya, Takafumi
    Tanaka, Tomohiro
    Kano, Takatomo
    Ogawa, Atsunori
    Delcroix, Marc
    INTERSPEECH 2023, 2023, : 2943 - 2947
  • [8] INTEGRATION OF PRE-TRAINED NETWORKS WITH CONTINUOUS TOKEN INTERFACE FOR END-TO-END SPOKEN LANGUAGE UNDERSTANDING
    Seo, Seunghyun
    Kwak, Donghyun
    Lee, Bowon
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 7152 - 7156
  • [9] IMPROVING NON-AUTOREGRESSIVE END-TO-END SPEECH RECOGNITION WITH PRE-TRAINED ACOUSTIC AND LANGUAGE MODELS
    Deng, Keqi
    Yang, Zehui
    Watanabe, Shinji
    Higuchi, Yosuke
    Cheng, Gaofeng
    Zhang, Pengyuan
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 8522 - 8526
  • [10] END-TO-END SPOKEN LANGUAGE UNDERSTANDING USING TRANSFORMER NETWORKS AND SELF-SUPERVISED PRE-TRAINED FEATURES
    Morais, Edmilson
    Kuo, Hong-Kwang J.
    Thomas, Samuel
    Tuske, Zoltan
    Kingsbury, Brian
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 7483 - 7487