Data-Driven End-to-End Optimization of Radio Over Fiber Transmission System Based on Self-Supervised Learning

Cited by: 0
Authors
Zhu, Yue [1 ]
Ye, Jia [1 ]
Yan, Lianshan [1 ]
Zhou, Tao [2 ]
Yu, Xiao [1 ]
Zou, Xihua [1 ]
Pan, Wei [1 ]
Affiliations
[1] Southwest Jiaotong Univ, Ctr Informat Photon & Commun, Sch Informat Sci & Technol, Chengdu 611756, Peoples R China
[2] Southwest China Res Inst Elect Equipment, Key Lab Elect Informat Control, Chengdu 610036, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
neural network; radio over fiber (RoF); end-to-end communication system; self-supervised learning;
DOI
10.1109/JLT.2024.3403129
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Codes
0808; 0809;
Abstract
End-to-End (E2E) learning has increasingly become a dominant method for enhancing the performance of communication systems. In this paper, an E2E Radio Over Fiber (RoF) transmission system utilizing self-supervised learning (SSL) is proposed. This E2E approach enables automatic optimization of the transmitter-side modulator and receiver-side demodulator according to the specific channel characteristics. The SSL architecture comprises four deep neural networks: TransNN for symbol mapping, SamplNN for upsampling, ChannelNN for channel modeling, and ReceivNN for demodulation, which collectively replace the traditional components in the RoF link. The E2E system adjusts the geometric shape of the constellation and the upsampling rules according to the RoF channel properties, enabling transmission with enhanced receiver sensitivity. Perturbation noise is incorporated during the training phase to improve the generalization ability of the SSL architecture. Notably, traditional demodulation methods cannot demodulate the RF signals transmitted in the E2E system, which adds a layer of confidentiality to the transmission. Numerical simulations were conducted on a 10 GHz, 2 Gsym/s RoF transmission system. The results indicate that, compared with traditional approaches, the receiver sensitivity of the E2E system improves by 3.5 dB under a BER limit of 2.4e-2. Compared with optimizing only the receiver side, the E2E system achieves a sensitivity improvement of at least 3 dB. The numerical experiments also validate the importance of SamplNN and perturbation training for the E2E system: these elements improve the system's transmission performance and generalization ability, as well as enhance its security.
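The four-network pipeline described in the abstract can be sketched as a single differentiable forward pass: TransNN shapes the constellation, SamplNN upsamples, ChannelNN acts as a surrogate of the RoF link, and ReceivNN demodulates, with perturbation noise injected during training. The sketch below is illustrative only, assuming single-layer tanh networks, a hypothetical constellation order `M = 16`, upsampling factor `UP = 4`, and random (untrained) weights; the paper does not specify these dimensions or layer structures.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(x, w, b):
    """One fully connected layer with tanh activation."""
    return np.tanh(x @ w + b)

M = 16   # constellation order (hypothetical): symbols are one-hot vectors
UP = 4   # upsampling factor (hypothetical)

# TransNN: maps one-hot symbols to learned 2-D (I/Q) constellation points.
W_t, b_t = rng.normal(0, 0.1, (M, 2)), np.zeros(2)
# SamplNN: expands each constellation point into UP waveform samples.
W_s, b_s = rng.normal(0, 0.1, (2, UP)), np.zeros(UP)
# ChannelNN: differentiable surrogate of the RoF channel (toy nonlinearity here).
W_c, b_c = rng.normal(0, 0.1, (UP, UP)), np.zeros(UP)
# ReceivNN: maps channel output back to symbol probabilities.
W_r, b_r = rng.normal(0, 0.1, (UP, M)), np.zeros(M)

def forward(onehot, train=True, sigma=0.05):
    iq = dense(onehot, W_t, b_t)             # TransNN: geometric shaping
    wave = dense(iq, W_s, b_s)               # SamplNN: learned upsampling
    if train:                                # perturbation noise for generalization
        wave = wave + rng.normal(0, sigma, wave.shape)
    rx = dense(wave, W_c, b_c)               # ChannelNN: surrogate RoF link
    logits = rx @ W_r + b_r                  # ReceivNN: demodulation
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True) # softmax over the M symbols

symbols = np.eye(M)[rng.integers(0, M, 8)]   # batch of 8 random one-hot symbols
probs = forward(symbols)
print(probs.shape)                           # (8, 16)
```

In the actual E2E scheme, all four networks would be trained jointly (e.g. with a cross-entropy loss between transmitted symbols and `probs`), which is what lets the constellation shape and upsampling rules adapt to the RoF channel.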
Pages: 7532-7543
Page count: 12