End-to-End Paired Ambisonic-Binaural Audio Rendering

Cited by: 1
Authors
Zhu, Yin [1 ]
Kong, Qiuqiang [2 ,3 ]
Shi, Junjie [2 ,3 ]
Liu, Shilei [2 ,3 ]
Ye, Xuzhou [2 ,3 ]
Wang, Ju-Chiang [2 ,3 ]
Shan, Hongming [4 ,5 ,6 ]
Zhang, Junping [1 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China
[2] Beijing ByteDance Technol Co Ltd, Shanghai 201102, Peoples R China
[3] Chinese Univ Hong Kong, Hong Kong, Peoples R China
[4] Fudan Univ, Inst Sci & Technol Brain Inspired Intelligence, Shanghai 200433, Peoples R China
[5] Fudan Univ, MOE Frontiers Ctr Brain Sci, Shanghai 200433, Peoples R China
[6] Shanghai Ctr Brain Sci & Brain Inspired Technol, Shanghai 200031, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Measurement; Costs; Neural networks; Virtual reality; Rendering (computer graphics); Task analysis; Optimization; Ambisonic; attention; binaural rendering; neural network
DOI
10.1109/JAS.2023.123969
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Binaural rendering is of great interest to virtual reality and immersive media. Although humans naturally use their two ears to perceive the spatial information contained in sounds, binaural rendering is a challenging task for machines, since describing a sound field often requires multiple channels and even metadata about the sound sources. In addition, the perceived sound varies from person to person even in the same sound field. Previous methods generally rely on individual-dependent head-related transfer function (HRTF) datasets and optimization algorithms that act on HRTFs. In practical applications, existing methods have two major drawbacks. The first is a high personalization cost, because traditional methods achieve personalization by measuring HRTFs. The second is insufficient accuracy, because traditional methods discard part of the information in order to retain the part that is perceptually more important. It is therefore desirable to develop techniques that achieve both personalization and accuracy at low cost. To this end, we focus on the binaural rendering of ambisonic audio and propose 1) a channel-shared encoder and channel-compared attention integrated into neural networks, and 2) a loss function that quantifies interaural level differences to handle spatial information. To verify the proposed method, we collect and release the first paired ambisonic-binaural dataset and introduce three metrics to evaluate the content accuracy and spatial accuracy of end-to-end methods. Extensive experimental results on the collected dataset demonstrate the superior performance of the proposed method and the shortcomings of previous methods.
Pages: 502-513
Number of pages: 12
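The abstract refers to a loss function that quantifies interaural level differences (ILD) but gives no formulation. The following is a minimal NumPy sketch of one plausible ILD-matching loss, assuming a broadband, frame-wise RMS level comparison between the rendered and reference binaural signals; the function names, frame parameters, and exact definition here are illustrative assumptions, not the formulation used in the paper.

import numpy as np

def ild_db(left, right, eps=1e-8):
    # Interaural level difference in dB from broadband RMS levels.
    # Per-frequency-band variants (e.g., computed on STFT bands) are also common.
    rms_l = np.sqrt(np.mean(left ** 2)) + eps
    rms_r = np.sqrt(np.mean(right ** 2)) + eps
    return 20.0 * np.log10(rms_l / rms_r)

def ild_loss(pred, target, frame_len=2048, hop=1024):
    # Mean absolute difference between frame-wise ILDs of the predicted and
    # reference binaural signals; pred and target have shape (2, num_samples).
    losses = []
    for start in range(0, pred.shape[1] - frame_len + 1, hop):
        p = pred[:, start:start + frame_len]
        t = target[:, start:start + frame_len]
        losses.append(abs(ild_db(p[0], p[1]) - ild_db(t[0], t[1])))
    return float(np.mean(losses)) if losses else 0.0

# Toy check: attenuating one channel of an otherwise perfect rendering by 0.8
# should yield an ILD error of about 20*log10(0.8) ~= 1.94 dB per frame.
rng = np.random.default_rng(0)
target = rng.standard_normal((2, 96000))    # 2 s of stereo audio at 48 kHz
pred = target * np.array([[0.8], [1.0]])    # left channel attenuated
print(ild_loss(pred, target))

In practice such a term would be combined with a waveform or spectral reconstruction loss, since the ILD term alone constrains only the relative level of the two channels.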