DHHN: Dual Hierarchical Hybrid Network for Weakly-Supervised Audio-Visual Video Parsing

Cited by: 18
|
Authors
Jiang, Xun [1 ]
Xu, Xing [1 ]
Chen, Zhiguo [1 ]
Zhang, Jingran [1 ]
Song, Jingkuan [1 ]
Shen, Fumin [1 ]
Lu, Huimin [2 ]
Shen, Heng Tao [1 ,3 ]
Affiliations
[1] Univ Elect Sci & Technol China, Chengdu, Peoples R China
[2] Kyushu Inst Technol, Kitakyushu, Fukuoka, Japan
[3] Peng Cheng Lab, Shenzhen, Peoples R China
Source
PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022 | 2022
Keywords
Multimodality; Weakly-supervised Learning; Video Understanding; Audio-Visual Comprehension
DOI
10.1145/3503161.3548309
CLC number
TP39 [Applications of Computers]
Discipline classification code
081203; 0835
Abstract
The Weakly-Supervised Audio-Visual Video Parsing (AVVP) task aims to parse a video into temporal segments and predict their event categories per modality, labeling each segment as audible, visible, or both. Since temporal boundaries and modality annotations are not provided and only video-level event labels are available, this task is more challenging than conventional video understanding tasks. Most previous works analyze videos by jointly modeling the audio and visual data and then learning from segment-level features of a fixed length. However, such a design has two defects: 1) The varied semantic information carried by different temporal lengths is neglected, which may lead the models to learn incorrect information; 2) Due to the joint context modeling, the unique features of the different modalities are not fully explored. In this paper, we propose a novel AVVP framework termed Dual Hierarchical Hybrid Network (DHHN) to tackle these two problems. Our DHHN method consists of three components: 1) A hierarchical context modeling network that extracts different semantics over multiple temporal lengths; 2) A modality-wise guiding network that learns the unique information of each modality; 3) A dual-stream framework that generates audio and visual predictions separately. This design maintains the best adaptation to each modality, further boosting video parsing performance. Extensive quantitative and qualitative experiments demonstrate that our proposed method establishes new state-of-the-art performance on the AVVP task.
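The abstract describes a three-part architecture but gives no implementation details. The snippet below is a minimal, hypothetical PyTorch sketch of how a dual-stream, hierarchically encoded, weakly-supervised parser of this kind could be wired; it is not the authors' code. The module names (HierarchicalTemporalEncoder, DualStreamParser), the feature dimension (512), the number of event classes (25), the temporal scales, and the attention-pooled video-level BCE loss are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalTemporalEncoder(nn.Module):
    # Encodes segment features at several temporal scales (kernel sizes) and
    # fuses them -- a rough stand-in for hierarchical context modeling over
    # multiple temporal lengths.
    def __init__(self, dim, scales=(1, 2, 4)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, kernel_size=s, padding=s // 2) for s in scales
        )
        self.fuse = nn.Linear(dim * len(scales), dim)

    def forward(self, x):                              # x: (B, T, D)
        h = x.transpose(1, 2)                          # (B, D, T) for Conv1d
        multi = [c(h)[..., : x.size(1)].transpose(1, 2) for c in self.convs]
        return self.fuse(torch.cat(multi, dim=-1))     # (B, T, D)


class DualStreamParser(nn.Module):
    # Audio and visual streams are encoded and classified separately so each
    # modality keeps its own representation; attention pooling turns
    # segment-level scores into video-level predictions for weak supervision.
    def __init__(self, dim=512, num_classes=25):
        super().__init__()
        self.audio_enc = HierarchicalTemporalEncoder(dim)
        self.visual_enc = HierarchicalTemporalEncoder(dim)
        self.audio_head = nn.Linear(dim, num_classes)
        self.visual_head = nn.Linear(dim, num_classes)
        self.audio_att = nn.Linear(dim, 1)
        self.visual_att = nn.Linear(dim, 1)

    def forward(self, audio_feat, visual_feat):        # each: (B, T, D)
        a = self.audio_enc(audio_feat)
        v = self.visual_enc(visual_feat)
        a_seg = torch.sigmoid(self.audio_head(a))      # (B, T, C) audio events per segment
        v_seg = torch.sigmoid(self.visual_head(v))     # (B, T, C) visual events per segment
        a_w = torch.softmax(self.audio_att(a), dim=1)  # temporal attention weights
        v_w = torch.softmax(self.visual_att(v), dim=1)
        a_video = (a_w * a_seg).sum(dim=1)             # (B, C) video-level audio prediction
        v_video = (v_w * v_seg).sum(dim=1)             # (B, C) video-level visual prediction
        return a_seg, v_seg, a_video, v_video


if __name__ == "__main__":
    # Weak supervision: only a video-level multi-hot label is assumed available,
    # so both modality-level video predictions are trained against it with BCE.
    model = DualStreamParser()
    audio = torch.randn(2, 10, 512)                    # e.g. ten 1-second segments
    visual = torch.randn(2, 10, 512)
    video_label = torch.randint(0, 2, (2, 25)).float()
    _, _, a_video, v_video = model(audio, visual)
    loss = (F.binary_cross_entropy(a_video, video_label)
            + F.binary_cross_entropy(v_video, video_label))
    loss.backward()
```

During inference, the segment-level outputs a_seg and v_seg would be thresholded to obtain audio, visual, and audio-visual event predictions per segment; the exact guiding mechanism between modalities in DHHN is not reproduced here.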
Pages: 9
Related Papers
50 in total
  • [21] MAVT-FG: Multimodal Audio-Visual Transformer for Weakly-supervised Fine-Grained Recognition
    Zhou, Xiaoyu
    Song, Xiaotong
    Wu, Hao
    Zhang, Jingran
    Xu, Xing
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 3811 - 3819
  • [22] Toward a perceptive pretraining framework for Audio-Visual Video Parsing
    Wu, Jianning
    Jiang, Zhuqing
    Chen, Qingchao
    Wen, Shiping
    Men, Aidong
    Wang, Haiying
    INFORMATION SCIENCES, 2022, 609 : 897 - 912
  • [23] Cross-Modal learning for Audio-Visual Video Parsing
    Lamba, Jatin
    Abhishek
    Akula, Jayaprakash
    Dabral, Rishabh
    Jyothi, Preethi
    Ramakrishnan, Ganesh
    INTERSPEECH 2021, 2021, : 1937 - 1941
  • [24] Weakly-supervised Disentanglement Network for Video Fingerspelling Detection
    Jiang, Ziqi
    Zhang, Shengyu
    Yao, Siyuan
    Zhang, Wenqiao
    Zhang, Sihan
    Li, Juncheng
    Zhao, Zhou
    Wu, Fei
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5446 - 5455
  • [25] Audio-Visual Weakly Supervised Approach for Apathy Detection in the Elderly
    Sharma, Garima
    Joshi, Jyoti
    Zeghari, Radia
    Guerchouche, Rachid
    2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [26] Weakly Supervised Representation Learning for Audio-Visual Scene Analysis
    Parekh, Sanjeel
    Essid, Slim
    Ozerov, Alexey
    Duong, Ngoc Q. K.
    Perez, Patrick
    Richard, Gael
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2020, 28 (28) : 416 - 428
  • [27] Collecting Cross-Modal Presence-Absence Evidence for Weakly-Supervised Audio-Visual Event Perception
    Gao, Junyu
    Chen, Mengyuan
    Xu, Changsheng
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 18827 - 18836
  • [28] Label-Anticipated Event Disentanglement for Audio-Visual Video Parsing
    Zhou, Jinxing
    Guo, Dan
    Mao, Yuxin
    Zhong, Yiran
    Chang, Xiaojun
    Wang, Meng
    COMPUTER VISION - ECCV 2024, PT X, 2025, 15068 : 35 - 51
  • [29] Modality-Aware Contrastive Instance Learning with Self-Distillation for Weakly-Supervised Audio-Visual Violence Detection
    Yu, Jiashuo
    Liu, Jinyu
    Cheng, Ying
    Feng, Rui
    Zhang, Yuejie
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 6278 - 6287
  • [30] Learning weakly supervised audio-visual violence detection in hyperbolic space
    Zhou, Xiao
    Peng, Xiaogang
    Wen, Hao
    Luo, Yikai
    Yu, Keyang
    Yang, Ping
    Wu, Zizhao
    IMAGE AND VISION COMPUTING, 2024, 151