ASiT: Local-Global Audio Spectrogram Vision Transformer for Event Classification

Cited by: 2
Authors
Ahmed, Sara Atito Ali [1 ,2 ]
Awais, Muhammad [1 ,2 ]
Wang, Wenwu [1 ,2 ]
Plumbley, Mark D. [1 ,2 ]
Kittler, Josef [1 ,2 ]
Affiliations
[1] Univ Surrey, CVSSP, Guildford GU2 5XH, Surrey, England
[2] Univ Surrey, Surrey Inst People Ctr AI, Guildford GU2 7XH, Surrey, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Spectrogram; Transformers; Task analysis; Image reconstruction; Computational modeling; Context modeling; Similarity learning; Self-supervised learning; vision transformers; audio spectrogram; group masked model learning; audio classification;
DOI
10.1109/TASLP.2024.3428908
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Codes
070206; 082403;
Abstract
Transformers, originally developed for natural language processing, have recently generated significant interest in the computer vision and audio communities owing to their flexibility in learning long-range relationships. Constrained by the data-hungry nature of transformers and the limited amount of labelled data, most transformer-based models for audio tasks are fine-tuned from ImageNet-pretrained models, despite the large gap between the domain of natural images and audio. This has motivated research into self-supervised pretraining of audio transformers, which reduces the dependency on large amounts of labelled data and focuses on extracting concise representations of audio spectrograms. In this paper, we propose the Local-Global Audio Spectrogram vIsion Transformer (ASiT), a novel self-supervised learning framework that captures local and global contextual information by employing group masked model learning and self-distillation. We evaluate our pretrained models on both audio and speech classification tasks, including audio event classification, keyword spotting, and speaker identification. We further conduct comprehensive ablation studies, including evaluations of different pretraining strategies. The proposed ASiT framework significantly boosts performance on all tasks and sets a new state of the art on five audio and speech classification tasks, outperforming recent methods, including approaches that use additional datasets for pretraining.
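The group masked model learning mentioned in the abstract masks contiguous groups of spectrogram patches so that the model must reconstruct local time-frequency structure from surrounding context. A minimal sketch of such block-wise masking is given below; the patch size, group shape, mask ratio, and zero-filling of masked regions are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def group_mask_spectrogram(spec, patch=(16, 16), group=(4, 2),
                           mask_ratio=0.5, rng=None):
    """Mask contiguous groups of patches on a 2-D spectrogram.

    spec: array of shape (freq_bins, time_frames); both dimensions are
    assumed divisible by the patch size. Returns the masked spectrogram
    and the boolean patch-level mask.
    """
    rng = rng or np.random.default_rng(0)
    F, T = spec.shape
    pf, pt = patch
    gf, gt = group
    n_f, n_t = F // pf, T // pt
    mask = np.zeros((n_f, n_t), dtype=bool)
    # Sample top-left corners of patch groups until the target ratio is hit.
    while mask.mean() < mask_ratio:
        i = rng.integers(0, max(n_f - gf, 0) + 1)
        j = rng.integers(0, max(n_t - gt, 0) + 1)
        mask[i:i + gf, j:j + gt] = True
    masked = spec.copy()
    # Zero out masked patches (noise or learned tokens are common alternatives).
    for i in range(n_f):
        for j in range(n_t):
            if mask[i, j]:
                masked[i * pf:(i + 1) * pf, j * pt:(j + 1) * pt] = 0.0
    return masked, mask
```

In a full pretraining loop, a student transformer would receive the masked spectrogram and be trained to reconstruct the hidden patches, while an EMA teacher seeing the unmasked input provides the self-distillation targets.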
Pages: 3684-3693
Page count: 10