AST: Audio Spectrogram Transformer

Cited by: 255
Authors
Gong, Yuan [1 ]
Chung, Yu-An [1 ]
Glass, James [1 ]
Affiliations
[1] MIT, Comp Sci & Artificial Intelligence Lab, Cambridge, MA 02139 USA
Source
INTERSPEECH 2021 | 2021
Keywords
audio classification; self-attention; Transformer; EMOTION
DOI
10.21437/Interspeech.2021-698
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Discipline codes
100104; 100213
Abstract
In the past decade, convolutional neural networks (CNNs) have been widely adopted as the main building block for end-to-end audio classification models, which aim to learn a direct mapping from audio spectrograms to corresponding labels. To better capture long-range global context, a recent trend is to add a self-attention mechanism on top of the CNN, forming a CNN-attention hybrid model. However, it is unclear whether the reliance on a CNN is necessary, and if neural networks purely based on attention are sufficient to obtain good performance in audio classification. In this paper, we answer the question by introducing the Audio Spectrogram Transformer (AST), the first convolution-free, purely attention-based model for audio classification. We evaluate AST on various audio classification benchmarks, where it achieves new state-of-the-art results of 0.485 mAP on AudioSet, 95.6% accuracy on ESC-50, and 98.1% accuracy on Speech Commands V2.
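The abstract's core idea, mapping spectrogram patches directly to labels with pure self-attention, can be sketched as follows. This is a minimal PyTorch illustration assuming non-overlapping 16x16 patches, a learnable [CLS] token, and standard Transformer encoder layers; the hyperparameters and class names are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class SpectrogramTransformer(nn.Module):
    """Convolution-free classifier sketch: a log-mel spectrogram is split into
    fixed-size patches, each patch is linearly embedded, and the sequence
    (plus a learnable [CLS] token) is fed to a Transformer encoder."""

    def __init__(self, n_mels=128, n_frames=1024, patch=16,
                 dim=768, depth=12, heads=12, n_classes=527):
        super().__init__()
        self.patch = patch
        n_patches = (n_mels // patch) * (n_frames // patch)
        self.proj = nn.Linear(patch * patch, dim)             # patch embedding, no convolution
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))       # classification token
        self.pos = nn.Parameter(torch.zeros(1, n_patches + 1, dim))  # learnable positions
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, spec):                                  # spec: (batch, n_mels, n_frames)
        b = spec.size(0)
        # tile the spectrogram into non-overlapping patch x patch squares
        tiles = spec.unfold(1, self.patch, self.patch).unfold(2, self.patch, self.patch)
        x = self.proj(tiles.reshape(b, -1, self.patch * self.patch))
        x = torch.cat([self.cls.expand(b, -1, -1), x], dim=1) + self.pos
        x = self.encoder(x)
        return self.head(x[:, 0])                             # predict from the [CLS] token


# e.g. a batch of two 128-mel x 1024-frame spectrograms -> AudioSet-sized logits (527 classes assumed)
logits = SpectrogramTransformer()(torch.randn(2, 128, 1024))
print(logits.shape)  # torch.Size([2, 527])
```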
Pages: 571-575
Page count: 5