Generative Adversarial Networks (GANs) for Audio-Visual Speech Recognition in Artificial Intelligence IoT

Cited: 8
Authors
He, Yibo [1 ]
Seng, Kah Phooi [1 ,2 ,3 ]
Ang, Li Minn [3 ]
Affiliations
[1] Xian Jiaotong Liverpool Univ, Sch AI & Adv Comp, Suzhou 215000, Peoples R China
[2] Queensland Univ Technol, Sch Comp Sci, Brisbane, Qld 4000, Australia
[3] Univ Sunshine Coast, Sch Sci Technol & Engn, Sippy Downs, Qld 4556, Australia
Keywords
Internet of things (IoT); generative adversarial networks (GANs); deep learning; audio-visual speech recognition;
DOI
10.3390/info14100575
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
This paper proposes a novel multimodal generative adversarial network architecture for audio-visual speech recognition (multimodal AVSR GAN), to improve both the energy efficiency and the AVSR classification accuracy of artificial intelligence Internet of Things (IoT) applications. Audio-visual speech recognition is a classical multimodal task that is commonly deployed in IoT and embedded systems. Examples of suitable IoT applications include in-cabin speech recognition for driving systems, AVSR in augmented reality environments, and interactive applications such as virtual aquariums. Applying multimodal sensor data in IoT applications requires efficient information processing to meet the hardware constraints of IoT devices. The proposed multimodal AVSR GAN architecture is composed of a discriminator and a generator, each of which is a two-stream network whose streams process the audio information and the visual information, respectively. To validate this approach, we used augmented data from the well-known LRS2 (Lip Reading Sentences 2) and LRS3 datasets during training, and testing was performed on the original data. The experimental results showed that the proposed multimodal AVSR GAN architecture improved the AVSR classification accuracy. Furthermore, this study discusses the domain of GANs and provides a concise summary of the proposed GANs.
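The abstract describes the generator and discriminator only at a high level: each is a two-stream network, with one stream per modality. The following NumPy sketch illustrates one plausible reading of that layout. The layer sizes, single-linear-layer streams, fusion by concatenation, and activation choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, a=0.2):
    return np.where(x > 0, x, a * x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TwoStreamDiscriminator:
    """Scores an (audio, visual) feature pair as real/fake.

    Each modality passes through its own stream; the two stream
    outputs are fused by concatenation before the final score.
    """
    def __init__(self, audio_dim=40, visual_dim=128, hidden=64):
        self.Wa = rng.normal(0, 0.1, (audio_dim, hidden))   # audio stream
        self.Wv = rng.normal(0, 0.1, (visual_dim, hidden))  # visual stream
        self.Wf = rng.normal(0, 0.1, (2 * hidden, 1))       # fusion head

    def score(self, audio, visual):
        ha = leaky_relu(audio @ self.Wa)
        hv = leaky_relu(visual @ self.Wv)
        fused = np.concatenate([ha, hv], axis=-1)
        return sigmoid(fused @ self.Wf)  # P("real"), shape (batch, 1)

class TwoStreamGenerator:
    """Maps a shared noise vector to a synthetic (audio, visual) pair."""
    def __init__(self, noise_dim=16, audio_dim=40, visual_dim=128):
        self.Wa = rng.normal(0, 0.1, (noise_dim, audio_dim))
        self.Wv = rng.normal(0, 0.1, (noise_dim, visual_dim))

    def sample(self, z):
        return np.tanh(z @ self.Wa), np.tanh(z @ self.Wv)

# One forward pass: generate a fake (audio, visual) pair and score it.
G, D = TwoStreamGenerator(), TwoStreamDiscriminator()
z = rng.normal(size=(4, 16))          # batch of 4 noise vectors
fake_audio, fake_visual = G.sample(z)
p_real = D.score(fake_audio, fake_visual)
print(p_real.shape)  # (4, 1)
```

In an actual training loop the discriminator would also see real paired audio-visual features, and both networks would be updated adversarially; the sketch only shows how the two modality streams meet.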
Pages: 23
Related Papers (35 total)
[1] Afouras, T.; Chung, J. S.; Senior, A.; Vinyals, O.; Zisserman, A. Deep Audio-Visual Speech Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(12): 8717-8727.
[2] Anonymous. CIFAR-100 Dataset, 2009.
[3] Atzori, L.; Iera, A.; Morabito, G. The Internet of Things: A Survey. Computer Networks, 2010, 54(15): 2787-2805.
[4] Brock, A. arXiv:1809.11096, 2019. DOI: 10.48550/arXiv.1809.11096.
[5] Choi, Y. arXiv:1711.09020, 2018. DOI: 10.48550/arXiv.1711.09020.
[6] Dabran, I. IEEE INT CONF MICROW, 2017: 522.
[7] Deng, L. IEEE Signal Processing Magazine, 2012, 29: 141. DOI: 10.1109/MSP.2012.2211477.
[8] Dupont, S.; Luettin, J. Audio-Visual Speech Modeling for Continuous Speech Recognition. IEEE Transactions on Multimedia, 2000, 2(3): 141-151.
[9] Karras, T. arXiv:1710.10196, 2018. DOI: 10.48550/arXiv.1710.10196.
[10] Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and Improving the Image Quality of StyleGAN. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2020), 2020: 8107-8116.