Novel multi-convolutional neural network fusion approach for smile recognition

Cited by: 3
Authors
Chen, Jiongwei [1 ]
Jin, Yi [1 ]
Akram, Muhammad Waqar [1 ]
Li, Kuan [1 ]
Chen, Enhong [2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Engn Sci, Hefei 230026, Anhui, Peoples R China
[2] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei 230026, Anhui, Peoples R China
Funding
National Science Foundation (USA); National Natural Science Foundation of China;
Keywords
Smile recognition; Convolutional neural networks; Deep learning; Model fusion; Unconstrained face images;
DOI
10.1007/s11042-018-6945-x
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The smile is one of the most common human facial expressions encountered in daily life. Smile recognition can be used in many scenarios, such as emotion monitoring, human-to-robot games, and camera shutter control, which is why it has received significant attention from researchers. It is a significant yet challenging problem, particularly in unconstrained scenarios, where variations in face size, illumination, head pose, occlusion, and other factors increase its difficulty. To address this problem, we propose a novel multiple convolutional neural network (CNN) fusion approach in which a face-based CNN and a mouth-based CNN are used to perform smile recognition. The outputs of the two CNNs are fused using a specified weight, and the higher-probability result is chosen as the final prediction. Experimental results on a real-world smile dataset (GENKI-4K) show that the method is effective: the smile recognition rate of the proposed method improves by 1.6% and 3.3% over the face-based CNN and the mouth-based CNN, respectively, and the proposed method outperforms most previous methods.
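The abstract describes a decision-level fusion: the two networks' outputs are combined with a specified weight and the class with the higher fused probability is taken as the final result. Below is a minimal sketch of that idea; the function name fuse_smile_scores, the weight value 0.6, and the example scores are illustrative assumptions, not the actual values or implementation from the paper.

import numpy as np

def fuse_smile_scores(p_face, p_mouth, w=0.6):
    # p_face, p_mouth: softmax outputs [P(non-smile), P(smile)] from the
    # face-based CNN and the mouth-based CNN, respectively.
    # w: fusion weight for the face-based CNN (hypothetical value; the paper
    # specifies its own weighting).
    p_face = np.asarray(p_face, dtype=float)
    p_mouth = np.asarray(p_mouth, dtype=float)
    fused = w * p_face + (1.0 - w) * p_mouth
    label = int(np.argmax(fused))  # pick the class with the higher fused probability
    return label, fused

# Example: the face CNN is confident of a smile, the mouth CNN is uncertain.
label, fused = fuse_smile_scores([0.2, 0.8], [0.45, 0.55])
print(label, fused)  # 1 (smile), [0.3, 0.7]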
Pages: 15887-15907
Page count: 21