Kids' Emotion Recognition Using Various Deep-Learning Models with Explainable AI

Cited by: 10
Authors
Rathod, Manish [1 ]
Dalvi, Chirag [1 ]
Kaur, Kulveen [1 ]
Patil, Shruti [2 ]
Gite, Shilpa [2 ]
Kamat, Pooja [1 ]
Kotecha, Ketan [2 ]
Abraham, Ajith [3 ]
Gabralla, Lubna Abdelkareim [4 ]
Affiliations
[1] Deemed Univ, Symbiosis Int Univ, Symbiosis Ctr Appl Artificial Intelligence SCAAI, Pune 412115, Maharashtra, India
[2] Deemed Univ, Symbiosis Int Univ, Symbiosis Inst Technol, Comp Sci & Informat Technol Dept, Pune 412115, Maharashtra, India
[3] Machine Intelligence Res Labs MIR Labs, Auburn, WA 98071 USA
[4] Princess Nourah Bint Abdulrahman Univ, Coll Appl, Dept Comp Sci & Informat Technol, Riyadh 11671, Saudi Arabia
Keywords
kids' emotion recognition; FER; explainable artificial intelligence; LIRIS; children emotion dataset; online learning;
DOI
10.3390/s22208066
CLC Number
O65 [Analytical Chemistry];
Discipline Codes
070302; 081704;
Abstract
Facial expressions mirror human thoughts and feelings. They give an observer a wealth of social cues, such as focus of attention, intention, motivation, and mood, which can inform better interactive solutions on online platforms. This is especially useful when teaching children, since it can cultivate a better interactive connection between teachers and students at a time when the COVID-19 pandemic has accelerated the shift toward online education. To address this, the authors propose kids' emotion recognition from visual cues, paired with a justified reasoning model based on explainable AI. Two datasets were used: the LIRIS Children Spontaneous Facial Expression Video Database, and a novel dataset created by the authors of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset had reached only 75% accuracy, with no subsequent study improving on it; the authors achieved a highest accuracy of 89.31% on LIRIS and 90.98% on their own dataset. The authors also observed that children's facial structure differs from adults', and that children often express a given emotion through facial expressions that do not follow the adult pattern. Hence, the authors used 468 3D landmark points to create two mesh versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset variants were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models.
The authors not only compared all dataset variants across the different CNN models but also used explainable artificial intelligence (XAI) to show, for each CNN on each dataset variant, how the deep-learning model perceives test images, localizing the features that contribute to particular emotions. Three XAI methods were used, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users establish the reason behind an emotion prediction by revealing the contribution of individual features.
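The 468-point 3D landmark topology described in the abstract matches the output of Google's MediaPipe Face Mesh, though the abstract does not name the tool, so that is an assumption. A minimal sketch of one plausible preprocessing step for such landmarks, centering and scale-normalizing a (468, 3) mesh so that head position and face size are removed before feeding it to a model (random data stands in for a real detector):

```python
import numpy as np

def normalize_landmarks(landmarks: np.ndarray) -> np.ndarray:
    """Center a (468, 3) landmark array on its centroid and rescale it
    to unit RMS vertex distance, removing translation and face size."""
    assert landmarks.shape == (468, 3)
    centered = landmarks - landmarks.mean(axis=0)
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    return centered / scale

# Stand-in for a real face-mesh detection (e.g., MediaPipe output).
rng = np.random.default_rng(0)
mesh = rng.uniform(0.0, 1.0, size=(468, 3))
features = normalize_landmarks(mesh)
print(features.shape)  # (468, 3)
```

The normalized array can then be flattened into a fixed-length feature vector or rendered as a mesh image, as the LIRIS-Mesh and Authors-Mesh variants appear to do.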
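Of the three XAI methods named above, Grad-CAM is the most standard: it weights each convolutional feature map by the global-average-pooled gradient of the class score with respect to that map, then applies a ReLU to the weighted sum. A framework-free sketch of that core computation, operating on pre-extracted activations and gradients with hypothetical toy shapes (the paper's actual layers and shapes are not given in this record):

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heat map.

    activations: (K, H, W) feature maps from the last conv layer.
    gradients:   (K, H, W) d(class score)/d(activations).
    Returns an (H, W) map scaled to [0, 1].
    """
    weights = gradients.mean(axis=(1, 2))             # alpha_k: pooled gradients per channel
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over the K channels
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()                              # scale for visualization
    return cam

# Toy example with random activations and gradients.
rng = np.random.default_rng(1)
acts = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

Grad-CAM++ replaces the uniform gradient pooling with pixel-wise weighting of positive gradients; the resulting heat map is what localizes the facial regions driving a predicted emotion.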
Pages: 21