Towards an explainable Artificial intelligence system for voice pathology identification and post-treatment characterisation

Cited by: 0
Authors:
Cala, Federico [1 ]
Frassineti, Lorenzo [1 ]
Cantarella, Giovanna [2 ,3 ]
Buccichini, Giulia [3 ]
Battilocchi, Ludovica [2 ]
Manfredi, Claudia [1 ]
Lanata, Antonio [1 ]
Affiliations:
[1] Univ Firenze, Dept Informat Engn, Florence, Italy
[2] IRCCS Ca Granda Fdn, Osped Maggiore Policlin Milano, Milan, Italy
[3] Univ Milan, Dept Clin Sci & Community Hlth, Milan, Italy
Keywords:
Artificial Intelligence; Machine Learning; Acoustic Analysis; BioVoice; Interpretable AI; Dysphonia; Benign lesions; Unilateral Vocal Fold Paralysis Post-treatment; AGE; ALGORITHMS; DISEASE; QUALITY; SPEECH;
DOI:
10.1016/j.bspc.2025.107530
Chinese Library Classification:
R318 [Biomedical Engineering]
Subject classification code:
0831
Abstract:
The voice pathology identification task has recently gained considerable attention. However, several research questions remain open. This study proposes an explainable AI framework to address the implicit role of age in voice pathology recognition and to investigate vocal quality improvement after surgical treatment in organic voice disorders. A further aim is to define an optimal feature subset through predictor importance analysis. A set of 287 patients diagnosed with benign lesions of the vocal folds (BLVF) and unilateral vocal fold paralysis (UVFP) was enrolled. Classification experiments were performed for female (F) and male (M) groups: they aimed at distinguishing BLVF from UVFP in age-unbalanced (E1) and age-balanced (E2) datasets, differentiating BLVF subclasses (E3), and detecting pre- and post-treatment conditions (E4). The comparison between E1 and E2 suggests that age does not influence classification performance. In E1, accuracies of 76% (F) and 81% (M) were obtained. The most informative features concerned vocal fold dynamics and articulator positioning for the F and M datasets, respectively. In E3, an accuracy of 60% was achieved, suggesting that larger datasets are required. In E4, the best models showed 76% (F) and 72% (M) accuracy, with good sensitivity in detecting pre-treatment patients. The error rate analysis showed that UVFP was the most frequently misclassified group. Moreover, agreement between the AI outcome and perceptual evaluations was detected for misclassified recordings. These results are clinically relevant for highlighting key aspects of voice quality recovery and for defining acoustic parameters that otolaryngologists could employ to monitor patient follow-up.
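The abstract mentions defining an optimal feature subset through predictor importance analysis. The paper does not specify the exact method; as a minimal sketch, assuming a tree-based classifier and scikit-learn's permutation importance as the ranking criterion, the idea can be illustrated on synthetic data (standing in for the BioVoice acoustic features):

```python
# Sketch: selecting a feature subset by predictor importance.
# Synthetic data replaces the acoustic features; permutation importance
# is an assumption, not the paper's confirmed method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 200, 10
X = rng.normal(size=(n_samples, n_features))
# Binary labels (e.g. BLVF vs. UVFP) driven by the first two features only.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Rank features by mean permutation importance on held-out data,
# then keep the top-k predictors as the "optimal" subset.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]
optimal_subset = ranking[:2]
print("Top features:", optimal_subset.tolist())
```

Ranking on held-out data (rather than on the training set) keeps the importance estimates honest about generalisation, which matters when the selected parameters are intended for clinical follow-up monitoring.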
Pages: 12