Fine-Tuning of Pre-Trained Deep Face Sketch Models Using Smart Switching Slime Mold Algorithm

Cited by: 1
Authors
Alhashash, Khaled Mohammad [1 ]
Samma, Hussein [2 ]
Suandi, Shahrel Azmin [1 ]
Affiliations
[1] Univ Sains Malaysia, Sch Elect & Elect Engn, Intelligent Biometr Grp, USM Engn Campus, Nibong Tebal 14300, Penang, Malaysia
[2] King Fahd Univ Petr & Minerals, SDAIA KFUPM Joint Res Ctr Artificial Intelligence, Dhahran 31261, Saudi Arabia
Source
APPLIED SCIENCES-BASEL, 2023, Vol. 13, Issue 8
Keywords
deep face sketch recognition; slime mold algorithm; fine-tuning; recognition
DOI
10.3390/app13085102
Chinese Library Classification: O6 [Chemistry]
Subject Classification Code: 0703
Abstract
Many pre-trained deep learning-based face recognition models have been developed in the literature, such as FaceNet, ArcFace, VGG-Face, and DeepFace. However, transfer learning of these models for face sketch recognition is impractical because sketch datasets are extremely limited (a single sketch per subject). One promising way to mitigate this issue is to use optimization algorithms to fine-tune and fit these models to the face sketch problem. Specifically, this research introduces an enhanced optimizer that evolves these models by automatically weighting/fine-tuning the generated feature vector, guided by the recognition accuracy on the training data. The key contributions of this work are as follows: (i) it introduces a novel Smart Switching Slime Mold Algorithm (S²SMA), which improves on the standard SMA by embedding several search operations and control rules; (ii) the proposed S²SMA fine-tunes pre-trained deep learning models to improve face sketch recognition accuracy; and (iii) the proposed S²SMA fine-tunes multiple pre-trained deep learning models simultaneously to further improve recognition accuracy on the face sketch problem. The performance of S²SMA was evaluated on two face sketch databases, XM2VTS and CUFSF, and on the CEC 2010 large-scale benchmark, and the outcomes were compared with several SMA variants and related optimization techniques. The numerical results show that the improved optimizer attains higher fitness values as well as better face sketch recognition accuracy, and the statistical analysis shows that S²SMA significantly outperforms the other optimization techniques while exhibiting a rapid convergence curve.
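The following is a minimal, hypothetical sketch of the feature-weighting idea described in the abstract: a population-based, slime-mold-style search that tunes per-dimension weights applied to feature vectors from a pre-trained face model, with fitness defined as rank-1 sketch-to-photo matching accuracy on the training data. The function names (slime_mold_weight_search, rank1_accuracy), the simplified update rule, and the toy random embeddings are assumptions for illustration only, not the authors' exact S²SMA.

```python
# Hypothetical illustration (not the paper's S2SMA): weight a pre-trained model's
# feature dimensions so that rank-1 sketch-to-photo matching accuracy improves.
import numpy as np

def rank1_accuracy(weights, sketch_feats, photo_feats):
    """Fitness: fraction of sketches whose nearest weighted photo feature is the true match."""
    w = np.abs(weights)
    dists = np.linalg.norm((sketch_feats * w)[:, None, :] - (photo_feats * w)[None, :, :], axis=2)
    return np.mean(np.argmin(dists, axis=1) == np.arange(len(sketch_feats)))

def slime_mold_weight_search(sketch_feats, photo_feats, pop=20, iters=50, seed=0):
    """Simplified population search over feature weights, guided by recognition accuracy."""
    rng = np.random.default_rng(seed)
    dim = sketch_feats.shape[1]
    X = rng.uniform(0.0, 1.0, size=(pop, dim))            # candidate weight vectors
    fit = np.array([rank1_accuracy(x, sketch_feats, photo_feats) for x in X])
    best = X[fit.argmax()].copy()
    for t in range(iters):
        a = 1.0 - t / iters                                # shrinking exploration factor
        for i in range(pop):
            partner = X[rng.integers(pop)]                 # random individual to move relative to
            step = rng.uniform(-a, a, size=dim)
            cand = np.clip(best + step * (best - partner), 0.0, 1.0)
            f = rank1_accuracy(cand, sketch_feats, photo_feats)
            if f >= fit[i]:                                # greedy replacement
                X[i], fit[i] = cand, f
        best = X[fit.argmax()].copy()
    return best, fit.max()

# Toy usage: random 128-D embeddings stand in for FaceNet/ArcFace features,
# and "sketches" are simulated as noisy copies of the photo embeddings.
rng = np.random.default_rng(1)
photos = rng.normal(size=(30, 128))
sketches = photos + 0.3 * rng.normal(size=photos.shape)
w, acc = slime_mold_weight_search(sketches, photos)
print(f"rank-1 accuracy after weighting: {acc:.2f}")
```

Extending this sketch to the paper's setting of multiple pre-trained models would amount to concatenating the models' feature vectors and letting the optimizer weight all dimensions jointly; that extension is likewise an assumption, not a description of the published method.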
Pages: 36
Related Papers
27 records in total
  • [1] Fine-tuning the hyperparameters of pre-trained models for solving multiclass classification problems
    Kaibassova, D.
    Nurtay, M.
    Tau, A.
    Kissina, M.
    COMPUTER OPTICS, 2022, 46 (06) : 971 - 979
  • [2] An Empirical Study of Parameter-Efficient Fine-Tuning Methods for Pre-trained Code Models
    Liu, Jiaxing
    Sha, Chaofeng
    Peng, Xin
    2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 397 - 408
  • [3] ON FINE-TUNING PRE-TRAINED SPEECH MODELS WITH EMA-TARGET SELF-SUPERVISED LOSS
    Yang, Hejung
    Kang, Hong-Goo
    2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, ICASSP 2024, 2024, : 6360 - 6364
  • [4] TIBW: Task-Independent Backdoor Watermarking with Fine-Tuning Resilience for Pre-Trained Language Models
    Mo, Weichuan
    Chen, Kongyang
    Xiao, Yatie
    MATHEMATICS, 2025, 13 (02)
  • [5] Improving Pre-Trained Weights through Meta-Heuristics Fine-Tuning
    de Rosa, Gustavo H.
    Roder, Mateus
    Papa, Joao Paulo
    dos Santos, Claudio F. G.
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [6] Enhancing recognition and interpretation of functional phenotypic sequences through fine-tuning pre-trained genomic models
    Du, Duo
    Zhong, Fan
    Liu, Lei
    JOURNAL OF TRANSLATIONAL MEDICINE, 2024, 22 (01)
  • [7] Improving Fine-tuning Pre-trained Models on Small Source Code Datasets via Variational Information Bottleneck
    Liu, Jiaxing
    Sha, Chaofeng
    Peng, Xin
    2023 IEEE INTERNATIONAL CONFERENCE ON SOFTWARE ANALYSIS, EVOLUTION AND REENGINEERING, SANER, 2023, : 331 - 342
  • [8] CSS-LM: A Contrastive Framework for Semi-Supervised Fine-Tuning of Pre-Trained Language Models
    Su, Yusheng
    Han, Xu
    Lin, Yankai
    Zhang, Zhengyan
    Liu, Zhiyuan
    Li, Peng
    Zhou, Jie
    Sun, Maosong
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 2930 - 2941
  • [9] Advancing the Boundary of Pre-Trained Models for Drug Discovery: Interpretable Fine-Tuning Empowered by Molecular Physicochemical Properties
    Lian, Xiaoqing
    Zhu, Jie
    Lv, Tianxu
    Hong, Xiaoyan
    Ding, Longzhen
    Chu, Wei
    Ni, Jianming
    Pan, Xiang
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (12) : 7633 - 7646
  • [10] Attempts on detecting Alzheimer's disease by fine-tuning pre-trained model with Gaze Data
    Nagasawa, Junichi
    Nakata, Yuichi
    Hiroe, Mamoru
    Zheng, Yujia
    Kawaguchi, Yutaka
    Maegawa, Yuji
    Hojo, Naoki
    Takiguchi, Tetsuya
    Nakayama, Minoru
    Uchimura, Maki
    Sonoda, Yuma
    Kowa, Hisatomo
    Nagamatsu, Takashi
    PROCEEDINGS OF THE 2024 ACM SYMPOSIUM ON EYE TRACKING RESEARCH & APPLICATIONS, ETRA 2024, 2024,