Unified deep learning model for multitask representation and transfer learning: image classification, object detection, and image captioning

Cited by: 4
Authors
Bayisa, Leta Yobsan [1 ]
Wang, Weidong [2 ]
Wang, Qingxian [2 ]
Ukwuoma, Chiagoziem C. [3 ,4 ,5 ]
Gutema, Hirpesa Kebede [1 ]
Endris, Ahmed [6 ]
Abu, Turi [7 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Peoples R China
[2] Univ Elect Sci & Technol China, Sch Informat & Software Engn, Chengdu, Sichuan, Peoples R China
[3] Chengdu Univ Technol, Oxford Brookes Univ, Sino British Collaborat Educ, Chengdu 610059, Sichuan, Peoples R China
[4] Chengdu Univ Technol, Coll Nucl Technol & Automat Engn, Chengdu 610059, Sichuan, Peoples R China
[5] Chengdu Univ Technol, Sichuan Engn Technol Res Ctr Ind Internet Intelli, Chengdu 610059, Sichuan, Peoples R China
[6] Chinese Acad Sci, Shenzhen Inst Adv Technol, Paul C Lauterbur Res Ctr Biomed Imaging, Shenzhen, Peoples R China
[7] Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China
Keywords
Multitask representation learning; Transfer learning; Encoder-decoder; Attention mechanism
DOI
10.1007/s13042-024-02177-5
CLC number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The application of deep learning has demonstrated impressive performance in computer vision tasks such as object detection, image classification, and image captioning. Although most models excel at a single vision or language task, designing one architecture that balances task specialization, performance, and adaptability across diverse tasks is challenging. To effectively address vision-and-language integration challenges, a combination of text embeddings and visual representations is necessary to capture the dependencies of each subarea across multiple tasks. This paper proposes a single architecture that can handle various computer vision tasks and be fine-tuned for other specific vision and language tasks. The proposed model employs a modified DenseNet201 as the feature extractor (network backbone), an encoder-decoder architecture, and a task-specific head for inference. To tackle overfitting and improve precision, enhanced data augmentation and normalization techniques are employed. The model's robustness is evaluated on more than five datasets spanning different tasks: image classification, object detection, image captioning, and adversarial attack and defense. The experimental results demonstrate competitive performance compared with other works on CIFAR-10, CIFAR-100, Flickr8k, Flickr30k, Caltech10, and other task-specific datasets such as OCT and BreakHis. The proposed model is flexible and easy to adapt to new tasks, as it can also be extended to other vision and language tasks through fine-tuning with task-specific input indices.
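The shared-backbone design the abstract describes (one feature extractor feeding task-specific heads) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a small convolutional stack stands in for the modified DenseNet201 backbone, the heads are toy single-layer examples, and all layer sizes, the `vocab_size` parameter, and the task names are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Sketch of a shared-backbone multitask model.

    A small conv stack stands in here for the paper's modified
    DenseNet201 backbone; each task reuses the same representation.
    """

    def __init__(self, num_classes=10, vocab_size=1000):
        super().__init__()
        # Shared feature extractor (the "network backbone")
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Task-specific heads on top of the shared features
        self.cls_head = nn.Linear(64, num_classes)       # image classification
        self.det_head = nn.Linear(64, 4 + num_classes)   # one box + class (toy single-object case)
        self.cap_head = nn.Linear(64, vocab_size)        # caption-token logits (toy stand-in for a decoder)

    def forward(self, x, task):
        feats = self.backbone(x)                         # shared representation
        if task == "classify":
            return self.cls_head(feats)
        if task == "detect":
            return self.det_head(feats)
        return self.cap_head(feats)

model = MultiTaskModel()
x = torch.randn(2, 3, 64, 64)                            # batch of 2 RGB images
print(model(x, "classify").shape)                        # torch.Size([2, 10])
```

Because the backbone parameters are shared across heads, fine-tuning for a new task amounts to attaching a new head and updating (or freezing) the shared weights, which mirrors the transfer-learning usage described above.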
Pages: 4617-4637
Page count: 21