The classification of the bladder cancer based on Vision Transformers (ViT)

Cited by: 0
Authors
Ola S. Khedr
Mohamed E. Wahed
Al-Sayed R. Al-Attar
E. A. Abdel-Rehim
Affiliations
[1] Suez Canal University, Department of Mathematics
[2] Suez Canal University, Department of Computer Science, Faculty of Science
[3] Zagazig University, Department of Computer Science, Faculty of Computers and Informatics
[4] Suez Canal University, Department of Pathology, Faculty of Veterinary Medicine
Source
Scientific Reports, Volume 13
Abstract
Bladder cancer is a prevalent malignancy with diverse subtypes, including invasive and non-invasive tissue. Accurate classification of these subtypes is crucial for personalized treatment and prognosis. In this paper, we present a comprehensive study on the classification of bladder cancer into three classes: two malignant subtypes, the non-invasive type and the invasive type, and normal bladder mucosa, which serves as a reference standard for the deep-learning models. We utilized a dataset containing histopathological images of bladder tissue samples, split into a training set (70%), a validation set (15%), and a test set (15%). Four different deep-learning architectures were evaluated for their performance in classifying bladder cancer: EfficientNetB2, InceptionResNetV2, InceptionV3, and ResNet50V2. Additionally, we explored the potential of Vision Transformers with two different configurations, ViT_B32 and ViT_B16, for this classification task. Our experimental results revealed significant variations in the models' accuracies for classifying bladder cancer. The highest accuracy among the convolutional architectures was achieved by the InceptionResNetV2 model, at 98.73%. Vision Transformers showed even stronger results, with ViT_B32 achieving an accuracy of 99.49% and ViT_B16 achieving an accuracy of 99.23%. EfficientNetB2 and ResNet50V2 also exhibited competitive performance, achieving accuracies of 95.43% and 93%, respectively. In conclusion, our study demonstrates that deep learning models, particularly Vision Transformers (ViT_B32 and ViT_B16), can effectively classify bladder cancer into its three classes with high accuracy. These findings have potential implications for aiding clinical decision-making and improving patient outcomes in the field of oncology.
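The 70%/15%/15% train/validation/test split described in the abstract can be sketched in plain Python. This is a minimal illustrative sketch only: the file names and the `split_dataset` helper are hypothetical and are not taken from the paper.

```python
import random

def split_dataset(items, train_frac=0.70, val_frac=0.15, seed=42):
    """Shuffle items and split into train/validation/test subsets.

    The remainder after the train and validation fractions goes to test,
    so 0.70/0.15 leaves 0.15 for the test set.
    """
    rng = random.Random(seed)  # fixed seed for a reproducible split
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical histopathology image identifiers, one per tissue slide
images = [f"slide_{i:03d}.png" for i in range(200)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 140 30 30
```

In practice the split would typically be stratified per class (non-invasive, invasive, normal mucosa) so that each subset preserves the class proportions; the sketch above omits that for brevity.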