Image Retrieval Using Convolutional Autoencoder, InfoGAN, and Vision Transformer Unsupervised Models

Cited by: 6
Authors
Sabry, Eman S. [1 ]
Elagooz, Salah S. [1 ]
Abd El-Samie, Fathi E. [2 ]
El-Shafai, Walid [2 ,3 ]
El-Bahnasawy, Nirmeen A. [4 ]
El-Banby, Ghada M. [5 ]
Algarni, Abeer D. [6 ]
Soliman, Naglaa F. [6 ]
Ramadan, Rabie A. [7 ]
Affiliations
[1] El Shorouk Acad, Higher Inst Engn, Dept Commun & Comp Engn, El Shorouk 11837, Egypt
[2] Menoufia Univ, Fac Elect Engn, Dept Elect & Elect Commun Engn, Menoufia 32952, Egypt
[3] Prince Sultan Univ, Comp Sci Dept, Secur Engn Lab, Riyadh 11586, Saudi Arabia
[4] Menoufia Univ, Fac Elect Engn, Comp Sci & Engn Dept, Menoufia 32952, Egypt
[5] Menoufia Univ, Fac Elect Engn, Dept Ind Elect & Control Engn, Menoufia 32952, Egypt
[6] Princess Nourah Bint Abdulrahman Univ, Coll Comp & Informat Sci, Dept Informat Technol, Riyadh 11671, Saudi Arabia
[7] Cairo Univ, Coll Engn, Comp Engn Dept, Giza 12613, Egypt
Keywords
Feature extraction; InfoGAN; sketched-real image retrieval; object matching; spatial distance measurement; vision transformer; 3D VIDEO COMMUNICATION; ALGORITHMS;
DOI
10.1109/ACCESS.2023.3241858
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Query by Image Content (QBIC), subsequently known as Content-Based Image Retrieval (CBIR), offers an advantageous solution in a variety of applications, including medical, meteorological, and search-by-image applications. CBIR systems primarily use similarity matching algorithms to compare image content and retrieve matching images from datasets. They essentially measure the spatial distance between the visual features extracted from a query image and those of candidate images in the dataset. One of the most challenging query retrieval problems is Facial Sketched-Real Image Retrieval (FSRIR), which is based on content similarity matching. Such facial retrieval systems are employed in a variety of contexts, including criminal justice. The difficulty of this retrieval task stems from the composition of the human face and its distinctive parts, and from the fact that the compared images belong to two different domains. Moreover, to our knowledge, few large-scale facial datasets are available for assessing the performance of such retrieval systems. The success of the retrieval process is governed by the method used to estimate similarity and by the efficiency of the representation of the compared images. An effective representation of visual features can therefore resolve the most challenging component of such systems. Hence, this paper makes several contributions that fill the research gap in content-based similarity matching and retrieval. The first contribution is extending the Chinese University Face Sketch (CUFS) dataset with augmented images, introducing to the community a novel dataset named Extended Sketched-Real Image Retrieval (ESRIR). The CUFS dataset has been extended from 100 images to 53,000 facial sketches and 53,000 real facial images. The paper's second contribution is presenting three new systems for sketched-real image retrieval based on convolutional autoencoder, InfoGAN, and Vision Transformer (ViT) unsupervised models for large datasets. Furthermore, to meet the subjective demands of users arising from the prevalence of multiple query formats, the third contribution of the paper is to train and assess the proposed models on two additional facial datasets of different image types. Recently, many users have preferred searching for brand logo images, but it can be tricky to separate certain brand logo features from their alternatives and even from other features in an image. Thus, the fourth contribution is to compare logo image retrieval performance based on the visual features derived from each of the three suggested retrieval systems. The paper also presents cloud-based approaches for saving energy and reducing computational complexity on large-scale datasets. Due to the ubiquity of touchscreen devices, users often draw imagined objects when searching for certain images. Thus, the proposed models are also tested on a challenging dataset of doodle-scratched human artworks, as well as on a multi-category dataset that covers practically all possible image types and situations. The results are compared with those of the most recent algorithms in the literature and show that the proposed systems outperform their recent counterparts.
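As context for the abstract, the sketch below illustrates the generic CBIR pipeline it describes: an unsupervised convolutional autoencoder encodes images into compact feature vectors, and retrieval ranks dataset images by the spatial distance between the encoded query and gallery features. This is a minimal illustrative example only, not the authors' implementation; the layer sizes, 64x64 grayscale input, latent dimension, and Euclidean distance metric are all assumptions.

```python
# Minimal sketch (not the paper's implementation) of feature-based retrieval:
# an unsupervised convolutional autoencoder provides the feature extractor,
# and retrieval ranks gallery images by Euclidean distance in feature space.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for 64x64 grayscale images (assumed sizes)."""

    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Unsupervised training would minimize the reconstruction loss,
        # e.g. torch.nn.functional.mse_loss(self(x), x).
        return self.decoder(self.encoder(x))


def retrieve(query, gallery, model, k=5):
    """Return indices of the k gallery images closest to the query in feature space."""
    model.eval()
    with torch.no_grad():
        q = model.encoder(query)               # (1, latent_dim)
        g = model.encoder(gallery)             # (N, latent_dim)
        dists = torch.cdist(q, g).squeeze(0)   # (N,) Euclidean distances
    return torch.topk(dists, k, largest=False).indices


if __name__ == "__main__":
    model = ConvAutoencoder()
    query = torch.rand(1, 1, 64, 64)      # stand-in for a sketched query image
    gallery = torch.rand(100, 1, 64, 64)  # stand-in for the real-image dataset
    print(retrieve(query, gallery, model, k=5))
```

The retrieval step is independent of the feature extractor, so an InfoGAN discriminator backbone or a ViT encoder could, in principle, replace the autoencoder's encoder without changing the distance-based ranking.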
Pages: 20445-20477
Page count: 33
Related Papers
50 items in total
  • [31] GViT: Locality enhanced Vision Transformer using Spectral Graph Convolutional Network
    Jin, Longbin
    Kim, Eun Yi
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [32] IC-CViT: Inverse-Consistent Convolutional Vision Transformer for Diffeomorphic Image Registration
    Xu, Tao
    Jiang, Ting
    Li, Xiaoning
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023,
  • [33] Enhanced Facial Emotion Recognition Using Vision Transformer Models
    Fatima, N. Sabiyath
    Deepika, G.
    Anthonisamy, Arun
    Chitra, R. Jothi
    Muralidharan, J.
    Alagarsamy, Manjunathan
    Ramyasree, Kummari
    JOURNAL OF ELECTRICAL ENGINEERING & TECHNOLOGY, 2025, 20 (02) : 1143 - 1152
  • [34] East Nusa Tenggara Weaving Image Retrieval Using Convolutional Neural Network
    Tena, Silvester
    Hartanto, Rudy
    Ardiyanto, Igi
    2021 4TH INTERNATIONAL SEMINAR ON RESEARCH OF INFORMATION TECHNOLOGY AND INTELLIGENT SYSTEMS (ISRITI 2021), 2020,
  • [35] Unsupervised Histological Image Registration Using Structural Feature Guided Convolutional Neural Network
    Ge, Lin
    Wei, Xingyue
    Hao, Yayu
    Luo, Jianwen
    Xu, Yan
    IEEE TRANSACTIONS ON MEDICAL IMAGING, 2022, 41 (09) : 2414 - 2431
  • [36] Res-MGCA-SE: a lightweight convolutional neural network based on vision transformer for medical image classification
    Soleimani-Fard, S.
    Ko, S.-B.
    NEURAL COMPUTING AND APPLICATIONS, 2024, 36 (28) : 17631 - 17644
  • [37] Stroke Disease Classification Using CT Scan Image with Vision Transformer Method
    Yopiangga, Alfian Prisma
    Badriyah, Tessy
    Syarif, Iwan
    Sakinah, Nur
    2024 INTERNATIONAL ELECTRONICS SYMPOSIUM, IES 2024, 2024, : 436 - 441
  • [38] Classifying the molecular subtype of breast cancer using vision transformer and convolutional neural network features
    Kai, Chiharu
    Tamori, Hideaki
    Ohtsuka, Tsunehiro
    Nara, Miyako
    Yoshida, Akifumi
    Sato, Ikumi
    Futamura, Hitoshi
    Kodama, Naoki
    Kasai, Satoshi
    BREAST CANCER RESEARCH AND TREATMENT, 2025, : 771 - 782
  • [39] ViTH-RFG: Vision Transformer Hashing With Residual Fuzzy Generation for Targeted Attack in Medical Image Retrieval
    Ding, Weiping
    Liu, Chuansheng
    Huang, Jiashuang
    Cheng, Chun
    Ju, Hengrong
    IEEE TRANSACTIONS ON FUZZY SYSTEMS, 2024, 32 (10) : 5571 - 5584
  • [40] 3D-CTM: Unsupervised Crop Type Mapping Based on 3-D Convolutional Autoencoder and Satellite Image Time Series
    Singh, Karan
    Ranjan, Rajiv
    Ghildiyal, Sushil
    Tamaskar, Shashank
    Goel, Neeraj
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21