Deep Learning Model for Retrieving Color Logo Images in Content Based Image Retrieval

Cited by: 0
Authors
Pinjarkar, Latika [1 ]
Bagga, Jaspal [2 ]
Agrawal, Poorva [1 ]
Kaur, Gagandeep [1 ]
Pinjarkar, Vedant [3 ]
Rajendra, Rutuja [4 ]
Affiliations
[1] Symbiosis Int, Symbiosis Inst Technol, Nagpur Campus, Pune, India
[2] Shri Shankaracharya Tech Campus, Bhilai, India
[3] Shri Ramdeobaba Coll Engn & Management, Nagpur, India
[4] Vishwakarma Inst Informat Technol, Comp Sci Engn Artificial Intelligence & Machine Learning, Pune, India
Keywords
CBIR; Deep Learning; CNNs; Feature Extraction; Image Retrieval;
DOI
Not available
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject Classification Code
0808 ; 0809 ;
Abstract
Content-Based Image Retrieval (CBIR) has attracted considerable attention owing to the explosive growth of digital image data. Advances in deep learning have made Convolutional Neural Networks (CNNs) a powerful technique for extracting discriminative image features. In contrast to text-based image retrieval, CBIR retrieves similar images based primarily on their visual content, and deep learning, especially CNNs, has been shown to outperform other techniques for feature extraction and image processing in this setting. In the proposed study, we investigate CNNs for CBIR, focusing on how well they extract discriminative visual features and facilitate accurate image retrieval. In addition, Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are combined to optimize the extracted features, boosting the retrieval results. Using the hierarchical representations learned by CNNs, we aim to improve retrieval accuracy and efficiency. Compared with conventional retrieval techniques, the proposed CBIR system shows superior performance on a benchmark dataset.
Pages: 1325-1333
Page count: 9
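
The abstract describes a pipeline of CNN feature extraction followed by PCA and LDA for feature optimization and similarity-based retrieval. The following is a minimal illustrative sketch of such a pipeline, not the authors' implementation: the ResNet-50 backbone, the 128 PCA components, and cosine similarity as the ranking metric are assumptions made for this example.

```python
# Illustrative CBIR sketch: CNN descriptors -> PCA -> LDA -> cosine-similarity ranking.
# Backbone, component counts, and metric are assumed; they are not taken from the paper.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics.pairwise import cosine_similarity

# Pretrained CNN with the classification head removed -> 2048-d image descriptors.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Return stacked CNN descriptors for a list of PIL images."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

def fit_reducers(features, labels, n_pca=128):
    """Fit PCA for compression, then LDA for class separability (needs >= n_pca samples)."""
    pca = PCA(n_components=n_pca).fit(features)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(features), labels)
    return pca, lda

def retrieve(query_feat, gallery_feats, pca, lda, top_k=10):
    """Rank gallery images by cosine similarity in the reduced feature space."""
    q = lda.transform(pca.transform(query_feat.reshape(1, -1)))
    g = lda.transform(pca.transform(gallery_feats))
    sims = cosine_similarity(q, g).ravel()
    return np.argsort(-sims)[:top_k]
```

In this sketch, PCA decorrelates and compresses the CNN descriptors before LDA projects them onto directions that separate the logo classes, which is one common way to combine the two reducers; the paper's exact ordering and parameters may differ.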