FEATURE EXTRACTION FOR LOCALIZED CBIR: What You Click Is What You Get

Cited by: 0
Authors
Verstockt, Steven [1 ]
Lambert, Peter [1 ]
Van de Walle, Rik [1 ]
Affiliations
[1] Univ Ghent, Dept Elect & Informat Syst, B-9050 Ledeberg Ghent, Belgium
Source
VISAPP 2009: PROCEEDINGS OF THE FOURTH INTERNATIONAL CONFERENCE ON COMPUTER VISION THEORY AND APPLICATIONS, VOL 1 | 2009
Keywords
Object recognition; Feature extraction; Localized CBIR; Query by selection; SIFT;
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper addresses the problem of localized content-based image retrieval (CBIR). Contrary to classic CBIR systems, which rely on a global view of the image, localized CBIR focuses only on the portion of the image the user is interested in, i.e., the relevant content. Using the proposed algorithm, it is possible to recognize an object simply by clicking on it. The algorithm starts with an automatic gamma correction and bilateral filtering; these pre-processing steps simplify the image segmentation. The segmentation itself uses dynamic region growing, starting from the click position. Contrary to the majority of segmentation techniques, region growing examines only the part of the image that contains the object; the remainder of the image is not investigated. This simplifies the recognition process, speeds up the segmentation, and improves the quality of the outcome. Following the region growing, the algorithm starts the recognition process, i.e., feature extraction and matching. Based on our requirements and the robustness reported in many state-of-the-art papers, the Scale Invariant Feature Transform (SIFT) approach is used. Extensive experiments with our algorithm on three different datasets achieved a retrieval efficiency of approximately 80%.
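The pre-processing and click-seeded segmentation stages described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the specific gamma heuristic (mapping the mean intensity toward 0.5), the 4-connected breadth-first growth, and the tolerance against the running region mean are assumptions; the bilateral filtering and SIFT matching stages are omitted and would normally come from a library such as OpenCV.

```python
import numpy as np
from collections import deque

def auto_gamma(img):
    """Automatic gamma correction on a float image in [0, 1].
    Heuristic (an assumption, not the paper's formula): choose gamma so the
    mean intensity maps toward 0.5, brightening dark images and vice versa."""
    mean = img.mean()
    gamma = np.log(0.5) / np.log(mean) if 0.0 < mean < 1.0 else 1.0
    return np.clip(img ** gamma, 0.0, 1.0)

def region_grow(img, seed, tol=0.1):
    """Dynamic region growing from the clicked pixel (seed).
    4-connected BFS; a neighbour joins the region if its intensity lies
    within `tol` of the region's running mean. Only pixels reachable from
    the click are ever visited, so the rest of the image is untouched."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(float(img[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(img[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# Toy example: a bright 10x10 object on a dark background; "click" at (10, 10).
img = np.full((20, 20), 0.2)
img[5:15, 5:15] = 0.9
img = auto_gamma(img)
mask = region_grow(img, seed=(10, 10), tol=0.1)  # segments only the object
```

In a full pipeline, SIFT descriptors would then be computed only inside `mask` and matched against the database, which is what makes the retrieval "localized".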
Pages: 373 - 376
Page count: 4