Utilizing Grounded SAM for self-supervised frugal camouflaged human detection

Cited: 0
Authors
Pijarowski, Matthias [1 ,2 ]
Wolpert, Alexander [1 ]
Heckmann, Martin [2 ]
Teutsch, Michael [1 ]
Affiliations
[1] Hensoldt Optronics GmbH, Oberkochen, Germany
[2] Aalen University of Applied Sciences, Aalen, Germany
Source
AUTOMATIC TARGET RECOGNITION XXXIV | 2024 / Vol. 13039
Keywords
camouflaged object detection; frugality; self-supervision; foundation models; object segmentation; SEGMENTATION; NETWORK;
DOI
10.1117/12.3021694
Chinese Library Classification
TP7 [Remote Sensing Technology];
Subject Classification Codes
081102 ; 0816 ; 081602 ; 083002 ; 1404 ;
Abstract
Visually detecting camouflaged objects is a hard problem for both humans and computer vision algorithms. Strong similarities between object and background appearance make the task significantly more challenging than traditional object detection or segmentation tasks. Current state-of-the-art models use either convolutional neural networks or vision transformers as feature extractors. They are trained in a fully supervised manner and thus need a large amount of labeled training data. In this paper, both self-supervised and frugal learning methods are introduced to the task of Camouflaged Object Detection (COD). The overall goal is to fine-tune two COD reference methods, namely SINet-V2 and HitNet, both pre-trained for camouflaged animal detection, to the task of camouflaged human detection. To this end, we use the public dataset CPD1K, which contains camouflaged humans in a forest environment. We create a strong baseline using supervised frugal transfer learning for the fine-tuning task. Then, we analyze three pseudo-labeling approaches to perform the fine-tuning task in a self-supervised manner. Our experiments show that pure self-supervision achieves performance similar to fully supervised frugal learning.
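The self-supervised pipeline described in the abstract relies on pseudo-labels generated by a foundation model. A minimal sketch of the generic idea is shown below; the function names, the confidence threshold, and the coverage-based selection criterion are illustrative assumptions for this note, not the paper's actual pseudo-labeling approaches:

```python
import numpy as np

def filter_pseudo_labels(soft_masks, conf_thresh=0.5, min_coverage=0.01):
    """Binarize soft foreground masks and keep only plausible pseudo-labels.

    soft_masks: (N, H, W) float array of per-pixel foreground scores in [0, 1],
    e.g. as produced by a segmentation foundation model (assumption).
    A mask is kept when its binarized foreground covers at least
    `min_coverage` of the image (hypothetical selection criterion).
    Returns (kept_indices, binary_masks).
    """
    binary = (soft_masks >= conf_thresh).astype(np.uint8)
    coverage = binary.reshape(len(binary), -1).mean(axis=1)
    kept = np.flatnonzero(coverage >= min_coverage)
    return kept, binary[kept]

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy of a prediction against a pseudo-label,
    the kind of loss a COD model could be fine-tuned with."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(pred)
                          + (1.0 - target) * np.log(1.0 - pred)))

# Usage: two 4x4 soft masks; only the first contains confident foreground.
soft = np.zeros((2, 4, 4))
soft[0, :2, :] = 0.9                      # top half is confident foreground
kept, masks = filter_pseudo_labels(soft)  # the empty second mask is discarded
```

The retained binary masks would then stand in for ground-truth annotations when fine-tuning a detector such as SINet-V2 or HitNet.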
Pages: 17
Related Papers
67 entries in total
  • [1] PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing
    Barnes, Connelly
    Shechtman, Eli
    Finkelstein, Adam
    Goldman, Dan B.
    [J]. ACM TRANSACTIONS ON GRAPHICS, 2009, 28 (03)
  • [2] Bommasani R., 2021, tech. rep.
  • [3] Cheng B., 2022, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • [4] Fan Deng-Ping, 2020, Medical Image Computing and Computer Assisted Intervention (MICCAI 2020), LNCS 12266, P263, DOI 10.1007/978-3-030-59725-2_26
  • [5] Devlin J, 2019, 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, P4171
  • [6] Dosovitskiy A., 2020, Proceedings of the International Conference on Learning Representations (ICLR), DOI 10.48550/arXiv.2010.11929
  • [7] Evchenko M, 2021, arXiv preprint arXiv:2111.03731
  • [8] Concealed Object Detection
    Fan, Deng-Ping
    Ji, Ge-Peng
    Cheng, Ming-Ming
    Shao, Ling
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (10) : 6024 - 6042
  • [9] Structure-measure: A New Way to Evaluate Foreground Maps
    Fan, Deng-Ping
    Cheng, Ming-Ming
    Liu, Yun
    Li, Tao
    Borji, Ali
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017, : 4558 - 4567
  • [10] Camouflaged Object Detection
    Fan, Deng-Ping
    Ji, Ge-Peng
    Sun, Guolei
    Cheng, Ming-Ming
    Shen, Jianbing
    Shao, Ling
    [J]. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 2774 - 2784