Implicit Motion Handling for Video Camouflaged Object Detection

Cited by: 42
Authors
Cheng, Xuelian [1 ]
Xiong, Huan [3 ]
Fan, Deng-Ping [4 ]
Zhong, Yiran [6 ,7 ]
Harandi, Mehrtash [1 ,8 ]
Drummond, Tom [1 ]
Ge, Zongyuan [1 ,2 ,5 ]
Affiliations
[1] Monash Univ, Fac Engn, Clayton, Vic, Australia
[2] Monash Univ, eRes Ctr, Clayton, Vic, Australia
[3] Mohamed Bin Zayed Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
[4] Swiss Fed Inst Technol, CVL, Zurich, Switzerland
[5] Airdoc Res Australia, Melbourne, Vic, Australia
[6] SenseTime Res, Shanghai, Peoples R China
[7] Shanghai AI Lab, Shanghai, Peoples R China
[8] CSIRO, Data61, Sydney, NSW, Australia
Source
2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR) | 2022
DOI
10.1109/CVPR52688.2022.01349
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We propose a new video camouflaged object detection (VCOD) framework that exploits both short-term dynamics and long-term temporal consistency to detect camouflaged objects in video frames. An essential property of camouflaged objects is that they usually exhibit patterns similar to the background, making them hard to identify in still images. Effectively handling temporal dynamics in videos therefore becomes the key to the VCOD task, as camouflaged objects become noticeable when they move. However, current VCOD methods often leverage homographies or optical flow to represent motion, so detection error can accumulate from both the motion estimation error and the segmentation error. In contrast, our method unifies motion estimation and object segmentation within a single optimization framework. Specifically, we build a dense correlation volume to implicitly capture motion between neighbouring frames and use the final segmentation supervision to optimize the implicit motion estimation and the segmentation jointly. Furthermore, to enforce temporal consistency within a video sequence, we utilize a spatio-temporal transformer to refine the short-term predictions. Extensive experiments on VCOD benchmarks demonstrate the architectural effectiveness of our approach. We also provide a large-scale VCOD dataset named MoCA-Mask with pixel-level handcrafted ground-truth masks and construct a comprehensive VCOD benchmark with previous methods to facilitate research in this direction. Dataset link: https://xueliancheng.github.io/SLT-Net-project.
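The dense correlation volume mentioned in the abstract, which matches every spatial location in one frame's feature map against every location in the next, can be sketched as follows. This is a simplified NumPy illustration of the general all-pairs correlation idea, not the paper's actual implementation (SLT-Net learns the features and consumes the volume end-to-end inside the network; function and variable names here are illustrative).

```python
import numpy as np

def dense_correlation_volume(feat_ref, feat_tgt):
    """All-pairs correlation between feature maps of two neighbouring frames.

    feat_ref, feat_tgt: arrays of shape (C, H, W).
    Returns a 4D volume of shape (H, W, H, W) whose entry (i, j, k, l) is the
    scaled dot product between the feature vector at (i, j) in the reference
    frame and the one at (k, l) in the target frame. High responses indicate
    likely correspondences, so motion is captured implicitly, without an
    explicit optical-flow or homography estimate.
    """
    C = feat_ref.shape[0]
    # Sum over the channel dimension for every pair of spatial locations;
    # scaling by sqrt(C) is a common normalisation for correlation volumes.
    return np.einsum('chw,cuv->hwuv', feat_ref, feat_tgt) / np.sqrt(C)

# Tiny usage example with random features standing in for CNN activations.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 4, 5))   # C=8 channels, H=4, W=5
f2 = rng.standard_normal((8, 4, 5))
vol = dense_correlation_volume(f1, f2)
print(vol.shape)  # (4, 5, 4, 5)
```

Because the volume is built from learned features and supervised only by the final segmentation loss, motion estimation error and segmentation error are optimized jointly rather than accumulating across two separate stages.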
Pages: 13854-13863
Page count: 10