OccCasNet: Occlusion-Aware Cascade Cost Volume for Light Field Depth Estimation

Cited by: 1
Authors
Chao, Wentao [1 ]
Duan, Fuqing [1 ]
Wang, Xuechun [1 ]
Wang, Yingqian [2 ]
Lu, Ke [3 ]
Wang, Guanghui [4 ]
Institutions
[1] Beijing Normal Univ, Sch Artificial Intelligence, Beijing 100875, Peoples R China
[2] Natl Univ Def Technol, Coll Elect Sci & Technol, Changsha 410073, Peoples R China
[3] Univ Chinese Acad Sci, Coll Engn Sci, Beijing 100049, Peoples R China
[4] Toronto Metropolitan Univ, Dept Comp Sci, Toronto, ON M5B 2K3, Canada
Funding
National Natural Science Foundation of China;
Keywords
Light field; depth estimation; cascade network; occlusion-aware; cost volume; NETWORK;
DOI
10.1109/TCI.2024.3488563
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Code
0808; 0809;
Abstract
Depth estimation using the Light Field (LF) technique is an essential task with a wide range of practical applications. While mainstream approaches based on multi-view stereo techniques can attain exceptional accuracy by constructing finer cost volumes, they are resource-intensive, time-consuming, and often overlook occlusion during cost volume construction. To address these issues and strike a better balance between accuracy and efficiency, we propose an occlusion-aware cascade cost volume for LF depth (disparity) estimation. Our cascaded strategy reduces the number of disparity samples while maintaining a constant sampling interval, enabling the construction of a finer cost volume. We also introduce occlusion maps to enhance accuracy when constructing the occlusion-aware cost volume. Specifically, we first generate a coarse disparity map through a coarse disparity estimation network. Then, we warp the sub-aperture images (SAIs) of adjacent views to the center view based on the coarse disparity map and generate an occlusion map for each SAI via photo-consistency constraints. Finally, we seamlessly incorporate the occlusion maps into the cascade cost volume to construct an occlusion-aware refined cost volume, allowing the refined disparity estimation network to yield a more precise disparity map. Extensive experiments demonstrate the effectiveness of our method. Compared with state-of-the-art techniques, our method achieves a superior balance between accuracy and efficiency, ranking first in the Q25 metric on the HCI 4D benchmark.
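The sketch below (PyTorch) illustrates two steps named in the abstract: warping an adjacent SAI to the center view with the coarse disparity and flagging photo-consistency violations as occlusion, plus narrowing the refined-stage disparity hypotheses around the coarse estimate with a constant interval. The function names, disparity sign convention, threshold tau, and sample count/interval are illustrative assumptions, not the authors' OccCasNet implementation.

```python
# Illustrative sketch only; not the authors' OccCasNet code.
import torch
import torch.nn.functional as F


def warp_to_center(sai, disparity, du, dv):
    """Warp one adjacent SAI (B, C, H, W) at angular offset (du, dv) onto the
    center view using the center-view coarse disparity map (B, 1, H, W)."""
    _, _, h, w = sai.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=sai.dtype, device=sai.device),
        torch.arange(w, dtype=sai.dtype, device=sai.device),
        indexing="ij",
    )
    # Shift sampling positions by disparity scaled with the angular offset
    # (the sign convention is an assumption and may differ per dataset).
    x_src = xs.unsqueeze(0) + du * disparity[:, 0]
    y_src = ys.unsqueeze(0) + dv * disparity[:, 0]
    # Normalize to [-1, 1] in (x, y) order for grid_sample.
    grid = torch.stack(
        (2.0 * x_src / (w - 1) - 1.0, 2.0 * y_src / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(sai, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)


def occlusion_map(center, sai, disparity, du, dv, tau=0.05):
    """1 where the warped SAI disagrees with the center view, i.e. where the
    photo-consistency constraint is violated (likely occlusion)."""
    warped = warp_to_center(sai, disparity, du, dv)
    photo_error = (warped - center).abs().mean(dim=1, keepdim=True)
    return (photo_error > tau).float()


def refined_disparity_hypotheses(coarse_disp, num_samples=8, interval=0.1):
    """Cascade idea from the abstract: fewer hypotheses than the coarse stage,
    centered on the coarse estimate, with a constant sampling interval
    (num_samples and interval are placeholder values)."""
    offsets = (torch.arange(num_samples, dtype=coarse_disp.dtype,
                            device=coarse_disp.device)
               - (num_samples - 1) / 2) * interval
    return coarse_disp + offsets.view(1, -1, 1, 1)  # (B, num_samples, H, W)
```

In a full pipeline, such per-view occlusion maps would presumably down-weight or mask the contributions of occluded views when the per-view matching costs are aggregated into the refined cost volume, which is the role the abstract assigns to the occlusion-aware refined cost volume.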
Pages: 1680-1691
Page count: 12
Related Papers
50 records in total
  • [11] Occlusion-Aware Unsupervised Light Field Depth Estimation Based on Multi-Scale GANs
    Yan, Wenbin
    Zhang, Xiaogang
    Chen, Hua
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (07) : 6318 - 6333
  • [12] Unsupervised light field disparity estimation using confidence weight and occlusion-aware
    Xiao, Bo
    Gao, Xiujing
    Zheng, Huadong
    Yang, Huibao
    Huang, Hongwu
    OPTICS AND LASERS IN ENGINEERING, 2025, 189
  • [13] Depth from Defocus with Learned Optics for Imaging and Occlusion-aware Depth Estimation
    Ikoma, Hayato
    Nguyen, Cindy M.
    Metzler, Christopher A.
    Peng, Yifan
    Wetzstein, Gordon
    2021 IEEE INTERNATIONAL CONFERENCE ON COMPUTATIONAL PHOTOGRAPHY (ICCP), 2021,
  • [14] Anti-occlusion light field depth estimation guided by Gini cost volume
    Zhang X.-D.
    Dong Y.-L.
    Shi M.-D.
Kongzhi yu Juece/Control and Decision, 2020, 35 (08): 1849 - 1858
  • [15] Occlusion-aware optical flow estimation
    Ince, Serdar
    Konrad, Janusz
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2008, 17 (08) : 1443 - 1451
  • [16] Depth Estimation with Cascade Occlusion Culling Filter for Light-field Cameras
    Zhou, Wenhui
    Lumsdaine, Andrew
    Lin, Lili
    2016 23RD INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2016, : 1887 - 1892
  • [17] Beyond Photometric Consistency: Geometry-Based Occlusion-Aware Unsupervised Light Field Disparity Estimation
    Zhou, Wenhui
    Lin, Lili
    Hong, Yongjie
    Li, Qiujian
    Shen, Xingfa
    Kuruoglu, Ercan Engin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) : 15660 - 15674
  • [19] Fast Depth Densification for Occlusion-aware Augmented Reality
    Holynski, Aleksander
    Kopf, Johannes
    SIGGRAPH ASIA'18: SIGGRAPH ASIA 2018 TECHNICAL PAPERS, 2018,
  • [20] Fast Depth Densification for Occlusion-aware Augmented Reality
    Holynski, Aleksander
    Kopf, Johannes
ACM TRANSACTIONS ON GRAPHICS, 2018, 37 (06)