Joint Object Detection and Depth Estimation in Multiplexed Image

Cited: 0
Authors
Zhou, Changxin [1 ]
Liu, Yazhou [1 ]
Institutions
[1] Nanjing Univ Sci & Technol, Nanjing, Peoples R China
Source
INTELLIGENCE SCIENCE AND BIG DATA ENGINEERING: VISUAL DATA ENGINEERING, PT I | 2019 / Vol. 11935
Keywords
Object detection; Depth estimation; Multiplexed image;
DOI
10.1007/978-3-030-36189-1_26
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents an object detection method that simultaneously estimates the positions and depths of objects from multiplexed images. A multiplexed image [28] is produced by a new type of imaging device that collects light from different fields of view with a single image sensor; this device was originally designed for stereo, 3D reconstruction, and wide-view generation using computational imaging. Intuitively, a multiplexed image is a blend of the images of multiple views, so both the appearance and the disparities of objects are implicitly encoded in a single image, which makes reliable object detection and depth/disparity estimation possible. Motivated by the recent success of CNN-based detectors, a multi-anchor detection method is proposed that detects all views of the same object as a clique and uses the disparity between the views to estimate the depth of the object. The method is interesting in the following aspects: first, both the locations and depths of objects can be estimated simultaneously from a single multiplexed image; second, there is almost no increase in computational load compared with popular object detectors; third, even on blended multiplexed images, the detection and depth estimation results are very competitive. Because there is no public multiplexed-image dataset yet, the evaluation is based on multiplexed images simulated from KITTI stereo pairs, and very encouraging results have been obtained.
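The two ingredients the abstract describes, simulating a multiplexed image by blending a stereo pair and recovering depth from the disparity between views, can be sketched as below. The weighted-average blending model and the focal-length/baseline numbers are illustrative assumptions for a KITTI-like rig, not the paper's exact setup.

```python
import numpy as np

def simulate_multiplexed(left, right, alpha=0.5):
    """Blend a stereo pair into one frame by weighted averaging.

    The multiplexing device sums light from several fields of view onto
    a single sensor; a weighted average of the two views is one simple
    way to approximate that blend for simulation.
    """
    return alpha * left + (1.0 - alpha) * right

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo relation: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Toy example: two constant 2x3 "views" blended into one frame.
left = np.full((2, 3), 100.0)
right = np.full((2, 3), 60.0)
mux = simulate_multiplexed(left, right)  # every pixel becomes 80.0

# Hypothetical KITTI-like intrinsics: f ~ 721 px, baseline ~ 0.54 m.
z = depth_from_disparity(disparity_px=50.0, focal_px=721.0, baseline_m=0.54)
```

A detector trained on such blends must then associate the two shifted copies of each object (the "clique" in the paper) and read depth off their horizontal offset via the relation above.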
Pages: 312-323
Page count: 12
Related Papers
50 total
  • [41] Target-oriented deformable fast depth estimation based on stereo vision for space object detection
    Xu, Chengcheng
    Zhao, Haiyan
    Gao, Bingzhao
    Liu, Hangyu
    Xie, Hongbin
    MEASUREMENT, 2025, 245
  • [42] Geo-Temporal Selective Approach for Dynamic Depth Estimation in Outdoor Object Detection and Distance Measurement
    Faseeh, Muhammad
    Bibi, Misbah
    Khan, Murad Ali
    Rizwan, Atif
    Ahmad, Rashid
    Kim, Do Hyeun
    IEEE ACCESS, 2024, 12 : 155522 - 155533
  • [43] DEPTH MAP ESTIMATION FROM SINGLE-VIEW IMAGE USING OBJECT CLASSIFICATION BASED ON BAYESIAN LEARNING
    Jung, Jae-Il
    Ho, Yo-Sung
    2010 3DTV-CONFERENCE: THE TRUE VISION - CAPTURE, TRANSMISSION AND DISPLAY OF 3D VIDEO (3DTV-CON 2010), 2010,
  • [44] Pedestrian Detection with Sparse Depth Estimation
    Wang, Yu
    Kato, Jien
    IEICE TRANSACTIONS ON INFORMATION AND SYSTEMS, 2011, E94D (08): : 1690 - 1699
  • [45] A GENERALIZED DEPTH ESTIMATION ALGORITHM WITH A SINGLE IMAGE
    LAI, SH
    FU, CW
    CHANG, SY
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 1992, 14 (04) : 405 - 411
  • [46] EFFICIENT DEPTH ESTIMATION FROM SINGLE IMAGE
    Zhou, Wei
    Dai, Yuchao
    He, Renjie
    2014 IEEE CHINA SUMMIT & INTERNATIONAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (CHINASIP), 2014, : 296 - 300
  • [47] eGAC3D: enhancing depth adaptive convolution and depth estimation for monocular 3D object pose detection
    Ngo, Duc Tuan
    Bui, Minh-Quan Viet
    Nguyen, Duc Dung
    Pham, Hoang-Anh
    PEERJ COMPUTER SCIENCE, 2022, 8
  • [48] ODD-M3D: Object-Wise Dense Depth Estimation for Monocular 3-D Object Detection
    Park, Chanyeong
    Kim, Heegwang
    Jang, Junbo
    Paik, Joonki
    IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, 2024, 70 (01) : 646 - 655
  • [49] Image Captioning with Object Detection and Localization
    Yang, Zhongliang
    Zhang, Yu-Jin
    Rehman, Sadaqat Ur
    Huang, Yongfeng
    IMAGE AND GRAPHICS (ICIG 2017), PT II, 2017, 10667 : 109 - 118
  • [50] UNSUPERVISED SINGLE IMAGE UNDERWATER DEPTH ESTIMATION
    Gupta, Honey
    Mitra, Kaushik
    2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 624 - 628