A Hybrid Framework for Object Distance Estimation using a Monocular Camera

Cited by: 2
Authors
Patel, Vaibhav [1 ]
Mehta, Varun [2 ]
Bolic, Miodrag [1 ]
Mantegh, Iraj [2 ]
Affiliations
[1] Univ Ottawa, Sch Elect Engn & Comp Sci SEECS, 800 King Edward, Ottawa, ON, Canada
[2] Natl Res Council Canada, Montreal, PQ, Canada
Source
2023 IEEE/AIAA 42ND DIGITAL AVIONICS SYSTEMS CONFERENCE, DASC | 2023
Keywords
Object distance estimation; Monocular camera; Hybrid framework; Object detection;
DOI
10.1109/DASC58513.2023.10311189
Chinese Library Classification
V [Aeronautics, Astronautics];
Discipline Classification Codes
08 ; 0825 ;
Abstract
Object distance estimation using a monocular camera is a challenging problem in computer vision with many practical applications. Various algorithms have been developed for distance estimation with a monocular camera; some rely on traditional techniques, while others are based on Deep Learning (DL). Both approaches have limitations, such as requiring camera calibration parameters, offering a limited distance estimation range, or needing the object of interest to be relatively large for accurate distance estimation. Because of these drawbacks, such algorithms cannot be easily generalized to many practical applications. In this paper, we propose a hybrid monocular distance estimation framework that combines the You Only Look Once version 7 (YOLOv7) algorithm for visual object detection with a linear regression model for distance estimation. For our use case, the framework is trained on our field-captured Unmanned Aerial Vehicle (UAV) dataset to detect UAVs and estimate their distance. The dataset includes videos of UAVs obtained from different Points of View (POV) using a Pan-Tilt-Zoom (PTZ) camera that captures and tracks UAVs over a large field of view. Video frames are synchronized with range data from a Radio Detection and Ranging (RADAR) sensor, which serves as ground truth for the regression model. The regression model is trained on input features such as bounding box coordinates, the average number of red, blue, and yellow pixels within the bounding box, and embedded features of detected objects obtained from YOLOv7, with RADAR range measurements as the output. The trained UAV detection network achieves mAP(0.5) of 0.854 and mAP(.5:.95) of 0.595, and the distance estimation regressor achieves a Mean Squared Error (MSE) of 0.06375 on an independent test set. We validated this framework on our field dataset and demonstrated that our approach can detect objects and estimate distance efficiently and accurately. The framework can be extended to any real-world monocular distance estimation use case simply by retraining the YOLOv7 model for the desired object detection class and the regression model for object-specific distance estimation.
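As a rough illustration of the pipeline described in the abstract, the sketch below assembles per-detection features (bounding box coordinates, color-pixel fractions inside the box, and a detector embedding) and fits a linear regressor against RADAR range measurements. The feature layout, the color-pixel definition, the synthetic data, and the use of scikit-learn's LinearRegression are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal sketch of the hybrid pipeline: YOLOv7-style detections provide
# bounding boxes and embeddings; a linear regressor maps per-detection
# features to RADAR-measured range (ground truth). Helper names and the
# feature layout are illustrative assumptions, not the authors' code.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

def color_pixel_fractions(crop: np.ndarray) -> np.ndarray:
    """Fraction of pixels dominated by red, blue, and a yellow proxy.

    One plausible reading of "average number of red, blue, and yellow
    pixels within the bounding box"; the exact definition is an assumption.
    """
    r = crop[..., 0].astype(int)
    g = crop[..., 1].astype(int)
    b = crop[..., 2].astype(int)
    n = crop.shape[0] * crop.shape[1]
    red = ((r > g) & (r > b)).sum() / n
    blue = ((b > r) & (b > g)).sum() / n
    yellow = ((r > b) & (g > b)).sum() / n          # red+green dominance as a yellow proxy
    return np.array([red, blue, yellow])

def build_features(frame: np.ndarray, bbox: tuple, embedding: np.ndarray) -> np.ndarray:
    """Concatenate bbox coordinates, color fractions, and a detector embedding."""
    x1, y1, x2, y2 = bbox
    crop = frame[y1:y2, x1:x2]
    return np.concatenate([np.array(bbox, dtype=float),
                           color_pixel_fractions(crop),
                           embedding])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for detections: frames, boxes, YOLOv7 embeddings,
    # and synchronized RADAR ranges (the real pipeline uses field data).
    frames = [rng.integers(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(200)]
    bboxes = [(100, 120, 160, 170)] * 200
    embeddings = [rng.normal(size=32) for _ in range(200)]
    radar_range = rng.uniform(50.0, 500.0, size=200)   # metres, ground truth

    X = np.stack([build_features(f, b, e) for f, b, e in zip(frames, bboxes, embeddings)])
    y = radar_range

    # Simple train/test split, mirroring the paper's use of an independent test set.
    model = LinearRegression().fit(X[:150], y[:150])
    preds = model.predict(X[150:])
    print("Test MSE:", mean_squared_error(y[150:], preds))
```

In practice the embeddings and bounding boxes would come from the trained YOLOv7 detector and the range labels from the synchronized RADAR track, rather than from random placeholders as in this sketch.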
Pages: 7