Indoor Camera Pose Estimation From Room Layouts and Image Outer Corners

Cited: 0
Authors
Chen, Xiaowei [1 ]
Fan, Guoliang [1 ]
Affiliations
[1] Oklahoma State Univ, Sch Elect & Comp Engn, Stillwater, OK 74078 USA
Funding
US National Institutes of Health;
Keywords
Image outer corners (IOCs); PnL (perspective-n-line) problem; camera pose estimation; NSGA-II; GENETIC ALGORITHM; POINTS;
DOI
10.1109/TMM.2022.3233308
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
To support indoor scene understanding, room layouts have recently been introduced that define a few typical space configurations in terms of junctions and boundary lines. In this paper, we study camera pose estimation from eight common room layouts with at least two boundary lines, which is cast as a PnL (Perspective-n-Line) problem. Specifically, the intersection points between image borders and room-layout boundaries, named image outer corners (IOCs), are introduced to create additional auxiliary lines for PnL optimization. A new PnL-IOC algorithm is therefore proposed, with two implementations depending on the room layout type. The first handles six layouts with more than two boundary lines, where 3D correspondence estimation of the IOCs creates sufficient line correspondences for camera pose estimation. The second is an extended version for two challenging layouts with only two coplanar boundaries, where correspondence estimation of the IOCs is ill-posed due to insufficient constraints; the NSGA-II multi-objective genetic algorithm is therefore embedded in PnL-IOC to estimate the IOC correspondences. In the last step, the camera pose is jointly optimized with 3D correspondence refinement of the IOCs using the iterative Gauss-Newton algorithm. Experimental results on both simulated and real images show the advantages of the proposed PnL-IOC method in accuracy and robustness of camera pose estimation over existing PnL methods across the eight room layouts.
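The core geometric construction in the abstract, intersecting extended room-layout boundary lines with the image borders to obtain IOCs, can be sketched as below. This is an illustrative reconstruction under assumed conventions (pixel coordinates, image rectangle [0, width] x [0, height]), not the authors' implementation; the function name and interface are hypothetical.

```python
# Hedged sketch: compute "image outer corners" (IOCs) as intersections between
# the image borders and the infinite line through two layout junction points.
# Illustrative only; not the paper's actual code.

def image_outer_corners(p, q, width, height):
    """Intersect the line through 2D points p and q with the image rectangle
    [0, width] x [0, height]; return the (up to two) border intersections."""
    (x1, y1), (x2, y2) = p, q
    dx, dy = x2 - x1, y2 - y1
    hits = []
    # Vertical image borders x = 0 and x = width.
    if dx != 0:
        for xb in (0.0, float(width)):
            t = (xb - x1) / dx
            y = y1 + t * dy
            if 0.0 <= y <= height:
                hits.append((xb, y))
    # Horizontal image borders y = 0 and y = height.
    if dy != 0:
        for yb in (0.0, float(height)):
            t = (yb - y1) / dy
            x = x1 + t * dx
            if 0.0 <= x <= width:
                hits.append((x, yb))
    # Deduplicate the degenerate case where the line passes exactly through
    # a corner of the image rectangle (counted once per adjacent border).
    uniq = []
    for h in hits:
        if all(abs(h[0] - u[0]) > 1e-9 or abs(h[1] - u[1]) > 1e-9 for u in uniq):
            uniq.append(h)
    return uniq
```

Each IOC, paired with an interior layout junction, yields an auxiliary 2D line segment; the paper's contribution lies in estimating the 3D correspondences of these IOCs so the segments can enter a standard PnL formulation.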
Pages: 7992-8005
Page count: 14