MCGNET: MULTI-LEVEL CONTEXT-AWARE AND GEOMETRIC-AWARE NETWORK FOR 3D OBJECT DETECTION

Cited by: 2
Authors
Chen, Keng [1 ]
Zhou, Feng [1 ]
Dai, Ju [2 ]
Shen, Pei [1 ]
Cai, Xingquan [1 ]
Zhang, Fengquan [1 ]
Affiliations
[1] North China Univ Technol, Beijing, Peoples R China
[2] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
3D Point Clouds; 3D Object Detection; Attention; 3D Bounding Boxes;
DOI
10.1109/ICIP46576.2022.9897465
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Hough voting based on PointNet++ [1] is effective for 3D object detection, as verified by VoteNet [2], H3DNet [3], etc. However, we find there is still room for improvement in two aspects. First, most existing methods ignore the particular significance of different input formats and geometric primitives for predicting object proposals. Second, the features extracted by PointNet++ overlook contextual information about each object. In this paper, to tackle the above issues, we introduce MCGNet to learn multi-level geometric-aware and scale-aware contextual information for 3D object detection. Specifically, our network mainly consists of a baseline module based on H3DNet, a geometric-aware module, and a context-aware module. The baseline module, fed with four types of inputs (Point, Edge, Surface, and Line), concentrates on extracting diversified geometric primitives, i.e., BB (bounding box) centers, BB face centers, and BB edge centers. The geometric-aware module is proposed to learn the different contributions of the four types of feature maps and the three geometric primitives. The context-aware module aims to establish long-range dependencies for both the four types of feature maps and the three geometric primitives. Extensive experiments on two large datasets of real 3D scans, SUN RGB-D and ScanNet, demonstrate that our method is effective for 3D object detection.
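The context-aware module in the abstract aims to establish long-range dependencies across features. As a rough illustration of that general idea only (not the authors' actual module; the function name, the use of plain scaled dot-product self-attention, and the residual connection are all assumptions), a minimal sketch over per-proposal feature vectors:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_context(features):
    """Enhance per-proposal features (N, C) with long-range context via
    scaled dot-product self-attention plus a residual connection.
    A generic sketch, not MCGNet's exact context-aware module."""
    n, c = features.shape
    scores = features @ features.T / np.sqrt(c)  # (N, N) pairwise affinities
    weights = softmax(scores, axis=-1)           # each row sums to 1
    context = weights @ features                 # (N, C) aggregated global context
    return features + context                    # residual: keep local detail

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4))   # e.g. 8 proposals with 4-dim features
out = self_attention_context(feats)
print(out.shape)  # (8, 4)
```

Because every output row mixes information from all input rows, each feature vector can attend to distant parts of the scene, which is what "long-range dependency" refers to here.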
Pages: 1846-1850
Page count: 5
Related Papers
(50 records in total)
  • [1] CONTEXT-AWARE DATA AUGMENTATION FOR LIDAR 3D OBJECT DETECTION
    Hu, Xuzhong
    Duan, Zaipeng
    Huang, Xiao
    Xu, Ziwen
    Ming, Delie
    Ma, Jie
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 11 - 15
  • [2] Multi-scale Fusion with Context-aware Network for Object Detection
    Wang, Hanyuan
    Xu, Jie
    Li, Linke
    Tian, Ye
    Xu, Du
    Xu, Shizhong
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2486 - 2491
  • [3] Improving 3D Object Detection with Context-Aware and Dimensional Interaction Attention
    Zhou, Jing
    Gong, Zixin
    Zhang, Junchi
    NEURAL PROCESSING LETTERS, 2024, 56 (01)
  • [4] Context-aware network for RGB-D salient object detection
    Liang, Fangfang
    Duan, Lijuan
    Ma, Wei
    Qiao, Yuanhua
    Miao, Jun
    Ye, Qixiang
    PATTERN RECOGNITION, 2021, 111
  • [5] Context-Aware 3D Object Streaming for Mobile Games
    Rahimi, Hesam
    Shirehjini, Ali Asghar Nazari
    Shirmohammadi, Shervin
    2011 10TH ANNUAL WORKSHOP ON NETWORK AND SYSTEMS SUPPORT FOR GAMES (NETGAMES 2011), 2011
  • [6] Context-aware 3D object anchoring for mobile robots
    Guenther, Martin
    Ruiz-Sarmiento, J. R.
    Galindo, Cipriano
    Gonzalez-Jimenez, Javier
    Hertzberg, Joachim
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2018, 110 : 12 - 32
  • [7] Context-aware knowledge distillation network for object detection
    Chu, Jing-Hui
    Shi, Li-Dong
    Jing, Pei-Guang
    Lv, Wei
    Zhejiang Daxue Xuebao (Gongxue Ban)/Journal of Zhejiang University (Engineering Science), 2022, 56 (03): : 503 - 509
  • [8] Spatial context-aware network for salient object detection
    Kong, Yuqiu
    Feng, Mengyang
    Li, Xin
    Lu, Huchuan
    Liu, Xiuping
    Yin, Baocai
    PATTERN RECOGNITION, 2021, 114
  • [9] Discriminative context-aware network for camouflaged object detection
    Ike, Chidiebere Somadina
    Muhammad, Nazeer
    Bibi, Nargis
    Alhazmi, Samah
    Eoghan, Furey
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 7