3D Directional Encoding for Point Cloud Analysis

Cited: 0
Authors
Jung, Yoonjae [1 ]
Lee, Sang-Hyun [2 ]
Seo, Seung-Woo [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul 08826, South Korea
[2] Ajou Univ, Dept AI Mobil Engn, Suwon 16499, South Korea
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Feature extraction; Vectors; Point cloud compression; Three-dimensional displays; Encoding; Transformers; Network architecture; Data mining; Computer architecture; Neural networks; Information retrieval; Classification; deep learning; directional feature extraction; efficient neural network; point cloud; segmentation;
DOI
10.1109/ACCESS.2024.3472301
Chinese Library Classification (CLC): TP [automation technology, computer technology]
Discipline code: 0812
Abstract
Extracting informative local features from point clouds is crucial for accurately understanding the spatial information in 3D point data. Previous works use either complex network designs or simple multi-layer perceptrons (MLPs) to extract local features. However, complex networks often incur high computational cost, whereas simple MLPs may struggle to capture the spatial relations among local points effectively. These challenges limit their scalability to delicate and real-time tasks such as autonomous driving and robot navigation. To address these challenges, we propose a novel 3D Directional Encoding Network (3D-DENet) capable of effectively encoding spatial relations at low computational cost. 3D-DENet extracts spatial and point features separately. The key component of 3D-DENet for spatial feature extraction is Directional Encoding (DE), which encodes the cosine similarity between the direction vectors of local points and trainable direction vectors. To extract point features, we also propose Local Point Feature Multi-Aggregation (LPFMA), which integrates various aspects of local point features using diverse aggregation functions. By leveraging DE and LPFMA in a hierarchical structure, 3D-DENet efficiently captures both detailed spatial and high-level semantic features from point clouds. Experiments show that 3D-DENet is effective and efficient in classification and segmentation tasks. In particular, 3D-DENet achieves an overall accuracy of 90.7% and a mean accuracy of 90.1% on ScanObjectNN, outperforming the current state-of-the-art method while using only 47% of its floating-point operations.
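The two components described in the abstract can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical reading of the abstract only: the module names DirectionalEncoding and LocalPointFeatureMultiAggregation, the tensor shapes, the number of learned directions, and the choice of max/mean/std aggregations are assumptions for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DirectionalEncoding(nn.Module):
    # Hypothetical sketch of DE: cosine similarity between the direction
    # vectors from a query point to its neighbors and a set of trainable
    # direction vectors. The number of directions is an assumption.
    def __init__(self, num_directions=8):
        super().__init__()
        self.directions = nn.Parameter(torch.randn(num_directions, 3))

    def forward(self, centers, neighbors):
        # centers:   (B, N, 3)      query points
        # neighbors: (B, N, K, 3)   K nearest neighbors of each query point
        rel = neighbors - centers.unsqueeze(2)           # local offsets
        rel = F.normalize(rel, dim=-1)                   # unit direction vectors
        dirs = F.normalize(self.directions, dim=-1)      # (D, 3) unit vectors
        # Cosine similarity between each local direction and each learned direction.
        return torch.einsum('bnkc,dc->bnkd', rel, dirs)  # (B, N, K, D)

class LocalPointFeatureMultiAggregation(nn.Module):
    # Hypothetical sketch of LPFMA: combine several aggregation functions
    # (here max, mean, and std) over the K neighbors of each point.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.proj = nn.Linear(3 * in_dim, out_dim)

    def forward(self, feats):
        # feats: (B, N, K, C) features of the K neighbors of each point
        agg = torch.cat([
            feats.max(dim=2).values,   # (B, N, C)
            feats.mean(dim=2),         # (B, N, C)
            feats.std(dim=2),          # (B, N, C)
        ], dim=-1)
        return self.proj(agg)          # (B, N, out_dim)

# Example with assumed shapes: 2 clouds, 512 query points, 16 neighbors each.
de = DirectionalEncoding(num_directions=8)
centers = torch.randn(2, 512, 3)
neighbors = centers.unsqueeze(2) + 0.1 * torch.randn(2, 512, 16, 3)
code = de(centers, neighbors)          # (2, 512, 16, 8)

In this reading, the DE output depends only on directions (not distances) within each neighborhood, which is consistent with the abstract's emphasis on encoding spatial relations cheaply; how the paper combines the DE output with the LPFMA point features in its hierarchical structure is not specified here.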
Pages: 144533-144543 (11 pages)