3D Reconstruction of End-Effector in Autonomous Positioning Process Using Depth Imaging Device

Cited by: 46
Authors
Hu, Yanzhu [1 ]
Li, Leiyuan [1 ]
Affiliation
[1] Beijing Univ Posts & Telecommun, Coll Automat, Beijing 100876, Peoples R China
Keywords
CALIBRATION
DOI
10.1155/2016/8972764
CLC number
T [Industrial Technology]
Discipline code
08
Abstract
The real-time calculation of positioning error, error correction, and state analysis has always been a difficult challenge in manipulator autonomous positioning. To solve this problem, a simple depth imaging device (Kinect) is used, and a Kalman filtering method based on three-frame subtraction is proposed to capture the end-effector motion. Moreover, a backpropagation (BP) neural network is adopted to recognize the target. At the same time, a batch point cloud model is proposed, based on the depth video stream, to calculate the space coordinates of the end-effector and the target. Then, a 3D surface is fitted using radial basis functions (RBF) and morphology. The experiments have demonstrated that the end-effector positioning error can be corrected in a short time. The prediction accuracies of both position and velocity reach 99%, and a recognition rate of 99.8% is achieved for a cylindrical object. Furthermore, the gradual convergence of the end-effector center (EEC) to the target center (TC) shows that the autonomous positioning is successful. Simultaneously, 3D reconstruction is completed to analyze the positioning state. Hence, the algorithm proposed in this paper is competent for autonomous positioning of a manipulator, and its effectiveness is validated by 3D reconstruction. The computational ability is increased and system efficiency is greatly improved.
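The three-frame subtraction step that the abstract uses to capture end-effector motion can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the function name and threshold value are illustrative assumptions.

```python
import numpy as np

def three_frame_subtraction(f_prev, f_curr, f_next, thresh=25):
    """Detect moving pixels by intersecting two consecutive frame
    differences: a pixel is marked as motion only if it changed both
    from the previous frame and to the next frame.
    (Threshold value is a hypothetical choice, not from the paper.)"""
    # Cast to a signed type so the differences do not wrap around.
    d1 = np.abs(f_curr.astype(np.int16) - f_prev.astype(np.int16))
    d2 = np.abs(f_next.astype(np.int16) - f_curr.astype(np.int16))
    mask = (d1 > thresh) & (d2 > thresh)
    return mask.astype(np.uint8)
```

Intersecting the two difference images suppresses the "ghost" regions that simple two-frame differencing leaves behind a moving object, which is why three-frame subtraction is a common front end for tracking filters such as the Kalman filter mentioned in the abstract.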
Pages: 16
Related Papers (50 total)
[21] Gonzalez-Barbosa, Jose-Joel; Gomez-Loenzo, Roberto-Augusto; Jimenez-Hernandez, Hugo; Razo, Miguel; Gonzalez-Barbosa, Ricardo. Accurate 3D Reconstruction using a Turntable-based and Telecentric Vision [J]. AUTOMATIKA, 2015, 56(04): 508-521.
[22] Glira, Philipp; Weidinger, Christoph; Kadiofsky, Thomas; Pointner, Wolfgang; Olsbock, Katharina; Zinner, Christian; Doostdar, Masrur. 3D Mobile Mapping of the Environment using Imaging Radar Sensors [J]. 2022 IEEE RADAR CONFERENCE (RADARCONF'22), 2022.
[23] Wang, Jiahao; Zhou, Zhehai. The 3D reconstruction method of a line-structured light vision sensor based on composite depth images [J]. MEASUREMENT SCIENCE AND TECHNOLOGY, 2021, 32(07).
[24] Lu, Hongliang; Sun, Jili; Wang, Jili; Wang, Chunle. A Novel Phase Compensation Method for Urban 3D Reconstruction Using SAR Tomography [J]. REMOTE SENSING, 2022, 14(16).
[25] Roh, YJ; Park, WS; Cho, HS; Jeon, HJ. An implementation of uniform and simultaneous ART for 3D volume reconstruction in X-ray imaging system [J]. OPTOMECHATRONIC SYSTEMS III, 2002, 4902: 576-587.
[26] Jiao, Jichao; Yuan, Libin; Tang, Weihua; Deng, Zhongliang; Wu, Qi. A Post-Rectification Approach of Depth Images of Kinect v2 for 3D Reconstruction of Indoor Scenes [J]. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION, 2017, 6(11).
[27] Kim, Hee-Kwon; Lee, Jisu; Yu, ChoRong; Shim, HeeSook; Gil, Youn-Hee; Jee, Hyung-Keun. 3D Hand Gestures Calibration Method for Multi-Display by Using a Depth Camera [J]. 2017 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC), 2017: 1044-1046.
[28] Yan, Bo; Wang, Wenxuan; Yan, Ying; Xu, Luping; Zhang, Hua. A Low-Cost 3-D Imaging Device Using 2-D LiDAR and Reflectors [J]. IEEE SENSORS JOURNAL, 2023, 23(08): 8797-8809.
[29] Jezia, Yousfi; Samir, Lahouar; Abdelmajid, Ben Amara. Image-based 3D reconstruction precision using a camera mounted on a robot arm [J]. INTERNATIONAL JOURNAL OF NONLINEAR SCIENCES AND NUMERICAL SIMULATION, 2023, 24(04): 1197-1214.
[30] Lyu, Yuxing; Liu, Zongming; Wang, Junhua; Jiang, Ying; Li, Yidan; Li, Xinglong; Kong, Lingbao; Li, Jing; Xu, Min. A high-precision binocular 3D reconstruction system based on depth-of-field extension and feature point guidance [J]. MEASUREMENT, 2025, 248.