The next-best-view for workpiece localization in robot workspace

Cited by: 2
Authors
Hu, Jie [1 ]
Pagilla, Prabhakar R. [1 ]
Darbha, Swaroop [1 ]
Affiliations
[1] Texas A&M Univ, Dept Mech Engn, College Stn, TX 77843 USA
Source
2021 IEEE/ASME INTERNATIONAL CONFERENCE ON ADVANCED INTELLIGENT MECHATRONICS (AIM) | 2021
Keywords
workpiece localization; robotics; manufacturing; next-best-view;
DOI
10.1109/AIM46487.2021.9517657
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Workpiece localization is the process of obtaining the location of a workpiece in a reference frame of a robotic workspace. The location (position and orientation) is represented by the transformation between a local frame associated with the workpiece and the specified reference frame in the workspace. In this work, we study the workpiece localization problem without two commonly adopted restrictive assumptions: that the data used to calculate the transformation is readily available, and that the correspondence between the data sets used for the calculation is known. The goal is to automate the localization process, from efficient data collection through determination of the workpiece location in the workspace. We describe a strategy that includes the following aspects: predicting the correspondence between the measured data and the workpiece CAD model data; generating representative vectors that aid in determining the next-best-view for collecting new information about the workpiece location; evaluating a search region to find the next sensor location that satisfies both the robot kinematic constraints and the sensor field-of-view constraints while giving the maximum view gain; and calculating the rigid body transformation from the local frame to the world frame to localize the workpiece. Numerical simulation and experimental results for the proposed strategy are presented and discussed.
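The pipeline above is specific to the paper, but its bookend steps, correspondence prediction and the rigid body fit, rest on standard machinery. The following Python/NumPy sketch is a generic illustration under assumed inputs, not the authors' implementation: it pairs each measured point with its nearest CAD-model point (a common nearest-neighbour stand-in for correspondence prediction) and then solves the least-squares rigid body transform with the SVD-based Kabsch method. All function names and the placeholder data are hypothetical.

```python
import numpy as np

def predict_correspondence(measured, model):
    """Pair each measured point (world frame) with its nearest CAD-model
    point (local frame). Nearest-neighbour matching is a generic stand-in
    for the paper's correspondence prediction step."""
    # Pairwise squared distances between measured (N,3) and model (M,3) points.
    d2 = ((measured[:, None, :] - model[None, :, :]) ** 2).sum(axis=2)
    return model[d2.argmin(axis=1)]           # one model point per measurement

def rigid_transform(p, q):
    """Least-squares rigid body transform (R, t) with R @ p_i + t ~= q_i,
    via the SVD-based Kabsch solution."""
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    H = (p - p_bar).T @ (q - q_bar)           # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # repair a reflection, if any
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    return R, q_bar - R @ p_bar

# Localize: fit the CAD model (local frame) to sensor measurements (world
# frame). `measured` and `cad_model` are placeholder inputs for illustration.
measured = np.random.rand(50, 3)              # stand-in measured surface points
cad_model = np.random.rand(200, 3)            # stand-in CAD-model sample points
matched = predict_correspondence(measured, cad_model)
R, t = rigid_transform(matched, measured)     # local-frame -> world-frame pose
```

Iterating these two steps until the residual stops shrinking gives the classic ICP loop; the paper's contribution lies in choosing the next sensor view that supplies the measured points, subject to the robot kinematic and field-of-view constraints described in the abstract.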
Pages: 1201-1206
Page count: 6