Real-Time Reconstruction of 3-D Tactile Motion Field via Multitask Learning

Times Cited: 0
Authors
Liu, Jin [1 ]
Yu, Hexi [2 ]
Zhao, Can [2 ]
Liu, Wenhai [3 ]
Ma, Daolin [2 ]
Wang, Weiming [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Mech Engn, Shanghai 200240, Peoples R China
[2] Shanghai Jiao Tong Univ, Sch Naval Architecture Ocean & Civil Engn, Shanghai 200240, Peoples R China
[3] Shanghai Jiao Tong Univ, Sch Elect Informat & Elect Engn, Shanghai 200240, Peoples R China
Keywords
Three-dimensional displays; Tracking; Image reconstruction; Task analysis; Tactile sensors; Real-time systems; Surface reconstruction; 3-D motion field; multitask learning; vision-based tactile sensor; SENSOR;
DOI
10.1109/TIM.2024.3398136
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronic and communication technology];
Discipline codes
0808 ; 0809 ;
Abstract
The 3-D motion field on the surface of a vision-based tactile sensor contains rich tactile information and serves as the foundation for many downstream tasks. However, achieving real-time, precise reconstruction of the 3-D motion field is challenging. In this study, the 3-D motion field is decomposed into a depth map and a 2-D motion field, and a multitask learning model is employed to estimate both simultaneously. The approach excels in accuracy and real-time performance (9.20 ms/frame). For depth reconstruction, the model achieves an average mean absolute error (MAE) of 0.062 mm. For 2-D motion field tracking, an effective structured marker tracking algorithm (SMTA) is introduced, and a marker tracking network (MaTnet) is constructed based on it. This network features dynamic sensing fields and a distance field (DF) auxiliary task, offering strong generalization, interpretability, and ease of training. It exhibits excellent tracking performance under various deformations, with an average error of 1.044 pixels (about 0.03 mm). Finally, the strong transferability of the multitask model is demonstrated, with a maximum decrease in depth reconstruction accuracy of only 0.018 mm.
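The core idea of the abstract — recovering a 3-D motion field by combining an estimated depth map with a tracked 2-D motion field — can be illustrated with a minimal, paper-independent sketch. This is not the authors' implementation; it assumes a simple pinhole camera model, and the function name `lift_to_3d_motion` and the intrinsics `fx, fy, cx, cy` are hypothetical placeholders for a real sensor calibration:

```python
import numpy as np

def lift_to_3d_motion(flow_2d, depth_t0, depth_t1, fx, fy, cx, cy):
    """Combine a 2-D pixel flow and two depth maps into a 3-D motion field.

    flow_2d:  (H, W, 2) pixel displacements (du, dv) between two frames
    depth_t0: (H, W) depth map at the first frame (e.g., in mm)
    depth_t1: (H, W) depth map at the second frame
    fx, fy, cx, cy: pinhole intrinsics (hypothetical calibration values)
    Returns an (H, W, 3) per-pixel 3-D displacement field.
    """
    H, W = depth_t0.shape
    u0, v0 = np.meshgrid(np.arange(W, dtype=float),
                         np.arange(H, dtype=float))
    u1 = u0 + flow_2d[..., 0]
    v1 = v0 + flow_2d[..., 1]

    # Sample depth at the tracked location (nearest neighbor for simplicity).
    ui = np.clip(np.round(u1).astype(int), 0, W - 1)
    vi = np.clip(np.round(v1).astype(int), 0, H - 1)
    z0 = depth_t0
    z1 = depth_t1[vi, ui]

    # Back-project both pixel positions through the pinhole model.
    x0 = (u0 - cx) * z0 / fx
    y0 = (v0 - cy) * z0 / fy
    x1 = (u1 - cx) * z1 / fx
    y1 = (v1 - cy) * z1 / fy

    # 3-D displacement of each surface point between the two frames.
    return np.stack([x1 - x0, y1 - y0, z1 - z0], axis=-1)
```

Under zero flow and unchanged depth, the sketch returns an all-zero field, which is the expected sanity check; a real system would replace the nearest-neighbor depth lookup with subpixel interpolation at tracked marker positions.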
Pages: 1-13
Page count: 13