Feelit: Combining Compliant Shape Displays with Vision-Based Tactile Sensors for Real-Time Teletaction

Cited: 0
Authors
Yu, Oscar [1 ]
She, Yu [2 ]
Institutions
[1] Purdue Univ, Sch Elect & Comp Engn, W Lafayette, IN 47907 USA
[2] Purdue Univ, Sch Ind Engn, W Lafayette, IN 47907 USA
Keywords
DOI
10.1109/IROS58592.2024.10802027
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812 ;
Abstract
Teletaction, the transmission of tactile feedback or touch, is a crucial aspect in the field of teleoperation. High-quality teletaction feedback allows users to remotely manipulate objects and increases the quality of the human-machine interface between the operator and the robot, making complex manipulation tasks possible. Advances in the field of teletaction for teleoperation, however, have yet to make full use of the high-resolution 3D data provided by modern vision-based tactile sensors. Existing solutions for teletaction fall short in one or more areas of form or function, such as fidelity or hardware footprint. In this paper, we showcase our design for a low-cost teletaction device that can utilize real-time high-resolution tactile information from vision-based tactile sensors, through both physical 3D surface reconstruction and shear displacement. We present our device, the Feelit, which uses a combination of a pin-based shape display and compliant mechanisms to accomplish this task. The pin-based shape display utilizes an array of 24 servomotors with miniature Bowden cables, giving the device a resolution of 6x4 pins in a 15x10 mm display footprint. Each pin can actuate up to 3 mm in 200 ms, while providing 80 N of force and 1.5 µm of depth resolution. Shear displacement and rotation are achieved using a compliant mechanism design, allowing a minimum of 1 mm of lateral displacement and 10 degrees of rotation. This real-time 3D tactile reconstruction is achieved with a vision-based tactile sensor, the GelSight [1], along with an algorithm that samples the depth data and tracks markers to generate actuator commands. Through a series of experiments including shape recognition and relative weight identification, we show that our device has the potential to expand teletaction capabilities in the teleoperation space.
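The depth-sampling step described above (mapping the sensor's depth map onto the 6x4 pin array) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, image size, and block-averaging scheme are assumptions; only the pin grid (6 wide, 4 tall) and the 3 mm stroke come from the abstract.

```python
import numpy as np

def depth_to_pin_commands(depth, rows=4, cols=6, max_stroke_mm=3.0):
    """Downsample a tactile depth map into per-pin actuation commands.

    Hypothetical sketch: average the depth image over a rows x cols grid
    (one cell per pin), then normalize and scale to the pin stroke.
    """
    h, w = depth.shape
    # Trim so the image divides evenly into the pin grid.
    h_trim, w_trim = (h // rows) * rows, (w // cols) * cols
    blocks = depth[:h_trim, :w_trim].reshape(
        rows, h_trim // rows, cols, w_trim // cols)
    cell_depth = blocks.mean(axis=(1, 3))  # mean depth per pin cell
    # Normalize to [0, 1] and scale to the pin's stroke length.
    d_min, d_max = cell_depth.min(), cell_depth.max()
    if d_max > d_min:
        norm = (cell_depth - d_min) / (d_max - d_min)
    else:
        norm = np.zeros_like(cell_depth)
    return norm * max_stroke_mm  # mm of extension per pin, shape (rows, cols)

# Usage: a synthetic depth map with a Gaussian bump in the center.
yy, xx = np.mgrid[0:240, 0:320]
depth = np.exp(-(((yy - 120) / 60.0) ** 2 + ((xx - 160) / 80.0) ** 2))
cmds = depth_to_pin_commands(depth)
print(cmds.shape)  # (4, 6)
```

A real system would also apply the marker-tracking output to the compliant shear stage; that part is omitted here.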
Pages: 13853 - 13860
Page count: 8