Joint Inference of Kinematic and Force Trajectories with Visuo-Tactile Sensing

Cited by: 0
Authors
Lambert, Alexander [1 ,4 ]
Mukadam, Mustafa [1 ,4 ]
Sundaralingam, Balakumar [2 ,3 ,4 ]
Ratliff, Nathan [4 ]
Boots, Byron [1 ,4 ]
Fox, Dieter [4 ,5 ]
Affiliations
[1] Georgia Inst Technol, Robot Learning Lab, Atlanta, GA 30332 USA
[2] Univ Utah, Robot Ctr, Salt Lake City, UT 84112 USA
[3] Univ Utah, Sch Comp, Salt Lake City, UT 84112 USA
[4] NVIDIA, Santa Clara, CA 95051 USA
[5] Univ Washington, Paul G Allen Sch Comp Sci & Engn, Seattle, WA 98195 USA
Source
2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA) | 2019
Keywords
DOI
10.1109/icra.2019.8794048
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
To perform complex tasks, robots must be able to interact with and manipulate their surroundings. One of the key challenges in accomplishing this is robust state estimation during physical interactions, where the state involves not only the robot and the object being manipulated, but also the state of the contact itself. In this work, within the context of planar pushing, we extend previous inference-based approaches to state estimation in several ways. We estimate the robot, object, and contact state on multiple manipulation platforms configured with a vision-based articulated model tracker and either a biomimetic tactile sensor or a force-torque sensor. We show how to fuse raw measurements from the tracker and tactile sensors to jointly estimate the trajectory of the kinematic states and the forces in the system via probabilistic inference on factor graphs, in both batch and incremental settings. We perform several benchmarks with our framework and show how performance is affected by incorporating various geometric and physics-based constraints, occluding the vision sensor, or injecting noise into the tactile sensors. We also compare with prior work on multiple datasets and demonstrate that our approach can effectively optimize over multi-modal sensor data and reduce uncertainty to find better state estimates.
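As a rough illustration of the factor-graph formulation the abstract refers to, the sketch below smooths a planar object trajectory from noisy pose measurements in a batch setting, written against GTSAM's Python bindings. This is not the authors' released code: the function name, noise parameters, and synthetic data are illustrative assumptions, the physics and contact factors that couple forces to the kinematic states in the paper are omitted, and the incremental setting would replace the batch optimizer with gtsam.ISAM2.

```python
# A minimal sketch (not the authors' implementation): batch factor-graph smoothing
# of a planar object trajectory from noisy pose measurements using GTSAM.
# The paper's full model additionally attaches contact and force variables to these
# kinematic states via physics-based factors, which are left out here.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X  # keys x_0 ... x_{T-1} for object poses


def smooth_planar_trajectory(measured_poses,
                             meas_sigmas=(0.05, 0.05, 0.10),     # illustrative values
                             motion_sigmas=(0.02, 0.02, 0.05)):  # illustrative values
    """measured_poses: list of (x, y, theta) readings, e.g. from a vision tracker."""
    graph = gtsam.NonlinearFactorGraph()
    initial = gtsam.Values()

    meas_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array(meas_sigmas))
    motion_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array(motion_sigmas))

    for t, (x, y, th) in enumerate(measured_poses):
        pose = gtsam.Pose2(x, y, th)
        # Unary factor tying the state at time t to the (noisy) tracker reading.
        graph.add(gtsam.PriorFactorPose2(X(t), pose, meas_noise))
        initial.insert(X(t), pose)
        if t > 0:
            # Weak smoothness factor between consecutive states (quasi-static pushing).
            graph.add(gtsam.BetweenFactorPose2(X(t - 1), X(t),
                                               gtsam.Pose2(0.0, 0.0, 0.0),
                                               motion_noise))

    result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
    return [result.atPose2(X(t)) for t in range(len(measured_poses))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = [(0.1 * t + rng.normal(0, 0.05), rng.normal(0, 0.05), 0.0)
             for t in range(20)]
    for pose in smooth_planar_trajectory(noisy)[:3]:
        print(pose)
```

In the paper's setting, the unary factors would come from the fused vision-tracker and tactile/force-torque measurements rather than from pose readings alone, and the between-state factors would encode the geometric and physics-based constraints the abstract mentions.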
Pages: 3165-3171
Page count: 7