TechnoSapiens: merging humans with technology in augmented reality

Cited: 3
Authors
Rudolph, Carsten [1 ]
Brunnett, Guido [1 ]
Bretschneider, Maximilian [1 ]
Meyer, Bertolt [1 ]
Asbrock, Frank [1 ]
Affiliations
[1] Tech Univ Chemnitz, Chemnitz, Germany
Keywords
Mixed reality; Augmented reality; Human computer interaction; Social perception; Stereotyping; VALIDATION; EMBODIMENT; ILLUSION; FABRIK; MODEL;
DOI
10.1007/s00371-023-02829-7
CLC Classification
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
We present a marker-less AR/DR system that replaces the user's arm with a virtual bionic prosthesis in real time, including finger tracking. The system uses a mixed-reality HMD that provides the user with a stereo image via video see-through (VST). We apply chroma keying to remove the user's arm from each captured image and fill the removed pixels with reconstructed background information. Before rendering the prosthesis model into the image, we re-target motion-capture data of the user's hand to the kinematic skeleton of the prosthesis so that it matches the current hand pose. This system opens new research possibilities on self- and other-perception of bionic bodies. In a first evaluation study of the system, we hypothesized that users perceive the virtual prosthesis model as part of their own body (i.e., that they experience a sense of ownership). We tested this assumption in a laboratory study with 27 participants who used the system to perform a series of simple tasks in AR with the prosthesis. Body ownership and related constructs were measured with self-reports. In support of the hypothesis, users experienced a sense of body ownership; the task also induced a feeling of self-presence, and participants rated the overall experience as positive.
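The diminished-reality step the abstract describes — keying out the arm and filling the removed pixels from a reconstructed background — can be illustrated with a minimal chroma-keying sketch. This is a simplified stand-in, not the authors' implementation; the `key_color` and `tol` parameters are illustrative assumptions:

```python
import numpy as np

def chroma_key_replace(frame, background, key_color=(0, 255, 0), tol=60):
    """Replace pixels whose color is within `tol` of `key_color`
    with the corresponding pixels from `background`.

    frame, background: (H, W, 3) uint8 RGB images of the same size.
    Returns (composited image, boolean key mask).
    """
    # Per-pixel Euclidean distance to the key color in RGB space.
    diff = frame.astype(np.int32) - np.array(key_color, dtype=np.int32)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    mask = dist < tol                    # True where the keyed (arm) pixels are
    out = frame.copy()
    out[mask] = background[mask]         # fill holes from the reconstructed background
    return out, mask
```

A production system would operate in a perceptual color space (e.g., HSV) and blend mask edges rather than hard-replacing pixels, but the pipeline shape — mask, then fill from background — is the same.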
Pages: 1021-1036
Page count: 16