Portrait Vision Fusion for Augmented Reality

Cited by: 0
Authors
Juang, Li-Hong [1]
Wu, Ming-Ni [2]
Tsou, Feng-Mao [2]
Affiliations
[1] Xiamen Univ Technol, Sch Elect Engn & Automat, 600 Ligong Rd, Xiamen 360124, Peoples R China
[2] Natl Taichung Univ Technol, Dept Informat Management, Taichung, Taiwan
Keywords
Kinect (+OpenCV); Dynamic portrait segmentation; Skeletal tracking; Edge transparent processing; Video interactive; HUMAN-BODY; SEGMENTATION; RECONSTRUCTION; INFORMATION; OBJECTS; DEPTH; TIME;
DOI
10.1080/10798587.2017.1327549
Chinese Library Classification
TP [Automation technology; Computer technology];
Discipline Code
0812;
Abstract
Video communication is a common form of interactive technology: webcams are used for remote interaction so that each participant can see the others' characteristics on the screen display. The main goal of this paper is to augment such sessions with dynamic, interactive virtual environments. Toward this goal, a method is proposed that superimposes a segmented human portrait on a panoramic background and adds a limb-interaction element to the video. A dynamic portrait segmentation method, built on a Kinect (+OpenCV) device, extracts and refines the portrait so that complete portrait information is acquired. Because the face is the most important identification region in a portrait, a head skeleton tracking method is also used to strengthen the head segmentation, and edge transparent processing is then applied to synthesize the portrait into the video. The approach lets users communicate verbally and physically through video interaction much more vibrantly.
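The abstract describes a pipeline of depth-based portrait segmentation, head-region refinement via skeleton tracking, and edge transparent processing before compositing onto a panoramic background. The Python/OpenCV sketch below illustrates only two of those steps, depth-threshold segmentation and a feathered (edge-transparent) alpha blend, under assumed parameters (the near/far depth band and feather width are hypothetical); it is a minimal illustration of the general idea, not the authors' implementation, and it omits the Kinect skeleton-based head refinement.

import cv2
import numpy as np

def composite_portrait(color, depth_mm, background, near=500, far=2000, feather=15):
    """Blend a depth-segmented portrait onto a background with soft edges.

    color      : HxWx3 BGR frame from the RGB camera
    depth_mm   : HxW depth map in millimetres, registered to the color frame
    background : HxWx3 BGR panoramic background, same size as color
    near, far  : assumed depth band (mm) containing the user
    feather    : odd Gaussian kernel size used to soften the mask edge
    """
    # Binary portrait mask: pixels whose depth falls inside the user band.
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255

    # Remove small speckles left by depth-sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Edge transparent processing: feather the mask so the portrait fades
    # into the background instead of showing a hard cut-out boundary.
    alpha = cv2.GaussianBlur(mask, (feather, feather), 0).astype(np.float32) / 255.0
    alpha = alpha[..., None]  # HxWx1, broadcast over the BGR channels

    # Per-pixel alpha blend of the portrait over the panoramic background.
    out = alpha * color.astype(np.float32) + (1.0 - alpha) * background.astype(np.float32)
    return out.astype(np.uint8)

In use, color and depth_mm would come from a Kinect colour/depth pair registered to the same viewpoint, and background would be a panoramic image resized to the frame; the feathered alpha is what keeps the portrait boundary from looking pasted on.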
Pages: 739-746
Page count: 8
Related Papers
50 records in total
  • [21] Augmented reality and stereo vision for remote scene characterisation
    Lawson, SW
    Pretlove, JRG
    TELEMANIPULATOR AND TELEPRESENCE TECHNOLOGIES VI, 1999, 3840 : 133 - 143
  • [22] Survey of vision-based augmented reality technologies
    Institute of Software Engineering, East China Normal University, Shanghai 200062, China
    Jiqiren, 2008, 4 (379-384):
  • [23] Towards Social Companions in Augmented Reality: Vision and Challenges
    Nijholt, Anton
    DISTRIBUTED, AMBIENT AND PERVASIVE INTERACTIONS: SMART LIVING, LEARNING, WELL-BEING AND HEALTH, ART AND CREATIVITY, PT II, 2022, 13326 : 304 - 319
  • [24] Confluence of computer vision and interactive graphics for augmented reality
    Klinker, GJ
    Ahlers, KH
    Breen, DE
    Chevalier, PY
    Crampton, C
    Greer, DS
    Koller, D
    Kramer, A
    Rose, E
    Tuceryan, M
    Whitaker, RT
    PRESENCE-VIRTUAL AND AUGMENTED REALITY, 1997, 6 (04): : 433 - 451
  • [25] Augmented reality, redefinition of vision and technology within art
    Betanzos Torres, Eber Omar
    Marquez Roa, Ubaldo
    SITUARTE, 2022, 17 (29): : 49 - 54
  • [26] Adaptive fusion framework based on augmented reality training
    Mignotte, P. -Y.
    Coiras, E.
    Rohou, H.
    Petillot, Y.
    Bell, J.
    Lebart, K.
    IET RADAR SONAR AND NAVIGATION, 2008, 2 (02): : 146 - 154
  • [27] On Sensor Fusion for Head Tracking in Augmented Reality Applications
    Ercan, Ali O.
    Erdem, A. Tanju
    2011 AMERICAN CONTROL CONFERENCE, 2011, : 1286 - 1291
  • [28] Two-step fusion method for augmented reality
    Conan, V
    Bisson, P
    TELEMANIPULATOR AND TELEPRESENCE TECHNOLOGIES IV, 1997, 3206 : 106 - 115
  • [29] An Analysis on the Range of Singular Fusion of Augmented Reality Devices
    Lee, Hanul
    Park, Minyoung
    Lee, Hyeontaek
    Choi, Hee-Jin
    CURRENT OPTICS AND PHOTONICS, 2020, 4 (06) : 540 - 544
  • [30] GAP BETWEEN THE PORTRAIT OF THE REALITY AND THE REALITY OF THE PORTRAIT
    Lapointe, Paul-Andre
    RECHERCHES SOCIOGRAPHIQUES, 2012, 53 (01) : 170 - 181