Customizable Multi-Modal Mixed Reality Framework

Cited by: 0
Authors
Omary, Danah [1 ]
Affiliation
[1] Univ North Texas, Denton, TX 76205 USA
Source
2024 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS, VRW 2024 | 2024
Keywords
Extended Reality; Virtual Reality; Mixed Reality; Assistive Technology; Blind and Visually Impaired; Accessibility; User Interface; Haptic Feedback
DOI
10.1109/VRW62533.2024.00364
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Mixed Reality (MR) has the potential to be used not only in the entertainment and workplace industries, but also as assistive technology. Mixed Reality is well suited to assistive technology because it engages physical feedback channels for the human senses beyond vision while still making use of visual feedback. We propose a glove-based MR system framework that will use finger and hand movement tracking along with tactile feedback so that the blind and visually impaired (BVI) can interact tactilely with virtual objects. In addition to touch, our proposed framework will include robust interactions through other modalities, such as a custom voice assistant and an audio interface, as well as visual interfaces tailored to the visual needs of BVI users. Through the various modalities of interaction in our proposed framework, BVI users will be able to obtain a more detailed sense of virtual objects from any 3D model, and their experiences will not be limited by vision. The customizable features and modalities available in our proposed system framework will allow for a more individual experience that can be tailored to the varied needs of BVI users as well as general users.
Pages: 1140-1141
Page count: 2
Related Papers
50 records in total
  • [1] Multi-Modal Interactions of Mixed Reality Framework
    Omary, Danah
    Mehta, Gayatri
    17TH IEEE DALLAS CIRCUITS AND SYSTEMS CONFERENCE, DCAS 2024, 2024,
  • [2] Multi-modal musical environments for mixed-reality performance
    Hamilton, Robert
    Caceres, Juan-Pablo
    Nanou, Chryssie
    Platz, Chris
    JOURNAL ON MULTIMODAL USER INTERFACES, 2011, 4 (3-4) : 147 - 156
  • [3] Mixed Reality Deictic Gesture for Multi-Modal Robot Communication
    Williams, Tom
    Bussing, Matthew
    Cabrol, Sebastian
    Boyle, Elizabeth
    Tran, Nhan
    HRI '19: 2019 14TH ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2019, : 191 - 201
  • [5] The Influence of Collaborative and Multi-Modal Mixed Reality: Cultural Learning in Virtual Heritage
    Bekele, Mafkereseb Kassahun
    Champion, Erik
    McMeekin, David A.
    Rahaman, Hafizur
    MULTIMODAL TECHNOLOGIES AND INTERACTION, 2021, 5 (12)
  • [6] Clouds-Based Collaborative and Multi-Modal Mixed Reality for Virtual Heritage
    Bekele, Mafkereseb Kassahun
    HERITAGE, 2021, 4 (03) : 1447 - 1459
  • [7] Multi-modal Human-Computer Virtual Fusion Interaction In Mixed Reality
    Jia, Shengying
    JOURNAL OF APPLIED SCIENCE AND ENGINEERING, 2023, 26 (11): : 1609 - 1618
  • [8] Multi-modal event streams for virtual reality
    von Spiczak, J.
    Samset, E.
    DiMaio, S.
    Reitmayr, G.
    Schmalstieg, D.
    Burghart, C.
    Kikinis, R.
    MULTIMEDIA COMPUTING AND NETWORKING 2007, 2007, 6504
  • [9] MagicChem: A Multi-modal Mixed Reality System Based on Needs Theory for Chemical Education
    Luo, Tianren
    Cai, Ning
    Li, Zheng
    Miao, Jinda
    Pan, Zhipeng
    Shen, Yuze
    Pan, Zhigeng
    Zhang, Mingmin
    2021 IEEE CONFERENCE ON VIRTUAL REALITY AND 3D USER INTERFACES ABSTRACTS AND WORKSHOPS (VRW 2021), 2021, : 544 - 545
  • [10] A Multi-Modal Haptic Interface for Virtual Reality and Robotics
    Folgheraiter, Michele
    Gini, Giuseppina
    Vercesi, Dario
    JOURNAL OF INTELLIGENT AND ROBOTIC SYSTEMS, 2008, 52 : 465 - 488