Multimodal Deep Reinforcement Learning for Visual Security of Virtual Reality Applications

Cited: 0
Authors
Andam, Amine [1 ]
Bentahar, Jamal [2 ,3 ]
Hedabou, Mustapha [1 ]
Affiliations
[1] Mohammed VI Polytech Univ, Sch Comp Sci, Ben Guerir 43150, Morocco
[2] Khalifa Univ, Res Ctr 6G, Abu Dhabi, U Arab Emirates
[3] Concordia Univ, Concordia Inst Informat Syst Engn, Montreal, PQ H3G 1M8, Canada
Source
IEEE INTERNET OF THINGS JOURNAL | 2024 / Vol. 11 / No. 24
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Security; Visualization; Avatars; Internet of Things; Web conferencing; Three-dimensional displays; Deep reinforcement learning; Deep reinforcement learning (DRL); multimodal neural network; output security; virtual reality (VR); SELECTIVE ATTENTION; DOMINANCE; ONSETS;
DOI
10.1109/JIOT.2024.3450686
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
The rapid development of virtual reality (VR) technologies is delivering unprecedented immersive experiences and novel digital content. Nevertheless, these advancements introduce new security challenges, especially in safeguarding the visual content displayed by VR devices such as VR glasses and head-mounted displays. Most existing approaches to visual output security rely exclusively on numerical data, such as object attributes, and overlook the visual information necessary for thorough VR protection. Moreover, these approaches typically assume a fixed-size input, failing to address the dynamic nature of VR, where the number of virtual items is constantly changing. This article presents a multimodal deep reinforcement learning (MMDRL) approach to securing the visual outputs of VR applications. We formalize a Markov decision process (MDP) framework for the MMDRL agent that integrates both numerical and image data into the state space to effectively mitigate visual threats. Furthermore, our MMDRL agent is engineered to handle data of varying sizes, making it more suitable for VR environments. Results from our experiments demonstrate the agent's ability to successfully counteract visual attacks, significantly outperforming previous approaches. An ablation study confirms the important role of image data in improving the agent's performance, highlighting the efficacy of our multimodal approach. In addition, we provide a video demonstration showcasing these results. Finally, we open-source our VR testbed and source code for further testing and benchmarking.
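The abstract's two key ideas, fusing image and numerical data in one state space and handling a varying number of virtual items, can be illustrated with a minimal sketch. This is not the authors' implementation: the function name `encode_state`, the 4x4 average pooling in place of a learned CNN encoder, and mean pooling over item attributes are all assumptions made purely to show how a state of variable size can be mapped to a fixed-size vector for a DRL agent.

```python
import numpy as np

def encode_state(image, item_attrs):
    """Encode a multimodal VR state into a fixed-size vector (illustrative only).

    image: (H, W) grayscale frame from the headset's view, H and W divisible by 4.
    item_attrs: (n_items, d) numerical attributes per virtual object
                (e.g., position, size); n_items varies over time.
    """
    # Image branch: coarse 4x4 average pooling stands in for a CNN encoder.
    H, W = image.shape
    img_feat = image.reshape(4, H // 4, 4, W // 4).mean(axis=(1, 3)).ravel()
    # Numerical branch: permutation-invariant mean pooling over items, so the
    # output dimension does not depend on how many objects are present.
    if item_attrs.shape[0] == 0:
        num_feat = np.zeros(item_attrs.shape[1])
    else:
        num_feat = item_attrs.mean(axis=0)
    # Concatenate both modalities into one fixed-size state vector.
    return np.concatenate([img_feat, num_feat])

# States with different object counts map to equal-length vectors,
# which is what lets a single agent handle a dynamic VR scene.
s1 = encode_state(np.random.rand(16, 16), np.random.rand(3, 5))
s2 = encode_state(np.random.rand(16, 16), np.random.rand(7, 5))
assert s1.shape == s2.shape
```

In a real agent the pooling steps would be replaced by learned encoders, but the structural point carries over: any aggregation that is invariant to the item count yields a state representation the policy network can consume regardless of how many objects the scene contains.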
Pages: 39890-39900
Page count: 11