Towards Training an Agent in Augmented Reality World with Reinforcement Learning

Cited by: 0
Authors
Muvva, Veera Venkata Ram Murali Krishna Rao [1]
Adhikari, Naresh [1 ]
Ghimire, Amrita D. [1 ]
Affiliations
[1] Mississippi State Univ, Dept Comp Sci, Starkville, MS 39762 USA
Source
2017 17TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS) | 2017
Keywords
Reinforcement Learning; Virtual Reality; Augmented Reality; Fiducial Markers;
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Reinforcement learning (RL) enables an agent to learn an optimal path through a specific environment while maximizing its performance, and it plays a crucial role in training an agent to accomplish a specific task. To learn an optimal policy, a robot must undergo intensive training, which is not cost-effective in the real world. A cost-effective alternative is to train the agent in a virtual environment, so that the learned optimal policy can be applied in both virtual and real environments to reach the goal state. In this paper, a new method is proposed to train a physical robot to avoid a mix of physical and virtual obstacles and reach a desired goal state, using the optimal policy obtained by training the robot in an augmented reality (AR) world with Q-learning, an active reinforcement learning technique.
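The Q-learning approach named in the abstract can be sketched as follows. This is a minimal illustration only: the 5x5 grid, obstacle positions, reward values, and hyperparameters are assumptions for the sketch, not details taken from the paper.

```python
import random

SIZE = 5
OBSTACLES = {(1, 1), (2, 3), (3, 1)}           # stand-ins for mixed physical/virtual obstacles
GOAL = (4, 4)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(state, action):
    """Apply an action; bumping a wall or obstacle leaves the state unchanged."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE) or (r, c) in OBSTACLES:
        return state, -1.0          # penalty for hitting a wall or obstacle
    if (r, c) == GOAL:
        return (r, c), 10.0         # reward for reaching the goal state
    return (r, c), -0.1             # small step cost encourages short paths

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    Q = {(r, c): [0.0] * len(ACTIONS) for r in range(SIZE) for c in range(SIZE)}
    for _ in range(episodes):
        s = (0, 0)
        while s != GOAL:
            a = (rng.randrange(len(ACTIONS)) if rng.random() < epsilon
                 else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
            s2, reward = step(s, ACTIONS[a])
            # Q-learning update: move Q(s,a) toward the TD target r + gamma*max_a' Q(s',a')
            Q[s][a] += alpha * (reward + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

def greedy_path(Q, max_steps=50):
    """Follow the learned greedy policy from the start toward the goal."""
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_steps):
        if s == GOAL:
            break
        a = max(range(len(ACTIONS)), key=lambda i: Q[s][i])
        s, _ = step(s, ACTIONS[a])
        path.append(s)
    return path

Q = train()
print(greedy_path(Q))
```

After training, the greedy policy traces an obstacle-free path from the start cell to the goal; in the paper's setting, virtual obstacles registered via fiducial markers would simply appear as additional blocked cells in the agent's state space.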
Pages: 1884-1888
Page count: 5
Related Papers
50 items total (10 shown)
  • [1] Towards augmented reality for corporate training
    Martins, Bruno Rodrigo
    Jorge, Joaquim Armando
    Zorzal, Ezequiel Roberto
    INTERACTIVE LEARNING ENVIRONMENTS, 2023, 31 (04) : 2305 - 2323
  • [2] Augmented Reality in Education Learning and Training
    Nasser, Doaa Nae'l
    2018 JCCO JOINT INTERNATIONAL CONFERENCE ON ICT IN EDUCATION AND TRAINING, INTERNATIONAL CONFERENCE ON COMPUTING IN ARABIC, AND INTERNATIONAL CONFERENCE ON GEOCOMPUTING (JCCO: TICET-ICCA-GECO), 2018, : 154 - 160
  • [3] Augmented Reality-Assisted Deep Reinforcement Learning-Based Model towards Industrial Training and Maintenance for NanoDrop Spectrophotometer
    Alatawi, Hibah
    Albalawi, Nouf
    Shahata, Ghadah
    Aljohani, Khulud
    Alhakamy, A'aeshah
    Tuceryan, Mihran
    SENSORS, 2023, 23 (13)
  • [4] TOWARDS WEARABLE AUGMENTED REALITY IN AUTOMOTIVE ASSEMBLY TRAINING
    Kreft, Sven
    Gausemeier, Jurgen
    Matysczok, Carsten
    ASME INTERNATIONAL DESIGN ENGINEERING TECHNICAL CONFERENCES AND COMPUTERS AND INFORMATION IN ENGINEERING CONFERENCE, PROCEEDINGS, VOL 2, PTS A AND B, 2010, : 1537 - 1547
  • [5] Towards a Mobile Augmented Reality Prototype for Corporate Training
    Marengo, Agostino
    Pagano, Alessandro
    Ladisa, Lucia
    PROCEEDINGS OF THE 16TH EUROPEAN CONFERENCE ON E-LEARNING (ECEL 2017), 2017, : 362 - 366
  • [6] REINDEAR : REINforcement learning agent for Dynamic system control in Edge-Assisted Augmented Reality service
    Lee, KyungChae
    Youn, Chan-Hyun
    11TH INTERNATIONAL CONFERENCE ON ICT CONVERGENCE: DATA, NETWORK, AND AI IN THE AGE OF UNTACT (ICTC 2020), 2020, : 949 - 954
  • [7] Zombies Arena: fusion of reinforcement learning with augmented reality on NPC
    Razzaq, Saad
    Maqbool, Fahad
    Khalid, Maham
    Tariq, Iram
    Zahoor, Aqsa
    Ilyas, Muhammad
    CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS, 2018, 21 (01): 655 - 666
  • [8] Delivering Resources for Augmented Reality by UAVs: a Reinforcement Learning Approach
    Brunori, Damiano
    Colonnese, Stefania
    Cuomo, Francesca
    Flore, Giovanna
    Iocchi, Luca
    FRONTIERS IN COMMUNICATIONS AND NETWORKS, 2021, 2
  • [9] Exploring Training Modes for Industrial Augmented Reality Learning
    Heinz, Mario
    Buettner, Sebastian
    Roecker, Carsten
    12TH ACM INTERNATIONAL CONFERENCE ON PERVASIVE TECHNOLOGIES RELATED TO ASSISTIVE ENVIRONMENTS (PETRA 2019), 2019, : 398 - 401