A novel approach for self-driving car in partially observable environment using life long reinforcement learning

Cited by: 1
|
Authors
Quadir, Md Abdul [1 ]
Jaiswal, Dibyanshu [1 ]
Mohan, Senthilkumar [2 ]
Innab, Nisreen [3 ]
Sulaiman, Riza [4 ]
Alaoui, Mohammed Kbiri [5 ]
Ahmadian, Ali [6 ,7 ]
Affiliations
[1] Vellore Inst Technol, Sch Comp Sci & Engn, Chennai 600127, India
[2] Vellore Inst Technol, Sch Comp Sci Engn & Informat Syst, Vellore 632014, Tamilnadu, India
[3] AlMaarefa Univ, Coll Appl Sci, Dept Comp Sci & Informat Syst, Riyadh, Saudi Arabia
[4] Univ Kebangsaan Malaysia, Inst Visual Informat, Bangi 43600, Malaysia
[5] King Khalid Univ, Coll Sci, Dept Math, Abha 61413, POB 9004, Saudi Arabia
[6] Mediterranea Univ Reggio Calabria, Decis Lab, Reggio Di Calabria, Italy
[7] Istanbul Okan Univ, Fac Engn & Nat Sci, Istanbul, Turkiye
Keywords
Reinforcement Learning; Lifelong Learning; Self-driving car; Lifelong reinforcement learning; Partially observable Environment; POLICY; GAMES;
DOI
10.1016/j.segan.2024.101356
Chinese Library Classification
TE [Petroleum and Natural Gas Industry]; TK [Energy and Power Engineering];
Subject Classification Codes
0807; 0820;
Abstract
Despite ground-breaking advancements in robotics, gaming, and other challenging domains, reinforcement learning still faces significant challenges in solving dynamic, open-world problems. Since reinforcement learning algorithms usually perform poorly when exposed to new tasks outside of their data distribution, continual learning algorithms have drawn significant attention. In parallel with work on lifelong learning algorithms, there is a need for challenging environments, properly planned trials, and metrics to measure research success. In this context, this paper proposes a Deep Asynchronous Autonomous Learning System (DAALS) for training a self-driving car in a partially observable environment with a continuous state-action space, targeting real-world scenarios. Three algorithms are used to cover three different use cases. To train its agents to learn and update discrete-state policies, DAALS uses the Asynchronous Advantage Stager Reviewer (AASR) algorithm. To train its agent for continuous state spaces, DAALS uses an Extensive Deterministic Policy Gradient (EDPG) algorithm. To train the agent in a lifelong fashion for partially observable environments, DAALS uses a Deep Deterministic Policy Gradient Novel Lifelong Learning Algorithm (DDPGNLLA). The system gives the user the flexibility to train agents for both discrete and continuous state-action spaces. In continuous state-action spaces, DDPGNLLA outperforms previous models by 46.09%. Furthermore, DAALS tends to outperform all previous reinforcement learning algorithms, making the proposed approach a real-world solution. Because DAALS has been tested on a number of different environments, it provides insights into how modern Artificial Intelligence (AI) solutions can be generalized, making it one of the better solutions for general-domain AI problems.
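The abstract names the algorithms but does not give their update rules. For orientation, the sketch below shows a generic DDPG-style actor-critic update for continuous state-action spaces, the family of methods on which EDPG and DDPGNLLA appear to build. This is not the paper's implementation: PyTorch is assumed, all network sizes, names, and hyperparameters are illustrative, and the lifelong-learning mechanism of DDPGNLLA is not shown because the abstract does not describe it.

```python
# Illustrative DDPG-style actor-critic update for continuous control.
# NOT the paper's algorithm: every size, name, and constant here is an
# assumption made only to sketch the underlying technique.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 2      # hypothetical dimensions for illustration
GAMMA, TAU = 0.99, 0.005          # discount factor, soft target-update rate


def mlp(in_dim, out_dim, out_act=None):
    """Small two-layer network; sizes are arbitrary choices for the sketch."""
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)


# Actor maps a state to a deterministic action in [-1, 1]; the critic scores
# (state, action) pairs. Target copies stabilise the bootstrapped targets.
actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic = mlp(STATE_DIM + ACTION_DIM, 1)
actor_tgt = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_tgt = mlp(STATE_DIM + ACTION_DIM, 1)
actor_tgt.load_state_dict(actor.state_dict())
critic_tgt.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)


def ddpg_update(s, a, r, s_next, done):
    """One gradient step on a batch of transitions (float tensors with shapes
    s/s_next (B, STATE_DIM), a (B, ACTION_DIM), r/done (B, 1))."""
    # Critic: regress Q(s, a) toward the one-step bootstrapped target.
    with torch.no_grad():
        next_q = critic_tgt(torch.cat([s_next, actor_tgt(s_next)], dim=1))
        target_q = r + GAMMA * (1.0 - done) * next_q
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target_q)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Actor: ascend the critic's estimate of the value of its own actions.
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    # Polyak-average the target networks toward the online networks.
    for net, tgt in ((actor, actor_tgt), (critic, critic_tgt)):
        for p, p_tgt in zip(net.parameters(), tgt.parameters()):
            p_tgt.data.mul_(1.0 - TAU).add_(TAU * p.data)


# Example call with a synthetic batch of 32 transitions:
B = 32
ddpg_update(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
            torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))
```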
Pages: 12
Related Papers
50 records in total
  • [1] Deep Reinforcement Learning with external control: Self-driving car application
    Youssef, Fenjiro
    Houda, Benbrahim
    4TH INTERNATIONAL CONFERENCE ON SMART CITY APPLICATIONS (SCA'19), 2019
  • [2] End-to-End Reinforcement Learning for Self-driving Car
    Chopra, Rohan
    Roy, Sanjiban Sekhar
    ADVANCED COMPUTING AND INTELLIGENT ENGINEERING, 2020, 1082 : 53 - 61
  • [3] CNN based Reinforcement Learning for Driving Behavior of Simulated Self-Driving Car
    Cho, Y.
    Lee, J.
    Lee, K.
    Transactions of the Korean Institute of Electrical Engineers, 2020, 69 (11): 1740 - 1749
  • [4] Toward self-driving processes: A deep reinforcement learning approach to control
    Spielberg, Steven
    Tulsyan, Aditya
    Lawrence, Nathan P.
    Loewen, Philip D.
    Gopaluni, R. Bhushan
    AICHE JOURNAL, 2019, 65 (10)
  • [5] Confidence-Aware Reinforcement Learning for Self-Driving Cars
    Cao, Zhong
    Xu, Shaobing
    Peng, Huei
    Yang, Diange
    Zidek, Robert
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (07) : 7419 - 7430
  • [6] Reinforcement learning with augmented states in partially expectation and action observable environment
    Guirnaldo, SA
    Watanabe, K
    Izumi, K
    Kiguchi, K
    SICE 2002: PROCEEDINGS OF THE 41ST SICE ANNUAL CONFERENCE, VOLS 1-5, 2002: 823 - 828
  • [7] Parallel, Angular and Perpendicular Parking for Self-Driving Cars using Deep Reinforcement Learning
    Sousa, Bruno
    Ribeiro, Tiago
    Coelho, Joana
    Lopes, Gil
    Ribeiro, A. Fernando
    2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC), 2022: 40 - 46
  • [8] Partially observable environment estimation with uplift inference for reinforcement learning based recommendation
    Shang, Wenjie
    Li, Qingyang
    Qin, Zhiwei
    Yu, Yang
    Meng, Yiping
    Ye, Jieping
    MACHINE LEARNING, 2021, 110 (09) : 2603 - 2640
  • [9] Decision Making for Self-Driving Vehicles in Unexpected Environments Using Efficient Reinforcement Learning Methods
    Kim, Min-Seong
    Eoh, Gyuho
    Park, Tae-Hyoung
    ELECTRONICS, 2022, 11 (11)