Evaluating Task Optimization and Reinforcement Learning Models in Robotic Task Parameterization

Times Cited: 0
Authors
Delledonne, Michele [1 ,2 ]
Villagrossi, Enrico [1 ]
Beschi, Manuel [1 ,2 ]
Rastegarpanah, Alireza [3 ]
Affiliations
[1] Natl Res Council Italy, Inst Intelligent Ind Technol & Syst Adv Mfg, I-20133 Milan, Italy
[2] Univ Brescia, Dept Mech & Ind Engn, I-25123 Brescia, Italy
[3] Univ Birmingham, Sch Met & Mat, Birmingham B15 2TT, England
Source
IEEE ACCESS | 2024, Volume 12
Keywords
Robots; Optimization; Programming; Reinforcement learning; Service robots; Artificial intelligence; Robot sensing systems; Software; Mathematical models; Libraries; robotic task optimization; task-oriented programming; intuitive robot programming
DOI
10.1109/ACCESS.2024.3504354
Chinese Library Classification
TP [Automation technology; computer technology]
Discipline Code
0812
Abstract
The rapid evolution of industrial robot hardware has created a technological gap with software, limiting its adoption. The software solutions proposed in recent years have yet to meet the industrial sector's requirements, as they focus more on defining the task structure than on defining and tuning its execution parameters. A framework for task parameter optimization was developed to address this gap. It breaks the task down into a modular structure, allowing the task to be optimized piece by piece. The optimization is performed with a dedicated hill-climbing algorithm. This paper revisits the framework by proposing an alternative approach that replaces the algorithmic component with reinforcement learning (RL) models. Five RL models of increasing complexity and efficiency are proposed. A comparative analysis of the traditional algorithm and the RL models is presented, highlighting efficiency, flexibility, and usability. The results demonstrate that although the RL models improve task optimization efficiency by 95%, they still lack flexibility. However, the nature of these models provides significant opportunities for future advancements.
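For illustration only, the sketch below shows the general shape of a hill-climbing loop over task execution parameters, the kind of algorithmic component the abstract says the RL models replace. This is a minimal assumption-laden example: the function name, the parameter names (approach_speed, contact_force), and the score function are hypothetical and do not reflect the authors' actual framework or code.

```python
# Illustrative sketch only: greedy hill climbing over task execution parameters.
# All names here are hypothetical; they are not taken from the paper's framework.
import random
from typing import Callable, Dict


def hill_climb(params: Dict[str, float],
               score_fn: Callable[[Dict[str, float]], float],
               step: float = 0.1,
               max_iters: int = 100) -> Dict[str, float]:
    """Perturb one parameter at a time and keep the change only if the score improves."""
    best = dict(params)
    best_score = score_fn(best)
    for _ in range(max_iters):
        candidate = dict(best)
        key = random.choice(list(candidate))            # pick one parameter
        candidate[key] += random.choice((-step, step))  # perturb it up or down
        candidate_score = score_fn(candidate)
        if candidate_score > best_score:                # keep improvements only
            best, best_score = candidate, candidate_score
    return best


if __name__ == "__main__":
    # Hypothetical usage: tune two parameters of one task module against a
    # black-box score that would come from executing (or simulating) the task.
    initial = {"approach_speed": 0.5, "contact_force": 1.0}
    score = lambda p: -(p["approach_speed"] - 0.8) ** 2 - (p["contact_force"] - 1.2) ** 2
    print(hill_climb(initial, score))
```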
Pages: 173734 - 173748
Number of pages: 15