Intelligent Navigation of a Magnetic Microrobot with Model-Free Deep Reinforcement Learning in a Real-World Environment

Cited: 5
Authors
Salehi, Amar [1 ]
Hosseinpour, Soleiman [1 ]
Tabatabaei, Nasrollah [2 ]
Soltani Firouz, Mahmoud [1 ]
Yu, Tingting [3 ]
Affiliations
[1] Univ Tehran, Fac Agr, Dept Mech Engn Agr Machinery, Karaj 3158777871, Iran
[2] Univ Tehran Med Sci, Sch Adv Technol Med, Dept Med Nanotechnol, Tehran 1461884513, Iran
[3] South China Univ Technol, Guangzhou Int Campus, Guangzhou 511442, Peoples R China
Keywords
autonomous navigation; deep reinforcement learning; intelligent microrobot; model-free control
DOI
10.3390/mi15010112
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject Classification Codes
070302; 081704
Abstract
Microrobotics has opened new horizons for a variety of applications, especially in medicine, but the field still faces challenges in reaching optimal performance. One key challenge is the intelligent, autonomous, and precise navigation control of microrobots in fluid environments. Intelligent, autonomous control that requires no prior knowledge of the full system model offers significant opportunities in scenarios where such models are unavailable. In this study, two control systems based on model-free deep reinforcement learning were implemented to steer a disk-shaped magnetic microrobot in a real-world environment. Training an off-policy Soft Actor-Critic (SAC) algorithm and an on-policy Trust Region Policy Optimization (TRPO) algorithm showed that the microrobot successfully learned optimal paths to random target positions. During training, TRPO exhibited higher sample efficiency and greater stability. In the evaluation phase, TRPO and SAC reached the targets with success rates of 100% and 97.5%, respectively. These findings offer foundational insights into intelligent, autonomous navigation control of microrobots, advancing their capabilities for a variety of applications.
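The record gives no implementation details, so the following is only a minimal sketch of the kind of setup the abstract describes: a simplified 2D point-mass environment standing in for the disk-shaped magnetic microrobot, with an off-policy SAC agent learning to reach random target positions. The environment, reward shaping, step size, tolerance, and training budget are all illustrative assumptions, not the authors' method; a real setup would actuate magnetic coil currents and observe the robot through camera feedback.

```python
# Hypothetical sketch of a model-free DRL navigation task (not the paper's code).
# Assumes the Gymnasium API and Stable-Baselines3's SAC implementation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class MicrorobotNavEnv(gym.Env):
    """Toy 2D stand-in for steering a magnetic microrobot to a random target."""

    def __init__(self, step_size=0.02, tolerance=0.05, max_steps=200):
        self.step_size = step_size    # displacement per control step (normalized units)
        self.tolerance = tolerance    # distance at which the target counts as reached
        self.max_steps = max_steps    # episode length limit
        # Observation: robot position (x, y) and target position (x, y).
        self.observation_space = spaces.Box(-1.0, 1.0, shape=(4,), dtype=np.float32)
        # Action: continuous 2D steering command (stand-in for the magnetic input).
        self.action_space = spaces.Box(-1.0, 1.0, shape=(2,), dtype=np.float32)

    def _obs(self):
        return np.concatenate([self.pos, self.target]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        self.pos = self.np_random.uniform(-1.0, 1.0, size=2).astype(np.float32)
        self.target = self.np_random.uniform(-1.0, 1.0, size=2).astype(np.float32)
        return self._obs(), {}

    def step(self, action):
        self.steps += 1
        self.pos = np.clip(self.pos + self.step_size * np.asarray(action), -1.0, 1.0)
        dist = float(np.linalg.norm(self.pos - self.target))
        reached = dist < self.tolerance
        # Dense shaping: penalize distance to the target, bonus for reaching it.
        reward = -dist + (10.0 if reached else 0.0)
        truncated = self.steps >= self.max_steps
        return self._obs(), reward, reached, truncated, {}


if __name__ == "__main__":
    env = MicrorobotNavEnv()
    model = SAC("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=50_000)  # illustrative training budget only
```

Under these assumptions, comparing against the on-policy TRPO baseline mentioned in the abstract would only change the final two lines, for example by importing TRPO from the sb3-contrib package.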
Pages: 16