Smart Magnetic Microrobots Learn to Swim with Deep Reinforcement Learning

Cited by: 28
Authors
Behrens, Michael R. [1 ]
Ruder, Warren C. [1 ,2 ]
Affiliations
[1] Univ Pittsburgh, Dept Bioengn, 300 Technol Dr, Pittsburgh, PA 15219 USA
[2] Carnegie Mellon Univ, Dept Mech Engn, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Funding
US National Institutes of Health; US National Science Foundation
Keywords
artificial intelligence; control systems; machine learning; magnetics; microrobot; reinforcement learning; robotics; BEHAVIOR; DESIGN; ROBOT;
DOI
10.1002/aisy.202200023
CLC Number (Chinese Library Classification)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Swimming microrobots are increasingly built from complex materials with dynamic shapes and are expected to operate in complex environments in which the system dynamics are difficult to model and positional control of the microrobot is not straightforward to achieve. Deep reinforcement learning is a promising method for autonomously developing robust controllers for such smart microrobots, allowing them to adapt their behavior to uncharacterized environments without a model of the system dynamics. This article reports the development of a smart helical magnetic hydrogel microrobot that uses the soft actor-critic (SAC) reinforcement learning algorithm to autonomously derive a control policy, enabling the microrobot to swim through an uncharacterized biomimetic fluidic environment under the control of a time-varying magnetic field generated by a three-axis array of electromagnets. The reinforcement learning agent learns successful control policies from both state-vector input and raw images, and the policies it learns recapitulate the behavior of rationally designed controllers based on physical models of helical swimming microrobots. Deep reinforcement learning applied to microrobot control is likely to significantly expand the capabilities of the next generation of microrobots.
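As a concrete illustration of the training setup the abstract describes, the minimal sketch below couples a Gymnasium-style environment, whose three-dimensional action stands in for the currents driving the three-axis electromagnet array, with a Soft Actor-Critic agent from stable-baselines3. The environment class, its placeholder dynamics, the distance-based reward, and names such as HelicalMicrorobotEnv are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a toy environment whose action is the
# normalized current in each of three orthogonal electromagnet pairs, trained
# with Soft Actor-Critic on a state-vector observation.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import SAC


class HelicalMicrorobotEnv(gym.Env):
    """Hypothetical stand-in for the magnetic microrobot workspace."""

    def __init__(self, dt=0.1):
        super().__init__()
        self.dt = dt
        # Action: normalized coil currents for the three-axis electromagnet array.
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(3,), dtype=np.float32)
        # Observation: microrobot x-y position and the x-y goal position.
        self.observation_space = spaces.Box(low=-1.0, high=1.0, shape=(4,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.pos = self.np_random.uniform(-0.5, 0.5, size=2).astype(np.float32)
        self.goal = self.np_random.uniform(-0.5, 0.5, size=2).astype(np.float32)
        return np.concatenate([self.pos, self.goal]), {}

    def step(self, action):
        # Placeholder dynamics: the in-plane field components nudge the swimmer.
        self.pos = np.clip(self.pos + self.dt * action[:2], -1.0, 1.0).astype(np.float32)
        dist = np.linalg.norm(self.pos - self.goal)
        reward = float(-dist)               # dense reward: move toward the goal
        terminated = bool(dist < 0.05)      # reached the target region
        obs = np.concatenate([self.pos, self.goal]).astype(np.float32)
        return obs, reward, terminated, False, {}


if __name__ == "__main__":
    env = HelicalMicrorobotEnv()
    model = SAC("MlpPolicy", env, verbose=1)
    model.learn(total_timesteps=10_000)
```

Replacing "MlpPolicy" with "CnnPolicy" and an image observation space would mirror the raw-image variant mentioned in the abstract; the reward and dynamics would still need to be supplied by the real experimental setup.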
Pages: 16
Related Papers
50 records in total
  • [41] Electricity Theft Detection Using Deep Reinforcement Learning in Smart Power Grids
    El-Toukhy, Ahmed T.
    Badr, Mahmoud M.
    Mahmoud, Mohamed M. E. A.
    Srivastava, Gautam
    Fouda, Mostafa M.
    Alsabaan, Maazen
    IEEE ACCESS, 2023, 11: 59558-59574
  • [42] Deep reinforcement learning for tuning active vibration control on a smart piezoelectric beam
    Febvre, Maryne
    Rodriguez, Jonathan
    Chesne, Simon
    Collet, Manuel
    JOURNAL OF INTELLIGENT MATERIAL SYSTEMS AND STRUCTURES, 2024, 35(14): 1149-1165
  • [43] Deep Reinforcement Learning for the management of Software-Defined Networks in Smart Farming
    Alonso, Ricardo S.
    Sitton-Candanedo, Ines
    Casado-Vara, Roberto
    Prieto, Javier
    Corchado, Juan M.
    2020 INTERNATIONAL CONFERENCE ON OMNI-LAYER INTELLIGENT SYSTEMS (IEEE COINS 2020), 2020: 135-140
  • [44] Intelligent Reflecting Surface Configurations for Smart Radio Using Deep Reinforcement Learning
    Wang, Wei
    Zhang, Wei
    IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, 2022, 40(08): 2335-2346
  • [45] Deep Reinforcement Learning-Based Job Shop Scheduling of Smart Manufacturing
    Elsayed, Eman K.
    Elsayed, Asmaa K.
    Eldahshan, Kamal A.
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 73(03): 5103-5120
  • [46] Fleet Optimization of Smart Electric Motorcycle System Using Deep Reinforcement Learning
    Anchuen, Patikorn
    Uthansakul, Peerapong
    Uthansakul, Monthippa
    Poochaya, Settawit
    CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 71(01): 1925-1943
  • [47] TCP-Drinc: Smart Congestion Control Based on Deep Reinforcement Learning
    Xiao, Kefan
    Mao, Shiwen
    Tugnait, Jitendra K.
    IEEE ACCESS, 2019, 7: 11892-11904
  • [48] Deep Reinforcement Learning Based Coalition Formation for Energy Trading in Smart Grid
    Sadeghi, Mohammad
    Erol-Kantarci, Melike
    2021 IEEE 4TH 5G WORLD FORUM (5GWF 2021), 2021: 200-205
  • [49] Mobile Service Robot Path Planning Using Deep Reinforcement Learning
    Kumaar, A. A. Nippun
    Kochuvila, Sreeja
    IEEE ACCESS, 2023, 11: 100083-100096
  • [50] Quantitative analysis of EXAFS data sets using deep reinforcement learning
    Jeong, Eun-Suk
    Hwang, In-Hui
    Han, Sang-Wook
    SCIENTIFIC REPORTS, 15(1)