A novel reinforcement learning-based method for structure optimization

Cited: 0
Authors
Mei, Zijian [1 ,2 ]
Yang, Zhouwang [3 ,4 ]
Chen, Jingrun [2 ,5 ]
Affiliations
[1] Univ Sci & Technol China, Sch Artificial Intelligence & Data Sci, Suzhou, Peoples R China
[2] Univ Sci & Technol China, Suzhou Inst Adv Res, Suzhou, Peoples R China
[3] Univ Sci & Technol China, Sch Math Sci, Hefei, Peoples R China
[4] Univ Sci & Technol China, Sch Artificial Intelligence & Data Sci, Hefei, Peoples R China
[5] Univ Sci & Technol China, Sch Math Sci, Suzhou, Peoples R China
Funding
National Key Research and Development Program of China;
关键词
Structure optimization; reinforcement learning; Monte Carlo tree search; deep learning; TOPOLOGY OPTIMIZATION; DESIGN; SHAPE;
D O I
10.1080/0305215X.2024.2411412
Chinese Library Classification
T [Industrial Technology];
Discipline code
08;
Abstract
With the rapid development of deep learning, Reinforcement Learning (RL) has attracted considerable attention in structural optimization owing to its powerful exploration mechanism. However, its widespread application in this field is limited by the excessive number of iterations required to converge and the resulting computational cost. To address these challenges, this article presents a novel RL framework for structural optimization, called LMPOM, which combines Monte Carlo tree search with the proximal policy optimization method. The key contributions of LMPOM are: (1) an enhanced Monte Carlo tree search strategy for partitioning the hybrid design space; (2) a strategy for adaptively updating surrogate models to reduce simulation costs; and (3) a novel termination condition for RL algorithms. On three benchmark problems, LMPOM consistently requires fewer iterations and achieves better optimization results than previous RL algorithms.
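The abstract names Monte Carlo tree search as the exploration component of LMPOM but gives no implementation details (the full method is behind the DOI above). As a generic illustration of the MCTS selection rule only, here is a minimal UCT (Upper Confidence Bound for Trees) sketch; the function names and exploration constant are illustrative assumptions, not taken from the paper:

```python
import math

def uct_score(child_value, child_visits, parent_visits, c=1.41):
    """UCT balances exploitation (mean observed value) against
    exploration (a visit-count bonus) when ranking children."""
    if child_visits == 0:
        return float("inf")  # unvisited children are expanded first
    exploit = child_value / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """Return the index of the child with the highest UCT score.
    `children` is a list of (total_value, visit_count) pairs."""
    scores = [uct_score(v, n, parent_visits) for v, n in children]
    return scores.index(max(scores))
```

In a structure-optimization setting, each child would correspond to a region of the design space, and the value estimates could come from a surrogate model rather than a full simulation, in line with contribution (2) of the abstract.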
Pages: 20
Related papers (50 records)
  • [41] Deep reinforcement learning-based sampling method for structural reliability assessment
    Xiang, Zhengliang
    Bao, Yuequan
    Tang, Zhiyi
    Li, Hui
    RELIABILITY ENGINEERING & SYSTEM SAFETY, 2020, 199
  • [42] A Reinforcement Learning-Based ELF Adversarial Malicious Sample Generation Method
    Xue, Mingfu
    Fu, Jinlong
    Li, Zhiyuan
    Ni, Shifeng
    Wu, Heyi
    Zhang, Leo Yu
    Zhang, Yushu
    Liu, Weiqiang
    IEEE JOURNAL ON EMERGING AND SELECTED TOPICS IN CIRCUITS AND SYSTEMS, 2024, 14 (04) : 743 - 757
  • [43] Reinforcement Learning-Based Reactive Obstacle Avoidance Method for Redundant Manipulators
    Shen, Yue
    Jia, Qingxuan
    Huang, Zeyuan
    Wang, Ruiquan
    Fei, Junting
    Chen, Gang
    ENTROPY, 2022, 24 (02)
  • [44] Deep reinforcement learning-based reactive trajectory planning method for UAVs
    Cao, Lijia
    Wang, Lin
    Liu, Yang
    Xu, Weihong
    Geng, Chuang
    PROCEEDINGS OF THE INSTITUTION OF MECHANICAL ENGINEERS PART G-JOURNAL OF AEROSPACE ENGINEERING, 2024, 238 (10) : 1018 - 1037
  • [45] Reinforcement learning-based method for type B aortic dissection localization
    Zeng, An
    Lin, Xianyang
    Zhao, Jingliang
    Pan, Dan
    Yang, Baoyao
    Liu, Xin
    Shengwu Yixue Gongchengxue Zazhi/Journal of Biomedical Engineering, 2024, 41 (05): : 878 - 885
  • [46] A reinforcement learning-based multimodal scenario hazardous behaviour recognition method
    Sun, Di
    Li, Yanjing
    Han, Yuexia
    International Journal of Computational Intelligence Studies, 2023, 12 (1-2) : 52 - 71
  • [47] A novel method for designing S-box based on chaotic map and Teaching–Learning-Based Optimization
    Tarek Farah
    Rhouma Rhouma
    Safya Belghith
    Nonlinear Dynamics, 2017, 88 : 1059 - 1074
  • [48] Reinforcement Learning-Based Hybrid Multi-Objective Optimization Algorithm Design
    Palm, Herbert
    Arndt, Lorin
    INFORMATION, 2023, 14 (05)
  • [49] Deep Reinforcement Learning-Based Routing Optimization Algorithm for Edge Data Center
    Zhao, Jixin
    Zhang, Shukui
    Zhang, Yang
    Zhang, Li
    Long, Hao
    26TH IEEE SYMPOSIUM ON COMPUTERS AND COMMUNICATIONS (IEEE ISCC 2021), 2021,
  • [50] Deep Reinforcement Learning-Based Grant-Free NOMA Optimization for mURLLC
    Liu, Yan
    Deng, Yansha
    Zhou, Hui
    Elkashlan, Maged
    Nallanathan, Arumugam
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2023, 71 (03) : 1475 - 1490