Aero-engine life limit parts replacement policy optimization: Reinforcement learning method

Cited by: 0
Authors
Lin, Lin [1 ]
Liu, Jie [1 ]
Liu, Jinshan [2 ]
Zhong, Shisheng [1 ]
Guo, Feng [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Mechatron Engn, Harbin, Peoples R China
[2] China Aerosp Sci & Technol Corp, Beijing Spacecrafts, Beijing, Peoples R China
Source
2020 ASIA-PACIFIC INTERNATIONAL SYMPOSIUM ON ADVANCED RELIABILITY AND MAINTENANCE MODELING (APARM) | 2020
Funding
National Natural Science Foundation of China
Keywords
aero-engine; life limit part; replacement policy; reinforcement learning; Q-learning algorithm;
DOI
Not available
CLC Number
TP301 [Theory, Methods]
Discipline Code
081202
Abstract
An optimization method for the aero-engine life limit parts (LLPs) replacement policy is proposed based on reinforcement learning. In the proposed method, real-life LLP replacement rules are adopted as constraints, and minimizing the long-term discounted LLP replacement cost is taken as the optimization objective. Within the reinforcement learning framework, the Q-learning algorithm is used to optimize the LLP replacement policy. Compared with traditional methods, the proposed method has a simpler structure and achieves better optimization results. To validate the method, the LLP list of a civil turbofan aero-engine is used as sample data, and an existing particle swarm optimization algorithm serves as the comparative method. The comparison results show that the proposed LLP replacement policy optimization method offers clear advantages and can provide decision-making support for aero-engine LLP replacement.
Pages: 6
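
The abstract describes a Q-learning scheme that minimizes the long-term discounted LLP replacement cost subject to real-life replacement rules. Below is a minimal Python sketch of such a tabular Q-learning loop for a deliberately simplified single-part model; the state definition, the constants LIFE_LIMIT, REPLACE_COST, EARLY_SCRAP_PENALTY, and the forced-replacement rule are illustrative assumptions, not the paper's actual formulation.

"""Illustrative sketch only: tabular Q-learning for a simplified LLP
replacement problem. All model details here are hypothetical placeholders."""
import random
from collections import defaultdict

# Hypothetical setting: one LLP whose remaining life (in discrete usage
# intervals) is the state; the action is replace (1) or keep (0).
LIFE_LIMIT = 5              # assumed number of intervals before mandatory replacement
REPLACE_COST = 10.0         # assumed cost of installing a new part
EARLY_SCRAP_PENALTY = 2.0   # assumed penalty per unused interval scrapped
GAMMA = 0.9                 # discount factor for the long-term cost
ALPHA = 0.1                 # learning rate
EPSILON = 0.2               # exploration rate

Q = defaultdict(float)      # Q[(remaining_life, action)] -> estimated discounted cost

def step(remaining_life, action):
    """Return (cost, next_state). Replacement restores full life; the rule
    that a part may never exceed its limit is enforced by forcing
    replacement when no life remains."""
    if action == 1 or remaining_life == 0:
        cost = REPLACE_COST + EARLY_SCRAP_PENALTY * remaining_life
        next_life = LIFE_LIMIT - 1  # new part consumes one interval of life
    else:
        cost = 0.0
        next_life = remaining_life - 1
    return cost, next_life

def choose_action(s):
    # Epsilon-greedy exploration; greedy means choosing the cheaper action.
    if random.random() < EPSILON:
        return random.choice([0, 1])
    return min([0, 1], key=lambda a: Q[(s, a)])

for episode in range(5000):
    s = LIFE_LIMIT - 1
    for _ in range(50):  # finite horizon per episode
        a = choose_action(s)
        cost, s_next = step(s, a)
        best_next = min(Q[(s_next, 0)], Q[(s_next, 1)])
        # Q-learning update toward the discounted long-term replacement cost
        Q[(s, a)] += ALPHA * (cost + GAMMA * best_next - Q[(s, a)])
        s = s_next

# Greedy policy after learning (at remaining life 0 replacement is forced either way)
for life in range(LIFE_LIMIT):
    action = min([0, 1], key=lambda a: Q[(life, a)])
    print(f"remaining life {life}: {'replace' if action == 1 else 'keep'}")

In this toy model the learned greedy policy keeps the part while life remains, since replacement incurs both the part cost and the early-scrap penalty; the paper's actual state space, cost model, and constraints cover a full LLP list rather than a single part.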