Reinforcement Learning for guiding optimization processes in optical design

Cited by: 1
Authors
Fu, Cailing [1]
Stollenwerk, Jochen [1,2]
Holly, Carlo [1,2]
Affiliations
[1] RWTH Aachen University, Chair for Technology of Optical Systems (TOS), Steinbachstr. 15, 52074 Aachen, Germany
[2] Fraunhofer Institute for Laser Technology ILT, Steinbachstr. 15, 52074 Aachen, Germany
Source
APPLICATIONS OF MACHINE LEARNING 2022 | 2022 / Vol. 12227
Keywords
Optical design; reinforcement learning; machine learning; optimization; LENS DESIGN;
DOI
10.1117/12.2632425
CLC Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Nowadays, sophisticated ray-tracing software packages, including local and global optimization algorithms, are used for the design of optical systems. Nevertheless, the design process is still time-consuming and involves many manual steps, and it can take days or even weeks until an optical design is finished. To address this shortcoming, artificial intelligence, especially reinforcement learning, is employed to support the optical designer. In this work, different use cases are presented in which reinforcement learning agents are trained to optimize a lens system. Besides the possibility of bending lenses to reduce spherical aberration, the movement of lenses to optimize the lens positions of a varifocal lens system is shown. Finally, the optimization of lens surface curvatures and of the distances between lenses is analyzed. For a predefined Cooke triplet, an agent can choose the curvatures of the different surfaces as optimization parameters; the chosen surfaces and the distances between the lenses are then optimized with a least-squares optimizer. It is shown that for a Cooke triplet, setting all surfaces as variables is a good choice for most systems if runtime is not an issue. Taking runtime into account, the number of selected variable surfaces decreases. For optical systems with a large number of degrees of freedom, an intelligent selection of optimization variables can probably be a powerful tool for an efficient and time-saving optimization.
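The variable-selection scheme the abstract describes (an agent picks which surfaces to free, a least-squares optimizer then refines them, and runtime is traded against image quality) can be sketched as a toy bandit experiment. Everything below is illustrative and not from the paper: the quadratic merit function and its "ideal" curvatures are invented stand-ins for a ray tracer, and a simple gradient-descent loop plays the role of the least-squares optimizer.

```python
import random

# Illustrative sketch only: the paper couples an RL agent to a ray tracer;
# here a quadratic toy merit function stands in for ray tracing, and the
# "ideal" curvatures below are hypothetical, not taken from the paper.
TARGET = [0.5, -0.2, 0.3, -0.4, 0.1, -0.3]   # six surfaces of a Cooke triplet

def merit(c):
    """Sum of squared residuals against the hypothetical ideal design."""
    return sum((ci - ti) ** 2 for ci, ti in zip(c, TARGET))

def least_squares_subset(variable_idx, steps=50, lr=0.5):
    """Gradient-descent least squares over only the selected surfaces;
    non-selected curvatures stay at the flat starting value 0."""
    c = [0.0] * len(TARGET)
    for _ in range(steps):
        for i in variable_idx:
            c[i] -= lr * 2.0 * (c[i] - TARGET[i])   # d(merit)/dc_i
    return c

# Epsilon-greedy bandit: the action is the *number* of surfaces set variable,
# trading final merit against a runtime penalty per extra variable.
random.seed(0)
q = [0.0] * 7        # value estimate per action (0..6 variable surfaces)
counts = [0] * 7

for episode in range(300):
    if random.random() < 0.1:
        a = random.randrange(7)                      # explore
    else:
        a = max(range(7), key=lambda k: q[k])        # exploit
    chosen = random.sample(range(6), a)              # which surfaces to free
    reward = -merit(least_squares_subset(chosen)) - 0.001 * a
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]              # incremental mean update

best = max(range(7), key=lambda k: q[k])
print("preferred number of variable surfaces:", best)
```

With the small runtime penalty chosen here, the agent settles on freeing all six surfaces, matching the abstract's observation for the runtime-insensitive case; increasing the penalty coefficient (the `0.001` per variable) shifts the preference toward fewer variable surfaces.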
Pages: 6