Closed-Loop Control of Direct Ink Writing via Reinforcement Learning

Cited by: 19
Authors
Piovarci, Michal [1 ]
Foshey, Michael [2 ]
Xu, Jie [2 ]
Erps, Timothy [2 ]
Babaei, Vahid [3 ]
Didyk, Piotr [4 ]
Rusinkiewicz, Szymon [5 ]
Matusik, Wojciech [2 ]
Bickel, Bernd [1 ]
Affiliations
[1] IST Austria, Klosterneuburg, Austria
[2] MIT CSAIL, Cambridge, MA USA
[3] MPI Informat, Saarbrucken, Germany
[4] Univ Svizzera Italiana, Lugano, Switzerland
[5] Princeton Univ, Princeton, NJ 08544 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2022, Vol. 41, No. 4
Funding
Academy of Finland; Austrian Science Fund (FWF);
Keywords
closed-loop control; reinforcement learning; additive manufacturing; POWDER BED FUSION; OPTIMIZATION; PARAMETERS; SIMULATION;
DOI
10.1145/3528223.3530144
CLC Number
TP31 [Computer Software];
Discipline Codes
081202; 0835;
Abstract
Enabling additive manufacturing to employ a wide range of novel, functional materials would be a major boost to the technology. However, making such materials printable requires painstaking trial and error by an expert operator, because they typically exhibit peculiar rheological or hysteresis properties. Even when suitable process parameters are found, print-to-print consistency is not guaranteed, owing to material differences between batches. These challenges make closed-loop feedback, in which process parameters are adjusted on the fly, an attractive option. Designing an efficient controller, however, raises several difficulties: the deposition parameters are complex and highly coupled, artifacts emerge only over long time horizons, simulating the deposition is computationally costly, and learning directly on hardware is intractable. In this work, we demonstrate the feasibility of learning a closed-loop control policy for additive manufacturing using reinforcement learning. We show that an approximate but efficient numerical simulation is sufficient, provided it captures the behavioral patterns of deposition that transfer to real-world experience. Combined with reinforcement learning, our model discovers control policies that outperform baseline controllers, and the recovered policies exhibit a minimal sim-to-real gap. We showcase this by deploying our control policy in situ on a single-layer printer with low- and high-viscosity materials.
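Illustrative sketch
The abstract compresses the approach into three ingredients: an approximate deposition simulator, a reward that penalizes deviation from the target layer, and a policy trained with reinforcement learning that maps observed deposition state to on-the-fly process-parameter adjustments. The minimal sketch below (Python, NumPy only) illustrates that recipe; it is not the authors' implementation. ToyDepositionEnv, its windowed error observation, the first-order flow lag, the reward, and the REINFORCE update are all simplified assumptions standing in for the paper's numerical simulation and learned policy.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.1  # fixed exploration noise of the Gaussian policy


class ToyDepositionEnv:
    """Crude 1-D single-layer deposition model (an assumption for this sketch).

    State: height error in a small window trailing the nozzle.
    Action: a flow-rate set-point in [0, 1]. The actual flow follows the
    set-point with a first-order lag, loosely mimicking the sluggish response
    of viscous inks, and deposited material spreads to neighboring cells.
    """

    def __init__(self, n_cells=64, window=5):
        self.n_cells, self.window = n_cells, window

    def reset(self):
        self.height = np.zeros(self.n_cells)
        self.target = np.ones(self.n_cells)  # desired uniform layer height
        self.flow = 0.5
        self.pos = 0
        return self._obs()

    def _obs(self):
        lo = max(0, self.pos - self.window)
        hi = min(self.pos + 1, self.n_cells)
        err = self.target[lo:hi] - self.height[lo:hi]
        pad = np.zeros(self.window + 1)
        pad[-err.size:] = err  # left-pad early steps with zeros
        return pad

    def step(self, action):
        # Flow responds to the commanded set-point with lag.
        self.flow += 0.3 * (float(np.clip(action, 0.0, 1.0)) - self.flow)
        # Deposit with lateral spread plus small process noise.
        for off, w in ((-1, 0.2), (0, 0.6), (1, 0.2)):
            j = self.pos + off
            if 0 <= j < self.n_cells:
                self.height[j] += w * self.flow + 0.01 * rng.normal()
        reward = -abs(self.target[self.pos] - self.height[self.pos])
        self.pos += 1
        return self._obs(), reward, self.pos >= self.n_cells


def run_episode(env, theta):
    """Roll out a linear-Gaussian policy and record the trajectory."""
    obs, done, traj = env.reset(), False, []
    while not done:
        mean = float(theta @ obs)
        action = mean + SIGMA * rng.normal()
        obs_next, reward, done = env.step(action)
        traj.append((obs, action, mean, reward))
        obs = obs_next
    return traj


env = ToyDepositionEnv()
theta = np.zeros(env.window + 1)  # linear policy weights
baseline = 0.0                    # running return baseline (variance reduction)
for it in range(500):
    traj = run_episode(env, theta)
    ret = sum(r for *_, r in traj)
    # REINFORCE: grad log N(a; theta.obs, SIGMA^2) = (a - mean) / SIGMA^2 * obs
    grad = sum(((a - m) / SIGMA**2) * o for o, a, m, _ in traj)
    theta += 1e-3 * (ret - baseline) * grad
    baseline += 0.05 * (ret - baseline)
    if it % 100 == 0:
        print(f"iter {it:3d}  episode return {ret:7.2f}")
```

The first-order lag on the flow rate is where the toy model echoes the paper's motivation: a fixed open-loop set-point cannot compensate for it, whereas the closed-loop policy reads the trailing height error and adjusts the set-point on the fly. The real system replaces this toy environment with a physics-based deposition simulation efficient enough for RL training yet faithful enough to keep the sim-to-real gap small.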
Pages: 10