Closed-Loop Control of Direct Ink Writing via Reinforcement Learning

Cited by: 19
Authors
Piovarci, Michal [1 ]
Foshey, Michael [2 ]
Xu, Jie [2 ]
Erps, Timothy [2]
Babaei, Vahid [3 ]
Didyk, Piotr [4 ]
Rusinkiewicz, Szymon [5 ]
Matusik, Wojciech [2 ]
Bickel, Bernd [1 ]
Affiliations
[1] IST Austria, Klosterneuburg, Austria
[2] MIT CSAIL, Cambridge, MA USA
[3] MPI Informat, Saarbrucken, Germany
[4] Univ Svizzera Italiana, Lugano, Switzerland
[5] Princeton Univ, Princeton, NJ 08544 USA
Source
ACM TRANSACTIONS ON GRAPHICS | 2022, Vol. 41, No. 4
Funding
Austrian Science Fund; Academy of Finland;
Keywords
closed-loop control; reinforcement learning; additive manufacturing; POWDER BED FUSION; OPTIMIZATION; PARAMETERS; SIMULATION;
DOI
10.1145/3528223.3530144
Chinese Library Classification (CLC)
TP31 [Computer Software];
Subject Classification Code
081202; 0835;
Abstract
Enabling additive manufacturing to employ a wide range of novel, functional materials can be a major boost to this technology. However, making such materials printable requires painstaking trial-and-error by an expert operator, as they typically tend to exhibit peculiar rheological or hysteresis properties. Even in the case of successfully finding the process parameters, there is no guarantee of print-to-print consistency due to material differences between batches. These challenges make closed-loop feedback an attractive option where the process parameters are adjusted on-the-fly. There are several challenges for designing an efficient controller: the deposition parameters are complex and highly coupled, artifacts occur after long time horizons, simulating the deposition is computationally costly, and learning on hardware is intractable. In this work, we demonstrate the feasibility of learning a closed-loop control policy for additive manufacturing using reinforcement learning. We show that approximate, but efficient, numerical simulation is sufficient as long as it allows learning the behavioral patterns of deposition that translate to real-world experiences. In combination with reinforcement learning, our model can be used to discover control policies that outperform baseline controllers. Furthermore, the recovered policies have a minimal sim-to-real gap. We showcase this by applying our control policy in-vivo on a single-layer printer using low and high viscosity materials.
Pages: 10
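The abstract above describes learning a control policy against an approximate but efficient deposition simulation with reinforcement learning, then deploying it in closed loop on a printer. As a rough illustration of that idea only (not the authors' implementation, which relies on a much richer numerical simulation and policy architecture), the Python sketch below trains a linear-Gaussian policy with a plain REINFORCE policy gradient in a hypothetical one-dimensional surrogate environment; the names (ToyDepositionEnv, policy, run_episode), the dynamics, the reward, and the hyperparameters are all illustrative assumptions.

# Minimal illustrative sketch (NOT the paper's implementation): policy-gradient
# reinforcement learning of a closed-loop deposition controller in a toy simulator.
# Environment dynamics, reward, and policy form are assumptions made for brevity.
import numpy as np

rng = np.random.default_rng(0)


class ToyDepositionEnv:
    """1-D surrogate of direct ink writing: bead width follows the commanded
    flow rate with a first-order lag, mimicking material memory/hysteresis."""

    def __init__(self, horizon=50):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.width = 0.0                     # current bead width (arbitrary units)
        self.target = rng.uniform(0.8, 1.2)  # desired bead width for this print
        return np.array([self.width, self.target])

    def step(self, flow):
        # First-order response: the deposited width drifts toward the commanded flow.
        self.width += 0.3 * (flow - self.width)
        self.t += 1
        reward = -(self.width - self.target) ** 2  # penalize deviation from target
        return np.array([self.width, self.target]), reward, self.t >= self.horizon


def policy(obs, theta, std=0.1):
    """Linear-Gaussian policy: mean action is a linear function of the observation."""
    return float(obs @ theta[:2] + theta[2]), std


def run_episode(env, theta):
    obs, done, traj, ret = env.reset(), False, [], 0.0
    while not done:
        mean, std = policy(obs, theta)
        action = mean + std * rng.normal()   # exploration noise
        next_obs, reward, done = env.step(action)
        traj.append((obs, action, mean, std))
        ret += reward
        obs = next_obs
    return traj, ret


# REINFORCE with a moving-average baseline.
env, theta = ToyDepositionEnv(), np.zeros(3)
baseline, lr = 0.0, 0.02
for iteration in range(500):
    traj, ret = run_episode(env, theta)
    baseline = 0.9 * baseline + 0.1 * ret
    grad = np.zeros_like(theta)
    for obs, action, mean, std in traj:
        # Gradient of log N(action; mean, std^2) w.r.t. theta (mean is linear in theta).
        grad += (action - mean) / std**2 * np.array([obs[0], obs[1], 1.0])
    theta += lr * (ret - baseline) * grad / len(traj)
print("learned policy parameters:", theta)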