Leveraging deep learning to control neural oscillators

Cited: 0
Authors
Timothy D. Matchen
Jeff Moehlis
Affiliations
[1] University of California, Department of Mechanical Engineering
[2] University of California, Department of Mechanical Engineering, Program in Dynamical Neuroscience
Source
Biological Cybernetics | 2021 / Volume 115
Keywords
Oscillators; Machine learning; Neurons; Clustering; Control; Dynamic programming;
DOI
Not available
Abstract
Modulation of the firing times of neural oscillators has long been an important control objective, with applications including Parkinson’s disease, Tourette’s syndrome, epilepsy, and learning. One common goal for such modulation is desynchronization, wherein two or more oscillators are stimulated to transition from firing in phase with each other to firing out of phase. The optimization of such stimuli has been well studied, but this typically relies on either a reduction of the dimensionality of the system or complete knowledge of the parameters and state of the system. This limits the applicability of results to real problems in neural control. Here, we present a trained artificial neural network capable of accurately estimating the effects of square-wave stimuli on neurons using minimal output information from the neuron. We then apply the results of this network to solve several related control problems in desynchronization, including desynchronizing pairs of neurons and achieving clustered subpopulations of neurons in the presence of coupling and noise.
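The abstract describes training a network to predict, from minimal output information, how a square-wave stimulus shifts a neuron's firing. As a hedged illustration of that idea (not the paper's actual model or architecture), the toy sketch below uses an assumed phase-reduced oscillator with a sinusoidal phase response curve, generates phase shifts produced by square-wave pulses of varying phase and amplitude, and fits a small one-hidden-layer network to predict the shift. All names and parameters here are illustrative assumptions.

```python
import numpy as np

# Toy sketch, not the paper's method: phase-reduced oscillator
#   dtheta/dt = omega + Z(theta) * u(t),  with assumed PRC Z(theta) = -sin(theta).
# A square-wave pulse of amplitude `amp` lasts `width` time units starting
# at phase `theta0`; the network learns the resulting net phase shift.

rng = np.random.default_rng(0)
omega, width, dt = 1.0, 0.5, 1e-3

def phase_shifts(theta0, amp):
    """Vectorized Euler integration of the pulse; returns the phase
    advance relative to an unstimulated oscillator."""
    theta = theta0.copy()
    for _ in range(int(width / dt)):
        theta += (omega - np.sin(theta) * amp) * dt
    return theta - (theta0 + omega * width)

# Training data: random stimulation phases and pulse amplitudes.
n = 2000
X = np.column_stack([rng.uniform(0, 2 * np.pi, n),
                     rng.uniform(-1, 1, n)])
y = phase_shifts(X[:, 0], X[:, 1])

# One-hidden-layer tanh network trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = (H @ W2 + b2).ravel()        # predicted phase shift
    err = pred - y
    gW2 = H.T @ err[:, None] / n; gb2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((pred - y) ** 2))
```

Once such a surrogate predicts stimulus effects, it can be queried inside an optimizer (the keywords mention dynamic programming) to choose pulse timings that drive paired neurons out of phase.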
Pages: 219–235
Page count: 16