Unsupervised Model-Free Representation Learning

Cited by: 0
Author(s): Ryabko, Daniil [1]
Affiliation(s): [1] INRIA Lille, Lille, France
Source:
Keywords: PATTERN
DOI: not available
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
Numerous control and learning problems face the situation where sequences of high-dimensional, highly dependent data are available, but little or no feedback is provided to the learner. In such situations it may be useful to find a concise representation of the input signal that preserves as much of the relevant information as possible. In this work we are interested in problems where the relevant information lies in the time-series dependence. Thus, the problem can be formalized as follows. Given a series of observations X_0, …, X_n coming from a large (high-dimensional) space 𝒳, find a representation function f mapping 𝒳 to a finite space Y such that the series f(X_0), …, f(X_n) preserves as much as possible of the original time-series dependence in X_0, …, X_n. For stationary time series, the function f can be selected as the one maximizing the time-series information I_∞(f) = h_0(f(X)) − h_∞(f(X)), where h_0(f(X)) is the Shannon entropy of f(X_0) and h_∞(f(X)) is the entropy rate of the time series f(X_0), …, f(X_n), …. In this paper we study the functional I_∞(f) from the learning-theoretic point of view. Specifically, we provide some uniform approximation results and study the behaviour of I_∞(f) in the problem of optimal control.
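As an illustration (not taken from the paper), the criterion I_∞(f) = h_0(f(X)) − h_∞(f(X)) can be estimated empirically with plug-in block entropies, approximating the entropy rate h_∞ by the conditional entropy of order k, since for stationary series h_∞ = lim_k H(Y_k | Y_0, …, Y_{k−1}). The function and data below are hypothetical, chosen only to show the quantity being maximized:

```python
# Sketch of a plug-in estimator for the time-series information
# I_inf(f) = h_0(f(X)) - h_inf(f(X)), approximating the entropy rate
# h_inf by the order-k conditional entropy H(Y_k | Y_0..Y_{k-1}).
import random
from collections import Counter
from math import log2

def plugin_entropy(blocks):
    """Shannon entropy (bits) of the empirical distribution of `blocks`."""
    counts = Counter(blocks)
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

def time_series_information(xs, f, k=1):
    """Estimate I_inf(f) as h_0(Y) - [H(Y_0..Y_k) - H(Y_0..Y_{k-1})]."""
    ys = [f(x) for x in xs]
    blocks = lambda m: [tuple(ys[i:i + m]) for i in range(len(ys) - m + 1)]
    h0 = plugin_entropy(ys)
    h_rate = plugin_entropy(blocks(k + 1)) - plugin_entropy(blocks(k))
    return h0 - h_rate

# Toy example: observations in a "high-dimensional" space where only the
# first coordinate carries temporal dependence (deterministic alternation);
# the representation f that keeps it scores close to 1 bit.
random.seed(0)
xs = [(i % 2, random.random()) for i in range(10000)]
f = lambda x: x[0]
print(time_series_information(xs, f, k=1))  # close to 1 bit
```

A representation that discards the dependent coordinate (e.g. a constant f) would score near 0, which is the sense in which maximizing I_∞(f) selects dependence-preserving representations.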
Pages: 354–366 (13 pages)