Online belief tracking using regression for contingent planning

Cited by: 12
Authors
Brafman, Ronen I. [1]
Shani, Guy [1]
Affiliations
[1] Ben Gurion Univ Negev, Beer Sheva, Israel
Keywords
Contingent planning; Partial observability; Non-deterministic planning; Regression; Belief
DOI
10.1016/j.artint.2016.08.005
CLC classification
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In online contingent planning under partial observability an agent decides at each time step on the next action to execute, given its initial knowledge of the world, the actions executed so far, and the observations made. Such agents require some representation of their belief state to determine which actions are valid, or whether the goal has been achieved. Efficient maintenance of a belief state is, given its potential exponential size, a key research challenge in this area. In this paper we develop the theory of regression as a useful tool for belief-state maintenance. We provide a formal description of regression, discussing various alternatives and optimization techniques, and analyze its space and time complexity. In particular, we show that, with some care, the regressed formula will contain only the variables relevant to the current query, rather than all variables in the problem description. Consequently, under suitable assumptions, the complexity of regression queries is at most exponential in their contextual width. This parameter is always upper bounded by Bonet and Geffner's width parameter, introduced in their state-of-the-art factored belief tracking (FBT) method. In addition, we show how to obtain a poly-sized circuit representation for the online regression formula even with non-deterministic actions. We provide an empirical comparison of regression with FBT-based belief maintenance, showing the power of regression for online belief tracking. We also suggest caching techniques for regression, and demonstrate their value in reducing runtime on current benchmarks. (C) 2016 Elsevier B.V. All rights reserved.
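To illustrate the query-regression idea the abstract describes, below is a minimal sketch, assuming STRIPS-style deterministic actions and a query given as a conjunction of literals. The domain, function names, and initial-belief representation are illustrative assumptions, not the paper's implementation; the paper's full method additionally handles conditional effects, non-deterministic actions, and observations.

```python
# Minimal sketch of answering an online query by regressing it backwards
# through the executed actions down to the initial belief.
# Assumption (not from the paper): actions are STRIPS-style and deterministic.

def regress_conjunction(literals, action):
    """Regress a conjunction of literals through a STRIPS action.

    action = (precond, adds, dels): sets of proposition names.
    Returns None if the action makes the conjunction unachievable,
    otherwise the conjunction that must hold *before* the action.
    """
    precond, adds, dels = action
    result = set()
    for var, val in literals:
        if val and var in adds:
            continue            # made true by the action
        if (not val) and var in dels:
            continue            # made false by the action
        if val and var in dels:
            return None         # action forces it false afterwards
        if (not val) and var in adds:
            return None         # action forces it true afterwards
        result.add((var, val))  # unaffected: must already hold before
    result.update((p, True) for p in precond)  # preconditions must hold
    return frozenset(result)

def holds_initially(literals, initial_models):
    """Entailment check against an initial belief given as a set of models.

    Each model assigns only the *relevant* propositions, mirroring the
    observation that regression touches only variables relevant to the query.
    """
    return all(all(m.get(v, False) == val for v, val in literals)
               for m in initial_models)

def query(literals, executed_actions, initial_models):
    """Does this conjunction hold in the current belief state?"""
    current = frozenset(literals)
    for action in reversed(executed_actions):
        current = regress_conjunction(current, action)
        if current is None:
            return False
    return holds_initially(current, initial_models)

# Tiny hypothetical example: a door that may start open or closed;
# 'open_door' has no precondition and makes 'door_open' true.
if __name__ == "__main__":
    open_door = (set(), {"door_open"}, set())               # (precond, adds, dels)
    initial = [{"door_open": True}, {"door_open": False}]   # uncertain start
    print(query([("door_open", True)], [], initial))          # False: unknown
    print(query([("door_open", True)], [open_door], initial)) # True
```

In this toy sketch the regressed query mentions only propositions touched by the query and the executed actions, which is the intuition behind the contextual-width bound; the paper develops this formally and extends it to non-deterministic actions via a poly-sized circuit representation.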
Pages: 131-152
Page count: 22