Case-Based Sample Generation Using Multi-Armed Bandits

Cited by: 0
Authors
Korger, Andreas [1]
Baumeister, Joachim [1]
Affiliation
[1] Univ Wurzburg, D-97074 Wurzburg, Germany
Source
CASE-BASED REASONING RESEARCH AND DEVELOPMENT, ICCBR 2023 | 2023 / Vol. 14141
Keywords
Case-Based Reasoning; Multi-Armed Bandits; Agent-Based Modeling; Semantics; Knowledge Management; Sampling;
DOI
10.1007/978-3-031-40177-0_8
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A central problem in knowledge-based tasks is to provide a collection of reusable knowledge samples extracted from a textual corpus. Such corpora are often structured into different documents or topics. Each sample must be checked for usability and adapted by a domain expert, which requires a certain processing time per sample. The goal is to achieve optimal retrieval and adaptation success within the domain expert's time budget. In this work, we formulate this task as a constrained multi-armed bandit model and combine it with the model of a configurable, data-driven case-based learning agent. A case study evaluates the theoretical considerations in a scenario of regulatory knowledge acquisition. To this end, a data set is constructed from a corpus of nuclear safety documents. We use the model to optimize the evaluation process of sample generation for adaptational knowledge. The corresponding knowledge graph was created in an information extraction step by automatically identifying semantic concepts and their relations.
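The abstract's formulation — topics as arms, expert processing time as a per-pull cost, and a fixed time budget — can be illustrated with a minimal budget-constrained UCB-style sketch. This is not the authors' implementation; the function name `budgeted_ucb`, the reward-per-cost index, and the arm representation as `(reward_fn, cost)` pairs are illustrative assumptions.

```python
import math
import random

def budgeted_ucb(arms, budget, seed=0):
    """Draw samples from topic 'arms' until the expert's time budget is spent.

    arms   : list of (reward_fn, cost) pairs; reward_fn(rng) returns the
             adaptation success of one sample, cost is the expert time it takes.
    budget : total expert time available.
    Uses a UCB1-style index scaled by cost (an illustrative heuristic).
    """
    rng = random.Random(seed)
    pulls = [0] * len(arms)            # how often each topic was sampled
    mean = [0.0] * len(arms)           # running mean reward per topic
    spent, total_reward, t = 0.0, 0.0, 0

    while True:
        # Only topics whose sampling cost still fits the remaining budget.
        affordable = [i for i, (_, c) in enumerate(arms) if spent + c <= budget]
        if not affordable:
            break
        t += 1

        def index(i):
            if pulls[i] == 0:
                return float("inf")    # sample every affordable topic once
            bonus = math.sqrt(2 * math.log(t) / pulls[i])
            return (mean[i] + bonus) / arms[i][1]   # optimism per unit cost

        i = max(affordable, key=index)
        reward = arms[i][0](rng)
        pulls[i] += 1
        mean[i] += (reward - mean[i]) / pulls[i]    # incremental mean update
        spent += arms[i][1]
        total_reward += reward
    return total_reward, pulls
```

With two equal-cost topics whose samples succeed with probability 0.9 and 0.1, the policy concentrates its budget on the more rewarding topic while still exploring the other.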
Pages: 118-133
Number of pages: 16