Normative Rule Extraction from Implicit Learning into Explicit Representation

Cited by: 0
Authors
Kadir, Mohd Rashdan Abdul [1 ]
Selamat, Ali [1 ,2 ,3 ]
Krejcar, Ondrej [3 ]
Affiliations
[1] Univ Teknol Malaysia, Fac Engn, Sch Comp, Johor Baharu 81310, Johor, Malaysia
[2] Univ Teknol Malaysia, Malaysia Japan Int Inst Technol MJIIT, Jalan Sultan Yahya Petra, Kuala Lumpur 54100, Malaysia
[3] Univ Hradec Kralove, Fac Informat & Management, Rokitanskeho 62, Hradec Kralove 50003, Czech Republic
Source
KNOWLEDGE INNOVATION THROUGH INTELLIGENT SOFTWARE METHODOLOGIES, TOOLS AND TECHNIQUES (SOMET_20) | 2020 / Vol. 327
Keywords
Multi-agent; Norm Synthesis; Norm Detection; Norm Representation; Deontic; Reinforcement Learning; Q-learning; Sequential Decision Making; OpenAI Gym; Rule Extraction; State Abstraction; NORMS;
DOI
10.3233/FAIA200555
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Normative multi-agent research offers an alternative viewpoint on the design of adaptive autonomous agent architectures. Norms specify standards of behavior, such as which actions or states should be achieved or avoided. Norm synthesis is the process of generating useful normative rules. This study proposes a model that extracts normative rules from implicit learning, namely the Q-learning algorithm, into an explicit norm representation, implementing Dynamic Deontics and a Hierarchical Knowledge Base (HKB) to synthesize normative rules in the form of weighted state-action pairs with deontic modality. OpenAI Gym is used to simulate the agent environment. The proposed model generates both obligative and prohibitive norms and can deliberate on and execute them. Results show that the generated norms are best used as prior knowledge to guide agent behavior, and that they perform poorly unless complemented by another agent coordination mechanism. Performance increases when both obligation and prohibition norms are used, and in general norms speed up reaching the optimal policy.
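
As a rough illustration of the pipeline the abstract describes, the following Python sketch trains a tabular Q-learner on a Gym-style environment and then reads obligation and prohibition norms off the learned Q-table as weighted state-action pairs with a deontic label. The environment (FrozenLake-v1 via Gymnasium), the extraction thresholds, and the tuple-based rule format are illustrative assumptions; the paper's actual Dynamic Deontics and HKB mechanisms are not reproduced here.

# Minimal sketch: Q-learning (implicit) -> deontic rules (explicit).
# Assumes Gymnasium; thresholds and rule format are illustrative only.
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
n_states, n_actions = env.observation_space.n, env.action_space.n
Q = np.zeros((n_states, n_actions))

alpha, gamma, epsilon = 0.1, 0.95, 0.1
for episode in range(5000):
    state, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection (the implicit learning phase)
        if np.random.rand() < epsilon:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))
        next_state, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        # standard Q-learning update
        Q[state, action] += alpha * (
            reward + gamma * np.max(Q[next_state]) - Q[state, action]
        )
        state = next_state

# Explicit norm synthesis: convert Q-values into weighted deontic rules.
# Near-best pairs become obligations; strongly dominated pairs become
# prohibitions. The 0.9 / 0.1 cutoffs below are assumptions.
norms = []
for s in range(n_states):
    best = np.max(Q[s])
    for a in range(n_actions):
        if Q[s, a] > 0 and Q[s, a] >= 0.9 * best:
            norms.append(("OBLIGED", s, a, float(Q[s, a])))
        elif best > 0 and Q[s, a] <= 0.1 * best:
            norms.append(("PROHIBITED", s, a, float(Q[s, a])))

for modality, s, a, w in norms[:10]:
    print(f"{modality}(state={s}, action={a}) weight={w:.3f}")

Such rules could then seed a fresh agent as prior knowledge, consistent with the abstract's finding that the norms help most when used to initialize, rather than replace, a coordination mechanism.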
Pages: 88-101
Number of pages: 14