Reports, Observations, and Belief Change

Cited by: 0
Authors
Hunter, Aaron [1]
Affiliations
[1] British Columbia Inst Technol, Burnaby, BC, Canada
Source
ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT II | 2024 / Vol. 14472
Keywords
Trust; Belief Revision; Knowledge Representation; Logic
DOI
10.1007/978-981-99-8391-9_5
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
We consider belief change in a context where information comes from reports, and the reporting agents may not be honest. To capture this process, we introduce an extended class of epistemic states that includes a history of past reports received. We present a set of postulates describing how new reports should be incorporated. The postulates characterize a new kind of belief change operator, where reported information can either be believed or ignored. We then provide a representation result for these postulates, which characterizes report revision in terms of an underlying set of agents that are perceived to be honest. We then extend our framework by adding observations. In this framework, observations are understood to be highly reliable. As such, when an observation conflicts with a report, we must question the honesty of the agent that provided the report. We introduce a flexible framework in which we can set a threshold for the number of false reports an agent can send before they are labelled dishonest. Fundamental results are provided, along with a discussion of key open problems in trust and belief revision.
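The threshold mechanism summarized in the abstract can be sketched informally. The following is a minimal illustration, not the paper's formal operator: all class and method names are hypothetical, reports from perceived-honest agents are simply added to a belief set, observations are treated as fully reliable, and "contradiction" is simplified to reporting the syntactic negation `~p` of an observed proposition `p`.

```python
# Hypothetical sketch of threshold-based trust in report revision.
# Assumptions (not from the paper): beliefs are a flat set of literals,
# and a report contradicts an observation only if it is its negation "~p".

class ReportRevisionState:
    def __init__(self, threshold):
        self.threshold = threshold   # false reports tolerated per agent
        self.false_counts = {}       # agent -> number of contradicted reports
        self.beliefs = set()         # currently believed propositions
        self.history = []            # past reports: (agent, proposition)

    def honest(self, agent):
        # An agent is perceived honest until it exceeds the threshold.
        return self.false_counts.get(agent, 0) <= self.threshold

    def receive_report(self, agent, proposition):
        # Every report enters the history; only reports from perceived
        # honest agents are incorporated into the beliefs.
        self.history.append((agent, proposition))
        if self.honest(agent):
            self.beliefs.add(proposition)

    def observe(self, proposition):
        # Observations are highly reliable: believe them outright, and
        # charge a false report to each agent whose past report they
        # contradict (simplified here to the syntactic negation).
        if proposition.startswith("~"):
            negation = proposition[1:]
        else:
            negation = "~" + proposition
        self.beliefs.discard(negation)
        self.beliefs.add(proposition)
        for agent, reported in self.history:
            if reported == negation:
                self.false_counts[agent] = self.false_counts.get(agent, 0) + 1
```

With `threshold=0`, a single report contradicted by an observation suffices to label the reporting agent dishonest; a larger threshold tolerates occasional false reports before future reports are ignored.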
Pages: 54-65
Page count: 12