Determining Maximal Entropy Functions for Objective Bayesian Inductive Logic

Cited: 6
Authors
Landes, Juergen [1 ]
Rad, Soroush Rafiee [2 ,3 ]
Williamson, Jon [4 ,5 ]
Affiliations
[1] Univ Milan, Dept Philosophy Piero Martinetti, Milan, Italy
[2] Dutch Inst Emergent Phenomena DIEP, Amsterdam, Netherlands
[3] Inst Log Language & Computat ILLC, Amsterdam, Netherlands
[4] Univ Kent, Philosophy Dept, Canterbury, Kent, England
[5] Univ Kent, Ctr Reasoning, Canterbury, Kent, England
Keywords
Inductive logic; Entropy; Maximum entropy principle; First order logic; Probability logic; Inference
DOI
10.1007/s10992-022-09680-6
Chinese Library Classification (CLC)
B81 [Logic];
Discipline Classification Codes
010104 ; 010105 ;
Abstract
According to the objective Bayesian approach to inductive logic, premisses inductively entail a conclusion just when every probability function with maximal entropy, from all those that satisfy the premisses, satisfies the conclusion. When premisses and conclusion are constraints on probabilities of sentences of a first-order predicate language, however, it is by no means obvious how to determine these maximal entropy functions. This paper makes progress on the problem in the following ways. Firstly, we introduce the concept of a limit in entropy and show that, if the set of probability functions satisfying the premisses contains a limit in entropy, then this limit point is unique and is the maximal entropy probability function. Next, we turn to the special case in which the premisses are categorical sentences of the logical language. We show that if the uniform probability function gives the premisses positive probability, then the maximal entropy function can be found by simply conditionalising this uniform prior on the premisses. We generalise our results to demonstrate agreement between the maximal entropy approach and Jeffrey conditionalisation in the case in which there is a single premiss that specifies the probability of a sentence of the language. We show that, after learning such a premiss, certain inferences are preserved, namely inferences to inductive tautologies. Finally, we consider potential pathologies of the approach: we explore the extent to which the maximal entropy approach is invariant under permutations of the constants of the language, and we discuss some cases in which there is no maximal entropy probability function.
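The conditionalisation results summarised above can be illustrated in a toy finite setting (a sketch only; the paper works with first-order predicate languages, whereas the four-element outcome space, the event, and the premiss probability below are hypothetical choices for illustration). Given a premiss fixing the probability of an event at q, the maximum entropy distribution satisfying it coincides with the Jeffrey update of the uniform prior, which the sketch checks against random feasible alternatives:

```python
import math
import random

def entropy(p):
    """Shannon entropy in nats, with the convention 0 * log 0 = 0."""
    return -sum(x * math.log(x) for x in p if x > 0)

# Toy outcome space of four states; the premiss fixes P({0, 1}) = 0.7.
q = 0.7
uniform = [0.25] * 4

# Jeffrey conditionalisation of the uniform prior on the premiss:
# rescale the mass inside the event to q and outside it to 1 - q.
jeffrey = [q * 0.25 / 0.5, q * 0.25 / 0.5,
           (1 - q) * 0.25 / 0.5, (1 - q) * 0.25 / 0.5]

# Any other distribution satisfying the premiss splits mass q over {0, 1}
# and mass 1 - q over {2, 3}; none should exceed the Jeffrey update's entropy.
random.seed(0)
for _ in range(1000):
    a, b = random.random(), random.random()
    other = [q * a, q * (1 - a), (1 - q) * b, (1 - q) * (1 - b)]
    assert entropy(other) <= entropy(jeffrey) + 1e-12
```

The check works because the feasible entropies decompose as H(q) + q·H(a) + (1 − q)·H(b), which is maximised at a = b = 1/2, i.e. exactly at the Jeffrey update of the uniform prior.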
Pages: 555-608
Number of pages: 54