Trajectorial dissipation and gradient flow for the relative entropy in Markov chains

Cited by: 0
Authors
Karatzas, Ioannis [1 ]
Maas, Jan [2 ]
Schachermayer, Walter [3 ]
Affiliations
[1] Columbia Univ, Dept Math, 2990 Broadway, New York, NY 10027 USA
[2] IST Austria, Campus 1, A-3400 Klosterneuburg, Austria
[3] Univ Vienna, Fac Math, Oskar Morgenstern Pl 1, A-1090 Vienna, Austria
Funding
Austrian Science Fund; US National Science Foundation; European Research Council
Keywords
Logarithmic Sobolev inequalities; Equations
DOI
Not available
CLC number (Chinese Library Classification)
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
We study the temporal dissipation of variance and relative entropy for ergodic Markov chains in continuous time, and compute explicitly the corresponding dissipation rates. These are identified, as is well known, in the case of the variance in terms of an appropriate Hilbertian norm; and in the case of the relative entropy, in terms of a Dirichlet form which morphs into a version of the familiar Fisher information under conditions of detailed balance. Here we obtain trajectorial versions of these results, valid along almost every path of the random motion and most transparent in the backwards direction of time. Martingale arguments and time reversal play crucial roles, as in the recent work of Karatzas, Schachermayer and Tschiderer for conservative diffusions. Extensions are developed to general "convex divergences" and to countable state-spaces. The steepest descent and gradient flow properties for the variance, the relative entropy, and appropriate generalizations, are studied along with their respective geometries under conditions of detailed balance, leading to a very direct proof for the HWI inequality of Otto and Villani in the present context.
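For orientation only, and not reproduced from the paper (whose statements are trajectorial and more general), the classical ensemble dissipation identities that the abstract alludes to can be sketched as follows, in notation chosen here: L denotes the generator of an ergodic continuous-time Markov chain with transition rates q(x, y) and invariant probability pi, mu_t is the law of the chain at time t, rho_t = d mu_t / d pi its likelihood ratio, and E(f, g) := -<f, Lg>_{L^2(pi)} the associated Dirichlet form.

% Sketch under the assumptions stated above; notation (L, q, pi, mu_t, rho_t, E) is ours,
% not necessarily the paper's.
\[
\frac{\mathrm{d}}{\mathrm{d}t}\,\operatorname{Var}_{\pi}(\rho_t) \;=\; -\,2\,\mathcal{E}(\rho_t,\rho_t),
\qquad
\frac{\mathrm{d}}{\mathrm{d}t}\,H\bigl(\mu_t \,\big|\, \pi\bigr) \;=\; -\,\mathcal{E}\bigl(\rho_t,\log\rho_t\bigr).
\]
% Under detailed balance, pi(x) q(x,y) = pi(y) q(y,x), the Dirichlet form takes the symmetric shape
\[
\mathcal{E}(f,g) \;=\; \tfrac{1}{2}\sum_{x,y}\pi(x)\,q(x,y)\,\bigl(f(y)-f(x)\bigr)\bigl(g(y)-g(x)\bigr),
\]
% so that E(rho_t, log rho_t) becomes a discrete analogue of the Fisher information.

Per the abstract, the paper's contribution is to upgrade such ensemble identities to statements valid along almost every trajectory of the chain, via martingale arguments and time reversal.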
Pages: 481-536
Number of pages: 56