Universal and Composite Hypothesis Testing via Mismatched Divergence

Cited by: 32
Authors
Unnikrishnan, Jayakrishnan [1,2]
Huang, Dayu [1,2]
Meyn, Sean P. [1,2]
Surana, Amit [3]
Veeravalli, Venugopal V. [1,2]
Affiliations
[1] Univ Illinois, Dept Elect & Comp Engn, Urbana, IL 61801 USA
[2] Univ Illinois, Coordinated Sci Lab, Urbana, IL 61801 USA
[3] United Technol Res Ctr, E Hartford, CT 06118 USA
Keywords
Generalized likelihood-ratio test; hypothesis testing; Kullback-Leibler (K-L) information; online detection; likelihood ratio; large deviations; asymptotics
DOI
10.1109/TIT.2011.2104670
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
For the universal hypothesis testing problem, where the goal is to decide between the known null hypothesis distribution and some other unknown distribution, Hoeffding proposed a universal test in the 1960s. Hoeffding's universal test statistic can be written in terms of the Kullback-Leibler (K-L) divergence between the empirical distribution of the observations and the null hypothesis distribution. In this paper, a modification of Hoeffding's test is considered, based on a relaxation of the K-L divergence referred to as the mismatched divergence. The resulting mismatched test is shown to be a generalized likelihood-ratio test (GLRT) for the case where the alternate distribution lies in a parametric family of distributions characterized by a finite-dimensional parameter, i.e., it is a solution to the corresponding composite hypothesis testing problem. For certain choices of the alternate distribution, it is shown that both the Hoeffding test and the mismatched test have the same asymptotic performance in terms of error exponents. A consequence of this result is that the GLRT is optimal in differentiating a particular distribution from others in an exponential family. It is also shown that the mismatched test has a significant advantage over the Hoeffding test in terms of finite-sample-size performance in applications involving large-alphabet distributions. This advantage is due to the difference in the asymptotic variances of the two test statistics under the null hypothesis.
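As a sketch of the quantities the abstract refers to (the notation Gamma^n, pi, f_theta, and eta below is assumed for illustration and is not taken from the record): Hoeffding's statistic is the K-L divergence between the empirical distribution and the null distribution, and the mismatched divergence can be read as the Donsker-Varadhan variational form of the K-L divergence (cf. [6]) with the supremum restricted to a finite-dimensional family of functions.

% Sketch under assumed notation: Hoeffding's universal test compares the
% K-L divergence between the empirical distribution \Gamma^n of n
% observations and the null distribution \pi against a threshold \eta.
\[
  D(\Gamma^n \,\|\, \pi) \;=\; \sum_{x} \Gamma^n(x)\,\log\frac{\Gamma^n(x)}{\pi(x)}
  \;\;\underset{H_0}{\overset{H_1}{\gtrless}}\;\; \eta .
\]
% Donsker-Varadhan variational representation of the K-L divergence
% (cf. [6]), with \mu(f) denoting the expectation of f under \mu:
\[
  D(\mu \,\|\, \pi) \;=\; \sup_{f}\;\bigl\{\, \mu(f) - \log \pi\bigl(e^{f}\bigr) \bigr\}.
\]
% Mismatched divergence: the same supremum restricted to a parametric,
% finite-dimensional family of functions \{ f_\theta \}; restricting the
% family is what links the resulting mismatched test to a GLRT, and it
% always lower-bounds the K-L divergence:
\[
  D^{\mathrm{MM}}(\mu \,\|\, \pi) \;=\; \sup_{\theta \in \Theta}\;
    \bigl\{\, \mu(f_\theta) - \log \pi\bigl(e^{f_\theta}\bigr) \bigr\}
  \;\le\; D(\mu \,\|\, \pi).
\]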
Pages: 1587-1603
Number of pages: 17
References
24 in total
[1] E. Abbe, in Proc. Information Theory and Applications Workshop, 2007, p. 284.
[2] P. Billingsley, Convergence of Probability Measures, 2nd ed., 1999.
[3] B. Clarke, Tech. Rep. 26, Univ. of Illinois at Urbana-Champaign, 1989.
[4] B. S. Clarke and A. R. Barron, "Information-theoretic asymptotics of Bayes methods," IEEE Transactions on Information Theory, vol. 36, no. 3, pp. 453-471, 1990.
[5] I. Csiszár, Foundations and Trends in Communications and Information Theory, vol. 1, 2004.
[6] M. D. Donsker and S. R. S. Varadhan, "Asymptotic evaluation of certain Markov process expectations for large time, I," Communications on Pure and Applied Mathematics, vol. 28, no. 1, pp. 1-47, 1975.
[7] P. Harremoes, "Testing goodness-of-fit via rate distortion," in Proc. 2009 IEEE Information Theory Workshop on Networking and Information Theory (ITW), 2009, pp. 17-21.
[8] W. Hoeffding, "Asymptotically optimal tests for multinomial distributions," Annals of Mathematical Statistics, vol. 36, no. 2, pp. 369-408, 1965.
[9] D. Huang and S. Meyn, "Feature extraction for universal hypothesis testing via rank-constrained optimization," in Proc. 2010 IEEE International Symposium on Information Theory, 2010, pp. 1618-1622.
[10] D. Huang, in Proc. 2009 IEEE Information Theory Workshop on Networking and Information Theory, 2009, p. 62, DOI: 10.1109/ITWNIT.2009.5158542.