Conditional information control to maximize and minimize information

Cited by: 0
Author
Kamimura, Ryotaro [1 ]
Affiliation
[1] Information Science Laboratory, Tokai University, Kanagawa 259-1296
Source
International Journal of Smart Engineering System Design | 2003 / Vol. 5 / No. 3
Keywords
Complex information processing; Conditional information; Information; Information maximization; Information minimization; Unification
DOI
10.1080/10255810305050
Abstract
In this article, we propose a new information-theoretic method called conditional information control. The method is introduced to maximize and minimize information within a single network. To do so, we use conditional information, which can take different values for different input patterns. We introduce a distortion measure between the Shannon and Rényi information functions to control information. By minimizing this distortion, conditional information can be maximized for some input patterns and minimized for others. We applied the method to character recognition, animal classification, and grammatical inference. In all cases, experimental results confirmed that conditional information is maximized or minimized depending upon specific input patterns, and that a limited number of important or principal hidden units can be detected. In addition, the internal representations obtained by conditional information maximization and minimization can be clearly interpreted.
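As a rough illustration of the quantities the abstract refers to, the short Python sketch below computes a Shannon and a Rényi (order alpha) information value for a single input pattern's hidden-unit activation distribution and measures their squared gap as a distortion. The definitions, the order-2 default, the squared-gap form, and the function names are common textbook choices assumed here for illustration; they are not the paper's exact cost function.

import numpy as np

def shannon_information(p):
    # Shannon information of one pattern's hidden-unit distribution,
    # measured as the gap from maximum entropy: log N - H(p).
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return np.log(len(p)) + np.sum(p * np.log(p + 1e-12))

def renyi_information(p, alpha=2.0):
    # Renyi information of order alpha: log N - H_alpha(p),
    # with H_alpha(p) = log(sum p^alpha) / (1 - alpha).
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return np.log(len(p)) + np.log(np.sum(p ** alpha)) / (alpha - 1.0)

def information_distortion(p, alpha=2.0):
    # Hypothetical distortion: squared gap between the two information values.
    # It vanishes at both extremes: a single responding unit or a uniform response.
    return (shannon_information(p) - renyi_information(p, alpha)) ** 2

# One strongly responding hidden unit, a nearly uniform response, and an
# intermediate case for a single input pattern.
for p in ([0.97, 0.01, 0.01, 0.01], [0.26, 0.25, 0.25, 0.24], [0.5, 0.3, 0.1, 0.1]):
    print(p, information_distortion(p))

With these assumed definitions the gap is zero both when a single unit responds (maximal information) and when activations are uniform (minimal information), which mirrors the abstract's claim that minimizing the distortion can drive different input patterns toward maximal or minimal conditional information.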
Pages: 187-202
Number of pages: 15
References
27 items in total
[1]  
Abramson N., Information Theory and Coding, (1963)
[2]  
Akiyama Y., Furuya T., An extension of the back-propagation learning which performs entropy maximization as well as error minimization, Tech. Rep., NC91-6, (1991)
[3]  
Atick J.J., Redlich A.N., Towards a theory of early visual processing, Neural Computation, 2, pp. 308-320, (1990)
[4]  
Attneave F., Some informational aspects of visual perception, Psychological Review, 61, pp. 183-193, (1954)
[5]  
Barlow H.B., Unsupervised learning, Neural Computation, 1, pp. 295-311, (1989)
[6]  
Barlow H.B., Kaushal T.P., Mitchison G.J., Finding minimum entropy codes, Neural Computation, 1, pp. 412-423, (1989)
[7]  
Becker S., Mutual information maximization: Models of cortical self-organization, Network: Computation in Neural Systems, 7, pp. 7-31, (1996)
[8]  
Becker S., Hinton G.E., Learning mixture models of spatial coherence, Neural Computation, 5, pp. 267-277, (1993)
[9]  
Bienenstock E.L., Cooper L.N., Munro P.W., Theory for the development of neuron selectivity, Journal of Neuroscience, 2, pp. 32-48, (1982)
[10]  
Brillouin L., Science and Information Theory, (1962)