The Latent Maximum Entropy Principle

Cited by: 1
Authors
Wang, Shaojun [1 ]
Schuurmans, Dale [2 ]
Zhao, Yunxin [3 ]
Affiliations
[1] Wright State Univ, Dept Comp Sci & Engn, Dayton, OH 45435 USA
[2] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E8, Canada
[3] Univ Missouri, Dept Comp Sci, Columbia, MO 65211 USA
Keywords
Maximum entropy; iterative scaling; expectation maximization; latent variable models; information geometry
DOI
Not available
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
We present an extension of Jaynes' maximum entropy principle that incorporates latent variables. The principle of latent maximum entropy we propose differs from both Jaynes' maximum entropy principle and maximum likelihood estimation, but can yield better estimates in the presence of hidden variables and limited training data. We first show that solving for a latent maximum entropy model poses a hard nonlinear constrained optimization problem in general. However, we then show that feasible solutions to this problem can be obtained efficiently for the special case of log-linear models, which forms the basis for an efficient approximation to the latent maximum entropy principle. We derive an algorithm that combines expectation maximization with iterative scaling to produce feasible log-linear solutions. This algorithm can be interpreted as an alternating minimization of information divergence, and it reveals an intimate connection between the latent maximum entropy and maximum likelihood principles. To select a final model, we generate a series of feasible candidates, calculate the entropy of each, and choose the model that attains the highest entropy. Our experimental results show that estimation based on the latent maximum entropy principle generally gives better results than maximum likelihood when estimating latent variable models on small observed data samples.
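The EM-with-iterative-scaling procedure described in the abstract can be illustrated with a short sketch. This is a minimal, hedged illustration rather than the authors' exact algorithm: it assumes a small, fully enumerable discrete joint state space over (y, z), nonnegative features, and a generalized-iterative-scaling (GIS) style update in the M-step; all function and variable names (em_is, log_linear_probs, y_of_state, and so on) are hypothetical.

```python
import numpy as np

def log_linear_probs(lam, F):
    """Joint log-linear model p(y, z) proportional to exp(lam . f(y, z)).
    F[s, i] holds feature f_i evaluated at joint state s = (y, z)."""
    logits = F @ lam
    logits -= logits.max()            # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

def em_is(F, y_of_state, empirical_y, n_em=100, n_gis=25):
    """Alternate an E-step (complete the latent z under the current model)
    with an iterative-scaling M-step that pushes the model's feature
    expectations toward the E-step targets, yielding a feasible solution."""
    C = F.sum(axis=1).max()           # GIS constant bounding total feature mass
    lam = np.zeros(F.shape[1])
    for _ in range(n_em):
        p = log_linear_probs(lam, F)
        # E-step: expected features, filling in z via p(z | y) under the model
        target = np.zeros_like(lam)
        for y, w in enumerate(empirical_y):
            mask = (y_of_state == y)
            cond = p[mask] / p[mask].sum()     # conditional p(z | y)
            target += w * (cond @ F[mask])
        # M-step: GIS-style updates match model expectations to the targets
        for _ in range(n_gis):
            p = log_linear_probs(lam, F)
            model_exp = p @ F
            lam += np.log(np.maximum(target, 1e-12)
                          / np.maximum(model_exp, 1e-12)) / C
    return lam

def model_entropy(lam, F):
    """Entropy of the fitted joint distribution, used to rank candidates:
    the principle selects the highest-entropy feasible model."""
    p = log_linear_probs(lam, F)
    return -np.sum(p * np.log(np.maximum(p, 1e-300)))
```

Following the model-selection step in the abstract, one would run em_is from several initializations to generate multiple feasible candidates and keep the one maximizing model_entropy.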
Pages: 42