In this paper, we consider probabilistic context-free grammars, a class of generative devices that has been successfully exploited in several applications of syntactic pattern matching, especially in statistical natural language parsing. We investigate the problem of training probabilistic context-free grammars on the basis of distributions defined over an infinite set of trees or an infinite set of sentences, by minimizing the cross-entropy. This problem arises when a context-free approximation of a distribution generated by a more expressive statistical model is needed. We show several theoretical properties of probabilistic context-free grammars estimated in this way, including the previously unknown equality between the cross-entropy of the grammar with respect to the input distribution and the so-called derivational entropy of the grammar itself. We discuss important consequences of these results for the standard application of the maximum-likelihood estimator to finite tree and sentence samples, as well as for other finite-state models such as hidden Markov models and probabilistic finite automata.
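To make the central claim concrete, the following is one standard way to formalize it; the notation below is ours, not taken from the paper, which may define these quantities over sentences as well as trees.

```latex
% Cross-entropy of a PCFG G with respect to an input tree distribution p,
% and the derivational entropy of G (notation assumed, not the paper's):
\begin{align*}
  H(p, p_G)     &= -\sum_{t \in T} p(t)\,\log p_G(t)
                && \text{(cross-entropy)} \\
  H_d(G)        &= -\sum_{t \in T} p_G(t)\,\log p_G(t)
                && \text{(derivational entropy)} \\
  H(p, p_{G^*}) &= H_d(G^*)
                && \text{(claimed equality for } G^* = \arg\min_G H(p, p_G)\text{)}
\end{align*}
```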
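The abstract contrasts the infinite-distribution setting with the standard maximum-likelihood estimator on a finite tree sample. The sketch below shows that standard estimator, relative-frequency estimation of rule probabilities, together with the empirical cross-entropy it minimizes. The nested-tuple tree encoding and all function names are our own illustration, not the paper's.

```python
from collections import Counter, defaultdict
import math

# A tree is (nonterminal, child, child, ...); a leaf child is a plain string.
# Example: ("S", ("NP", "she"), ("VP", ("V", "runs")))

def rules(tree):
    """Yield (lhs, rhs) rule occurrences of a tree, top-down."""
    if isinstance(tree, str):  # terminal leaf: contributes no rule
        return
    lhs, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    yield (lhs, rhs)
    for c in children:
        yield from rules(c)

def ml_estimate(trees):
    """Relative-frequency (maximum-likelihood) PCFG estimate from a tree sample."""
    counts = Counter(r for t in trees for r in rules(t))
    lhs_totals = defaultdict(int)
    for (lhs, _), n in counts.items():
        lhs_totals[lhs] += n
    # Each rule probability is its count divided by the count of its lhs.
    return {r: n / lhs_totals[r[0]] for r, n in counts.items()}

def log_prob(tree, grammar):
    """Log-probability of a tree: sum of the log-probabilities of its rules."""
    return sum(math.log(grammar[r]) for r in rules(tree))

def empirical_cross_entropy(trees, grammar):
    """Average negative log-probability per tree, in nats."""
    return -sum(log_prob(t, grammar) for t in trees) / len(trees)

sample = [
    ("S", ("NP", "she"), ("VP", ("V", "runs"))),
    ("S", ("NP", "she"), ("VP", ("V", "runs"), ("NP", "home"))),
]
g = ml_estimate(sample)
print(g)  # e.g. ('VP', ('V',)) -> 0.5 and ('VP', ('V', 'NP')) -> 0.5
print(empirical_cross_entropy(sample, g))
```

In the paper's setting, the sample average in `empirical_cross_entropy` is replaced by an expectation under a distribution over an infinite set of trees; the finite-sample estimator above is the special case the abstract's consequences address.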
Authors: van der Merwe, Brink; Berglund, Martin
Affiliations: Stellenbosch Univ, Dept Comp Sci, Stellenbosch, South Africa; Natl Inst Theoret & Computat Sci, Stellenbosch, South Africa
Published in: Implementation and Application of Automata (CIAA 2022), 2022, vol. 13266, pp. 53-66