{ "paper_id": "J12-3003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:00:00.421166Z" }, "title": "Empirical Risk Minimization for Probabilistic Grammars: Sample Complexity and Hardness of Learning", "authors": [ { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": { "postCode": "10027", "settlement": "New York", "region": "NY", "country": "United States" } }, "email": "scohen@cs.columbia.edu" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "", "affiliation": { "laboratory": "", "institution": "Columbia University", "location": { "postCode": "10027", "settlement": "New York", "region": "NY", "country": "United States" } }, "email": "nasmith@cs.cmu.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk.", "pdf_parse": { "paper_id": "J12-3003", "_pdf_hash": "", "abstract": [ { "text": "Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures. They are used ubiquitously in computational linguistics. We present a framework, reminiscent of structural risk minimization, for empirical risk minimization of probabilistic grammars using the log-loss. We derive sample complexity bounds in this framework that apply both to the supervised setting and the unsupervised setting. By making assumptions about the underlying distribution that are appropriate for natural language scenarios, we are able to derive distribution-dependent sample complexity bounds for probabilistic grammars. We also give simple algorithms for carrying out empirical risk minimization using this framework in both the supervised and unsupervised settings. In the unsupervised case, we show that the problem of minimizing empirical risk is NP-hard. We therefore suggest an approximate algorithm, similar to expectation-maximization, to minimize the empirical risk.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Learning from data is central to contemporary computational linguistics. It is in common in such learning to estimate a model in a parametric family using the maximum likelihood principle. This principle applies in the supervised case (i.e., using annotated data) as well as semisupervised and unsupervised settings (i.e., using unannotated data). 
Probabilistic grammars constitute a range of such parametric families we can estimate (e.g., hidden Markov models, probabilistic context-free grammars). These parametric families are used in diverse NLP problems ranging from syntactic and morphological processing to applications like information extraction, question answering, and machine translation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Estimation of probabilistic grammars, in many cases, indeed starts with the principle of maximum likelihood estimation (MLE). In the supervised case, and with traditional parametrizations based on multinomial distributions, MLE amounts to normalization of rule frequencies as they are observed in data. In the unsupervised case, on the other hand, algorithms such as expectation-maximization are available. MLE is attractive because it offers statistical consistency if some conditions are met (i.e., if the data are distributed according to a distribution in the family, then we will discover the correct parameters if sufficient data is available). In addition, under some conditions it is also an unbiased estimator.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "An issue that has been far less explored in the computational linguistics literature is the sample complexity of MLE. Here, we are interested in quantifying the number of samples required to accurately learn a probabilistic grammar either in a supervised or in an unsupervised way. If bounds on the requisite number of samples (known as \"sample complexity bounds\") are sufficiently tight, then they may offer guidance to learner performance, given various amounts of data and a wide range of parametric families. Being able to reason analytically about the amount of data to annotate, and the relative gains in moving to a more restricted parametric family, could offer practical advantages to language engineers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We note that grammar learning has been studied in formal settings as a problem of grammatical inference-learning the structure of a grammar or an automaton (Angluin 1987; Clark and Thollard 2004; de la Higuera 2005; Clark, Eyraud, and Habrard 2008, among others) . Our setting in this article is different. We assume that we have a fixed grammar, and our goal is to estimate its parameters. This approach has shown great empirical success, both in the supervised (Collins 2003; Charniak and Johnson 2005) and the unsupervised (Carroll and Charniak 1992; Pereira and Schabes 1992; Klein and Manning 2004; Cohen and Smith 2010a) settings. There has also been some discussion of sample complexity bounds for statistical parsing models, in a distribution-free setting (Collins 2004) . 
The distribution-free setting, however, is not ideal for analysis of natural language, as it has to account for pathological cases of distributions that generate data.", "cite_spans": [ { "start": 156, "end": 170, "text": "(Angluin 1987;", "ref_id": "BIBREF2" }, { "start": 171, "end": 195, "text": "Clark and Thollard 2004;", "ref_id": "BIBREF16" }, { "start": 196, "end": 215, "text": "de la Higuera 2005;", "ref_id": "BIBREF25" }, { "start": 216, "end": 262, "text": "Clark, Eyraud, and Habrard 2008, among others)", "ref_id": null }, { "start": 463, "end": 477, "text": "(Collins 2003;", "ref_id": "BIBREF20" }, { "start": 478, "end": 504, "text": "Charniak and Johnson 2005)", "ref_id": "BIBREF12" }, { "start": 526, "end": 553, "text": "(Carroll and Charniak 1992;", "ref_id": "BIBREF11" }, { "start": 554, "end": 579, "text": "Pereira and Schabes 1992;", "ref_id": "BIBREF45" }, { "start": 580, "end": 603, "text": "Klein and Manning 2004;", "ref_id": "BIBREF38" }, { "start": 604, "end": 626, "text": "Cohen and Smith 2010a)", "ref_id": "BIBREF17" }, { "start": 764, "end": 778, "text": "(Collins 2004)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We develop a framework for deriving sample complexity bounds using the maximum likelihood principle for probabilistic grammars in a distribution-dependent setting. Distribution dependency is introduced here by making empirically justified assumptions about the distributions that generate the data. Our framework uses and significantly extends ideas that have been introduced for deriving sample complexity bounds for probabilistic graphical models (Dasgupta 1997) . Maximum likelihood estimation is put in the empirical risk minimization framework (Vapnik 1998) with the loss function being the log-loss. Following that, we develop a set of learning theoretic tools to explore rates of estimation convergence for probabilistic grammars. We also develop algorithms for performing empirical risk minimization.", "cite_spans": [ { "start": 449, "end": 464, "text": "(Dasgupta 1997)", "ref_id": "BIBREF24" }, { "start": 549, "end": 562, "text": "(Vapnik 1998)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Much research has been devoted to the problem of learning finite state automata (which can be thought of as a class of grammars) in the Probably Approximately Correct setting, leading to the conclusion that it is a very hard problem (Kearns and Valiant 1989; Pitt 1989; Terwijn 2002) . Typically, the setting in these cases is different from our setting: Error is measured as the probability mass of strings that are not identified correctly by the learned finite state automaton, instead of measuring KL divergence between the automaton and the true distribution. In addition, in many cases, there is also a focus on the distribution-free setting. To the best of our knowledge, it is still an open problem whether finite state automata are learnable in the distribution-dependent setting when measuring the error as the fraction of misidentified strings. Other work (Ron 1995; Ron, Singer, and Tishby 1998; Clark and Thollard 2004; Palmer and Goldberg 2007) also gives treatment to probabilistic automata with an error measure which is more suitable for the probabilistic setting, such as Kullback-Lielder (KL) divergence or variation distance. These also focus on learning the structure of finite state machines. 
As mentioned earlier, in our setting we assume that the grammar is fixed, and that our goal is to estimate its parameters.", "cite_spans": [ { "start": 233, "end": 258, "text": "(Kearns and Valiant 1989;", "ref_id": "BIBREF35" }, { "start": 259, "end": 269, "text": "Pitt 1989;", "ref_id": "BIBREF46" }, { "start": 270, "end": 283, "text": "Terwijn 2002)", "ref_id": "BIBREF53" }, { "start": 867, "end": 877, "text": "(Ron 1995;", "ref_id": "BIBREF48" }, { "start": 878, "end": 907, "text": "Ron, Singer, and Tishby 1998;", "ref_id": "BIBREF49" }, { "start": 908, "end": 932, "text": "Clark and Thollard 2004;", "ref_id": "BIBREF16" }, { "start": 933, "end": 958, "text": "Palmer and Goldberg 2007)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We note an important connection to an earlier study about the learnability of probabilistic automata and hidden Markov models by Abe and Warmuth (1992) . In that study, the authors provided positive results for the sample complexity for learning probabilistic automata-they showed that a polynomial sample is sufficient for MLE. We demonstrate positive results for the more general class of probabilistic grammars which goes beyond probabilistic automata. Abe and Warmuth also showed that the problem of finding or even approximating the maximum likelihood solution for a twostate probabilistic automaton with an alphabet of an arbitrary size is hard. Even though these results extend to probabilistic grammars to some extent, we provide a novel proof that illustrates the NP-hardness of identifying the maximum likelihood solution for probabilistic grammars in the specific framework of \"proper approximations\" that we define in this article. Whereas Abe and Warmuth show that the problem of maximum likelihood maximization for two-state HMMs is not approximable within a certain factor in time polynomial in the alphabet and the length of the observed sequence, we show that there is no polynomial algorithm (in the length of the observed strings) that identifies the maximum likelihood estimator in our framework. In our reduction, from 3-SAT to the problem of maximum likelihood estimation, the alphabet used is binary and the grammar size is proportional to the length of the formula. In Abe and Warmuth, the alphabet size varies, and the number of states is two.", "cite_spans": [ { "start": 129, "end": 151, "text": "Abe and Warmuth (1992)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "This article proceeds as follows. In Section 2 we review the background necessary from Vapnik's (1988) empirical risk minimization framework. This framework is reduced to maximum likelihood estimation when a specific loss function is used: the logloss. 1 There are some shortcomings in using the empirical risk minimization framework in its simplest form. In its simplest form, the ERM framework is distribution-free, which means that we make no assumptions about the distribution that generated the data. Naively attempting to apply the ERM framework to probabilistic grammars in the distribution-free setting does not lead to the desired sample complexity bounds. The reason for this is that the log-loss diverges whenever small probabilities are allocated in the learned hypothesis to structures or strings that have a rather large probability in the probability distribution that generates the data. 
With a distribution-free assumption, therefore, we would have to give treatment to distributions that are unlikely to be true for natural language data (e.g., where some extremely long sentences are very probable).", "cite_spans": [ { "start": 87, "end": 102, "text": "Vapnik's (1988)", "ref_id": null }, { "start": 253, "end": 254, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "To correct for this, we move to an analysis in a distribution-dependent setting, by presenting a set of assumptions about the distribution that generates the data. In Section 3 we discuss probabilistic grammars in a general way and introduce assumptions about the true distribution that are reasonable when our data come from natural language examples. It is important to note that this distribution need not be a probabilistic grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The next step we take, in Section 4, is approximating the set of probabilistic grammars over which we maximize likelihood. This is again required in order to overcome the divergence of the log-loss for probabilities that are very small. Our approximations are based on bounded approximations that have been used for deriving sample complexity bounds for graphical models in a distribution-free setting (Dasgupta 1997) .", "cite_spans": [ { "start": 402, "end": 417, "text": "(Dasgupta 1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Our approximations have two important properties: They are, by themselves, probabilistic grammars from the family we are interested in estimating, and they become a tighter approximation around the family of probabilistic grammars we are interested in estimating as more samples are available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Moving to the distribution-dependent setting and defining proper approximations enables us to derive sample complexity bounds. In Section 5 we present the sample complexity results for both the supervised and unsupervised cases. A question that lingers at this point is whether it is computationally feasible to maximize likelihood in our framework even when given enough samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In Section 6, we describe algorithms we use to estimate probabilistic grammars in our framework, when given access to the required number of samples. We show that in the supervised case, we can indeed maximize likelihood in our approximation framework using a simple algorithm. For the unsupervised case, however, we show that maximizing likelihood is NP-hard. This fact is related to a notion known in the learning theory literature as inherent unpredictability (Kearns and Vazirani 1994) : Accurate learning is computationally hard even with enough samples. 
To overcome this difficulty, we adapt the expectation-maximization algorithm (Dempster, Laird, and Rubin 1977) to approximately maximize likelihood (or minimize log-loss) in the unsupervised case with proper approximations.", "cite_spans": [ { "start": 463, "end": 489, "text": "(Kearns and Vazirani 1994)", "ref_id": "BIBREF37" }, { "start": 637, "end": 670, "text": "(Dempster, Laird, and Rubin 1977)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "In Section 7 we discuss some related ideas. These include the failure of an alternative kind of distributional assumption and connections to regularization by maximum a posteriori estimation with Dirichlet priors. Longer proofs are included in the appendices. A table of notation that is used throughout is included as Table D.1 in Appendix D. This article builds on two earlier papers. In Cohen and Smith (2010b) we presented the main sample complexity results described here; the present article includes significant extensions, a deeper analysis of our distributional assumptions, and a discussion of variants of these assumptions, as well as related work, such as that about the Tsybakov noise condition. In Cohen and Smith (2010c) we proved NP-hardness for unsupervised parameter estimation of probalistic context-free grammars (PCFGs) (without approximate families). The present article uses a similar type of proof to achieve results adapted to empirical risk minimization in our approximation framework.", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 344, "text": "Table D.1 in Appendix D.", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We begin by introducing some notation. We seek to construct a predictive model that maps inputs from space X to outputs from space Z. In this work, X is a set of strings using some alphabet \u03a3 (X \u2286 \u03a3 * ), and Z is a set of derivations allowed by a grammar (e.g., a context-free grammar). We assume the existence of an unknown joint probability distribution p(x, z) over X \u00d7 Z. (For the most part, we will be discussing discrete input and output spaces. This means that p will denote a probability mass function.) We are interested in estimating the distribution p from examples, either in a supervised setting, where we are provided with examples of the form (x, z) \u2208 X \u00d7 Z, or in the unsupervised setting, where we are provided only with examples of the form x \u2208 X. We first consider the supervised setting and return to the unsupervised setting in Section 5. We will use q to denote the estimated distribution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "In order to estimate p as accurately as possible using q(x, z), we are interested in minimizing the log-loss, that is, in finding q opt , from a fixed family of distributions Q (also called \"the concept space\"), such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." 
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q opt = argmin q\u2208Q E p \u2212 log q = argmin q\u2208Q \u2212 x,z p(x, z) log q(x, z)", "eq_num": "( 1 )" } ], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "Note that if p \u2208 Q, then this quantity achieves the minimum when q opt = p, in which case the value of the log-loss is the entropy of p. Indeed, more generally, this optimization is equivalent to finding q such that it minimizes the KL divergence from p to q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "Because p is unknown, we cannot hope to minimize the log-loss directly. Given a set of examples (x 1 , z 1 ), . . . , (x n , z n ), however, there is a natural candidate, the empirical distributionp n , for use in Equation (1) instead of p, defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "p n (x, z) = n \u22121 n i=1 I {(x, z) = (x i , z i )} where I {(x, z) = (x i , z i )} is 1 if (x, z) = (x i , z i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "and 0 otherwise. 2 We then set up the problem as the problem of empirical risk minimization (ERM), that is, trying to find q such that", "cite_spans": [ { "start": 17, "end": 18, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q * = argmin q\u2208Q Ep n \u2212 log q (2) = argmin q\u2208Q \u2212n \u22121 n i=1 log q(x i , z i ) = argmax q\u2208Q n \u22121 n i=1 log q(x i , z i )", "eq_num": "( 3 )" } ], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "Equation 3immediately shows that minimizing empirical risk using the log-loss is equivalent to the maximizing likelihood, which is a common statistical principle used for estimating a probabilistic grammar in computational linguistics (Charniak 1993; Manning and Sch\u00fctze 1999) . 3 As mentioned earlier, our goal is to estimate the probability distribution p while quantifying how accurate our estimate is. One way to quantify the estimation accuracy is by bounding the excess risk, which is defined as", "cite_spans": [ { "start": 235, "end": 250, "text": "(Charniak 1993;", "ref_id": "BIBREF12" }, { "start": 251, "end": 276, "text": "Manning and Sch\u00fctze 1999)", "ref_id": "BIBREF41" }, { "start": 279, "end": 280, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "E p (q; Q) = E p (q) E p \u2212 log q \u2212 min q \u2208Q E p \u2212 log q", "eq_num": "(4)" } ], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." 
}, { "text": "We are interested in bounding the excess risk for q * , E p (q * ). The excess risk is reduced to KL divergence between p and q if p \u2208 Q, because in this case the quantity min q \u2208Q E \u2212 log q is minimized with q = p, and equals the entropy of p. In a typical case, where we do not necessarily have p \u2208 Q, then the excess risk of q is bounded from above by the KL divergence between p and q.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "We can bound the excess risk by showing the double-sided convergence of the empirical process R n (Q), defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R n (Q) sup q\u2208Q Ep n \u2212 log q \u2212 E p \u2212 log q \u2192 0", "eq_num": "( 5 )" } ], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "as n \u2192 \u221e. For any > 0, if, for large enough n it holds that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "sup q\u2208Q Ep n \u2212 log q \u2212 E p \u2212 log q <", "eq_num": "(6)" } ], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "(with high probability), then we can \"sandwich\" the following quantities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "E p \u2212 log q opt \u2264 E p \u2212 log q * (7) \u2264 Ep n \u2212 log q * + \u2264 Ep n \u2212 log q opt + \u2264 E p \u2212 log q opt + 2 (8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "where the inequalities come from the fact that q opt minimizes the expected risk E p \u2212 log q for q \u2208 Q, and q * minimizes the empirical risk Ep n \u2212 log q for q \u2208 Q. The consequence of Equations 7and 8is that the expected risk of q * is at most 2 away from the expected risk of q opt , and as a result, we find the excess risk E p (q * ), for large enough n, is smaller than 2 . Intuitively, this means that, under a large sample, q * does not give much worse results than q opt under the criterion of the log-loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "Unfortunately, the regularity conditions which are required for the convergence of R n (Q) do not hold because the log-loss can be unbounded. This means that a modification is required for the empirical process in a way that will actually guarantee some kind of convergence. We give a treatment to this in the next section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." 
}, { "text": "We note that all discussion of convergence in this section has been about convergence in probability. For example, we want Equation (6) to hold with high probabilityfor most samples of size n. We will make this notion more rigorous in Section 2.2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Maximum Likelihood Estimation", "sec_num": "2." }, { "text": "It has been noted in the literature (Vapnik 1998; Koltchinskii 2006 ) that often the class Q is too complex for empirical risk minimization using a fixed number of data points. It is therefore desirable in these cases to create a family of subclasses {Q \u03b1 | \u03b1 \u2208 A} that have increasing complexity. The more data we have, the more complex our Q \u03b1 can be for empirical risk minimization. Structural risk minimization (Vapnik 1998) and the method of sieves (Grenander 1981) are examples of methods that adopt such an approach. Structural risk minimization, for example, can be represented in many cases as a penalization of the empirical risk method, using a regularization term.", "cite_spans": [ { "start": 36, "end": 49, "text": "(Vapnik 1998;", "ref_id": "BIBREF56" }, { "start": 50, "end": 67, "text": "Koltchinskii 2006", "ref_id": "BIBREF39" }, { "start": 415, "end": 428, "text": "(Vapnik 1998)", "ref_id": "BIBREF56" }, { "start": 454, "end": 470, "text": "(Grenander 1981)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "In our case, the level of \"complexity\" is related to allocation of small probabilities to derivations in the grammar by a distribution q \u2208 Q. The basic problem is this: Whenever we have a derivation with a small probability, the log-loss becomes very large (in absolute value), and this makes it hard to show the convergence of the empirical process R n (Q). Because grammars can define probability distributions over infinitely many discrete outcomes, probabilities can be arbitrarily small and log-loss can be arbitrarily large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "To solve this issue with the complexity of Q, we define in Section 4 a series of approximations {Q n | n \u2208 N} for probabilistic grammars such that n Q n = Q. 
Our framework for empirical risk minimization is then set up to minimize the empirical risk with respect to Q n , where n is the number of samples we draw for the learner:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q * n = argmin q\u2208Q n Ep n \u2212 log q", "eq_num": "(9)" } ], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "We are then interested in the convergence of the empirical process", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "R n (Q n ) = sup q\u2208Q n Ep n \u2212 log q \u2212 E p \u2212 log q", "eq_num": "(10)" } ], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "In Section 4 we show that the minimizer q * n is an asymptotic empirical risk minimizer (in our specific framework), which means that E p \u2212 log q * n \u2192 E p \u2212 log q * . Because we have n Q n = Q, the implication of having asymptotic empirical risk minimization is that we have E p (q * n ; Q n ) \u2192 E p (q * ; Q).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Empirical Risk Minimization and Structural Risk Minimization Methods", "sec_num": "2.1" }, { "text": "Knowing that we are interested in the convergence of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "2.2" }, { "text": "R n (Q n ) = sup q\u2208Q n |Ep n \u2212 log q \u2212 E p \u2212 log q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "2.2" }, { "text": "|, a natural question to ask is: \"At what rate does this empirical process converge?\"", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "2.2" }, { "text": "Because the quantity R n (Q n ) is a random variable, we need to give a probabilistic treatment to its convergence. More specifically, we ask the question that is typically asked when learnability is considered (Vapnik 1998) : \"How many samples n are required so that with probability 1 \u2212 \u03b4 we have R n (Q n ) < ?\" Bounds on this number of samples are also called \"sample complexity bounds,\" and in a distribution-free setting they are described as a function N( , \u03b4, Q), independent of the distribution p that generates the data.", "cite_spans": [ { "start": 211, "end": 224, "text": "(Vapnik 1998)", "ref_id": "BIBREF56" } ], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "2.2" }, { "text": "A complete distribution-free setting is not appropriate for analyzing natural language. This setting poses technical difficulties with the convergence of R n (Q n ) and needs to take into account pathological cases that can be ruled out in natural language data. 
Instead, we will make assumptions about p, parametrize these assumptions in several ways, and then calculate sample complexity bounds of the form N( , \u03b4, Q, p), where the dependence on the distribution is expressed as dependence on the parameters in the assumptions about p.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "2.2" }, { "text": "The learning setting, then, can be described as follows. The user decides on a level of accuracy ( ) which the learning algorithm has to reach with confidence (1 \u2212 \u03b4). Then, N( , \u03b4, Q, p) samples are drawn from p and presented to the learning algorithm. The learning algorithm then returns an hypothesis according to Equation (9).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "2.2" }, { "text": "We begin this section by discussing the family of probabilistic grammars. A probabilistic grammar defines a probability distribution over a certain kind of structured object (a derivation of the underlying symbolic grammar) explained step-by-step as a stochastic process. Hidden Markov models (HMMs), for example, can be understood as a random walk through a probabilistic finite-state network, with an output symbol sampled at each state. PCFGs generate phrase-structure trees by recursively rewriting nonterminal symbols as sequences of \"child\" symbols (each itself either a nonterminal symbol or a terminal symbol analogous to the emissions of an HMM).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "Each step or emission of an HMM and each rewriting operation of a PCFG is conditionally independent of the others given a single structural element (one HMM or PCFG state); this Markov property permits efficient inference over derivations given a string.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "In general, a probabilistic grammar G, \u03b8 defines the joint probability of a string x and a grammatical derivation z:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q(x, z | \u03b8, G) = K k=1 N k i=1 \u03b8 \u03c8 k,i (x,z) k,i = exp K k=1 N k i=1 \u03c8 k,i (x, z) log \u03b8 k,i", "eq_num": "(11)" } ], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "where \u03c8 k,i is a function that \"counts\" the number of times the kth distribution's ith event occurs in the derivation. The parameters \u03b8 are a collection of K multinomials \u03b8 1 , . . . , \u03b8 K , the kth of which includes N k competing events. If we let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "\u03b8 k = \u03b8 k,1 , . . . , \u03b8 k,N k , each \u03b8 k,i is a probability, such that \u2200k, \u2200i, \u03b8 k,i \u2265 0 \u2200k, N k i=1 \u03b8 k,i = 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "We denote by \u0398 G this parameter space for \u03b8. The grammar G dictates the support of q in Equation (11). As is often the case in probabilistic modeling, there are different ways to carve up the random variables. 
We can think of x and z as correlated structure variables (often x is known if z is known), or the derivation event counts \u03c8(x, z) = \u03c8 k,i (x, z) 1\u2264k\u2264K,1\u2264i\u2264N k as an integer-vector random variable. In this article, we assume that x is always a deterministic function of z, so we use the distribution p(z) interchangeably with p (x, z) .", "cite_spans": [ { "start": 538, "end": 544, "text": "(x, z)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "Note that there may be many derivations z for a given string x-perhaps even infinitely many in some kinds of grammars. For HMMs, there are three kinds of multinomials: a starting state multinomial, a transition multinomial per state and an emission multinomial per state. In that case K = 2s + 1, where s is the number of states. The value of N k depends on whether the kth multinomial is the starting state multinomial (in which case N k = s), transition multinomial (N k = s), or emission multinomial (N k = t, with t being the number of symbols in the HMM). For PCFGs, each multinomial among the K multinomials corresponds to a set of N k context-free rules headed by the same nonterminal. The parameter \u03b8 k,i is then the probability of the ith rule for the kth nonterminal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "We assume that G denotes a fixed grammar, such as a context-free or regular grammar. We let N = K k=1 N k denote the total number of derivation event types. We use D(G) to denote the set of all possible derivations of G. We define D x (G) ", "cite_spans": [ { "start": 235, "end": 238, "text": "(G)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "= {z \u2208 D(G) | yield(z) = x}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "We use deg(G) to denote the \"degree\" of G, i.e., deg(G) = max k N k . We let |x| denote the length of the string x, and |z| = K k=1 N k i=1 \u03c8 k,i (z) denote the \"length\" (number of event tokens) of the derivation z.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "Going back to the notation in Section 2, Q would be a collection of probabilistic grammars, parametrized by \u03b8, and q would be a specific probabilistic grammar with a specific \u03b8. We therefore treat the problem of ERM with probabilistic grammars as the problem of parameter estimation-identifying \u03b8 from complete data or incomplete data (strings x are visible but the derivations z are not). We can also view parameter estimation as the identification of a hypothesis from the concept space", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "Q = H(G) = {h \u03b8 (z) | \u03b8 \u2208 \u0398 G } (where h \u03b8 is a distribution of the form of Equation [11]) or, equivalently, from negated log-concept space F(G) = {\u2212 log h \u03b8 (z) | \u03b8 \u2208 \u0398 G }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." 
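A minimal sketch of Equation (11), together with the frequency-normalization estimator mentioned in the Introduction, with event counts and parameters stored in plain dictionaries; the toy counts and parameter values are invented for illustration.

```python
import math

def log_prob(psi, theta):
    """Equation (11): log q(x, z | theta, G) = sum_{k,i} psi_{k,i}(x, z) * log theta_{k,i}.

    psi:   {(k, i): number of times event i of multinomial k fires in derivation z}
    theta: {(k, i): probability of event i in multinomial k}, with
           sum_i theta[(k, i)] = 1 for every k.
    """
    return sum(c * math.log(theta[ki]) for ki, c in psi.items())

def mle(derivations):
    """Supervised MLE: normalize event frequencies over fully observed derivations
    (rule counting, as noted in the Introduction for multinomial parametrizations)."""
    counts, totals = {}, {}
    for psi in derivations:
        for (k, i), c in psi.items():
            counts[(k, i)] = counts.get((k, i), 0) + c
            totals[k] = totals.get(k, 0) + c
    return {(k, i): c / totals[k] for (k, i), c in counts.items()}

# Two multinomials (K = 2), each with two events (N_k = 2, cf. Section 3.2).
data = [{(1, 1): 3, (1, 2): 1, (2, 1): 2},    # psi(x_1, z_1)
        {(1, 1): 1, (1, 2): 1, (2, 2): 2}]    # psi(x_2, z_2)
theta_hat = mle(data)                          # e.g., theta_hat[(1, 1)] = 4/6
print(theta_hat, math.exp(log_prob(data[0], theta_hat)))
```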
}, { "text": "For simplicity of notation, we assume that there is a fixed grammar G and use H to refer to H(G) and F to refer to F(G).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Probabilistic Grammars", "sec_num": "3." }, { "text": "In this section, we describe a parametrization of assumptions we make about the distribution p(x, z), the distribution that generates derivations from D(G) (note that p does not have to be a probabilistic grammar). We first describe empirical evidence about the decay of the frequency of long strings x. Figure 1 shows the frequency of sentence length for treebanks in various languages. 4 The trend in the plots clearly shows that in the extended tail of the curve, all languages have an exponential decay of probabilities as a function of sentence length. To test this, we performed a simple regression of frequencies using an exponential curve. We estimated each curve for each language using a curve of the form f (l; c, \u03b1) = cl \u03b1 . This estimation was done by minimizing squared error between the frequency versus sentence length curve and the approximate version of this curve. The data points used for the approximation are (l i , p i ), where l i denotes sentence length and p i denotes frequency, selected from the extended tail of the distribution. Extended tail here refers to all points with length longer than l 1 , where l 1 is the length with the highest frequency in the treebank. The goal of focusing on the tail is to avoid approximating the head of the curve, which is actually a monotonically increasing function. We plotted the approximate curve together with a length versus frequency curve for new syntactic data. It can be seen (Figure 1 ) that the approximation is rather accurate in these corpora.", "cite_spans": [ { "start": 388, "end": 389, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 304, "end": 312, "text": "Figure 1", "ref_id": "FIGREF1" }, { "start": 1452, "end": 1461, "text": "(Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Distributional Assumptions about Language", "sec_num": "3.1" }, { "text": "As a consequence of this observation, we make a few assumptions about G and p(x, z):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Assumptions about Language", "sec_num": "3.1" }, { "text": "r Derivation length proportional to sentence length: There is an \u03b1 \u2265 1 such that, for all z, |z| \u2264 \u03b1|yield(z)|. Further, |z| \u2265 |x|. (This prohibits unary cycles.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Distributional Assumptions about Language", "sec_num": "3.1" }, { "text": "r Exponential decay of derivations: There is a constant r < 1 and a constant L \u2265 0 such that p(z) \u2264 Lr |z| . Note that the assumption here is about the frequency of length of separate derivations, and not the aggregated frequency of all sentences of a certain length (cf. the discussion above referring to Figure 1 ).", "cite_spans": [], "ref_spans": [ { "start": 306, "end": 314, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Distributional Assumptions about Language", "sec_num": "3.1" }, { "text": "A plot of the tail of frequency vs. sentence length in treebanks for English, German, Bulgarian, Turkish, Spanish, and Chinese. 
Red lines denote data from the treebank, blue lines denote an approximation which uses an exponential function of the form f (l; c, \u03b1) = cl \u03b1 (the blue line uses data which is different from the data used to estimate the curve parameters, c and \u03b1). The parameters (c, \u03b1) are (0.19, 0.92) for English, (0.06, 0.94) for German, (0.26, 0.89) for Bulgarian, (0.26, 0.83) for Turkish, (0.11, 0.93) for Spanish, and (0.03, 0.97) for Chinese. Squared errors are 0.0005, 0.0003, 0.0007, 0.0003, 0.001, and 0.002 for English, German, Bulgarian, Turkish, Spanish, and Chinese, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "r Exponential decay of strings: Let \u039b(k) = |{z \u2208 D(G) | |z| = k}| be the number derivations of length k in G. We assume that \u039b(k) is an increasing function, and complete it such that it is defined over positive numbers by taking \u039b(t) \u039b( t ). Taking r as before, we assume there exists a constant q < 1, such that \u039b 2 (k)r k \u2264 q k (and as a consequence, \u039b(k)r k \u2264 q k ). This implies that the number of derivations of length k may be exponentially large (e.g., as with many PCFGs), but is bounded by (q/r) k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "r Bounded expectations of rules:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "There is a B < \u221e such that E p \u03c8 k,i (z) \u2264 B", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "for all k and i.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "These assumptions must hold for any p whose support consists of a finite set. These assumptions also hold in many cases when p itself is a probabilistic grammar. Also, we note that the last requirement of bounded expectations is optional, and it can be inferred from the rest of the requirements: B = L/(1 \u2212 q) 2 . We make this requirement explicit for simplicity of notation later. We denote the family of distributions that satisfy all of these requirements by P (\u03b1, L, r, q, B, G) .", "cite_spans": [ { "start": 465, "end": 483, "text": "(\u03b1, L, r, q, B, G)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "There are other cases in the literature of language learning where additional assumptions are made on the learned family of models in order to obtain positive learnability results. For example, Clark and Thollard (2004) put a bound on the expected length of strings generated from any state of probabilistic finite state automata, which resembles the exponential decay of strings we have for p in this article.", "cite_spans": [ { "start": 194, "end": 219, "text": "Clark and Thollard (2004)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "An immediate consequence of these assumptions is that the entropy of p is finite and bounded by a quantity that depends on L, r and q. 5 Bounding entropy of labels (derivations) given inputs (sentences) is a common way to quantify the noise in a distribution. 
Here, both the sentential entropy (H s (p) ", "cite_spans": [ { "start": 299, "end": 302, "text": "(p)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "= \u2212 x p(x) log p(x)) is bounded as well as the derivational entropy (H d (p) = \u2212 x,z p(x, z) log p(x, z))", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": ". This is stated in the following result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 1", "sec_num": null }, { "text": "Let p \u2208 P(\u03b1, L, r, q, B, G) be a distribution. Then, we have H s (p) \u2264 H d (p) \u2264 \u2212 log L + L log r (1 \u2212 q) 2 log 1 r + (1 + log L)/ log 1 r e \u039b 1 + log L log 1 r Proof First note that H s (p) \u2264 H d (p)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "holds by the data processing inequality (Cover and Thomas 1991) because the sentential probability distribution p(x) is a coarser version of the derivational probability distribution p(x, z). Now, consider p(x, z). For simplicity of notation, we use p(z) instead of p(x, z). The yield of z, x, is a function of z, and therefore can be omitted from the distribution. It holds that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "H d (p) = \u2212 z p(z) log p(z) = \u2212 z\u2208Z 1 p(z) log p(z) \u2212 z\u2208Z 2 p(z) log p(z) = H d (p, Z 1 ) + H d (p, Z 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "Z 1 = {z | p(z) > 1/e} and Z 2 = {z | p(z) \u2264 1/e}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "Note that the function \u2212\u03b1 log \u03b1 reaches its maximum for \u03b1 = 1/e. We therefore have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "H d (p, Z 1 ) \u2264 |Z 1 | e", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "We give a bound on |Z 1 |, the number of \"high probability\" derivations. Because we have p(x, z) \u2264 Lr |z| , we can find the maximum length of a derivation that has a probability of more than 1/e (and hence, it may appear in Z 1 ) by solving 1/e \u2264 Lr |z| for |z|, which leads to |z| \u2264 log(1/eL)/ log r. Therefore, there are at most", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "(1+log L)/ log 1 r k=1 \u039b(k) derivations in |Z 1 | and therefore we have |Z 1 | \u2264 (1 + log L)/ log 1 r \u039b (1 + log L)/ log 1 r H d (p, Z 1 ) \u2264 (1 + log L)/ log 1 r e \u039b (1 + log L)/ log 1 r (12)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "where we use the monotonicity of \u039b. Consider H d (p, Z 2 ) (the \"low probability\" derivations). 
We have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "H d (p, Z 2 ) \u2264 \u2212 z\u2208Z 2 Lr |z| log Lr |z| \u2264 \u2212 log L \u2212 L log r z\u2208Z 2 |z|r |z| \u2264 \u2212 log L \u2212 L log r \u221e k=1 \u039b(k)kr k \u2264 \u2212 log L \u2212 L log r \u221e k=1 kq k (13) = \u2212 log L + L log r (1 \u2212 q) 2 log 1 q (14)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "where Equation 13holds from the assumptions about p. Putting Equation 12and Equation 14together, we obtain the result. We note that another common way to quantify the noise in a distribution is through the notion of Tsybakov noise (Tsybakov 2004; Koltchinskii 2006) . We discuss this further in Section 7.1, where we show that Tsybakov noise is too permissive, and probabilistic grammars do not satisfy its conditions.", "cite_spans": [ { "start": 231, "end": 246, "text": "(Tsybakov 2004;", "ref_id": "BIBREF55" }, { "start": 247, "end": 265, "text": "Koltchinskii 2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 1", "sec_num": null }, { "text": "When approximating a family of probabilistic grammars, it is much more convenient when the degree of the grammar is limited. In this article, we limit the degree of the grammar by making the assumption that all N k \u2264 2. This assumption may seem, at first glance, somewhat restrictive, but we show next that for PCFGs (and as a consequence, other formalisms), this assumption does not limit the total generative capacity that we can have across all context-free grammars.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limiting the Degree of the Grammar", "sec_num": "3.2" }, { "text": "We first show that any context-free grammar with arbitrary degree can be mapped to a corresponding grammar with all N k \u2264 2 that generates derivations equivalent to derivations in the original grammar. Such a grammar is also called a \"covering grammar\" (Nijholt 1980; Leermakers 1989) . Let G be a CFG. Let A be the kth nonterminal. Consider the rules A \u2192 \u03b1 i for i \u2264 N k where A appears on the left side. For each rule", "cite_spans": [ { "start": 253, "end": 267, "text": "(Nijholt 1980;", "ref_id": "BIBREF43" }, { "start": 268, "end": 284, "text": "Leermakers 1989)", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Limiting the Degree of the Grammar", "sec_num": "3.2" }, { "text": "Example of a context-free grammar and its equivalent binarized form.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "A \u2192 \u03b1 i , i < N k ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "we create a new nonterminal in G such that A i has two rewrite rules: Figure 2 demonstrates an example of this transformation on a small context-free grammar.", "cite_spans": [], "ref_spans": [ { "start": 70, "end": 78, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "A i \u2192 \u03b1 i and A i \u2192 A i+1 . In addition, we create rules A \u2192 A 1 and A N k \u2192 \u03b1 N k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "It is easy to verify that the resulting grammar G has an equivalent capacity to the original CFG, G. 
A simple transformation that converts each derivation in the new grammar to a derivation in the old grammar would involve collapsing any path of nonterminals added to G (i.e., all A i for nonterminal A) so that we end up with nonterminals from the original grammar only. Similarly, any derivation in G can be converted to a derivation in G by adding new nonterminals through unary application of rules of the form A i \u2192 A i+1 . Given a derivation z in G, we denote by \u03a5 G \u2192G (z) the corresponding derivation in G after adding the new non-terminals A i to z. Throughout this article, we will refer to the normalized form of G as a \"binary normal form.\" 6 Note that K , the number of multinomials in the binary normal form, is a function of both the number of nonterminals in the original grammar and the number of rules in that grammar. More specifically, we have that K = K k=1 N k + K. To make the equivalence complete, we need to show that any probabilistic context-free grammar can be translated to a PCFG with max k N k \u2264 2 such that the two PCFGs induce the same equivalent distributions over derivations.", "cite_spans": [ { "start": 753, "end": 754, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Figure 2", "sec_num": null }, { "text": "a i \u2208 [0, 1], i \u2208 {1, . . . , N} such that i a i = 1. Define b 1 = a 1 , c 1 = 1 \u2212 a 1 , b i = a i a i\u22121 b i\u22121 c i\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1 Let", "sec_num": null }, { "text": ", and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1 Let", "sec_num": null }, { "text": "c i = 1 \u2212 b i for i \u2265 2. Then a i = \uf8eb \uf8ed i\u22121 j=1 c j \uf8f6 \uf8f8 b i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1 Let", "sec_num": null }, { "text": "See Appendix A for the proof of Utility Lemma 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1 Let", "sec_num": null }, { "text": "Let G, \u03b8 be a probabilistic context-free grammar. Let G be the binarizing transformation of G as defined earlier. Then, there exists \u03b8 for G such that for any", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1", "sec_num": null }, { "text": "z \u2208 D(G) we have p(z | \u03b8, G) = p(\u03a5 G \u2192G (z) | \u03b8 , G ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1", "sec_num": null }, { "text": "6 We note that this notion of binarization is different from previous types of binarization appearing in computational linguistics for grammars. Typically in previous work about binarized grammars such as CFGs, the grammars are constrained to have at most two nonterminals in the right side in Chomsky normal form. Another form of binarization for linear context-free rewriting systems is restriction of the fan-out of the rules to two (G\u00f3mez-Rodr\u00edguez and Satta 2009; Gildea 2010). We, however, limit the number of rules for each nonterminal (or more generally, the number of elements in each multinomial).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 1", "sec_num": null }, { "text": "For the grammar G, index the set {1, ..., K} with nonterminals ranging from A 1 to A K . Define G as before. We need to define \u03b8 . Index the multinomials in G by (k, i), each having two events. 
Let \u00b5 (k,i),1 = \u03b8 k,i , \u00b5 (k,i),2 = 1 \u2212 \u03b8 k,i for i = 1 and set \u00b5 k,i,1 = \u03b8 k,i /\u00b5 (k,i\u22121),2 , and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "\u00b5 (k,i\u22121),2 = 1 \u2212 \u00b5 (k,i\u22121),2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "G , \u00b5 is a weighted context-free grammar such that the \u00b5 (k,i),1 corresponds to the ith event in the k multinomial of the original grammar. Let z be a derivation in G and z = \u03a5 G \u2192G (z). Then, from Utility Lemma 1 and the construction of g , we have that:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "p(z | \u03b8, G) = K k=1 N k i=1 \u03b8 \u03c8 k,i (z) k,i = K k=1 N k i=1 \u03c8 k,i (z) l=1 \u03b8 k,i = K k=1 N k i=1 \u03c8 k,i (z) l=1 \uf8eb \uf8ed i\u22121 j=1 \u00b5 (k,j),2 \uf8f6 \uf8f8 \u00b5 (k,i),1 = K k=1 N k i=1 \uf8eb \uf8ed i\u22121 j=1 \u00b5 \u03c8 k,i (z) (k,j),2 \uf8f6 \uf8f8 \u00b5 \u03c8 k,i (z) (k,i),1 = K k=1 N k j=1 2 i=1 \u00b5 \u03c8 k,j (z ) (k,j),i = p(z | \u00b5, G )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "From Chi (1999) , we know that the weighted grammar G , \u00b5 can be converted to a probabilistic context-free grammar G , \u03b8 , through a construction of \u03b8 based on \u00b5,", "cite_spans": [ { "start": 5, "end": 15, "text": "Chi (1999)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "such that p(z | \u00b5, G ) = p(z | \u03b8 , G ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "The proof for Theorem 1 gives a construction the parameters \u03b8 of G such that G, \u03b8 is equivalent to G , \u03b8 . The construction of \u03b8 can also be reversed: Given \u03b8 for G , we can construct \u03b8 for G so that again we have equivalence between G, \u03b8 and G , \u03b8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "In this section, we focused on presenting parametrized, empirically justified distributional assumptions about language data that will make the analysis in later sections more manageable. We showed that these assumptions bound the amount of entropy as a function of the assumption parameters. We also made an assumption about the structure of the grammar family, and showed that it entails no loss of generality for CFGs. Many other formalisms can follow similar arguments to show that the structural assumption is justified for them as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "In order to follow the empirical risk minimization described in Section 2.1, we have to define a series of approximations for F, which we denote by the log-concept spaces F 1 , F 2 , . . . . We also have to replace two-sided uniform convergence (Equation [6]) with convergence on the sequence of concept spaces we defined (Equation [10] ). The concept spaces in the sequence vary as a function of the number of samples we have. We next construct the sequence of concept spaces, and in Section 5 we return to the learning model. Our approximations are based on the concept of bounded approximations (Abe, Takeuchi, and Warmuth 1991; Dasgupta 1997) , which were originally designed for graphical models. 
7 A bounded approximation is a subset of a concept space which is controlled by a parameter that determines its tightness. Here we use this idea to define a series of subsets of the original concept space F as approximations, while having two asymptotic properties that control the series' tightness.", "cite_spans": [ { "start": 332, "end": 336, "text": "[10]", "ref_id": null }, { "start": 598, "end": 631, "text": "(Abe, Takeuchi, and Warmuth 1991;", "ref_id": "BIBREF0" }, { "start": 632, "end": 646, "text": "Dasgupta 1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "Let F m (for m \u2208 {1, 2, . . .}) be a sequence of concept spaces. We consider three properties of elements of this sequence, which should hold for m > M for a fixed M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "The first is containment in F:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "F m \u2286 F", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "The second property is boundedness:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "\u2203K m \u2265 0, \u2200f \u2208 F m , E | f | \u00d7 I {| f | \u2265 K m } \u2264 bound (m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "where bound is a non-increasing function such that bound (m) \u2212\u2192 m\u2192\u221e 0. This states that the expected values of functions from F m on values larger than some K m is small. This is required to obtain uniform convergence results in the revised empirical risk minimization model from Section 2.1. Note that K m can grow arbitrarily large. The third property is tightness:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "\u2203C m \u2208 F \u2192 F m , p \uf8eb \uf8ed f \u2208F {z | C m ( f )(z) \u2212 f (z) \u2265 tail (m)} \uf8f6 \uf8f8 \u2264 tail (m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "where tail is a non-increasing function such that tail (m) \u2212\u2192 m\u2192\u221e 0, and C m denotes an operator that maps functions in F to F m . This ensures that our approximation actually converges to the original concept space F. We will show in Section 4.3 that this is actually a well-motivated characterization of convergence for probabilistic grammars in the supervised setting. We say that the sequence F m properly approximates F if there exist tail (m), bound (m), and C m such that, for all m larger than some M, containment, boundedness, and tightness all hold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "In a good approximation, K m would increase at a fast rate as a function of m and tail (m) and bound (m) decrease quickly as a function of m. As we will see in Section 5, we cannot have an arbitrarily fast convergence rate (by, for example, taking a subsequence of F m ), because the size of K m has a great effect on the number of samples required to obtain accurate estimation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." 
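To see concretely why the boundedness property is needed at all, recall that under the log-loss a concept takes the form f(z) = - sum_{k,i} psi_{k,i}(z) log theta_{k,i}, which diverges whenever a parameter actually used by z approaches zero. The short Python sketch below is an illustration only (the feature counts are hypothetical); it makes this divergence visible and records why the truncation introduced in Section 4.1 caps it.

import math

def log_loss(psi, theta):
    # f(z) = -sum_{k,i} psi_{k,i}(z) * log(theta_{k,i}) for a single derivation z
    return -sum(count * math.log(theta[k][i]) for (k, i), count in psi.items())

# Hypothetical counts psi_{k,i}(z) for one derivation of a binarized grammar.
psi = {(0, 0): 2, (0, 1): 1, (1, 0): 3}

for eps in (1e-1, 1e-3, 1e-6):
    theta = {0: (eps, 1.0 - eps), 1: (0.7, 0.3)}   # parameter (0, 0) pushed toward 0
    print(eps, log_loss(psi, theta))               # grows without bound as eps -> 0

# After truncating every binomial into [gamma, 1 - gamma] with gamma = m**(-s), the
# loss of a derivation z is at most on the order of s * N * |z| * log(m), which is
# the quantity that the constant K_m in the boundedness property tracks.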
}, { "text": "7 There are other ways to manage the unboundedness of KL divergence in the language learning literature. Clark and Thollard (2004) , for example, decompose the KL divergence between probabilistic finite-state automata into several terms according to a decomposition of Carrasco (1997) and then bound each term separately.", "cite_spans": [ { "start": 105, "end": 130, "text": "Clark and Thollard (2004)", "ref_id": "BIBREF16" }, { "start": 269, "end": 284, "text": "Carrasco (1997)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Proper Approximations", "sec_num": "4." }, { "text": "Example of a PCFG where there is more than a single way to approximate it by truncation with \u03b3 = 0.1, because it has more than two rules. Any value of \u03b7 \u2208 [0, \u03b3] will lead to a different approximation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "Rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "\u03b8 General \u03b7 = 0 \u03b7 = 0.01 \u03b7 = 0.005 S \u2192 NP VP 0.09 0.01 0.1 0.1 0.1 S \u2192 NP 0.11 0.11 \u2212 \u03b7 0.11 0.1 0.105 S \u2192 VP 0.8 0.8 \u2212 \u03b3 + \u03b7 0.79 0.8 0.795", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Table 1", "sec_num": null }, { "text": "We now focus on constructing proper approximations for probabilistic grammars whose degree is limited to 2. Proper approximations could, in principle, be used with losses other than the log-loss, though their main use is for unbounded losses. Starting from this point in the article, we focus on using such proper approximations with the log-loss. We construct F m . For each f \u2208 F we define a transformation T( f, \u03b3) that shifts every binomial parameter \u03b8 k = \u03b8 k,1 , \u03b8 k,2 in the probabilistic grammar by at most \u03b3:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing Proper Approximations for Probabilistic Grammars", "sec_num": "4.1" }, { "text": "\u03b8 k,1 , \u03b8 k,2 \u2190 \uf8f1 \uf8f2 \uf8f3 \u03b3, 1\u2212 \u03b3 if \u03b8 k,1 < \u03b3 1 \u2212 \u03b3, \u03b3 if \u03b8 k,1 > 1 \u2212 \u03b3 \u03b8 k,1 , \u03b8 k,2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing Proper Approximations for Probabilistic Grammars", "sec_num": "4.1" }, { "text": "otherwise Note that T( f, \u03b3) \u2208 F for any \u03b3 \u2264 1/2. Fix a constant s > 1. 8 We denote by T(\u03b8, \u03b3) the same transformation on \u03b8 (which outputs the new shifted parameters) and we denote by", "cite_spans": [ { "start": 72, "end": 73, "text": "8", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Constructing Proper Approximations for Probabilistic Grammars", "sec_num": "4.1" }, { "text": "\u0398 G (\u03b3) = \u0398(\u03b3) the set {T(\u03b8, \u03b3) | \u03b8 \u2208 \u0398 G }. For each m \u2208 N, define F m = {T( f, m \u2212s ) | f \u2208 F}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing Proper Approximations for Probabilistic Grammars", "sec_num": "4.1" }, { "text": "When considering our approach to approximate a probabilistic grammar by increasing its parameter probabilities to be over a certain threshold, it becomes clear why we are required to limit the grammar to have only two rules and why we are required to use the normal from Section 3.2 with grammars of degree 2. Consider the PCFG rules in Table 1 . 
There are different ways to move probability mass to the rule with small probability. This leads to a problem with identifability of the approximation: How does one decide how to reallocate probability to the small probability rules? By binarizing the grammar in advance, we arrive at a single way to reallocate mass when required (i.e., move mass from the high-probability rule to the low-probability rule). This leads to a simpler proof for sample complexity bounds and a single bound (rather than different bounds depending on different smoothing operators). We note, however, that the choices made in binarizing the grammar imply a particular way of smoothing the probability across the original rules.", "cite_spans": [], "ref_spans": [ { "start": 337, "end": 344, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Constructing Proper Approximations for Probabilistic Grammars", "sec_num": "4.1" }, { "text": "We now describe how this construction of approximations satisfies the properties mentioned in Section 4, specifically, the boundedness property and the tightness property.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Constructing Proper Approximations for Probabilistic Grammars", "sec_num": "4.1" }, { "text": "Let p \u2208 P (\u03b1, L, r, q, B, G) and let F m be as defined earlier. There exists a constant \u03b2 = \u03b2 (L, q, p, N) > 0 such that F m has the boundedness property with K m = sN log 3 m and", "cite_spans": [ { "start": 10, "end": 28, "text": "(\u03b1, L, r, q, B, G)", "ref_id": null }, { "start": 94, "end": 106, "text": "(L, q, p, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "bound (m) = m \u2212\u03b2 log m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "See Appendix A for the proof of Proposition 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "Next, F m is tight with respect to F with tail (m) , r, q, B, G) and let F m as defined earlier. There exists an M such that for any", "cite_spans": [ { "start": 47, "end": 50, "text": "(m)", "ref_id": null }, { "start": 51, "end": 64, "text": ", r, q, B, G)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "= N log 2 m m s \u2212 1 . Proposition 3 Let p \u2208 P(\u03b1, L", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "m > M we have p \uf8eb \uf8ed f \u2208F {z | C m ( f )(z) \u2212 f (z) \u2265 tail (m)} \uf8f6 \uf8f8 \u2264 tail (m) for tail (m) = N log 2 m m s \u2212 1 and C m ( f ) = T( f, m \u2212s ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "See Appendix A for the proof of Proposition 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "We now have proper approximations for probabilistic grammars. These approximations are defined as a series of probabilistic grammars, related to the family of probabilistic grammars we are interested in estimating. 
They consist of three properties: containment (they are a subset of the family of probabilistic grammars we are interested in estimating), boundedness (their log-loss does not diverge to infinity quickly), and they are tight (there is a small probability mass at which they are not tight approximations).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "At this point, the number of samples n is decoupled from the bounded approximation (F m ) that we choose for grammar estimation. To couple between these two, we need to define m as a function of the number of samples, m(n). As mentioned earlier, there is a clear trade-off between choosing a fast rate for m(n) (such as m(n) = n k for some k > 1) and a slower rate (such as m(n) = log n). The faster the rate is, the tighter the family of approximations that we use for n samples. If the rate is too fast, however, then K m grows quickly as well. In that case, because our sample complexity bounds are increasing functions of such K m , the bounds will degrade.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling Bounded Approximations with Number of Samples", "sec_num": "4.2" }, { "text": "To balance the trade-off, we choose m(n) = n. As we see later, this gives sample complexity bounds which are asymptotically interesting for both the supervised and unsupervised case.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupling Bounded Approximations with Number of Samples", "sec_num": "4.2" }, { "text": "It would be compelling to determine whether the empirical risk minimizer over F n is an asymptotic empirical risk minimizer. This would mean that the risk of the empirical risk minimizer over F n converges to the risk of the maximum likelihood estimate. As a conclusion to this section about proper approximations, we motivate the three requirements that we posed on proper approximations by showing that this is indeed true. We now unify n, the number of samples, and m, the index of the approximation of the concept space F. Let f * n be the minimizer of the empirical risk over F, ( f * n = argmin f \u2208F Ep n f ) and let g n be the minimizer of the empirical risk over F n (g -Shwartz et al. 2009) . Then, we have the following", "cite_spans": [ { "start": 675, "end": 677, "text": "(g", "ref_id": null }, { "start": 678, "end": 699, "text": "-Shwartz et al. 2009)", "ref_id": "BIBREF50" } ], "ref_spans": [], "eq_spans": [], "section": "Asymptotic Empirical Risk Minimization", "sec_num": "4.3" }, { "text": "n = argmin f \u2208F n Ep n f ). Let D = {z 1 , ..., z n } be a sample from p(z). The operator (g n =) argmin f \u2208F n Ep n [ f ] is an asymptotic empirical risk minimizer if E Ep n g n \u2212 Ep n [ f * n ] \u2192 0 as n \u2192 \u221e (Shalev", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asymptotic Empirical Risk Minimization", "sec_num": "4.3" }, { "text": "Lemma 1 Denote by Z ,n the set f \u2208F {z | C n ( f )(z) \u2212 f (z) \u2265 }. 
Denote by A ,n the event \"one of z i \u2208 D is in Z ,n .\" If F n properly approximates F, then: E Ep n g n \u2212 Ep n f * n (15) \u2264 E Ep n C n ( f * n ) | A ,n p(A ,n ) + E Ep n f * n | A ,n p(A ,n ) + tail (n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asymptotic Empirical Risk Minimization", "sec_num": "4.3" }, { "text": "where the expectations are taken with respect to the data set D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asymptotic Empirical Risk Minimization", "sec_num": "4.3" }, { "text": "See Appendix A for the proof of Lemma 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Asymptotic Empirical Risk Minimization", "sec_num": "4.3" }, { "text": "Let D = {z 1 , ..., z n } be a sample of derivations from G. Then g n = argmin f \u2208F n Ep n f is an asymptotic empirical risk minimizer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 4", "sec_num": null }, { "text": "Let f 0 \u2208 F be the concept that puts uniform weights over \u03b8, namely, \u03b8 k = 1 2 , 1 2 for all k. Note that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "|E Ep n f * n | A ,n |p(A ,n ) \u2264 |E Ep n f 0 | A ,n |p(A ,n ) = log 2 n n l=1 k,i E[\u03c8 k,i (z l ) | A ,n ]p(A ,n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let A j, ,n for j \u2208 {1, . . . , n} be the event \"z j \u2208 Z ,n \". Then A ,n = j A j, ,n . We have that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E[\u03c8 k,i (z l ) | A ,n ]p(A ,n ) \u2264 j z l p(z l , A j, ,n )|z l | \u2264 j =l z l p(z l )p(A j, ,n )|z l | + z l p(z l , A l, ,n )|z l | (16) \u2264 \uf8eb \uf8ed j =l p(A j, ,n ) \uf8f6 \uf8f8 B + E[\u03c8 k,i (z) | z \u2208 Z ,n ]p(z \u2208 Z ,n ) \u2264 (n \u2212 1)Bp(z \u2208 Z ,n ) + E[\u03c8 k,i (z) | z \u2208 Z ,n ]p(z \u2208 Z ,n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "where Equation 16comes from z l being independent. Also, B is the constant from Section 3.1. Therefore, we have:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "1 n n l=1 k,i E[\u03c8 k,i (z l ) | A ,n ]p(A ,n ) \u2264 k,i E[\u03c8 k,i (z) | z \u2208 Z ,n ]p(z \u2208 Z ,n ) + (n \u2212 1)Bp(z \u2208 Z ,n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "From the construction of our proper approximations (Proposition 3), we know that only derivations of length log 2 n or greater can be in Z ,n . Therefore", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E[\u03c8 k,i | Z ,n ]p(Z ,n ) \u2264 z:|z|>log 2 n p(z)\u03c8 k,i (z) \u2264 \u221e l>log 2 n L\u039b(l)r l l \u2264 \u03baq log 2 n = o(1) where \u03ba > 0 is a constant. Similarly, we have p(z \u2208 Z ,n ) = o(n \u22121 ). This means that |E[Ep n [\u2212 log \u2212f * n ] | A ,n ]|p(A ,n ) \u2212\u2192 n\u2192\u221e 0. 
In addition, it can be shown that |E[Ep n [C n ( f * n ) | A ,n ]|p(A ,n ) \u2212\u2192 n\u2192\u221e", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "0 using the same proof technique we used here, while relying on the fact that C n ( f * n ) \u2208 F n , and therefore C n ( f * n )(z) \u2264 sN|z| log n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Equipped with the framework of proper approximations as described previously, we now give our main sample complexity results for probabilistic grammars. These results hinge on the convergence of sup f \u2208F n |Ep n f \u2212 E p f |. Indeed, proper approximations replace the use of F in these convergence results. The rate of this convergence can be fast, if the covering numbers for F n do not grow too fast.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds", "sec_num": "5." }, { "text": "We next give a brief overview of covering numbers. A cover provides a way to reduce a class of functions to a much smaller (finite, in fact) representative class such that each function in the original class is represented using a function in the smaller class. Let G be a class of functions. Let d(f, g) be a distance measure between two functions f, g from G. An -cover is a subset of G, denoted by G , such that for every f \u2208 G there exists an f \u2208 G such that d( f, f ) < . The covering number N( , G, d) is the size of the smallest -cover of G for the distance measure d.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Covering Numbers and Bounds on Covering Numbers", "sec_num": "5.1" }, { "text": "We are interested in a specific distance measure which is dependent on the empirical distributionp n that describes the data z 1 , ..., z n . Let f, g \u2208 G. We will use", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Covering Numbers and Bounds on Covering Numbers", "sec_num": "5.1" }, { "text": "dp n ( f, g) = Ep n | f \u2212 g| = z\u2208D(G) | f (z) \u2212 g(z)|p n (z) = 1 n n i=1 | f (z i ) \u2212 g(z i )|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Covering Numbers and Bounds on Covering Numbers", "sec_num": "5.1" }, { "text": "Instead of using N( , G, dp n ) directly, we bound this quantity with N( , G) = supp n N( , G, dp n ), where we consider all possible samples (yieldingp n ). The following is the key result regarding the connection between covering numbers and the double-sided convergence of the empirical process sup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Covering Numbers and Bounds on Covering Numbers", "sec_num": "5.1" }, { "text": "f \u2208F n |Ep n f \u2212 E p f | as n \u2192 \u221e.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Covering Numbers and Bounds on Covering Numbers", "sec_num": "5.1" }, { "text": "This result is a general-purpose result that has been used frequently to prove the convergence of empirical processes of the type we discuss in this article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Covering Numbers and Bounds on Covering Numbers", "sec_num": "5.1" }, { "text": "Let F n be a permissible class 9 of functions such that for every", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "f \u2208 F n we have E[| f | \u00d7 I {| f | \u2264 K n }] \u2264 bound (n). 
Let F truncated,n = {f \u00d7 I {f \u2264 K n } | f \u2208 F m },", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "namely, the set of functions from F n after being truncated by K n . Then for > 0 we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "p sup f \u2208F n |Ep n f \u2212 E p f | > 2 \u2264 8N( /8, F truncated,n ) exp \u2212 1 128 n 2 /K 2 n + bound (n)/ provided n \u2265 K 2 n /4 2 and bound (n) < .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "See Pollard (1984; Chapter 2, pages 30-31) for the proof of Lemma 2. See also Appendix A.", "cite_spans": [ { "start": 4, "end": 18, "text": "Pollard (1984;", "ref_id": "BIBREF47" }, { "start": 19, "end": 19, "text": "", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "Covering numbers are rather complex combinatorial quantities which are hard to compute directly. Fortunately, they can be bounded using the pseudo-dimension (Anthony and Bartlett 1999) , a generalization of the Vapnik-Chervonenkis (VC) dimension for real functions. In the case of our \"binomialized\" probabilistic grammars, the pseudo-dimension of F n is bounded by N, because we have F n \u2286 F, and the functions in F are linear with N parameters. Hence, F truncated,n also has pseudodimension that is at most N. We then have the following. Pollard [1984] and Haussler [1992] .) Let F n be the proper approximations for probabilistic grammars, for any 0 < < K n we have:", "cite_spans": [ { "start": 157, "end": 184, "text": "(Anthony and Bartlett 1999)", "ref_id": "BIBREF3" }, { "start": 540, "end": 554, "text": "Pollard [1984]", "ref_id": "BIBREF47" }, { "start": 559, "end": 574, "text": "Haussler [1992]", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "N( , F truncated,n ) < 2 2eK n log 2eK n N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 3 (From", "sec_num": null }, { "text": "We turn to give an analysis for the supervised case. This analysis is mostly described as a preparation for the unsupervised case. In general, the families of probabilistic grammars we give a treatment to are parametric families, and the maximum likelihood estimator for these families is a consistent estimator in the supervised case. In the unsupervised case, however, lack of identifiability prevents us from getting these traditional consistency results. Also, the traditional results about the consistency of MLE are based on the assumption that the sample is generated from the parametric family we are trying to estimate. This is not the case in our analysis, where the distribution that generates the data does not have to be a probabilistic grammar. Lemmas 2 and 3 can be combined to get the following sample complexity result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "5.2" }, { "text": "Let G be a grammar. Let p \u2208 P(\u03b1, L, r, q, B, G) (Section 3.1). Let F n be a proper approximation for the corresponding family of probabilistic grammars. Let z 1 , . . . , z n be a sample of derivations. 
Then there exists a constant \u03b2 (L, q, p, N) and constant M such that for any 0 < \u03b4 < 1 and 0 < < K n and any n > M and if", "cite_spans": [ { "start": 234, "end": 246, "text": "(L, q, p, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "n \u2265 max 128K 2 n 2 2N log(16eK n / ) + log 32 \u03b4 , log 4/\u03b4 + log 1/ \u03b2(L, q, p, N)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "then we have , p, N) is the constant from Proposition 2. The main idea in the proof is to solve for n in the following two inequalities (based on Equation [17] [see the following]) while relying on Lemma 3:", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 20, "text": ", p, N)", "ref_id": null } ], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "P sup f \u2208F n |Ep n f \u2212 E p f | \u2264 2 \u2265 1 \u2212 \u03b4 where K n = sN log 3 n. Proof Sketch \u03b2(L, q", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "8N( /8, F truncated,n ) exp \u2212 1 128 n 2 /K 2 n \u2264 \u03b4/2 bound (n)/ \u2264 \u03b4/2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "Theorem 2 gives little intuition about the number of samples required for accurate estimation of a grammar because it considers the \"additive\" setting: The empirical risk is within from the expected risk. More specifically, it is not clear how we should pick for the log-loss, because the log-loss can obtain arbitrary values.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "We turn now to converting the additive bound in Theorem 2 to a multiplicative bound. Multiplicative bounds can be more informative than additive bounds when the range of the values that the log-loss can obtain is not known a priori. It is important to note that the two views are equivalent (i.e., it is possible to convert a multiplicative bound to an additive bound and vice versa). Let \u03c1 \u2208 (0, 1) and choose = \u03c1K n . Then, substituting this in Theorem 2, we get that if", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n \u2265 max 128 \u03c1 2 2N log 16e \u03c1 + log 32 \u03b4 , log 4/\u03b4 + log 1/\u03c1 \u03b2(L, q, p, N) then, with probability 1 \u2212 \u03b4, sup f \u2208F n 1 \u2212 Ep n f E p f \u2264 \u03c1 \u00d7 2sN log 3 (n) H(p)", "eq_num": "(17)" } ], "section": "Theorem 2", "sec_num": null }, { "text": "where H(p) is the Shannon entropy of p. This stems from the fact that E p f \u2265 H(p) for any f . This means that if we are interested in computing a sample complexity bound such that the ratio between the empirical risk and the expected risk (for log-loss) is close to 1 with high probability, we need to pick up \u03c1 such that the righthand side of Equation 17is smaller than the desired accuracy level (between 0 and 1). 
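As a concrete illustration, the Python sketch below evaluates the sample-size requirement that leads to Equation (17) for given N, delta, and rho. The distribution-dependent constant beta = beta(L, q, p, N) from Proposition 2 is not derived here; it is treated as an input, and the numbers in the example call are hypothetical.

import math

def supervised_sample_bound(N, delta, rho, beta):
    # Number of parsed examples required by the multiplicative form of Theorem 2,
    # obtained by setting epsilon = rho * K_n (the K_n factors cancel).
    # beta = beta(L, q, p, N) is the constant from Proposition 2, taken as given.
    term1 = (128.0 / rho ** 2) * (2 * N * math.log(16 * math.e / rho)
                                  + math.log(32.0 / delta))
    term2 = (math.log(4.0 / delta) + math.log(1.0 / rho)) / beta
    return max(term1, term2)

# Hypothetical setting: a binarized grammar with N = 100 binomials.
print(supervised_sample_bound(N=100, delta=0.05, rho=0.1, beta=0.5))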
Note that Equation 17is an oracle inequality-it requires knowing the entropy of p or some upper bound on it.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 2", "sec_num": null }, { "text": "In the unsupervised setting, we have n yields of derivations from the grammar, x 1 , ..., x n , and our goal again is to identify grammar parameters \u03b8 from these yields. Our concept classes are now the sets of log marginalized distributions from F n . For each f \u03b8 \u2208 F n , we define f \u03b8 as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Case", "sec_num": "5.3" }, { "text": "f \u03b8 (x) = \u2212 log z\u2208D x (G) exp(\u2212f \u03b8 (z)) = \u2212 log z\u2208D x (G) exp \uf8eb \uf8ed K k=1 N k i=1 \u03c8 i,k (z)\u03b8 i,k \uf8f6 \uf8f8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Case", "sec_num": "5.3" }, { "text": "We denote the set of { f \u03b8 } by F n . Analogously, we define F . Note that we also need to define the operator C n ( f ) as a first step towards defining F n as proper approximations (for F ) in the unsupervised setting. Let f \u2208 F . Let f be the concept in F such that f (x) = z f (x, z). Then we define C n ( f )(x) = z C n ( f )(x, z).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Case", "sec_num": "5.3" }, { "text": "It does not immediately follow that F n is a proper approximation for F . It is not hard to show that the boundedness property is satisfied with the same K n and the same form of bound (n) as in Proposition 2 (we would have bound (m) = m \u2212\u03b2 log m for some \u03b2 (L, q, p, N) = \u03b2 > 0). This relies on the property of bounded derivation length of p (see Appendix A, Proposition 7). The following result shows that we have tightness as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Case", "sec_num": "5.3" }, { "text": "For a i , b i \u2265 0, if \u2212 log i a i + log i b i \u2265 then there exists an i such that \u2212 log a i + log b i \u2265 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 2", "sec_num": null }, { "text": "There exists an M such that for any n > M we have (n) for tail (n) = N log 2 n n s \u2212 1 and the operator C n ( f ) as defined earlier.", "cite_spans": [ { "start": 50, "end": 53, "text": "(n)", "ref_id": null }, { "start": 63, "end": 66, "text": "(n)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 5", "sec_num": null }, { "text": "p \uf8eb \uf8ed f \u2208F {x | C n (f )(x) \u2212 f (x) \u2265 tail (n)} \uf8f6 \uf8f8 \u2264 tail", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 5", "sec_num": null }, { "text": "From Utility Lemma 2 we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "p \uf8eb \uf8ed f \u2208F {x | C n ( f )(x) \u2212 f (x) \u2265 tail (n)} \uf8f6 \uf8f8 \u2264 p \uf8eb \uf8ed f \u2208F {x | \u2203zC n ( f )(z) \u2212 f (z) \u2265 tail (n)} \uf8f6 \uf8f8", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "Define X(n) to be all x such that there exists a z with yield(z) = x and |z| \u2265 log 2 n. 
From the proof of Proposition 3 and the requirements on p, we know that there exists an \u03b1 \u2265 1 such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "p f \u2208F {x | \u2203z s.t. C n ( f )(z) \u2212 f (z) \u2265 tail (n)} \u2264 x\u2208X(n) p(x) \u2264 x:|x|\u2265log 2 n/\u03b1 p(x) \u2264 \u221e k= log 2 n/\u03b1 L\u039b(k)r k \u2264 tail (n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "where the last inequality happens for some n larger than a fixed M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "Computing either the covering number or the pseudo-dimension of F n is a hard task, because the function in the classes includes the \"log-sum-exp.\" Dasgupta (1997) overcomes this problem for Bayesian networks with fixed structure by giving a bound on the covering number for (his respective) F which depends on the covering number of F.", "cite_spans": [ { "start": 148, "end": 163, "text": "Dasgupta (1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "Unfortunately, we cannot fully adopt this approach, because the derivations of a probabilistic grammar can be arbitrarily large. Instead, we present the following proposition, which is based on the \"Hidden Variable Rule\" from Dasgupta (1997) . This proposition shows that the covering number of F (or more accurately, its bounded approximations) can be bounded in terms of the covering number of the bounded approximations of F, and the constants which control the underlying distribution p mentioned in Section 3.", "cite_spans": [ { "start": 226, "end": 241, "text": "Dasgupta (1997)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Proof Sketch", "sec_num": null }, { "text": "For any two positive-valued sequences (a 1 , . . . , a n ) and (b 1 , . . . , b n ) we have that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 3", "sec_num": null }, { "text": "i | log a i /b i | \u2265 | log ( a i / b i ) |.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 3", "sec_num": null }, { "text": "Let m = log 4K n (1 \u2212 q) log 1 q . Then, N( , F truncated,n ) \u2264 N 2\u039b(m) , F truncated,n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 6 (Hidden Variable Rule for Probabilistic Grammars)", "sec_num": null }, { "text": "Let Z(m) = {z | |z| \u2264 m} be the subset of derivations of length shorter than m. Consider f, f 0 \u2208 F truncated,n . Let f and f 0 be the corresponding functions in F truncated,n . 
Then, for any distribution p, ) is a probability distribution that uniformly divides the probability mass p(x) across all derivations for the specific x, that is:", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 209, "text": ")", "ref_id": null } ], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "d p ( f , f 0 ) = x | f (x) \u2212 f 0 (x)| p(x) \u2264 x z | f (x, z) \u2212 f 0 (x, z)| p(x) = x z\u2208Z(m) | f (x, z) \u2212 f 0 (x, z)| p(x) + x z/ \u2208Z(m) | f (x, z) \u2212 f 0 (x, z)| p(x) \u2264 x z\u2208Z(m) | f (x, z) \u2212 f 0 (x, z)| p(x) + x z/ \u2208Z(m) 2K n p(x) (18) \u2264 x z\u2208Z(m) | f (x, z) \u2212 f 0 (x, z)| p(x) + 2K n x : |x|\u2265m |D x (G)|p(x) \u2264 x z\u2208Z(m) | f (x, z) \u2212 f 0 (x, z)| p(x) + 2K n \u221e k=m \u039b 2 (k)r k \u2264 d p ( f, f 0 )|Z(m)| + 2K n q m 1 \u2212 q where p (x, z", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "p (x, z) = p(x) |D x (G)|", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "The inequality in Equation (18) stems from Utility Lemma 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Set m to be the quantity that appears in the proposition to get the necessary result ( f and f are arbitrary functions in F truncated,n and F truncated,n respectively. Then consider f 0 and f 0 to be functions from the respective covers.).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "For the unsupervised case, then, we get the following sample complexity result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let G be a grammar. Let F n be a proper approximation for the corresponding family of probabilistic grammars. Let p(x, z) be a distribution over derivations which satisfies the requirements in Section 3.1. Let x 1 , . . . , x n be a sample of strings from p(x). Then there exists a constant \u03b2 (L, q, p, N) and constant M such that for any 0 < \u03b4 < 1, 0 < < K n , any n > M, and if", "cite_spans": [ { "start": 293, "end": 305, "text": "(L, q, p, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n \u2265 max 128K 2 n 2 2N log 32eK n \u039b(m) + log 32 \u03b4 , log 4/\u03b4 + log 1/ \u03b2 (L, q, p, N)", "eq_num": "(19)" } ], "section": "Theorem 3", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "m = log 4K n (1 \u2212 q) log 1 q , we have that p sup f \u2208F n |Ep n f \u2212 E p f | \u2264 2 \u2265 1 \u2212 \u03b4 where K n = sN log 3 n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "Theorem 3 states that the number of samples we require in order to accurately estimate a probabilistic grammar from unparsed strings depends on the level of ambiguity in the grammar, represented as \u039b(m). We note that this dependence is polynomial, and we consider this a positive result for unsupervised learning of grammars. 
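To illustrate how the ambiguity function enters the bound, the Python sketch below evaluates one reading of the requirement in Equation (19); it is a sketch, not a definitive restatement of the theorem. The constants K_n, q, and beta'(L, q, p, N), as well as log Lambda(.), are supplied as inputs rather than derived from a grammar, and the example numbers are hypothetical.

import math

def unsupervised_sample_bound(N, K_n, eps, delta, q, beta, log_ambiguity):
    # Evaluates the right-hand side of Equation (19), Theorem 3, under one reading.
    # log_ambiguity(m) should return log Lambda(m); beta is the constant beta'(L, q, p, N).
    m = math.ceil(math.log(4.0 * K_n / (eps * (1.0 - q))) / math.log(1.0 / q))
    log_cover = (2 * N * (math.log(32.0 * math.e * K_n / eps) + log_ambiguity(m))
                 + math.log(32.0 / delta))
    term1 = (128.0 * K_n ** 2 / eps ** 2) * log_cover
    term2 = (math.log(4.0 / delta) + math.log(1.0 / eps)) / beta
    return m, max(term1, term2)

# Hypothetical numbers: N = 100 binomials, an exponentially ambiguous grammar with
# log Lambda(m) = m * log(2), and constants chosen only to exercise the formula.
m, n_required = unsupervised_sample_bound(N=100, K_n=100.0, eps=10.0, delta=0.05,
                                          q=0.9, beta=0.5,
                                          log_ambiguity=lambda m: m * math.log(2.0))
print(m, n_required)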
More specifically, if \u039b is an exponential function (such as the case with PCFGs), when compared to the supervised learning, there is an extra multiplicative factor in the sample complexity in the unsupervised setting that behaves like O(log log K n ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "We note that the following Equation (20) can again be reduced to a multiplicative case, similarly to the way we described it for the supervised case. Setting = \u03c1K n (\u03c1 \u2208 (0, 1)), we get the following requirement on n:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "n \u2265 max 128 \u03c1 2 2N log 32e \u00d7 t(\u03c1) \u03c1 + log 32 \u03b4 , log 4/\u03b4 + log 1/ \u03b2 (L, q, p, N)", "eq_num": "(20)" } ], "section": "Theorem 3", "sec_num": null }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "t(\u03c1) = log 4 \u03c1(1 \u2212 q) log 1 q .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 3", "sec_num": null }, { "text": "We turn now to describing algorithms and their properties for minimizing empirical risk using the framework described in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Algorithms for Empirical Risk Minimization", "sec_num": "6." }, { "text": "ERM with proper approximations leads to simple algorithms for estimating the probabilities of a probabilistic grammar in the supervised setting. Given an > 0 and a \u03b4 > 0, we draw n examples according to Theorem 2. We then set \u03b3 = n \u2212s . To minimize the log-loss with respect to these n examples, we use the proper approximation F n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "Note that the value of the empirical log-loss for a probabilistic grammar parametrized by \u03b8 is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "Ep n \u2212 log h(x, z | \u03b8) = \u2212 x,zp n (x, z) log h(x, z | \u03b8) = \u2212 x,zp n (x, z) K k=1 N k i=1 \u03c8 k,i (x, z) log(\u03b8 k,i ) = \u2212 K k=1 N k i=1 log(\u03b8 k,i )Ep n \u03c8 k,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "Because we make the assumption that deg(G) \u2264 2 (Section 3.2), we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Ep n \u2212 log h(x, z | \u03b8) = \u2212 K k=1 log(\u03b8 k,1 )Ep n \u03c8 k,1 + log(1 \u2212 \u03b8 k,1 )Ep n \u03c8 k,2", "eq_num": "(21)" } ], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "To minimize the log-loss with respect to F n , we need to minimize Equation 21under the constraint that \u03b3 \u2264 \u03b8 k,i \u2264 1 \u2212 \u03b3 and \u03b8 k 1 + \u03b8 k,2 = 1. 
It can be shown that the solution for this optimization problem is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 k,i = min \uf8f1 \uf8f2 \uf8f3 1 \u2212 \u03b3, max \uf8f1 \uf8f2 \uf8f3 \u03b3, \uf8eb \uf8ed n j=1\u03c8 j,k,i \uf8f6 \uf8f8 \uf8eb \uf8ed n j=1 2 i =1\u03c8 j,k,i \uf8f6 \uf8f8 \uf8fc \uf8fd \uf8fe \uf8fc \uf8fd \uf8fe", "eq_num": "(22)" } ], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "where\u03c8 j,k,i is the number of times that \u03c8 k,i fires in Example j. (We include a full derivation of this result in Appendix B.) The interpretation of Equation (22) is simple:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "We count the number of times a rule appears in the samples and then normalize this value by the total number of times rules associated with the same multinomial appear in the samples. This frequency count is the maximum likelihood solution with respect to the full hypothesis class H (Corazza and Satta 2006 ; see Appendix B). Because we constrain ourselves to obtain a value away from 0 or 1 by a margin of \u03b3, we need to truncate this solution, as done in Equation 22. This truncation to a margin \u03b3 can be thought of as a smoothing factor that enables us to compute sample complexity bounds. We explore this connection to smoothing with a Dirichlet prior in a Maximum a posteriori (MAP) Bayesian setting in Section 7.2.", "cite_spans": [ { "start": 284, "end": 307, "text": "(Corazza and Satta 2006", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Supervised Case", "sec_num": "6.1" }, { "text": "Similarly to the supervised case, minimizing the empirical log-loss in the unsupervised setting requires minimizing (with respect to \u03b8) the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Case", "sec_num": "6.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Ep n \u2212 log h(x | \u03b8) = \u2212 xp n (x) log z h(x, z | \u03b8)", "eq_num": "(23)" } ], "section": "Unsupervised Case", "sec_num": "6.2" }, { "text": "with the constraint that \u03b3 \u2264 \u03b8 k,i \u2264 1 \u2212 \u03b3 (i.e., \u03b8 \u2208 \u0398(\u03b3)) where \u03b3 = n \u2212s . This is done after drawing n examples according to Theorem 3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Unsupervised Case", "sec_num": "6.2" }, { "text": "Approximations. It turns out that minimizing Equation (23) under the specified constraints is actually an NP-hard problem when G is a PCFG. This result follows using a similar proof to the one in Cohen and Smith (2010c) for the hardness of Viterbi training and maximizing log-likelihood for PCFGs. We turn to giving the full derivation of this hardness result for PCFGs and the modification required for adapting the results from Cohen and Smith to the case of having an arbitrary \u03b3 margin constraint. In order to show an NP-hardness result, we need to \"convert\" the problem of the maximization of Equation 23to a decision problem. 
We do so by stating the following decision problem.", "cite_spans": [ { "start": 196, "end": 219, "text": "Cohen and Smith (2010c)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Hardness of ERM with Proper", "sec_num": "6.2.1" }, { "text": "Input: A binarized context-free grammar G, a set of sentences x 1 , . . . , x n , a value \u03b3 \u2208 [0, 1 2 ), and a value \u03b1 \u2208 [0, 1] . Output: 1 if there exists \u03b8 \u2208 \u0398(\u03b3) (and hence, h", "cite_spans": [ { "start": 121, "end": 124, "text": "[0,", "ref_id": null }, { "start": 125, "end": 127, "text": "1]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2208 H(G)) such that \u2212 xp n (x) log z h(x, z | \u03b8) \u2264 \u2212 log(\u03b1)", "eq_num": "(24)" } ], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "and 0 otherwise. We will show the hardness result both when \u03b3 is not restricted at all as well as when we allow \u03b3 > 0. The proof of the hardness result is achieved by reducing the problem 3-SAT (Sipser 2006) , known to be NP-complete, to Problem 1. The problem 3-SAT is defined as follows:", "cite_spans": [ { "start": 194, "end": 207, "text": "(Sipser 2006)", "ref_id": "BIBREF51" } ], "ref_spans": [], "eq_spans": [], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "Problem 2 (3-SAT) Input: A formula \u03c6 = m i=1 (a i \u2228 b i \u2228 c i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "in conjunctive normal form, such that each clause has three literals. Output: 1 if there is a satisfying assignment for \u03c6, and 0 otherwise.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "Given an instance of the 3-SAT problem, the reduction will, in polynomial time, create a grammar and a single string such that solving Problem 1 for this grammar and string will yield a solution for the instance of the 3-SAT problem.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "Let \u03c6 = m i=1 (a i \u2228 b i \u2228 c i ) be an instance of the 3-SAT problem, where a i , b i , and c i are literals over the set of variables {Y 1 , . . . , Y N } (a literal refers to a variable Y j or its negation,\u0232 j ). Let C j be the jth clause in \u03c6, such that C j = a j \u2228 b j \u2228 c j . We define the following CFG G \u03c6 and string to parse s \u03c6 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Problem 1 (Unsupervised Minimization of the Log-Loss with Margin)", "sec_num": null }, { "text": "The terminals of G \u03c6 are the binary digits \u03a3 = {0, 1}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "We create N nonterminals V Y r , r \u2208 {1, . . . 
, N} and rules V Y r \u2192 0 and V Y r \u2192 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "We create N nonterminals V\u0232 r , r \u2208 {1, . . . , N} and rules V\u0232 r \u2192 0 and V\u0232 r \u2192 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "3.", "sec_num": null }, { "text": "We create", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "U Y r ,1 \u2192 V Y r V\u0232 r and U Y r ,0 \u2192 V\u0232 r V Y r . 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "We create the rule S 1 \u2192 A 1 . For each j \u2208 {2, . . . , m}, we create a rule S j \u2192 S j\u22121 A j where S j is a new nonterminal indexed by \u03c6 j j i=1 C i and A j is also a new nonterminal indexed by j \u2208 {1, . . . , m}.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4.", "sec_num": null }, { "text": "Let C j = a j \u2228 b j \u2228 c j be clause j in \u03c6. Let Y(a j ) be the variable that a j mentions. Let (y 1 , y 2 , y 3 ) be a satisfying assignment for C j where y k \u2208 {0, 1} and is the value of Y(a j ), Y(b j ), and Y(c j ), respectively, for k \u2208 {1, 2, 3}. For each such clause-satisfying assignment, we add the rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.", "sec_num": null }, { "text": "A j \u2192 U Y(a j ),y 1 U Y(b j ),y 2 U Y(c j ),y 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.", "sec_num": null }, { "text": "For each A j , we would have at most seven rules of this form, because one rule will be logically inconsistent with a j \u2228 b j \u2228 c j .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6.", "sec_num": null }, { "text": "The grammar's start symbol is S n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "7.", "sec_num": null }, { "text": "The string to parse is s \u03c6 = (10) 3m , that is, 3m consecutive occurrences of the string 10.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "A parse of the string s \u03c6 using G \u03c6 will be used to get an assignment by setting Y r = 0 if the rule V Y r \u2192 0 or V\u0232 r \u2192 1 is used in the derivation of the parse tree, and 1 otherwise. Notice that at this point we do not exclude \"contradictions\" that come from the parse tree, such as V Y 3 \u2192 0 used in the tree together with V Y 3 \u2192 1 or V\u0232 3 \u2192 0. To maintain the restriction on the degree of grammars, we convert G \u03c6 to the binary normal form described in Section 3.2. The following lemma gives a condition under which the assignment is consistent (so that contradictions do not occur in the parse tree).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8.", "sec_num": null }, { "text": "Let \u03c6 be an instance of the 3-SAT problem, and let G \u03c6 be a probabilistic CFG based on the given grammar with weights \u03b8 \u03c6 . If the (multiplicative) weight of the Viterbi parse (i.e., the highest scoring parse according to the PCFG) of s \u03c6 is 1, then the assignment extracted from the parse tree is consistent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 4", "sec_num": null }, { "text": "Because the probability of the Viterbi parse is 1, all rules of the form {V Y r , V\u0232 r } \u2192 {0, 1} which appear in the parse tree have probability 1 as well. 
There are two possible types of inconsistencies. We show that neither exists in the Viterbi parse:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "For any r, an appearance of both rules of the form V Y r \u2192 0 and V Y r \u2192 1 cannot occur because all rules that appear in the Viterbi parse tree have probability 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "1.", "sec_num": null }, { "text": "For any r, an appearance of rules of the form V Y r \u2192 1 and V\u0232 r \u2192 1 cannot occur, because whenever we have an appearance of the rule V Y r \u2192 0, we have an adjacent appearance of the rule V\u0232 r \u2192 1 (because we parse substrings of the form 10), and then we again use the fact that all rules in the parse tree have probability 1. The case of V Y r \u2192 0 and V\u0232 r \u2192 0 is handled analogously.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Thus, both possible inconsistencies are ruled out, resulting in a consistent assignment. Figure 3 gives an example of an application of the reduction. ", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 97, "text": "Figure 3", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "\u03c6 = (Y 1 \u2228 Y 2 \u2228\u0232 4 ) \u2227 (\u0232 1 \u2228\u0232 2 \u2228 Y 3 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": ". In \u03b8 \u03c6 , all rules appearing in the parse tree have probability 1. The extracted assignment would be", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Y 1 = 0, Y 2 = 1, Y 3 = 1, Y 4 = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Note that there is no usage of two different rules for a single nonterminal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "2.", "sec_num": null }, { "text": "Define \u03c6 and G \u03c6 as before. There exists \u03b8 \u03c6 such that the Viterbi parse of s \u03c6 is 1 if and only if \u03c6 is satisfiable. Moreover, the satisfying assignment is the one extracted from the parse tree with weight 1 of s \u03c6 under \u03b8 \u03c6 . (=\u21d2) Assume that there is a satisfying assignment. Each clause C j = a j \u2228 b j \u2228 c j is satisfied using a tuple (y 1 , y 2 , y 3 ), which assigns values for Y(a j ), Y(b j ), and Y(c j ). This assignment corresponds to the following rule:", "cite_spans": [ { "start": 228, "end": 232, "text": "(=\u21d2)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Lemma 5", "sec_num": null }, { "text": "A j \u2192 U Y(a j ),y 1 U Y(b j ),y 2 U Y(c j ),y 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Set its probability to 1, and set all other rules of A j to 0. In addition, for each r, if Y r = y, set the probabilities of the rules V Y r \u2192 y and V\u0232 r \u2192 1 \u2212 y to 1 and V\u0232 r \u2192 y and V Y r \u2192 1 \u2212 y to 0. The rest of the weights for S j \u2192 S j\u22121 A j are set to 1. This assignment of rule probabilities results in a Viterbi parse of weight 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "(\u21d0=) Assume that the Viterbi parse has probability 1. From Lemma 4, we know that we can extract a consistent assignment from the Viterbi parse. 
In addition, for each clause C j we have a rule", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "A j \u2192 U Y(a j ),y 1 U Y(b j ),y 2 U Y(c j ),y 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "that is assigned probability 1, for some (y 1 , y 2 , y 3 ). One can verify that (y 1 , y 2 , y 3 ) are the values of the assignment for the corresponding variables in clause C j , and that they satisfy this clause. This means that each clause is satisfied by the assignment we extracted.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "We are now ready to prove the following result.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Problem 1 is NP-hard when either requiring \u03b3 > 0 or when fixing \u03b3 = 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 4", "sec_num": null }, { "text": "We first describe the reduction for the case of \u03b3 = 0. In Problem 1, set \u03b3 = 0, \u03b1 = 1, G = G \u03c6 , \u03b3 = 0, and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "x 1 = s \u03c6 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "If \u03c6 is satisfiable, then the left side of Equation (24) can get value 0, by setting the rule probabilities according to Lemma 5, hence we would return 1 as the result of running Problem 1. If \u03c6 is unsatisfiable, then we would still get value 0 only if L(G) = {s \u03c6 }. If G \u03c6 generates a single derivation for (10) 3m , then we actually do have a satisfying assignment from Lemma 4. Otherwise (more than a single derivation), the optimal \u03b8 would have to give fractional probabilities to rules of the form V Y r \u2192 {0, 1} (or V\u0232 r \u2192 {0, 1}). In that case, it is no longer true that (10) 3m is the only generated sentence, and this is a contradiction to getting value 0 for Problem 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "We next show that Problem 1 is NP-hard even if we require \u03b3 > 0. Let \u03b3 < 1 20m . Set \u03b1 = \u03b3, and the rest of the inputs to Problem 1 the same as before. Assume that \u03c6 is satisfiable. Let \u03b8 be the rule probabilities from Equation (5) after being shifted with a margin of \u03b3. Then, because there is a derivation that uses only rules that have probability 1 \u2212 \u03b3, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "h(x 1 | T(\u03b8, \u03b3), G \u03c6 ) = z p(x 1 , z | T(\u03b8, \u03b3), G \u03c6 ) \u2265 (1 \u2212 \u03b3) 10m > \u03b1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "because the size of the parse tree for (10) 3m is at most 10m (using the binarized G \u03c6 ) and assuming \u03b1 = \u03b3 < (1 \u2212 \u03b3) 10m . This inequality indeed holds whenever \u03b3 < 1 20m . Therefore, we have \u2212 log h(x 1 | \u03b8) > \u2212 log \u03b1. Problem 1 would return 0 in this case. Now, assume that \u03c6 is not satisfiable. That means that any parse tree for the string (10) 3m would have to contain two different rules headed by the same non-terminal. 
This means that T(\u03b8, \u03b3) ) \u2264 \u2212 log \u03b1, and Problem 1 would return 1.", "cite_spans": [], "ref_spans": [ { "start": 444, "end": 451, "text": "T(\u03b8, \u03b3)", "ref_id": null } ], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "h(x 1 | T(\u03b8, \u03b3), G \u03c6 ) = z p(x 1 , z | T(\u03b8, \u03b3), G \u03c6 ) \u2264 \u03b3 and therefore \u2212 log h(x 1 |", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Instead of solving the optimization problem implied by Equation (21), we propose a rather simple modification to the expectation-maximization (EM) algorithm (Dempster, Laird, and Rubin 1977) to approximate the optimal solution-this algorithm finds a local maximum for the maximum likelihood problem using proper approximations. The modified algorithm is given in Algorithm 1.", "cite_spans": [ { "start": 157, "end": 190, "text": "(Dempster, Laird, and Rubin 1977)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "The modification from the usual expectation-maximization algorithm is done in the M-step: Instead of using the expected value of the sufficient statistics by counting and normalizing, we truncate the values by \u03b3. It can be shown that if \u03b8 (0) \u2208 \u0398(\u03b3), then the likelihood is guaranteed to increase (and hence, the log-loss is guaranteed to decrease) after each iteration of the algorithm. Input: grammar G in binary normal form, initial parameters \u03b8 (0) ", "cite_spans": [ { "start": 449, "end": 452, "text": "(0)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": ", > 0, \u03b4 > 0, s > 1 Output: learned parameters \u03b8 draw x = x 1 , ..., x n from p following Theorem 3; t \u2190 1 ; \u03b3 \u2190 n \u2212s ; repeat // E \u03b8 (t\u22121) \u03c8 k,i (z) | x j denotes the expected counts of event i in multinomial k under the distributionp n (x)p(z | x, \u03b8 (t\u22121) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "Compute for each training example j \u2208 {1, . . . , n}, for each event i \u2208 {1, 2} in each multinomial k \u2208 {1, . . . , K}:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "\u03c8 j,k,i \u2190 E \u03b8 (t\u22121) \u03c8 k,i (z) | x j ; Set \u03b8 (t) i,k = min{1 \u2212 \u03b3, max{\u03b3, n j=1\u03c8j,k,i / n j=1 2 i =1\u03c8j,k,i }}; t \u2190 t + 1; until convergence; return \u03b8 (t)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "The reason for this likelihood increase stems from the fact that the M-step solves the optimization problem of minimizing the log-loss (with respect to \u03b8 \u2208 \u0398(\u03b3)) when the posterior calculate at the E-step as the base distribution is used. 
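A minimal Python sketch of the loop structure of Algorithm 1 is given below; it is an illustration, not the authors' implementation. The E-step, which computes the expected counts E_{theta^(t-1)}[psi_{k,i}(z) | x_j], is abstracted as a user-supplied function, in practice an inside-outside style routine for the grammar at hand; the M-step is the same count, normalize, and truncate operation as in Equation (22), so every iterate stays in Theta(gamma). A fixed iteration cap stands in for the convergence test.

def truncated_em(xs, K, expected_counts, theta0, s=2.0, max_iters=50):
    # xs: training strings x_1, ..., x_n;  K: number of binomials in the grammar;
    # expected_counts(x, theta) -> {(k, i): E_theta[psi_{k,i}(z) | x]} is assumed
    # to be supplied (e.g., by an inside-outside routine for the grammar at hand).
    n = len(xs)
    gamma = n ** (-s)                       # margin of the proper approximation F_n
    theta = dict(theta0)                    # theta[k] = (theta_{k,1}, theta_{k,2})
    for _ in range(max_iters):
        # E-step: accumulate expected event counts under the current parameters.
        counts = {(k, i): 0.0 for k in range(K) for i in (0, 1)}
        for x in xs:
            for (k, i), c in expected_counts(x, theta).items():
                counts[(k, i)] += c
        # M-step: normalize and truncate into [gamma, 1 - gamma], cf. Equation (22).
        new_theta = {}
        for k in range(K):
            total = counts[(k, 0)] + counts[(k, 1)]
            p1 = counts[(k, 0)] / total if total > 0 else 0.5
            p1 = min(1.0 - gamma, max(gamma, p1))
            new_theta[k] = (p1, 1.0 - p1)
        theta = new_theta
    return theta

if __name__ == "__main__":
    # Dummy E-step with constant counts, only to exercise the loop; a real use
    # would plug in posterior expectations computed from the grammar.
    dummy = lambda x, theta: {(0, 0): 2.0, (0, 1): 1.0}
    print(truncated_em(["x1", "x2", "x3"], K=1, expected_counts=dummy,
                       theta0={0: (0.5, 0.5)}))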
This means that the M-step minimizes (in iteration t):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "E r \u2212 log h(x, z | \u03b8 (t) )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "where the expectation is taken with respect to the distribution r(x, z) =p n (x)p(z | x, \u03b8 (t\u22121) ). With this notion in mind, the likelihood increase after each iteration follows from principles similar to those described in Bishop (2006) for the EM algorithm.", "cite_spans": [ { "start": 225, "end": 238, "text": "Bishop (2006)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "An Expectation-Maximization Algorithm.", "sec_num": "6.2.2" }, { "text": "Our framework can be specialized to improve the two main criteria which have a tradeoff: the tightness of the proper approximation and the sample complexity. For example, we can improve the tightness of our proper approximations by taking a subsequence of F n . This will make the sample complexity bound degrade, however, because K n will grow faster. Table 2 shows the trade-offs between parameters in our model and the effectiveness of learning.", "cite_spans": [], "ref_spans": [ { "start": 353, "end": 360, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "7." }, { "text": "We note that the sample complexity bounds that we give in this article give insight about the asymptotic behavior of grammar estimation, but are not necessarily Table 2 Trade-off between quantities in our learning model and effectiveness of different criteria. K n is the constant that satisfies the boundedness property (Theorems 2 and 3) and s is a fixed constant larger than 1 (Section 4.1).", "cite_spans": [], "ref_spans": [ { "start": 161, "end": 168, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Discussion", "sec_num": "7." }, { "text": "as K n increases . . . as s increases . . . tightness of proper approximation improves improves sample complexity bound degrades degrades sufficiently tight to be used in practice. It still remains an open problem to obtain sample complexity bounds which are sufficiently tight in this respect. For a discussion about the connection of grammar learning in theory and practice, we refer the reader to Clark and Lappin (2010) . It is also important to note that MLE is not the only option for estimating finite state probabilistic grammars. There has been some recent advances in learning finite state models (HMMs and finite state transducers) by using spectral analysis of matrices which consist of quantities estimated from observations only (Hsu, Kakade, and Zhang 2009; Balle, Quattoni, and Carreras 2011) , based on the observable operator models of Jaeger (1999) . 
These algorithms are not prone to local minima, and converge to the correct model as the number of samples increases, but require some assumptions about the underlying model that generates the data.", "cite_spans": [ { "start": 400, "end": 423, "text": "Clark and Lappin (2010)", "ref_id": "BIBREF15" }, { "start": 743, "end": 772, "text": "(Hsu, Kakade, and Zhang 2009;", "ref_id": "BIBREF31" }, { "start": 773, "end": 808, "text": "Balle, Quattoni, and Carreras 2011)", "ref_id": "BIBREF5" }, { "start": 854, "end": 867, "text": "Jaeger (1999)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "criterion", "sec_num": null }, { "text": "In this article, we chose to introduce assumptions about distributions that generate natural language data. The choice of these assumptions was motivated by observations about properties shared among treebanks. The main consequence of making these assumptions is bounding the amount of noise in the distribution (i.e., the amount of variation in probabilities across labels given a fixed input).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "There are other ways to restrict the noise in a distribution. One condition for such noise restriction, which has received considerable recent attention in the statistical literature, is the Tsybakov noise condition (Tsybakov 2004; Koltchinskii 2006) . Showing that a distribution satisfies the Tsybakov noise condition enables the use of techniques (e.g., from Koltchinskii 2006) for deriving distribution-dependent sample complexity bounds that depend on the parameters of the noise. It is therefore of interest to see whether Tsybakov noise holds under the assumptions presented in Section 3.1. We show that this is not the case, and that Tsybakov noise is too permissive. In fact, we show that p can be a probabilistic grammar itself (and hence, satisfy the assumptions in Section 3.1), and still not satisfy the Tsybakov noise conditions.", "cite_spans": [ { "start": 216, "end": 231, "text": "(Tsybakov 2004;", "ref_id": "BIBREF55" }, { "start": 232, "end": 250, "text": "Koltchinskii 2006)", "ref_id": "BIBREF39" }, { "start": 362, "end": 380, "text": "Koltchinskii 2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "Tsybakov noise was originally introduced for classification problems (Tsybakov 2004) , and was later extended to more general settings, such as the one we are facing in this article (Koltchinskii 2006) . We now explain the definition of Tsybakov noise in our context.", "cite_spans": [ { "start": 69, "end": 84, "text": "(Tsybakov 2004)", "ref_id": "BIBREF55" }, { "start": 182, "end": 201, "text": "(Koltchinskii 2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "Let C > 0 and \u03ba \u2265 1. 
We say that a distribution p(x, z) satisfies the (C, \u03ba) Tsybakov noise condition if for any > 0 and h, g", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2208 H such that h, g \u2208 {h | E p (h , H) \u2264 }, we have dist(g, h) E p log g log h 2 \u2264 C 1/\u03ba", "eq_num": "(25)" } ], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "This interpretation of Tsybakov noise implies that the diameter of the set of functions from the concept class that has small excess risk should shrink to 0 at the rate in Equation 25. Distribution-dependent bounds from Koltchinskii (2006) are monotone with respect to the diameter of this set of functions, and therefore demonstrating that it goes to 0 enables sharper derivations of sample complexity bounds.", "cite_spans": [ { "start": 220, "end": 239, "text": "Koltchinskii (2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "We turn now to illustrating that the Tsybakov condition does not hold for probabilistic grammars in most cases. Let G be a probabilistic grammar. Define A = A G (\u03b8) as a matrix such that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "(A G (\u03b8)) (k,i),(k ,i ) E \u03c8 k,i \u00d7 \u03c8 k ,i E[\u03c8 k,i ]E[\u03c8 k ,i ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "Theorem 5 Let G be a grammar with K \u2265 2 and degree 2. Assume that p is G, \u03b8 * for some \u03b8 * , such that \u03b8 * 1,1 = \u03b8 * 2,1 = \u00b5 and that c 1 \u2264 c 2 . If A G (\u03b8 * ) is positive definite, then p does not satisfy the Tsybakov noise condition for any (C, \u03ba), where C > 0 and \u03ba \u2265 1. See Appendix C for the proof of Theorem 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "In Appendix C we show that A G (\u03b8) is positive semi-definite for any choice of \u03b8. The main intuition behind the proof is that given a probabilistic grammar p, we can construct an hypothesis h such that the KL divergence between p and h is small, but dist(p, h) is lower-bounded and is not close to 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "We conclude that probabilistic grammars, as generative distributions of data, do not generally satisfy the Tsybakov noise condition. This motivates an alternative choice of assumptions that could lead to better understanding of rates of convergences and bounds on the excess risk. Section 3.1 states such assumptions which were also justified empirically.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tsybakov Noise", "sec_num": "7.1" }, { "text": "The transformation T(\u03b8, \u03b3) from Section 4.1 can be thought of as a smoother for the probabilities \u03b8: It ensures that the probability of each rule is at least \u03b3 (and as a result, the probabilities of all rules cannot exceed 1 \u2212 \u03b3). Adding pseudo-counts to frequency counts is also a common way to smooth probabilities in models based on multinomial distributions, including probabilistic grammars (Manning and Sch\u00fctze 1999) . 
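The two smoothing schemes discussed here can be contrasted on a single two-event multinomial as follows (an illustrative sketch only; counts stands for the observed rule frequencies and the function names are ours):

```python
def truncation_smoother(counts, gamma):
    # T(theta, gamma): normalize the counts, then force each probability
    # into the interval [gamma, 1 - gamma].
    p = counts[0] / (counts[0] + counts[1])
    p = min(1.0 - gamma, max(gamma, p))
    return (p, 1.0 - p)

def pseudocount_smoother(counts, alpha):
    # Add pseudo-counts of size alpha - 1 to each event before normalizing,
    # i.e., the mode of the posterior under a symmetric Dirichlet(alpha)
    # prior (cf. Equation (26) below).
    p = (counts[0] + alpha - 1.0) / (counts[0] + counts[1] + 2.0 * (alpha - 1.0))
    return (p, 1.0 - p)
```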
These pseudo-counts can be framed as a maximum a posteriori (MAP) alternative to the maximum likelihood problem, with the choice of Bayesian prior over the parameters in the form of a Dirichlet distribution. In comparison to our framework, with (symmetric) Dirichlet smoothing, instead of truncating the probabilities with a margin \u03b3 we would set the probability of each rule (in the supervised setting) t\u00f4", "cite_spans": [ { "start": 396, "end": 422, "text": "(Manning and Sch\u00fctze 1999)", "ref_id": "BIBREF41" } ], "ref_spans": [], "eq_spans": [], "section": "Comparison to Dirichlet Maximum A Posteriori Solutions", "sec_num": "7.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b8 k,i = n j=1\u03c8j,k,i + \u03b1 \u2212 1 n j=1\u03c8j,k,1 + n j=1\u03c8j,k,2 + 2(\u03b1 \u2212 1)", "eq_num": "(26)" } ], "section": "Comparison to Dirichlet Maximum A Posteriori Solutions", "sec_num": "7.2" }, { "text": "for i = 1, 2, where\u03c8 k,i are the counts in the data of event i in multinomial k for Example j. Dirichlet smoothing can be formulated as the result of adding a symmetric Dirichlet prior over the parameters \u03b8 k,i with hyperparameter \u03b1. Then Equation (26) is the mode of the posterior after observing\u03c8 k,i appearances of event i in multinomial k.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Dirichlet Maximum A Posteriori Solutions", "sec_num": "7.2" }, { "text": "The effect of Dirichlet smoothing becomes weaker as we have more samples, because the frequency counts\u03c8 j,k,i become dominant in both the numerator and the denominator when there are more data. In this sense, the prior's effect on learning diminishes as we use more data. A similar effect occurs in our framework: \u03b3 = n \u2212s where n is the number of samples-the more samples we have, the more we trust the counts in the data to be reliable. There is a subtle difference, however. With the Dirichlet MAP solution, the smoothing is less dominant only if the counts of the features are large, regardless of the number of samples we have. With our framework, smoothing depends only on the number of samples we have. These two scenarios are related, of course: The more samples we have, the more likely it is that the counts of the events will grow large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparison to Dirichlet Maximum A Posteriori Solutions", "sec_num": "7.2" }, { "text": "In this section, we discuss other possible solutions to the problem of deriving sample complexity bounds for probabilistic grammars. Talagrand's Inequality. Our bounds are based on VC theory together with classical results for empirical processes (Pollard 1984) . 
There have been some recent developments to the derivation of rates of convergence in statistical learning theory (Massart 2000; Bartlett, Bousquet, and Mendelson 2005; Koltchinskii 2006) , most prominently through the use of Talagrand's inequality (Talagrand 1994) , which is a concentration of measure inequality, in the spirit of Lemma 2.", "cite_spans": [ { "start": 133, "end": 156, "text": "Talagrand's Inequality.", "ref_id": null }, { "start": 247, "end": 261, "text": "(Pollard 1984)", "ref_id": "BIBREF47" }, { "start": 378, "end": 392, "text": "(Massart 2000;", "ref_id": "BIBREF42" }, { "start": 393, "end": 432, "text": "Bartlett, Bousquet, and Mendelson 2005;", "ref_id": "BIBREF7" }, { "start": 433, "end": 451, "text": "Koltchinskii 2006)", "ref_id": "BIBREF39" }, { "start": 513, "end": 529, "text": "(Talagrand 1994)", "ref_id": "BIBREF52" } ], "ref_spans": [], "eq_spans": [], "section": "Other Derivations of Sample Complexity Bounds", "sec_num": "7.3" }, { "text": "The bounds achieved with Talagrand's inequality are also distribution-dependent, and are based on the diameter of the -minimal set-the set of hypotheses which have an excess risk smaller than . We saw in Section 7.1 that the diameter of the -minimal set does not follow the Tsybakov noise condition, but it is perhaps possible to find meaningful bounds for it, in which case we may be able to get tighter bounds using Talagrand's inequality. We note that it may be possible to obtain data-dependent bounds for the diameter of the -minimal set, following Koltchinskii (2006) , by calculating the diameter of the -minimal set usingp n .", "cite_spans": [ { "start": 554, "end": 573, "text": "Koltchinskii (2006)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Using", "sec_num": "7.3.1" }, { "text": "As noted in Section 6.1, minimizing empirical risk with the log-loss leads to a simple frequency count for calculating the estimated parameters of the grammar. In Corazza and Satta (2006) , it has been also noted that to minimize the non-empirical risk, it is necessary to set the parameters of the grammar to the normalized expected count of the features.", "cite_spans": [ { "start": 163, "end": 187, "text": "Corazza and Satta (2006)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Simpler Bounds for the Supervised Case.", "sec_num": "7.3.2" }, { "text": "This means that we can get bounds on the deviation of a certain parameter from the optimal parameter by applying modifications to rather simple inequalities such as Hoeffding's inequality, which determines the probability of the average of a set of i.i.d. random variables deviating from its mean. The modification would require us to split the event space into two cases: one in which the count of some features is larger than some fixed value (which will happen with small probability because of the bounded expectation of features), and one in which they are all smaller than that fixed value. Handling these two cases separately is necessary because Hoeffding's inequality requires that the count of the rules is bounded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simpler Bounds for the Supervised Case.", "sec_num": "7.3.2" }, { "text": "The bound on the deviation from the mean of the parameters (the true probability) can potentially lead to a bound on the excess risk in the supervised case. 
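As a rough illustration of the bounded case (a sketch; B is a hypothetical bound that we impose on each feature count, corresponding to the second of the two cases above), Hoeffding's inequality applied to a single feature over n i.i.d. derivations z_1, ..., z_n gives

```latex
\mathbb{P}\!\left( \left| \frac{1}{n}\sum_{j=1}^{n} \psi_{k,i}(z_j) - \mathbb{E}\!\left[\psi_{k,i}\right] \right| \geq t \right)
\;\leq\; 2\exp\!\left( -\frac{2 n t^{2}}{B^{2}} \right),
\qquad \text{whenever } 0 \leq \psi_{k,i}(z_j) \leq B .
```

Converting such a deviation bound on the counts into a bound on the estimated rule probabilities, and then on the excess risk, is where the case split described above enters.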
This formulation of the problem would not generalize to the unsupervised case, however, where the empirical risk minimization does not amount to simple frequency count.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Simpler Bounds for the Supervised Case.", "sec_num": "7.3.2" }, { "text": "We conclude the discussion with some directions for further exploration and future work. Semi-Supervised Learning. Our bounds focus on the supervised case and the unsupervised case. There is a trivial extension to the semisupervised case. Consider the objective function to be the sum of the likelihood for the labeled data together with the marginalized likelihood of the unlabeled data (this sum could be a weighted sum). Then, use the sample complexity bounds for each summand to derive a sample complexity bound on this sum.", "cite_spans": [ { "start": 89, "end": 114, "text": "Semi-Supervised Learning.", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Open Problems", "sec_num": "7.4" }, { "text": "It would be more interesting to extend our results to frameworks such as the one described by Balcan and Blum (2010) . In that case, our discussion of sample complexity would attempt to identify how unannotated data can reduce the space of candidate probabilistic grammars to a smaller set, after which we can use the annotated data to estimate the final grammar. This reduction of the space is accomplished through a notion of compatibility, a type of fitness that the learner believes the estimated grammar should have given the distribution that generates the data. The key challenge in the case of probabilistic grammars would be to properly define this compatibility notion such that it fits the log-loss. If this is achieved, then similar machinery to that described in this paper (with proper approximations) can be followed to derive semi-supervised sample complexity bounds for probabilistic grammars.", "cite_spans": [ { "start": 94, "end": 116, "text": "Balcan and Blum (2010)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Sample Complexity Bounds with", "sec_num": "7.4.1" }, { "text": "The pseudodimension of a probabilistic grammar with the log-loss is bounded by the number of parameters in the grammar, because the logarithm of a distribution generated by a probabilistic grammar is a linear function. Typically the set of counts for the feature vectors of a probabilistic grammar resides in a subspace of a dimension which is smaller than the full dimension specified by the number of parameters, however. The reason for this is that there are usually relationships (which are often linear) between the elements in the feature counts. For example, with HMMs, the total feature count for emissions should equal the total feature count for transitions. With PCFGs, the total number of times that nonterminal rules fire equals the total number of times that features with that nonerminal in the right-hand side fired, again reducing the pseudo-dimension. An open problem that remains is characterization of the exact value pseudo-dimension for a given grammar, determined by consideration of various properties of that grammar. 
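To state the HMM example above as an explicit linear constraint (a sketch, under the convention that every emitted symbol is generated together with exactly one transition, e.g., when the model includes a final stop transition), write T for the set of transition events and E for the set of emission events; then for every derivation z,

```latex
\sum_{(k,i) \in T} \psi_{k,i}(z) \;=\; \sum_{(k,i) \in E} \psi_{k,i}(z),
```

so the vectors of feature counts lie in a proper linear subspace of the full parameter space.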
We conjecture, however, that a lower bound on the pseudo-dimension would be rather close to the full dimension of the grammar (the number of parameters).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sharper Bounds for the Pseudo-Dimension of Probabilistic Grammars.", "sec_num": "7.4.2" }, { "text": "It is interesting to note that there has been some work to identify the VC dimension and pseudo-dimension for certain types of grammars. Bane, Riggle, and Sonderegger (2010) , for example, calculated the VC dimension for constraint-based grammars. Tani (1993, 1997) computed the VC dimension for finite state automata with various properties.", "cite_spans": [ { "start": 137, "end": 173, "text": "Bane, Riggle, and Sonderegger (2010)", "ref_id": "BIBREF6" }, { "start": 248, "end": 265, "text": "Tani (1993, 1997)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Sharper Bounds for the Pseudo-Dimension of Probabilistic Grammars.", "sec_num": "7.4.2" }, { "text": "We presented a framework for performing empirical risk minimization for probabilistic grammars, in which sample complexity bounds, for the supervised case and the unsupervised case, can be derived. Our framework is based on the idea of bounded approximations used in the past to derive sample complexity bounds for graphical models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7.5" }, { "text": "Our framework required assumptions about the probability distribution that generates sentences or derivations in the language of the given grammar. These assumptions were tested using corpora, and found to fit the data well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7.5" }, { "text": "We also discussed algorithms that can be used for minimizing empirical risk in our framework, given enough samples. We showed that directly trying to minimize empirical risk in the unsupervised case is NP-hard, and suggested an approximation based on an expectation-maximization algorithm.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7.5" }, { "text": "We include in this appendix proofs for several results in the article.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix A. Proofs", "sec_num": null }, { "text": "Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1", "sec_num": null }, { "text": "a i \u2208 [0, 1], i \u2208 {1, . . . , N} such that i a i = 1. Define b 1 = a 1 , c 1 = 1 \u2212 a 1 , b i = a i a i\u22121 b i\u22121 c i\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1", "sec_num": null }, { "text": ", and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1", "sec_num": null }, { "text": "c i = 1 \u2212 b i for i \u2265 2. Then a i = \uf8eb \uf8ed i\u22121 j=1 c j \uf8f6 \uf8f8 b i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Utility Lemma 1", "sec_num": null }, { "text": "Proof by induction on i \u2208 {1, . . . , N}. Clearly, the statement holds for i = 1. Assume it holds for arbitrary i < N. 
Then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "a i+1 = a i a i a i+1 = \uf8eb \uf8ed \uf8eb \uf8ed i\u22121 j=1 c j \uf8f6 \uf8f8 b i \uf8f6 \uf8f8 a i+1 a i = \uf8eb \uf8ed \uf8eb \uf8ed i\u22121 j=1 c j \uf8f6 \uf8f8 b i \uf8f6 \uf8f8 c i b i+1 b i = \uf8eb \uf8ed i j=1 c j \uf8f6 \uf8f8 b i+1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "and this completes the proof.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Denote by Z ,n the set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1", "sec_num": null }, { "text": "f \u2208F {z | C n ( f )(z) \u2212 f (z) \u2265 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1", "sec_num": null }, { "text": "Denote by A ,n the event \"one of z i \u2208 D is in Z ,n .\" If F n properly approximates F, then:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1", "sec_num": null }, { "text": "E Ep n g n \u2212 Ep n f * n (A.1) \u2264 E Ep n C n ( f * n ) | A ,n p(A ,n ) + E Ep n f * n | A ,n p(A ,n ) + tail (n)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1", "sec_num": null }, { "text": "where the expectations are taken with respect to the data set D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 1", "sec_num": null }, { "text": "Consider the following:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E Ep n g n \u2212 Ep n f * n = E Ep n g n \u2212 Ep n C n ( f * n ) + Ep n C n ( f * n ) \u2212 Ep n f * n = E Ep n g n \u2212 Ep n C n ( f * n ) + E Ep n C n ( f * n ) \u2212 Ep n f * n Note first that E Ep n g n \u2212 Ep n C n ( f * n )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "\u2264 0, by the definition of g n as the minimizer of the empirical risk. We next bound E Ep", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "n C n ( f * n ) \u2212 Ep n f * n .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "We know from the requirement of proper approximation that we have (n) and that equals the right side of Equation (Appendix A.1).", "cite_spans": [ { "start": 66, "end": 69, "text": "(n)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E Ep n C n ( f * n ) \u2212 Ep n f * n = E Ep n C n ( f * n ) \u2212 Ep n f * n | A ,n p(A ,n ) + E Ep n C n ( f * n ) \u2212 Ep n f * n | \u00acA ,n (1 \u2212 p(A ,n )) \u2264 |E Ep n C n ( f * n ) | A ,n |p(A ,n ) + |E Ep n f * n | A ,n |p(A ,n ) + tail", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let p \u2208 P (\u03b1, L, r, q, B, G) and let F m be as defined earlier. There exists a constant \u03b2 = \u03b2(L, q, p, N) > 0 such that F m has the boundedness property with K m = sN log 3 m and ", "cite_spans": [ { "start": 10, "end": 28, "text": "(\u03b1, L, r, q, B, G)", "ref_id": null }, { "start": 97, "end": 105, "text": "q, p, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "bound (m) = m \u2212\u03b2 log m . Proof Let f \u2208 F m . Let Z(m) = {z | |z| \u2264 log 2 m}. 
Then, for all z \u2208 Z(m) we have | f (z)| = \u2212 i,k \u03c8(k, i) log \u03b8 k,i \u2264 i,k \u03c8(k, i)(p log m) \u2264 sN log 3 m = K m ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "E | f | \u00d7 I {| f | \u2265 K m } \u2264 sN log 3 m \u00d7 \uf8eb \uf8ed k>log 2 m L\u039b(k)r k k \uf8f6 \uf8f8 \u2264 \u03ba log 3 m \u00d7 q log 2 m for \u03ba = sNL (1 \u2212 q) 2 . Finally, for \u03b2(L, q, p, N) log \u03ba + 1 + log 1 q = \u03b2 > 0 and if m > 1 then \u03ba log 3 m q log 2 m \u2264 m \u2212\u03b2 log m .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "Utility Lemma 4 (From [Dasgupta 1997 ", "cite_spans": [ { "start": 22, "end": 36, "text": "[Dasgupta 1997", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "{z | C m ( f )(z) \u2212 f (z) \u2265 tail (m)} \uf8f6 \uf8f8 \u2264 tail (m)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "for tail (m) ", "cite_spans": [ { "start": 9, "end": 12, "text": "(m)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "= N log 2 m m s \u2212 1 and C m ( f ) = T( f, m \u2212s ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "where z(x) is some derivation for x. We have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "x:|x|>log 2 m/\u03b1 p(x)f 1 (x, z(x)) \u2264 x:|x|\u2265log 2 m/\u03b1 z\u2208D x (G) p(x, z)f 1 (x, z(x)) \u2264 sN log m x:|x|>log 2 m/\u03b1 z p(x, z)|z(x)| \u2264 sN log m k>log 2 m \u039b(k)r k k \u2264 sN log m k>log 2 m q k k \u2264 \u03ba log mq log 2 m", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "for some constant \u03ba > 0. Finally, for some \u03b2 (L, p, q, N) = \u03b2 > 0 and some constant M,", "cite_spans": [ { "start": 45, "end": 57, "text": "(L, p, q, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "if m > M then \u03ba log m q log 2 m \u2264 m \u2212\u03b2 log m . Utility Lemma 2 For a i , b i \u2265 0, if \u2212 log i a i + log i b i \u2265 then there exists an i such that \u2212 log a i + log b i \u2265 . Proof Assume \u2212 log a i + log b i < for all i. Then, b i /a i < e , therefore i b i / i a i < e , there- fore \u2212 log i a i + log i b i < which is a contradiction to \u2212 log i a i + log i b i \u2265 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "The next lemma is the main concentation of measure result that we use. Its proof requires some simple modification to the proof given for Theorem 24 in Pollard (1984, pages 30-31) .", "cite_spans": [ { "start": 152, "end": 179, "text": "Pollard (1984, pages 30-31)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 2", "sec_num": null }, { "text": "Let F n be a permissible class of functions such that for every f \u2208 F n we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "E[| f | \u00d7 I {| f | \u2264 K n }] \u2264 bound (n). 
Let F truncated,n = { f \u00d7 I { f \u2264 K n } | f \u2208 F m }, that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "is, the set of functions from F n after being truncated by K n . Then for > 0 we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "p sup f \u2208F n |Ep n f \u2212 E p f | > 2 \u2264 8N( /8, F truncated,n ) exp \u2212 1 128 n 2 /K 2 n + bound (n)/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "provided n \u2265 K 2 n /4 2 and bound (n) < .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 2", "sec_num": null }, { "text": "First note that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "sup f \u2208F n |Ep n f \u2212 E p f | \u2264 sup f \u2208F n |Ep n f I {| f | \u2264 K n } \u2212 E p f I {| f | \u2264 K n } | + sup f \u2208F n Ep n | f |I {| f | \u2264 K n } + sup f \u2208F n E p | f |I {| f | \u2264 K n }", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "We have sup f \u2208F n E p | f |I {| f | \u2264 K n } \u2264 bound (n) < , and also, from Markov inequality, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "P(sup f \u2208F n Ep n | f |I {| f | \u2264 K n } > ) \u2264 bound (n)/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "At this point, we can follow the proof of Theorem 24 in Pollard (1984) , and its extension on pages 30-31 to get Lemma 2, using the shifted set of functions F truncated,n .", "cite_spans": [ { "start": 56, "end": 70, "text": "Pollard (1984)", "ref_id": "BIBREF47" } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Central to our algorithms for minimizing the log-loss (both in the supervised case and the unsupervised case) is a convex optimization problem of the form min \u03b8 K k=1 c k,1 log \u03b8 k,1 + c k,2 log \u03b8 k,2 such that \u2200k \u2208 {1, . . . , K} :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "\u03b8 k,1 + \u03b8 k,2 = 1 \u03b3 \u2264 \u03b8 k,1 \u2264 1 \u2212 \u03b3 \u03b3 \u2264 \u03b8 k,2 \u2264 1 \u2212 \u03b3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "for constants c k,i which depend onp n or some other intermediate distribution in the case of the expectation-maximization algorithm and \u03b3 which is a margin determined by the number of samples. This minimization problem can be decomposed into several optimization problems, one for each k, each having the following form:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "max \u03b2 c 1 \u03b2 1 + c 2 \u03b2 2 (B.1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "such that exp(\u03b2 1 ) + exp(\u03b2 2 ) = 1 ( B . 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. 
Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "\u03b3 \u2264 \u03b2 1 \u2264 1 \u2212 \u03b3 (B.3) \u03b3 \u2264 \u03b2 2 \u2264 1 \u2212 \u03b3 (B.4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "where c i \u2265 0 and 1/2 > \u03b3 \u2265 0. Ignore for a moment the constraints \u03b3 \u2264 \u03b2 i \u2264 1 \u2212 \u03b3. In that case, this can be thought of as a regular maximum likelihood estimation problem, so \u03b2 i = c i /(c 1 + c 2 ). We give a derivation of this result in this simple case for completion. We use Lagranian multipliers to solve this problem. Let F(\u03b21, \u03b22) = c 1 \u03b2 1 + c 2 \u03b2 2 . Define the Lagrangian:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "g(\u03bb) = inf \u03b2 L(\u03bb, \u03b2) = inf \u03b2 c 1 \u03b2 1 + c 2 \u03b2 2 + \u03bb(exp(\u03b2 1 ) + exp(\u03b2 2 ) \u2212 1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Taking the derivative of the term we minimize in the Lagrangian, we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "\u2202L \u2202\u03b2 i = c i + \u03bb exp(\u03b2 i )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Setting the derivatives to 0 for minimization, we have g(\u03bb) = c 1 log(\u2212c 1 /\u03bb) + c 2 log(\u2212c 2 /\u03bb) + \u03bb(\u2212c 1 /\u03bb \u2212 c 2 /\u03bb \u2212 1) (B.5) g(\u03bb) is the objective function of the dual problem of Equation (B.1)-Equation (B.2). We would like to minimize Equation (B.5) with respect to \u03bb. The derivative of g(\u03bb) is \u2202g \u2202\u03bb = \u2212c 1 /\u03bb \u2212 c 2 /\u03bb \u2212 1 hence when equating the derivative of g(\u03bb) to 0, we get \u03bb = \u2212(c 1 + c 2 ), and therefore the solution is \u03b2 * i = log (c i /(c 1 + c 2 )). We need to verify that the solution to the dual problem indeed gets the optimal value for the primal. Because the primal problem is convex, it is sufficient to verify that the Karush-Kuhn-Tucker (KKT) conditions hold (Boyd and Vandenberghe 2004) . Indeed, we have", "cite_spans": [ { "start": 685, "end": 713, "text": "(Boyd and Vandenberghe 2004)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "\u2202F \u2202\u03b2 i (\u03b2 * ) + \u03bb \u2202h \u2202\u03b2 i (\u03b2 * ) = c i \u2212 (c 1 + c 2 ) \u00d7 c i c 1 + c 2 = 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "where h(\u03b2) exp(\u03b2) + exp(\u03b2) \u2212 1 stands for the equality constraint. The rest of the KKT conditions trivially hold, therefore \u03b2 * is the optimal solution for Equations (B.1)-(B.2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. 
Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Note that if 1 \u2212 \u03b3 < c i /(c 1 + c 2 ) < \u03b3, then this is the solution even when again adding the constraints in Equation (B.3) and (B.4). When c 1 /(c 1 + c 2 ) < \u03b3, then the solution is \u03b2 * 1 = \u03b3 and \u03b2 * 2 = 1 \u2212 \u03b3. Similarly, when c 2 /(c 1 + c 2 ) < \u03b3 then the solution is \u03b2 * 2 = \u03b3 and \u03b2 * 1 = 1 \u2212 \u03b3. We describe why this is true for the first case. The second case follows very similarly. Assume c 1 /(c 1 + c 2 ) < \u03b3. We want to show that for any choice of \u03b2 \u2208 [0, 1] such that \u03b2 > \u03b3 we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "c 1 log \u03b3 + c 2 log(1 \u2212 \u03b3) \u2265 c 1 log \u03b2 + c 2 log(1 \u2212 \u03b2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Divide both sides of the inequality by c 1 + c 2 and we get that we need to show that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "c 1 c 1 + c 2 log(\u03b3/\u03b2) + c 2 c 1 + c 2 log 1 \u2212 \u03b3 1 \u2212 \u03b2 \u2265 0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Because we have \u03b2 > \u03b3, and we also have c 1 /(c 1 + c 2 ) < \u03b3, it is sufficient to show that \u03b3 log(\u03b3/\u03b2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "+ (1 \u2212 \u03b3) log 1 \u2212 \u03b3 1 \u2212 \u03b2 \u2265 0 ( B . 6 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Equation (B.6) is precisely the definition of the KL divergence between the distribution of a coin with probability \u03b3 of heads and the distribution of a coin with probability \u03b2 of heads, and therefore the right side in Equation (B.6) is positive, and we get what we need.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix B. Minimizing Log-Loss for Probabilistic Grammars", "sec_num": null }, { "text": "Lemma 6 A = A G (\u03b8) is positive semi-definite for any probabilistic grammar G, \u03b8 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Appendix C. Counterexample to Tsybakov Noise (Proofs)", "sec_num": null }, { "text": "Let d k,i be a collection of constants. Define the random variable:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "R(z) = i,k d k,i E \u03c8 k,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "\u03c8 k,i (z) We have that", "cite_spans": [ { "start": 6, "end": 9, "text": "(z)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E R 2 = i,i k,k A (k,i),(k ,i ) d k,i d k ,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "which is always larger or equal to 0. 
Therefore, A is positive semi-definite.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let 0 < \u00b5 < 1/2, c 1 , c 2 \u2265 0. Let \u03ba, C > 0. Also, assume that c 1 \u2264 c 2 . For any > 0, define:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 7", "sec_num": null }, { "text": "a = \u00b5 exp C 1/\u03ba + /2 c 1 = \u03b1 1 \u00b5 b = \u00b5 exp \u2212C 1/\u03ba + /2 c 2 = \u03b1 2 \u00b5 t( ) = c 1 1 \u2212 \u00b5 1 \u2212 a + c 2 1 \u2212 \u00b5 1 \u2212 b \u2212 (c 1 + c 2 ) exp( /2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 7", "sec_num": null }, { "text": "Then, for small enough , we have t( ) \u2264 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lemma 7", "sec_num": null }, { "text": "We have that t( ) \u2264 0 if", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "ac 2 + bc 1 \u2265 \u2212 (c 1 + c 2 )(1 \u2212 a)(1 \u2212 b) 1 \u2212 \u00b5 exp( /2) + c 1 + c 2 = (c 1 + c 2 ) 1 \u2212 (1 \u2212 a)(1 \u2212 b) (1 \u2212 \u00b5) exp(\u2212 /2) (C.1) First, show that (1 \u2212 a)(1 \u2212 b) (1 \u2212 \u00b5) exp(\u2212 /2) \u2265 1 \u2212 \u00b5 (C.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "which happens if (after substituting a = \u03b1 1 \u00b5, b = \u03b1 2 \u00b5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "\u00b5 \u2264 (\u03b1 1 + \u03b1 2 \u2212 2)/(1 \u2212 \u03b1 1 \u03b1 2 )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Note we have \u03b1 1 \u03b1 2 > 1 because c 1 \u2264 c 2 . In addition, we have \u03b1 1 + \u03b1 2 \u2212 2 \u2265 0 for small enough (can be shown by taking the derivative, with respect to of \u03b1 1 + \u03b1 2 \u2212 2, which is always positive for small enough , and in addition, noticing that the value of \u03b1 1 + \u03b1 2 \u2212 2 is 0 when = 0.) Therefore, Equation (C.2) is true. Substituting Equation (C.2) in Equation (C.1), we have that t( ) \u2264 0 if", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "ac 2 + bc 1 \u2265 (c 1 + c 2 )\u00b5", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "which is equivalent to", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "c 2 \u03b1 1 + c 1 \u03b1 2 \u2265 c 1 + c 2 (C.3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Taking again the derivative of the left side of Equation (C.3), we have that it is an increasing function of (if c 1 \u2264 c 2 ), and in addition at = 0 it obtains the value c 1 + c 2 . Therefore, Equation (C.3) holds, and therefore t( ) \u2264 0 for small enough .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Let G be a grammar with K \u2265 2 and degree 2. Assume that p is G, \u03b8 * for some \u03b8 * , such that \u03b8 * 1,1 = \u03b8 * 2,1 = \u00b5 and that c 1 \u2264 c 2 . 
If A G (\u03b8 * ) is positive definite, then p does not satisfy the Tsybakov noise condition for any (C, \u03ba), where C > 0 and \u03ba \u2265 1.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Theorem 5", "sec_num": null }, { "text": "Define \u03bb to be the eigenvalue of A G (\u03b8) with the smallest value (\u03bb is positive). Also, define v(\u03b8) to be a vector indexed by k, i such that v k,i (\u03b8) ", "cite_spans": [ { "start": 147, "end": 150, "text": "(\u03b8)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "= E \u03c8 k,i log \u03b8 * k,i \u03b8 k,i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Simple algebra shows that for any h \u2208 H(G) (and the fact that p \u2208 H(G)), we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E p (h) = D KL (p h) = K k=1 E p \u03c8 k,1 log \u03b8 * k,1 \u03b8 k,1 + E p \u03c8 k,1 log 1 \u2212 \u03b8 * k,1 1 \u2212 \u03b8 k,1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "For a C > 0 and \u03ba \u2265 1, define \u03b1 = C 1/\u03ba . Let < \u03b1. First, we construct an h such that D KL (p h) < + /2 but dist(p, h) > C 1/\u03ba as \u2192 0. The construction follows. Parametrize h by \u03b8 such that \u03b8 is identical to \u03b8 * except for k = 1, 2, in which case we have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "\u03b8 1,1 = \u03b8 * 1,1 exp \u03b1 + /2 c 1 = \u00b5 exp \u03b1 + /2 c 1 (C.4) \u03b8 2,1 = \u03b8 * 2,1 exp \u2212\u03b1 + /2 c 2 = \u00b5 exp \u2212\u03b1 + /2 c 2 (C.5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Note that \u00b5 \u2264 \u03b8 1,1 \u2264 1/2 and \u03b8 2,1 < \u00b5. Then, we have that", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "D KL (p h) = K k=1 E p \u03c8 k,1 log \u03b8 * k,1 \u03b8 k,1 + E p \u03c8 k,1 log 1 \u2212 \u03b8 * k,1 1 \u2212 \u03b8 k,1 = + c 1 log 1 \u2212 \u03b8 * k,1 1 \u2212 \u03b8 1,1 + c 2 log 1 \u2212 \u03b8 * k,2 1 \u2212 \u03b8 2,1 = + c 1 log 1 \u2212 \u00b5 1 \u2212 \u03b8 1,1 + c 2 log 1 \u2212 \u00b5 1 \u2212 \u03b8 2,1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "We also have", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "c 1 log 1 \u2212 \u00b5 1 \u2212 \u03b8 1,1 + c 2 log 1 \u2212 \u00b5 1 \u2212 \u03b8 2,1 \u2264 0 ( C . 6 ) if c 1 \u00d7 1 \u2212 \u00b5 1 \u2212 \u03b8 1,1 + c 2 \u00d7 1 \u2212 \u00b5 1 \u2212 \u03b8 2,1 \u2264 c 1 + c 2 (C.7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "(This can be shown by dividing Equation [C.6] by c 1 + c 2 and then using the concavity of the logarithm function.) From Lemma 7, we have that Equation (C.7) holds. 
Therefore,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "D KL (p h) \u2264 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Now, consider the following, which can be shown through algebraic manipulation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "dist(p, h) = E log p h 2 = k,k i,i E \u03c8 k,i \u00d7 \u03c8 k ,i log \u03b8 * k,i \u03b8 k,i log \u03b8 * k ,i \u03b8 k ,i", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "Then, additional algebraic simplification shows that E log p h 2 = v(\u03b8)Av(\u03b8)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "A fact from linear algebra states that v(\u03b8)Av(\u03b8) \u2265 \u03bb||v(\u03b8)|| 2 2 where \u03bb is the smallest eigenvalue in A. From the construction of \u03b8 and Equation (C.4)-(C.5), we have that ||v(\u03b8)|| 2 2 > \u03b1 2 . Therefore,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "E log p h 2 \u2265 \u03bb\u03b1 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "which means dist(p, h) \u2265 \u221a \u03bbC 1/\u03ba . Therefore, p does not satisfy the Tsybakov noise condition with parameters (D, \u03ba) for any D > 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "It is important to remember that minimizing the log-loss does not equate to minimizing the error of a linguistic analyzer or natural language processing application. In this article we focus on the log-loss case because we believe that probabilistic models of language phenomena have inherent usefulness as explanatory tools in computational linguistics, aside from their use in systems.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We note thatp n itself is a random variable, because it depends on the sample drawn from p. 3 We note that being able to attain the minimum through an hypothesis q * is not necessarily possible in the general case. In our instantiations of ERM for probabilistic grammars, however, the minimum can be attained. In fact, in the unsupervised case the minimum can be attained by more than a single hypothesis. In these cases, q * is arbitrarily chosen to be one of these minimizers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Treebanks offer samples of cleanly segmented sentences. It is important to note that the distributions estimated may not generalize well to samples from other domains in these languages. Our argument is that the family of the estimated curve is reasonable, not that we can correctly estimate the curve's parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For simplicity and consistency with the log-loss, we measure entropy in nats, which means we use the natural logarithm when computing entropy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "By varying s we get a family of approximations. The larger s is, the tighter the approximation is. 
Also, the larger s is, as we see later, the looser our sample complexity bound will be.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The \"permissible class\" requirement is a mild regularity condition regarding measurability that holds for proper approximations. We refer the reader toPollard (1984) for more details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The authors thank the anonymous reviewers for their comments and Avrim Blum, Steve Hanneke, Mark Johnson, John Lafferty, Dan Roth, and Eric Xing for useful conversations. This research was supported by National Science Foundation grant IIS-0915187.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "Let Z(m) be the set of derivations of size bigger than log 2 m. Let f \u2208 F. Define f = T(f, m \u2212s ). For any z / \u2208 Z(m) we have that\u03c6 k,1 (z) log \u03b8 k,1 + \u03c6 k,2 (z) log \u03b8 k,2 \u2212 \u03c6 k,1 (z) log \u03b8 k,1 \u2212 \u03c6 k,1 (z) log \u03b8 k,2log 2 m max{0, log(\u03b8 k,1 /\u03b8 k,1 )} + max{0, log(\u03b8 k,2 /\u03b8 k,2 )} (A.2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "There exists a \u03b2 (L, p, q, N) > 0 such that F m has the boundedness property with K m = sN log 3 m and bound (m) = m \u2212\u03b2 log m .", "cite_spans": [ { "start": 17, "end": 29, "text": "(L, p, q, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Proposition 7", "sec_num": null }, { "text": "From the requirement of p, we know that for any x we have a z such that yield(z) = x and |z| \u2264 \u03b1|x|. Therefore, if we let X(m) = {x | |x| \u2264 log 2 m/\u03b1}, then we have for anyIn addition, from the requirements on p and the definition of K m we have Table D .1 gives a table of notation for symbols used throughout this article. (n) Convergence rate for the boundedness property Sec. 4 bound (n) Convergence rate for the tightness property Sec. 4Set of parameters {T(\u03b8, \u03b3) | \u03b8 \u2208 \u0398 G } for a given G Sec. 4.1 s A constant larger than 1 on which boundedness property depends Sec. 4.1", "cite_spans": [ { "start": 325, "end": 328, "text": "(n)", "ref_id": null }, { "start": 388, "end": 391, "text": "(n)", "ref_id": null } ], "ref_spans": [ { "start": 246, "end": 253, "text": "Table D", "ref_id": null } ], "eq_spans": [], "section": "Proof", "sec_num": null }, { "text": "A constant on which sample complexity depends for the supervised caseProp. 2 F n Element n in a proper approximation (contained in F) S e c . 4 C n ( f ) Am a pf o rf \u2208 F to f \u2208 F n Sec. 4 tail (n) Convergence rate for the soundness property Sec. 4 bound (n) Convergence rate for the tightness property Sec. 4 \u03b2 (L, q, p, N) A constant on which sample complexity depends for the unsupervised case Sec. 
5.3", "cite_spans": [ { "start": 312, "end": 324, "text": "(L, q, p, N)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u03b2(L, q, p, N)", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Polynomial learnability of probabilistic concepts with respect to the Kullback-Leiber divergence", "authors": [ { "first": "N", "middle": [], "last": "Abe", "suffix": "" }, { "first": "J", "middle": [], "last": "Takeuchi", "suffix": "" }, { "first": "M", "middle": [], "last": "Warmuth", "suffix": "" } ], "year": 1991, "venue": "Proceedings of the Conference on Learning Theory", "volume": "", "issue": "", "pages": "277--289", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abe, N., J. Takeuchi, and M. Warmuth. 1991. Polynomial learnability of probabilistic concepts with respect to the Kullback-Leiber divergence. In Proceedings of the Conference on Learning Theory, pages 277-289.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On the computational complexity of approximating distributions by probabilistic automata", "authors": [ { "first": "N", "middle": [], "last": "Abe", "suffix": "" }, { "first": "M", "middle": [], "last": "Warmuth", "suffix": "" } ], "year": 1992, "venue": "Machine Learning", "volume": "2", "issue": "", "pages": "205--260", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abe, N. and M. Warmuth. 1992. On the computational complexity of approximating distributions by probabilistic automata. Machine Learning, 2:205-260.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning regular sets from queries and counterexamples", "authors": [ { "first": "D", "middle": [], "last": "Angluin", "suffix": "" } ], "year": 1987, "venue": "Information and Computation", "volume": "75", "issue": "", "pages": "87--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angluin, D. 1987. Learning regular sets from queries and counterexamples. Information and Computation, 75:87-106.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Neural Network Learning: Theoretical Foundations", "authors": [ { "first": "M", "middle": [], "last": "Anthony", "suffix": "" }, { "first": "P", "middle": [ "L" ], "last": "Bartlett", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anthony, M. and P. L. Bartlett. 1999. Neural Network Learning: Theoretical Foundations. Cambridge University Press.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A discriminative model for semisupervised learning", "authors": [ { "first": "M", "middle": [], "last": "Balcan", "suffix": "" }, { "first": "A", "middle": [], "last": "Blum", "suffix": "" } ], "year": 2010, "venue": "Journal of the Association for Computing Machinery", "volume": "57", "issue": "3", "pages": "1--46", "other_ids": {}, "num": null, "urls": [], "raw_text": "Balcan, M. and A. Blum. 2010. A discriminative model for semi- supervised learning. 
Journal of the Association for Computing Machinery, 57(3):1-46.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A spectral learning algorithm for finite state transducers", "authors": [ { "first": "B", "middle": [], "last": "Balle", "suffix": "" }, { "first": "A", "middle": [], "last": "Quattoni", "suffix": "" }, { "first": "X", "middle": [], "last": "Carreras", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the European Conference on Machine Learning/the Principles and Practice of Knowledge Discovery in Databases", "volume": "", "issue": "", "pages": "156--171", "other_ids": {}, "num": null, "urls": [], "raw_text": "Balle, B., A. Quattoni, and X. Carreras. 2011. A spectral learning algorithm for finite state transducers. In Proceedings of the European Conference on Machine Learning/the Principles and Practice of Knowledge Discovery in Databases, pages 156-171.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The VC dimension of constraint-based grammars", "authors": [ { "first": "M", "middle": [], "last": "Bane", "suffix": "" }, { "first": "J", "middle": [], "last": "Riggle", "suffix": "" }, { "first": "M", "middle": [], "last": "Sonderegger", "suffix": "" } ], "year": 2010, "venue": "Lingua", "volume": "120", "issue": "5", "pages": "1194--1208", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bane, M., J. Riggle, and M. Sonderegger. 2010. The VC dimension of constraint-based grammars. Lingua, 120(5):1194-1208.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Local Rademacher complexities", "authors": [ { "first": "P", "middle": [], "last": "Bartlett", "suffix": "" }, { "first": "O", "middle": [], "last": "Bousquet", "suffix": "" }, { "first": "S", "middle": [], "last": "Mendelson", "suffix": "" } ], "year": 2005, "venue": "Annals of Statistics", "volume": "33", "issue": "4", "pages": "1497--1537", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bartlett, P., O. Bousquet, and S. Mendelson. 2005. Local Rademacher complexities. Annals of Statistics, 33(4):1497-1537.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Pattern Recognition and Machine Learning", "authors": [ { "first": "C", "middle": [ "M" ], "last": "Bishop", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bishop, C. M. 2006. Pattern Recognition and Machine Learning. Springer, Berlin.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Convex Optimization", "authors": [ { "first": "S", "middle": [], "last": "Boyd", "suffix": "" }, { "first": "L", "middle": [], "last": "Vandenberghe", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Boyd, S. and L. Vandenberghe. 2004. Convex Optimization. Cambridge University Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Accurate computation of the relative entropy between stochastic regular grammars", "authors": [ { "first": "R", "middle": [], "last": "Carrasco", "suffix": "" } ], "year": 1997, "venue": "Theoretical Informatics and Applications", "volume": "31", "issue": "5", "pages": "437--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carrasco, R. 1997. Accurate computation of the relative entropy between stochastic regular grammars. 
Theoretical Informatics and Applications, 31(5):437-444.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Two experiments on learning probabilistic dependency grammars from corpora", "authors": [ { "first": "G", "middle": [], "last": "Carroll", "suffix": "" }, { "first": "E", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carroll, G. and E. Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. Technical report, Brown University, Providence, RI.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Coarse-to-fine n-best parsing and maxent discriminative reranking", "authors": [ { "first": "E", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charniak, E. 1993. Statistical Language Learning. MIT Press, Cambridge, MA. Charniak, E. and M. Johnson. 2005. Coarse-to-fine n-best parsing and maxent discriminative reranking. In Proceedings of the Association for Computational Linguistics, pages 173-180.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Statistical properties of probabilistic context-free grammars", "authors": [ { "first": "Z", "middle": [], "last": "Chi", "suffix": "" } ], "year": 1999, "venue": "Computational Linguistics", "volume": "25", "issue": "1", "pages": "131--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chi, Z. 1999. Statistical properties of probabilistic context-free grammars. Computational Linguistics, 25(1):131-160.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A polynomial algorithm for the inference of context free languages", "authors": [ { "first": "A", "middle": [], "last": "Clark", "suffix": "" }, { "first": "R", "middle": [], "last": "Eyraud", "suffix": "" }, { "first": "A", "middle": [], "last": "Habrard", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the International Colloquium on Grammatical Inference", "volume": "", "issue": "", "pages": "29--42", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, A., R. Eyraud, and A. Habrard. 2008. A polynomial algorithm for the inference of context free languages. In Proceedings of the International Colloquium on Grammatical Inference, pages 29-42.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Unsupervised learning and grammar induction", "authors": [ { "first": "A", "middle": [], "last": "Clark", "suffix": "" }, { "first": "S", "middle": [], "last": "Lappin", "suffix": "" } ], "year": 2010, "venue": "The Handbook of Computational Linguistics and Natural Language Processing", "volume": "", "issue": "", "pages": "197--220", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, A. and S. Lappin. 2010. Unsupervised learning and grammar induction. In Alexander Clark, Chris Fox, and Shalom Lappin, editors, The Handbook of Computational Linguistics and Natural Language Processing. 
Wiley-Blackwell, London, pages 197-220.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "PAC-learnability of probabilistic deterministic finite state automata", "authors": [ { "first": "A", "middle": [], "last": "Clark", "suffix": "" }, { "first": "F", "middle": [], "last": "Thollard", "suffix": "" } ], "year": 2004, "venue": "Journal of Machine Learning Research", "volume": "5", "issue": "", "pages": "473--497", "other_ids": {}, "num": null, "urls": [], "raw_text": "Clark, A. and F. Thollard. 2004. PAC-learnability of probabilistic deterministic finite state automata. Journal of Machine Learning Research, 5:473-497.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Covariance in unsupervised learning of probabilistic grammars", "authors": [ { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Journal of Machine Learning Research", "volume": "11", "issue": "", "pages": "3017--3051", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, S. B. and N. A. Smith. 2010a. Covariance in unsupervised learning of probabilistic grammars. Journal of Machine Learning Research, 11:3017-3051.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Empirical risk minimization with approximations of probabilistic grammars", "authors": [ { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "424--432", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, S. B. and N. A. Smith. 2010b. Empirical risk minimization with approximations of probabilistic grammars. In Proceedings of the Advances in Neural Information Processing Systems, pages 424-432.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization", "authors": [ { "first": "S", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "N", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1502--1511", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cohen, S. B. and N. A. Smith. 2010c. Viterbi training for PCFGs: Hardness results and competitiveness of uniform initialization. In Proceedings of the Association for Computational Linguistics, pages 1502-1511.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Head-driven statistical models for natural language processing", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "", "pages": "589--637", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. 2003. Head-driven statistical models for natural language processing. 
Computational Linguistics, 29:589-637.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods", "authors": [ { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2004, "venue": "Speech and Language Technology (New Developments in Parsing Technology)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Collins, M. 2004. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In H. Bunt, J. Carroll, and G. Satta, editors, Text, Speech and Language Technology (New Developments in Parsing Technology).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Cross-entropy and estimation of probabilistic context-free grammars", "authors": [ { "first": "A", "middle": [], "last": "Corazza", "suffix": "" }, { "first": "G", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "335--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Corazza, A. and G. Satta. 2006. Cross-entropy and estimation of probabilistic context-free grammars. In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 335-342. Cover, T. M. and J. A. Thomas. 1991. Elements of Information Theory. Wiley, London.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "The sample complexity of learning fixed-structure Bayesian networks", "authors": [ { "first": "S", "middle": [], "last": "Dasgupta", "suffix": "" } ], "year": 1997, "venue": "Machine Learning", "volume": "29", "issue": "", "pages": "165--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dasgupta, S. 1997. The sample complexity of learning fixed-structure Bayesian networks. Machine Learning, 29(2-3):165-180.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A bibliographical study of grammatical inference", "authors": [ { "first": "C", "middle": [], "last": "De La Higuera", "suffix": "" } ], "year": 2005, "venue": "Pattern Recognition", "volume": "38", "issue": "", "pages": "1332--1348", "other_ids": {}, "num": null, "urls": [], "raw_text": "de la Higuera, C. 2005. A bibliographical study of grammatical inference. Pattern Recognition, 38:1332-1348.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Maximum likelihood estimation from incomplete data via the EM algorithm", "authors": [ { "first": "A", "middle": [], "last": "Dempster", "suffix": "" }, { "first": "N", "middle": [], "last": "Laird", "suffix": "" }, { "first": "D", "middle": [], "last": "Rubin", "suffix": "" } ], "year": 1977, "venue": "Journal of the Royal Statistical Society B", "volume": "39", "issue": "", "pages": "1--38", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dempster, A., N. Laird, and D. Rubin. 1977. Maximum likelihood estimation from incomplete data via the EM algorithm. 
Journal of the Royal Statistical Society B, 39:1-38.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Optimal parsing strategies for linear context-free rewriting systems", "authors": [ { "first": "D", "middle": [], "last": "Gildea", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "769--776", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gildea, D. 2010. Optimal parsing strategies for linear context-free rewriting systems. In Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 769-776.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "An optimal-time binarization algorithm for linear context-free rewriting systems with fan-out two", "authors": [ { "first": "C", "middle": [], "last": "G\u00f3mez-Rodr\u00edguez", "suffix": "" }, { "first": "G", "middle": [], "last": "Satta", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Association for Computational Linguistics-International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "985--993", "other_ids": {}, "num": null, "urls": [], "raw_text": "G\u00f3mez-Rodr\u00edguez, C. and G. Satta. 2009. An optimal-time binarization algorithm for linear context-free rewriting systems with fan-out two. In Proceedings of the Association for Computational Linguistics-International Joint Conference on Natural Language Processing, pages 985-993.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Abstract Inference", "authors": [ { "first": "U", "middle": [], "last": "Grenander", "suffix": "" } ], "year": 1981, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grenander, U. 1981. Abstract Inference. Wiley, New York.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Decision-theoretic generalizations of the PAC model for neural net and other learning applications", "authors": [ { "first": "D", "middle": [], "last": "Haussler", "suffix": "" } ], "year": 1992, "venue": "Information and Computation", "volume": "100", "issue": "", "pages": "78--150", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haussler, D. 1992. Decision-theoretic generalizations of the PAC model for neural net and other learning applications. Information and Computation, 100:78-150.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A spectral algorithm for learning hidden Markov models", "authors": [ { "first": "D", "middle": [], "last": "Hsu", "suffix": "" }, { "first": "S", "middle": [ "M" ], "last": "Kakade", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Conference on Learning Theory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hsu, D., S. M. Kakade, and T. Zhang. 2009. A spectral algorithm for learning hidden Markov models. In Proceedings of the Conference on Learning Theory.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "The VC-dimensions of finite automata with n states", "authors": [ { "first": "Y", "middle": [], "last": "Ishigami", "suffix": "" }, { "first": "S", "middle": [], "last": "Tani", "suffix": "" } ], "year": 1993, "venue": "Proceedings of Algorithmic Learning Theory", "volume": "", "issue": "", "pages": "328--341", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ishigami, Y. and S. Tani. 1993. 
The VC-dimensions of finite automata with n states. In Proceedings of Algorithmic Learning Theory, pages 328-341.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "VC-dimensions of finite automata and commutative finite automata with k letters and n states", "authors": [ { "first": "Y", "middle": [], "last": "Ishigami", "suffix": "" }, { "first": "S", "middle": [], "last": "Tani", "suffix": "" } ], "year": 1997, "venue": "Applied Mathematics", "volume": "74", "issue": "3", "pages": "229--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ishigami, Y. and S. Tani. 1997. VC-dimensions of finite automata and commutative finite automata with k letters and n states. Applied Mathematics, 74(3):229-240.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Observable operator models for discrete stochastic time series", "authors": [ { "first": "H", "middle": [], "last": "Jaeger", "suffix": "" } ], "year": 1999, "venue": "Neural Computation", "volume": "12", "issue": "", "pages": "1371--1398", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jaeger, H. 1999. Observable operator models for discrete stochastic time series. Neural Computation, 12:1371-1398.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Cryptographic limitations on learning Boolean formulae and finite automata", "authors": [ { "first": "M", "middle": [], "last": "Kearns", "suffix": "" }, { "first": "L", "middle": [], "last": "Valiant", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kearns, M. and L. Valiant. 1989. Cryptographic limitations on learning Boolean formulae and finite automata.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Proceedings of the 21st Association for Computing Machinery Symposium on the Theory of Computing", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "433--444", "other_ids": {}, "num": null, "urls": [], "raw_text": "In Proceedings of the 21st Association for Computing Machinery Symposium on the Theory of Computing, pages 433-444.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "An Introduction to Computational Learning Theory", "authors": [ { "first": "M", "middle": [ "J" ], "last": "Kearns", "suffix": "" }, { "first": "U", "middle": [ "V" ], "last": "Vazirani", "suffix": "" } ], "year": 1994, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kearns, M. J. and U. V. Vazirani. 1994. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Corpus-based induction of syntactic structure: Models of dependency and constituency", "authors": [ { "first": "D", "middle": [], "last": "Klein", "suffix": "" }, { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "478--487", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, D. and C. D. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. 
In Proceedings of the Association for Computational Linguistics, pages 478-487.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Local Rademacher complexities and oracle inequalities in risk minimization", "authors": [ { "first": "V", "middle": [], "last": "Koltchinskii", "suffix": "" } ], "year": 2006, "venue": "The Annals of Statistics", "volume": "34", "issue": "6", "pages": "2593--2656", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koltchinskii, V. 2006. Local Rademacher complexities and oracle inequalities in risk minimization. The Annals of Statistics, 34(6):2593-2656.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "How to cover a grammar", "authors": [ { "first": "R", "middle": [], "last": "Leermakers", "suffix": "" } ], "year": 1989, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "135--142", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leermakers, R. 1989. How to cover a grammar. In Proceedings of the Association for Computational Linguistics, pages 135-142.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Foundations of Statistical Natural Language Processing", "authors": [ { "first": "C", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Manning, C. D. and H. Sch\u00fctze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Some applications of concentration inequalities to statistics", "authors": [ { "first": "P", "middle": [], "last": "Massart", "suffix": "" } ], "year": 2000, "venue": "Annales de la Facult\u00e9 des Sciences de Toulouse, IX", "volume": "", "issue": "2", "pages": "245--303", "other_ids": {}, "num": null, "urls": [], "raw_text": "Massart, P. 2000. Some applications of concentration inequalities to statistics. Annales de la Facult\u00e9 des Sciences de Toulouse, IX(2):245-303.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Context-Free Grammars: Covers, Normal Forms, and Parsing", "authors": [ { "first": "A", "middle": [], "last": "Nijholt", "suffix": "" } ], "year": 1980, "venue": "Lecture Notes in Computer Science", "volume": "93", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nijholt, A. 1980. Context-Free Grammars: Covers, Normal Forms, and Parsing (volume 93 of Lecture Notes in Computer Science). Springer-Verlag, Berlin.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "PAC-learnability of probabilistic deterministic finite state automata in terms of variation distance", "authors": [ { "first": "N", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "P", "middle": [ "W" ], "last": "Goldberg", "suffix": "" } ], "year": 2007, "venue": "Proceedings of Algorithmic Learning Theory", "volume": "", "issue": "", "pages": "157--170", "other_ids": {}, "num": null, "urls": [], "raw_text": "Palmer, N. and P. W. Goldberg. 2007. PAC-learnability of probabilistic deterministic finite state automata in terms of variation distance. 
In Proceedings of Algorithmic Learning Theory, pages 157-170.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Inside-outside reestimation from partially bracketed corpora", "authors": [ { "first": "F", "middle": [ "C N" ], "last": "Pereira", "suffix": "" }, { "first": "Y", "middle": [], "last": "Schabes", "suffix": "" } ], "year": 1992, "venue": "Proceedings of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "128--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pereira, F. C. N. and Y. Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the Association for Computational Linguistics, pages 128-135.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Inductive inference, DFAs, and computational complexity", "authors": [ { "first": "L", "middle": [], "last": "Pitt", "suffix": "" } ], "year": 1989, "venue": "Analogical and Inductive Inference", "volume": "397", "issue": "", "pages": "18--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pitt, L. 1989. Inductive inference, DFAs, and computational complexity. Analogical and Inductive Inference, 397:18-44.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Convergence of Stochastic Processes", "authors": [ { "first": "D", "middle": [], "last": "Pollard", "suffix": "" } ], "year": 1984, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pollard, D. 1984. Convergence of Stochastic Processes. Springer-Verlag, New York.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Automata Learning and Its Applications", "authors": [ { "first": "D", "middle": [], "last": "Ron", "suffix": "" } ], "year": 1995, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ron, D. 1995. Automata Learning and Its Applications. Ph.D. thesis, Hebrew University of Jerusalem.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "On the learnability and usage of acyclic probabilistic finite automata", "authors": [ { "first": "D", "middle": [], "last": "Ron", "suffix": "" }, { "first": "Y", "middle": [], "last": "Singer", "suffix": "" }, { "first": "N", "middle": [], "last": "Tishby", "suffix": "" } ], "year": 1998, "venue": "Journal of Computer and System Sciences", "volume": "56", "issue": "2", "pages": "133--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ron, D., Y. Singer, and N. Tishby. 1998. On the learnability and usage of acyclic probabilistic finite automata. Journal of Computer and System Sciences, 56(2):133-152.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Learnability and stability in the general learning setting", "authors": [ { "first": "S", "middle": [], "last": "Shalev-Shwartz", "suffix": "" }, { "first": "O", "middle": [], "last": "Shamir", "suffix": "" }, { "first": "K", "middle": [], "last": "Sridharan", "suffix": "" }, { "first": "N", "middle": [], "last": "Srebro", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Conference on Learning Theory", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shalev-Shwartz, S., O. Shamir, K. Sridharan, and N. Srebro. 2009. Learnability and stability in the general learning setting. 
In Proceedings of the Conference on Learning Theory.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Introduction to the Theory of Computation", "authors": [ { "first": "M", "middle": [], "last": "Sipser", "suffix": "" } ], "year": 2006, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sipser, M. 2006. Introduction to the Theory of Computation, Second Edition. Thomson Course Technology, Boston, MA.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Sharper bounds for Gaussian and empirical processes", "authors": [ { "first": "M", "middle": [], "last": "Talagrand", "suffix": "" } ], "year": 1994, "venue": "Annals of Probability", "volume": "22", "issue": "", "pages": "28--76", "other_ids": {}, "num": null, "urls": [], "raw_text": "Talagrand, M. 1994. Sharper bounds for Gaussian and empirical processes. Annals of Probability, 22:28-76.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "On the learnability of hidden Markov models", "authors": [ { "first": "S", "middle": [ "A" ], "last": "Terwijn", "suffix": "" } ], "year": 2002, "venue": "P. Adriaans, H. Fernow, & M. van Zaane. Grammatical Inference: Algorithms and Applications (Lecture Notes in Computer Science)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Terwijn, S. A. 2002. On the learnability of hidden Markov models. In P. Adriaans, H. Fernow, & M. van Zaane. Grammatical Inference: Algorithms and Applications (Lecture Notes in Computer Science).", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Optimal aggregation of classifiers in statistical learning", "authors": [ { "first": "A", "middle": [], "last": "Tsybakov", "suffix": "" } ], "year": 2004, "venue": "The Annals of Statistics", "volume": "32", "issue": "1", "pages": "135--166", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsybakov, A. 2004. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32(1):135-166.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "Statistical Learning Theory", "authors": [ { "first": "V", "middle": [ "N" ], "last": "Vapnik", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vapnik, V. N. 1998. Statistical Learning Theory. Wiley-Interscience, New York.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "An example of a Viterbi parse tree which represents a satisfying assignment for", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Expectation-Maximization Algorithm with Proper Approximations.", "uris": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "content": "", "text": "where the first inequality follows from f \u2208 F m (\u03b8 k,i \u2265 m \u2212s ) and the second from |z| \u2264 log 2 m. In addition, from the requirements on p we have", "num": null }, "TABREF1": { "type_str": "table", "html": null, "content": "
Proposition 3
Let p \u2208 P(\u03b1, L, r, q, B, G) and let F_m be as defined earlier. There exists an M such that for any m > M we have
[display equation: a bound on p(\u00b7), with f ranging over F_m]
", "text": "].) Let a \u2208[0, 1] and let b = a if a \u2208 [\u03b3, 1 \u2212 \u03b3], b = \u03b3 if a \u2264 \u03b3, and b = 1 \u2212 \u03b3 if a \u2265 1 \u2212 \u03b3.Then for any \u2264 1/2 such that \u03b3 \u2264 /(1 + ) we have log a/b \u2264 .", "num": null } } } }