A mathematical model specifies a relation among variables, either in functional form that maps inputs to outputs (e.g. y = m x + b) or in relation form (e.g. the following (x, y) pairs are part of the relation).
A probabilistic model specifies a probability distribution over possible values of random variables, e.g., P(x, y), rather than a strict deterministic relationship, e.g., y = f(x).
A trained model uses some training/learning algorithm to take as input a collection of possible models and a collection of data points (e.g. (x, y) pairs) and select the best model. Often this is in the form of choosing the values of parameters (such as m and b above) through a process of statistical inference.
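For a concrete (if toy) illustration of the "trained model" idea, here is a minimal sketch, assuming ordinary least squares as the learning algorithm (my choice of example, not part of the original discussion), that chooses the parameters m and b of y = m x + b from observed (x, y) pairs:

```python
# Minimal sketch: "train" the model y = m*x + b by choosing the parameters
# (m, b) that minimize squared error over the observed (x, y) pairs.

def fit_line(pairs):
    """Return (m, b) minimizing sum of (y - (m*x + b))**2 over the data."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    var_x = sum((x - mean_x) ** 2 for x, _ in pairs)
    m = cov_xy / var_x
    b = mean_y - m * mean_x
    return m, b

print(fit_line([(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8)]))  # roughly (2, 1)
```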
Claude Shannon
For example, a decade before Chomsky, Claude Shannon proposed probabilistic models of communication based on Markov chains of words. If you have a vocabulary of 100,000 words and a second-order Markov model in which the probability of a word depends on the previous two words, then you need a quadrillion (100,000^3 = 10^15) probability values to specify the model. The only feasible way to learn these 10^15 values is to gather statistics from data and introduce some smoothing method for the many cases where there is no data. Therefore, most (but not all) probabilistic models are trained. Also, many (but not all) trained models are probabilistic.
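To make "gather statistics from data and smooth" concrete, here is a minimal sketch, assuming a simple token list and add-one (Laplace) smoothing; these are my illustrative choices, not Shannon's actual construction:

```python
from collections import Counter

# Sketch of a second-order (trigram) Markov model: P(word | prev2, prev1) is
# estimated from counts, with add-one smoothing so that unseen continuations
# still get a small nonzero probability.

def train_trigram(tokens):
    trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab = set(tokens)
    def prob(prev2, prev1, word):
        return (trigrams[(prev2, prev1, word)] + 1) / (bigrams[(prev2, prev1)] + len(vocab))
    return prob

prob = train_trigram("the cat sat on the mat the cat ate".split())
print(prob("the", "cat", "sat"))   # seen trigram: relatively high
print(prob("the", "cat", "mat"))   # unseen trigram: small but nonzero
```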
As another example, consider the Newtonian model of gravitational attraction, which says that the force between two objects of mass m1 and m2 a distance r apart is given by
F = G m1 m2 / r^2
(This example brings up another distinction: the gravitational model is continuous and quantitative whereas the linguistic tradition has favored models that are discrete, categorical, and qualitative: a word is or is not a verb; there is no question of its degree of verbiness. For more on these distinctions, see Chris Manning's article on Probabilistic Syntax.)
A relevant probabilistic statistical model is the ideal gas law, which describes the pressure P of a gas in terms of the number of molecules N, the temperature T, the volume V, and Boltzmann's constant k:
P = N k T / V.
The equation can be derived from first principles using the tools of statistical mechanics. It is an uncertain, incorrect model; the true model would have to describe the motions of individual gas molecules. This model ignores that complexity and summarizes our uncertainty about the location of individual molecules. Thus, even though it is statistical and probabilistic, even though it does not completely model reality, it does provide both good predictions and insight—insight that is not available from trying to understand the true movements of individual molecules.
Now let's consider the non-statistical model of spelling expressed by the rule "I before E except after C." Compare that to the probabilistic, trained statistical model:
P(IE) = 0.0177    P(CIE) = 0.0014    P(*IE) = 0.0163
P(EI) = 0.0046    P(CEI) = 0.0005    P(*EI) = 0.0041
Accuracy("I before E") = 0.0177 / (0.0177 + 0.0046) = 0.793
Accuracy("I before E except after C") = (0.0005 + 0.0163) / (0.0005 + 0.0163 + 0.0014 + 0.0041) = 0.753
As a final example (not of statistical models, but of insight), consider the Theory of Supreme Court Justice Hand-Shaking: when the Supreme Court convenes, all attending justices shake hands with every other justice. The number of attendees, n, must be an integer in the range 0 to 9; what is the total number of handshakes, h, for a given n? Here are three possible explanations:
1. Each of n justices shakes hands with the other n - 1 justices, but that counts Alito/Breyer and Breyer/Alito as two separate shakes, so we should cut the total in half, and we end up with h = n × (n - 1) / 2.
2. To avoid double-counting, we will order the justices by seniority and only count a more-senior/more-junior handshake, not a more-junior/more-senior one. So we count, for each justice, the shakes with the more junior justices, and sum them up, giving h = Σi=1..n (i - 1).
3. Just look at this table:
   n: 0 1 2 3 4 5 6 7 8 9
   h: 0 0 1 3 6 10 15 21 28 36
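The three explanations do in fact agree, which a quick check (mine, not part of the original example) confirms:

```python
# Verify that the closed form n*(n-1)/2, the seniority-ordered sum of (i-1),
# and the table above all give the same handshake counts for n = 0..9.

closed_form = [n * (n - 1) // 2 for n in range(10)]
summed      = [sum(i - 1 for i in range(1, n + 1)) for n in range(10)]
table       = [0, 0, 1, 3, 6, 10, 15, 21, 28, 36]
assert closed_form == summed == table
print(closed_form)  # [0, 0, 1, 3, 6, 10, 15, 21, 28, 36]
```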
How successful are statistical language models?
Search engines: 100% of major players are trained and probabilistic. Their operation cannot be described by a simple function.
Speech recognition: 100% of major systems are trained and probabilistic, mostly relying on probabilistic hidden Markov models.
Machine translation: 100% of top competitors in competitions such as NIST use statistical methods. Some commercial systems use a hybrid of trained and rule-based approaches. Of the 4000 language pairs covered by machine translation systems, a statistical system is by far the best for every pair except Japanese-English, where the top statistical system is roughly equal to the top hybrid system.
Question answering: this application is less well-developed, and many systems build heavily on the statistical and probabilistic approach used by search engines. The IBM Watson system that recently won on Jeopardy is thoroughly probabilistic and trained, while Boris Katz's START is a hybrid. All systems use at least some statistical techniques.
Now let's look at some components that are of interest only to the computational linguist, not to the end user:
Word sense disambiguation: 100% of top competitors at the SemEval-2 competition used statistical techniques; most are probabilistic; some use a hybrid approach incorporating rules from sources such as Wordnet.
Coreference resolution: The majority of current systems are statistical, although we should mention the system of Haghighi and Klein, which can be described as a hybrid system that is mostly rule-based rather than trained, and performs on par with top statistical systems.
Part of speech tagging: Most current systems are statistical. The Brill tagger stands out as a successful hybrid system: it learns a set of deterministic rules from statistical data.
Parsing: There are many parsing systems, using multiple approaches. Almost all of the most successful are statistical, and the majority are probabilistic (with a substantial minority of deterministic parsers).
Clearly, it is inaccurate to say that statistical models (and probabilistic models) have achieved limited success; rather they have achieved a dominant (although not exclusive) position.
Another measure of success is the degree to which an idea captures a community of researchers. As Steve Abney wrote in 1996, "In the space of the last ten years, statistical methods have gone from being virtually unknown in computational linguistics to being a fundamental given. ... anyone who cannot at least use the terminology persuasively risks being mistaken for kitchen help at the ACL [Association for Computational Linguistics] banquet."
Now of course, the majority doesn't rule -- just because everyone is jumping on some bandwagon, that doesn't make it right. But I made the switch: after about 14 years of trying to get language models to work using logical rules, I started to adopt probabilistic approaches (thanks to pioneers like Gene Charniak (and Judea Pearl for probability in general) and to my colleagues who were early adopters, like Dekai Wu). And I saw everyone around me making the same switch. (And I didn't see anyone going in the other direction.) We all saw the limitations of the old tools, and the benefits of the new.
And while it may seem crass and anti-intellectual to consider a financial measure of success, it is worth noting that the intellectual offspring of Shannon's theory create several trillion dollars of revenue each year, while the offspring of Chomsky's theories generate well under a billion.
This section has shown that one reason why the vast majority of researchers in computational linguistics use statistical models is an engineering reason: statistical models have state-of-the-art performance, and in most cases non-statistical models perform worse. For the remainder of this essay we will concentrate on scientific reasons: that probabilistic models better represent linguistic facts, and statistical techniques make it easier for us to make sense of those facts.
Is there anything like [the statistical model] notion of success in the history of science?
A dictionary definition of science is "the systematic study of the structure and behavior of the physical and natural world through observation and experiment," which stresses accurate modeling over insight, but it seems to me that both notions have always coexisted as part of doing science. To test that, I consulted the epitome of doing science, namely Science. I looked at the current issue and chose a title and abstract at random:
Chlorinated Indium Tin Oxide Electrodes with High Work Function for Organic Device Compatibility
In organic light-emitting diodes (OLEDs), a stack of multiple organic layers facilitates charge flow from the low work function [~4.7 electron volts (eV)] of the transparent electrode (tin-doped indium oxide, ITO) to the deep energy levels (~6 eV) of the active light-emitting organic materials. We demonstrate a chlorinated ITO transparent electrode with a work function of >6.1 eV that provides a direct match to the energy levels of the active light-emitting materials in state-of-the-art OLEDs. A highly simplified green OLED with a maximum external quantum efficiency (EQE) of 54% and power efficiency of 230 lumens per watt using outcoupling enhancement was demonstrated, as were EQE of 50% and power efficiency of 110 lumens per watt at 10,000 candelas per square meter.
It certainly seems that this article is much more focused on "accurately modeling the world" than on "providing insight." The paper does indeed fit into a body of theories, but it is mostly reporting on specific experiments and the results obtained from them (e.g. efficiency of 54%).
I then looked at all the titles and abstracts from the current issue of Science:
Comparative Functional Genomics of the Fission Yeasts
Dimensionality Control of Electronic Phase Transitions in Nickel-Oxide Superlattices
Competition of Superconducting Phenomena and Kondo Screening at the Nanoscale
Chlorinated Indium Tin Oxide Electrodes with High Work Function for Organic Device Compatibility
Probing Asthenospheric Density, Temperature, and Elastic Moduli Below the Western United States
Impact of Polar Ozone Depletion on Subtropical Precipitation
Fossil Evidence on Origin of the Mammalian Brain
Industrial Melanism in British Peppered Moths Has a Singular and Recent Mutational Origin
The Selaginella Genome Identifies Genetic Changes Associated with the Evolution of Vascular Plants
Chromatin "Prepattern" and Histone Modifiers in a Fate Choice for Liver and Pancreas
Spatial Coupling of mTOR and Autophagy Augments Secretory Phenotypes
Diet Drives Convergence in Gut Microbiome Functions Across Mammalian Phylogeny and Within Humans
The Toll-Like Receptor 2 Pathway Establishes Colonization by a Commensal of the Human Microbiota
A Packing Mechanism for Nucleosome Organization Reconstituted Across a Eukaryotic Genome