Columns: source_text (string, 27–368 characters); label (int64, 0 or 1); target_text (string, 1–5.38k characters).
This paper talks about Pseudo-Projective Dependency Parsing.
0
At each point during the derivation, the parser has a choice between pushing the next input token onto the stack – with or without adding an arc from the token on top of the stack to the token pushed – and popping a token from the stack – with or without adding an arc from the next input token to the token popped.
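A minimal sketch (our own action names and data layout, not the authors') of these four derivation actions, with labels and projectivization omitted:

    # Sketch of the four parser actions described above (hypothetical names;
    # arcs are (head, dependent) pairs, following the description).
    def step(action, stack, buffer, arcs):
        if action == "shift":          # push next token, no arc
            stack.append(buffer.pop(0))
        elif action == "shift-arc":    # push next token, arc stack-top -> token
            arcs.append((stack[-1], buffer[0]))
            stack.append(buffer.pop(0))
        elif action == "pop":          # pop a token, no arc
            stack.pop()
        elif action == "pop-arc":      # pop a token, arc next-input -> token
            arcs.append((buffer[0], stack[-1]))
            stack.pop()
        return stack, buffer, arcs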
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Full Chinese personal names are in one respect simple: they are always of the form family+given.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The weak hypothesis chosen was then restricted to be a predictor in favor of this label.
Two general approaches are presented and two combination techniques are described for each approach.
0
The PCFG was trained from the same sections of the Penn Treebank as the other three parsers.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words. (Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of...)
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Microsoft’s approach uses dependency trees, while others use hierarchical phrase models.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Evaluation of links: a link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
handled given appropriate models.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, suppose the current model assigns a belief value of .60 to {A, B}, meaning that it is 60% sure that the correct hypothesis is either A or B. Then new evidence arrives with a belief value of .70 assigned... (Initially there are no competing hypotheses because all hypotheses are included in θ by definition.)
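A small worked sketch of how such belief values combine under Dempster's rule of combination; the hypothesis sets and numbers below are our own toy choices, not the paper's:

    # Masses are dicts from frozenset hypotheses to belief values;
    # THETA is the full hypothesis set.
    THETA = frozenset({"A", "B", "C"})

    def combine(m1, m2):
        out, conflict = {}, 0.0
        for s1, v1 in m1.items():
            for s2, v2 in m2.items():
                inter = s1 & s2
                if inter:
                    out[inter] = out.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
        # renormalize by the non-conflicting mass
        return {s: v / (1.0 - conflict) for s, v in out.items()}

    m1 = {frozenset({"A", "B"}): 0.60, THETA: 0.40}  # 60% sure it is A or B
    m2 = {frozenset({"A"}): 0.70, THETA: 0.30}       # hypothetical new evidence
    print(combine(m1, m2))  # {A}: 0.70, {A, B}: 0.18, THETA: 0.12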
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Also, the method of using keywords rules out phrases which don’t contain popular words in the domain.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; Lü et al., 2007).
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
1
In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
The model was parameterized such that the joint probability of a (label, feature-set) pair is written as P(yi, xi) = P(yi) P(mi) ∏j P(xij | yi). The model assumes that (y, x) pairs are generated by an underlying process where the label is first chosen with some prior probability P(yi); the number of features mi is then chosen with some probability P(mi); finally the features are independently generated with probabilities P(xij | yi).
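A minimal sketch (toy probability tables and names of our own) of computing this factored joint probability:

    # P(y, x) = P(y) * P(m) * prod_j P(x_j | y): label first, then the
    # feature count, then each feature independently given the label.
    def joint_prob(label, features, p_label, p_m, p_feat):
        p = p_label[label] * p_m[len(features)]
        for f in features:
            p *= p_feat[(f, label)]
        return p

    p = joint_prob("location", ["contains(New)", "contains(York)"],
                   p_label={"location": 0.4},
                   p_m={2: 0.5},
                   p_feat={("contains(New)", "location"): 0.01,
                           ("contains(York)", "location"): 0.02})
    # 0.4 * 0.5 * 0.01 * 0.02 = 4e-05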
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations for entities already mentioned previously.
0
We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
At this stage the lattice path corresponds to segments only, with no PoS assigned to them.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
In Figure 4 we display the number of system comparisons for which we concluded statistical significance.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
However, MADA is language-specific and relies on manually constructed dictionaries.
They have made use of local and global features to deal with instances of the same token in a document.
0
Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones).
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
We concentrate on those sets.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The overall performance of our joint framework demonstrates that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperforms upper bounds proposed by previous joint disambiguation systems and achieves segmentation and parsing results on a par with state-of-the-art standalone applications.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
A receives a votes, and B receives b votes.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3).
There are clustering approaches that assign a single POS tag to each word type.
0
The corresponding token words w are drawn conditioned on t and θ. Our full generative model is given by: P(φ, θ | T, α, β) = ∏_{t=1}^{K} P(φt | α) P(θt | T, α). The transition distribution φt for each tag t is drawn according to DIRICHLET(α, K), where α is the shared transition and emission distribution hyperparameter.
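A minimal sketch of the per-tag parameter draws in this generative story, with toy hyperparameters of our own choosing:

    import numpy as np

    # Each tag t = 1..K draws a transition distribution phi_t from a
    # symmetric Dirichlet with shared hyperparameter alpha (toy values).
    K, alpha = 5, 0.1
    rng = np.random.default_rng(0)
    phi = rng.dirichlet(np.full(K, alpha), size=K)  # phi[t] sums to 1 over next tags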
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Moreover, they are used as substantives much... (Unlike machine translation, constituency parsing is not significantly affected by variable word order.)
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
In Section 2, we briefly review our approach to statistical machine translation.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
We use multiple iterations of sampling (see Figure 2 for a depiction).
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
This supports our main thesis that decisions taken by a single, improved grammar are beneficial for both tasks.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Schapire and Singer show that the training error is bounded above by ∏t Zt. Thus, in order to greedily minimize an upper bound on training error, on each iteration we should search for the weak hypothesis ht and the weight αt that minimize Zt.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Decreasing the threshold results in higher mWER due to additional search errors.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
A Stochastic Finite-State Word-Segmentation Algorithm for Chinese
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Process nominals name the action of the transitive or ditransitive verb from which they derive.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
0
For a sentence x and a state sequence z, a first-order Markov model defines a distribution p(x, z) = ∏j p(zj | zj−1) p(xj | zj), where Val(X) corresponds to the entire vocabulary.
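A minimal sketch of scoring a (sentence, state-sequence) pair under this first-order factorization; the table layout and names are ours:

    import math

    # log p(x, z) = log p(z_1) + log p(x_1 | z_1)
    #             + sum_j [ log p(z_j | z_{j-1}) + log p(x_j | z_j) ]
    def log_joint(x, z, start, trans, emit):
        lp = math.log(start[z[0]]) + math.log(emit[(x[0], z[0])])
        for j in range(1, len(x)):
            lp += math.log(trans[(z[j - 1], z[j])]) + math.log(emit[(x[j], z[j])])
        return lp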
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Overall, language modeling significantly impacts decoder performance.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
For example, Gazdar (1985) discusses the applicability of Indexed Grammars (IGs) to Natural Language in terms of the structural descriptions assigned; and Berwick (1984) discusses the strong generative capacity of Lexical-Functional Grammar (LFG) and Government and Binding (GB) grammars.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The reference medicine for Silapo is EPREX/ERYPO, which contains epoetin alfa.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
In our case multi-threading is trivial because our data structures are read-only and uncached.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
And time is short.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
phrase (markContainsVerb).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
It also does not prune, so comparing to our pruned model would be unfair.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Various segmentation approaches were then compared with human performance:
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
We call these N − 1 words the state.
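In other words, only the N − 1 most recent words can affect the next query, so they are all a decoder needs to carry between queries. A sketch (our own, not KenLM's API):

    # For an N-gram model, only the last N-1 words (the "state") matter.
    N = 5

    def advance(state, word):
        return (state + (word,))[-(N - 1):]   # keep at most N-1 words

    s = ()
    for w in "the quick brown fox jumps".split():
        s = advance(s, w)
    # s == ('quick', 'brown', 'fox', 'jumps')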
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
1
Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Two general approaches are presented and two combination techniques are described for each approach.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
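A tiny worked example (toy numbers of our own) of this normalization, which centres each judge's average judgement at 3:

    # Normalized judgement = raw + (3 - judge's average raw judgement).
    raw = [2, 3, 2, 3]                         # one judge's raw judgements
    avg = sum(raw) / len(raw)                  # 2.5
    normalized = [r + (3 - avg) for r in raw]  # [2.5, 3.5, 2.5, 3.5]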
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Let us say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse?
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Let us notate the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in 们 men0 will be denoted unseen(们).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Sometimes, multiple words are needed, like “vice chairman”, “prime minister” or “pay for” (“pay” and “pay for” are different senses in the CC-domain).
This assumption, however, is not inherent to type-based tagging models.
0
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We outlined the definition of a family of constrained grammatical formalisms, called Linear Context-Free Rewriting Systems.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived.
The texts were annotated with the RSTtool.
0
• Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.
A beam search concept is applied as in speech recognition.
0
To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000).
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Initially, the Dempster-Shafer model assumes that all hypotheses are equally likely, so it creates a set called θ that includes all hypotheses.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted; however, the effect of speech recognition errors is not covered.
A beam search concept is applied as in speech recognition.
0
2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verb group is generated.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
This section describes an algorithm based on boosting algorithms, which were previously developed for supervised machine learning problems.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
An examination of the subjects' bracketings confirmed that these instructions were satisfactory in yielding plausible word-sized units.
Two general approaches are presented and two combination techniques are described for each approach.
0
Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006).
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, even if the contexts surrounding an anaphor and candidate match exactly, they are not coreferent if they have substantially different meanings. (We would be happy to make our manually annotated test data available to others who also want to evaluate their coreference resolver on the MUC4 or Reuters collections.)
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Table 5: Evaluation of 100 randomly sampled variation nuclei types.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We considered using the MUC6 and MUC7 data sets, but their training sets were far too small to learn reliable co-occurrence statistics for a large set of contextual role relationships.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at each point.
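A minimal sketch of this greedy maximum-matching procedure (our own; a real segmenter would handle out-of-dictionary material more carefully than the single-character fallback used here):

    # Greedy maximum matching: at each position take the longest
    # dictionary match, falling back to a single character.
    def greedy_segment(sentence, dictionary, max_len=10):
        out, i = [], 0
        while i < len(sentence):
            for l in range(min(max_len, len(sentence) - i), 0, -1):
                if l == 1 or sentence[i:i + l] in dictionary:
                    out.append(sentence[i:i + l])
                    i += l
                    break
        return out

    # greedy_segment("abcd", {"ab", "abc"}) -> ["abc", "d"]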
The second algorithm builds on a boosting algorithm called AdaBoost.
0
For example, the independence assumptions mean that the model fails to capture the dependence between specific and more general features (for example, the fact that the feature full-string=New_York is always seen with the features contains(New) and contains(York) and is never seen with a feature such as contains(Group)). The baseline method tags all entities as the most frequent class type (organization).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Here we use the Good-Turing estimate (Baayen 1989; Church and Gale 1991), whereby the aggregate probability of previously unseen instances of a construction is estimated as n1/N, where N is the total number of observed tokens and n1 is the number of types observed only once.
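A minimal sketch of this estimate on toy data; n1/N is the probability mass reserved for novel types:

    from collections import Counter

    # Good-Turing mass for unseen events: p(unseen) ~= n1 / N.
    tokens = ["a", "b", "a", "c", "d", "d", "e"]
    counts = Counter(tokens)
    n1 = sum(1 for c in counts.values() if c == 1)  # types seen once: b, c, e
    N = len(tokens)
    p_unseen = n1 / N   # 3/7 of the mass reserved for novel types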
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
For the moment we will assume that there are only two possible labels: each yi is in {−1, +1}.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In Eq. (1), Pr(e_1^I) is the language model, which is a trigram language model in this case.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
In this section we present a partial evaluation of the current system, in three parts.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
Employing a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions is not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Exploiting Diversity in Natural Language Processing: Combining Parsers
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Ex: The regime gives itself the right...
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Because the Bikel parser has been parameterized for Arabic by the LDC, we do not change the default model settings.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Each spawned process must check whether x_1, ..., x_n and y_1, ..., y_n can be derived from B and C, respectively.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Finally, the assignment of PoS tags to OOV segments is subject to language-specific constraints relative to the token from which it originated.
It is probably the first analysis of Arabic parsing of this kind.
0
Evalb is a Java re-implementation of the standard labeled precision/recall metric. The ATB gives all punctuation a single tag.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Further, the special hash 0 suffices to flag empty buckets.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Each token may admit multiple analyses, each of which is a sequence of one or more lexemes (we use li to denote a lexeme) belonging to a presupposed Hebrew lexicon LEX.
A beam search concept is applied as in speech recognition.
0
Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
1
We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg- Kirkpatrick et al., 2010).
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
{1, ..., J} denotes a coverage set including all positions from the starting position 1 to position J, and j ∈ {J − L, ..., J}.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Clearly this is not the only way to estimate word frequencies, however, and one could consider applying other methods. In particular, since the problem is similar to the problem of assigning part-of-speech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags, one might consider a more sophisticated approach such as that described in Kupiec (1992); one could also use methods that depend on a small hand-tagged seed corpus, as suggested by one reviewer.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
It is striking that from this point of view many formalisms can be grouped together as having identically structured derivation tree sets.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We have presented two general approaches to studying parser combination: parser switching and parse hybridization.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Previous approaches have tried to find examples that are similar to the target domain.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The approach builds from an initial seed set for a category, and is quite similar to the decision list approach described in (Yarowsky 95).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets of sentences, this time from both systems, and compare their BLEU scores on these sets.
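A minimal sketch of pairwise bootstrap resampling, with one simplification of our own: it compares summed per-sentence scores, whereas BLEU is properly recomputed at the corpus level on each resample:

    import random

    # Repeatedly resample sentence indices and count how often system A
    # beats system B on the resampled sets.
    def bootstrap_wins(scores_a, scores_b, samples=1000):
        n, wins = len(scores_a), 0
        for _ in range(samples):
            idx = [random.randrange(n) for _ in range(n)]
            if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
                wins += 1
        return wins / samples   # fraction of resamples where A wins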
The manual evaluation of scoring translations on a graded scale from 1 to 5 seemed to be very hard to perform.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
That is, given a choice between segmenting a sequence abc into abc and ab, c, the former will always be picked so long as its cost does not exceed the summed costs of ab and c: while it is possible for abc to be so costly as to preclude the larger grouping, this will certainly not usually be the case.
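A tiny worked example with toy costs (negative log probabilities, so lower is better):

    cost = {"abc": 10.5, "ab": 6.0, "c": 5.0}
    # The single word 'abc' is chosen only if it is no more costly
    # than the two-word segmentation 'ab' + 'c'.
    pick_abc = cost["abc"] <= cost["ab"] + cost["c"]   # 10.5 <= 11.0 -> True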
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Since the work tapes store integers (which can be written in binary) that never exceed the size of the input, no configuration has space exceeding O(log n).
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
Eq. (7) is at 0 when: 1) ∀i: sign(g1(xi)) = sign(g2(xi)); 2) |gj(xi)| → ∞; and 3) sign(g1(xi)) = yi for i = 1, ..., m. In fact, Zco provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The ratio of buckets to entries is controlled by space multiplier m > 1.
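A minimal sketch (our own, far simpler than the real PROBING implementation) of a linear-probing table with buckets = m × entries and the hash value 0 flagging empty buckets:

    # Simplified linear-probing hash table: no resizing, no deletion;
    # keys are assumed to hash to nonzero values.
    class ProbingTable:
        EMPTY = 0   # the special hash value flagging empty buckets

        def __init__(self, entries, m=1.5):
            self.size = int(m * entries)   # buckets = m * entries, m > 1
            self.keys = [self.EMPTY] * self.size
            self.vals = [None] * self.size

        def insert(self, h, value):        # h: nonzero hash of the key
            i = h % self.size
            while self.keys[i] not in (self.EMPTY, h):
                i = (i + 1) % self.size    # probe linearly to the next bucket
            self.keys[i], self.vals[i] = h, value

        def lookup(self, h):
            i = h % self.size
            while self.keys[i] != self.EMPTY:
                if self.keys[i] == h:
                    return self.vals[i]
                i = (i + 1) % self.size
            return None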