source_text: string (lengths 27 to 368)
label: int64 (0 or 1)
target_text: string (lengths 1 to 5.38k)
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
BABAR uses a DempsterShafer decision model (Stefik, 1995) to combine the evidence provided by the knowledge sources.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
After tagging a large corpus with an automatic NE tagger, the method tries to find sets of paraphrases automatically without being given a seed phrase or any kind of cue.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
One class comprises words derived by productive morphological processes, such as plural noun formation using the suffix 们 men.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The accuracies for link were 73% and 86% on two evaluated domains.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
This leads to a linear combination of domain-specific probabilities, with weights in [0, 1], normalized to sum to 1.
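A minimal sketch of such a linear mixture (illustrative only, not the authors' implementation), assuming per-domain phrase translation probabilities are available as Python dictionaries:

    # Sketch: combine domain-specific probabilities with weights in [0, 1]
    # that are normalized to sum to 1. Names and data are illustrative.
    def mix_probabilities(domain_probs, weights):
        total = sum(weights)
        weights = [w / total for w in weights]   # normalize to sum to 1
        mixed = {}
        for w, probs in zip(weights, domain_probs):
            for phrase, p in probs.items():
                mixed[phrase] = mixed.get(phrase, 0.0) + w * p
        return mixed

    # Example: out-of-domain table weighted 0.3, in-domain table weighted 0.7.
    out_domain = {"maison": 0.6, "domicile": 0.4}
    in_domain = {"maison": 0.9, "domicile": 0.1}
    print(mix_probabilities([out_domain, in_domain], [0.3, 0.7]))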
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The breakdown of the different types of words found by ST in the test corpus is given in Table 3.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(e.g., N.Y. would contribute this feature, IBM would not). nonalpha=x Appears if the spelling contains any characters other than upper or lower case letters.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Caching for IRSTLM is smaller at 0.09 GB resident memory, though it supports only a single thread.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993).
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
collected too.
Two general approaches are presented and two combination techniques are described for each approach.
0
These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.
There is no global pruning.
0
({1, ..., m} \ {l1, l2}, l) → ({1, ..., m − 1} \ {l1, l2, l3}, l′)
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
This can be repeated several times to collect a list of author / book title pairs and expressions.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
In this example there are four "input characters," A, B, C and D, and these map respectively to four "pronunciations" a, b, c and d. Furthermore, there are four "words" represented in the dictionary.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l ← pivot; if A[pivot] > k then u ← pivot.
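This is ordinary bisection over a sorted array. A minimal Python sketch (illustrative, not KenLM's C++ code; the bound updates use pivot ± 1 so termination is guaranteed):

    # Sketch: shrink [l, u] around the pivot until the key k is found or absent.
    def find(A, k):
        l, u = 0, len(A) - 1
        while l <= u:
            pivot = (l + u) // 2
            if A[pivot] == k:
                return pivot       # key found
            elif A[pivot] < k:
                l = pivot + 1      # pivot becomes the new lower bound
            else:
                u = pivot - 1      # pivot becomes the new upper bound
        return None                # key not in the array

    print(find([2, 5, 9, 13, 21], 13))   # -> 3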
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
For example, in Figure 3, we can see that the phrases in the “buy”, “acquire” and “purchase” sets are mostly paraphrases.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Quantization can be improved by jointly encoding probability and backoff.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
In this shared task, we were also confronted with this problem, and since we had no funding for paying human judgements, we asked participants in the evaluation to share the burden.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(b) F.i'JJI!
Here we present two algorithms.
0
Using the virtual distribution D̃(i) and pseudo-labels ỹ_i, values for W_0, W_+ and W_− can be calculated for each possible weak hypothesis (i.e., for each feature x ∈ X_i); the weak hypothesis with minimal value for W_0 + 2√(W_+ W_−) can be chosen as before; and the weight for this weak hypothesis α_t = ½ ln(W_+ / W_−) can be calculated.
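A rough sketch of that selection step, assuming the virtual distribution and pseudo-labels are already computed; the +1/−1 label encoding, the variable names, and the epsilon smoothing are assumptions made for illustration:

    import math

    # Sketch: choose the feature minimizing W0 + 2*sqrt(W+ * W-), with weight
    # alpha = 0.5 * ln(W+ / W-); epsilon avoids division by zero (assumption).
    def select_weak_hypothesis(features, examples, dist, pseudo_labels, eps=1e-4):
        best = None
        for x in features:
            w_plus = w_minus = w_zero = 0.0
            for i, feats in enumerate(examples):
                if x not in feats:
                    w_zero += dist[i]          # examples the feature does not cover
                elif pseudo_labels[i] == +1:
                    w_plus += dist[i]          # covered and pseudo-label agrees
                else:
                    w_minus += dist[i]         # covered and pseudo-label disagrees
            score = w_zero + 2.0 * math.sqrt(w_plus * w_minus)
            if best is None or score < best[0]:
                alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
                best = (score, x, alpha)
        return best   # (objective value, chosen feature, its weight alpha_t)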
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
An important subproblem of language model storage is therefore sparse mapping: storing values for sparse keys using little memory then retrieving values given keys using little time.
These clusters are computed using an SVD variant without relying on transitional structure.
0
(2009).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Therefore, for n-gram wn1 , all leftward extensions wn0 are an adjacent block in the n + 1-gram array.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
In contrast, NNP (proper nouns) form a large portion of vocabulary.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Two sets of examples from Gan are given in (1) and (2) (= Gan's Appendix B, exx.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Given around 90,000 unlabeled examples, the methods described in this paper classify names with over 91% accuracy.
Here both parametric and non-parametric models are explored.
0
Mi(c) is a binary function returning t when parser i (from among the k parsers) suggests constituent c should be in the parse.
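A small sketch of how such indicator functions can drive constituent voting (toy data structures; the paper's combination techniques are richer than a bare majority vote): a constituent enters the hybrid parse when more than half of the k parsers propose it.

    # Sketch: M_i(c) is 1 if parser i proposes constituent c, else 0.
    def m(parser_output, c):
        return 1 if c in parser_output else 0

    def majority_constituents(parser_outputs):
        k = len(parser_outputs)
        candidates = set().union(*parser_outputs)
        return {c for c in candidates
                if sum(m(p, c) for p in parser_outputs) > k / 2}

    # Constituents as (label, start, end) spans from three toy parsers.
    p1 = {("NP", 0, 2), ("VP", 2, 5)}
    p2 = {("NP", 0, 2), ("VP", 3, 5)}
    p3 = {("NP", 0, 2), ("VP", 2, 5)}
    print(majority_constituents([p1, p2, p3]))   # NP(0,2) and VP(2,5) survive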
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models’ performance against previously reported results.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This result suggests that all of the contextual role KSs can provide useful information for resolving anaphora.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
The transition from f, to a final state transduces c to the grammatical tag PL with cost cost(unseen(f,)): cost(i¥JJ1l.ir,) == cost(i¥JJ1l.)
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Our results show that BABAR achieves good performance in both domains, and that the contextual role knowledge improves performance, especially on pronouns.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We report token- and type-level accuracy in Table 3 and 6 for all languages and system settings.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
While building a machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
We then evaluate the approach in two steps.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
For example, the two NEs “Eastern Group Plc” and “Hanson Plc” have the following contexts.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
We report results for the best and median hyperparameter settings obtained in this way.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks.
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
This number must be less than or equal to n − 1.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Training and testing is based on the Europarl corpus.
This assumption, however, is not inherent to type-based tagging models.
0
They are set to fixed constants.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998).
They found replacing it with a ranked evaluation to be more suitable.
0
For the automatic evaluation, we used BLEU, since it is the most established metric in the field.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We can check what the consequences of less manual annotation of results would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% fewer.
This paper talks about Unsupervised Models for Named Entity Classification.
0
In the next section we present an alternative approach that builds two classifiers while attempting to satisfy the above constraints as much as possible.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Note that on some examples (around 2% of the test set) CoBoost abstained altogether; in these cases we labeled the test example with the baseline label, organization.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Thus, we feel fairly confident that for the examples we have considered from Gan's study a solution can be incorporated, or at least approximated, within a finite-state framework.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The family name set is restricted: there are a few hundred single-hanzi family names, and about ten double-hanzi ones.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
It was filtered to retain the top 30 translations for each source phrase using the TM part of the current log-linear model.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
This supports our main thesis that decisions taken by a single, improved grammar are beneficial for both tasks.
There are clustering approaches that assign a single POS tag to each word type.
0
Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
[Figure residue removed.] Figure 14: Correlation between manual and automatic scores for English-French. Figure 15: Correlation between manual and automatic scores for English-Spanish. Further panels cover English-German, each split into in-domain and out-of-domain plots with Adequacy and Fluency on the axes and per-system points (e.g., systran, ntt, nrc, rali, upv, upc-mr, utd, upc-jmc, uedin-birch).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.
It is probably the first analysis of Arabic parsing of this kind.
0
We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003).
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The government has to make a decision, and do it quickly.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Several extensions of AdaBoost for multiclass problems have been suggested (Freund and Schapire 97; Schapire and Singer 98).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Figure 3: Dev set learning curves for sentence lengths ≤ 70 (axis residue removed: Bikel, training trees 5000 / 10000 / 15000).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the annotator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Word Re-ordering and DP-based Search in Statistical Machine Translation
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For each domain, we created a blind test set by manually annotating 40 documents with anaphoric chains, which represent sets of noun phrases that are coreferent (as done for MUC6 (MUC6 Proceedings, 1995)). m3(S) = Σ_{X∩Y=S} m1(X) · m2(Y) / (1 − Σ_{X∩Y=∅} m1(X) · m2(Y)) (1)
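A small sketch of the Dempster-Shafer combination in equation (1), with belief sets represented as frozensets mapped to masses (illustrative only, not BABAR's code; it assumes the two sources are not in total conflict):

    # Sketch of Dempster's rule:
    # m3(S) = sum_{X ∩ Y = S} m1(X)*m2(Y) / (1 - sum_{X ∩ Y = ∅} m1(X)*m2(Y))
    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for x, px in m1.items():
            for y, py in m2.items():
                s = x & y
                if s:
                    combined[s] = combined.get(s, 0.0) + px * py
                else:
                    conflict += px * py
        norm = 1.0 - conflict     # assumed nonzero (sources not fully conflicting)
        return {s: v / norm for s, v in combined.items()}

    m1 = {frozenset({"cand1", "cand2"}): 0.7, frozenset({"cand1", "cand2", "cand3"}): 0.3}
    m2 = {frozenset({"cand1"}): 0.6, frozenset({"cand3"}): 0.4}
    print(combine(m1, m2))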
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.
There is no global pruning.
0
An inverted alignment is defined as follows: inverted alignment: i → j = b_i. Target positions i are mapped to source positions b_i.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors affect syntactic disambiguation.
0
However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This model is equivalent to the standard HMM except that it enforces the one-word-per-tag constraint.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The result of this is shown in Figure 7.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
One hopes that such a corpus will be forthcoming.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
For all languages we do not make use of a tagging dictionary.
They have made use of local and global features to deal with instances of the same token in a document.
0
Reference resolution involves finding words that co-refer to the same entity.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Thus, the arc from je to jedna will be labeled 5b↓ (to indicate that there is a syntactic head below it).
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Even this may be nondeterministic, in case the graph contains several non-projective arcs whose lifts interact, but we use the following algorithm to construct a minimal projective transformation D0 = (W, A0) of a (nonprojective) dependency graph D = (W, A): The function SMALLEST-NONP-ARC returns the non-projective arc with the shortest distance from head to dependent (breaking ties from left to right).
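A rough sketch of that loop, assuming the dependency graph is given as an array of head indices; the projectivity test and arc selection are implemented naively here, and the arc-label re-encoding used by the pseudo-projective scheme is omitted:

    # Sketch: repeatedly lift the smallest non-projective arc until the graph
    # is projective. head[i] is the head of token i; token 0 is the artificial
    # root with head[0] = -1. The input is assumed to be a tree.
    def nonprojective_arcs(head):
        arcs = []
        for dep, h in enumerate(head):
            if h < 0:
                continue
            lo, hi = sorted((h, dep))
            for k in range(lo + 1, hi):
                a = k
                while a >= 0 and a != h:      # walk up from k toward the root
                    a = head[a]
                if a != h:                    # k is not dominated by h
                    arcs.append((dep, abs(h - dep)))
                    break
        return arcs

    def projectivize(head):
        head = list(head)
        while True:
            arcs = nonprojective_arcs(head)
            if not arcs:
                return head
            # shortest head-dependent distance, ties broken left to right
            dep, _ = min(arcs, key=lambda a: (a[1], a[0]))
            head[dep] = head[head[dep]]       # lift: reattach to the head's head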
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For each candidate antecedent, BABAR identifies the caseframe that would extract the candidate, pairs it with the anaphor’s caseframe, and consults the CF Network to see if this pair of caseframes has co-occurred in previous resolutions.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Traditional Arabic linguistic theory treats both of these types as subcategories of noun. Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form (Table 1).
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
It rewards matches of n-gram sequences, but measures overall grammatical coherence only indirectly at best.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We can check what the consequences of less manual annotation of results would have been: with half the number of manual judgements, we can distinguish about 40% of the systems, 10% fewer.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
As we shall argue, the semantic class affiliation of a hanzi constitutes useful information in predicting its properties.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
English parsing evaluations usually report results on sentences up to length 40.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Instead, we resort to an iterative update based method.
These clusters are computed using an SVD variant without relying on transitional structure.
0
We use w erations of sampling (see Figure 2 for a depiction).
It is probably the first analysis of Arabic parsing of this kind.
0
(Table fragment: results for Baseline (Self tag), Bikel (v1.2), Baseline (Pretag), and Gold POS on sentence lengths ≤ 70 and all; scores in the 0.75-0.81 range.)
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Unlike the WSJ corpus which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects).
The texts were annotated with the RSTtool.
0
Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
This paper does not necessarily reflect the position of the U.S. Government.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For example, if the word is found in the list of person first names, the feature PersonFirstName is set to 1.
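A tiny sketch of this kind of dictionary feature (the name list and function name are illustrative, not the system's actual resources):

    # Sketch: PersonFirstName is 1 if the token appears in a first-name list.
    FIRST_NAMES = {"john", "mary", "ahmed"}      # illustrative list

    def person_first_name_feature(token):
        return 1 if token.lower() in FIRST_NAMES else 0

    print(person_first_name_feature("Mary"))    # -> 1
    print(person_first_name_feature("table"))   # -> 0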
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For each resolution in the training data, BABAR also associates the co-referring expression of an NP with the NP’s caseframe.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The first modification — cautiousness — is a relatively minor change.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Figure 1 provides some statistics about this corpus.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The orthographic normalization strategy we use is simple. In addition to removing all diacritics, we strip instances of taTweel, collapse variants of alif to bare alif, and map Arabic punctuation characters to their Latin equivalents.
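A rough sketch of such normalization in Python; the specific Unicode code points (tatweel U+0640, alif variants U+0622/0623/0625/0671, the diacritic range U+064B-U+0652, and the Arabic punctuation mappings) are my assumptions, not taken from the paper:

    import re

    # Sketch: strip diacritics and tatweel, collapse alif variants to bare alif,
    # and map Arabic punctuation to Latin equivalents.
    DIACRITICS = re.compile("[\u064B-\u0652]")           # assumed diacritic range
    ALIF_VARIANTS = {"\u0622": "\u0627", "\u0623": "\u0627",
                     "\u0625": "\u0627", "\u0671": "\u0627"}
    PUNCT = {"\u060C": ",", "\u061B": ";", "\u061F": "?"}

    def normalize(text):
        text = text.replace("\u0640", "")                # remove tatweel
        text = DIACRITICS.sub("", text)
        for src, tgt in {**ALIF_VARIANTS, **PUNCT}.items():
            text = text.replace(src, tgt)
        return text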
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We empirically control for the effect of our heuristics to make sure our pruning does not undermine the objectives of our joint task.
Two general approaches are presented and two combination techniques are described for each approach.
0
The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
i=1 (f,v)∈Wi
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We present two algorithms.
They have made use of local and global features to deal with instances of the same token in a document.
0
As such, global information from the whole context of a document is important to more accurately recognize named entities.