source_text (string, lengths 27-368) | label (int64, 0-1) | target_text (string, lengths 1-5.38k)
---|---|---|
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | We have studied the structural descriptions (tree sets) that can be assigned by various grammatical systems, and classified these formalisms on the basis of two features: path complexity; and path independence. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005). |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The core of Yarowsky's algorithm is as follows: where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | As we have noted in Section 2, the general semantic class to which a hanzi belongs is often predictable from its semantic radical. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Ends with the feminine affix :: p. 4. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | We settled on contrastive evaluations of 5 system outputs for a single test sentence. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Proper names are assumed to be coreferent if they match exactly, or if they closely match based on a few heuristics. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Table 6: Example Translations for the Verbmobil task. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Gazdar (1985) considers a number of linguistic analyses which IG's (but not CFG's) can make, for example, the Norwedish example shown in Figure 1. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Our clue is the NE instance pairs. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | information structure. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | German English Training: Sentences 58 073 Words 519 523 549 921 Words* 418 979 453 632 Vocabulary Size 7939 4648 Singletons 3454 1699 Test-147: Sentences 147 Words 1 968 2 173 Perplexity - 26.5 Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The search starts in hypothesis (∅, 0) and ends in the hypotheses ({1, ..., J}, j), with j ∈ {1, ..., J}. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | This design leads to a significant reduction in the computational complexity of training and inference. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | Here we use a slightly different notion of lift, applying to individual arcs and moving their head upwards one step at a time: Intuitively, lifting an arc makes the word wk dependent on the head wi of its original head wj (which is unique in a well-formed dependency graph), unless wj is a root in which case the operation is undefined (but then wj —* wk is necessarily projective if the dependency graph is well-formed). |
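The lift operation defined in the target text above can be sketched over a simple head-index encoding of a dependency graph. The array representation below is a hypothetical toy encoding chosen for illustration, not the parser's actual data structure:

```python
def lift(heads, k):
    """Lift the arc into word k: reattach k to the head of its current head.

    `heads` maps each word index to its head index, with -1 marking the root.
    Per the definition above, lifting is undefined when k's head is a root.
    """
    j = heads[k]          # current head of k
    i = heads[j]          # head of the head (the "grandparent")
    if i == -1:
        raise ValueError("lift undefined: the head of k is a root")
    new_heads = list(heads)
    new_heads[k] = i      # k now depends on its original grandparent
    return new_heads
```

For example, in a three-word chain where word 0 is the root, 1 depends on 0, and 2 depends on 1, lifting the arc into word 2 makes word 2 depend directly on word 0.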
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | 3. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | A dynamic programming recursion similar to the one in Eq. 2 is evaluated. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 1 | Exploiting Diversity in Natural Language Processing: Combining Parsers |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with the results of state-of-the-art standalone applications. | 0 | The input for the segmentation task is however highly ambiguous for Semitic languages, and surface forms (tokens) may admit multiple possible analyses as in (Bar-Haim et al., 2007; Adler and Elhadad, 2006). |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | One is that smaller sets sometimes have meaningless keywords, like "strength" or "add" in the CC-domain, or "compare" in the PC-domain. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The performance of our system on those sentences appeared rather better than theirs. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Formally, we define dependency graphs as follows: 3. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | They contain about 200M words (25M, 110M, 40M and 19M words, respectively). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The samples from each corpus were independently evaluated. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The annotator can then "click away" those words that are here not used as connectives (such as the conjunction und ("and") used in lists, or many adverbials that are ambiguous between connective and discourse particle). |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | [Table: 1-1 and m-1 accuracies (best and median) of the 1TW model for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.] |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | In IE, creating the patterns which express the requested scenario, e.g. "management succession" or "corporate merger and acquisition", is regarded as the hardest task. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Same for . In the case where the next token is a hyphen, then is also used as a feature: (initCaps, ) is set to 1. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Only tokens with initCaps not found in commonWords are tested against each list in Table 2. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The confidence level is then used as the belief value for the knowledge source. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | In this section, we try to compare our results with those obtained by IdentiFinder ' 97 (Bikel et al., 1997), IdentiFinder ' 99 (Bikel et al., 1999), and MENE (Borthwick, 1999). |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | 76 16. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | (If the TF/IDF score of that word is below a threshold, the phrase is discarded.) |
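The TF/IDF filtering step described in the target text above can be illustrated with a minimal sketch. The function names, the toy corpora, and the threshold value are illustrative assumptions, not details from the paper:

```python
import math

def tfidf(word, doc, docs):
    """Basic TF/IDF: term frequency in `doc` times log inverse document frequency."""
    tf = doc.count(word) / len(doc)
    df = sum(1 for d in docs if word in d)   # assumes word occurs in at least one doc
    idf = math.log(len(docs) / df)
    return tf * idf

def keep_phrase(phrase_words, doc, docs, threshold=0.05):
    # Discard the phrase if any of its words scores below the threshold.
    return all(tfidf(w, doc, docs) >= threshold for w in phrase_words)
```

A word that appears in every document gets an IDF of zero, so a phrase containing it is discarded under any positive threshold, which matches the intent of filtering out uninformative keywords.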
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The algorithm builds two classifiers in parallel from labeled and unlabeled data. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 1 The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | The importance of dictionaries in NERs has been investigated in the literature (Mikheev et al., 1999). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 08 84. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Table 1 briefly describes the seven syntactic heuristics used by BABAR to resolve noun phrases. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | rhetorical analysis We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003), (Hanneforth et al. 2003), again targeting the genre of commentaries. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Time for Moses itself to load, including loading the language model and phrase table, is included. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | It is worth noting that the middle words of the Italian trigrams are nouns too, which exhibits the fact that the similarity metric connects types having the same syntactic category. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and costs. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l'époétine alfa. |
There is no global pruning. | 0 | Here, the pruning threshold t0 = 10.0 is used. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | If two systems’ scores are close, this may simply be a random effect in the test data. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Note that these observa sider suffix features, capitalization features, punctuation, and digit features. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 2 62.2 39. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Consider for a set of constituents the isolated constituent precision parser metric, the portion of isolated constituents that are correctly hypothesized. |
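The isolated constituent precision metric named in the target text above (the portion of hypothesized constituents that are correct) can be computed over constituent spans. The (label, start, end) tuple encoding is an assumption made for illustration:

```python
def constituent_precision(hypothesized, gold):
    """Precision: portion of hypothesized constituents also present in the gold tree.

    Constituents are encoded as (label, start, end) span tuples.
    Returns 0.0 when nothing was hypothesized.
    """
    if not hypothesized:
        return 0.0
    correct = sum(1 for c in hypothesized if c in gold)
    return correct / len(hypothesized)
```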
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | English was again paired with German, French, and Spanish. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Previous work deals with this problem by correcting inconsistencies between the named entity classes assigned to different occurrences of the same entity (Borthwick, 1999; Mikheev et al., 1998). |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Presenting the output of several system allows the human judge to make more informed judgements, contrasting the quality of the different systems. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 3.5 Improved models of discourse. |
The corpus was annotated with different linguistic information. | 0 | In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations, or groups of relations in particular configurations, are signalled by speakers with prosodic means. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | We have presented a new method for non-projective dependency parsing, based on a combination of data-driven projective dependency parsing and graph transformation techniques. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | We settled on contrastive evaluations of 5 system outputs for a single test sentence. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Other strategies could readily 6 As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | 1). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | There is a (costless) transition between the NC node and f,. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | This solution also obviates the need to perform word sense disambiguation. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The first setting uses the European Medicines Agency (EMEA) corpus (Tiedemann, 2009) as IN, and the Europarl (EP) corpus (www.statmt.org/europarl) as OUT, for English/French translation in both directions. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors lead to syntactic disambiguation. | 0 | Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | It may seem surprising to some readers that the interhuman agreement scores reported here are so low. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 4 69.0 51. |
This assumption, however, is not inherent to type-based tagging models. | 0 | We report results for the best and median hyperparameter settings obtained in this way. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Interpolation search is therefore a form of binary search with better estimates informed by the uniform key distribution. |
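The target text above characterizes interpolation search as binary search with better probe estimates: instead of always probing the midpoint, the position is estimated from where the key's value falls between the endpoint values, under a uniform-key assumption. A minimal sketch of the idea, not KenLM's implementation:

```python
def interpolation_search(keys, target):
    """Return the index of `target` in the sorted list `keys`, or -1 if absent.

    The probe position is interpolated from the target's value relative to
    the values at the current endpoints (uniform-key-distribution assumption).
    """
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            pos = lo  # all remaining keys are equal; avoid division by zero
        else:
            pos = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

On uniformly distributed keys this takes O(log log n) probes on average, versus O(log n) for plain binary search, which is the advantage the row alludes to.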
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Once again we present both a non-parametric and a parametric technique for this task. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors lead to syntactic disambiguation. | 0 | 63 81. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | We aligned the texts at a sentence level across all four languages, resulting in 1064 sentence per language. |
A beam search concept is applied as in speech recognition. | 0 | The word joining is done on the basis of a likelihood criterion. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We consider properties of the tree sets generated by CFG's, Tree Adjoining Grammars (TAG's), Head Grammars (HG's), Categorial Grammars (CG's), and IG's. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | They first collect the NE instance pairs and contexts, just like our method. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | (3)), with one term for each classifier. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | TPT Germann et al. (2009) describe tries with better locality properties, but did not release code. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Automatic paraphrase discovery is an important but challenging task. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | We use two common techniques, hash tables and sorted arrays, describing each before the model that uses the technique. |
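The two lookup techniques named in the target text above, hash tables and sorted arrays, can be contrasted in a few lines. This is a generic sketch of the trade-off, with made-up n-gram data, not KenLM's actual code:

```python
import bisect

# Hash table: O(1) expected lookup, but keys are unordered in memory.
counts = {"the cat": 3, "cat sat": 1}

def hash_lookup(ngram):
    return counts.get(ngram, 0)

# Sorted array: O(log n) lookup via binary search over a key array,
# with a parallel value array; compact and cache-friendly to scan.
keys = ["cat sat", "the cat"]   # kept sorted
values = [1, 3]                 # values[i] corresponds to keys[i]

def sorted_lookup(ngram):
    i = bisect.bisect_left(keys, ngram)
    if i < len(keys) and keys[i] == ngram:
        return values[i]
    return 0
```

The usual design choice: the hash table wins on raw lookup speed, while the sorted array wins on memory footprint and locality, which is why a library might offer both.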
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | This highly effective approach is not directly applicable to the multinomial models used for core SMT components, which have no natural method for combining split features, so we rely on an instance-weighting approach (Jiang and Zhai, 2007) to downweight domain-specific examples in OUT. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Table 1 briefly describes the seven syntactic heuristics used by BABAR to resolve noun phrases. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | While the linear precedence of segmental morphemes within a token is subject to constraints, the dominance relations among their mother and sister constituents is rather free. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | They return a value in the range [0,1], where 0 indicates neutrality and 1 indicates the strongest belief that the candidate and anaphor are coreferent. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Bresnan, Kaplan, Peters, and Zaenen (1982) argue that these structures are needed to describe crossed-serial dependencies in Dutch subordinate clauses. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | 2. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Another way to view the judgements is that they are less quality judgements of machine translation systems per se, but rankings of machine translation systems. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | The type-level tag assignments T generate features associated with word types W . The tag assignments constrain the HMM emission parameters θ. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The Hebrew token ‘bcl’1, for example, stands for the complete prepositional phrase 'We adopt here the transliteration of (Sima’an et al., 2001). |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | The phrases have to be expressions of length less than 5 chunks that appear between two NEs. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Note that the good classes JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired, so the trend observed for the results of this method is in the right direction. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Some of these approaches (e.g., Lin, Chiang, and Su [1993]) attempt to identify unknown words, but do not ac tually tag the words as belonging to one or another class of expression. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN). |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak hypothesis outputs a confidence vector with one confidence value for each possible label. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Many human evaluation metrics have been proposed. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The model was parameterized such that the joint probability of a (label, feature-set) pair P(yi, xi) is written as The model assumes that (y, x) pairs are generated by an underlying process where the label is first chosen with some prior probability P(yi); the number of features mi is then chosen with some probability P(mi); finally the features are independently generated with probabilities P(xulyi). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Bikel et al. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | Memory-based classifiers for the experiments were created using TiMBL (Daelemans et al., 2003). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | [Table 2: Statistics (# tokens, # word types, # tags) for the English, Danish, Dutch, German, Portuguese, Spanish, and Swedish corpora utilized in experiments.] |