source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Reference resolution involves finding words that co-refer to the same entity. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Section 2.2 then describes our representation for contextual roles and four types of contextual role knowledge that are learned from the training examples. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We consider properties of the tree sets generated by CFG's, Tree Adjoining Grammars (TAG's), Head Grammars (HG's), Categorial Grammars (CG's), and IG's. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 36 79. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts, emphasis shifted towards depth. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Another way to interpret this is that less than 5% of the correct constituents are missing from the hypotheses generated by the union of the three parsers. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The 13 positions of the source sentence are processed in the order shown. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | (1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994). |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | SRILM (Stolcke, 2002) is widely used within academia. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The Verbmobil task is an appointment scheduling task. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | If e < b then the key is not found. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | We refer to different readings as different analyses whereby the segments are deterministic given the sequence of PoS tags. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 1 | The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | For example, one of the ATB samples was the determiner dhalika “that.” The sample occurred in 1507 corpus positions, and we found that the annotations were consistent. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Roughly speaking, the new algorithm presented in this paper performs a similar search, but instead minimizes a bound on the number of (unlabeled) examples on which two classifiers disagree. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | θ has a belief value of 1.0, indicating complete certainty that the correct hypothesis is included in the set, and a plausibility value of 1.0, indicating that there is no evidence for competing hypotheses. As evidence is collected and the likely hypotheses are whittled down, belief is redistributed to subsets of θ. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | However, it is almost universally the case that no clear definition of what constitutes a "correct" segmentation is given, so these performance measures are hard to evaluate. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). |
Here both parametric and non-parametric models are explored. | 0 | In Table 1 we see with very few exceptions that the isolated constituent precision is less than 0.5 when we use the constituent label as a feature. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | For a corpus C, let M be the set of tuples ⟨n, l⟩, where n is an n-gram with bracketing label l. If any n appears … Generative parsing performance is known to deteriorate with sentence length. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The distribution specifies the relative weight, or importance, of each example — typically, the weak learner will attempt to minimize the weighted error on the training set, where the distribution specifies the weights. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | ICOC and CSPP contributed the greatest improvements. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | This can be repeated several times to collect a list of author / book title pairs and expressions. |
Here we present two algorithms. | 0 | For the purposes of EM, the "observed" data is {(x1, y1) . . . (xm, ym), xm+1 . . . xn}, and the hidden data is {ym+1 . . . yn}. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | (S, C, j); not only the coverage set C and the positions j, j′, but also the verbgroup states S, S′ are taken into account. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | However, there will remain a large number of words that are not readily adduced to any productive pattern and that would simply have to be added to the dictionary. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The following auxiliary quantity is defined: Qe′(e, C, j) := probability of the best partial hypothesis (e1^i, b1^i), where C = {bk | k = 1, . . . , i}, bi = j, ei = e, and ei−1 = e′. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie ware es denn am ahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten konnten , ware das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that . |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Table 2 shows these similarity measures. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Fortunately, performance was stable across various values, and we were able to use the same hyperparameters for all languages. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | For example, co-occurring caseframes may reflect synonymy (e.g., “<patient> kidnapped” and “<patient> abducted”) or related events (e.g., “<patient> kidnapped” and “<patient> released”). |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Using these states, we define partial hypothesis extensions, which are of the following type: (S′, C \ {j}, j′) → |
The AdaBoost algorithm was developed for supervised learning. | 0 | Each unlabeled pair (x1,i, x2,i) is represented as an edge between nodes corresponding to x1,i and X2,i in the graph. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | How should the absence of vowels and syntactic markers influence annotation choices and grammar development? |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 2. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In addition, there are several approaches to non-projective dependency parsing that are still to be evaluated in the large (Covington, 1990; Kahane et al., 1998; Duchier and Debusmann, 2001; Holan et al., 2001; Hellwig, 2003). |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Keys to the table are hashed, using for example Austin Appleby’s MurmurHash2, to integers evenly distributed over a large range. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Each distance in the traveling salesman problem now corresponds to the negative logarithm of the product of the translation, alignment and language model probabilities. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Second, we identified the 100 most frequent nouns in the training corpus and manually labeled them with semantic tags. |
Here we present two algorithms. | 0 | As in boosting, the algorithm works in rounds. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Annotation consistency is important in any supervised learning task. |
This corpus has several advantages: it is annotated at different levels. | 0 | We will briefly discuss this point in Section 3.1. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | 2.4 Underspecified rhetorical structure. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | The type-level posterior term can be computed according to P(Ti | W, T−i, β) ∝ … Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | The corresponding token words w are drawn conditioned on t and θ. Our full generative model is given by: P(φ, θ | T, α, β) = ∏_{t=1}^{K} P(φt | α) P(θt | T, α). The transition distribution φt for each tag t is drawn according to DIRICHLET(α, K), where α is the shared transition and emission distribution hyperparameter. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | This is motivated by taking β po(s|t) to be the parameters of a Dirichlet prior on phrase probabilities, then maximizing posterior estimates p(s|t) given the IN corpus. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (Al-Batal, 1990). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | att. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | mein 5. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | irL as the product of the probability estimate for i¥JJ1l., and the probability estimate just derived for unseen plurals in ir,: p(i¥1J1l.ir,) p(i¥1J1l.)p(unseen(f,)). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | We run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation,9 translating the 3003-sentence test set. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Ex: The government said it ... |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy. |
A beam search concept is applied as in speech recognition. | 0 | In general, m, l, l′ ≠ {l1, l2, l3}, and in line numbers 3 and 4, l′ must be chosen not to violate the above reordering restriction. |
The AdaBoost algorithm was developed for supervised learning. | 0 | In fact, during the first rounds many of the predictions of Th., g2 are zero. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Therefore, we only score guess/gold pairs with identical character yields, a condition that allows us to measure parsing, tagging, and segmentation accuracy by ignoring whitespace. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | |X1| = |X2| = N and N is a "medium" sized number so that it is feasible to collect O(N) unlabeled examples. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Mikheev et al. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We compared the ATB to treebanks for Chinese (CTB6), German (Negra), and English (WSJ) (Table 4). |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | This algorithm can be applied to statistical machine translation. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | There is a âcore corpusâ of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | G1 and G2 are hanzi, we can estimate the probability of the sequence being a name as the product of: • the probability that a word chosen randomly from a text will be a name-p(rule 1), and • the probability that the name is of the form 1hanzi-family 2hanzi-given-p(rule 2), and • the probability that the family name is the particular hanzi F1-p(rule 6), and • the probability that the given name consists of the particular hanzi G1 and G2-p(rule 9) This model is essentially the one proposed in Chang et al. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | Then, moving from connective to connective, ConAno sometimes offers suggestions for its scope (using heuristics like âfor sub- junctor, mark all words up to the next comma as the first segmentâ), which the annotator can accept with a mouseclick or overwrite, marking instead the correct scope with the mouse. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. |
There is no global pruning. | 0 | A detailed description of the search procedure used is given in this patent. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Lexicon and OOV Handling Our data-driven morphological-analyzer proposes analyses for unknown tokens as described in Section 5. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | BABAR uses a named entity recognizer to identify proper names that refer to people and companies. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | 2.4 Underspecified rhetorical structure. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | If there is a frequent multi-word sequence in a domain, we could use it as a keyword candidate. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | (2010) consistently outperforms ours on English, we obtain substantial gains across other languages. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Dempster-Shafer handles this by re-normalizing all the belief values with respect to only the non-null sets (this is the purpose of the denominator in Equation 1). |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Due to many similarly performing systems, we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. |
This assumption, however, is not inherent to type-based tagging models. | 0 | A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Precision. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | In both cases, SRILM walks its trie an additional time to minimize context as mentioned in Section 4.1. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | We picked two domains, the CC-domain and the “Person – Company” domain (PC-domain), for the evaluation, as the entire system output was too large to evaluate. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | 1). |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Since different judges judged different systems (recall that judges were excluded to judge system output from their own institution), we normalized the scores. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | And indeed, converging on annotation guidelines is even more difficult than it is with co-reference. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | (2003), which gives 96.8% accuracy on the test set. |
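The rows above share a fixed schema (source_text, label, target_text), so a short sketch of loading and filtering them programmatically may help. The snippet below is a minimal example assuming the Hugging Face `datasets` library; it rebuilds a tiny in-memory sample from two rows shown above and is not an official loading script for this dataset. The label semantics are only inferred from the visible rows (the single label-1 row pairs identical source and target sentences) and should be confirmed against the dataset card.

```python
# Minimal sketch of handling rows shaped like the table above, assuming the
# Hugging Face `datasets` library is installed. Illustrative only; not an
# official loader for this dataset.
from datasets import Dataset

rows = {
    "source_text": [
        "The TRIE model uses less memory than the smallest lossless "
        "alternative and is still faster than SRILM.",
        "Here we present two algorithms.",
    ],
    "label": [1, 0],
    "target_text": [
        "The TRIE model uses less memory than the smallest lossless "
        "alternative and is still faster than SRILM.",
        "As in boosting, the algorithm works in rounds.",
    ],
}

ds = Dataset.from_dict(rows)

# Keep only rows with label == 1 (in the table above, the sole label-1 row
# pairs an identical source and target sentence).
positives = ds.filter(lambda example: example["label"] == 1)
print(len(positives), positives[0]["source_text"])
```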