source_text | label | target_text
---|---|---|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Human evaluation is one way to distinguish between the two cases. |
The AdaBoost algorithm was developed for supervised learning. | 0 | (Blum and Mitchell 98) describe learning in the following situation: X = X1 × X2 where X1 and X2 correspond to two different "views" of an example. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Words and punctuation that appear in brackets are considered optional. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | 6 Conclusions. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 9 66.4 47. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This is not ideal for some applications, however. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp. Parser and Grammar: We used BitPar (Schmid, 2004), an efficient general-purpose parser,10 together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem. |
There is no global pruning. | 0 | The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10.0. For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10.0, and this number is reported as the number of search errors. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | On several languages, we report performance exceeding that of state-of-the art systems. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | We settled on contrastive evaluations of 5 system outputs for a single test sentence. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Simple Type-Level Unsupervised POS Tagging |
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | In focus is in particular the correlation with rhetorical structure, i.e., the question whether specific rhetorical relations, or groups of relations in particular configurations, are signalled by speakers with prosodic means. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | This is the parse that is closest to the centroid of the observed parses under the similarity metric. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | It is difficult when IN and OUT are dissimilar, as they are in the cases we study. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Only 2 links in the CC-domain (buy-purchase, acquire-acquisition) and 2 links (trader-dealer and head-chief) in the PC-domain are found in the same synset of WordNet 2.1 (http://wordnet.princeton.edu/). |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Given the closeness of most systems and the wide over-lapping confidence intervals it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | This section describes AdaBoost, which is the basis for the CoBoost algorithm. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned. |
The texts were annotated with the RSTtool. | 0 | When the connective is an adverbial, there is much less clarity as to the range of the spans. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | As can be seen, GR and this "pared-down" statistical method perform quite similarly, though the statistical method is still slightly better.16 AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Space- or punctuation-delimited * 700 Mountain Avenue, 2d451, Murray Hill, NJ 07974, USA. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | M(wi) = Li. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus. |
This assumption, however, is not inherent to type-based tagging models. | 0 | We have presented a method for unsupervised part- of-speech tagging that considers a word type and its allowed POS tags as a primary element of the model. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Keywords with more than one word: In the evaluation, we explained that "chairman" and "vice chairman" are considered paraphrases. |
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | When this feature type was included, CoBoost chose this default feature at an early iteration, thereby giving non-abstaining pseudo-labels for all examples, with eventual convergence to the two classifiers agreeing by assigning the same label to almost all examples. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 08 84. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Two issues distinguish the various proposals. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a). |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Reflexive pronouns with only 1 NP in scope. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 08 84. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 0 57.3 51. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. |
Here we present two algorithms. | 0 | We define the following function: If Zco is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they also must give the same label on a large number of unlabeled instances. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Following Sproat and Shih (1990), performance for Chinese segmentation systems is generally reported in terms of the dual measures of precision and recall. It is fairly standard to report precision and recall scores in the mid to high 90% range. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | (a) shen3me0 shi2hou4 wo3 cai2 neng2 ke4fu2 zhe4ge4 kun4 (what time I just be able overcome this CL diffic) 'When will I be able to overcome this difficulty?' |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writ ing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words.1 Put another way, written Chinese simply lacks orthographic words. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of students' work in course projects, and to some extent of paid assistantships. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | Variants of alif are inconsistently used in Arabic texts. |
Here we present two algorithms. | 0 | We now describe the CoBoost algorithm for the named entity problem. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension. |
The texts were annotated with the RSTtool. | 0 | Besides information structure, the second main goal is to enhance current models of rhetorical structure. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | These con (a) Uses lossy compression. (b) The 8-bit quantized variant returned incorrect probabilities as explained in Section 3. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | It is not immediately obvious how to formulate an equivalent to equation (1) for an adapted TM, because there is no well-defined objective for learning TMs from parallel corpora. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | This representation gives ir, an appropriate morphological decomposition, pre serving information that would be lost by simply listing ir, as an unanalyzed form. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Analysis of the data revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | In the terrorism domain, 1600 texts were used for training and the 40 test docu X ∩ Y = ∅ |
All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | The resulting model is compact, efficiently learnable and linguistically expressive. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 3.1 Gross Statistics. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | This significantly underperforms log-linear combination. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | To set β, we used the same criterion as for α, over a dev corpus: The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995).3 Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; L¨u et al., 2007), or individual target hypotheses (Zhao et al., 2004). |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | See Section 5. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The PCFG was trained from the same sections of the Penn Treebank as the other three parsers. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The final model tions. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | It is based on the traditional character set rather than the simplified character set used in Singapore and Mainland China. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Table 9: Dev set results for sentences of length ≤ 70. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 1 | Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Using structural information: As was explained in the results section, we extracted examples like "Smith estimates Lotus", from a sentence like "Mr. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | However, using the top-level semantic classes of WordNet proved to be problematic because the class distinctions are too coarse. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | We used a standard one-pass phrase-based system (Koehn et al., 2003), with the following features: relative-frequency TM probabilities in both directions; a 4-gram LM with Kneser-Ney smoothing; word-displacement distortion model; and word count. |
The texts were annotated with the RSTtool. | 0 | Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators. |
Here both parametric and non-parametric models are explored. | 0 | Our original hope in combining these parsers is that their errors are independently distributed. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 2. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | While we have minimized forward-looking state in Section 4.1, machine translation systems could also benefit by minimizing backward-looking state. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Compared with the widely-used SRILM, our PROBING model is 2.4 times as fast while using 57% of the memory. The structure is a trie with bit-level packing, sorted records, interpolation search, and optional quantization aimed at lower memory consumption. It simultaneously uses less memory than the smallest lossless baseline and less CPU than the baseline. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | The frequency of the Company–Company domain ranks 11th with 35,567 examples. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic. |
Their results show that their high performance NER use less training data than other systems. | 0 | Such constraints are derived from training data, expressing some relationship between features and outcome. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | We are especially grateful to Taylor Berg- Kirkpatrick for running additional experiments. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered? |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1. |
The texts were annotated with the RSTtool. | 0 | 5 "Underspecified Rhetorical Markup Language" 6 This confirms the figure given by (Schauer, Hahn. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Figure 5 shows how this model is implemented as part of the dictionary WFST. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The second algorithm builds on a boosting algorithm called AdaBoost (Freund and Schapire 97; Schapire and Singer 98). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | As is standard, we use a fixed constant K for the number of tagging states. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Surprisingly, this effect is much less obvious for out-of-domain test data. |
Here we present two algorithms. | 0 | The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem. |
There is no global pruning. | 0 | The perplexity for the trigram language model used is 26.5. |
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | • Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol, the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Saving state allows our code to walk the data structure exactly once per query. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The following two error criteria are used in our experiments: mWER: multi-reference WER: We use the Levenshtein distance between the automatic translation and several reference translations as a measure of the translation errors. |
Here we present two algorithms. | 0 | We make the assumption that for each example, both xi,. and x2,2 alone are sufficient to determine the label yi. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Note that Chang, Chen, and Chen (1991), in addition to word-frequency information, include a constraint-satisfaction model, so their method is really a hybrid approach. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | We evaluate the system's performance by comparing its segmentation 'Tudgments" with the judgments of a pool of human segmenters, and the system is shown to perform quite well. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | On several languages, we report performance exceeding that of state-of-the art systems. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Here, we process only full-form words within the translation procedure. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Proper-Name Identification. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Another question that remains unanswered is to what extent the linguistic information he considers can be handled-or at least approximated-by finite-state language models, and therefore could be directly interfaced with the segmentation model that we have presented in this paper. |
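One target cell above describes a block-wise significance test for comparing two MT systems: split the test set into blocks of 20 sentences, check per block which system has the higher BLEU score, and then apply the sign test. A minimal sketch of that procedure, assuming per-block scores are already computed (the function name and example scores are illustrative, not taken from any system cited above):

```python
import math

def sign_test(blocks_a, blocks_b):
    """Two-sided sign test over paired per-block scores.

    Counts blocks where system A beats system B (ties are dropped) and
    returns the p-value under the null hypothesis that wins and losses
    are equally likely, i.e. X ~ Binomial(n, 0.5).
    """
    wins = sum(1 for a, b in zip(blocks_a, blocks_b) if a > b)
    losses = sum(1 for a, b in zip(blocks_a, blocks_b) if a < b)
    n = wins + losses
    if n == 0:
        return 1.0  # systems tie on every block
    k = max(wins, losses)
    # Upper tail P(X >= k), doubled for a two-sided test.
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, if one system wins all 5 of 5 blocks, the two-sided p-value is 2 * (1/32) = 0.0625, which is why short test sets often cannot separate two systems at 95% confidence, as several rows above note.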