source_text (stringlengths 27–368) | label (int64, 0 or 1) | target_text (stringlengths 1–5.38k)
---|---|---|
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | hanzi in the various name positions, derived from a million names. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | From the point of view of recognition, independent paths in the derivation structures suggest that a top-down parser (for example) can work on each branch independently, which may lead to efficient parsing using an algorithm based on the Divide and Conquer technique. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | paper, and is missing 6 examples from the A set. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 1 2 3. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Next we show that the ATB is similar to other treebanks in gross statistical terms, but that annotation consistency remains low relative to English (§3). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | Multiple features can be used for the same token. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Previous work has attempted to incorporate such constraints into token-level models via heavy-handed modifications to inference procedure and objective function (e.g., posterior regularization and ILP decoding) (Grac¸a et al., 2009; Ravi and Knight, 2009). |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | For our experiments we also report the mean of precision and recall, which we denote by (P + R)/2, and F-measure. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Each xi,j is a member of X, where X is a set of possible features. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Prague Dependency Treebank (Hajiˇc et al., 2001b), Danish Dependency Treebank (Kromann, 2003), and the METU Treebank of Turkish (Oflazer et al., 2003), which generally allow annotations with nonprojective dependency structures. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Fourth, we show how to build better models for three different parsers. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | However, the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | The token-level term is similar to the standard HMM sampling equations found in Johnson (2007). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Ex: Mr. Cristiani, president of the country ... |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | We have checked if there are similar verbs in other major domains, but this was the only one. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | As we will see from Table 3, not much improvement is derived from this feature. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Finally, we intend to explore more sophisticated instanceweighting features for capturing the degree of generality of phrase pairs. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Only IRSTLM does not support threading. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Here we propose a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | The samples from each corpus were independently evaluated. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Pseudo-code describing the generalized boosting algorithm of Schapire and Singer is given in Figure 1. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The points enumerated above are particularly related to ITS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Figure 1 shows sample sentences from these domains, which are widely divergent. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | tai2du2 'Taiwan Independence.' |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Figure 1: Graphical depiction of our model and summary of latent variables and parameters. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | 88,962 (spelling,context) pairs were extracted as training data. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | As indicated by bolding, for seven out of eight languages the improvements of the “With LP” setting are statistically significant with respect to the other models, including the “No LP” setting.11 Overall, it performs 10.4% better than the hitherto state-of-the-art feature-HMM baseline, and 4.6% better than direct projection, when we macro-average the accuracy over all languages. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Step 3. |
There is no global pruning. | 0 | Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions. |
There is no global pruning. | 0 | Covering the first uncovered position in the source sentence, we use the language model probability p(e \| $; $). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | This is especially effective at reducing load time, since raw bytes are read directly to memory—or, as happens with repeatedly used models, are already in the disk cache. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | As with personal names, we also derive an estimate from text of the probability of finding a transliterated name of any kind (PTN). |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | of Articles No. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95). |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold segmentation is assumed (§5). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities). |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | A position is presented by the word at that position. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 30 75. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Both parametric and non-parametric models are explored. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 4 53.7 43. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | So, who won the competition? |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 7 68.3 56. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers. |
A beam search concept is applied as in speech recognition. | 0 | The computing time is given in terms of CPU time per sentence (on a 450-MHz Pentium III PC). |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: • Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite unintuitively be used for connectives and scopes. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Thus in a two-hanzi word like 中国 zhong1guo2 (middle country) 'China' there are two syllables, and at the same time two morphemes. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | An examination of the subjects' bracketings confirmed that these instructions were satisfactory in yielding plausible word-sized units. |
This corpus has several advantages: it is annotated at different levels. | 0 | Assigning rhetorical relations thus poses questions that can often be answered only subjectively. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | However, the overall percentage of non-projective arcs is less than 2% in PDT and less than 1% in DDT. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 52 77. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Because the information flow in our graph is asymmetric (from English to the foreign language), we use different types of vertices for each language. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | We can now add a new weak hypothesis ht based on a feature in X1 with a confidence value αt; ht and αt are chosen to minimize the function. We now define, for 1 ≤ i ≤ n, the following virtual distribution; as before, Zt is a normalization constant. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Here we do not submit to this view. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Benchmarks use the package’s binary format; our code is also the fastest at building a binary file. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The numbers falling into the location, person, organization categories were 186, 289 and 402 respectively. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | For example, in .., says Mr. Cooper, a vice president of.. both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. Cooper is of type Person. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | All improvements over the baseline are statistically significant beyond the 0.01 level (McNemar’s test). |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The word joining is done on the basis of a likelihood criterion. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | set was based on an earlier version of the Chang et al. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Conversely, the lattice parser requires no linguistic resources and produces segmentations of comparable quality. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | From now on all lattice arcs are tagged segments and the assignment of probability P(p → (s, p)) to lattice arcs proceeds as usual. A rather pathological case is when our lexical heuristics prune away all segmentation possibilities and we remain with an empty lattice. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectives, coreference and informative structure. | 0 | It is difficult to motivate these days why one ministry should be exempt from cutbacks – at the expense of the others. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | In this way we restrict the parameterization of a ... Table 1: Upper bound on tagging accuracy assuming each word type is assigned to its majority POS tag (English 94.6; Danish 96.3; Dutch 96.6; German 95.5; Spanish 95.4; Swedish 93.3; Portuguese 95.6). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | This is a rather important source of errors in name identification, and it is not really possible to objectively evaluate a name recognition system without considering the main lexicon with which it is used. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | While we cannot prove there are no such useful features on which one should condition trust, we can give some insight into why the features we explored offered no gain. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | Our second contribution is to apply instance weighting at the level of phrase pairs. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In this case, this knowledge source reports that the candidate is not a viable antecedent for the anaphor. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We identified three ways that contextual roles can be exploited: (1) by identifying caseframes that co-occur in resolutions, (2) by identifying nouns that co-occur with case- frames and using them to crosscheck anaphor/candidate compatibility, (3) by identifying semantic classes that co- occur with caseframes and using them to crosscheck anaphor/candidate compatability. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | { a1^n1 b1^n1 a2^n2 b2^n2 : n = n1 + n2 } On the other hand, no linguistic use is made of this general form of composition, and Steedman (personal communication) and Steedman (1986) argue that a more limited definition of composition is more natural. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Note that hanzi that are not grouped into dictionary words (and are not identified as single hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | The PCC is not the result of a funded project. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | A list of words occurring more than 10 times in the training data is also collected (commonWords). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The correct resolution in sentence (b) comes from knowledge that people who are kidnapped are often subsequently released. |
All the texts were annotated by two people. | 0 | Then, moving from connective to connective, ConAno sometimes offers suggestions for its scope (using heuristics like “for subjunctor, mark all words up to the next comma as the first segment”), which the annotator can accept with a mouseclick or overwrite, marking instead the correct scope with the mouse. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | Link phrases based on instance pairs Using NE instance pairs as a clue, we find links between sets of phrases. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Assume that the two classifiers are "rote learners": that is, 1.1 and 12 are defined through look-up tables that list a label for each member of X1 or X2. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Unification is used in LFGs to produce structures having two dependent spines of unbounded length as in Figure 5. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | However, when the linear LM is combined with a linear TM (lm+lin tm) or MAP TM (lm+map TM), the results are much better than a log-linear combination for the EMEA setting, and on a par for NIST. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | It is well known that language pairs such as English–German pose more challenges to machine translation systems than language pairs such as French–English. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The alignment model uses two kinds of parameters: alignment probabilities p(aj \| aj−1; I; J), where the probability of alignment aj for position j depends on the previous alignment position aj−1 (Ney et al., 2000), and lexicon probabilities p(fj \| eaj). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Removing the leaves from the resulting tree yields a parse for L under G, with the desired probabilities. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB. |
The AdaBoost algorithm was developed for supervised learning. | 0 | There has been additional recent work on inducing lexicons or other knowledge sources from large corpora. |
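Every data row above follows the same three-field, pipe-delimited layout: source_text, a 0/1 label, and target_text. As a minimal illustrative sketch (assuming the fields themselves contain no unescaped pipes; `parse_row` is a hypothetical helper, not part of the dataset's own tooling), such rows can be turned into records like so:

```python
def parse_row(line: str):
    """Split one pipe-delimited table row into its three fields.

    Returns a dict for data rows, or None for header/separator lines
    (detected by the label field not being an integer).
    """
    parts = [p.strip() for p in line.strip().strip("|").split("|")]
    if len(parts) != 3:
        return None
    source_text, label, target_text = parts
    if not label.isdigit():
        return None  # header or "---|---|---|" separator
    return {"source_text": source_text,
            "label": int(label),
            "target_text": target_text}

# Example usage on one row of the table above:
row = "There is no global pruning. | 0 | Final (F): The rest of the sentence is processed monotonically. |"
record = parse_row(row)
```

Rows whose label field is not a digit (the column header and the `---|---|---|` separator) are skipped rather than raising, which keeps the parser usable on the raw dump.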