source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | From this we see that a finer-grained model for parser combination, at least for the features we have examined, will not give us any additional power. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | 11a/11b and 14a/14b respectively). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | We allow any number of bits from 2 to 25, unlike IRSTLM (8 bits) and BerkeleyLM (17−20 bits). |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphor's sentence and two preceding sentences, and for definite NPs it is the anaphor's sentence and eight preceding sentences. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Nicola Bertoldi and Marcello Federico assisted with IRSTLM. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Besides information structure, the second main goal is to enhance current models of rhetorical structure. |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | In other words, the set of hidden states F was chosen to be the fine set of treebank tags. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | A high-level relation is agent, which relates an animate nominal to a predicate. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Such resources exist for Hebrew (Itai et al., 2006), but unfortunately use a tagging scheme which is incompatible with the one of the Hebrew Treebank. For this reason, we use a data-driven morphological analyzer derived from the training data similar to (Cohen and Smith, 2007). |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | This left 962 examples, of which 85 were noise. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | The rest of the paper is structured as follows. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | For example, the passive voice pattern “<subject> were kidnapped” and the active voice pattern “kidnapped <direct object>” are merged into a single normalized pattern “kidnapped <patient>”. For the sake of simplicity, we will refer to these normalized extraction patterns as caseframes. These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g., “<agent> kidnapped” or “kidnapped <patient>”), and (2) predicate-argument relations associated with both verbs and nouns (e.g., “kidnapped for <np>” or “vehicle with <np>”). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | When extracting the vector t, used to compute the constraint feature from the graph, we tried three threshold values for r (see Eq. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Tables 4 and 5 show BABAR's performance when just one contextual role knowledge source is used at a time. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | was done by the participants. |
Here we present two algorithms. | 0 | We excluded these from the evaluation as they can be easily identified with a list of days/months. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | (NP NP PP R) and (NP NP ADJP R) are both iDafa attachment. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | If the context w_f^n will never extend to the right (i.e. w_f^n v is not present in the model for all words v) then no subsequent query will match the full context. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | For instance, on Spanish, the absolute gap on median performance is 10%. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | For these models we limit the options provided for OOV words by not considering the entire token as a valid segmentation in case at least some prefix segmentation exists. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 773–76) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | It is striking that from this point of view many formalisms can be grouped together as having identically structured derivation tree sets. |
The corpus was annotated with different linguistic information. | 0 | (Again, the goal of also in structural features. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Among these are words derived by various productive processes, including: 1. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | This annotation choice weakens splitIN. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | None of the models we have presented utilize features associated with a particular constituent (i.e. the label, span, parent label, etc.) to influence parser preference. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajič and Zemánek, 2004; Habash and Roth, 2009). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Training under this model involves estimation of parameter values for P(y), P(m) and P(x | y). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Every parse π selects a specific morphological segmentation (l1...lk) (a path through the lattice). |
The texts were annotated with the RSTtool. | 0 | basically complete, yet some improvements and extensions are still under way. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The edge from the root to the subtree for the derivation of γi is labeled by the address ni. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | But the city name Sharm Al-Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Now assume we have n pairs (x1,i, x2,i) drawn from X1 × X2, where the first m pairs have labels whereas for i = m + 1 ... n the pairs are unlabeled. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Instead, we condition on the type-level tag assignments T. Specifically, let St = {i|Ti = t} denote the indices of the word types which have been assigned tag t according to the tag assignments T. Then θt is drawn from DIRICHLET(α, St), a symmetric Dirichlet which only places mass on word types indicated by St. This ensures that each word will only be assigned a single tag at inference time (see Section 4). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Unsupervised Learning of Contextual Role Knowledge for Coreference Resolution |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | The first thing to note is that projectivizing helps in itself, even if no encoding is used, as seen from the fact that the projective baseline outperforms the non-projective training condition by more than half a percentage point on attachment score, although the gain is much smaller with respect to exact match. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Lack of correct reference translations was pointed out as a short-coming of our evaluation. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 1 | While there are other obstacles to completing this idea, we believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Note also that the costs currently used in the system are actually string costs, rather than word costs. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | HR0011-06-C-0022. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | By contrast, BerkeleyLM’s hash and compressed variants will return incorrect results based on an (n−1)-gram. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | Table 2 shows results for both settings and all methods described in sections 2 and 3. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | German city names are replaced by category markers. |
Here we present two algorithms. | 0 | 123 examples fell into the noise category. |
Their results show that their high performance NER use less training data than other systems. | 0 | If they are found in a list, then a feature for that list will be set to 1. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For each resolution in the training data, BABAR also associates the co-referring expression of an NP with the NP's caseframe. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart. |
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | 3.2 Stochastic rhetorical analysis. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | • Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol – the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | German city names are replaced by category markers. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | splitIN captures the verb/preposition idioms that are widespread in Arabic. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | If two systems’ scores are close, this may simply be a random effect in the test data. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | The edge weights between the foreign language trigrams are computed using a co-occurrence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The points enumerated above are particularly related to ITS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Our open-source (LGPL) implementation is also available for download as a standalone package with minimal (POSIX and g++) dependencies. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | For à = 1, a new target language word is generated using the trigram language model p(e|e′, e′′). |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Taking the intersection of languages in these resources, and selecting languages with large amounts of parallel data, yields the following set of eight Indo-European languages: Danish, Dutch, German, Greek, Italian, Portuguese, Spanish and Swedish. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 32 81. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages. |
There is no global pruning. | 0 | The algorithm works due to the fact that not all permutations of cities have to be considered explicitly. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | (we ignored the 419 trees in their development set.) |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | For example, the MCTAG shown in Figure 7 generates trees of the form shown in Figure 4b. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Almost all annotators expressed their preference to move to a ranking-based evaluation in the future. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors lead to syntactic disambiguation. | 0 | For example, one of the ATB samples was the determiner dhalik “that.” The sample occurred in 1507 corpus positions, and we found that the annotations were consistent. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The set of n-grams appearing in a model is sparse, and we want to efficiently find their associated probabilities and backoff penalties. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need sample values for Ti and consider this setting of t(i). |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Here, an NE instance pair is any pair of NEs separated by at most 4 syntactic chunks; for example, “IBM plans to acquire Lotus”. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 43 80. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | An example of a fairly low-level relation is the affix relation, which holds between a stem morpheme and an affix morpheme, such as f1 -menD (PL). |
A beam search concept is applied as in speech recognition. | 0 | 4.1 The Task and the Corpus. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | IRST is not threadsafe. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The final model tions. |
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data. | 0 | However, to classify a token , while Borthwick uses tokens from to (from two tokens before to two tokens after ), we used only the tokens , , and . Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak-hypothesis outputs a confidence vector with one confidence value for each possible label. |
The features were weighted within a logistic model that gave an overall weight applied to the phrase pair, and MAP-smoothed relative-frequency estimates were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | It is difficult to directly compare the Matsoukas et al. results with ours, since our out-of-domain corpus is homogeneous; given heterogeneous training data, however, it would be trivial to include Matsoukas-style identity features in our instance-weighting model. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | In total there are O(K²) parameters associated with the transition parameters. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | Two sets of examples from Gan are given in (1) and (2) (= Gan's Appendix B, exx. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 1 | The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | When extracting the vector t, used to compute the constraint feature from the graph, we tried three threshold values for r (see Eq. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | When this feature type was included, CoBoost chose this default feature at an early iteration, thereby giving non-abstaining pseudo-labels for all examples, with eventual convergence to the two classifiers agreeing by assigning the same label to almost all examples. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paper. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Input: {(x1,i, x2,i)}. Initialize: ∀i, j : e(xi) = 0. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | (2009). |
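The rows above follow the three-column schema given in the header: a summary sentence (source_text), a binary label, and a candidate sentence drawn from the referenced paper (target_text). As a minimal sketch of how a split with this schema could be loaded and inspected with the Hugging Face datasets library — the dataset ID "user/dataset-name" below is a placeholder, not the real identifier — one might write:

```python
from datasets import load_dataset

# Placeholder ID: substitute the actual Hub identifier for this dataset.
ds = load_dataset("user/dataset-name", split="train")

# Each row pairs a summary sentence (source_text, 27-368 chars) with a
# candidate sentence (target_text, 1-5.38k chars) and a binary label.
for row in ds.select(range(3)):
    print(row["label"], "|", row["source_text"][:60], "|", row["target_text"][:60])
```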