source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | 1). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | To provide a thorough analysis, we evaluated three baselines and two oracles in addition to two variants of our graph-based approach. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Frontier nodes are annotated by zero-arity functions corresponding to elementary structures. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Smith estimates Lotus will make a profit this quarter…”, our system extracts “Smith estimates Lotus” as an instance. |
This corpus has several advantages: it is annotated at different levels. | 0 | The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Each set is assigned two values: belief and plausibility. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The overall performance of our joint framework demonstrates that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperforms upper bounds proposed by previous joint disambiguation systems and achieves segmentation and parsing results on a par with state-of-the-art standalone applications results. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | We simulate lexical constraints by using an external lexical resource against which we verify whether OOV segments are in fact valid Hebrew lexemes. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The relativizer f(“that”) for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | First, the model assumes independence between the first and second hanzi of a double given name. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | For the TM, this is: where cI(s, t) is the count in the IN phrase table of pair (s, t), po(s|t) is its probability under the OUT TM, and cI(t) = Σs' cI(s', t). |
Their results show that their high-performance NER uses less training data than other systems. | 0 | At most one feature in this group will be set to 1. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Second, we identified the 100 most frequent nouns in the training corpus and manually labeled them with semantic tags. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Each xii is a member of X, where X is a set of possible features. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Part-of-speech (POS) tag distributions are known to exhibit sparsity – a word is likely to take a single predominant tag in a corpus. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | From the point of view of computational implementation this can be problematic, since the inclusion of non-projective structures makes the parsing problem more complex and therefore compromises efficiency and in practice also accuracy and robustness. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | nan2gual 'pumpkin.' |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | This range is collapsed to a number of buckets, typically by taking the hash modulo the number of buckets. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993). |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The CoBoost algorithm described above divides the function Zco into two parts: Zco = 40 + 40. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | This task measures how well each package performs in machine translation. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 1 | We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We quantify error categories in both evaluation settings. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Exposing this information to the decoder will lead to better hypothesis recombination. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Since the early days of statistical NLP, researchers have observed that a part-of-speech tag distribution exhibits “one tag per discourse” sparsity – words are likely to select a single predominant tag in a corpus, even when several tags are possible. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | However, the next step is clearly different. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | computing the precision of the other's judgments relative to this standard. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | In a model we built with default settings, 1.2% of n + 1-grams were missing their ngram suffix. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | This limitation is the obstacle to making the technology “open domain”. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Hyperparameter settings are sorted according to the median one-to-one metric over runs. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | What both of these approaches presume is that there is a single correct segmentation for a sentence, against which an automatic algorithm can be compared. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 2.1 Reliable Case Resolutions. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Figure 4 Input lattice (top) and two segmentations (bottom) of the sentence 'How do you say octopus in Japanese?'. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | 49 99. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Since pronouns carry little semantics of their own, resolving them depends almost entirely on context. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Informally, two or more paths can be dependent on each other: for example, they could be required to be of equal length as in the trees in Figure 4. generates such a tree set. |
This paper offers broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | The ATB gives several different analyses to these words to indicate different types of coordination. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.2 |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The entries in this table can be compared with those of Table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parser. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | It may be more realistic to replace the second criteria with a softer one, for example (Blum and Mitchell 98) suggest the alternative Alternatively, if Ii and 12 are probabilistic learners, it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learners. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | We checked whether the discovered links are listed in WordNet. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Given the closeness of most systems and the wide over-lapping confidence intervals it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | For each experiment we gave an nonparametric and a parametric technique for combining parsers. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | annotation guidelines that tell annotators what to do in case of doubt. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | The second setting uses the news-related subcorpora for the NIST09 MT Chinese to English evaluation8 as IN, and the remaining NIST parallel Chinese/English corpora (UN, Hong Kong Laws, and Hong Kong Hansard) as OUT. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | The significant drop in number of pupils will begin in the fall of 2003. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | ICOC and CSPP contributed the greatest improvements. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | The frequency of the Company – Company domain ranks 11th with 35,567 examples. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | The goal of our research was to explore the use of contextual role knowledge for coreference resolution. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Memory usage in PROBING is high, though SRILM is even larger, so where memory is of concern we recommend using TRIE, if it fits in memory. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | In such cases we assign all of the estimated probability mass to the form with the most likely pronunciation (determined by inspection), and assign a very small probability (a very high cost, arbitrarily chosen to be 40) to all other variants. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Adam Pauls provided a pre-release comparison to BerkeleyLM and an initial Java interface. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | We extend the Matsoukas et al approach in several ways. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | The data is sorted based on the frequency of the context (“a unit of” appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. “NBC” and “General Electric Co.” appeared 10 times with the context “a unit of”). |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | A list of words occurring more than 10 times in the training data is also collected (commonWords). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 4.3 Morphological Analysis. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 41. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | In the natural disasters domain, agents are often forces of nature, such as hurricanes or wildfires. |
Here we present two algorithms. | 0 | The method shares some characteristics of the decision list algorithm presented in this paper. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | For each token , zero, one, or more of the features in each feature group are set to 1. |
Here both parametric and non-parametric models are explored. | 0 | When this metric is less than 0.5, we expect to incur more errors' than we will remove by adding those constituents to the parse. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from Table 2: Coverage set hypothesis extensions for the IBM reordering. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | Combining multiple highly-accurate independent parsers yields promising results. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | State will ultimately be used as context in a subsequent query. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | For statistics on this test set, refer to Figure 1. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (6), with W+ > W−. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Adding the isolated constituents to our hypothesis parse could increase our expected recall, but in the cases we investigated it would invariably hurt our precision more than we would gain on recall. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | An inverted alignment is defined as follows: inverted alignment: i → j = bi. Target positions i are mapped to source positions bi. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Such tag sequences are often treated as “complex tags” (e.g. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Finally, we wish to reiterate an important point. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 9 50.2 +PRIOR best median 47. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | The accuracies for link were 73% and 86% on two evaluated domains. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | taTweel (-) is an elongation character used in Arabic script to justify text. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Recent work has made significant progress on unsupervised POS tagging (Me´rialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson,2007; Goldwater and Griffiths, 2007; Gao and John son, 2008; Ravi and Knight, 2009). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | This is similar to using the Linux MAP POPULATE flag that is our default loading mechanism. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | The data is sorted based on the frequency of the context (“a unit of” appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. “NBC” and “General Electric Co.” appeared 10 times with the context “a unit of”). |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 87 Table 7: Test set results. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We develop our POS induction model based on the feature-based HMM of Berg-Kirkpatrick et al. (2010). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The breakdown of the different types of words found by ST in the test corpus is given in Table 3. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 41. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | For the automatic scoring method BLEU, we can distinguish three quarters of the systems. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The pseudo-code describing the algorithm is given in Fig. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Preprocessing the raw trees improves parsing performance considerably.9 We first discard all trees dominated by X, which indicates errors and non-linguistic text. |
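The rows above follow a simple three-column schema: a summary-style sentence (source_text), a binary relevance label, and a sentence drawn from the corresponding paper (target_text). A minimal sketch of handling records of this shape, using two rows copied from the table; the `Example` dataclass and the filtering helper are illustrative assumptions, not part of the dataset itself:

```python
from dataclasses import dataclass


@dataclass
class Example:
    """One row of the (source_text, label, target_text) schema."""
    source_text: str
    label: int
    target_text: str


rows = [
    Example(
        source_text=("They showed the efficacy of graph-based label "
                     "propagation for projecting part-of-speech "
                     "information across languages."),
        label=1,
        target_text=("We have shown the efficacy of graph-based label "
                     "propagation for projecting part-of-speech "
                     "information across languages."),
    ),
    Example(
        source_text=("This paper presents a maximum entropy-based "
                     "named entity recognizer (NER)."),
        label=0,
        target_text=("Except our own and MENE + reference resolution, "
                     "the results in Table 6 are all official MUC7 "
                     "results."),
    ),
]

# Sanity checks matching the column metadata in the header:
# labels are binary, and source_text lengths fall in [27, 368].
assert all(r.label in (0, 1) for r in rows)
assert all(27 <= len(r.source_text) <= 368 for r in rows)

# Select the positively labeled (matching) pairs.
positives = [r for r in rows if r.label == 1]
```

In the positive row the target sentence is a first-person restatement of the summary, which is consistent with reading label 1 as "these two sentences describe the same contribution".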