source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | On the other hand, if it is seen as McCann Pte. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (Blum and Mitchell 98) describe learning in the following situation: X = X1 × X2 where X1 and X2 correspond to two different "views" of an example. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The type-level tag assignments T generate features associated with word types W . The tag assignments constrain the HMM emission parameters θ. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | Section 5 covers relevant previous work on SMT adaptation, and section 6 concludes. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | tai2du2 'Taiwan Independence.' |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Informally, two or more paths can be dependent on each other: for example, they could be required to be of equal length as in the trees in Figure 4. generates such a tree set. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | In other words, the set of hidden states F was chosen to be the fine set of treebank tags. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | pronunciations of individual words; they also need to compute intonational phrase boundaries in long utterances and assign relative prominence to words in those utterances. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | In TAG's the elementary tree and addresses where adjunction takes place are used to instantiate the operation. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. |
The AdaBoost algorithm was developed for supervised learning. | 0 | We are currently exploring other methods that employ similar ideas and their formal properties. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The combining technique must act as a multi-position switch indicating which parser should be trusted for the particular sentence. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | IdentiFinder '99's results are considerably better than IdentiFinder '97's. IdentiFinder's performance in MUC7 is published in (Miller et al., 1998). |
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems. | 0 | At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | We do not experiment with models larger than physical memory in this paper because TPT is unreleased, factors such as disk speed are hard to replicate, and in such situations we recommend switching to a more compact representation, such as RandLM. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores. We also add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Our smoothing procedure takes into account all the aforementioned aspects and works as follows. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | We have shown that the maximum entropy framework is able to use global information directly. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | IRSTLM’s quantized variant is the inspiration for our quantized variant. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | Standard SMT systems have a hierarchical parameter structure: top-level log-linear weights are used to combine a small set of complex features, interpreted as log probabilities, many of which have their own internal parameters and objectives. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Of these cases, 38 were temporal expressions (either a day of the week or month of the year). |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | This is less effective in our setting, where IN and OUT are disparate. |
There is no global pruning. | 0 | Machine Translation: In this case my colleague can not visit on the fourth of May ('In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie'). Figure 1: Reordering for the German verb group. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | We aligned the texts at a sentence level across all four languages, resulting in 1064 sentence per language. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | This approach needs a phrase as an initial seed and thus the possible relationships to be extracted are naturally limited. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | This stage of label propagation results in a tag distribution ri over labels y, which encodes the proportion of times the middle word of ui E Vf aligns to English words vy tagged with label y. The second stage consists of running traditional label propagation to propagate labels from these peripheral vertices Vf to all foreign language vertices in the graph, optimizing the corresponding objective. After running label propagation (LP), we compute tag probabilities for foreign word types x by marginalizing the POS tag distributions of foreign trigrams ui = x− x x+ over the left and right context words, where the qi (i = 1, ... , |Vf|) are the label distributions over the foreign language vertices and µ and ν are hyperparameters that we discuss in §6.4. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Since pronouns carry little semantics of their own, resolving them depends almost entirely on context. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 9 50.2 +PRIOR best median 47. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Table 1 shows results of the benchmark. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | These results are promising and there are several avenues for improving on these results. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Otherwise, the scope of the search problem shrinks recursively: if A[pivot] < k then this becomes the new lower bound: l ← pivot; if A[pivot] > k then u ← pivot. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The ATB annotation distinguishes between verbal and nominal readings of maSdar process nominals. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | For example, from the sentence "Mr. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | 4. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | However, there are phrases which express the same meanings even though they do not share the same keyword. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Email: gale@research. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The other half was replaced by other participants, so we ended up with roughly the same number. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Almost all annotators expressed their preference to move to a ranking-based evaluation in the future. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Inflectional features marking pronominal elements may be attached to different kinds of categories marking their pronominal complements. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Two common cases are the attributive adjective and the process nominal maSdar, which can have a verbal reading. Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | . |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | We have developed a coreference resolver called BABAR that uses contextual role knowledge to make coreference decisions. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Our experiments all concern the analytical annotation, and the first experiment is based only on the training part. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We attempt to formalize this notion in terms of the tree pumping lemma which can be used to show that a tree set does not have dependent paths. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | 2. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | The discovered paraphrases can be a big help to reduce human labor and create a more comprehensive pattern set. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | We apply a beam search concept as in speech recognition. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | If the context wnf will never extend to the right (i.e. wnf v is not present in the model for all words v) then no subsequent query will match the full context. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The human judges were presented with the following definition of adequacy and fluency, but no additional instructions: |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The performance was 80.99% recall and 61.83% precision. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | an event. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Various segmentation approaches were then compared with human performance: 1. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | We also collapse unary chains with identical basic categories like NP → NP. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | Applications The discovered paraphrases have multiple applications. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | 98 15. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | (1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We tabulate this increase in Table 3. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all the four languages of the shared task. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | These three parsers have given the best reported parsing results on the Penn Treebank Wall Street Journal corpus (Marcus et al., 1993). |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | This allows the learners to "bootstrap" each other by filling the labels of the instances on which the other side has abstained so far. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | Next we show that the ATB is similar to other tree- banks in gross statistical terms, but that annotation consistency remains low relative to English (§3). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | As noted for the perplexity task, we do not expect cache to grow substantially with model size, so RandLM remains a low-memory option. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The use of weighted transducers in particular has the attractive property that the model, as it stands, can be straightforwardly interfaced to other modules of a larger speech or natural language system: presumably one does not want to segment Chinese text for its own sake but instead with a larger purpose in mind. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The final block in table 2 shows models trained on feature subsets and on the SVM feature described in 3.4. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 9 61.0 44. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Each vertex within a connected component must have the same label — in the binary classification case, we need a single labeled example to identify which component should get which label. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | In contrast to these approaches, our method directly incorporates these constraints into the structure of the model. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | A total of 13,976 phrases were grouped. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | For example, in MUC6, there are four zones (TXT, HL, DATELINE, DD). |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | In Figure 4 we show an example of variation between the parsing models. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at one's disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The NP is a complement to a preposition, which is the head of a PP. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | (2) Table 2, Sources of Dictionaries: Location Names (http://www.timeanddate.com, http://www.cityguide.travel-guides.com, http://www.worldtravelguide.net); Corporate Names (http://www.fmlx.com); Person First Names (http://www.census.gov/genealogy/names); Person Last Names. The McCann family . . . |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Initially, the DempsterShafer model assumes that all hypotheses are equally likely, so it creates a set called θ that includes all hypotheses. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Gabbard and Kulick (2008) show that there is significant attachment ambiguity associated with iDafa, which occurs in 84.3% of the trees in our development set. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | A greedy algorithm (or maximum-matching algorithm), GR: proceed through the sentence, taking the longest match with a dictionary entry at each point. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking ri4 as a word (e.g., a common abbreviation for Japan), along with wen2zhang1 'essay'. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | They return a value in the range [0,1], where 0 indicates neutrality and 1 indicates the strongest belief that the candidate and anaphor are coreferent. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | For the error counts, a range from 0:0 to 1:0 is used. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Local features are features that are based on neighboring tokens, as well as the token itself. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | In defining LCFRS's, we hope to generalize the definition of CFG's to formalisms manipulating any structure, e.g. strings, trees, or graphs. |
There is no global pruning. | 0 | Mai. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The implication of this ambiguity for a parser is that the yield of syntactic trees no longer consists of space-delimited tokens, and the expected number of leaves in the syntactic analysis is not known in advance. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The NP is a complement to a preposition, which is the head of a PP. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices. |
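One of the target sentences above describes a greedy (maximum-matching) segmenter: proceed through the sentence, taking the longest dictionary match at each point. A minimal sketch of that idea follows; the toy dictionary and the pinyin-style tokens are illustrative assumptions, not part of the dataset.

```python
def greedy_segment(text, dictionary):
    """Segment `text` left to right, preferring the longest dictionary match.

    Unknown single symbols are emitted as-is so segmentation never stalls.
    """
    longest = max(map(len, dictionary))
    result, i = [], 0
    while i < len(text):
        # Try the longest possible match first, backing off toward length 1.
        for length in range(min(longest, len(text) - i), 0, -1):
            candidate = text[i:i + length]
            if length == 1 or candidate in dictionary:
                result.append(candidate)
                i += length
                break
    return result

toy_dictionary = {"ri4", "wen2", "zhang1", "wen2zhang1"}
print(greedy_segment("ri4wen2zhang1", toy_dictionary))
# → ['ri4', 'wen2zhang1']
```

Note how the longest match wins: "wen2zhang1" is preferred over segmenting into "wen2" + "zhang1", which is exactly the behavior the maximum-matching heuristic prescribes.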