source_text: string (lengths 27–368) | label: int64 (0–1) | target_text: string (lengths 1–5.38k)
---|---|---|
Here we present two algorithms. | 0 | Unsupervised Models for Named Entity Classification Collins |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Consider first the examples in (2). |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 2 62.6 45. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Each of the constituents must have received at least $\lceil (k+1)/2 \rceil$ votes from the $k$ parsers, so $a \geq \lceil (k+1)/2 \rceil$ and $b \geq \lceil (k+1)/2 \rceil$.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | This has the effect of randomly permuting vocabulary identifiers, meeting the requirements of interpolation search when vocabulary identifiers are used as keys. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations. |
The texts were annotated with the RSTtool. | 0 | The knowledge base then can be tested for its relation-inference capabilities on the basis of full-blown co-reference information. |
Here we present two algorithms. | 0 | This paper discusses the use of unlabeled examples for the problem of named entity classification. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. |
The texts were annotated with the RSTtool. | 0 | 2.3 Rhetorical structure. |
All the texts were annotated by two people. | 0 | All commentaries have been tagged with part-of-speech information using Brants' TnT1 tagger and the Stuttgart/Tübingen Tag Set automatic analysis was responsible for this decision.)
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | In fact, we found that enabling IRSTLM’s cache made it slightly slower, so results in Table 1 use IRSTLM without caching. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The result of this is shown in Figure 7. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | We substantially outperform all of them on query speed and offer lower memory consumption than lossless alternatives. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | Special thanks to Jan Hajič and Matthias Trautner Kromann for assistance with the Czech and Danish data, respectively, and to Jan Hajič, Tomáš Holan, Dan Zeman and three anonymous reviewers for valuable comments on a preliminary version of the paper.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Because these two words have identical complements, syntax rules are typically unhelpful for distinguishing between them. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | If they are found in a list, then a feature for that list will be set to 1. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The rationale for treating these semantic labels differently is that they are specific and reliable (as opposed to the WordNet classes, which are more coarse and more noisy due to polysemy).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This model is easily incorporated into the segmenter by building a WFST restricting the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | 2.4 Underspecified rhetorical structure. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Note that because we extracted only high-confidence alignments, many foreign vertices will not be connected to any English vertices. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 37. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 0 55.3 34. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | SRILM inefficiently stores 64-bit pointers. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | It should be clear from the onset that the particle b (“in”) in ‘bcl’ may then attach higher than the bare noun cl (“shadow”). |
Here both parametric and non-parametric models are explored. | 0 | Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Other approaches encode sparsity as a soft constraint. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The form mnh itself can be read as at least three different verbs (“counted”, “appointed”, “was appointed”), a noun (“a portion”), and a possessed noun (“her kind”). |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Figure 5 shows how this model is implemented as part of the dictionary WFST. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | When this feature type was included, CoBoost chose this default feature at an early iteration, thereby giving non-abstaining pseudo-labels for all examples, with eventual convergence to the two classifiers agreeing by assigning the same label to almost all examples. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The type-level posterior term can be computed according to $P(T_i \mid W, T_{-i}, \beta) \propto \ldots$ Note that each round of sampling $T_i$ variables takes time proportional to the size of the corpus, as with the standard token-level HMM.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | An initial step of any text analysis task is the tokenization of the input into words. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Dempster-Shafer handles this by re-normalizing all the belief values with respect to only the non-null sets (this is the purpose of the denominator in Equation 1).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Second, we show that although the Penn Arabic Treebank is similar to other tree- banks in gross statistical terms, annotation consistency remains problematic. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. |
A beam search concept is applied as in speech recognition. | 0 | 4.3 Translation Experiments. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | $\Pr(e_1^I)$ is the language model of the target language, whereas $\Pr(f_1^J \mid e_1^I)$ is the translation model.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The second weakness is purely conceptual, and probably does not affect the per formance of the model. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | The first stage identifies a keyword in each phrase and joins phrases with the same keyword into sets. |
All the texts were annotated by two people. | 0 | Reiche's colleagues will make sure that the concept is waterproof.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Two issues distinguish the various proposals. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | After tagging a large corpus with an automatic NE tagger, the method tries to find sets of paraphrases automatically without being given a seed phrase or any kind of cue. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Moreover, they are used as substantives much 2 Unlike machine translation, constituency parsing is not significantly affected by variable word order. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | On the one hand, the type-level error rate is not calibrated for the number of n-grams in the sample. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Gan's solution depends upon a fairly sophisticated language model that attempts to find valid syntactic, semantic, and lexical relations between objects of various linguistic types (hanzi, words, phrases). |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | As is standard, we use a fixed constant K for the number of tagging states. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The following recursive equation is evaluated: $Q_{e'}(e, S, C, j) = p(f_j \mid e) \cdot \max_{\delta, e''} \big\{\, p(j \mid j', J)\, p(\delta)\, p_{\delta}(e \mid e', e'') \cdot \max_{(S', j'):\, (S', C \setminus \{j\}, j') \rightarrow (S, C, j),\ j' \in C \setminus \{j\}} Q_{e''}(e', S', C \setminus \{j\}, j') \,\big\}$ (2). The search ends in the hypotheses $(I, \{1, \ldots, J\}, j)$.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | So, it is too costly to make IE technology âopen- domainâ or âon-demandâ like IR or QA. |
All the texts were annotated by two people. | 0 | This offers the well-known advantages for interchangeability, but it raises the question of how to query the corpus across levels of annotation.
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | For example, out of 905 phrases in the CC-domain, 211 phrases contain keywords found in step 2.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | $(\{1, \ldots, m\} \setminus \{l_1, l_2, l_3\}, m)$ In German-to-English translation, the monotonicity constraint is violated mainly with respect to the German verbgroup.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Searching a probing hash table consists of hashing the key, indexing the corresponding bucket, and scanning buckets until a matching key is found or an empty bucket is encountered, in which case the key does not exist in the table. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | But in most cases they can be used interchangeably.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 18 77. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Sometimes extracted phrases by themselves are not meaningful to consider without context, but we set the following criteria. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | However, the learning curves in Figure 3 show that the Berkeley parser does not exceed our manual grammar by as wide a margin as has been shown for other languages (Petrov, 2009).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Previous Work. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The normalization factor plays an important role in the AdaBoost algorithm. |
They have made use of local and global features to deal with the instances of same token in a document. | 1 | Global features are extracted from other occurrences of the same token in the whole document. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | In Section 6 we discuss other issues relating to how higher-order language models could be incorporated into the model.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The method reported in this paper makes use solely of unigram probabilities, and is therefore a zeroeth-order model: the cost of a particular segmentation is estimated as the sum of the costs of the individual words in the segmentation. |
The texts were annotated with the RSTtool. | 0 | The PCC is not the result of a funded project. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Third, all remaining anaphora are evaluated by 11 different knowledge sources: the four contextual role knowledge sources just described and seven general knowledge sources.
They have made use of local and global features to deal with the instances of same token in a document. | 0 | This group of features attempts to capture such information. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | All improvements over the baseline are statistically significant beyond the 0.01 level (McNemar’s test). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The confidence level is then used as the belief value for the knowledge source.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | This is demonstrated by average scores over all systems, in terms of BLEU, fluency and adequacy, as displayed in Figure 5. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | For the English RST-annotated corpus that is made available via LDC, his corresponding result is 62%. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Recently, statistical NERs have achieved results that are comparable to hand-coded systems. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify.
The corpus was annotated with different linguistic information. | 0 | For displaying and querying the annotated text, we make use of the Annis Linguistic Database developed in our group for a large research effort ("Sonderforschungsbereich") revolving around information structure.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | gao1bu4-gao1xing4 (hap-not-happy) 'happy?'
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | All features were conjoined with the state z. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÃcient search algorithm. | 0 | The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | However, when the linear LM is combined with a linear TM (lm+lin tm) or MAP TM (lm+map TM), the results are much better than a log-linear combination for the EMEA setting, and on a par for NIST. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Given that weights on all outgoing arcs sum up to one, weights induce a probability distribution on the lattice paths. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules. |
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | While the generative power of CG's is greater than that of CFG's, it appears to be highly constrained.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | The following three sections elaborate these different stages in more detail.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | On each language we investigate the contribution of each component of our model. |
The corpus was annotated with different linguistic information. | 0 | Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Each visited entry $w_i^n$ stores backoff $b(w_i^n)$.
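
The rows above follow a simple schema: a source_text string, an integer label that is either 0 or 1, and a target_text string. As a minimal sketch only, the snippet below shows one way rows with this shape could be wrapped and queried using the Hugging Face `datasets` library; the two example rows are copied from the preview above, and reading label 1 as a matching source/target pair is an assumption, not something this page states.

```python
# Minimal sketch (not part of the dataset card): wrap rows with this schema
# -- source_text: string, label: int64 (0/1), target_text: string -- in a
# Hugging Face `datasets` object so they can be filtered and inspected.
from datasets import Dataset

# Two rows copied from the preview table above.
rows = {
    "source_text": [
        "Here we present two algorithms.",
        "They have made use of local and global features to deal with the "
        "instances of same token in a document.",
    ],
    "label": [0, 1],
    "target_text": [
        "Unsupervised Models for Named Entity Classification Collins",
        "Global features are extracted from other occurrences of the same "
        "token in the whole document.",
    ],
}

ds = Dataset.from_dict(rows)

# Keep only rows with label == 1 (assumed here to mark matching pairs).
positives = ds.filter(lambda example: example["label"] == 1)

for example in positives:
    print(example["source_text"], "->", example["target_text"])
```

The same `filter` call works unchanged on the full dataset once it is loaded, since it relies only on the three columns shown in the preview.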