source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Mikheev et al. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Figure 1 depicts the lattice for a 2-word sentence bclm hneim. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Systems that generally do worse than others will receive a negative one. |
The AdaBoost algorithm was developed for supervised learning. | 0 | In this paper k = 3 (the three labels are person, organization, location), and we set α = 0.1. |
They focused on phrases which connect two Named Entities, and proceeded in two stages. | 0 | We propose an unsupervised method to discover paraphrases from a large untagged corpus, without requiring any seed phrase or other cue. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Many morphological decisions are based on long distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG also takes local context into account. |
The model employs various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Turning now to (1), we have the similar problem that splitting ma3lu4 into ma3 'horse' and lu4 'way' is more costly than retaining this as one word ma3lu4 'road.' |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | The trie data structure is commonly used for language modeling. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | Finally, we show that in application settings, the absence of gold segmentation lowers parsing performance by 2–5% F1. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | First, we aim to explicitly characterize examples from OUT as belonging to general language or not. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). Also, due to the way we collected manual judgements, we do not necessarily have the same sentences judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems). |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | 3. |
Their results show that their high performance NER uses less training data than other systems. | 0 | Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | A dynamic programming recursion similar to the one in Eq. 2 is evaluated. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | In our experiments we set the parameter values randomly, and then ran EM to convergence. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Then we ran binary search to determine the least amount of memory with which it would run. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | We use the log-linear tagger of Toutanova et al. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | As shown in Table 3, the proportion of sentences containing some non-projective dependency ranges from about 15% in DDT to almost 25% in PDT. |
Here we present two algorithms. | 0 | Each x_i ∈ 2^X is the set of features constituting the ith example. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Schapire and Singer show that the training error is bounded above by ∏t Zt. Thus, in order to greedily minimize an upper bound on training error, on each iteration we should search for the weak hypothesis ht and the weight αt that minimize Zt. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Unigram records store probability, backoff, and an index in the bigram table. |
Their results show that their high performance NER uses less training data than other systems. | 0 | To eliminate such sequences, we define a transition probability between word classes to be equal to 1 if the sequence is admissible, and 0 otherwise. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | This supports our main thesis that decisions taken by a single, improved grammar are beneficial for both tasks. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | We identified three ways that contextual roles can be exploited: (1) by identifying caseframes that co-occur in resolutions, (2) by identifying nouns that co-occur with case- frames and using them to crosscheck anaphor/candidate compatibility, (3) by identifying semantic classes that co- occur with caseframes and using them to crosscheck anaphor/candidate compatability. |
The experimental tests are carried out on the Verbmobil task, which is a limited-domain spoken-language task. | 0 | A modified language model probability pδ(e|e′, e′′) is defined as follows: pδ(e|e′, e′′) = 1.0 if δ = 0, and p(e|e′, e′′) if δ = 1. We associate a distribution p(δ) with the two cases δ = 0 and δ = 1 and set p(δ = 1) = 0.7. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Hence, we take the probability of the event fmnh analyzed as REL VB to be P(f | REL) · P(mnh | VB). This means that we generate f and mnh independently depending on their corresponding PoS tags, and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Entries for 2 ≤ n < N store a vocabulary identifier, probability, backoff, and an index into the n + 1-gram table. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | This feature imposes constraints that are based on the probability of each name class during training. (Table 1: Features based on the token string.) |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | For example, we might have VP → VB NP PP, where the NP is the subject. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The less favored reading may be selected in certain contexts, however; in the case of , for example, the nominal reading jiang4 will be selected if there is morphological information, such as a following plural affix ir, menD that renders the nominal reading likely, as we shall see in Section 4.3. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | 7). |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | We tokenize MWUs and their POS tags; this reduces the tag set size to 12. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The two new terms force the two classifiers to agree, as much as possible, on the unlabeled examples. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need to sample values for Ti and consider this setting of t(i). |
Here we present two algorithms. | 0 | Taking only the highest frequency rules is much "safer", as they tend to be very accurate. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | In the case of adverbial reduplication illustrated in (3b) an adjective of the form AB is reduplicated as AABB. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton. |
Here we present two algorithms. | 0 | Figure 3 shows learning curves for CoBoost. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | All features were conjoined with the state z. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l'époétine alfa. (English: The reference medicinal product for Silapo is EPREX/ERYPO, which contains epoetin alfa.) |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P(T, t | W, w, α, β) ∝ P(T, t, W, w | α, β) = ∫ P(T, t, W, w, ψ, θ, φ | α, β) dψ dθ dφ. Note that given tag assignments T, there is only one setting of token-level tags t which has mass in the above posterior. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Their theoretical finding is simply stated: classification error rate decreases toward the noise rate exponentially in the number of independent, accurate classifiers. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | The 8 similarity-to-IN features are based on word frequencies and scores from various models trained on the IN corpus: To avoid numerical problems, each feature was normalized by subtracting its mean and dividing by its standard deviation. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | This annotation choice weakens splitIN. |
The model employs various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Besides the lack of a clear definition of what constitutes a correct segmentation for a given Chinese sentence, there is the more general issue that the test corpora used in these evaluations differ from system to system, so meaningful comparison between systems is rendered even more difficult. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Maamouri et al. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Ex: Mr. Cristiani, president of the country ... |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | We see from these results that the behavior of the parametric techniques are robust in the presence of a poor parser. |
This corpus has several advantages: it is annotated at different levels. | 0 | When the connective is an adverbial, there is much less clarity as to the range of the spans. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | [Figure: graphical model over the variables β, T, and ψ; W denotes the word types (W1, ...)] |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | However, it can be noted that the results for the least informative encoding, Path, are almost comparable, while the third encoding, Head, gives substantially worse results for both data sets. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | This representation gives ir, an appropriate morphological decomposition, preserving information that would be lost by simply listing ir, as an unanalyzed form. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | We tokenize MWUs and their POS tags; this reduces the tag set size to 12. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | As mentioned above, it is not obvious how to apply Daumé's approach to multinomials, which do not have a mechanism for combining split features. |
BABAR achieved successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | Semantic expectations are analogous to lexical expectations except that they represent semantic classes rather than nouns. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | This method, one instance of which we term the "greedy algorithm" in our evaluation of our own system in Section 5, involves starting at the beginning (or end) of the sentence, finding the longest word starting (ending) at that point, and then repeating the process starting at the next (previous) hanzi until the end (beginning) of the sentence is reached. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | State will ultimately be used as context in a subsequent query. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | In such cases we use the non-pruned lattice including all (possibly ungrammatical) segmentations, and let the statistics (including OOV) decide. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | For t = 1, ..., T and for j = 1, 2: where q̃i = exp(−ŷi gj(xj,i)). In practice, this greedy approach almost always results in an overall decrease in the value of Zco. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Figure 4 shows some such phrase sets based on keywords in the CC-domain. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 9 66.4 47. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | But we will show that the use of unlabeled data can drastically reduce the need for supervision. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The second modification is more important, and is discussed in the next section. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Hereafter, each pair of NE categories will be called a domain; e.g. the “Company – Company” domain, which we will call CC-domain (Step 2). |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Bresnan, Kaplan, Peters, and Zaenen (1982) argue that these structures are needed to describe crossed-serial dependencies in Dutch subordinate clauses. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | If these sets do not overlap, then the words cannot be coreferent. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | The authors acknowledge the support of the NSF (CAREER grant IIS0448168, and grant IIS 0904684). |
All the texts were annotated by two people. | 0 | We use MMAX for this annotation as well. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | paper, and is missing 6 examples from the A set. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The final model tions. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The second case involves existential noun phrases (Allen, 1995), which are noun phrases that uniquely specify an object or concept and therefore do not need a prior referent in the discourse. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | In sequential tagging models such as (Adler and Elhadad, 2006; Bar-Haim et al., 2007; Smith et al., 2005) weights are assigned according to a language model. The input for the joint task is a sequence W = w1, ..., wn of space-delimited tokens. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts. |
A beam search concept is applied as in speech recognition. | 0 | In Eq. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Let us notate the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in f, menO will be denoted unseen(f,). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | This suggests that different types of anaphora may warrant different treatment: definite NP resolution may depend more on lexical semantics, while pronoun resolution may depend more on contextual semantics. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | 2. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | As noted for the perplexity task, we do not expect cache to grow substantially with model size, so RandLM remains a low-memory option. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis violating the restriction. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Third, we develop a human-interpretable grammar that is competitive with a latent variable PCFG. |
All the texts were annotated by two people. | 0 | A number of PCC commentaries will be read by professional news speakers and prosodic features will be annotated, so that the various annotation layers can be set into correspondence with intonation patterns. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | This class of formalisms has the property that their derivation trees are local sets, and they manipulate objects using a finite number of composition operations that involve a finite number of symbols. |
There is no global pruning. | 0 | For each extension a new position is added to the coverage set. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | (1992). |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 9 66.4 47. |
This assumption, however, is not inherent to type-based tagging models. | 0 | [Table: per-language results for 1TW, +PRIOR, and +FEATS across English, Danish, Dutch, German, Portuguese, Spanish, and Swedish] |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | For example, Gazdar (1985) discusses the applicability of Indexed Grammars (IG's) to Natural Language in terms of the structural descriptions assigned; and Berwick (1984) discusses the strong generative capacity of Lexical-Functional Grammar (LFG) and Government and Binding (GB) grammars. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For the examples given in (1) and (2) this certainly seems possible. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We applied the AutoSlog system (Riloff, 1996) to our unannotated training texts to generate a set of extraction patterns for each domain. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Overview of the method; 2.2 Step by Step Algorithm. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | For English, our Evalb implementation is identical to the most recent reference (EVALB20080701). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | 3 Throughout this paper we shall give Chinese examples in traditional orthography, followed. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | [Figure 2: An abstract example illustrating the segmentation algorithm, showing a dictionary D and the best path BestPath(Id(I) ∘ D*).] |
Their results show that their high performance NER uses less training data than other systems. | 0 | Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | By contrast, BerkeleyLM's hash and compressed variants will return incorrect results based on an (n − 1)-gram. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Only one of the first n positions which are not already aligned in a partial hypothesis may be chosen, where n is set to 4. |
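The table above pairs a source summary with a target snippet and a binary label. For readers who want to work with the rows directly, here is a minimal loading sketch using the `datasets` library; the repository id is a hypothetical placeholder, since this preview does not show the actual dataset path.

```python
from datasets import load_dataset

# "username/dataset-name" is a hypothetical placeholder; substitute the real
# repository id, which this preview page does not show.
ds = load_dataset("username/dataset-name", split="train")

# Columns per the header: source_text (string), label (int64), target_text (string).
print(ds.features)
print(ds[0]["source_text"], ds[0]["label"])
```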
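One row above notes that the PROBING data structure uses linear probing hash tables and is designed for speed. The sketch below shows only the linear-probing idea; the key format, payload, and table size are invented for illustration and are not KenLM's actual layout.

```python
# Minimal linear-probing hash table sketch (illustrative, not KenLM's layout).
class ProbingTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.keys = [None] * capacity    # hashed n-gram keys
        self.values = [None] * capacity  # (log probability, backoff) payloads

    def insert(self, key, value):
        i = hash(key) % self.capacity
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % self.capacity  # probe linearly past collisions
        self.keys[i], self.values[i] = key, value

    def lookup(self, key):
        i = hash(key) % self.capacity
        while self.keys[i] is not None:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % self.capacity
        return None                      # hit an empty slot: key is absent

table = ProbingTable(8)
table.insert(("is", "one"), (-1.2, -0.3))
assert table.lookup(("is", "one")) == (-1.2, -0.3)
```

Probes touch consecutive slots, which is the cache-friendly property that makes this kind of table fast in practice.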
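Two rows describe a trie layout in which unigram records store probability, backoff, and an index into the bigram table, and entries for 2 ≤ n < N add a vocabulary identifier. A hedged sketch of that sorted-array idea follows; the concrete numbers and the binary-search lookup are illustrative assumptions, not the actual TRIE format.

```python
from bisect import bisect_left

# Unigram records: probability, backoff, and the start of the word's children
# in the bigram table; the next record's index marks the end of that range.
unigrams = [
    {"prob": -1.0, "backoff": -0.5, "begin": 0},  # word id 0
    {"prob": -2.0, "backoff": -0.4, "begin": 2},  # word id 1
    {"prob": -0.1, "backoff": 0.0,  "begin": 3},  # sentinel for range ends
]
# Bigram records, grouped by first word and sorted by second-word identifier.
bigram_words = [3, 7, 5]
bigram_probs = [-0.2, -1.5, -0.7]

def bigram_prob(w1, w2):
    begin, end = unigrams[w1]["begin"], unigrams[w1 + 1]["begin"]
    i = bisect_left(bigram_words, w2, begin, end)  # search the child range
    if i < end and bigram_words[i] == w2:
        return bigram_probs[i]
    return None  # a real model would back off to the unigram here

print(bigram_prob(0, 7))  # -1.5
```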
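The Verbmobil row above defines pδ(e|e′, e′′) as 1.0 when δ = 0 and the trigram probability when δ = 1, with p(δ = 1) = 0.7. The sketch below turns that definition into code; marginalizing over the two cases of δ is an assumption about how the distribution is applied, not a detail given in the row.

```python
P_DELTA_1 = 0.7  # p(delta = 1), as set in the row above

def p_delta(delta, e, e_prev, e_prev2, trigram):
    # p_delta(e | e', e'') = 1.0 if delta = 0, trigram probability if delta = 1.
    return 1.0 if delta == 0 else trigram(e, e_prev, e_prev2)

def gated_prob(e, e_prev, e_prev2, trigram):
    # Assumed combination: weight the two cases by p(delta).
    return ((1 - P_DELTA_1) * p_delta(0, e, e_prev, e_prev2, trigram)
            + P_DELTA_1 * p_delta(1, e, e_prev, e_prev2, trigram))

uniform = lambda e, e1, e2: 0.25                 # stand-in trigram model
print(gated_prob("ja", "gut", "sehr", uniform))  # 0.3 * 1.0 + 0.7 * 0.25 = 0.475
```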
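Another row defines the transition probability between word classes as 1 if the sequence is admissible and 0 otherwise, which zeroes out any path through an inadmissible pair. A minimal sketch; the class names and the admissible set are invented for illustration.

```python
# Hypothetical admissible class pairs; the real set comes from the NER system.
ADMISSIBLE = {
    ("PERSON_PREFIX", "PERSON"),
    ("ORG", "ORG_SUFFIX"),
    ("LOC", "LOC"),
}

def transition_prob(prev_class, next_class):
    # 1 if the pair is admissible, 0 otherwise, exactly as the row states.
    return 1.0 if (prev_class, next_class) in ADMISSIBLE else 0.0

def sequence_admissible(classes):
    # One zero transition eliminates the whole sequence.
    return all(transition_prob(a, b) == 1.0 for a, b in zip(classes, classes[1:]))

print(sequence_admissible(["PERSON_PREFIX", "PERSON"]))  # True
print(sequence_admissible(["PERSON", "ORG_SUFFIX"]))     # False
```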
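The type-level tagging rows state that all token occurrences of a word type must share a single tag Ti, so a blocked Gibbs step scores each candidate tag for the whole set of occurrences at once. A hedged sketch follows; score_tag is a stand-in for the real collapsed-posterior terms, which the rows do not spell out.

```python
import math
import random

def sample_type_tag(word_type, token_positions, num_tags, score_tag, rng=random):
    """Draw one tag for all occurrences of word_type (blocked Gibbs step)."""
    # Score every candidate tag against the entire block of token positions.
    log_scores = [score_tag(tag, word_type, token_positions)
                  for tag in range(num_tags)]
    # Normalize in log space, then sample from the resulting distribution.
    m = max(log_scores)
    weights = [math.exp(s - m) for s in log_scores]
    r = rng.random() * sum(weights)
    for tag, w in enumerate(weights):
        r -= w
        if r <= 0:
            return tag
    return num_tags - 1

# Toy usage: uniform scores reduce to a uniform draw over the tag set.
tag = sample_type_tag("dog", [3, 17, 42], num_tags=12,
                      score_tag=lambda t, w, pos: 0.0)
print(tag)
```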
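Several rows relate reordering search to a dynamic programming solution of the traveling salesman problem, where each extension adds one position to a coverage set. The toy Held-Karp recursion below shows that coverage-set DP over a plain distance matrix; it is a sketch of the TSP connection, not the actual decoder.

```python
def shortest_tour(dist):
    """Held-Karp DP: state = (coverage bitmask, last visited position)."""
    n = len(dist)
    full = (1 << n) - 1
    best = {(1 << 0, 0): 0.0}  # start at position 0
    for mask in range(1, full + 1):
        for last in range(n):
            if (mask, last) not in best:
                continue
            base = best[(mask, last)]
            for nxt in range(n):
                if mask & (1 << nxt):
                    continue       # position already covered
                key = (mask | (1 << nxt), nxt)
                if base + dist[last][nxt] < best.get(key, float("inf")):
                    best[key] = base + dist[last][nxt]
    return min(best[(full, last)] for last in range(n) if (full, last) in best)

dist = [[0, 2, 9], [2, 0, 4], [9, 4, 0]]
print(shortest_tour(dist))  # 6.0: visit 0 -> 1 -> 2
```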