source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars)
---|---|---|
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Even if an example like this is not labeled, it can be interpreted as a "hint" that Mr and president imply the same category. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We extend Subramanya et al.'s intuitions to our bilingual setup. |
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems. | 0 | Systems that generally do worse than others will receive a negative one. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | There are many techniques for improving language model speed and reducing memory consumption. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Then, it can be verified that We can now derive the CoBoost algorithm as a means of minimizing Zco. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Location list is processed into a list of unigrams and bigrams (e.g., New York). |
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems. | 0 | • We evaluated translation from English, in addition to into English. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | For example, in .., says Mr. Cooper, a vice president of.. both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. Cooper is of type Person. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | This withdrawal by the treasury secretary is understandable, though. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 0 70.9 42. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 9 61.0 44. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | In this section, we try to compare our results with those obtained by IdentiFinder '97 (Bikel et al., 1997), IdentiFinder '99 (Bikel et al., 1999), and MENE (Borthwick, 1999). |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | (Figure 2: Lexical Caseframe Expectations; Figure 3: Semantic Caseframe Expectations. Recoverable entries: Terrorism — "<agent> assassinated": group, human; "investigation into <NP>": event; "exploded outside <NP>": building. Natural Disasters — "<agent> investigating cause": group, human; "survivor of <NP>": event, natphenom; "hit with <NP>": attribute, natphenom.) To illustrate how lexical expectations are used, suppose we want to determine whether noun phrase X is the antecedent for noun phrase Y. If they are coreferent, then X and Y should be substitutable for one another in the story. Consider these sentences: (S1) Fred was killed by a masked man with a revolver. |
This assumption, however, is not inherent to type-based tagging models. | 0 | See Table 2 for the tag set size of other languages. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | A similar structure is used in speech recognition. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Contextual role knowledge provides evidence as to whether a candidate is a plausible antecedent for an anaphor. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, such as jiang, which could be /adv jiang1 'be about to' or /nc jiang4 '(military) general', as in xiao3jiang4 'little general.' |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | For example, one of the ATB samples was the determiner dhalika "that." The sample occurred in 1507 corpus positions, and we found that the annotations were consistent. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | One class comprises words derived by productive morphological processes, such as plural noun formation using the suffix men. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance. |
All the texts were annotated by two people. | 0 | We are interested in correlations between prosody and discourse structure. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Our evaluation includes both weighted and un- weighted lattices. |
The AdaBoost algorithm was developed for supervised learning. | 0 | In principle a feature could be an arbitrary predicate of the (spelling, context) pair; for reasons that will become clear, features are limited to querying either the spelling or context alone. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | However, it is desirable if we can separate them. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Mohri [1995]) shows promise for improving this situation. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Before describing the unsupervised case we first describe the supervised version of the algorithm. Input to the learning algorithm: n labeled examples of the form (xi, yi). yi is the label of the ith example (given that there are k possible labels, yi is a member of Y = {1 ... k}). xi is a set of mi features {xi1, xi2, ..., ximi}. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Methods that allow multiple segmentations must provide criteria for choosing the best segmentation. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The Expectation Maximization (EM) algorithm (Dempster, Laird and Rubin 77) is a common approach for unsupervised training; in this section we describe its application to the named entity problem. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | In the maximum entropy framework, there is no such constraint. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | The first experiment uses data from two dependency treebanks. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | One of the difficulties in Natural Language Processing is the fact that there are many ways to express the same thing or event. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | This procedure is repeated for T rounds while alternating between the two classifiers. |
A beam search concept is applied as in speech recognition. | 0 | Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | For the TM, this is: where cI(s, t) is the count in the IN phrase table of pair (s, t), po(s|t) is its probability under the OUT TM, and cI(t) = Σs' cI(s', t). |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Eq. (8) can now be rewritten as which is of the same form as the function Zt used in AdaBoost. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Again, we can compute average scores for all systems for the different language pairs (Figure 6). |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | In the rhetorical tree, nuclearity information is then used to extract a "kernel tree" that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially. |
Here both parametric and non-parametric models are explored. | 0 | Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Given a sufficient number of randomly drawn unlabeled examples (i.e., edges), we will induce two completely connected components that together span the entire graph. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | This is especially bad with PROBING because it is based on hashing and performs random lookups, but it is not intended to be used in low-memory scenarios. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | [Hasegawa et al. 04] reported only on relation discovery, but one could easily acquire paraphrases from the results. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | More examples are shown in Figure 5. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | However, there will remain a large number of words that are not readily adduced to any productive pattern and that would simply have to be added to the dictionary. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Reiche's colleagues will make sure that the concept is waterproof. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | BABAR uses two methods to identify anaphors that can be easily and reliably resolved with their antecedent: lexical seeding and syntactic seeding. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | When this metric is less than 0.5, we expect to incur more errors than we will remove by adding those constituents to the parse. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | Although this feature helps, we encounter one consequence of variable word order. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | One way to approach this discrepancy is to assume a preceding phase of morphological segmentation for extracting the different lexical items that exist at the token level (as is done, to the best of our knowledge, in all parsing related work on Arabic and its dialects (Chiang et al., 2006)). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Then each arc of D maps either from an element of H to an element of P, or from ε (i.e., the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H×P, which is terminated with a weighted arc labeled with an element of ε×P. The weight represents the estimated cost (negative log probability) of the word. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). For the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems). |
This corpus has several advantages: it is annotated at different levels. | 0 | The state in future has not enough work for its many teachers. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | com t 600 Mountain Avenue, 2c278, Murray Hill, NJ 07974, USA. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | But we follow the more direct adaptation of Evalb suggested by Tsarfaty (2006), who viewed exact segmentation as the ultimate goal. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | (Garbled figure excerpt: a segmentation lattice with nodes labeled JAPAN, JAPANESE, OCTOPUS, HOW, SAY and arc weights such as 6.51, 9.51, and 10.28; the remaining text is unrecoverable.) |
This assumption, however, is not inherent to type-based tagging models. | 0 | This alters generation of T as follows: P(T|ψ) = Π(i=1..n) P(Ti|ψ). Note that this distribution captures the frequency of a tag across word types, as opposed to tokens. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Ends with the feminine affix p. 4. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 4.1 Corpora. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | When aligning the words in parallel texts (for language pairs like Spanish-English, French-English, Italian-German, ...), we typically observe a strong localization effect. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | AdaBoost is given access to a weak learning algorithm, which accepts as input the training examples, along with a distribution over the instances. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | All systems (except for Systran, which was not tuned to Europarl) did considerably worse on out-of-domain training data. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | • Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | If an unlabeled vertex does not have a path to any labeled vertex, this term ensures that the converged marginal for this vertex will be uniform over all tags, allowing the middle word of such an unlabeled vertex to take on any of the possible tags. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | We present two algorithms. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Minimal perfect hashing is used to find the index at which a quantized probability and possibly backoff are stored. |
Their results show that their high performance NER use less training data than other systems. | 0 | We believe that the underlying principles of the maximum entropy framework are suitable for exploiting information from diverse sources. |
All the texts were annotated by two people. | 0 | One key issue here is to seek a discourse-based model of information structure. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Finally we show the combining techniques degrade very little when a poor parser is added to the set. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | 3.5 Improved models of discourse. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | For pronouns, however, all of the knowledge sources increased recall, often substantially, and with little if any decrease in precision. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | It rewards matches of n-gram sequences, but measures only at most indirectly overall grammatical coherence. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We would like to relax somewhat the constraint on the path complexity of formalisms in LCFRS. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | This paper presents a maximum entropy-based named entity recognizer (NER). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | This is less effective in our setting, where IN and OUT are disparate. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | ∅ denotes the empty set, where no source sentence position is covered. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Given a document to process, BABAR uses four modules to perform coreference resolution. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | This can be seen as a rough approximation of Yarowsky and Ngai (2001). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | We will also directly compare with a baseline similar to the Matsoukas et al approach in order to measure the benefit from weighting phrase pairs (or ngrams) rather than full sentences. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | But gold segmentation is not available in application settings, so a segmenter and parser are arranged in a pipeline. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We trained this model by optimizing the following objective function: Note that this involves marginalizing out all possible state configurations z for a sentence x, resulting in a non-convex objective. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Two aspects of the corpus have been presented in previous papers ((Reitter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization). |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | About half of the participants of last year's shared task participated again. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | In this specific case, as these two titles could fill the same column of an IE table, we regarded them as paraphrases for the evaluation. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | This design leads to a significant reduction in the computational complexity of training and inference. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The maximum likelihood estimates (i.e., parameter values which maximize 10) can not be found analytically, but the EM algorithm can be used to hill-climb to a local maximum of the likelihood function from some initial parameter settings. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems. |
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems. | 0 | The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Our analysis identifies three key factors driving our performance gain: 1) selecting a model structure which directly encodes tag sparsity, 2) a type-level prior on tag assignments, and 3) a straightforward naïve Bayes approach to incorporate features. |
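Rows in the table above follow a simple pipe-delimited layout: source_text, an integer label, and target_text, with a trailing pipe. As a minimal sketch of working with this rendering, the helper below splits one row into its three fields. The exact delimiter (" | ") and trailing "|" are assumptions based on how the table is displayed here, and the split would mis-fire if a text field itself contained " | ".

```python
def parse_row(row: str):
    """Split one pipe-delimited table row into (source_text, label, target_text).

    Assumes the rendering shown above: three fields joined by " | " with a
    trailing "|". Fields containing a literal " | " would break this sketch.
    """
    # Drop trailing whitespace and the closing pipe, then split on the delimiter.
    parts = row.rstrip().rstrip("|").split(" | ")
    if len(parts) != 3:
        raise ValueError(f"expected 3 columns, got {len(parts)}")
    source_text, label, target_text = parts
    return source_text.strip(), int(label), target_text.strip()

# Hypothetical example row in the same shape as the table above.
example = "A beam search concept is applied as in speech recognition. | 0 | Starting from a DP-based solution. |"
src, label, tgt = parse_row(example)
```

This keeps the label as an int (matching the int64 column type) while leaving both text fields as stripped strings.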