source_text (string, lengths 27–368) | label (int64, 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Another approach to finding paraphrases is to find phrases which take similar subjects and objects in large corpora by using mutual information of word distribution [Lin and Pantel 01]. |
This corpus has several advantages: it is annotated at different levels. | 0 | It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie ware es denn am ahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten konnten , ware das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that . |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Table 5: Individual Performance of KSs for Disasters (e.g., “the mayor” vs. “the journalist”). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The approach uses both spelling and contextual rules. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | All the sentences have been analyzed by our chunker and NE tagger. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | 10) and trained both EM and L-BFGS for 1000 iterations. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may be able to admit multiple PoS tags. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | (1), Pr(e_1^I) is the language model, which is a trigram language model in this case. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Thus, the effects of spontaneous speech are present in the corpus, e.g. the syntactic structure of the sentence is rather less restricted, however the effect of speech recognition errors is not covered. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | The accuracy of the sets in representing paraphrase ranged from 73% to 99%, depending on the NE categories and set sizes; the accuracy of the links for two evaluated domains was 73% and 86%. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Unigram records store probability, backoff, and an index in the bigram table. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously. where cλ(s, t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and y is a prior weight. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Table 3: Dev set frequencies for the two most significant discourse markers in Arabic are skewed toward analysis as a conjunction. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 3 These are not full case frames in the traditional sense, but they approximate a simple case frame with a single slot. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The second row is the accuracy of the best of the three parsers. |
They focused on phrases which connect two Named Entities, and proceeded in two stages. | 0 | It is a relatively frequent word in the domain, but it can be used in different extraction scenarios. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We will refer to the semantic classes that co-occur with a caseframe as the semantic expectations of the caseframe. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | The choice between different actions is in general nondeterministic, and the parser relies on a memorybased classifier, trained on treebank data, to predict the next action based on features of the current parser configuration. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Here NO counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993). |
All the texts were annotated by two people. | 0 | The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently “in focus” (which also appears underlined in the full text). |
A beam search concept is applied as in speech recognition. | 0 | Ignoring the identity of the target language words e and e′, the possible partial hypothesis extensions due to the IBM restrictions are shown in Table 2. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 0 70.9 42. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | The second experiment is limited to data from PDT.5 The training part of the treebank was projectivized under different encoding schemes and used to train memory-based dependency parsers, which were run on the test part of the treebank, consisting of 7,507 sentences and 125,713 tokens.6 The inverse transformation was applied to the output of the parsers and the result compared to the gold standard test set. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The major problem for all segmentation systems remains the coverage afforded by the dictionary and the lexical rules used to augment the dictionary to deal with unseen words. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | This flexibility, along with the simplicity of implementation and expansion, makes this framework an attractive base for continued research. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Email: gale@research. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | As a partial solution, for pairs of hanzi that co-occur sufficiently often in our namelists, we use the estimated bigram cost, rather than the independence-based cost. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | In this paper, we show how non-projective dependency parsing can be achieved by combining a datadriven projective parser with special graph transformation techniques. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For example, management succession systems must distinguish between a person who is fired and a person who is hired. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | We have developed a general approach for combining parsers when preserving the entire structure of a parse tree is important. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Cluster phrases based on Links We now have a set of phrases which share a keyword. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Now we have sets of phrases which share a keyword and we have links between those sets. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | For parsing, this is a mistake, especially in the case of interrogatives. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | In both cases, the instanceweighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline, and gains of between 0.6 and 1.8 over an equivalent mixture model (with an identical training procedure but without instance weighting). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | The way we cant distinction between system performance. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The two knowledge sources that use semantic expectations, WordSemCFSem and CFSemCFSem, always return values of -1 or 0. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | If two nouns have mutually exclusive semantic classes, then they cannot be coreferent. |
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Figure 2: An ATB sample from the human evaluation. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (Blum and Mitchell 98) go on to give PAC results for learning in the cotraining case. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | , December, then the feature MonthName is set to 1. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Among these 32 sets, we found the following pairs of sets which have two or more links. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | For t = 1, T and for j = 1, 2: where 4 = exp(-jg'(xj,i)). practice, this greedy approach almost always results in an overall decrease in the value of Zco. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Segmentation errors cascade into the parsing phase, placing an artificial limit on parsing performance. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | For à = 0, no new target word is generated, while an additional source sentence position is covered. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | There is a fairly large body of work on SMT adaptation. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | 3. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | A secondary reference resolution classifier has information on the class assigned by the primary classifier. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Thus in a two-hanzi word like 中国 zhong1guo2 (middle country) 'China' there are two syllables, and at the same time two morphemes. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The algorithm builds two classifiers iteratively: each iteration involves minimization of a continuously differential function which bounds the number of examples on which the two classifiers disagree. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Each xt E 2x is the set of features constituting the ith example. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | If one of these checks fails then this knowledge source reports that the candidate is not a viable antecedent for the anaphor. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | 3) A tight coupling with the speech recognizer output. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Secondly, some segments in a proposed segment sequence may in fact be seen lexical events, i.e., for some p tag Prf(p → (s, p)) > 0, while other segments have never been observed as a lexical event before. |
This assumption, however, is not inherent to type-based tagging models. | 0 | This design leads to a significant reduction in the computational complexity of training and inference. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | It has no syntactic function. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Instead, we focused on phrases and set the frequency threshold to 2, and so were able to utilize a lot of phrases while minimizing noise. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | For example, in the sentence that starts with “Bush put a freeze on . . . |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | It is also true of the adaptation of the Collins parser for Czech (Collins et al., 1999) and the finite-state dependency parser for Turkish by Oflazer (2003). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | A compatible view is presented by Charniak et al. (1996) who consider the kind of probabilities a generative parser should get from a PoS tagger, and concludes that these should be P(w|t) “and nothing fancier”.3 In our setting, therefore, the Lattice is not used to induce a probability distribution on a linear context, but rather, it is used as a common-denominator of state-indexation of all segmentations possibilities of a surface form. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The way we cant distinction between system performance. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TIS). |
This corpus has several advantages: it is annotated at different levels. | 0 | This withdrawal by the treasury secretary is understandable, though. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Particles are uninflected. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The way judgements are collected, human judges tend to use the scores to rank systems against each other. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | A summary of the corpus used in the experiments is given in Table 3. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | It also does not prune, so comparing to our pruned model would be unfair. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | In this situation, BABAR takes the conservative approach and declines to make a resolution. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | The way judgements are collected, human judges tend to use the scores to rank systems against each other. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 0 57.2 43. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Since trees in a tree set are adjoined together, the addressing scheme uses a sequence of pairings of the address and name of the elementary tree adjoined at that address. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Clearly it is possible to write a rule that states that if an analysis Modal+ Verb is available, then that is to be preferred over Noun+ Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The Hebrew token ‘bcl’1, for example, stands for the complete prepositional phrase 'We adopt here the transliteration of (Sima’an et al., 2001). |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | At first glance, the problem seems quite complex: a large number of rules is needed to cover the domain, suggesting that a large number of labeled examples is required to train an accurate classifier. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | In all cases, the key is collapsed to its 64-bit hash. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | “Agree” is a subject control verb, which dominates another verb whose subject is the same as that of “agree”; the latter verb is generally the one of interest for extraction. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Process nominals name the action of the transitive or ditransitive verb from which they derive. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | For corporate suffixes, a list of tokens cslist that occur frequently as the last token of an organization name is collected from the training data. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | (Other classes handled by the current system are discussed in Section 5.) |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Given n-gram counts {c_n}_{n=1}^N, we use ⌈log2 c_1⌉ bits per vocabulary identifier and ⌈log2 c_n⌉ per index into the table of n-grams. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Others depend upon various lexical heuris tics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word. |
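
For readers who want to work with rows like the ones above programmatically, here is a minimal sketch using the Hugging Face `datasets` library. The repository id and split name are placeholders (the actual ones are not given on this page), and treating `label == 1` as a positive source/target match is an assumption based only on the 0–1 range declared in the header.

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repository id and split name; replace with the actual values
# for this dataset on the Hugging Face Hub.
ds = load_dataset("your-org/your-dataset", split="train")

# The schema should match the header above:
#   source_text (string), label (int64, 0 or 1), target_text (string).
print(ds.features)
print(ds[0])

# Assumption: label == 1 marks a matching source/target pair.
positives = ds.filter(lambda row: row["label"] == 1)
print(f"{len(positives)} of {len(ds)} rows are labeled 1")

# Rough length check against the header (target_text lengths 1 to ~5.38k characters).
lengths = [len(row["target_text"]) for row in ds]
print(min(lengths), max(lengths))
```

Column names and dtypes come straight from the preview header; everything else here (repository id, split, label semantics) should be checked against the dataset card before use.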