source_text (string, 27–368 chars) | label (int64: 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | But we will show that the use of unlabeled data can drastically reduce the need for supervision. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Making the reasonable assumption that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 2.1 Part-of-speech tags. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | The probability of the classes assigned to the words in a sentence in a document is defined as the product of the probabilities of the classes assigned to the individual words, where each of these probabilities is determined by the maximum entropy classifier. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Before presenting our results, we describe the datasets that we used, as well as two baselines. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | The idea of distinguishing between general and domain-specific examples is due to Daumé and Marcu (2006), who used a maximum-entropy model with latent variables to capture the degree of specificity. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | They first collect the NE instance pairs and contexts, just like our method. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Table 3 contains the results for evaluating our systems on the test set (section 22). |
Here we present two algorithms. | 0 | Eq. (4) is minimized by setting α_t = (1/2) ln(W_+/W_−). Since a feature may be present in only a few examples, W_− can in practice be very small or even 0, leading to extreme confidence values. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | This is especially bad with PROBING because it is based on hashing and performs random lookups, but it is not intended to be used in low-memory scenarios. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | For à = 0, no new target word is generated, while an additional source sentence position is covered. |
Here we present two algorithms. | 0 | We are currently exploring such algorithms. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | By definition, each existential NP uniquely specifies an object or concept, so we can infer that all instances of the same existential NP are coreferent (e.g., “the FBI” always refers to the same entity). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Table 4 shows the results. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Applications The discovered paraphrases have multiple applications. |
They showed that it was useful to abstract away from the details of the formalisms and examine the nature of their derivation processes as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Each member of a set of trees can be adjoined into distinct nodes of trees in a single elementary tree set, i.e., derivations always involve the adjunction of a derived auxiliary tree set into an elementary tree set. |
All the texts were annotated by two people. | 0 | The corpus has been annotated with six different types of information, which are characterized in the following subsections. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | In this paper, we make a simplifying assumption of one-tag-per-word. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing. | 0 | Motivated by these questions, we significantly raise baselines for three existing parsing models through better grammar engineering. |
A beam search concept is applied as in speech recognition. | 0 | Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Mohri [1995] shows promise for improving this situation. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Finally, this effort is part of a much larger program that we are undertaking to develop stochastic finite-state methods for text analysis with applications to TTS and other areas; in the final section of this paper we will briefly discuss this larger program so as to situate the work discussed here in a broader context. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | In section 2 we introduce the graph transformation techniques used to projectivize and deprojectivize dependency graphs, and in section 3 we describe the data-driven dependency parser that is the core of our system. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | GL is then used to parse the string t_{n_1} ... t_{n_{k-1}}, where t_{n_i} is a terminal corresponding to the lattice span between nodes n_i and n_{i+1}. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Let n be some node labeled X in a tree γ (see Figure 3). |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. | 0 | The choice between different actions is in general nondeterministic, and the parser relies on a memory-based classifier, trained on treebank data, to predict the next action based on features of the current parser configuration. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (Berland and Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | computing the recall of the other's judgments relative to this standard. |
Here we present two algorithms. | 0 | The NP is a complement to a preposition, which is the head of a PP. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | We can do that . Input: Das ist zu knapp , weil ich ab dem dritten in Kaiserslautern bin . Genaugenommen nur am dritten . Wie wäre es denn am ahm Samstag , dem zehnten Februar ? MonS: That is too tight , because I from the third in Kaiserslautern . In fact only on the third . How about ahm Saturday , the tenth of February ? QmS: That is too tight , because I am from the third in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? IbmS: That is too tight , from the third because I will be in Kaiserslautern . In fact only on the third . Ahm how about Saturday , February the tenth ? Input: Wenn Sie dann noch den siebzehnten könnten , wäre das toll , ja . MonS: If you then also the seventeenth could , would be the great , yes . QmS: If you could then also the seventeenth , that would be great , yes . IbmS: Then if you could even take seventeenth , that would be great , yes . Input: Ja , das kommt mir sehr gelegen . Machen wir es dann am besten so . MonS: Yes , that suits me perfectly . Do we should best like that . QmS: Yes , that suits me fine . We do it like that then best . IbmS: Yes , that suits me fine . We should best do it like that . |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | An important subproblem of language model storage is therefore sparse mapping: storing values for sparse keys using little memory then retrieving values given keys using little time. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The dev and test sets were randomly chosen from the EMEA corpus. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | By definition, each existential NP uniquely specifies an object or concept, so we can infer that all instances of the same existential NP are coreferent (e.g., “the FBI” always refers to the same entity). |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | As long as the main evaluation metric is dependency accuracy per word, with state-of-the-art accuracy mostly below 90%, the penalty for not handling non-projective constructions is almost negligible. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | The phrases have to be expressions of length less than 5 chunks that appear between two NEs. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | As we shall argue, the semantic class affiliation of a hanzi constitutes useful information in predicting its properties. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | This work was supported in part by the National Science Foundation under grant IRI9704240. |
The corpus was annotated with different linguistic information. | 0 | One conclusion drawn from this annotation effort was that for humans and machines alike, |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | was done by the participants. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | This is less effective in our setting, where IN and OUT are disparate. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For example, suppose one is building a TTS system for Mandarin Chinese. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC 7 participants. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Other work includes transferring latent topic distributions from source to target language for LM adaptation (Tam et al., 2007) and adapting features at the sentence level to different categories of sentence (Finch and Sumita, 2008). |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Intuitively, it places more weight on OUT when less evidence from IN is available. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | Notice that the CC-domain is a special case. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The Berkeley parser gives state-of-the-art performance for all metrics. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | For instance, for out-of-domain English-French, Systran has the best BLEU and manual scores. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | Even when there is training data available in the domain of interest, there is often additional data from other domains that could in principle be used to improve performance. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 0 57.2 43. |
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | For the moment we will assume that there are only two possible labels: each y_i is in {−1, +1}. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Methods that allow multiple segmentations must provide criteria for choosing the best segmentation. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | The ATB is disadvantaged by having fewer trees with longer average yields. (LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).) |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | There has also been work using a bootstrapping approach [Brin 98; Agichtein and Gravano 00; Ravichandran and Hovy 02]. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | ”). |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | One obvious application is information extraction. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | besuchen 9. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | The first two rows of the table are baselines. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Consider first the examples in (2). |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Not every annotator was fluent in both the source and the target language. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | We use a simple TF/IDF method to measure the topicality of words. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | raphy: 人 ren2 'person' is a fairly uncontroversial case of a monographemic word, and 中国 zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The sequence of states needed to carry out the word reordering example in Fig. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | In this case nonalpha is the string formed by removing all upper/lower case letters from the spelling (e.g., for Thomas E. Petry, nonalpha = .). |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | But Arabic contains a variety of linguistic phenomena unseen in English. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Link accuracy and WN coverage by domain (Table 2): CC 73.3%, 2/11; PC 88.9%, 2/8. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The operations must be linear and nonerasing, i.e., they cannot duplicate or erase structure from their arguments. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | And time is short. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | TTS systems in general need to do more than simply compute the. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We train and test on the CoNLL-X training set. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | SRILM’s compact variant, IRSTLM, MITLM, and BerkeleyLM’s sorted variant are all based on this technique. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Supervised part-of-speech (POS) taggers, for example, approach the level of inter-annotator agreement (Shen et al., 2007, 97.3% accuracy for English). |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The model we use provides a simple framework in which to incorporate a wide variety of lexical information in a uniform way. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Table 2 shows the features used in the current version of the parser. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | These enable much larger models in memory, compensating for lost accuracy. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Sparse lookup is a key subproblem of language model queries. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | We utilized two kinds of datasets in our experiments: (i) monolingual treebanks and (ii) large amounts of parallel text with English on one side. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | Using structural information As was explained in the results section, we extracted examples like “Smith estimates Lotus”, from a sentence like “Mr. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Cohen and Smith (2007) followed up on these results and proposed a system for joint inference of morphological and syntactic structures using factored models each designed and trained on its own. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For example, it is well-known that one can build a finite-state bigram (word) model by simply assigning a state Si to each word Wi in the vocabulary, and having (word) arcs leaving that state weighted such that for each Wj and corresponding arc aj leaving Si, the cost on aj is the bigram cost of WiWj- (Costs for unseen bigrams in such a scheme would typically be modeled with a special backoff state.) |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Lazy mapping reduces memory requirements by loading pages from disk only as necessary. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Email: gale@research. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | When a token fmnh is to be interpreted as the lexeme sequence f/REL mnh/VB, the analysis introduces two distinct entities, the relativizer f (“that”) and the verb mnh (“counted”), and not as the complex entity “that counted”. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it cannot properly take into account the word reordering due to the German verbgroup. |
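For working with these rows programmatically, the snippet below is a minimal sketch using the Hugging Face `datasets` library. The repository id `user/this-dataset` is a placeholder (the actual dataset path is not shown on this page), and the meaning of the binary label is likewise undocumented here; every previewed row has label 0, so the filtering step is illustrative only.

```python
# Minimal sketch: load the dataset and inspect its three columns.
# "user/this-dataset" is a hypothetical repository id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("user/this-dataset", split="train")

# Each row pairs a short source_text with a (possibly long) target_text
# and an int64 label that is either 0 or 1.
for row in ds.select(range(3)):
    print(row["label"], "::", row["source_text"][:60], "->", row["target_text"][:60])

# Keep only rows with label 1 (assuming 1 marks positive/related pairs).
positives = ds.filter(lambda row: row["label"] == 1)
print(f"{len(positives)} rows with label 1")
```

Passing `split="train"` assumes a single train split; if the dataset defines other splits, calling `load_dataset(...)` without the `split` argument returns a `DatasetDict` keyed by split name.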