source_text (string, length 27-368) | label (int64, 0-1) | target_text (string, length 1-5.38k) |
---|---|---|
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | There has been recent interest in the application of Indexed Grammars (IG's) to natural languages. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Instead, we focused on phrases and set the frequency threshold to 2, and so were able to utilize a lot of phrases while minimizing noise. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | In speech recognition the arcs of the lattice are typically weighted in order to indicate the probability of specific transitions. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The text type is editorials instead of speech transcripts. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | While building a machine translation system is a serious undertaking, in future we hope to attract more newcomers to the field by keeping the barrier of entry as low as possible. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | (1999) did not report using any dictionaries, but mentioned in a footnote that they have added list membership features, which have helped marginally in certain domains. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications. |
There is no global pruning. | 0 | These alignment models are similar to the concept of hidden Markov models (HMM) in speech recognition. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Although matching is done at the sentence level, this information is subsequently discarded when all matches are pooled. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The details are given in (Tillmann, 2000). |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | All the NE pair instances which co-occur separated by at most 4 chunks are collected along with information about their NE types and the phrase between the NEs (the “context”). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | We resolve this problem by inserting an entry with probability set to an otherwise-invalid value (−∞). |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | Finally, since non-projective constructions often involve long-distance dependencies, the problem is closely related to the recovery of empty categories and non-local dependencies in constituency-based parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun and de Rijke, 2004; Cahill et al., 2004; Levy and Manning, 2004; Campbell, 2004). |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | 7). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Interpolation search formalizes the notion that one opens a dictionary near the end to find the word “zebra.” Initially, the algorithm knows the array begins at b ← 0 and ends at e ← |A|−1. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | During coreference resolution, the caseframe network provides evidence that an anaphor and prior noun phrase might be coreferent. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | It is well known that language pairs such as English-German pose more challenges to machine translation systems than language pairs such as French-English. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | We currently simulate this by crafting a WCFG and feeding it to BitPar. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | For brevity, we omit the target words e, e′ in the formulation of the search hypotheses. |
This assumption, however, is not inherent to type-based tagging models. | 0 | On one end of the spectrum are clustering approaches that assign a single POS tag to each word type (Schutze, 1995; Lamar et al., 2010). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | segmentation (Table 2). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | In this paper, we will propose an unsupervised method to discover paraphrases from a large untagged corpus. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The complexity of the algorithm is O(E^3 · J^2 · 2^J), where E is the size of the target language vocabulary. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | When two partial hypotheses have equal state (including that of other features), they can be recombined and thereafter efficiently handled as a single packed hypothesis. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Recently, statistical NERs have achieved results that are comparable to hand-coded systems. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhong1 could be written with the character for 'middle,' 'clock,' 'end,' or 'loyal.' |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Realizing gains in practice can be challenging, however, particularly when the target domain is distant from the background data. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | to explore how well we can induce POS tags using only the one-tag-per-word constraint. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase. |
The texts were annotated with the RSTtool. | 0 | It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005). |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Parameter Component: As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions. Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | was done by the participants. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The AdaBoost algorithm was developed for supervised learning. |
BABAR showed successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially useful for resolving pronouns. | 0 | We present a coreference resolver called BABAR that uses contextual role knowledge to evaluate possible antecedents for an anaphor. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | The phrases have to be expressions of length less than 5 chunks that appear between two NEs. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered? |
They focused on phrases which have two Named Entities, and proceeded in two stages. | 0 | A total of 13,976 phrases were grouped. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Note that while the standard HMM has O(Kn) emission parameters, our model has O(n) effective parameters. Token Component: Once HMM parameters (φ, θ) have been drawn, the HMM generates a token-level corpus w in the standard way: P(w, t|φ, θ) = ∏_{(w,t)∈(w,t)} ∏_j P(t_j|φ_{t_{j−1}}) P(w_j|t_j, θ_{t_j}). The full joint distribution factors as P(T, W, θ, ψ, φ, t, w|α, β) = P(T, W, ψ|β) [Lexicon] · P(φ, θ|T, α, β) [Parameter] · P(w, t|φ, θ) [Token]. We refer to the components on the right hand side as the lexicon, parameter, and token component respectively. |
They focused on phrases which have two Named Entities, and proceeded in two stages. | 0 | These 140 NE categories are designed by extending MUC's 7 NE categories with finer sub-categories (such as Company, Institute, and Political Party for Organization; and Country, Province, and City for Location) and adding some new types of NE categories (Position Title, Product, Event, and Natural Object). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 5 64.7 42. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | (2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Then, moving from connective to connective, ConAno sometimes offers suggestions for its scope (using heuristics like “for subjunctor, mark all words up to the next comma as the first segment”), which the annotator can accept with a mouseclick or overwrite, marking instead the correct scope with the mouse. |
All the texts were annotated by two people. | 0 | The price shocked her.), or same-kind (e.g., Her health insurance paid for the hospital fees, but the automobile insurance did not cover the repair.). |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 4 69.0 51. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Taking N to be the number of examples an algorithm classified correctly (where all gold standard items labeled noise were counted as being incorrect), we calculated two measures of accuracy: See Tab. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Having found (spelling, context) pairs in the parsed data, a number of features are extracted. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | of Tokens No. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In general, m, l, l′ ∉ {l1, l2, l3}, and in lines 3 and 4, l′ must be chosen so as not to violate the above reordering restriction. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Formally, for a lexicon L and segments I ∈ L, O ∉ L, each word automaton accepts the language I*(O + I)I*. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | GL is then used to parse the string tn1 ... tnk_1, where tni is a terminal corresponding to the lattice span between node ni and ni+1. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between f and part-of-speech labels. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | For each sentence, we counted how many n-grams in the system output also occurred in the reference translation. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Further, the special hash 0 suffices to flag empty buckets. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | 2. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | was done by the participants. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Text generation, or at least the two phases of text planning and sentence planning, is a process driven partly by well-motivated choices (e.g., use this lexeme X rather than that more colloquial near-synonym Y ) and partly by con tation like that of PCC can be exploited to look for correlations in particular between syntactic structure, choice of referring expressions, and sentence-internal information structure. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009). |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | We checked whether the discovered links are listed in WordNet. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J|e_1^I) is the translation model. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | This is the form of recursive levels in iDafa constructs. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Training and testing is based on the Europarl corpus. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | So, xue2sheng1+men0 (student+PL) 'students' occurs and we estimate its cost at 11.43; similarly we estimate the cost of jiang4+men0 (general+PL) 'generals' (as in xiao3jiang4+men0 'little generals'), at 15.02. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Feature weights were set using Och’s MERT algorithm (Och, 2003). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen con structions, including morphological derivatives and personal names. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | Here NO counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | shortest match at each point. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Roughly speaking, previous work can be divided into three categories, namely purely statistical approaches, purely lexical rule-based approaches, and approaches that combine lexical information with statistical information. |
This corpus has several advantages: it is annotated at different levels. | 0 | Commentaries argue in favor of a specific point of view toward some political issue, often discussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | was done by the participants. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0. |
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure. | 0 | At present, the “Potsdam Commentary Corpus” (henceforth “PCC” for short) consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | All commentaries have been annotated with rhetorical structure, using RSTTool and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Model Hyperparam. English 1-1 m-1 Danish 1-1 m-1 Dutch 1-1 m-1 German 1-1 m-1 Portuguese 1-1 m-1 Spanish 1-1 m-1 Swedish 1-1 m-1 1TW best median 45. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | We often encounter sentences that are highly ambiguous in themselves, without some prior knowledge of the entities concerned. |
BABAR showed successful results in both the terrorism and natural disaster domains, with contextual-role knowledge proving especially useful for resolving pronouns. | 0 | The confidence level is then used as the belief value for the knowledge source. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | In all cases, the key is collapsed to its 64-bit hash. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | This is orthographically represented as 7C. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | An input ABCD can be represented as an FSA as shown in Figure 2(b). |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | This paper does not necessarily reflect the position of the U.S. Government. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 64 94. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Less frequently studied is the interplay among language, annotation choices, and parsing model design (Levy and Manning, 2003; Ku¨ bler, 2005). |
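The rows above follow the three-column schema described in the header (source_text: string, label: int64 taking values 0 or 1, target_text: string). Below is a minimal sketch of how such a table could be loaded and sanity-checked; the file name `data.csv` is hypothetical and stands in for whatever export (CSV, JSONL, Parquet) of this dataset is actually distributed.

```python
import pandas as pd

# Load the three-column table (source_text, label, target_text).
# "data.csv" is a hypothetical local export; swap in read_json or
# read_parquet to match the actual distribution format.
df = pd.read_csv("data.csv")

# Sanity-check the schema described in the header above.
assert {"source_text", "label", "target_text"} <= set(df.columns)
assert df["label"].isin([0, 1]).all()

# Length statistics comparable to the header metadata
# (source_text lengths 27-368, target_text lengths up to ~5.38k characters).
print(df["source_text"].str.len().describe())
print(df["target_text"].str.len().describe())

# Split rows by label for downstream use.
negatives = df[df["label"] == 0]
positives = df[df["label"] == 1]
print(f"{len(negatives)} label-0 rows, {len(positives)} label-1 rows")
```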