source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | We asked six native speakers, three from Taiwan (T1–T3) and three from the Mainland (M1–M3), to segment the corpus. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The task is to learn a function from an input string (proper name) to its type, which we will assume to be one of the categories Person, Organization, or Location. |
Here we present two algorithms. | 0 | 2. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In this section, we briefly review our translation approach. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | We have used the Java-based opennlp maximum entropy package1. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 5.1 Data Sets. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | In our experiments, we used the same set of features as BergKirkpatrick et al. (2010): an indicator feature based In a traditional Markov model, the emission distribution PΘ(Xi = xi |Zi = zi) is a set of multinomials. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | For definite NPs, the results are a mixed bag: some knowledge sources increased recall a little, but at the expense of some precision. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | For example, in Northern dialects (such as Beijing), a full tone (1, 2, 3, or 4) is changed to a neutral tone (0) in the final syllable of many words: Jll donglgual 'winter melon' is often pronounced donglguaO. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | So we decided to use semantic class information only to rule out candidates. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Participants and other volunteers contributed about 180 hours of labor in the manual evaluation. |
A beam search concept is applied as in speech recognition. | 0 | For the experiments, we use a simple preprocessing step. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | The standard measures for evaluating Penn Treebank parsing performance are precision and recall of the predicted constituents. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Many human evaluation metrics have been proposed. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 68 96. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 1 55.8 38. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | When two partial hypotheses have equal state (including that of other features), they can be recombined and thereafter efficiently handled as a single packed hypothesis. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | 2.2.1 The Caseframe Representation Information extraction (IE) systems use extraction patterns to identify noun phrases that play a specific role in 1 Our implementation only resolves NPs that occur in the same document, but in retrospect, one could probably resolve instances of the same existential NP in different documents too. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We briefly review it here for completeness. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Ltd., then organization will be more probable. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment $b_1^I = b_1 \ldots b_i \ldots b_I$. Using the inverted alignments in the maximum approximation, we obtain as search criterion: $\max_I \big( p(J|I) \cdot \max_{e_1^I} \big( \prod_{i=1}^{I} p(e_i|e_{i-1}, e_{i-2}) \cdot \max_{b_1^I} \prod_{i=1}^{I} \big[ p(b_i|b_{i-1}, I, J) \cdot p(f_{b_i}|e_i) \big] \big) \big) = \max_I \big( p(J|I) \cdot \max_{e_1^I, b_1^I} \prod_{i=1}^{I} p(e_i|e_{i-1}, e_{i-2}) \cdot p(b_i|b_{i-1}, I, J) \cdot p(f_{b_i}|e_i) \big)$, where the two products over i have been merged into a single product over i. $p(e_i|e_{i-1}, e_{i-2})$ is the trigram language model probability. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | All of these systems were run on data that was not seen during their development. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The way we cant distinction between system performance. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 43 80. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | For parsing, this is a mistake, especially in the case of interrogatives. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Making the ten judgements (2 types for 5 systems) takes on average 2 minutes. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Morphological Analyzer: Ideally, we would use an off-the-shelf morphological analyzer for mapping each input token to its possible analyses. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search. |
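The target_text above describes a lookup strategy: run interpolation search while the candidate range is wide, then switch to binary search once it narrows to 4096 or fewer entries. A minimal Python sketch of that hybrid follows; the 4096 threshold is the one quoted in the row, but everything else (the function name, the clamping) is illustrative and not KenLM's C++ implementation.

```python
import bisect

def hybrid_search(arr, target, switch_at=4096):
    """Locate target in sorted numeric arr; return its index or -1.

    Interpolation search while the range exceeds switch_at entries,
    then binary search on the narrowed range.
    """
    lo, hi = 0, len(arr) - 1
    while hi - lo + 1 > switch_at:
        if arr[hi] == arr[lo]:
            break  # flat range: interpolation cannot make progress
        # Guess the position from the key's value, assuming roughly
        # uniform spacing, then clamp the guess into [lo, hi].
        mid = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        mid = max(lo, min(mid, hi))
        if arr[mid] < target:
            lo = mid + 1
        elif arr[mid] > target:
            hi = mid - 1
        else:
            return mid
    # Narrow range: fall back to binary search.
    i = bisect.bisect_left(arr, target, lo, hi + 1)
    return i if i <= hi and arr[i] == target else -1
```

Interpolation search needs uniformly distributed numeric keys to pay off (as hashed n-gram keys roughly are); the binary-search fallback bounds the worst case on the small final range.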
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | P St = n. β T VARIABLES Ï Y W : Word types (W1 ,. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Named Entity Recognition: A Maximum Entropy Approach Using Global Information |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For each caseframe, BABAR collects the semantic classes associated with the head nouns of NPs that were extracted by the caseframe. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Also, the argument has been made that machine translation performance should be evaluated via task-based evaluation metrics, i.e. how much it assists performing a useful task, such as supporting human translators or aiding the analysis of texts. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | In IE, creating the patterns which express the requested scenario, e.g. "management succession" or "corporate merger and acquisition", is regarded as the hardest task. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | We found that contextual role knowledge was more beneficial for pronouns than for definite noun phrases. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Several extensions of AdaBoost for multiclass problems have been suggested (Freund and Schapire 97; Schapire and Singer 98). |
A beam search concept is applied as in speech recognition. | 0 | Note that in line 4 the last visited position for the successor hypothesis must be m. Otherwise, there will be four uncovered positions for the predecessor hypothesis, violating the restriction. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | The problem is a binary classification problem. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | This alters generation of T as follows: n P (T |Ï) = n P (Ti|Ï) i=1 Note that this distribution captures the frequency of a tag across word types, as opposed to tokens. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Training and testing is based on the Europarl corpus. |
A beam search concept is applied as in speech recognition. | 0 | the number of permutations carried out for the word reordering is given. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Vocabulary lookup is a hash table mapping from word to vocabulary index. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on an average. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | The tool we use is MMAX8, which has been specifically designed for marking co-reference. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The structure uses linear probing hash tables and is designed for speed. |
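The row above refers to a structure built on linear probing hash tables. A toy Python sketch of the probing scheme follows; it is illustrative only (the actual PROBING structure is a fixed-size C++ table over 64-bit hashes), the class name is invented, and it never resizes, so the load factor must be kept low.

```python
class LinearProbingTable:
    """Minimal open-addressing hash table with linear probing."""

    def __init__(self, capacity=16):
        # Capacity must be a power of two so probes wrap with a mask.
        self._keys = [None] * capacity
        self._values = [None] * capacity
        self._mask = capacity - 1

    def _probe(self, key):
        # Walk forward from the hashed slot until we hit the key or an
        # empty slot -- the defining step of linear probing. Assumes
        # the table never fills (no resizing in this sketch).
        i = hash(key) & self._mask
        while self._keys[i] is not None and self._keys[i] != key:
            i = (i + 1) & self._mask
        return i

    def put(self, key, value):
        i = self._probe(key)
        self._keys[i] = key
        self._values[i] = value

    def get(self, key, default=None):
        i = self._probe(key)
        return self._values[i] if self._keys[i] == key else default
```

Linear probing trades a few extra comparisons for cache-friendly sequential memory access, which is why it suits speed-critical lookups.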
Here we present two algorithms. | 0 | AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The theory has also been validated empirically. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | In this paper we study the problem of using a parallel corpus from a background domain (OUT) to improve performance on a target domain (IN) for which a smaller amount of parallel training material—though adequate for reasonable performance—is also available. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 6 Our knowledge sources return some sort of probability estimate, although in some cases this estimate is not especially well-principled (e.g., the Recency KS). |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | For the seen word ir, 'gen erals,' there is an ε:NC transduction from to the node preceding ir,; this arc has cost cost( f,) - cost(unseen(f,)), so that the cost of the whole path is the desired cost( f,). |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | N, portion of examples on which both classifiers give a label rather than abstaining), and the proportion of these examples on which the two classifiers agree. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The latter arcs correspond to OOV words in English. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 99.25% of these cases. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | To initialize the graph we tag the English side of the parallel text using a supervised model. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Table 2 shows single-threaded results, mostly for comparison to IRSTLM, and Table 3 shows multi-threaded results. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Consider first the examples in (2). |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff3. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | constitute names, since we have only their segmentation, not the actual classification of the segmented words. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | We then evaluate the approach in two steps. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | [Figure 1: Reordering for the German verb group; word alignment of the German "In diesem Fall kann mein Kollege Sie am vierten Mai nicht besuchen" with the English "In this case my colleague can not visit you on the fourth of May".] |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | In this paper we have argued that Chinese word segmentation can be modeled ef fectively using weighted finite-state transducers. |
Here both parametric and non-parametric models are explored. | 0 | Furthermore, we know one of the original parses will be the hypothesized parse, so the direct method of determining which one is best is to compute the probability of each of the candidate parses using the probabilistic model we developed in Section 2.1. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The edge from the root to the subtree for the derivation of γi is labeled by the address ni. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | On average, 6 reference translations per automatic translation are available. |
This assumption, however, is not inherent to type-based tagging models. | 0 | On several languages, we report performance exceeding that of state-of-the art systems. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | By definition, each existential NP uniquely specifies an object or concept, so we can infer that all instances of the same existential NP are coreferent (e.g., "the FBI" always refers to the same entity). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Figure 5: An example of affixation: the plural affix. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 3.1 Gross Statistics. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 5 "Underspecified Rhetorical Markup Language" 6 This confirms the figure given by (Schauer, Hahn. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | An alternate approximation to (8) would be to let w,\(s, t) directly approximate pˆI(s, t). |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code/ with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The traveling salesman problem is an optimization problem which is defined as follows: given a set of cities S = s1, ..., sn and, for each pair of cities si, sj, the cost dij > 0 for traveling from city si to city sj, we are looking for the shortest tour visiting all cities exactly once while starting and ending in city s1. |
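The row above defines the traveling salesman problem on which the reordering search is modeled. A compact Held-Karp dynamic program for exactly that definition follows; it is a generic sketch of the classic DP (exponential in the number of cities), not the paper's beam-search decoder, and the function name is illustrative.

```python
from itertools import combinations

def tsp_cost(d):
    """Held-Karp DP: cost of the shortest tour visiting every city
    exactly once, starting and ending at city 0.

    d[i][j] is the cost of traveling from city i to city j.
    """
    n = len(d)
    # best[(S, j)]: cheapest path from city 0 through exactly the
    # cities in frozenset S, ending at j (j in S, 0 not in S).
    best = {(frozenset([j]), j): d[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            fs = frozenset(S)
            for j in S:
                best[(fs, j)] = min(best[(fs - {j}, k)] + d[k][j]
                                    for k in S if k != j)
    # Close the tour by returning to city 0.
    full = frozenset(range(1, n))
    return min(best[(full, j)] + d[j][0] for j in range(1, n))
```

The coverage set C of visited source positions in the paper's search plays the same role as the subset S here, which is what makes the DP-based TSP solution transferable to word reordering.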
Their results show that their high performance NER use less training data than other systems. | 0 | of Articles No. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | It may be more realistic to replace the second criteria with a softer one, for example (Blum and Mitchell 98) suggest the alternative Alternatively, if Ii and 12 are probabilistic learners, it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learners. |
The texts were annotated with the RSTtool. | 0 | And indeed, converging on annotation guidelines is even more difficult than it is with co-reference. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | In turn we use two sorts of heuristics, orthogonal to one another, to prune segmentation possibilities based on lexical and grammatical constraints. |
This assumption, however, is not inherent to type-based tagging models. | 0 | The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Two strings are letter equivalent if they contain equal number of occurrences of each terminal symbol, and two languages are letter equivalent if every string in one language is letter equivalent to a string in the other language and vice-versa. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The probability distribution that satisfies the above property is the one with the highest entropy. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | We check, how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: If p(0..k; n, p) < 0.05, or p(0..k; n, p) > 0.95 then we have a statistically significant difference between the systems. |
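The row above sketches a sign test: if the two systems were truly equal, each of the n = 100 pairwise comparisons is a fair coin flip (p = 0.5), so the tail probability p(0..k; n, p) is a cumulative binomial. A small Python sketch of that check follows; the function names are illustrative.

```python
from math import comb

def binom_cdf(k, n, p=0.5):
    """P(X <= k) for X ~ Binomial(n, p), i.e. p(0..k; n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def significantly_different(k, n, alpha=0.05):
    """Apply the row's criterion: flag a significant difference when
    k wins out of n comparisons is an improbably low or improbably
    high count under the equal-systems null (p = 0.5)."""
    c = binom_cdf(k, n)
    return c < alpha or c > 1 - alpha
```

With k = 20 wins out of n = 100, the lower tail is far below 0.05, so the two systems would be judged significantly different, exactly as the row's criterion states.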
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For example, the passive voice pattern "<subject> were kidnapped" and the active voice pattern "kidnapped <direct object>" are merged into a single normalized pattern "kidnapped <patient>". For the sake of simplicity, we will refer to these normalized extraction patterns as caseframes. These caseframes can capture two types of contextual role information: (1) thematic roles corresponding to events (e.g., "<agent> kidnapped" or "kidnapped <patient>"), and (2) predicate-argument relations associated with both verbs and nouns (e.g., "kidnapped for <np>" or "vehicle with <np>"). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Thus corresponding pseudo-labels for instances on which gj abstain are set to zero and these instances do not contribute to the objective function. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Nonetheless, the results of the comparison with human judges demonstrates that there is mileage being gained by incorporating models of these types of words. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp. Parser and Grammar: We used BitPar (Schmid, 2004), an efficient general purpose parser, together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis. |
A beam search concept is applied as in speech recognition. | 0 | input: source string f_1 ... f_j ... f_J; initialization; for each cardinality c = 1, 2, ..., J do: for each pair (C, j), where j ∈ C and |C| = c, do: for each target word e ∈ E: Q_{e'}(e, C, j) = p(f_j|e) · max_{δ, e'', j' ∈ C\{j}} { p(j|j', J) · p(δ) · p_δ(e|e', e'') · Q_{e''}(e', C\{j}, j') }, over the words f_j in the input string of length J. For the final translation each source position is considered exactly once. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | So, 1: f, xue2shengl+men0 (student+PL) 'students' occurs and we estimate its cost at 11.43; similarly we estimate the cost of f, jiang4+men0 (general+PL) 'generals' (as in 'J' f, xiao3jiang4+men0 'little generals'), at 15.02. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | Decreasing the threshold results in higher mWER due to additional search errors. |