source_text (string, length 27–368) | label (int64, 0 or 1) | target_text (string, length 1–5.38k)
---|---|---|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 36 79. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | However their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our case-frame network. (Niyu et al., 1998) used unsupervised learning to acquire gender, number, and animacy information from resolutions produced by a statistical pronoun resolver. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Entries for 2 < n < N store a vocabulary identifier, probability, backoff, and an index into the n + 1-gram table. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | On the MUC6 data, Bikel et al. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. |
All the texts were annotated by two people. | 0 | (Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung ("communicative-weight assignment"). |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Both parameters depend on a single hyperparameter α. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | These knowledge sources determine whether the contexts surrounding an anaphor and antecedent are compatible. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Tables 4 and 5 also show that putting all of the contextual role KSs in play at the same time produces the greatest performance gain. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | On the English side, however, the vertices (denoted by Ve) correspond to word types. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 72 77. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Timing is based on plentiful memory. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | 9 65.5 46. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction). |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | All annotations are done with specific tools and in XML; each layer has its own DTD. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | so that 'door' would be and in this case the hanzi 7C, does not represent a syllable. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Specifically, the lexicon is generated as: P(T, W|ψ) = P(T)P(W|T). Word Type Features (FEATS): Past unsupervised POS work has derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al., 2010). |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | The Danish Dependency Treebank (DDT) comprises about 100K words of text selected from the Danish PAROLE corpus, with annotation of primary and secondary dependencies (Kromann, 2003). |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | This alters generation of T as follows: P(T|ψ) = ∏_{i=1}^{n} P(T_i|ψ). Note that this distribution captures the frequency of a tag across word types, as opposed to tokens. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). For the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems). |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back. | 0 | Since the inclusion of out-of-domain test data was a very late decision, the participants were not informed of this. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | This decreases the statistical significance of our results compared to those studies. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | We check how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: if p(0..k; n, p) < 0.05 or p(0..k; n, p) > 0.95, then we have a statistically significant difference between the systems. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Input: Ja , wunderbar . Können wir machen . MonS: Yes, wonderful. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | This logic applies recursively: if w_{f+1}^{n} similarly does not extend and has zero log backoff, it too should be omitted, terminating with a possibly empty context. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Many statistical or machine-learning approaches for natural language problems require a relatively large amount of supervision, in the form of labeled training examples. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P(T|ψ). |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The combining algorithm is presented with the candidate parses and asked to choose which one is best. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Evaluation of Morphological Analysis. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | For our demonstration system, we typically use the pruning threshold t0 = 5.0 to speed up the search by a factor of 5 while allowing for a small degradation in translation accuracy. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | These performance gains transfer to improved system runtime performance; though we focused on Moses, our code is the best lossless option with cdec and Joshua. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | 92 77. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | For the error counts, a range from 0.0 to 1.0 is used. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The ATB segmentation scheme is one of many alternatives. |
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs. | 0 | The original OUT counts c_o(s, t) are weighted by a logistic function w_λ(s, t): To motivate weighting joint OUT counts as in (6), we begin with the "ideal" objective for setting multinomial phrase probabilities θ = {p(s|t), ∀st}, which is the likelihood with respect to the true IN distribution p_I(s, t). |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Particular instances of relations are associated with goodness scores. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | The data is sorted based on the frequency of the context ("a unit of" appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. "NBC" and "General Electric Co." appeared 10 times with the context "a unit of"). |
There are clustering approaches that assign a single POS tag to each word type. | 0 | This design does not guarantee "structural zeros," but biases towards sparsity. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | In order to observe the similarity between these constrained systems, it is crucial to abstract away from the details of the structures and operations used by the system. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Table 4 Differences in performance between our system and Wang, Li, and Chang (1992). |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | As lower frequency examples include noise, we set a threshold that an NE category pair should appear at least 5 times to be considered and an NE instance pair should appear at least twice to be considered. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: 'j";gil, xiao3 [lao3 shu3] 'little rat,' becomes xiao3 { lao2shu3 ], rather than xiao2 { lao2shu3 ], because the rule first applies within the word lao3shu3 'rat,' blocking its phrasal application. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements x_i. The extent of the confidence interval [x̄ − d, x̄ + d] can be computed by d = 1.96 · s/√n (6). Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Although this feature helps, we encounter one consequence of variable word order. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | On average, 6 reference translations per automatic translation are available. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | RandLM’s stupid backoff variant stores counts instead of probabilities and backoffs. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | Case and Zone of and : Similarly, if (or ) is initCaps, a feature (initCaps, zone) (or (initCaps, zone) ) is set to 1, etc. Token Information: This group consists of 10 features based on the string , as listed in Table 1. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The present proposal falls into the last group. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Our analysis identifies three key factors driving our performance gain: 1) selecting a model structure which directly encodes tag sparsity, 2) a type-level prior on tag assignments, and 3) a straightforward na¨ıveBayes approach to incorporate features. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | More complex approaches such as the relaxation technique have been applied to this problem (Fan and Tsai 1988). |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | We also see that the increase in the size of the label sets for Head and Head+Path is far below the theoretical upper bounds given in Table 1. |
Here we present two algorithms. | 0 | It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president). |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | conceptual relationship in the discourse. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | If the key distribution’s range is also known (i.e. vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | • We evaluated translation from English, in addition to into English. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | was done by the participants. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Part of the gap between resident and virtual memory is due to the time at which data was collected. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences of out-of-domain test data. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | of Articles No. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | We check how likely only up to k = 20 better scores out of n = 100 would have been generated by two equal systems, using the binomial distribution: if p(0..k; n, p) < 0.05 or p(0..k; n, p) > 0.95, then we have a statistically significant difference between the systems. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | The most frequent NE category pairs are "Person - Person" (209,236), followed by "Country - Country" (95,123) and "Person - Country" (75,509). |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | The resulting model is compact, efficiently learnable and linguistically expressive. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | If there is a frequent multi-word sequence in a domain, we could use it as a keyword candidate. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | shows some keywords with their scores. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | to represent the ith word type emitted by the HMM: P(T_i, t^(i) | T^(−i), W, t^(−i), w, α, β) ∝ P(T_i | T^(−i), β) P(t^(i) | T_i, t^(−i), w, α). All terms are Dirichlet distributions and parameters can be analytically computed from counts in t^(−i), where T^(−i) denotes all type-level tag assignments except T_i and t^(−i) denotes all token-level tags except t^(i) (Johnson, 2007). |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | As you can see in the figure, the accuracy for the domain is quite high except for the "agree" set, which contains various expressions representing different relationships for an IE application. |
All the texts were annotated by two people. | 0 | This was also inspired by the work on the Penn Discourse Treebank, which follows similar goals for English. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Making the reasonable assumption that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | It may be more realistic to replace the second criteria with a softer one, for example (Blum and Mitchell 98) suggest the alternative Alternatively, if Ii and 12 are probabilistic learners, it might make sense to encode the second constraint as one of minimizing some measure of the distance between the distributions given by the two learners. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 5 67.3 55. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | (2006) developed a technique for splitting and chunking long sentences. |
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Lexical rules are estimated in a similar manner. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n22n), using the algorithm by Held and Karp. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Given a set of n sentences, we can compute the sample mean x̄ and sample variance s² of the individual sentence judgements x_i. The extent of the confidence interval [x̄ − d, x̄ + d] can be computed by d = 1.96 · s/√n (6). Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | Global features are extracted from other occurrences of the same token in the whole document. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | (Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung ("communicative-weight assignment"). |
Their results show that their high performance NER use less training data than other systems. | 0 | Ltd., then organization will be more probable. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it can not take properly into account the word reordering due to the German verbgroup. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Therefore, we want state to encode the minimum amount of information necessary to properly compute language model scores, so that the decoder will be faster and make fewer search errors. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | This group contains a large number of features (one for each token string present in the training data). |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | The edge weights between the foreign language trigrams are computed using a co-occurrence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2). |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Because all English vertices are going to be labeled, we do not need to disambiguate them by embedding them in trigrams. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The TRIE model continues to use the least memory of ing (-P) with MAP POPULATE, the default. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | We observe similar trends when using another measure, type-level accuracy (defined as the fraction of words correctly assigned their majority tag). [Garbled table residue omitted; recoverable caption: Table 4: Comparison of our method (FEATS) to state-of-the-art methods.] |
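The rows above all follow the same three-column schema (source_text, label, target_text), rendered as pipe-delimited lines. A minimal sketch of parsing such rows into typed records follows; the `Pair` dataclass and `parse_rows` helper are illustrative assumptions, not part of the dataset itself:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Pair:
    source_text: str   # summary sentence (length 27-368 per the schema)
    label: int         # binary label, 0 or 1
    target_text: str   # candidate paper sentence (length 1-5.38k)


def parse_rows(lines: List[str]) -> List[Pair]:
    """Parse 'source | label | target |' lines as rendered in the table above.

    Hypothetical helper: splits on the pipe delimiter, strips whitespace,
    and skips header/separator rows whose label field is not numeric.
    """
    pairs = []
    for line in lines:
        fields = [f.strip() for f in line.split("|")]
        if len(fields) < 3:
            continue  # not a data row
        source, label, target = fields[0], fields[1], fields[2]
        if not label.isdigit():
            continue  # header or '---|---|---|' separator
        pairs.append(Pair(source, int(label), target))
    return pairs


rows = [
    "Here we present two algorithms. | 0 | It is a sequence of proper nouns within an NP. |",
    "---|---|---|",
]
pairs = parse_rows(rows)
print(len(pairs), pairs[0].label)
```

Splitting on a bare pipe is adequate only because none of the cell texts above contain a literal `|`; a real loader for this dataset would read the published splits directly rather than re-parse the rendered table.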