source_text (stringlengths 27–368) | label (int64: 0 or 1) | target_text (stringlengths 1–5.38k) |
---|---|---|
In this paper, the authors describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The above auxiliary quantity satisfies the following recursive DP equation: Qe′(e, C, j) = … (reordering states: Initial, Skip, Verb, Final). |
In this paper, the authors describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Word Re-ordering and DP-based Search in Statistical Machine Translation |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. |
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of their derivation process as reflected by properties of their trees; they found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Like CFG's, the choice is predetermined by a finite number of rules encapsulated in the grammar. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The composition operations are mapped onto operations that use concatenation to define the substrings spanned by the resulting structures. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Location list is processed into a list of unigrams and bigrams (e.g., New York). |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | This best instance-weighting model beats the equivalent model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 00 76. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The model described here thus demonstrates great potential for use in widespread applications. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | Let s = a + b. |
A beam search concept is applied as in speech recognition. | 0 | 3) A tight coupling with the speech recognizer output. |
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of their derivation process as reflected by properties of their trees; they found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | The linear and nonerasing assumptions about the operations discussed in Section 4.1 require that each zi and yk is used exactly once to define the strings z1,1, …, z1,3. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Personal names such as 周恩来 zhou1 en1-lai2 'Zhou Enlai.' |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | The system described in this paper is similar to the MENE system of (Borthwick, 1999). |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Here we use the Good-Turing estimate (Baayen 1989; Church and Gale 1991), whereby the aggregate probability of previously unseen instances of a construction is estimated as n1/N, where N is the total number of observed tokens and n1 is the number of types observed only once. |
Their results show that their high-performance NER uses less training data than other systems. | 0 | Borthwick (1999) reported using dictionaries of person first names, corporate names and suffixes, colleges and universities, dates and times, state abbreviations, and world regions. (Footnotes: 2. MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu. 3. Training data for IdentiFinder is actually given in words (i.e., 650K & 790K words), rather than tokens. Table 6: Comparison of results for MUC7.) |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | TTS systems in general need to do more than simply compute the. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | For t = 1, …, T and for j = 1, 2: where q̃i = exp(−ŷi gj(xj,i)). In practice, this greedy approach almost always results in an overall decrease in the value of Zco. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Without HSPELL-pruning, our simpler grammars are somewhat lagging behind, but as the grammars improve the gap is bridged. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Methods that allow multiple segmentations must provide criteria for choosing the best segmentation. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Our approach uses word-to-word dependencies between source and target words. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | For each pair of judges, consider one judge as the standard. |
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of their derivation process as reflected by properties of their trees; they found that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | The tree in Figure 9a can be denoted by t1 i223, where tree substitution is used instead of concatenation. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Under certain conditions the constituent voting and naïve Bayes constituent combination techniques are guaranteed to produce sets of constituents with no crossing brackets. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | If two systems’ scores are close, this may simply be a random effect in the test data. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination.1 Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus loglikelihood, which is roughly speaking the training criterion used by the LM and TM themselves. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | VARIABLES: ψ, Y; W: word types (W1, …). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Next, we describe four contextual role knowledge sources that are created from the training examples and the caseframes. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Among these 32 sets, we found the following pairs of sets which have two or more links. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | A list of words occurring more than 10 times in the training data is also collected (commonWords). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | In Semitic languages the situation is very different. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 4 65.9 48. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | In MUC6, the best result is achieved by SRA (Krupka, 1995). |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Compared with SRILM, IRSTLM adds several features: lower memory consumption, a binary file format with memory mapping, caching to increase speed, and quantization. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Note that hanzi that are not grouped into dictionary words (and are not identified as single hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Further, the probing hash table does only one random lookup per query, explaining why it is faster on large data. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Table 4 Differences in performance between our system and Wang, Li, and Chang (1992). |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In the following, we use the notation wi →r wj to mean that (wi, r, wj) ∈ A; we also use wi → wj to denote an arc with unspecified label, and wi →∗ wj for the reflexive and transitive closure of the (unlabeled) arc relation. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | When the connective is an adverbial, there is much less clarity as to the range of the spans. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The predominant focus on building systems that translate into English has so far ignored the difficult issues of generating rich morphology, which may not be determined solely by local context. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation. | 0 | Maamouri et al. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left. |
A beam search concept is applied as in speech recognition. | 0 | The goal of machine translation is the translation of a text given in some source language into a target language. |
All the texts were annotated by two people. | 0 | The annotator can then “click away” those words that are here not used as connectives (such as the conjunction und (‘and’) used in lists, or many adverbials that are ambiguous between connective and discourse particle). |
This assumption, however, is not inherent to type-based tagging models. | 0 | 5 We choose these two metrics over the Variation Information measure due to the deficiencies discussed in Gao and Johnson (2008). |
The texts were annotated with the RSTtool. | 0 | • Some tools would allow for the desired annotation mode, but are so complicated (they can be used for many other purposes as well) that annotators take a long time getting used to them. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | First, we describe how the caseframes are represented and learned. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Finally, we add “DT” to the tags for definite nouns and adjectives (Kulick et al., 2006). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | For. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Chris Dyer integrated the code into cdec. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Then, token-level HMM emission parameters are drawn conditioned on these assignments such that each word is only allowed probability mass on a single assigned tag. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | Hyperparameter settings are sorted according to the median one-to-one metric over runs. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | We have provided methods for handling certain classes of unknown words, and models for other classes could be provided, as we have noted. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | We call these N − 1 words state. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found. |
All the texts were annotated by two people. | 0 | The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The authors provided us with a ratio between TPT and SRI under different conditions (a: lossy compression with the same weights; b: lossy compression with retuned weights). These conditions make the value appropriate for estimating repeated run times, such as in parameter tuning. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | What both of these approaches presume is that there is a single correct segmentation for a sentence, against which an automatic algorithm can be compared. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Given the limited number of judgements we received, we did not try to evaluate this. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Scoping filters the candidate if it is outside the anaphor's scope. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | In this case, this knowledge source reports that the candidate is not a viable antecedent for the anaphor. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TTS). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time. |
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Each extraction pattern represents a linguistic expression and a syntactic position indicating where a role filler can be found. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | A novel element of our model is the ability to capture type-level tag frequencies. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The interdependence between fb or 1/!i, and 5:2 is not captured by our model, but this could easily be remedied. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | An example is in (i), where the system fails to group 林洋港 lin2yang2gang3 as a name, because all three hanzi can in principle be separate words (林 lin2 'wood'; 洋 yang2 'ocean'; 港 gang3 'harbor'). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated. |
Here both parametric and non-parametric models are explored. | 0 | The set is then compared with the set generated from the Penn Treebank parse to determine the precision and recall. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | A totally non-stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We can show that languages generated by LCFRS's are semilinear as long as the composition operation does not remove any terminal symbols from its arguments. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | What both of these approaches presume is that there is a single correct segmentation for a sentence, against which an automatic algorithm can be compared. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Both BBN and NYU have tagged their own data to supplement the official training data. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | A few annotators suggested to break up long sentences into clauses and evaluate these separately. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We will refer to the semantic classes that co-occur with a caseframe as the semantic expectations of the caseframe. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | The basic strategy is, for a given pair of entity types, to start with some examples, like several famous book title and author pairs; find expressions which contain those names; and then, using the found expressions, find more author and book title pairs. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The details are given in (Tillmann, 2000). |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | There are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent set. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | For RandLM and IRSTLM, the effect of caching can be seen on speed and memory usage. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | It is important to bear in mind, though, that this is not an inherent limitation of the model. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The approach assumes that the word reordering is restricted to a few positions in the source sentence. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. … corpus, from which also the in-domain test set is taken. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The first concerns how to deal with ambiguities in segmentation. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | During coreference resolution, BABAR checks (1) whether the anaphor is among the lexical expectations for the caseframe that extracts the candidate antecedent, and (2) whether the candidate is among the lexical expectations for the caseframe that extracts the anaphor. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | One striking example is the error reduction for Spanish, which reduces error by 36.5% and 24.7% for the best and median settings respectively. |
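The rows above follow the three-column schema from the header: a source_text string, an integer label, and a target_text string. As a minimal sketch of how such a dataset could be loaded and inspected, assuming it is hosted on the Hugging Face Hub (the repo id `org/paper-summary-pairs` is a placeholder, and the reading of label 0 as a non-matching pair is an assumption based on the sampled rows):

```python
# Minimal sketch, not the dataset's official loader.
# Assumptions: placeholder repo id, and label == 0 marks a
# non-matching source/target pair (all rows sampled above are 0).
from collections import Counter

from datasets import load_dataset

ds = load_dataset("org/paper-summary-pairs", split="train")  # placeholder repo id

# Schema check against the table header:
# source_text (string), label (int64), target_text (string).
print(ds.features)

# Label distribution over the split.
print(Counter(ds["label"]))

# Sanity-check the documented source_text length range (27-368 characters).
lengths = [len(s) for s in ds["source_text"]]
print(min(lengths), max(lengths))

# Keep only rows labeled 1 (assumed matching pairs) for inspection.
positives = ds.filter(lambda row: row["label"] == 1)
print(positives[0] if len(positives) else "no rows with label 1")
```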