source_text (string, 27–368 chars) | label (int64, 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Log-linear combination (loglin) improves on this in all cases, and also beats the pure IN system. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Once we figure out the important word (e.g. keyword), we believe we can capture the meaning of the phrase by the keyword. |
There is no global pruning. | 0 | Additionally, for a given coverage set, at most 250 different hypotheses are kept during the search process, and the number of different words to be hypothesized by a source word is limited. |
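The pruning described in this row — keeping at most 250 hypotheses per coverage set, with no global beam — can be sketched as histogram pruning. The 250-hypothesis limit comes from the text; the dictionary-based hypothesis representation and names are purely illustrative:

```python
from collections import defaultdict

MAX_HYPS = 250  # per-coverage-set limit mentioned in the text


def prune(hypotheses, max_hyps=MAX_HYPS):
    """Group hypotheses by coverage set and keep only the best-scoring
    max_hyps in each group (histogram pruning; no global pruning)."""
    by_coverage = defaultdict(list)
    for hyp in hypotheses:
        by_coverage[hyp["coverage"]].append(hyp)
    kept = []
    for group in by_coverage.values():
        group.sort(key=lambda h: h["score"], reverse=True)
        kept.extend(group[:max_hyps])
    return kept


# Toy demo: 300 hypotheses sharing one coverage set; only 250 survive.
hyps = [{"coverage": frozenset({0, 1}), "score": float(s)} for s in range(300)]
kept = prune(hyps)
```

The source-word hypothesis limit mentioned in the row would be enforced analogously, as a cap applied when expanding each hypothesis rather than after grouping.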
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | The last four columns in Table 3 show the distribution of nonprojective arcs with respect to the number of lifts required. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | The second setting uses the news-related subcorpora for the NIST09 MT Chinese to English evaluation as IN, and the remaining NIST parallel Chinese/English corpora (UN, Hong Kong Laws, and Hong Kong Hansard) as OUT. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Each trie node is individually allocated and full 64-bit pointers are used to find them, wasting memory. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | This pumping lemma states that if there is a tree t = t1t2t3t4t5 generated by a TAG G such that its height is more than a predetermined bound k, then all trees of the form t1 t2^i t3 t4^i t5 for each i > 0 will also be generated by G. Similarly, for tree sets with independent paths and more complex path sets, tree pumping lemmas can be given. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | (1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Their default variant implements a forward trie, in which words are looked up in their natural left-to-right order. |
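A linear probing hash table of the kind this row attributes to the PROBING data structure can be sketched in a few lines. This toy version (class and method names are my own, not from the package) stores key–value pairs in a fixed-size open-addressing array and scans forward to the next bucket on collision:

```python
class ProbingTable:
    """Toy open-addressing hash table with linear probing, in the spirit of
    the PROBING structure described above. Assumes the table is never full."""

    def __init__(self, capacity=16):
        self.capacity = capacity
        self.keys = [None] * capacity
        self.values = [None] * capacity

    def _slot(self, key):
        i = hash(key) % self.capacity
        # Linear probe: step to the next bucket until we hit the key
        # or an empty slot.
        while self.keys[i] is not None and self.keys[i] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        i = self._slot(key)
        self.keys[i] = key
        self.values[i] = value

    def get(self, key, default=None):
        i = self._slot(key)
        return self.values[i] if self.keys[i] == key else default
```

Lookups touch consecutive array slots, which is cache-friendly — one plausible reason such a layout favors speed over the pointer-chasing trie described in the next rows.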
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | The PCFG was trained from the same sections of the Penn Treebank as the other three parsers. |
The corpus was annotated with different linguistic information. | 0 | The price shocked her.), or same-kind (e.g., Her health insurance paid for the hospital fees, but the automobile insurance did not cover the repair.). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Our code has been publicly available and integrated into Moses since October 2010. |
Here we present two algorithms. | 0 | It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, "seed" set of rules. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | It is a relatively frequent word in the domain, but it can be used in different extraction scenarios. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | However, reads in the TRIE data structure are more expensive due to bit-level packing, so we found that it is faster to use interpolation search the entire time. |
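Interpolation search, which this row reports to be faster than binary search over the bit-packed TRIE despite the more expensive reads, estimates where a key should sit from the key distribution instead of always splitting the range in half. A generic sketch over a sorted integer array (not the actual TRIE layout):

```python
def interpolation_search(arr, target):
    """Find target in a sorted list of ints; return its index or -1.
    The probe position is interpolated from the endpoint values, so for
    roughly uniform keys the expected cost is O(log log n)."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[hi] == arr[lo]:
            pos = lo  # all remaining keys are equal; avoid division by zero
        else:
            # Linear interpolation between the endpoints.
            pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1
```

For hashed word IDs, which are close to uniformly distributed, the interpolated guess usually lands near the target, so fewer (albeit costlier) probes are needed.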
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | token-level HMM to reflect lexicon sparsity. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | In Semitic languages the situation is very different. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | and f,. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats. |
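The mantissa rounding this row attributes to BerkeleyLM can be illustrated on IEEE 754 single-precision values: mask off the low mantissa bits, then store each distinct rounded value once and keep only indices into that table. The helper below is a sketch under those assumptions, not BerkeleyLM's actual code:

```python
import struct


def round_mantissa(x, bits=12):
    """Keep only the top `bits` bits of a float32 mantissa (23 bits total),
    clearing the rest. Sign and exponent are untouched."""
    (i,) = struct.unpack("<I", struct.pack("<f", x))
    mask = ~((1 << (23 - bits)) - 1) & 0xFFFFFFFF
    (y,) = struct.unpack("<f", struct.pack("<I", i & mask))
    return y


# Nearby probabilities collapse onto far fewer unique rounded values,
# which can then be stored once and referenced by small indices.
probs = [-1.23456, -1.23457, -2.5]
rounded = [round_mantissa(p) for p in probs]
table = sorted(set(rounded))            # unique rounded floats, stored once
indices = [table.index(r) for r in rounded]  # what each n-gram would store
```

With 12 mantissa bits the relative quantization error is bounded by about 2^-12, which is typically negligible next to language-model estimation error.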
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | For example: McCann initiated a new global system. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is plus quantization tables, if used. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | In a grammar which generates independent paths the derivations of sibling constituents cannot share an unbounded amount of information. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Here we push the single-framework conjecture across the board and present a single model that performs morphological segmentation and syntactic disambiguation in a fully generative framework. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | This is less effective in our setting, where IN and OUT are disparate. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples. |
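The weighted combination of weak classifiers described in this row is the core of AdaBoost. A minimal sketch for binary labels in {-1, +1}; the example data and the pool of threshold learners are invented for illustration:

```python
import math


def adaboost(examples, weak_learners, rounds=3):
    """Tiny AdaBoost sketch: examples are (x, y) with y in {-1, +1}; each
    weak learner maps x -> {-1, +1}. Returns (alpha, h) pairs whose
    weighted vote bounds the training error."""
    n = len(examples)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        # Pick the weak learner with the lowest weighted training error.
        errs = [sum(wi for wi, (x, y) in zip(w, examples) if h(x) != y)
                for h in weak_learners]
        best = min(range(len(weak_learners)), key=lambda i: errs[i])
        err = max(errs[best], 1e-10)
        if err >= 0.5:
            break  # no weak learner better than chance
        alpha = 0.5 * math.log((1 - err) / err)
        h = weak_learners[best]
        ensemble.append((alpha, h))
        # Reweight: misclassified examples gain weight, then renormalize.
        w = [wi * math.exp(-alpha * y * h(x)) for wi, (x, y) in zip(w, examples)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble


def predict(ensemble, x):
    s = sum(alpha * h(x) for alpha, h in ensemble)
    return 1 if s >= 0 else -1


# Toy demo: separable 1-D data and two threshold stumps.
examples = [(0.0, -1), (1.0, -1), (2.0, 1), (3.0, 1)]
learners = [lambda x: 1 if x >= 2 else -1, lambda x: 1 if x >= 1 else -1]
ensemble = adaboost(examples, learners)
```

The exponential reweighting is what makes the final vote minimize a bound on training error, as the row states.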
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Twentieth-century linguistic work on Chinese (Chao 1968; Li and Thompson 1981; Tang 1988, 1989, inter alia) has revealed the incorrectness of this traditional view. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Table 1 shows four words whose unvocalized surface forms are indistinguishable. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Find keywords for each NE pair: When we look at the contexts for each domain, we notice that there are one or a few important words which indicate the relation between the NEs (for example, the word "unit" for the phrase "a unit of"). |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007). |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The problem is to store these two values for a large and sparse set of n-grams in a way that makes queries efficient. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We use the default inference parameters. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak-hypothesis outputs a confidence vector with one confidence value for each possible label. |
Explanations suggested for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation. | 0 | As a result, Habash et al. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | The second stage links sets which involve the same pairs of individual NEs. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | They return a value in the range [0,1], where 0 indicates neutrality and 1 indicates the strongest belief that the candidate and anaphor are coreferent. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In this paper, we show how non-projective dependency parsing can be achieved by combining a datadriven projective parser with special graph transformation techniques. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | For example, the two NEs "Eastern Group Plc" and "Hanson Plc" have the following contexts. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | For a language like English, this problem is generally regarded as trivial since words are delimited in English text by whitespace or marks of punctuation. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rule-based commercial system, would give further insight into the relation between automatic and manual evaluation. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Ex: The government said it ... |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Morphological disambiguators that consider a token in context (an utterance) and propose the most likely morphological analysis of an utterance (including segmentation) were presented by Bar-Haim et al. (2005), Adler and Elhadad (2006), Shacham and Wintner (2007), and achieved good results (the best segmentation result so far is around 98%). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Then each arc of D maps either from an element of H to an element of P, or from ε (the empty string) to an element of P. More specifically, each word is represented in the dictionary as a sequence of arcs, starting from the initial state of D and labeled with an element of H×P, which is terminated with a weighted arc labeled with an element of ε×P. The weight represents the estimated cost (negative log probability) of the word. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Alternatively, h can be thought of as defining a decision list of rules x → y ranked by their "strength" h(x, y). |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | IRSTLM and BerkeleyLM use this state function (and a limit of N −1 words), but it is more strict than necessary, so decoders using these packages will miss some recombination opportunities. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Note that these observations consider suffix features, capitalization features, punctuation, and digit features. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | We attempt to formalize this notion in terms of the tree pumping lemma which can be used to show that a tree set does not have dependent paths. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 72 78. |
This assumption, however, is not inherent to type-based tagging models. | 0 | The extent to which this constraint is enforced varies greatly across existing methods. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Compared to last year’s shared task, the participants represent more long-term research efforts. |
The AdaBoost algorithm was developed for supervised learning. | 0 | In the namedentity problem each example is a (spelling,context) pair. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Mohri [1995] shows promise for improving this situation. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Participants and other volunteers contributed about 180 hours of labor in the manual evaluation. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously. |
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 1 61.2 43. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 3 60.7 50. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | We identified three ways that contextual roles can be exploited: (1) by identifying caseframes that co-occur in resolutions, (2) by identifying nouns that co-occur with caseframes and using them to crosscheck anaphor/candidate compatibility, (3) by identifying semantic classes that co-occur with caseframes and using them to crosscheck anaphor/candidate compatibility. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | A very small excerpt from an Italian-English graph is shown in Figure 1. |
Here we present two algorithms. | 0 | AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak-hypothesis outputs a confidence vector with one confidence value for each possible label. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | 2. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | HR0011-06-C-0022. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Statistics are printed before Moses exits and after parts of the decoder have been destroyed. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | This feature ft incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector tx. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Again, we can compute average scores for all systems for the different language pairs (Figure 6). |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | All commentaries have been tagged with part-of-speech information using Brants' TnT tagger and the Stuttgart/Tübingen Tag Set. (Automatic analysis was responsible for this decision.) |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Thus at each iteration the method induces at most n × k rules, where k is the number of possible labels (k = 3 in the experiments in this paper). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 36. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Moving beyond directly related work, major themes in SMT adaptation include the IR (Hildebrand et al., 2005; Lü et al., 2007; Zhao et al., 2004) and mixture (Finch and Sumita, 2008; Foster and Kuhn, 2007; Koehn and Schroeder, 2007; Lü et al., 2007) approaches for LMs and TMs described above, as well as methods for exploiting monolingual in-domain text, typically by translating it automatically and then performing self-training (Bertoldi and Federico, 2009; Ueffing et al., 2007; Schwenk and Senellart, 2009). |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | We also collapse unary chains with identical basic categories like NP → NP. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | Memory-based classifiers for the experiments were created using TiMBL (Daelemans et al., 2003). |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | A final alternate approach would be to combine weighted joint frequencies rather than conditional estimates, i.e., c_I(s, t) + w_λ(s, t) c_O(s, t), suitably normalized. Such an approach could be simulated by a MAP-style combination in which separate θ(t) values were maintained for each t. This would make the model more powerful, but at the cost of having to learn to downweight OUT separately for each t, which we suspect would require more training data for reliable performance. |
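The alternate combination sketched in this row — adding weighted out-of-domain joint counts to in-domain joint counts, then normalizing into a conditional estimate — might look like the following. Function and variable names are mine, not the paper's, and the weight is passed in as an arbitrary callable standing in for the learned instance weights:

```python
def combined_estimate(s, t, c_in, c_out, weight):
    """Conditional estimate p(s|t) from in-domain joint counts c_in and
    out-of-domain joint counts c_out, each a dict keyed by (source, target),
    with c_out downweighted by weight(s, t). Normalizes over all source
    phrases observed with target t in either corpus."""
    sources = {src for (src, tgt) in set(c_in) | set(c_out) if tgt == t}

    def joint(src):
        return c_in.get((src, t), 0.0) + weight(src, t) * c_out.get((src, t), 0.0)

    z = sum(joint(src) for src in sources)
    return joint(s) / z if z > 0 else 0.0
```

The normalization over all sources for a fixed t is exactly what makes a separate per-t downweighting parameter attractive, and also what makes it data-hungry, as the row notes.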
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Other packages walk their respective data structures once to find w_f^n and again to find {b(w_i^{n−1})}_{i=1}^{f−1} if necessary. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | Again, the monotone search performs worst. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | 4.2 Global Features. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | In Section 4, we present the performance measures used and give translation results on the Verbmobil task. |
The corpus was annotated with different linguistic information. | 0 | This paper, however, provides a comprehensive overview of the data collection effort and its current state. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We have shown the efficacy of graph-based label propagation for projecting part-of-speech information across languages. |
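Graph-based label propagation of the kind summarized here can be sketched with a clamped iterative update: seed vertices keep their label distributions while every other vertex repeatedly takes the weighted average of its neighbors' distributions. This is a generic sketch, not the paper's exact objective:

```python
def propagate(graph, labels, iterations=10):
    """Simple label propagation. graph: vertex -> {neighbor: weight};
    labels: seed vertex -> {tag: probability}. Seed vertices are clamped
    to their gold distribution; others converge toward weighted neighbor
    averages. Returns vertex -> {tag: probability}."""
    dist = {v: dict(labels.get(v, {})) for v in graph}
    for _ in range(iterations):
        new = {}
        for v, nbrs in graph.items():
            if v in labels:
                new[v] = dict(labels[v])  # clamp seeds
                continue
            agg = {}
            total = sum(nbrs.values())
            for u, w in nbrs.items():
                for tag, p in dist[u].items():
                    agg[tag] = agg.get(tag, 0.0) + w * p / total
            new[v] = agg
        dist = new
    return dist


# Toy bilingual-style demo: "a" is a labeled (e.g. English) vertex,
# "b" an unlabeled (e.g. foreign trigram) vertex connected to it.
graph = {"a": {"b": 1.0}, "b": {"a": 1.0}}
labels = {"a": {"NOUN": 1.0}}
dist = propagate(graph, labels)
```

In the projection setting described by the row, the propagated distributions would then be thresholded into tag dictionaries that constrain an unsupervised tagger.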
Here both parametric and non-parametric models are explored. | 0 | Each parse is converted into a set of constituents represented as a tuples: (label, start, end). |
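Converting a parse into (label, start, end) tuples, as this row describes, is a small recursive walk. A sketch over nested (label, children...) tuples with string leaves — this input representation is an assumption for illustration, not the paper's format:

```python
def constituents(tree, start=0):
    """Flatten a nested parse tree into a set of (label, start, end)
    span tuples over token positions, spans half-open [start, end).
    Returns (spans, next_position)."""
    label = tree[0]
    children = tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return {(label, start, start + 1)}, start + 1  # preterminal over one token
    spans, pos = set(), start
    for child in children:
        child_spans, pos = constituents(child, pos)
        spans |= child_spans
    spans.add((label, start, pos))
    return spans, pos


tree = ("S", ("NP", ("DT", "the"), ("NN", "cat")), ("VP", ("VBD", "sat")))
spans, _ = constituents(tree)
```

Once every parser's output is a set of such tuples, combination reduces to set operations — e.g. keeping constituents proposed by a majority of parsers.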
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | MCTAGs are able to generate tree sets having dependent paths. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Collisions between two keys in the table can be identified at model building time. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | context=x: The context for the entity. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | We report token- and type-level accuracy in Table 3 and 6 for all languages and system settings. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The Potsdam Commentary Corpus |
A beam search concept is applied as in speech recognition. | 0 | Decreasing the threshold results in higher mWER due to additional search errors. |
Explanations suggested for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation. | 0 | To combat the proliferation of parsing edges, we prune the lattices according to a hand-constructed lexicon of 31 clitics listed in the ATB annotation guidelines (Maamouri et al., 2009a). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Hieu Hoang named the code “KenLM” and assisted with Moses along with Barry Haddow. |