source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k)
---|---|---|
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | The sentence-selection approach is crude in that it imposes a binary distinction between useful and non-useful parts of OUT.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | Although the tag distributions of the foreign words (Eq. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | This heuristic is used to prune all segmentation possibilities involving “lexically improper” segments.
It is probably the first analysis of Arabic parsing of this kind. | 0 | Modifying the Berkeley parser for Arabic is straightforward. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | We thank members of the MIT NLP group for their suggestions and comments. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | In considering the recognition of these languages, we were forced to be more specific regarding the relationship between the structures derived by these formalisms and the substrings they span. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Each feature concept is akin to a random variable and its occurrence in the text corresponds to a particular instantiation of that random variable. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a case-frame network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | If the semantic expectations do not intersect, then we know that the caseframes extract mutually exclusive types of noun phrases.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Table 5: Type-level English POS Tag Ranking: We list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | In Chinese, numerals and demonstratives cannot modify nouns directly, and must be accompanied by.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Even without features, but still using the tag prior, our median result is 52.0%, still significantly outperforming Graça et al.
The corpus was annotated with different linguistic information. | 0 | In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information – as in PCC.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Word type, N, %: dictionary entries, 2,543, 97.47; morphologically derived words, 3, 0.11; foreign transliterations, 9, 0.34; personal names, 54, 2.07 cases.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Furthermore, some of the arcs represent lexemes not present in the input tokens (e.g. h/DT, fl/POS), however these are parts of valid analyses of the token (cf. super-segmental morphology section 2). |
Their results show that their high-performance NER uses less training data than other systems. | 0 | We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | They contain about 200M words (25M, 110M, 40M and 19M words, respectively). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Arguably this consists of about three phonological words.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | We allow any number of bits from 2 to 25, unlike IRSTLM (8 bits) and BerkeleyLM (17−20 bits). |
There is no global pruning. | 0 | Here, $ is the sentence boundary symbol, which is thought to be at position 0 in the target sentence. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors increase the difficulty of syntactic disambiguation. | 0 | 92 77.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Fourth, we show how to build better models for three different parsers. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The work of the first author was supported by the Lynn and William Frankel Center for Computer Sciences. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | If we were working with more than three parsers we could investigate minority constituents, those constituents that are suggested by at least one parser, but which the majority of the parsers do not suggest. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors increase the difficulty of syntactic disambiguation. | 0 | Certainly these linguistic factors increase the difficulty of syntactic disambiguation.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 7 Conclusion and Future Work. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Oracle results). |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | In our model, we associate these features at the type-level in the lexicon. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | The NIST06 and NIST08 evaluation sets were used for testing.
This paper discusses Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | The natural language processing community is in the strong position of having many available approaches to solving some of its most fundamental problems.
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectives, coreference, and information structure. | 0 | In the rhetorical tree, nuclearity information is then used to extract a “kernel tree” that supposedly represents the key information from which the summary can be generated (which in turn may involve co-reference information, as we want to avoid dangling pronouns in a summary).
There is no global pruning. | 0 | Our approach uses word-to-word dependencies between source and target words. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The evaluation framework for the shared task is similar to the one used in last year’s shared task. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | In the numerator, however, the counts of ni1s are quite irregular, including several zeros (e.g., RAT, none of whose members were seen).
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | The choice between different actions is in general nondeterministic, and the parser relies on a memorybased classifier, trained on treebank data, to predict the next action based on features of the current parser configuration. |
This corpus has several advantages: it is annotated at different levels. | 0 | Preferences for constituent order (especially in languages with relatively free word order) often belong to this group. |
This paper discusses KenLM: Faster and Smaller Language Model Queries. | 0 | The problem is to store these two values for a large and sparse set of n-grams in a way that makes queries efficient.
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The work of the first author was supported by the Lynn and William Frankel Center for Computer Sciences. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | For humans, this characteristic can impede the acquisition of literacy. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | In principle a feature could be an arbitrary predicate of the (spelling, context) pair; for reasons that will become clear, features are limited to querying either the spelling or context alone. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.
This paper discusses KenLM: Faster and Smaller Language Model Queries. | 0 | To conserve memory at the expense of accuracy, values may be quantized using q bits per probability and r bits per backoff.
This paper discusses Unsupervised Models for Named Entity Classification. | 0 | The method shares some characteristics of the decision list algorithm presented in this paper.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | The data is sorted based on the frequency of the context (“a unit of” appeared 314 times in the corpus) and the NE pair instances appearing with that context are shown with their frequency (e.g. “NBC” and “General Electric Co.” appeared 10 times with the context “a unit of”).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | This result suggests the benefit of using the automatic discovery method. |
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques. | 0 | First, in section 4, we evaluate the graph transformation techniques in themselves, with data from the Prague Dependency Treebank and the Danish Dependency Treebank.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | token-level HMM to reflect lexicon sparsity. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation-R2 = 0.20, p < 0.005; see Figure 6. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | There has been additional recent work on inducing lexicons or other knowledge sources from large corpora. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The state function is integrated into the query process so that, in lieu of the query p(w_n | w_1^{n-1}), the application issues query p(w_n | s(w_1^{n-1})), which also returns s(w_1^n).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Note that the good classes JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired, so the trend observed for the results of this method is in the right direction.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The tool we use is MMAX, which has been specifically designed for marking co-reference.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | The entries in such a lexicon may be thought of as meaningful surface segments paired up with their PoS tags li = (si, pi), but note that a surface segment s need not be a space-delimited token. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | We thank members of the MIT NLP group for their suggestions and comments. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Semantic expectations are analogous to lexical expectations except that they represent semantic classes rather than nouns. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | As with lexical expectations, the semantic classes of co-referring expressions are [...]. (4: They may not be perfectly substitutable; for example, one NP may be more specific, e.g., “he” vs. “John F. Kennedy”.)
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The baseline system in Table 3 refers to the maximum entropy system that uses only local features. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | In (b) “they” refers to the kidnapping victims, but in (c) “they” refers to the armed men.
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | It did 402 queries/ms using 1.80 GB. Memory use increased during scoring due to batch processing (MIT) or caching (Rand).
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Context from the whole document can be important in classifying a named entity. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.
This paper presents research in the area of automatic paraphrase discovery. | 0 | At this step, we will try to link those sets, and put them into a single cluster.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We adopted this state-of-the-art model because it makes it easy to experiment with various ways of incorporating our novel constraint feature into the log-linear emission model. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | First we mark any node that dominates (at any level) a verb. [...] consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs).
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The features we used can be divided into 2 classes: local and global. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as ba1, er3, and a1, are often clear indicators that a sequence of hanzi containing them is foreign: even a name like xia4mi3-er3 'Shamir,' which is a legal Chinese personal name, retains a foreign flavor because of er3.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | This is orthographically represented as 7C.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | This vector tx is constructed for every word in the foreign vocabulary and will be used to provide features for the unsupervised foreign language POS tagger. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | SEGTok(noH) is the segmentation accuracy ignoring mistakes involving the implicit definite article h. To evaluate our performance on the tagging task we report CPOS and FPOS corresponding to coarse- and fine-grained PoS tagging results (F1 measure).
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | The P(T | ψ) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | . |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Our use of an unweighted lattice reflects our belief that all the segmentations of the given input sentence are a-priori equally likely; the only reason to prefer one segmentation over another is due to the overall syntactic context which is modeled via the PCFG derivations.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 3 61.7 38. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | A configuration of M consists of a state of the finite control, the nonblank contents of the input tape and k work tapes, and the position of each head.
The AdaBoost algorithm was developed for supervised learning. | 0 | In our experiments we set the parameter values randomly, and then ran EM to convergence. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | For a trigram language model, the partial hypotheses are of the form (e0; e; C; j). |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Hence, we take the probability of the event fmnh analyzed as REL VB to be P(f | REL) · P(mnh | VB). This means that we generate f and mnh independently depending on their corresponding PoS tags, and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh, based on linear context.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | In (1) the sequence ma3lu4 cannot be resolved locally, but depends instead upon broader context; similarly in (2), the sequence cai2neng2 cannot be resolved locally.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Hence we decided to restrict ourselves to only information from the same document. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | To set β, we used the same criterion as for α, over a dev corpus: The MAP combination was used for TM probabilities only, in part due to a technical difficulty in formulating coherent counts when using standard LM smoothing techniques (Kneser and Ney, 1995). Motivated by information retrieval, a number of approaches choose “relevant” sentence pairs from OUT by matching individual source sentences from IN (Hildebrand et al., 2005; Lü et al., 2007), or individual target hypotheses (Zhao et al., 2004).
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The interdependence between fb or 1/!i, and 5:2 is not captured by our model, but this could easily be remedied. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement. | 0 | Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented.
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 3.2 Inter-annotator Agreement. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 0 70.9 42. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Both parameters depend on a single hyperparameter α. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages. We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.
It is probably the first analysis of Arabic parsing of this kind. | 0 | The range of the score is between 0 and 1 (higher is better). |
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectives, coreference, and information structure. | 0 | 3.5 Improved models of discourse.
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Linear probing places at most one entry in each bucket. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Each model was able to produce hypotheses for all input sentences. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | This is not completely surprising, since all systems use very similar technology. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | However, such an approach requires a number of hand-coded systems, which may not be available in languages other than English. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
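
The schema above is just three columns (two strings and a binary integer label), so a dump in this format can be loaded and sanity-checked in a few lines of Python. The sketch below is illustrative only: the file name `data.jsonl` and the JSON-lines layout are assumptions, not part of the original dataset card.

```python
# Minimal sketch for loading and validating a dump with this schema.
# Assumption: rows are stored as JSON lines in "data.jsonl" (hypothetical
# file name) with fields source_text (str), label (int, 0 or 1), and
# target_text (str).
import json

with open("data.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]

# Check each row against the column types declared in the table header.
for row in rows:
    assert isinstance(row["source_text"], str)
    assert row["label"] in (0, 1)
    assert isinstance(row["target_text"], str)

n_zero = sum(r["label"] == 0 for r in rows)
print(f"{len(rows)} rows; label == 0 in {n_zero} of them")
```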