source_text (string, lengths 27–368) | label (int64: 0 or 1) | target_text (string, lengths 1–5.38k)
---|---|---|
The AdaBoost algorithm was developed for supervised learning. | 0 | . |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Although the tag distributions of the foreign words (Eq. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | German English Training: Sentences 58 073 Words 519 523 549 921 Words* 418 979 453 632 Vocabulary Size 7939 4648 Singletons 3454 1699 Test-147: Sentences 147 Words 1 968 2 173 Perplexity – 26.5 Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | For instance, the common "suffixes," -nia (e.g.,. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | (8) can now be rewritten5 as which is of the same form as the function Zt used in AdaBoost. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The CoBoost algorithm described above divides the function Zco into two parts: Zco = Z1co + Z2co. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | In this paper, we will propose an unsupervised method to discover paraphrases from a large untagged corpus. |
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques. | 0 | This may seem surprising, given the experiments reported in section 4, but the explanation is probably that the non-projective dependencies that can be recovered at all are of the simple kind that only requires a single lift, where the encoding of path information is often redundant. |
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | From Thatcher's (1973) work, it is obvious that the complexity of the set of paths from root to frontier of trees in a local set (the tree set of a CFG) is regular. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | This approach is compared to another reordering scheme presented in (Berger et al., 1996). |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Thus our proposed model is a proper model assigning probability mass to all (π, L) pairs, where π is a parse tree and L is the one and only lattice that a sequence of characters (and spaces) W over our alphabet gives rise to. |
The texts were annotated with the RSTtool. | 0 | All annotations are done with specific tools and in XML; each layer has its own DTD. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | There has also been some work on adapting the word alignment model prior to phrase extraction (Civera and Juan, 2007; Wu et al., 2005), and on dynamically choosing a dev set (Xu et al., 2007). |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The model was built with open vocabulary, modified Kneser-Ney smoothing, and default pruning settings that remove singletons of order 3 and higher. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently "in focus" (which also appears underlined in the full text). |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | For the LM, adaptive weights are set as follows: where α is a weight vector containing an element αi for each domain (just IN and OUT in our case), pi are the corresponding domain-specific models, and ˜p(w, h) is an empirical distribution from a targetlanguage training corpus—we used the IN dev set for this. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | Can we do . QmS: Yes, wonderful. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | All four of the techniques studied result in parsing systems that perform better than any previously reported. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Thus at each iteration the algorithm is forced to pick features for the location, person and organization in turn for the classifier being trained. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | For each set, the phrases with bracketed frequencies are considered not paraphrases in the set. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | But the city name Sharm Al- Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b). |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Both authors are members of the Center for Language and Speech Processing at Johns Hopkins University. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The approach recursively evaluates a quantity Q(C; j), where C is the set of already visited cities and sj is the last visited city. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, and find that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | The semilinearity of Tree Adjoining Languages (TAL's), MCTAL's, and Head Languages (HL's) can be proved using this property, with suitable restrictions on the composition operations. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | All our results are obtained by using only the official training data provided by the MUC conferences. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | For example, management succession systems must distinguish between a person who is fired and a person who is hired. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | If the key distribution’s range is also known (i.e. vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | We are given a source string $f_1^J = f_1 \ldots f_j \ldots f_J$ of length $J$, which is to be translated into a target string $e_1^I = e_1 \ldots e_i \ldots e_I$ of length $I$. Among all possible target strings, we will choose the string with the highest probability: $\hat{e}_1^I = \arg\max_{e_1^I} \{\Pr(e_1^I \mid f_1^J)\} = \arg\max_{e_1^I} \{\Pr(e_1^I) \Pr(f_1^J \mid e_1^I)\}$ (1). The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003). |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 3.2 Inter-annotator Agreement. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | Once HMM parameters (θ, φ) are drawn, a token-level tag and word sequence, (t, w), is generated in the standard HMM fashion: a tag sequence t is generated from φ. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | German English Training: Sentences 58 073 Words 519 523 549 921 Words* 418 979 453 632 Vocabulary Size 7939 4648 Singletons 3454 1699 Test-147: Sentences 147 Words 1 968 2 173 Perplexity – 26.5 Table 4: Multi-reference word error rate (mWER) and subjective sentence error rate (SSER) for three different search procedures. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | For example, suppose the current model assigns a belief value of .60 to {A, B}, meaning that it is 60% sure that the correct hypothesis is either A or B. Then new evidence arrives with a belief value of .70 assigned 5 Initially there are no competing hypotheses because all hypotheses are included in θ by definition. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | This might be because our features are more comprehensive than those used by Borthwick. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The algorithm in Fig. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Figure 3 shows a small fragment of the WFST encoding the dictionary, containing both entries forjust discussed, g:t¥ zhonglhua2 min2guo2 (China Republic) 'Republic of China,' and i¥inl. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | Finally, since non-projective constructions often involve long-distance dependencies, the problem is closely related to the recovery of empty categories and non-local dependencies in constituency-based parsing (Johnson, 2002; Dienes and Dubey, 2003; Jijkoun and de Rijke, 2004; Cahill et al., 2004; Levy and Manning, 2004; Campbell, 2004). |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Because we don’t have a separate development set, we used the training set to select among them and found 0.2 to work slightly better than 0.1 and 0.3. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | For unaligned words, we set the tag to the most frequent tag in the corresponding treebank. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | If one of these checks fails then this knowledge source reports that the candidate is not a viable antecedent for the anaphor. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We used C = 1.0 as the L2 regularization constant in (Eq. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | MENE (Maximum Entropy Named Entity) (Borthwick, 1999) was combined with Proteus (a hand-coded system), and came in fourth among all MUC-7 participants. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Morphological Analyzer Ideally, we would use an of-the-shelf morphological analyzer for mapping each input token to its possible analyses. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the annotator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The quasi-monotone search performs best in terms of both error rates mWER and SSER. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Number: filters candidate if number doesn't agree. |
The corpus was annotated with different linguistic information. | 0 | Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | ), and thosethat begin with a verb (� ub..i �u _.. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | We used it to score all phrase pairs in the OUT table, in order to provide a feature for the instance-weighting model. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Proper-Name Identification. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | In our model, we associate these features at the type-level in the lexicon. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | Such a classification can be seen as a not-always-correct summary of global features. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Given parameter estimates, the label for a test example x is defined as We should note that the model in equation 9 is deficient, in that it assigns greater than zero probability to some feature combinations that are impossible. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | A less canonical representation of segmental morphology is triggered by a morpho-phonological process of omitting the definite article h when occurring after the particles b or l. This process triggers ambiguity as for the definiteness status of Nouns following these particles.We refer to such cases in which the concatenation of elements does not strictly correspond to the original surface form as super-segmental morphology. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | However, it is robust, efficient, and easy to implement.4 To perform the maximization in (7), we used the popular L-BFGS algorithm (Liu and Nocedal, 1989), which requires gradient information. |
A beam search concept is applied as in speech recognition. | 0 | A summary of the corpus used in the experiments is given in Table 3. |
Here we present two algorithms. | 0 | This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | In-domain Systran scores on this metric are lower than all statistical systems, even the ones that have much worse human scores. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4). |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | A simple extension will be used to handle this problem. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical. |
Here both parametric and non-parametric models are explored. | 0 | We plan to explore more powerful techniques for exploiting the diversity of parsing methods. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The likelihood of the observed data under the model is where P(yi, xi) is defined as in (9). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Based on revision 4041, we modified Moses to print process statistics before terminating. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | Table 5: Effect of the beam threshold on the number of search errors (147 sentences). |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The aforementioned surface form bcl, for example, may also stand for the lexical item “onion”, a Noun. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The cost estimate, cost(i¥JJ1l.fn is computed in the obvious way by summing the negative log probabilities of i¥JJ1l. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Xim, } associated with the ith example. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Process nominals name the action of the transitive or ditransitive verb from which they derive. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÃcient search algorithm. | 0 | For à = 0, no new target word is generated, while an additional source sentence position is covered. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | pre-processing. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The Recency KS computes the distance between the candidate and the anaphor relative to its scope. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | Each out-of-domain phrase pair is characterized by a set of simple features intended to reflect how useful it will be. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | We propose a limit of 70 words for Arabic parsing evaluations. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching framework. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The method shares some characteristics of the decision list algorithm presented in this paper. |
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Clearly the percentage of productively formed words is quite small (for this particular corpus), meaning that dictionary entries are covering most of the 15 GR is .73 or 96%.. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | We now describe the CoBoost algorithm for the named entity problem. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Here we push the single-framework conjecture across the board and present a single model that performs morphological segmentation and syntactic disambiguation in a fully generative framework. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The scores and confidence intervals are detailed first in the Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | The zone to which a token belongs is used as a feature. |
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | For example, in Information Retrieval (IR), we have to match a user's query to the expressions in the desired documents, while in Question Answering (QA), we have to find the answer to the user's question even if the formulation of the answer in the document is different from the question. |
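Each data row above follows the flattened `source_text | label | target_text |` layout. A minimal sketch of splitting one such row back into its three fields (the helper name and parsing choices are illustrative assumptions, not part of the dataset release):

```python
def parse_row(line: str):
    """Split a 'source_text | label | target_text |' row into its fields.

    Assumes '|' appears only as a column separator in the first two columns
    (as in the rows above) and that label is a bare integer (0 or 1); any
    extra '|' characters are folded back into target_text.
    """
    parts = [p.strip() for p in line.split("|")]
    if parts and parts[-1] == "":  # drop the empty field after the trailing '|'
        parts = parts[:-1]
    source_text = parts[0]
    label = int(parts[1])
    target_text = " | ".join(parts[2:])  # rejoin in case target_text contained '|'
    return source_text, label, target_text

row = "The AdaBoost algorithm was developed for supervised learning. | 0 | . |"
print(parse_row(row))
```

Note that rejoining `parts[2:]` normalizes whitespace around any `|` inside `target_text`; if that matters, a bounded split on only the first two separators would preserve it exactly.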