source_text stringlengths 27–368 | label int64 0–1 | target_text stringlengths 1–5.38k |
---|---|---|
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Table 2 shows these similarity measures. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | As discussed in Section 3, independent paths in tree sets, rather than the path complexity, may be crucial in characterizing semilinearity and polynomial time recognition. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | To this end, we construct a bilingual graph over word types to establish a connection between the two languages (§3), and then use graph label propagation to project syntactic information from English to the foreign language (§4). |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Surprisingly, this effect is much less obvious for out-of-domain test data. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | We have tested the translation system on the Verbmobil task (Wahlster 1993). |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | We carried out translation experiments in two different settings. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The above auxiliary quantity satisfies the following recursive DP equation: Qe0 (e; C; j) = Initial Skip Verb Final 1. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For example, the Wang, Li, and Chang system fails on the sequence 1:f:p:]nian2 nei4 sa3 in (k) since 1F nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.' |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 37. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | For the LM, adaptive weights are set as follows: where α is a weight vector containing an element αi for each domain (just IN and OUT in our case), pi are the corresponding domain-specific models, and ˜p(w, h) is an empirical distribution from a target-language training corpus—we used the IN dev set for this. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 0.831 0.859 496 76. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | SEGTok(noH) is the segmentation accuracy ignoring mistakes involving the implicit definite article h.11 To evaluate our performance on the tagging task we report CPOS and FPOS corresponding to coarse- and fine-grained PoS tagging results (F1) measure. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Dynamic programming efficiently scores many hypotheses by exploiting the fact that an N-gram language model conditions on at most N − 1 preceding words. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The test accuracy more or less asymptotes. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Pairwise comparison is done using the sign test. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | As can be seen in the example, the first two phrases have a different order of NE names from the last two, so we can determine that the last two phrases represent a reversed relation. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | We have shown that the maximum entropy framework is able to use global information directly. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The correct resolution in sentence (b) comes from knowledge that people who are kidnapped are often subsequently released. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | (2010)'s richest model: optimized via either EM or LBFGS, as their relative performance depends on the language. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | However, we do not explore this possibility in the current work. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | The size of TRIE is particularly sensitive to F1092 c11, so vocabulary filtering is quite effective at reducing model size. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Given the limited number of judgements we received, we did not try to evaluate this. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For each caseframe, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Our derivation is slightly different from the one presented in (Schapire and Singer 98) as we restrict at to be positive. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The highestorder N-gram array omits backoff and the index, since these are not applicable. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d). |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 8 1 2. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | This paper is based on work supported in part by DARPA through IBM. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Moses sets the cache size parameter to 50 so we did as well; the resulting cache size is 2.82 GB. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Smith estimates Lotus will make profit this quarter…". |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | We have not to date explored these various options. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | However, our full model takes advantage of word features not present in Graça et al. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation). |
The corpus was annotated with different linguistic information. | 0 | All annotations are done with specific tools and in XML; each layer has its own DTD. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | 8 1 2. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | (Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Although the tag distributions of the foreign words (Eq. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | a classifier. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Figure 1: Translation of PCC sample commentary (STTS)2. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | This might be because our features are more comprehensive than those used by Borthwick. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Again, famous place names will most likely be found in the dictionary, but less well-known names, such as 1PM± R; bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | As in boosting, the algorithm works in rounds. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily. | 0 | All commentaries have been annotated with rhetorical structure, using RSTTool4 and the definitions of discourse relations provided by Rhetorical Structure Theory (Mann, Thompson 1988). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | (3) shows learning curves for CoBoost. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | There exist a few robust broad-coverage parsers that produce non-projective dependency structures, notably Tapanainen and J¨arvinen (1997) and Wang and Harper (2004) for English, Foth et al. (2004) for German, and Holan (2004) for Czech. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Head Grammars (HG's), introduced by Pollard (1984), is a formalism that manipulates headed strings: i.e., strings, one of whose symbols is distinguished as the head. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | This is not ideal for some applications, however. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | (b) supports candidate if selected semantic tags match those of the anaphor. Lexical computes degree of lexical overlap between the candidate and the anaphor. Recency computes the relative distance between the candidate and the anaphor. SynRole computes relative frequency with which the candidate's syntactic role occurs in resolutions. Figure 4: General Knowledge Sources. The Lexical KS returns 1 if the candidate and anaphor are identical, 0.5 if their head nouns match, and 0 otherwise. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | In general, different modalities ("planned to buy", "agreed to buy", "bought") were considered to express the same relationship within an extraction setting. |
BABAR has shown successful results in both the terrorism and natural disaster domains, as well as for contextual-role knowledge in pronoun resolution. | 0 | 2. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | 4 Evaluation Results. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | There have been other kinds of efforts to discover paraphrase automatically from corpora. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | For displaying and querying the annotated text, we make use of the Annis Linguistic Database developed in our group for a large research effort ("Sonderforschungsbereich") revolving around information structure. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | For a language like English, this problem is generally regarded as trivial since words are delimited in English text by whitespace or marks of punctuation. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | Previous approaches have tried to find examples that are similar to the target domain. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | The second main result is that the pseudo-projective approach to parsing (using special arc labels to guide an inverse transformation) gives a further improvement of about one percentage point on attachment score. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Since guess and gold trees may now have different yields, the question of evaluation is complex. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | For all these annotation taks, G¨otze developed a series of questions (essentially a decision tree) designed to lead the annotator to the ap propriate judgement. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | AdaBoost.MH can be applied to the problem using these pseudolabels in place of supervised examples. |
All the texts were annotated by two people. | 0 | The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | Previous Work. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Similarly, for all the LCFRS's, discussed in Section 2, we can define the relationship between a structure and the sequence of substrings it spans, and the effect of the composition operations on sequences of substrings. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | We call this approach parser switching. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Both (Tsarfaty, 2006; Cohen and Smith, 2007) have shown that a single integrated framework outperforms a completely streamlined implementation, yet neither has shown a single generative model which handles both tasks. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | On the other hand, we are interested in the application of rhetorical analysis or "discourse parsing" (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5). |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | This is summarized in Equation 5. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | More formally, we start by representing the dictionary D as a Weighted Finite State Transducer (WFST) (Pereira, Riley, and Sproat 1994). |
The texts were annotated with the RSTtool. | 0 | Two annotators received training with the RST definitions and started the process with a first set of 10 texts, the results of which were intensively discussed and revised. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | To control for the effect of the HSPELL-based pruning, we also experimented with a morphological analyzer that does not perform this pruning. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | While processing the source sentence monotonically, the initial state I is entered whenever there are no uncovered positions to the left of the rightmost covered position. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | This best instance-weighting model beats the equivalant model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Specifically, for both settings we report results on the median run for each setting. |
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time. |
BABAR has shown successful results in both the terrorism and natural disaster domains, as well as for contextual-role knowledge in pronoun resolution. | 0 | In the terrorism domain, 1600 texts were used for training and the 40 test documents. All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | (2009) also report results on English, but on the reduced 17 tag set, which is not comparable to ours). |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Assume that the two classifiers are "rote learners": that is, 1.1 and 12 are defined through look-up tables that list a label for each member of X1 or X2. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | This Good-Turing estimate of p(unseen(fn) \| fn) can then be used in the normal way to define the probability of finding a novel instance of a construction in fn in a text: p(unseen(fn)) = p(unseen(fn) \| fn) p(fn). Here p(fn) is just the probability of any construction in fn as estimated from the frequency of such constructions in the corpus. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance. [Table residue: per-model Arabic parsing results with Length, Leaf Ancestor (Corpus/Sent/Exact), Evalb (LP/LR/F1), and Tag% columns for the Baseline and Stanford parsers.] |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | f, nan2gual+men0 'pumpkins' is by no means impossible. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Training under this model involves estimation of parameter values for P(y), P(m) and P(x I y). |
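The rows above follow a simple pipe-delimited layout (`source_text | label | target_text |`). As a minimal sketch of how such a row can be split back into its three named fields, assuming no cell contains a literal pipe character (the `parse_row` helper is hypothetical, not part of any dataset API):

```python
def parse_row(line: str) -> dict:
    """Split a 'source | label | target |' table row into named fields.

    Assumes cells never contain a literal '|' character, which holds
    for the rows in this dump.
    """
    # Drop surrounding whitespace and the trailing pipe, then split on '|'.
    cells = [c.strip() for c in line.strip().rstrip("|").split("|")]
    source, label, target = cells
    return {"source_text": source, "label": int(label), "target_text": target}


row = parse_row(
    "The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | "
    "As in boosting, the algorithm works in rounds. |"
)
print(row["label"])  # → 0
```

For cells that may contain escaped pipes (`\|`), a real parser would need to unescape after splitting; the sketch above deliberately skips that case.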