source_text (string, 27–368 chars) | label (int64: 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | 0750271 and by the DARPA GALE program. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The range of the score is between 0 and 1 (higher is better). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Third, we develop a human interpretable grammar that is competitive with a latent variable PCFG. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | A procedural definition to restrict... In the approach described in (Berger et al., 1996), a morphological analysis is carried out and word morphemes rather than full-form words are used during the search. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Nonetheless, the results of the comparison with human judges demonstrates that there is mileage being gained by incorporating models of these types of words. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm. Input: (x1, y1), ..., (xm, ym); xi ∈ 2^X, yi = +1. Initialize D1(i) = 1/m. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | There is a fairly large body of work on SMT adaptation. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | This may be the sign of a maturing research environment. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | To see this, note thai the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | This way we can draw a distinction between system performance. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The difference in precision between similarity and Bayes switching techniques is significant, but the difference in recall is not. |
This corpus has several advantages: it is annotated at different levels. | 0 | 2.6 Co-reference. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | 10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007). |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word you1you1 'soberly' and misinterpreted the thus isolated first you1 as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation ... |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | For example, the Wang, Li, and Chang system fails on the sequence nian2 nei4 sa3 in (k) since nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.' |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The high 1 tone of J1l would not normally neutralize in this fashion if it were functioning as a word on its own. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | 2. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | The domain is general politics, economics and science. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | The probability distribution that satisfies the above property is the one with the highest entropy. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | There are two weaknesses in Chang et al.'s model, which we improve upon. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | To approximate these baselines, we implemented a very simple sentence selection algorithm in which parallel sentence pairs from OUT are ranked by the perplexity of their target half according to the IN language model. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | It is well known that language pairs such as English–German pose more challenges to machine translation systems than language pairs such as French–English. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | There has been additional recent work on inducing lexicons or other knowledge sources from large corpora. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | The system described in this paper is similar to the MENE system of (Borthwick, 1999). |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | For all these annotation tasks, Götze developed a series of questions (essentially a decision tree) designed to lead the annotator to the appropriate judgement. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | tai2du2 'Taiwan Independence.' |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | This section measures performance on shared tasks in order of increasing complexity: sparse lookups, evaluating perplexity of a large file, and translation with Moses. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Since our destructor is an efficient call to munmap, bypassing the destructor favors only other packages. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | As can be seen in Figure 3, the phrases in the "agree" set include completely different relationships, which are not paraphrases. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Because of their size, the examples (Figures 2 to 4) appear at the end of the paper. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | This is the first time that we organized a large-scale manual evaluation. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The alignment mapping is j → i = aj from source position j to target position i = aj. The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | input: source string f1...fj...fJ. initialization. for each cardinality c = 1, 2, ..., J do: for each pair (C, j), where j ∈ C and |C| = c, do: for each target word e ∈ E: Q_{e'}(e, C, j) = p(fj|e) · max_{δ, e''} max_{j' ∈ C\{j}} { p(j|j', J) · p(δ) · p_δ(e|e', e'') · Q_{e''}(e', C\{j}, j') }, over words fj in the input string of length J. For the final translation each source position is considered exactly once. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | This paper discusses the use of unlabeled examples for the problem of named entity classification. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We must adjoin all trees in an auxiliary tree set together as a single step in the derivation. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | In our work, we demonstrate that using a simple naïve Bayes approach also yields substantial performance gains, without the associated training complexity. |
The texts were annotated with the RSTtool. | 0 | Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | It may seem surprising to some readers that the interhuman agreement scores reported here are so low. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rulebased commercial system, will give further insight into the relation between automatic and manual evaluation. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | As will be obvious later, their derivation tree sets will be local sets as are those of CFG's. |
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | (2009) also report results on English, but on the reduced 17 tag set, which is not comparable to ours). |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | This may be the sign of a maturing research environment. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Because b is a function, no additional hypothesis splitting happens. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | For each domain, we created a blind test set by manually annotating 40 documents with anaphoric chains, which represent sets of noun phrases that are coreferent (as done for MUC6 (MUC6 Proceedings, 1995)): m3(S) = Σ_{X∩Y=S} m1(X)·m2(Y) / (1 − Σ_{X∩Y=∅} m1(X)·m2(Y)) (1). |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | If enough parsers suggest that a particular constituent belongs in the parse, we include it. |
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks. | 0 | Gather phrases using keywords Now, the keyword with the top TF/ITF score is selected for each phrase. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | (2009) on Portuguese (Graça et al. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Examples are given in Table 4. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | However, in practice, unknown word models also make the distribution improper. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | For example, in the CC-domain, 96 keywords are found which have TF/ITF scores above a threshold; some of them are shown in Figure 3. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | They also describe an application of co-training to classifying web pages (the two feature sets are the words on the page, and other pages pointing to the page). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | For example, the independence assumptions mean that the model fails to capture the dependence between specific and more general features (for example the fact that the feature full-string=New_York is always seen with the features contains(New) and contains(York) and is never seen with a feature such as contains(Group)). The baseline method tags all entities as the most frequent class type (organization). |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The test set included 2000 sentences from the Europarl corpus, but also 1064 sentences out-ofdomain test data. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | It is based on the traditional character set rather than the simplified character set used in Singapore and Mainland China. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW). |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | One is that smaller sets sometimes have meaningless keywords, like "strength" or "add" in the CC-domain, or "compare" in the PC-domain. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Evaluation of the Segmentation as a Whole. |
There is no global pruning. | 0 | In Eq. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | These models generally outperform our memory consumption but are much slower, even when cached. |
This corpus has several advantages: it is annotated at different levels. | 0 | Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The features are used to represent each example for the learning algorithm. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | The 3rd block contains the mixture baselines. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | For parsing, this is a mistake, especially in the case of interrogatives. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The complexity of the algorithm is O(E^3 · J^2 · 2^J), where E is the size of the target language vocabulary. |
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | For example, let us consider a tree set containing trees of the form shown in Figure 4a. |
All the texts were annotated by two people. | 0 | 2.1 Part-of-speech tags. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | No. of Articles |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | We run the baseline Moses system for the French-English track of the 2011 Workshop on Machine Translation,9 translating the 3003-sentence test set. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Table 3 Classes of words found by ST for the test corpus. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words, using ... (Footnote 8: Our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of ...) |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | It is formally straightforward to extend the grammar to include these names, though it does increase the likelihood of overgeneration and we are unaware of any working systems that incorporate this type of name. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | If the semantic expectations do not intersect, then we know that the case- frames extract mutually exclusive types of noun phrases. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | (a) shen3me0 shi2hou4 wo3 cai2 neng2 ke4fu2 zhe4ge4 kun4... (gloss: what time I just be able overcome this CL diffic...) 'When will I be able to overcome this difficulty?' |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The data structure was populated with 64-bit integers sampled uniformly without replacement. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Surprisingly, the non-parametric switching technique also exhibited robust behaviour in this situation. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%). |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | And time is short. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | Each member of a set of trees can be adjoined into distinct nodes of trees in a single elementary tree set, i.e, derivations always involve the adjunction of a derived auxiliary tree set into an elementary tree set. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning. |
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure. | 0 | Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Hence, the different averages of manual scores for the different language pairs reflect the behaviour of the judges, not the quality of the systems on different language pairs. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Clearly it is possible to write a rule that states that if an analysis Modal+Verb is available, then that is to be preferred over Noun+Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993). |
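Each row above follows a fixed `source_text | label | target_text |` layout: a string source, an integer 0/1 label, and a string target. Below is a minimal Python sketch for turning such lines back into records. It is not an official loader (the dataset is not named on this page), and it assumes cell text never contains a spaced `" | "` separator, which holds for the rows shown here.

```python
def parse_row(line: str):
    """Split one 'source | label | target |' row into a (str, int, str) tuple.

    Returns None for header, separator, or otherwise malformed lines.
    """
    # Drop the trailing pipe, then split on the spaced column separator.
    parts = [p.strip() for p in line.rstrip().rstrip("|").split(" | ")]
    if len(parts) != 3 or not parts[1].isdigit():
        return None
    source_text, label, target_text = parts
    return source_text, int(label), target_text

# Example row copied from the table above.
row = ("Combining multiple highly-accurate independent parsers yields "
       "promising results. | 0 | 2.6 Co-reference. |")
record = parse_row(row)
print(record[1])  # label of the example row: 0
```

Rejecting lines whose middle field is not a bare integer conveniently filters out both the header and the `---|---|---|` delimiter row without special-casing them.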