source_text | label | target_text |
---|---|---|
The AdaBoost algorithm was developed for supervised learning. | 0 | The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Not all the layers have been produced for all the texts yet. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | Here a set is represented by the keyword, and the number in parentheses indicates the number of shared NE pair instances. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | This decreases the statistical significance of our results compared to those studies. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | More details on the parsing algorithm can be found in Nivre (2003). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | x EG, has agreed to be bought by H x EG, now owned by H x H to acquire EG x H's agreement to buy EG Three of those phrases are actually paraphrases, but sometimes there can be some noise, such as the second phrase above. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | Furthermore, some of the arcs represent lexemes not present in the input tokens (e.g. h/DT, fl/POS), however these are parts of valid analyses of the token (cf. super-segmental morphology section 2). |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Let's say, if we find one system doing better on 20 of the blocks, and worse on 80 of the blocks, is it significantly worse? |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Since these are distinct syntactic units, they are typically segmented. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | For each terminal, the Leaf Ancestor metric extracts the shortest path to the root. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | These clusters are computed using an SVD variant without relying on transitional structure. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | From here on we will refer to the named-entity string itself as the spelling of the entity, and the contextual predicate as the context. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | Our use of an unweighted lattice reflects our belief that all the segmentations of the given input sentence are a priori equally likely; the only reason to prefer one segmentation over another is due to the overall syntactic context, which is modeled via the PCFG derivations. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Statistics are printed before Moses exits and after parts of the decoder have been destroyed. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | In our experiments we set the parameter values randomly, and then ran EM to convergence. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | We substantially outperform all of them on query speed and offer lower memory consumption than lossless alternatives. |
A beam search concept is applied as in speech recognition. | 0 | Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The kind of annotation work presented here would clearly benefit from the emergence of standard formats and tag sets, which could lead to sharable resources of larger size. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | For all other recursive NPs, we add a common annotation to the POS tag of the head (recursiveNPHead). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | (garbled listing of rules decomposing names into family-name and given-name hanzi) The difficulty is that given names can consist, in principle, of any hanzi or pair of hanzi, so the possible given names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | All three models evaluated in this paper incorrectly analyze the constituent as iDafa; none of the models attach the attributive adjectives properly. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names. Note that in Chang et al.'s model the p(rule 9) is estimated as the product of the probability of finding G1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Unfortunately, we have much less data to work with than with the automatic scores. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Also, in Information Extraction (IE), in which the system tries to extract elements of some events (e.g. date and company names of a corporate merger event), several event instances from different news articles have to be aligned even if these are expressed differently. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | If the token is the first word of a sentence, then this feature is set to 1. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The monotone search performs worst in terms of both error rates mWER and SSER. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For example, both a chair and a truck would be labeled as artifacts, but this does not at all suggest that they are coreferent. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | After tagging a large corpus with an automatic NE tagger, the method tries to find sets of paraphrases automatically without being given a seed phrase or any kind of cue. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In the pinyin transliterations a dash(-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | In the namedentity problem each example is a (spelling,context) pair. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The authors acknowledge the support of the NSF (CAREER grant IIS0448168, and grant IIS 0904684). |
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems. | 0 | While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given. |
Their results show that their high performance NER use less training data than other systems. | 0 | The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Typical data structures are generalized Bloom filters that guarantee a customizable probability of returning the correct answer. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Here, the term frequency (TF) is the frequency of a word in the bag and the inverse term frequency (ITF) is the inverse of the log of the frequency in the entire corpus. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In this situation, BABAR takes the conservative approach and declines to make a resolution. |
This corpus has several advantages: it is annotated at different levels. | 0 | Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | In this shared task, we were also confronted with this problem, and since we had no funding for paying human judgements, we asked participants in the evaluation to share the burden. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | This limitation is the obstacle to making the technology "open domain". |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Intuitively, it places more weight on OUT when less evidence from IN is available. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The PROBING model was designed to improve upon SRILM by using linear probing hash tables (though not arranged in a trie), allocating memory all at once (eliminating the need for full pointers), and being easy to compile. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm. Input: (x1, y1), ..., (xm, ym); xi ∈ 2^X, yi = ±1. Initialize D1(i) = 1/m. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | 2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verbgroup is generated. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 99.25% of these cases. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | gao1gao1xing4xing4 'happily' In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative marker bu4 'not' is inserted between the copy and the full verb. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | These tag distributions are used to initialize the label distributions over the English vertices in the graph. |
BABAR performed successfully in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns. | 0 | Most computational models for coreference resolution rely on properties of the anaphor and candidate antecedent, such as lexical matching, grammatical and syntactic features, semantic agreement, and positional information. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This is a rather important source of errors in name identification, and it is not really possible to objectively evaluate a name recognition system without considering the main lexicon with which it is used. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | During development, we sensed that the Recency and Syn-Role KSs did not deserve to be on equal footing with the other KSs because their knowledge was so general. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | We aligned the texts at a sentence level across all four languages, resulting in 1064 sentences per language. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Tokens were converted to vocabulary identifiers in advance and state was carried from each query to the next. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | We measured recall (Rec), precision (Pr), and the F-measure (F) with recall and precision equally weighted. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The productions of HG's are very similar to those of CFG's except that the operation used must be made explicit. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Lexicalizing several POS tags improves performance. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 1 | The second algorithm builds on a boosting algorithm called AdaBoost (Freund and Schapire 97; Schapire and Singer 98). |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | As can be seen in Figure 3, the phrases in the "agree" set include completely different relationships, which are not paraphrases. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The test accuracy more or less asymptotes. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Using the virtual distribution D1(i) and pseudo-labels ỹi, values for W0, W+ and W− can be calculated for each possible weak hypothesis (i.e., for each feature x ∈ X); the weak hypothesis with minimal value for W0 + 2√(W+W−) can be chosen as before; and the weight for this weak hypothesis, αt = (1/2) ln(W+/W−), can be calculated. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Quite often, though, these directives fulfill the goal of increasing annotator agreement without in fact settling the theoretical question; i.e., the directives are clear but not always very well motivated. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(w|h) of a target word w following an ngram h; and the translation models (TM) p(s|t) and p(t|s), which give the probability of source phrase s translating to target phrase t, and vice versa. |
In this paper, the authors are of the opinion that the sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | (garbled table fragment) |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | For example, in the phrase "Company-A last week purchased rival Marshalls from Company-B", the purchased company is Marshalls, not Company-B. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: xiao3 [lao3 shu3] 'little rat' becomes xiao3 [lao2 shu3], rather than xiao2 [lao2 shu3], because the rule first applies within the word lao3shu3 'rat,' blocking its phrasal application. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | Central to our approach (see Algorithm 1) is a bilingual similarity graph built from a sentence-aligned parallel corpus. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | We substantially outperform all of them on query speed and offer lower memory consumption than lossless alternatives. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Judges varied in the average score they handed out. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 12 One class of full personal names that this characterization does not cover are married women's names. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The trie data structure is commonly used for language modeling. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Dagan and Itai (Dagan and Itai, 1990) experimented with co-occurrence statistics that are similar to our lexical caseframe expectations. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The use of ILP in learning the desired grammar significantly increases the computational complexity of this method. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | First, we learn weights on individual phrase pairs rather than sentences. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1. |
Their results show that their high performance NER use less training data than other systems. | 0 | In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | The effect of UNIQ is very small on both data sets. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Systems that generally do better than others will receive a positive average normalizedjudgement per sentence. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | The use of ILP in learning the desired grammar significantly increases the computational complexity of this method. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Contextual role knowledge provides evidence as to whether a candidate is a plausible antecedent for an anaphor. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | The availability of these resources guided our selection of foreign languages. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | SEGTok(noH) is the segmentation accuracy ignoring mistakes involving the implicit definite article h. To evaluate our performance on the tagging task we report CPOS and FPOS, corresponding to coarse- and fine-grained PoS tagging results (F1 measure). |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Time for Moses itself to load, including loading the language model and phrase table, is included. |
A beam search concept is applied as in speech recognition. | 0 | Our approach uses word-to-word dependencies between source and target words. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | We have not yet tried this. |
There is no global pruning. | 0 | In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962). |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Overview of the method. 2.2 Step by Step Algorithm. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | In contrast to results reported in Johnson (2007), we found that the performance of our Gibbs sampler on the basic 1TW model stabilized very quickly, after about 10 full iterations. The sampling distribution is P(Ti | T−i, β) ∝ Π(f,v)∈Wi P(v | Ti, f, W−i, T−i, β); all of the probabilities on the right-hand side are Dirichlet distributions, which can be computed analytically given counts. |