source_text (string, 27–368 chars) | label (int64: 0 or 1) | target_text (string, 1–5.38k chars) |
---|---|---|
The AdaBoost algorithm was developed for supervised learning. | 0 | The model was parameterized such that the joint probability of a (label, feature-set) pair P(yi, xi) is written as P(yi, xi) = P(yi) P(mi) ∏_{j=1}^{mi} P(xij | yi). The model assumes that (y, x) pairs are generated by an underlying process where the label is first chosen with some prior probability P(yi); the number of features mi is then chosen with some probability P(mi); finally the features are independently generated with probabilities P(xij | yi). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Second, rather than relying on a division of the corpus into manually-assigned portions, we use features intended to capture the usefulness of each phrase pair. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | We used these three parsers to explore parser combination techniques. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. For a token that is in a consecutive sequence of init then a feature Corporate-Suffix is set to 1. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Let us consider an example of ambiguity caused by devocalization. |
The texts were annotated with the RSTtool. | 0 | Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ.7 Table 5 shows type- and token-level error rates for each corpus. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The data structure was populated with 64-bit integers sampled uniformly without replacement. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | This concerns on the one hand the basic question of retrieval, i.e. searching for information across the annotation layers (see 3.1). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | and "H" represents "Hanson Plc". |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | Making the ten judgements (2 types for 5 systems) takes on average 2 minutes. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Statistics for all data sets are shown in Table 2. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Because there might be some controversy about the exact definitions of such universals, this set of coarse-grained POS categories is defined operationally, by collapsing language (or treebank) specific distinctions to a set of categories that exists across all languages. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | As the name implies, space is O(m) and linear in the number of entries. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | The prediction based on these features is a knearest neighbor classification, using the IB1 algorithm and k = 5, the modified value difference metric (MVDM) and class voting with inverse distance weighting, as implemented in the TiMBL software package (Daelemans et al., 2003). |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | This suggests a direct parallel to (1): where ˜p(s, t) is a joint empirical distribution extracted from the IN dev set using the standard procedure.2 An alternative form of linear combination is a maximum a posteriori (MAP) combination (Bacchiani et al., 2004). |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | For the error counts, a range from 0.0 to 1.0 is used. |
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques. | 0 | Here we use a slightly different notion of lift, applying to individual arcs and moving their head upwards one step at a time: Intuitively, lifting an arc makes the word wk dependent on the head wi of its original head wj (which is unique in a well-formed dependency graph), unless wj is a root in which case the operation is undefined (but then wj →* wk is necessarily projective if the dependency graph is well-formed). |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | This distributional sparsity of syntactic tags is not unique to English.1 The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | (If the TF/IDF score of that word is below a threshold, the phrase is discarded.) |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Linguistic intuitions like those in the previous section inform language-specific annotation choices. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | 7 Unlike Dickinson (2005), we strip traces and only con-. |
The corpus was annotated with different linguistic information. | 0 | Thus it is possible, for illustration, to look for a noun phrase (syntax tier) marked as topic (information structure tier) that is in a bridging relation (co-reference tier) to some other noun phrase. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | The resulting algorithm is depicted in Table 1. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | The developers suggested some changes, such as building the model from scratch with IRSTLM, but these did not resolve the problem. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | am 11. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | 3.1 Corpora. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | We use label propagation in two stages to generate soft labels on all the vertices in the graph. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | The resulting structural differences between tree- banks can account for relative differences in parsing performance. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | So, we set a threshold that at least two examples are required to build a link. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | All the links in the CC-domain are shown in Step 4 in subsection 3.2. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The role that each noun phrase plays in the kidnapping event is key to distinguishing these cases. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In Section 4, we present the performance measures used and give translation results on the Verbmobil task. |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The translation scores for the hypotheses generated with different threshold values t0 are compared to the translation scores obtained with a conservatively large threshold t0 = 10.0. For each test series, we count the number of sentences whose score is worse than the corresponding score of the test series with the conservatively large threshold t0 = 10.0, and this number is reported as the number of search errors. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | So, this was a surprise element due to practical reasons, not malice. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | In this paper we have argued that Chinese word segmentation can be modeled effectively using weighted finite-state transducers. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In general, m, l, l′ ∉ {l1, l2, l3} and in lines 3 and 4, l′ must be chosen not to violate the above reordering restriction. |
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | We applied the AutoSlog system (Riloff, 1996) to our unannotated training texts to generate a set of extraction patterns for each domain. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 8 57.3 +FEATS best median 50. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | Borthwick (1999) successfully made use of other hand-coded systems as input for his MENE system, and achieved excellent results. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The P(W | T, ψ) term in the lexicon component now decomposes as P(W | T, ψ) = ∏_{i=1}^{n} P(W_i | T_i, ψ) = ∏_{i=1}^{n} ∏_f P(v | ψ_{T_i, f}); such distributions are not modeled by the standard HMM, which instead can model token-level frequency. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The wounds are still healing.), entity-attribute (e.g., She 2001), who determined that in their corpus of German computer tests, 38% of relations were lexically signalled. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | A multicomponent Tree Adjoining Grammar (MCTAG) consists of a finite set of finite elementary tree sets. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | We resolve this problem by inserting an entry with probability set to an otherwise-invalid value (−oc). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The type-level posterior term can be computed according to P(Ti | W, T−i, β) ∝ … Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | As expected, the most informative encoding, Head+Path, gives the highest accuracy with over 99% of all non-projective arcs being recovered correctly in both data sets. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Recall. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | This left 962 examples, of which 85 were noise. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | In the second scheme, Head+Path, we in addition modify the label of every arc along the lifting path from the syntactic to the linear head so that if the original label is p the new label is p↓. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 1 | It is well-known that constituency parsing models designed for English often do not generalize easily to other languages and treebanks.1 Explanations for this phenomenon have included the relative informativeness of lexicalization (Dubey and Keller, 2003; Arun and Keller, 2005), insensitivity to morphology (Cowan and Collins, 2005; Tsarfaty and Sima'an, 2008), and the effect of variable word order (Collins et al., 1999). |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | They first collect the NE instance pairs and contexts, just like our method. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | The evidence may not say anything about whether A is more likely than B, only that C and D are not likely. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Our evaluation includes both weighted and un- weighted lattices. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | A moment's reflection will reveal that things are not quite that simple. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | If a candidate has a belief value ≥ .50, then we select that candidate as the antecedent for the anaphor. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The proof is given in (Tillmann, 2000). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Both (Tsarfaty, 2006; Cohen and Smith, 2007) have shown that a single integrated framework outperforms a completely streamlined implementation, yet neither has shown a single generative model which handles both tasks. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | We define an ATM, M, recognizing a language generated by a grammar, G, having the properties discussed in Section 43. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The segmentation chosen is the best path through the WFST, shown in (d). |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | When dropping the top and bottom 2.5% the remaining BLEU scores define the range of the confidence interval. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | Our test machine has two Intel Xeon E5410 processors totaling eight cores, 32 GB RAM, and four Seagate Barracuda disks in software RAID 0 running Linux 2.6.18. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | In. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | Also, "agree" in the CC-domain is not a desirable keyword. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | We refer to this process as Reliable Case Resolution because it involves finding cases of anaphora that can be easily resolved with their antecedents. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | A list of words occurring more than 10 times in the training data is also collected (commonWords). |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | The discovered paraphrases can be a big help to reduce human labor and create a more comprehensive pattern set. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter). |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents. |
The texts were annotated with the RSTtool. | 0 | What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | The local features used are similar to those used in BBN's IdentiFinder (Bikel et al., 1999) or MENE (Borthwick, 1999). |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | How do additional ambiguities caused by devocalization affect statistical learning? |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | It has no syntactic function. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Thus, rather than give a single evaluative score, we prefer to compare the performance of our method with the judgments of several human subjects. |
The corpus was annotated with different linguistic information. | 0 | The tool we use is MMAX8, which has been specifically designed for marking co-reference. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Each pattern represents the role that a noun phrase plays in the surrounding context. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | Section 2.2 then describes our representation for contextual roles and four types of contextual role knowledge that are learned from the training examples. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | We elected to run Moses single-threaded to minimize the impact of RandLM’s cache on memory use. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | (Kehler, 1997) also used a Dempster-Shafer model to merge evidence from different sources for template-level coreference. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | See Table 2 for the tag set size of other languages. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For each case- frame, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | For each experiment we gave an nonparametric and a parametric technique for combining parsers. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | BABAR uses a Dempster-Shafer decision model (Stefik, 1995) to combine the evidence provided by the knowledge sources. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | For example Chen and Liu (1992) report precision and recall rates of over 99%, but this counts only the words that occur in the test corpus that also occur in their dictionary. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | However, as we have noted, nothing inherent in the approach precludes incorporating higher-order constraints, provided they can be effectively modeled within a finite-state framework. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | In this paper, we will propose an unsupervised method to discover paraphrases from a large untagged corpus. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Second, comparisons of different methods are not meaningful unless one can eval uate them on the same corpus. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | The manual scores are averages over the raw unnormalized scores. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | These clusters are computed using an SVD variant without relying on transitional structure. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Since the segmentation corresponds to the sequence of words that has the lowest summed unigram cost, the segmenter under discussion here is a zeroth-order model. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | Finally, our U (unparsed) measure is used to report the number of sentences to which our system could not propose a joint analysis. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state. |
Here we present two algorithms. | 0 | For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical. |
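The rows above follow a simple three-column `source_text | label | target_text` schema. As an illustrative sketch only (not part of any original tooling for this dataset, and assuming the ` | ` delimiter does not occur inside the text fields), such rows could be parsed and grouped by label like this; the function names and the example row strings below are hypothetical:

```python
def parse_row(line):
    """Split a 'source | label | target' row into a record dict.

    Assumes the binary label is the middle field and that ' | '
    does not appear inside the text fields themselves.
    """
    source, _, rest = line.partition(" | ")
    label, _, target = rest.partition(" | ")
    return {
        "source_text": source.strip(),
        "label": int(label),
        "target_text": target.strip().rstrip("|").strip(),
    }


def split_by_label(lines):
    """Group parsed records into label-0 and label-1 buckets."""
    groups = {0: [], 1: []}
    for line in lines:
        record = parse_row(line)
        groups[record["label"]].append(record)
    return groups
```

This keeps the parsing deliberately minimal; a real pipeline would more likely load the dataset through a proper TSV/CSV reader or a datasets library rather than splitting on the pipe character by hand.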