source_text (string, length 27–368) | label (int64, 0–1) | target_text (string, length 1–5.38k) |
---|---|---|
Their results show that their high-performance NER system uses less training data than other systems. | 0 | Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Analysis of the data revealed that the contextual role knowledge is especially helpful for resolving pronouns because, in general, they are semantically weaker than definite NPs. |
Here we present two algorithms. | 0 | The unlabeled data gives many such "hints" that two features should predict the same label, and these hints turn out to be surprisingly useful when building a classifier. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | With this restriction the resulting tree sets will have independent paths. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | 2.1.1 Lexical Seeding It is generally not safe to assume that multiple occurrences of a noun phrase refer to the same entity. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Segments with the same surface form but different PoS tags are treated as different lexemes, and are represented as separate arcs (e.g. the two arcs labeled neim from node 6 to 7). |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Note that the sets of possible classifiers for a given noun can easily be encoded on that noun by grammatical features, which can be referred to by finite-state grammatical rules. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The negative logarithm of t0 is reported. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | The dictionary sizes reported in the literature range from 17,000 to 125,000 entries, and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches, possibly more important than the particular set of methods used in the segmentation. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Chang of Tsinghua University, Taiwan, R.O.C., for kindly providing us with the name corpora. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The approach builds from an initial seed set for a category, and is quite similar to the decision list approach described in (Yarowsky 95). |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | A totally non-stochastic rule-based system such as Wang, Li, and Chang's will generally succeed in such cases, but of course runs the risk of overgeneration wherever the single-hanzi word is really intended. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | If it is made up of all capital letters, then (allCaps, zone) is set to 1. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | ogy (Koskenniemi 1983; Antworth 1990; Tzoukermann and Liberman 1990; Karttunen, Kaplan, and Zaenen 1992; Sproat 1992); we represent the fact that ir, attaches to nouns by allowing ε-transitions from the final states of all noun entries, to the initial state of the sub-WFST representing f,. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | The structure uses linear probing hash tables and is designed for speed. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible. |
Here both parametric and non-parametric models are explored. | 0 | The corpus-based statistical parsing community has many fast and accurate automated parsing systems, including systems produced by Collins (1997), Charniak (1997) and Ratnaparkhi (1997). |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | (2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The corpus has been annotated with six different types of information, which are characterized in the following subsections. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Employing a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions is not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | . |
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques. | 0 | Projectivizing a dependency graph by lifting nonprojective arcs is a nondeterministic operation in the general case. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The state-to-tag mapping is obtained from the best hyperparameter setting for the 1:1 mapping shown in Table 3. |
There is no global pruning. | 0 | 2) An improved language model, which takes into account syntactic structure, e.g. to ensure that a proper English verbgroup is generated. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | The text type is editorials instead of speech transcripts. |
The texts were annotated with the RSTtool. | 0 | Within the RST "user community" there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational). |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | We performed three experiments to evaluate our techniques. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Table 2 shows single-threaded results, mostly for comparison to IRSTLM, and Table 3 shows multi-threaded results. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Although this feature helps, we encounter one consequence of variable word order. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | suffixes (e.g., �=) Other notable parameters are second order vertical Markovization and marking of unary rules. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | 2 for the accuracy of the different methods. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The present proposal falls into the last group. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, such as, which could be /adv jiang1 'be about to' or /nc jiang4 '(military) general', as in xiao3jiang4 'little general.' |
It is probably the first analysis of Arabic parsing of this kind. | 0 | To differentiate between the coordinating and discourse separator functions of conjunctions (Table 3), we mark each CC with the label of its right sister (splitCC). |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | BerkeleyLM uses states to optimistically search for longer n-gram matches first and must perform twice as many random accesses to retrieve backoff information. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush"). |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Table 6: Type-level results. Each cell reports the type-level accuracy computed against the most frequent tag of each word type. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | This analyzer setting is similar to that of (Cohen and Smith, 2007), and models using it are denoted nohsp. Parser and Grammar: We used BitPar (Schmid, 2004), an efficient general-purpose parser, together with various treebank grammars to parse the input sentences and propose compatible morphological segmentation and syntactic analysis. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors that complicate syntactic disambiguation. | 0 | We can use the preceding linguistic and annotation insights to build a manually annotated Arabic grammar in the manner of Klein and Manning (2003). |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective treebanks is often neglected because of the relative scarcity of problematic constructions. | 0 | In the following, we use the notation wi →r wj to mean that (wi, r, wj) ∈ A; we also use wi → wj to denote an arc with unspecified label and wi →* wj for the reflexive and transitive closure of the (unlabeled) arc relation. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | We have already mentioned the closely related work by Matsoukas et al. (2009) on discriminative corpus weighting, and Jiang and Zhai (2007) on (nondiscriminative) instance weighting. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | Segmental morphology: Hebrew consists of seven particles m ("from"), f ("when"/"who"/"that"), h ("the"), w ("and"), k ("like"), l ("to") and b ("in"), which may never appear in isolation and must always attach as prefixes to the following open-class category item we refer to as the stem. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The tree denoting this derivation of γ is rooted with a node labeled γ' having k subtrees for the derivations of γ1, ..., γk. |
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | There is a fairly large body of work on SMT adaptation. |
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. | 0 | The extent to which this constraint is enforced varies greatly across existing methods. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | The following three sections elaborate these different stages in more detail. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | This range is collapsed to a number of buckets, typically by taking the hash modulo the number of buckets. |
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | The 1st block contains the simple baselines from section 2.1. |
The AdaBoost algorithm was developed for supervised learning. | 0 | This section describes AdaBoost, which is the basis for the CoBoost algorithm. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | To explicitly handle the word reordering between words in source and target language, we use the concept of the so-called inverted alignments as given in (Ney et al., 2000). |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | It's not clear how to apply these methods in the unsupervised case, as they required cross-validation techniques: for this reason we use the simpler smoothing method shown here. The input to the unsupervised algorithm is an initial, "seed" set of rules. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | By aggregating the POS labels of the English tokens to types, we can generate label distributions for the English vertices. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | For example, take ..., says Maury Cooper, a vice president at S.&P. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Specifically, for the ith word type, the set of token-level tags associated with token occurrences of this word, denoted t(i), must all take the value Ti to have nonzero mass. Thus in the context of Gibbs sampling, if we want to block sample Ti with t(i), we only need sample values for Ti and consider this setting of t(i). |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | For example, we might have VP → VB NP PP, where the NP is the subject. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The following algorithm was then used to induce new rules: Let Count' (x) be the number of times feature x is seen with some known label in the training data. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered? |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | The Danish Dependency Treebank (DDT) comprises about 100K words of text selected from the Danish PAROLE corpus, with annotation of primary and secondary dependencies (Kromann, 2003). |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The theory has also been validated empirically. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The range of the score is between 0 and 1 (higher is better). |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | 2.1 Overview. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement. |
The AdaBoost algorithm was developed for supervised learning. | 0 | Recent results (e.g., (Yarowsky 95; Brill 95; Blum and Mitchell 98)) have suggested that unlabeled data can be used quite profitably in reducing the need for supervision. |
This corpus has several advantages: it is annotated at different levels. | 0 | Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the annotator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Decreasing the threshold results in higher mWER due to additional search errors. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | MENE has only been tested on MUC7. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Human evaluation is one way to distinguish between the two cases. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | It was also proposed to allow annotators to skip sentences that they are unable to judge. |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | We outlined the definition of a family of constrained grammatical formalisms, called Linear Context-Free Rewriting Systems. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Evaluation within a set The evaluation of paraphrases within a set of phrases which share a keyword is illustrated in Figure 4. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | In general, several gross corpus statistics favor the ATB, so other factors must contribute to parsing underperformance. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word you1you1 'soberly' and misinterpreted the thus isolated first you1 as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation i:lJI! |
It is probably the first analysis of Arabic parsing of this kind. | 0 | For parsing, the most challenging form of ambiguity occurs at the discourse level. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei. |
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques. | 0 | The graphs satisfy all the well-formedness conditions given in section 2 except (possibly) connectedness. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | immediately by a Romanization into the pinyin transliteration scheme; numerals following each pinyin syllable represent tones. |
Here we present two algorithms. | 0 | (Blum and Mitchell 98) offer a promising formulation of redundancy, also prove some results about how the use of unlabeled examples can help classification, and suggest an objective function when training with unlabeled examples. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | However, we do not explore this possibility in the current work. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Non-empty buckets contain an entry belonging to them or to a preceding bucket where a conflict occurred. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors that complicate syntactic disambiguation. | 0 | In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | [Figure residue: a segmentation lattice for a sentence glossed "JAPANESE OCTOPUS HOW SAY", showing candidate segmentations with path costs such as 6.51, 9.51, and 10.28.] |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Table 3: Dev set frequencies for the two most significant discourse markers in Arabic are skewed toward analysis as a conjunction. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary. |
There is no global pruning. | 0 | Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Figure 2: An ATB sample from the human evaluation. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | So, this was a surprise element due to practical reasons, not malice. |
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | Note that in our model, conditioned on T, there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List. |
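The rows above follow the schema summarized at the top of the table: source_text (string), label (int64 in {0, 1}), and target_text (string). The following is a minimal sketch of how a dataset with this schema could be loaded and inspected using the Hugging Face `datasets` library; the repository path `user/citation-pairs` is a hypothetical placeholder, not the actual dataset name.

```python
# Minimal sketch, assuming a dataset with the schema shown above:
# source_text (string), label (int64 in {0, 1}), target_text (string).
# "user/citation-pairs" is a hypothetical placeholder repository path.
from datasets import load_dataset

ds = load_dataset("user/citation-pairs", split="train")

# Each row pairs a summary/citing sentence (source_text) with a candidate
# context sentence (target_text) and a binary label.
for row in ds.select(range(3)):
    print(row["label"], "|", row["source_text"][:60], "|", row["target_text"][:60])

# Example: keep only positively labeled pairs.
positives = ds.filter(lambda row: row["label"] == 1)
print(f"{len(positives)} positive pairs out of {len(ds)} rows")
```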