| source_text (string, length 27–368) | label (int64, 0 or 1) | target_text (string, length 1–5.38k) |
---|---|---|
The AdaBoost algorithm was developed for supervised learning. | 0 | 971,746 sentences of New York Times text were parsed using the parser of (Collins 96). Word sequences that met the following criteria were then extracted as named entity examples: whose head is a singular noun (tagged NN). |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | This PCFG is incorporated into the Stanford Parser, a factored model that chooses a 1-best parse from the product of constituency and dependency parses. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | A token that is allCaps will also be initCaps. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | A related point is that mutual information is helpful in augmenting existing electronic dictionaries, (cf. |
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources. | 0 | Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word. |
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | 25 16. |
The texts were annotated with the RSTtool. | 0 | Our annotators pointed out that very often they made almost random decisions as to what relation to choose, and where to locate the boundary of a span. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | The inventions disclosed herein are the subject of a patent application owned by the University of Utah and licensed on an exclusive basis to Attensity Corporation. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Given a document to process, BABAR uses four modules to perform coreference resolution. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We define a symmetric similarity function K(ui, uj) over two foreign language vertices ui, uj ∈ Vf based on the co-occurrence statistics of the nine feature concepts given in Table 1. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | In (b) "they" refers to the kidnapping victims, but in (c) "they" refers to the armed men. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | A list of words occurring more than 10 times in the training data is also collected (commonWords). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | We have shown that the maximum entropy framework is able to use global information directly. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | For robustness reasons, the parser may output a set of dependency trees instead of a single tree. most dependent of the next input token, dependency type features are limited to tokens on the stack. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We then use linguistic and annotation insights to develop a manually annotated grammar for Arabic (§4). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The minimal dictionary encoding this information is represented by the WFST in Figure 2(a). |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | The goal of machine translation is the translation of a text given in some source language into a target language. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 1 53.8 47. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (7) is at 0 when: 1) ∀i : sign(g1(xi)) = sign(g2(xi)); 2) \|g3(xi)\| → ∞; and 3) sign(g1(xi)) = yi for i = 1, ..., m. In fact, Zco provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples. |
The use of global features has shown excellent results on MUC-6 and MUC-7 test data. | 0 | Statistical NERs usually find the sequence of tags that maximizes the probability P(t \| w), where w is the sequence of words in a sentence, and t is the sequence of named-entity tags assigned to the words in w. Attempts have been made to use global information (e.g., the same named entity occurring in different sentences of the same document), but they usually consist of incorporating an additional classifier, which tries to correct the errors in the output of a first NER (Mikheev et al., 1998; Borthwick, 1999). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | For each feature type f and tag t, a multinomial ψtf is drawn from a symmetric Dirichlet distribution with concentration parameter β. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Conditioned on T , features of word types W are drawn. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Text generation, or at least the two phases of text planning and sentence planning, is a process driven partly by well-motivated choices (e.g., use this lexeme X rather than that more colloquial near-synonym Y ) and partly by con tation like that of PCC can be exploited to look for correlations in particular between syntactic structure, choice of referring expressions, and sentence-internal information structure. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Therefore, performance is more closely tied to the underlying data structure than to the cache. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 0 57.3 51. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Manual and Automatic Evaluation of Machine Translation between European Languages |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | com t 700 Mountain Avenue, 2d451, Murray Hill, NJ 07974, USA. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 2 61.7 64. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Extract NE instance pairs with contexts First, we extract NE pair instances with their context from the corpus. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 3 58.3 40. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | For pronouns, however, all of the knowledge sources increased recall, often substantially, and with little if any decrease in precision. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Other packages walk their respective data structures once to find wnf and again to find {b(wn−1 i )}f−1 i=1if necessary. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | We thank members of the MIT NLP group for their suggestions and comments. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | gaolbu4-gaolxing4 (hap-not-happy) 'happy?' |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Moreover, the Stanford parser achieves the most exact Leaf Ancestor matches and tagging accuracy that is only 0.1% below the Bikel model, which uses pre-tagged input. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | However, the accuracy is considerably higher than previously reported results for robust non-projective parsing of Czech, with a best performance of 73% UAS (Holan, 2004). |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | A promising direction for future work is to explicitly model a distribution over tags for each word type. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The Bikel GoldPOS configuration only supplies the gold POS tags; it does not force the parser to use them. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | This section measures performance on shared tasks in order of increasing complexity: sparse lookups, evaluating perplexity of a large file, and translation with Moses. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | 1,000 of these were picked at random, and labeled by hand to produce a test set. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The path complexity of the tree set generated by an MCTAG is not necessarily context-free. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Cohen and Smith (2007) later on based a system for joint inference on factored, independent, morphological and syntactic components of which scores are combined to cater for the joint inference task. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly. |
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language. | 0 | In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices. the Italian vertices are connected to an automatically labeled English vertex. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | Among these 32 sets, we found the following pairs of sets which have two or more links. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Hence we decided to select ten commentaries to form a "core corpus", for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | To improve agreement during the revision process, a dual-blind evaluation was performed in which 10% of the data was annotated by independent teams. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | 6 Our knowledge sources return some sort of probability estimate, although in some cases this estimate is not especially well-principled (e.g., the Recency KS). |
There are clustering approaches that assign a single POS tag to each word type. | 0 | The tokens w are generated by token-level tags t from an HMM parameterized by the lexicon structure. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | As we shall see, most of the linked sets are paraphrases. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | 2. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | So, this was a surprise element due to practical reasons, not malice. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | We assume that the goal in dependency parsing is to construct a labeled dependency graph of the kind depicted in Figure 1. |
There is no global pruning. | 0 | 13. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | Other kinds of productive word classes, such as company names, abbreviations (termed fijsuolxie3 in Mandarin), and place names can easily be 20 Note that 7 in E 7 is normally pronounced as leO, but as part of a resultative it is liao3.. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | For each pair we also record the context, i.e. the phrase between the two NEs (Step1). |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | This work has been supported as part of the Verbmobil project (contract number 01 IV 601 A) by the German Federal Ministry of Education, Science, Research and Technology and as part of the Eutrans project (ESPRIT project number 30268) by the European Community. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Subsets of partial hypotheses with coverage sets C of increasing cardinality c are processed. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | If a token w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | For example, the phrase "'s New York-based trust unit," is not a paraphrase of the other phrases in the "unit" set. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | That is, given a choice between segmenting a sequence abc into abc and ab, c, the former will always be picked so long as its cost does not exceed the summed costs of ab and c: while; it is possible for abc to be so costly as to preclude the larger grouping, this will certainly not usually be the case. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | For example, the Wang, Li, and Chang system fails on the sequence 1:f:p:]nian2 nei4 sa3 in (k) since 1F nian2 is a possible, but rare, family name, which also happens to be written the same as the very common word meaning 'year.' |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 1 | We present a coreference resolver called BABAR that uses contextual role knowledge to evaluate possible antecedents for an anaphor. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | (If the TF/IDF score of that word is below a threshold, the phrase is discarded.) |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | Model components cascade, so the row corresponding to +FEATS also includes the PRIOR component (see Section 3). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | 99 94. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | We use the log-linear tagger of Toutanova et al. |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | In this case, we have no finite-state restrictions for the search space. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The possible analyses of a surface token pose constraints on the analyses of specific segments. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Can we do . QmS: Yes, wonderful. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | The segmenter handles the grouping of hanzi into words and outputs word pronunciations, with default pronunciations for hanzi it cannot group; we focus here primarily on the system's ability to segment text appropriately (rather than on its pronunciation abilities). |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | A detailed description of the search procedure used is given in this patent. |
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily. | 0 | We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes. |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | We see from these results that the behavior of the parametric techniques are robust in the presence of a poor parser. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | Finally, the assignments of PoS tags to OOV segments is subject to language specific constraints relative to the token it was originated from. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | This withdrawal by the treasury secretary is understandable, though. |
The texts were annotated with the RSTtool. | 0 | Annotation of syntactic structure for the core corpus has just begun. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | We refer to (T, W) as the lexicon of a language and ψ for the parameters for their generation; ψ depends on a single hyperparameter β. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | First, we will describe their method and compare it with our method. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | nan2gual 'pumpkin.' |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | If they knew that the first four words in a hypergraph node would never extend to the left and form a 5-gram, then three or even fewer words could be kept in the backward state. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We extend the Stanford parser to accept pre-generated lattices, where each word is represented as a finite state automaton. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | We would like to thank Ryan McDonald for numerous discussions on this topic. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Note that on some examples (around 2% of the test set) CoBoost abstained altogether; in these cases we labeled the test example with the baseline, organization, label. |
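Each row above follows the pipe-delimited layout declared in the header: a source_text string, an int64 label (0 or 1), and a target_text string. As a minimal sketch of that row schema (the dataset's actual serialization format is unknown; the `Row` type and `parse_row` helper here are hypothetical, and target_text is assumed not to contain unescaped pipes):

```python
from typing import NamedTuple

class Row(NamedTuple):
    source_text: str
    label: int       # int64 column, observed values 0 and 1
    target_text: str

def parse_row(line: str) -> Row:
    # Drop the trailing "|" each row ends with, then split on the
    # remaining cell delimiters. The last two cells are label and
    # target_text; everything before them is source_text.
    cells = [c.strip() for c in line.rstrip().rstrip("|").split("|")]
    source = "|".join(cells[:-2]).strip()
    return Row(source, int(cells[-2]), cells[-1])

# Example against the first data row of the table:
row = parse_row("The AdaBoost algorithm was developed "
                "for supervised learning. | 0 | 971,746 sentences "
                "of New York Times text were parsed. |")
```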