source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
They have made use of local and global features to deal with instances of the same token in a document. | 0 | both MENE and IdentiFinder used more training data than we did (we used only the official MUC 6 and MUC7 training data). |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 3. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | It was also proposed to allow annotators to skip sentences that they are unable to judge. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | We tokenize MWUs and their POS tags; this reduces the tag set size to 12. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Given parameter estimates, the label for a test example x is defined as We should note that the model in equation 9 is deficient, in that it assigns greater than zero probability to some feature combinations that are impossible. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | Mikheev et al. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Purely statistical approaches have not been very popular, and so far as we are aware earlier work by Sproat and Shih (1990) is the only published instance of such an approach. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paper. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Domain adaptation is a common concern when optimizing empirical NLP applications. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Equation 2 is an estimate of the conditional probability of the label given the feature, P(y|x). |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation. | 0 | Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Sorted arrays store key-value pairs in an array sorted by key, incurring no space overhead. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Based on revision 4041, we modified Moses to print process statistics before terminating. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | Automatic scores are computed on a larger test set than manual scores (3064 sentences vs. 300–400 sentences). For the collected manual judgements, we do not necessarily have the same sentence judged for both systems (judges evaluate 5 systems out of the 8–10 participating systems). |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | However, their system is a hybrid of hand-coded rules and machine learning methods. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Let us say, if we find one system doing better on 20 of the blocks, and worse on 80 of the blocks, is it significantly worse? |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | The basic strategy is, for a given pair of entity types, to start with some examples, like several famous book title and author pairs; and find expressions which contain those names; then using the found expressions, find more author and book title pairs. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | com t 700 Mountain Avenue, 2d451, Murray Hill, NJ 07974, USA. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | (Brin 98) describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | There is a (costless) transition between the NC node and f,. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This larger corpus was kindly provided to us by United Informatics Inc., R.O.C. a set of initial estimates of the word frequencies.9 In this re-estimation procedure only the entries in the base dictionary were used: in other words, derived words not in the base dictionary and personal and foreign names were not used. |
This corpus has several advantages: it is annotated at different levels. | 0 | Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together. |
This paper conducted research in the area of automatic paraphrase discovery. | 1 | Automatic Paraphrase Discovery based on Context and Keywords between NE Pairs |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | Future work should also extend the approach to build a complete named entity extractor - a method that pulls proper names from text and then classifies them. |
The AdaBoost algorithm was developed for supervised learning. | 0 | 2 We now introduce a new algorithm for learning from unlabeled examples, which we will call DLCoTrain (DL stands for decision list, the term Cotrain is taken from (Blum and Mitchell 98)). |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | In this particular case, all English vertices are labeled as nouns by the supervised tagger. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but require elaborate machinery for training. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Word Head Of Complement POS 1 '01 inna "Indeed, truly" VP Noun VBP 2 '01 anna "That" SBAR Noun IN 3 01 in "If" SBAR Verb IN 4 01 an "to" SBAR Verb IN Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form 0 an. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | For a given partial hypothesis (C; j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored. |
The corpus was annotated with different linguistic information. | 0 | Within the RST "user community" there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational). |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | Table 2 shows our complete set of results. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | We report results for the best and median hyperparameter settings obtained in this way. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The method shares some characteristics of the decision list algorithm presented in this paper. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | In words, the judgements are normalized, so that the average normalized judgement per judge is 3. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuters text collection that had a subject code corresponding to natural disasters. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | From the point of view of computational implementation this can be problematic, since the inclusion of non-projective structures makes the parsing problem more complex and therefore compromises efficiency and in practice also accuracy and robustness. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Also, in Information Extraction (IE), in which the system tries to extract elements of some events (e.g. date and company names of a corporate merger event), several event instances from different news articles have to be aligned even if these are expressed differently. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The significant drop in number of pupils will begin in the fall of 2003. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary.' |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Each feature group can be made up of many binary features. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | It is important to bear in mind, though, that this is not an inherent limitation of the model. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Section 2.1 describes how BABAR generates training examples to use in the learning process. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | The manual scores are averages over the raw unnormalized scores. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | For the seen word ir, 'generals,' there is an ε:NC transduction from to the node preceding ir,; this arc has cost cost( f,) - cost(unseen(f,)), so that the cost of the whole path is the desired cost( f,). |
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | When γ' is adjoined at η in the tree γ we obtain a tree γ''. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | It is likely that the more complex cases, where path information could make a difference, are beyond the reach of the parser in most cases. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing. | 0 | The preterminal morphological analyses are mapped to the shortened "Bies" tags provided with the treebank. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | In contrast to results reported in Johnson (2007), we found that the performance of our Gibbs sampler on the basic 1TW model stabilized very quickly after about 10 full iterations. P(Ti|T−i, β) ∝ ∏(f,v)∈Wi P(v|Ti, f, W−i, T−i, β). All of the probabilities on the right-hand side are Dirichlet distributions, which can be computed analytically given counts. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | 17 They also provide a set of title-driven rules to identify names when they occur before titles such as $t. 1: xian1sheng1 'Mr.' or i:l:itr!J tai2bei3 shi4zhang3 'Taipei Mayor.' |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Thus at each iteration the method induces at most n x k rules, where k is the number of possible labels (k = 3 in the experiments in this paper). step 3. |
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures. | 0 | In the third and final scheme, denoted Path, we keep the extra information. Note that this is a baseline for the parsing experiment only (Experiment 2). |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | A modified language model probability pδ(e|e′, e′′) is defined as follows: pδ(e|e′, e′′) = 1.0 if δ = 0, and pδ(e|e′, e′′) = p(e|e′, e′′) if δ = 1. We associate a distribution p(δ) with the two cases δ = 0 and δ = 1 and set p(δ = 1) = 0.7. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Previous work has attempted to incorporate such constraints into token-level models via heavy-handed modifications to inference procedure and objective function (e.g., posterior regularization and ILP decoding) (Graça et al., 2009; Ravi and Knight, 2009). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | If is not initCaps, then (not-initCaps, ) is set to 1. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | The results of the manual and automatic evaluation of the participating system translations is detailed in the figures at the end of this paper. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | The restriction can be expressed in terms of the number of uncovered source sentence positions to the left of the rightmost position m in the coverage set. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimize the number of free parameters and used the same hyperparameters for all language pairs. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | The original OUT counts co(s, t) are weighted by a logistic function wλ(s, t): To motivate weighting joint OUT counts as in (6), we begin with the “ideal” objective for setting multinomial phrase probabilities θ = {p(s|t), ∀st}, which is the likelihood with respect to the true IN distribution pi(s, t). |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | Model Hyperparam. English 1-1 m-1 Danish 1-1 m-1 Dutch 1-1 m-1 German 1-1 m-1 Portuguese 1-1 m-1 Spanish 1-1 m-1 Swedish 1-1 m-1 1TW best median 45. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | For queries, we uniformly sampled 10 million hits and 10 million misses. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probability of occurrence of each hanzi in a transliteration (p_TN(hanzi_i)) using the maximum likelihood estimate. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing. | 0 | The samples from each corpus were independently evaluated. |
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories. | 0 | For a description of the application of AdaBoost to various NLP problems see the paper by Abney, Schapire, and Singer in this volume. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | More complex approaches such as the relaxation technique have been applied to this problem (Fan and Tsai 1988). |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 2 56.2 32. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | 5 "Underspecified Rhetorical Markup Language" 6 This confirms the figure given by (Schauer, Hahn. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | Finally, other packages implement language model estimation while we are currently dependent on them to generate an ARPA file. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The hypothesized parse is then the set of constituents that are likely (P > 0.5) to be in the parse according to this model. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | This is the parse that is closest to the centroid of the observed parses under the similarity metric. |
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions. | 0 | In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs,3 previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004). |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic disambiguation. | 0 | To our knowledge, ours is the first analysis of this kind for Arabic parsing. |
Here both parametric and non-parametric models are explored. | 0 | These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | We will also directly compare with a baseline similar to the Matsoukas et al approach in order to measure the benefit from weighting phrase pairs (or ngrams) rather than full sentences. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Other approaches encode sparsity as a soft constraint. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | 3 54.4 33. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | – similar results have been observed across multiple languages. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | BABAR uses unsupervised learning to acquire this knowledge from plain text without the need for annotated training data. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Finally, we provide a realistic eval uation in which segmentation is performed both in a pipeline and jointly with parsing (§6). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | In this section, we extend state to optimize left-to-right queries. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | For δ = 0, no new target word is generated, while an additional source sentence position is covered. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Reiche's colleagues will make sure that the concept is waterproof. |
This corpus has several advantages: it is annotated at different levels. | 0 | Hence we decided to select ten commentaries to form a "core corpus", for which the entire range of annotation levels was realized, so that experiments with multi-level querying could commence. |
This corpus has several advantages: it is annotated at different levels. | 0 | annotation guidelines that tell annotators what to do in case of doubt. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | 3.2 The Dempster-Shafer Decision Model. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | In this paper, Section 2 begins by explaining how contextual role knowledge is represented and learned. |
Their results show that their high performance NER use less training data than other systems. | 0 | We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | About half of the participants of last year’s shared task participated again. |
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models. | 0 | The taggers were trained on datasets labeled with the universal tags. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | An input ABCD can be represented as an FSA as shown in Figure 2(b). |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | The CFLex and CFNet knowledge sources provide positive evidence that a candidate NP and anaphor might be coreferent. |
Here we present two algorithms. | 0 | In our implementation, we make perhaps the simplest choice of weak hypothesis. |
They have made use of local and global features to deal with instances of the same token in a document. | 0 | (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. |
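Below is a minimal sketch (not part of the dataset itself, and not an official loader) of how rows with the schema above, i.e. source_text (string), label (int 0/1), and target_text (string), might be parsed from a pipe-delimited dump and summarised. The parse_row helper and the inline sample rows are hypothetical illustrations.

```python
# Minimal sketch: parse pipe-delimited rows of the form
#   source_text | label | target_text
# into typed records and count the label distribution.
from collections import Counter

def parse_row(line: str) -> dict:
    """Split one 'source | label | target' row; maxsplit=2 keeps any later '|' inside target_text."""
    source_text, label, target_text = (field.strip() for field in line.split("|", 2))
    return {"source_text": source_text, "label": int(label), "target_text": target_text}

if __name__ == "__main__":
    sample = [
        "This paper conducted research in the area of automatic paraphrase discovery. | 1 | Automatic Paraphrase Discovery based on Context and Keywords between NE Pairs",
        "This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | For queries, we uniformly sampled 10 million hits and 10 million misses.",
    ]
    records = [parse_row(line) for line in sample]
    print(Counter(record["label"] for record in records))  # Counter({1: 1, 0: 1})
```

A real loader would read rows from whatever file format the dataset is actually distributed in; the string samples here simply mirror two rows from the preview above.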