source_text stringlengths 27–368 | label int64 0–1 | target_text stringlengths 1–5.38k |
---|---|---|
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks. | 0 | Other errors include NE tagging errors and errors due to a phrase which includes other NEs. |
This assumption, however, is not inherent to type-based tagging models. | 0 | For each language and setting, we report one-to-one (1-1) and many-to-one (m-1) accuracies. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Many researchers have developed coreference resolvers, so we will only discuss the methods that are most closely related to BABAR. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | needs to be in initCaps to be considered for this feature. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | The results clearly indicate increased variation in the ATB relative to the WSJ, but care should be taken in assessing the magnitude of the difference. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | Such analyzers propose multiple segmentation possibilities and their corresponding analyses for a token in isolation but have no means to determine the most likely ones. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | Hence, s < k. But by addition of the votes on the two parses, s > 2N-11> k, a contradiction. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | 971,746 sentences of New York Times text were parsed using the parser of (Collins 96). Word sequences that met the following criteria were then extracted as named entity examples: whose head is a singular noun (tagged NN). |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | These results are promising and there are several avenues for improving on these results. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | In this shared task, we were also confronted with this problem, and since we had no funding to pay for human judgements, we asked participants in the evaluation to share the burden. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | 4 70.4 46. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | For each pair we also record the context, i.e. the phrase between the two NEs (Step1). |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | Table 2 shows single-threaded results, mostly for comparison to IRSTLM, and Table 3 shows multi-threaded results. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | For the remaining arcs, if the segment is in fact a known lexeme it is tagged as usual, but for the OOV arcs which are valid Hebrew entries lacking tag assignments, we assign all possible tags and then simulate a grammatical constraint. |
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders. | 0 | The code is opensource, has minimal dependencies, and offers both C++ and Java interfaces for integration. |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | We also see that the increase in the size of the label sets for Head and Head+Path is far below the theoretical upper bounds given in Table 1. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphor's sentence and two preceding sentences, and for definite NPs it is the anaphor's sentence and eight preceding sentences. |
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts, emphasis shifted towards depth. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | We are currently exploring other methods that employ similar ideas and their formal properties. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | 1 61.7 37. |
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold. | 0 | In IE, creating the patterns which express the requested scenario, e.g. "management succession" or "corporate merger and acquisition", is regarded as the hardest task. |
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models. | 0 | 52 15. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | (5) and ht into Equ. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | Of course, this weighting makes the PCFG an improper distribution. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | The iw all map variant uses a non-0 y weight on a uniform prior in p,,(s t), and outperforms a version with y = 0 (iw all) and the “flattened” variant described in section 3.2. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | This approach needs a phrase as an initial seed and thus the possible relationships to be extracted are naturally limited. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Another way to view the judgements is that they are not so much quality judgements of machine translation systems per se as rankings of machine translation systems. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | Most of these groups follow a phrase-based statistical approach to machine translation. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result. |
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | This may be the sign of a maturing research environment. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | As noted, this sentence consists of four words, namely 日文 ri4wen2 'Japanese,' 章鱼 zhang1yu2 'octopus,' 怎么 zen3me0 'how,' and 说 shuo1 'say.' |
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | The graphs satisfy all the well-formedness conditions given in section 2 except (possibly) connectedness. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Following Sproat and Shih (1990), performance for Chinese segmentation systems is generally reported in terms of the dual measures of precision and recall. It is fairly standard to report precision and recall scores in the mid to high 90% range. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | This is not completely surprising, since all systems use very similar technology. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | The problem is to store these two values for a large and sparse set of n-grams in a way that makes queries efficient. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | We perform five runs with different random initializations of the sampling state. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The first value reports resident size after loading; the second is the gap between post-loading resident memory and peak virtual memory. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The features are used to represent each example for the learning algorithm. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | One implementation issue deserves some elaboration. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | 2.1 Overview. |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information). |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | The ability to redistribute belief values across sets rather than individual hypotheses is key. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | A configuration of M consists of a state of the finite control, the nonblank contents of the input tape and k work tapes, and the position of each head. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | constitute names, since we have only their segmentation, not the actual classification of the segmented words. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 4 To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and using information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | The learned information was recycled back into the resolver to improve its performance. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | So, there is a limitation that IE can only be performed for a predefined task, like "corporate mergers" or "management succession". |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | Pseudo-labels are formed by taking seed labels on the labeled examples, and the output of the fixed classifier on the unlabeled examples. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | First, we aim to explicitly characterize examples from OUT as belonging to general language or not. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | The “No LP” model does not outperform direct projection for German and Greek, but performs better for six out of eight languages. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | This paper discusses the use of unlabeled examples for the problem of named entity classification. |
A beam search concept is applied as in speech recognition. | 0 | The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | (Yarowsky 95) describes an algorithm for word-sense disambiguation that exploits redundancy in contextual features, and gives impressive performance. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Nonstochastic lexical-knowledge-based approaches have been much more numerous. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | For example, kidnapping victims should be extracted from the subject of the verb "kidnapped" when it occurs in the passive voice (the shorthand representation of this pattern would be "<subject> were kidnapped"). |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 1-1 metric. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which bear on syntactic disambiguation. | 0 | 58 95. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | These enable much larger models in memory, compensating for lost accuracy. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | In the remainder of the paper, we outline how a class of Linear Context-Free Rewriting Systems (LCFRS's) may be defined and sketch how semilinearity and polynomial recognition of these systems follows. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Thus we have some confidence that our own performance is at least as good as that of Chang et al. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | Unfortunately, the best completely unsupervised English POS tagger (that does not make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best. |
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods. | 0 | Other errors include NE tagging errors and errors due to a phrase which includes other NEs. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | The second model (+PRIOR) utilizes the independent prior over type-level tag assignments P (T |Ï). |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant. |
This assumption, however, is not inherent to type-based tagging models. | 0 | This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The alignment model uses two kinds of parameters: alignment probabilities p(aj | aj-1; I; J), where the probability of alignment aj for position j depends on the previous alignment position aj-1 (Ney et al., 2000), and lexicon probabilities p(fj | e_aj). |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | The domain is general politics, economics and science. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | For example, if CFLex determines that the log- likelihood statistic for the co-occurrence of a particular noun and caseframe corresponds to the 90% confidence level, then CFLex returns .90 as its belief that the anaphor and candidate are coreferent. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | computing the precision of the other's judgments relative to this standard. |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | This actually happens quite frequently (more below), so that the rankings are broad estimates. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | We then extract a set of possible tags tx(y) by eliminating labels whose probability is below a threshold value τ: We describe how we choose τ in §6.4. |
This paper presents methods to query N-gram language models, minimizing time and space costs. | 0 | Part of the gap between resident and virtual memory is due to the time at which data was collected. |
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure. | 0 | 3.4 Salience-based text generation. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | Our initial experimentation with the evaluation tool showed that this is often too overwhelming. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | The second main result is that the pseudo-projective approach to parsing (using special arc labels to guide an inverse transformation) gives a further improvement of about one percentage point on attachment score. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | First, we will describe their method and compare it with our method. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Assume that the two classifiers are "rote learners": that is, 1.1 and 12 are defined through look-up tables that list a label for each member of X1 or X2. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | This measure has the advantage of being completely automatic. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Thus, provided at least this amount of IN data is available—as it is in our setting—adapting these weights is straightforward. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | There are thus some very good reasons why segmentation into words is an important task. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | This is a unique object for which we are able to define a proper probability model. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | With the additional assumptions, inspired by Rounds (1985), we can show that members of this class can be recognized in polynomial time. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 1 61.7 37. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | The new algorithm, which we call CoBoost, uses labeled and unlabeled data and builds two classifiers in parallel. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | We also mark all tags that dominate a word with the feminine ending taa marbuuTa (markFeminine). |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW). |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | For the automatic scoring method BLEU, we can distinguish three quarters of the systems. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates. |
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers. | 0 | The increase is generally higher for PDT than for DDT, which indicates a greater diversity in non-projective constructions. |