source_text (string, lengths 27–368) | label (int64: 0 or 1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | The tree in Figure 9a can be denoted by t1 i223 where tree substitution is used instead of concatenation. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution, with information extraction patterns identifying contextual clues for determining compatibility between NPs. | 0 | For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event. |
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM. | 0 | We resolve this problem by inserting an entry with probability set to an otherwise-invalid value (−∞). |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | The entries in this table can be compared with those of Table 3 to see how the performance of the combining techniques degrades in the presence of an inferior parser. |
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm. | 0 | To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989). |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | 7 www.cis.upenn.edu/~pdtb/ 8 www.eml-research.de/english/Research/NLP/ Downloads had to buy a new car. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Finally, we incorporate the instance-weighting model into a general linear combination, and learn weights and mixing parameters simultaneously. where cλ(s, t) is a modified count for pair (s, t) in OUT, u(s|t) is a prior distribution, and y is a prior weight. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The likelihood of the observed data under the model is where P(yi, xi) is defined as in (9). |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution, with information extraction patterns identifying contextual clues for determining compatibility between NPs. | 0 | Our motivation for using Dempster-Shafer is that it provides a well-principled framework for combining evidence from multiple sources with respect to competing hypotheses. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 2 70.7 52. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Most of these groups follow a phrase-based statistical approach to machine translation. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Lexicon Feature: The string of the token is used as a feature. |
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | We have shown that, at least given independent human judgments, this is not the case, and that therefore such simplistic measures should be mistrusted. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | Step 4. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | This is not an oversight. |
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | This leads to a discrepancy between word and constituent boundaries, which breaks the assumptions underlying current state-of-the-art statistical parsers. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | 3.2 Reordering with IBM Style. |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | We also removed the context-type feature type when using the CoBoost approach. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | For the moment we will assume that there are only two possible labels: each yi is in {−1, +1}. |
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | The learned patterns are then normalized and applied to the corpus. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | This suggests that different types of anaphora may warrant different treatment: definite NP resolution may depend more on lexical semantics, while pronoun resolution may depend more on contextual semantics. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | While the proportion of sentences containing non-projective dependencies is often 15–25%, the total proportion of non-projective arcs is normally only 1–2%. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two. The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common. |
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data. | 0 | In the maximum entropy framework, there is no such constraint. |
The texts were annotated with the RSTtool. | 0 | Nonetheless, the prospect of a network of annotated discourse resources seems particularly promising if not only a single annotation layer is used but a whole variety of them, so that a systematic search for correlations between them becomes possible, which in turn can lead to more explanatory models of discourse structure. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | This allows an unbounded amount of information about two separate paths (e.g. an encoding of their length) to be combined and used to influence the later derivation. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The two new terms force the two classifiers to agree, as much as possible, on the unlabeled examples. |
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages. | 0 | Based on these high-confidence alignments we can extract tuples of the form [u H v], where u is a foreign trigram type, whose middle word aligns to an English word type v. Our bilingual similarity function then sets the edge weights in proportion to these tuple counts. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | Tsarfaty and Sima’an (2007) have reported state-of-the-art results on Hebrew unlexicalized parsing (74.41%) albeit assuming oracle morphological segmentation. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | The accuracy of the sets in representing paraphrase ranged from 73% to 99%, depending on the NE categories and set sizes; the accuracy of the links for two evaluated domains was 73% and 86%. |
The use of global features has shown excellent results in the performance on MUC-6 and MUC-7 test data. | 0 | Mikheev et al. |
They focused on phrases which connect two Named Entities, and proceed in two stages. | 0 | Also there are cases where one of the two NEs belongs to a phrase outside of the relation. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | 87 Table 7: Test set results. |
The AdaBoost algorithm was developed for supervised learning. | 0 | This PP modifies another NP, whose head is a singular noun. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | IRSTLM’s quantized variant is the inspiration for our quantized variant. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | Finally, we provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing (§6). |
They have made use of local and global features to deal with the instances of the same token in a document. | 0 | It differs from previous machine learning-based NERs in that it uses information from the whole document to classify each word, with just one classifier. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | English was again paired with German, French, and Spanish. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | The possible analyses of a surface token pose constraints on the analyses of specific segments. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed; see Pereira, Riley, and Sproat (1994) for an explanation of the notion of summing WFSTs. Conceptual Improvements over Chang et al.'s Model. |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | For example, we can easily imagine that the number of paraphrases for “A buys B” is enormous and it is not possible to create a comprehensive inventory by hand. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | Method / t0 / CPU time [sec] / #search errors / mWER [%]: QmS 0.0 0.07 108 42.6; 1.0 0.13 85 37.8; 2.5 0.35 44 36.6; 5.0 1.92 4 34.6; 10.0 10.6 0 34.5. IbmS 0.0 0.14 108 43.4; 1.0 0.3 84 39.5; 2.5 0.8 45 39.1; 5.0 4.99 7 38.3; 10.0 28.52 0 38.2. Table 6 shows example translations obtained by the three different approaches. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | This distributional sparsity of syntactic tags is not unique to English. The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | 02 99. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications. | 0 | In Semitic languages the situation is very different. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | Recent results (e.g., (Yarowsky 95; Brill 95; Blum and Mitchell 98)) have suggested that unlabeled data can be used quite profitably in reducing the need for supervision. |
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable. | 0 | We use w erations of sampling (see Figure 2 for a depiction). |
The corpus was annotated with different linguistic information. | 0 | Thus we are interested not in extraction, but actual generation from representations that may be developed to different degrees of granularity. |
This assumption, however, is not inherent to type-based tagging models. | 0 | 5 60.6 Table 3: Multilingual Results: We report token-level one-to-one and many-to-one accuracy on a variety of languages under several experimental settings (Section 5). |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | For previously unseen hanzi in given names, Chang et al. assign a uniform small cost; but we know that some unseen hanzi are merely accidentally missing, whereas others are missing for a reason - for example, because they have a bad connotation. |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which bear on syntactic disambiguation. | 0 | All three curves remain steep at the maximum training set size of 18818 trees. |
This assumption, however, is not inherent to type-based tagging models. | 1 | This assumption, however, is not inherent to type-based tagging models. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Linguistic intuitions like those in the previous section inform language-specific annotation choices. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | Once again we present both a non-parametric and a parametric technique for this task. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | Again, this deserves further investigation. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | Tables 4 and 5 show BABAR's performance when just one contextual role knowledge source is used at a time. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | MUC7 has also seen hybrids of statistical NERs and hand-coded systems (Mikheev et al., 1998; Borthwick, 1999), notably Mikheev's system, which achieved the best performance of 93.39% on the official NE test data. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | This intuition is born out by the experimental results. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | Trying to integrate constituent ordering and choice of referring expressions, (Chiarcos 2003) developed a numerical model of salience propagation that captures various factors of author's intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions. Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The performance of our system on those sentences appeared rather better than theirs. |
Human judges also pointed out difficulties with the evaluation of long sentences. | 0 | The domain is general politics, economics and science. |
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be. | 0 | Figure 1 shows sample sentences from these domains, which are widely divergent. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation |
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm. | 0 | The negative logarithm of t0 is reported. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | Input: Ja , wunderbar . Können wir machen . MonS: Yes, wonderful. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | The transitive closure of the dictionary in (a) is composed with Id(input) (b) to form the WFST (c). |
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98). | 0 | Put another way, the minimum of Equ. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Each of the constituents must have received at least ⌈(k+1)/2⌉ votes from the k parsers, so a ≥ ⌈(k+1)/2⌉ and b ≥ ⌈(k+1)/2⌉. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Nonetheless, the results of the comparison with human judges demonstrate that there is mileage being gained by incorporating models of these types of words. |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | For each feature type f and tag t, a multinomial Ïtf is drawn from a symmetric Dirichlet distribution with concentration parameter β. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | (1992). |
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors which bear on syntactic disambiguation. | 0 | 21 84. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | Second, comparisons of different methods are not meaningful unless one can evaluate them on the same corpus. |
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints. | 0 | The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task. |
They proposed an unsupervised method to discover paraphrases from a large untagged corpus. | 0 | Rather we believe several methods have to be developed using different heuristics to discover a wider variety of paraphrases. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | Its correct antecedent is “a revolver”, which is extracted by the caseframe “killed with <NP>”. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | For example, ... fraud related to work on a federally funded sewage plant in Georgia In this case, Georgia is extracted: the NP containing it is a complement to the preposition in; the PP headed by in modifies the NP a federally funded sewage plant, whose head is the singular noun plant. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990). |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | (c) Coordination ambiguity is shown in dependency scores by e.g., âSSS R) and âNP NP NP R). |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure. | 0 | For simplicity, we assume that OUT is homogeneous. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | In this paper we have argued that Chinese word segmentation can be modeled effectively using weighted finite-state transducers. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | Our final average POS tagging accuracy of 83.4% compares very favorably to the average accuracy of Berg-Kirkpatrick et al.’s monolingual unsupervised state-of-the-art model (73.0%), and considerably bridges the gap to fully supervised POS tagging performance (96.6%). |
Replacing this with a ranked evaluation seems to be more suitable. | 0 | The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | Now assume we have n pairs (x1,i, x2,i) drawn from X1 × X2, where the first m pairs have labels whereas for i = m+1...n the pairs are unlabeled. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | (Kehler, 1997) also used a Dempster-Shafer model to merge evidence from different sources for template-level coreference. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | To investigate the influence of these factors, we analyze Modern Standard Arabic (henceforth MSA, or simply “Arabic”) because of the unusual opportunity it presents for comparison to English parsing results. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | The complexity of the quasi-monotone search is O(E³ · J · (R² + L·R)). |
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks. | 0 | This result suggests the benefit of using the automatic discovery method. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | We have tested the translation system on the Verbmobil task (Wahlster 1993). |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | Floating point values may be stored in the trie exactly, using 31 bits for non-positive log probability and 32 bits for backoff. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | Because b is a function, no additional hypothesis splitting happens. |
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | We include a list of per-category results for selected phrasal labels, POS tags, and dependencies in Table 8. |
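Each row above follows the three-field schema from the header (source_text, label, target_text), with fields joined by ` | ` and a trailing ` |`. As a minimal sketch of how such a row could be split back into typed fields (the `parse_row` helper below is hypothetical and not part of any published loader for this dataset):

```python
def parse_row(line: str) -> tuple[str, int, str]:
    """Split a 'source | label | target' row into typed fields.

    Splits on the LAST two ' | ' separators, so that a stray pipe
    inside source_text does not break parsing; strips the trailing
    ' |' that each row in the dump carries.
    """
    line = line.rstrip()
    if line.endswith("|"):
        line = line[:-1].rstrip()
    source, label, target = line.rsplit(" | ", 2)
    return source.strip(), int(label), target.strip()


# Example using one of the rows shown above.
row = ("This paper talks about Pseudo-Projective Dependency Parsing. | 0 | "
       "While the proportion of sentences containing non-projective dependencies "
       "is often 15-25%, the total proportion of non-projective arcs is "
       "normally only 1-2%. |")
src, lbl, tgt = parse_row(row)
```

Since label is declared as int64 with values 0 or 1, casting it with `int()` is enough; no further schema validation is sketched here.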