source_text (string, 27–368 chars) | label (int64, 0–1) | target_text (string, 1–5.38k chars) |
---|---|---|
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech. | 0 | The first experiment uses data from two dependency treebanks. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Even with vocalization, there are linguistic categories that are difficult to identify without semantic clues. |
Here we present two algorithms. | 0 | The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | The particular classifier used depends upon the noun. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Judges where excluded from assessing the quality of MT systems that were submitted by their institution. |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | On the MUC6 data, Bikel et al. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Our suspicion is that BLEU is very sensitive to jargon, to selecting exactly the right words, and not synonyms that human judges may appreciate as equally good. |
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design. | 0 | We weight edges using a unigram language model estimated with Good- Turing smoothing. |
The second algorithm builds on a boosting algorithm called AdaBoost. | 0 | In a fully supervised setting, the task is to learn a function f such that for all i = 1...m, f (xi,i, 12,i) = yz. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning. |
Their results show that their high performance NER use less training data than other systems. | 0 | Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | The string pumping lemma for CFG's (uvwxy-theorem) can be seen as a corollary of this lemma. from this pumping lemma: a single path can be pumped independently. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d). |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | However, reads in the TRIE data structure are more expensive due to bit-level packing, so we found that it is faster to use interpolation search the entire time. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Some approaches depend upon some form of con straint satisfaction based on syntactic or semantic features (e.g., Yeh and Lee [1991], which uses a unification-based approach). |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | There are simply not enough votes remaining to allow any of the crossing structures to enter the hypothesized constituent set. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | 37 79. |
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse. | 0 | If "gun" and "revolver" refer to the same object, then it should also be acceptable to say that Fred was "killed with a gun" and that the burglar "fired a revolver". |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): where refers to the outcome, the history (or context), and is a normalization function. |
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. | 0 | The first modification — cautiousness — is a relatively minor change. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | (1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced. |
Here both parametric and non-parametric models are explored. | 0 | The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers. |
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank. | 0 | There is a guarantee of no crossing brackets but there is no guarantee that a constituent in the tree has the same children as it had in any of the three original parses. |
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration. | 0 | The highestorder N-gram array omits backoff and the index, since these are not applicable. |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Table 4 shows how much the Bayes switching technique uses each of the parsers on the test set. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | yu2 'fish.' |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | Precision. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Despite these limitations, a purely finite-state approach to Chinese word segmentation enjoys a number of strong advantages. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | By design, they readily capture regularities at the token-level. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | logical rules, and personal names; the transitive closure of the resulting machine is then computed. |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | Members of LCFRS whose operations have this property can be translated into the ILFP notation (Rounds, 1985). |
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | However, in order to capture the properties of various grammatical systems under consideration, our notation is more restrictive that ILFP, which was designed as a general logical notation to characterize the complete class of languages that are recognizable in polynomial time. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | We use v1.0 mainly because previous studies on joint inference reported results w.r.t. v1.0 only.5 We expect that using the same setup on v2.0 will allow a crosstreebank comparison.6 We used the first 500 sentences as our dev set and the rest 4500 for training and report our main results on this split. |
The texts were annotated with the RSTtool. | 0 | The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Ex: Mr. Bush disclosed the policy by reading it... |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | ({1, ..., m} \ {l1, l2}, l) →4 ({1, ..., m − 1} \ {l1, l2, l3}, l′) |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | It was motivated by the observation that the (Yarowsky 95) algorithm added a very large number of rules in the first few iterations. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Since all long sentence translation are somewhat muddled, even a contrastive evaluation between systems was difficult. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | When the maSdar lacks a determiner, the constituent as a whole resembles the ubiquitous annexation construct � ?f iDafa. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | The evaluation framework for the shared task is similar to the one used in last year’s shared task. |
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data. | 0 | 3.1 Maximum Entropy. |
Finally, several coreference systems have successfully incorporated anaphoricity determination modules. | 0 | The terrorism examples reflect fairly obvious relationships: people who are murdered are killed; agents that "report" things also "add" and "state" things; crimes that are "perpetrated" are often later "condemned". |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | As a (crude) approximation, we normalize the extraction patterns with respect to active and passive voice and label those extractions as agents or patients. |
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers. | 0 | 10 Chinese speakers may object to this form, since the suffix f, menD (PL) is usually restricted to. |
Their results show that their high performance NER use less training data than other systems. | 0 | Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC). |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962). |
Here we present two algorithms. | 0 | Formally, let el (62) be the number of classification errors of the first (second) learner on the training data, and let Eco be the number of unlabeled examples on which the two classifiers disagree. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | (2006). |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | To evaluate proper-name identification, we randomly se lected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a sin gle unambiguous answer, unlike the more general question of which segmentation is correct. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | English parsing evaluations usually report results on sentences up to length 40. |
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones. | 0 | This essentially means that a better grammar tunes the joint model for optimized syntactic disambiguation at least in as much as their hyper parameters do. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003). |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | 4. |
Here we present two algorithms. | 0 | But we will show that the use of unlabeled data can drastically reduce the need for supervision. |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | GL is then used to parse the string tn1 ... tnk_1, where tni is a terminal corresponding to the lattice span between node ni and ni+1. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | The distinctions in the ATB are linguistically justified, but complicate parsing. |
This paper talks about Unsupervised Models for Named Entity Classification. | 0 | We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | The ATB annotation distinguishes between verbal and nominal readings of maSdar process nominals. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | BABAR employs information extraction techniques to represent and learn role relationships. |
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing. | 0 | We are unaware of prior results for the Stanford parser. |
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation. | 0 | 85 82. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | To facilitate the comparison of our results to those reported by (Cohen and Smith, 2007) we use their data set in which 177 empty and “malformed”7 were removed. |
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed. | 0 | This causes a problem for reverse trie implementations, including SRILM itself, because it leaves n+1-grams without an n-gram node pointing to them. |
Here both parametric and non-parametric models are explored. | 0 | One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it. |
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound. | 0 | The Grammar Our parser looks for the most likely tree spanning a single path through the lattice of which the yield is a sequence of lexemes. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Automatic paraphrase discovery is an important but challenging task. |
The texts were annotated with the RSTtool. | 0 | The implementation is In a similar effort, (Götze 2003) developed a proposal for the theory-neutral annotation of information structure (IS) – a notoriously difficult area with plenty of conflicting and overlapping terminological conceptions. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Future work along these lines will incorporate other layers of annotation, in particular the syntax information. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | We obtained positive results using a very simple phrase-based system in two different adaptation settings: using English/French Europarl to improve a performance on a small, specialized medical domain; and using non-news portions of the NIST09 training material to improve performance on the news-related corpora. |
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not. | 0 | With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one. |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | An additional case of super-segmental morphology is the case of Pronominal Clitics. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Certainly these linguistic factors increase the difficulty of syntactic disambiguation. |
These clusters are computed using an SVD variant without relying on transitional structure. | 0 | 37. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Ltd., then organization will be more probable. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Using structural information As was explained in the results section, we extracted examples like "Smith estimates Lotus", from a sentence like "Mr. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | Step 2. |
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance. | 0 | newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material. |
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data. | 0 | While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given. |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | The second modification is more important, and is discussed in the next section. |
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | In the numerator, however, the counts of ni1s are quite irregular, in cluding several zeros (e.g., RAT, none of whose members were seen). |
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process. | 0 | 2 56.2 32. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | In the ATB, :: b astaâadah is tagged 48 times as a noun and 9 times as verbal noun. |
The resulting model is compact, efficiently learnable and linguistically expressive. | 0 | 4 7 . 3 8 . 9 2 8 . 8 2 0 . 7 3 2 . 3 3 5 . 2 2 9 . 6 2 7 . 6 1 4 . 2 4 2 . 8 4 5 . 9 4 4 . 3 6 0 . 6 6 1 . 5 4 9 . 9 3 3 . 9 Table 6: Type-level Results: Each cell report the type- level accuracy computed against the most frequent tag of each word type. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | The significant drop in number of pupils will begin in the fall of 2003. |
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline. | 0 | Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects. |
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages. | 0 | A tree set may be said to have dependencies between paths if some "appropriate" subset can be shown to have dependent paths as defined above. |
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously. | 0 | We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush"). |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | 1 1 0. |
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university. | 0 | A different but supplementary perspective on discourse-based information structure is taken 11ventionalized patterns (e.g., order of informa by one of our partner projects, which is inter tion in news reports). |
This assumption, however, is not inherent to type-based tagging models. | 0 | 1 61.7 37. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | For example, in .., says Mr. Cooper, a vice president of.. both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. Cooper is of type Person. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices.4 The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training. |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | To evaluate the performance on the segmentation task, we report SEG, the standard harmonic means for segmentation Precision and Recall F1 (as defined in Bar-Haim et al. (2005); Tsarfaty (2006)) as well as the segmentation accuracy SEGTok measure indicating the percentage of input tokens assigned the correct exact segmentation (as reported by Cohen and Smith (2007)). |
This paper presents a maximum entropy-based named entity recognizer (NER). | 0 | IdentiFinder ' 99' s results are considerably better than IdentiFinder ' 97' s. IdentiFinder' s performance in MUC7 is published in (Miller et al., 1998). |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The approach assumes that the word reordering is restricted to a few positions in the source sentence. |
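Each row pairs a source_text summary sentence with a target_text candidate sentence and a binary label. The sketch below is a minimal, hypothetical example of working with this three-column schema using the Hugging Face `datasets` library: because the dataset's Hub identifier is not given here, it builds a tiny in-memory Dataset from two rows copied from the table above instead of downloading anything, and then shows how to filter by label.

```python
# Minimal sketch (assumes the `datasets` library is installed).
# The schema mirrors the table header: source_text (string), label (int64 0/1),
# target_text (string). The two rows are copied from the table above.
from datasets import Dataset, Features, Value

features = Features({
    "source_text": Value("string"),   # short summary sentence (27-368 chars)
    "label": Value("int64"),          # 0 or 1
    "target_text": Value("string"),   # candidate sentence (1-5.38k chars)
})

rows = {
    "source_text": [
        "Here we present two algorithms.",
        "The texts were annotated with the RSTtool.",
    ],
    "label": [0, 0],
    "target_text": [
        "But we will show that the use of unlabeled data can drastically "
        "reduce the need for supervision.",
        "The paper explains the design decisions taken in the annotations, "
        "and describes a number of applications using this corpus with its "
        "multi-layer annotation.",
    ],
}

ds = Dataset.from_dict(rows, features=features)
print(ds)                                    # prints schema and row count
print(ds.filter(lambda r: r["label"] == 0))  # e.g. keep only label-0 pairs
```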