source_text (string, lengths 27–368) | label (int64, values 0–1) | target_text (string, lengths 1–5.38k) |
---|---|---|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | One annotator suggested that this was the case for as much as 10% of our test sentences. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The dictionary sizes reported in the literature range from 17,000 to 125,000 entries, and it seems reasonable to assume that the coverage of the base dictionary constitutes a major factor in the performance of the various approaches, possibly more important than the particular set of methods used in the segmentation. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | As described in Sproat (1995), the Chinese segmenter presented here fits directly into the context of a broader finite-state model of text analysis for speech synthesis. |
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account. | 0 | Additionally, it works about 3 times as fast as the IBM style search. |
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration. | 0 | Turning now to (1), we have the similar problem that splitting into ma3 'horse' and lu4 'way' is more costly than retaining this as one word ma3lu4 'road.' |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors lead to syntactic disambiguation. | 0 | Evalb is a Java re-implementation of the standard labeled precision/recall metric.12 The ATB gives all punctuation a single tag. |
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information. | 0 | Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer. |
It is probably the first analysis of Arabic parsing of this kind. | 0 | Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2). |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | Even this may be nondeterministic, in case the graph contains several non-projective arcs whose lifts interact, but we use the following algorithm to construct a minimal projective transformation D0 = (W, A0) of a (nonprojective) dependency graph D = (W, A): The function SMALLEST-NONP-ARC returns the non-projective arc with the shortest distance from head to dependent (breaking ties from left to right). |
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers. | 0 | Lemma: If the number of votes required by constituent voting is greater than half of the parsers under consideration the resulting structure has no crossing constituents. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | The percentage scores on the axis labels represent the amount of variation in the data explained by the dimension in question. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | The next two rows are results of oracle experiments. |
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model. | 0 | We can only compare with Graça et al. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | Again, famous place names will most likely be found in the dictionary, but less well-known names, such as bu4lang3-shi4wei2-ke4 'Brunswick' (as in the New Jersey town name 'New Brunswick') will not generally be found. |
There are clustering approaches that assign a single POS tag to each word type. | 0 | Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types. |
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity. | 0 | The P(T|ψ) distribution, in English for instance, should have very low mass for the DT (determiner) tag, since determiners are a very small portion of the vocabulary. |
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectives, coreference and information structure. | 0 | Some relations are signalled by subordinating conjunctions, which clearly demarcate the range of the text spans related (matrix clause, embedded clause). |
The approach assumes that the word reordering is restricted to a few positions in the source sentence. | 0 | Source sentence words are aligned with hypothesized target sentence words, where the choice of a new source word, which has not been aligned with a target word yet, is restricted. |
The use of global features has shown excellent results on MUC-6 and MUC-7 test data. | 0 | Using less training data than other systems, our NER is able to perform as well as other state-of-the-art NERs. |
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages. | 0 | We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. |
The PROBING data structure uses linear probing hash tables and is designed for speed. | 0 | This section measures performance on shared tasks in order of increasing complexity: sparse lookups, evaluating perplexity of a large file, and translation with Moses. |
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines. | 0 | The final block in table 2 shows models trained on feature subsets and on the SVM feature described in 3.4. |
Combining multiple highly-accurate independent parsers yields promising results. | 0 | This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. |
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs. | 0 | On the MUC6 data, Bikel et al. |
Here both parametric and non-parametric models are explored. | 0 | Table 3 contains the results for evaluating our systems on the test set (section 22). |
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result. | 0 | We call such a constituent an isolated constituent. |
The AdaBoost algorithm was developed for supervised learning. | 0 | The problem is a binary classification problem. |
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text. | 0 | Note that hanzi that are not grouped into dictionary words (and are not identified as single hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words. |
They found replacing it with a ranked evaluation to be more suitable. | 0 | The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain. |
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs. | 0 | This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞]. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | The state function is integrated into the query process so that, in lieu of the query p(wn | w1^(n−1)), the application issues query p(wn | s(w1^(n−1))), which also returns s(w1^n). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable. |
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system. | 0 | Rather we believe several methods have to be developed using different heuristics to discover a wider variety of paraphrases. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Finally, we note that simple weighting gives nearly a 2% F1 improvement, whereas Goldberg and Tsarfaty (2008) found that unweighted lattices were more effective for Hebrew. |
This paper talks about KenLM: Faster and Smaller Language Model Queries. | 0 | Interpolation search formalizes the notion that one opens a dictionary near the end to find the word “zebra.” Initially, the algorithm knows the array begins at b ← 0 and ends at e ← |A| − 1. |
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems. | 0 | Semantic (a) filters a candidate if its semantic tags don't intersect with those of the anaphor. |
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results. | 0 | In (b) "they" refers to the kidnapping victims, but in (c) "they" refers to the armed men. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Presenting the output of several system allows the human judge to make more informed judgements, contrasting the quality of the different systems. |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | Pumping t2 will change only one branch and leave the other branch unaffected. |
Here we present two algorithms. | 0 | The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | For each language under consideration, Petrov et al. (2011) provide a mapping A from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags. |
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs. | 0 | For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations. |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate — due to sentence alignment errors, or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence). |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | For each token, three types of features may be taken into account: the word form; the part-of-speech assigned by an automatic tagger; and labels on previously assigned dependency arcs involving the token – the arc from its head and the arcs to its leftmost and rightmost dependent, respectively. |
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low. | 0 | Following Dickinson (2005), we randomly sampled 100 variation nuclei from each corpus and evaluated each sample for the presence of an annotation error. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | The most accurate characterization of Chinese writing is that it is morphosyllabic (DeFrancis 1984): each hanzi represents one morpheme lexically and semantically, and one syllable phonologically. |
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure. | 0 | Log-linear combination (loglin) improves on this in all cases, and also beats the pure IN system. |
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance. | 0 | Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN. |
This topic has been getting more attention, driven by the needs of various NLP applications. | 0 | 3.2 Results. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data. |
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words. | 0 | newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material. |
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming. | 0 | For our demonstration system, we typically use the pruning threshold t0 = 5.0 to speed up the search by a factor 5 while allowing for a small degradation in translation accuracy. |
Here we present two algorithms. | 0 | In the named entity domain these rules were Each of these rules was given a strength of 0.9999. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | For à = 0, no new target word is generated, while an additional source sentence position is covered. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | 2.3 Rhetorical structure. |
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task. | 0 | May. |
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names. | 0 | The last affix in the list is the nominal plural 们 men0. In the table are the (typical) classes of words to which the affix attaches, the number found in the test corpus by the method, the number correct (with a precision measure), and the number missed (with a recall measure). |
The corpus was annotated with different linguistic information. | 0 | (Brandt 1996) extended these ideas toward a conception of kommunikative Gewichtung ("communicative-weight assignment"). |
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure. | 0 | The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers. |
It is well-known that English constituency parsing models do not generalize to other languages and treebanks. | 0 | Process nominals name the action of the transitive or ditransitive verb from which they derive. |
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back. | 0 | Human judges also pointed out difficulties with the evaluation of long sentences. |
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution. | 0 | Third, all remaining anaphora are evaluated by 11 different knowledge sources: the four contextual role knowledge sources just described and seven general knowledge sources. |
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data. | 0 | In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point. |
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics. | 0 | Participants were also provided with two sets of 2,000 sentences of parallel text to be used for system development and tuning. |
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results. | 0 | The joint morphological and syntactic hypothesis was first discussed in (Tsarfaty, 2006; Tsarfaty and Sima’an, 2004) and empirically explored in (Tsarfaty, 2006). |
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far. | 0 | Our first model is GTplain, a PCFG learned from the treebank after removing all functional features from the syntactic categories. |
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table. | 0 | Clearly, retaining the original frequencies is important for good performance, and globally smoothing the final weighted frequencies is crucial. |
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and information extraction patterns to identify contextual clues for determining compatibility between NPs. | 0 | Tables 4 and 5 also show that putting all of the contextual role KSs in play at the same time produces the greatest performance gain. |
The approach has been successfully tested on the 8 000-word Verbmobil task. | 0 | The effect of the pruning threshold t0 is shown in Table 5. |
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model. | 0 | For all languages, the vocabulary sizes increase by several thousand words. |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | For example, it is well-known that one can build a finite-state bigram (word) model by simply assigning a state Si to each word Wi in the vocabulary, and having (word) arcs leaving that state weighted such that for each Wj and corresponding arc aj leaving Si, the cost on aj is the bigram cost of WiWj- (Costs for unseen bigrams in such a scheme would typically be modeled with a special backoff state.) |
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage. | 0 | For example, the independence assumptions mean that the model fails to capture the dependence between specific and more general features (for example the fact that the feature full-string=New_York is always seen with the features contains(New) and contains(York) and is never seen with a feature such as contains(Group)). The baseline method tags all entities as the most frequent class type (organization). |
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. | 0 | In order to observe the similarity between these constrained systems, it is crucial to abstract away from the details of the structures and operations used by the system. |
Their results show that their high performance NER uses less training data than other systems. | 0 | For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1. |
The texts were annotated with the RSTtool. | 0 | The idea is to have a pipeline of shallow-analysis modules (tagging, chunk- ing, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations. |
Their results show that their high performance NER uses less training data than other systems. | 0 | A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later. |
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform. | 0 | While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy. |
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and these factors lead to syntactic disambiguation. | 0 | All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al. |
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform. | 0 | At first glance, we quickly recognize that many systems are scored very similar, both in terms of manual judgement and BLEU. |
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. | 0 | In this paper, we outline how such family of formalisms can be defined, and show that like CFG's, each member possesses a number of desirable linguistic and computational properties: in particular, the constant growth property and polynomial recognizability. |
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language. | 0 | These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels. |
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP). | 0 | The search starts in the hypothesis (I, {}, 0). |
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity. | 0 | For the remaining arcs, if the segment is in fact a known lexeme it is tagged as usual, but for the OOV arcs which are valid Hebrew entries lacking tags assignment, we assign all possible tags and then simulate a grammatical constraint. |
This paper talks about Pseudo-Projective Dependency Parsing. | 0 | In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs,3 previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004). |
They have made use of local and global features to deal with the instances of same token in a document. | 0 | Another attempt at using global information can be found in (Borthwick, 1999). |
They showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement. | 0 | Chinese han4zi4 'Chinese character'; this is the same word as Japanese kanji. |
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University. | 0 | The significant drop in the number of pupils will begin in the fall of 2003. |
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy. | 0 | Different sentence structure and rich target language morphology are two reasons for this. |
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories. | 0 | In many cases, inspection of either the spelling or context alone is sufficient to classify an example. |
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems. | 0 | There is twice as much language modelling data, since training data for the machine translation system is filtered against sentences of length larger than 40 words. |
There is no global pruning. | 0 | 10. |
Their results show that their high performance NER uses less training data than other systems. | 0 | The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. |
The texts were annotated with the RSTtool. | 0 | For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: • Some tools are dedicated to modes of annotation (e.g., tiers), which could only quite un-intuitively be used for connectives and scopes. |
Two general approaches are presented and two combination techniques are described for each approach. | 0 | For each experiment we gave a nonparametric and a parametric technique for combining parsers. |
The corpus was annotated with different linguistic information. | 0 | This means that the PCC cannot grow particularly quickly. |
This assumption, however, is not inherent to type-based tagging models. | 0 | Since the early days of statistical NLP, researchers have observed that a part-of-speech tag distribution exhibits "one tag per discourse" sparsity: words are likely to select a single predominant tag in a corpus, even when several tags are possible. |
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models. | 0 | However, there is a strong relationship between ni1s and the number of hanzi in the class. |
This paper conducted research in the area of automatic paraphrase discovery. | 0 | Obviously "Lotus" is part of the following clause rather than being the object of "estimates" and the extracted instance makes no sense. |
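The rows above follow a three-column schema: source_text (string, 27–368 characters), label (int64, 0 or 1), and target_text (string, up to 5.38k characters). A minimal sketch of a row-validation helper for this schema; the function name and the sample row below are illustrative, not part of the dataset itself:

```python
# Validate a row against the preview's schema:
# source_text (str), label (int in {0, 1}), target_text (str).
def is_valid_row(row: dict) -> bool:
    """Return True if the row matches the three-column schema shown above."""
    return (
        isinstance(row.get("source_text"), str)
        and row.get("label") in (0, 1)
        and isinstance(row.get("target_text"), str)
    )

# Hypothetical sample row mirroring one entry from the preview.
sample = {
    "source_text": "In this paper the authors present a stochastic "
                   "finite-state model for segmenting Chinese text into words.",
    "label": 0,
    "target_text": "As described in Sproat (1995), the Chinese segmenter "
                   "presented here fits directly into the context of a broader "
                   "finite-state model of text analysis for speech synthesis.",
}
```

A filter such as `[r for r in rows if is_valid_row(r)]` would then drop malformed rows before use.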