source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
BABAR employs information extraction techniques to represent and learn role relationships.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
We use two different similarity functions to define the edge weights among the foreign vertices and between vertices from different languages.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
It is striking that from this point of view many formalisms can be grouped together as having identically structured derivation tree sets.
The texts were annotated with the RSTtool.
0
That is, we can use the discourse parser on PCC texts, emulating for instance a “co-reference oracle” that adds the information from our co-reference annotations.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
The number of top-ranked pairs to retain is chosen to optimize dev-set BLEU score.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The probabilistic version of this procedure is straightforward: We once again assume independence among our various member parsers.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives.
All the texts were annotated by two people.
0
In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information — as in PCC.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Pairwise comparison is done using the sign test.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Step 2.
Combining multiple highly-accurate independent parsers yields promising results.
0
We show the results of three of the experiments we conducted to measure isolated constituent precision under various partitioning schemes.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
They are also labelled for their topicality (yes / no), and this annotation is accompanied by a confidence value assigned by the annotator (since it is a more subjective matter).
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We address the question of whether or not a formalism can generate only structural descriptions with independent paths.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
First of all, most previous articles report perfor­ mance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We use Alternating Turing Machines (Chandra, Kozen, and Stockmeyer, 1981) to show that polynomial time recognition is possible for the languages discussed in Section 4.3.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
We generate these caseframes automatically by running AutoSlog over the training corpus exhaustively so that it literally generates a pattern to extract every noun phrase in the corpus.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Precision.
There is no global pruning.
0
In general, m; l; l0 6= fl1; l2; l3g and in line umber 3 and 4, l0 must be chosen not to violate the above reordering restriction.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(If fewer than n rules have Precision greater than pin, we 3Note that taking tlie top n most frequent rules already makes the method robut to low count events, hence we do not use smoothing, allowing low-count high-precision features to be chosen on later iterations. keep only those rules which exceed the precision threshold.) pm,n was fixed at 0.95 in all experiments in this paper.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We also performed experiments to evaluate the impact of each type of contextual role knowledge separately.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
In both cases, the instanceweighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline, and gains of between 0.6 and 1.8 over an equivalent mixture model (with an identical training procedure but without instance weighting).
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
An error count of 0:0 is assigned to a perfect translation, and an error count of 1:0 is assigned to a semantically and syntactically wrong translation.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
When dropping the top and bottom 2.5% the remaining BLEU scores define the range of the confidence interval.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
97 78.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
There are 13 types of extensions needed to describe the verbgroup reordering.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
A geometrical progression of language families defined by Weir (1987) involves tree sets with increasingly complex path sets.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.
There are clustering approaches that assign a single POS tag to each word type.
0
For each language and setting, we report one-to-one (11) and many- to-one (m-1) accuracies.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Two subjects are each given a calendar and they are asked to schedule a meeting.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Many morphological decisions are based on long distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG takes also local context into account.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
95 76.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Minimal perfect hashing is used to find the index at which a quantized probability and possibly backoff are stored.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Across all languages, +PRIOR consistently outperforms 1TW, reducing error on average by 9.1% and 5.9% on best and median settings respectively.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Thus the method makes the fairly strong assumption that the features can be partitioned into two types such that each type alone is sufficient for classification.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Chunking is not enough to find such relationships.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
With respect to exact match, the improvement is even more noticeable, which shows quite clearly that even if non-projective dependencies are rare on the token level, they are nevertheless important for getting the global syntactic structure correct.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Its correct antecedent is “a revolver”, which is extracted by the caseframe “killed with <NP>”.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
T e r r o r i s m Ca sef ra me Semantic Classes <a ge nt > ass ass ina ted group, human inv esti gat ion int o < N P> event exp lod ed out sid e < N P> building N a t u r a l D i s a s t e r s Ca sef ra me Semantic Classes <a ge nt > inv esti gat ing cau se group, human sur viv or of < N P> event, natphenom hit wit h < N P> attribute, natphenom Figure 3: Semantic Caseframe Expectations Figure 2: Lexical Caseframe Expectations To illustrate how lexical expectations are used, suppose we want to determine whether noun phrase X is the antecedent for noun phrase Y. If they are coreferent, then X and Y should be substitutable for one another in the story.4 Consider these sentences: (S1) Fred was killed by a masked man with a revolver.
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We must adjoin all trees in an auxiliary tree set together as a single step in the derivation.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
This is because different judges focused on different language pairs.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
The quasi-monotone search performs best in terms of both error rates mWER and SSER.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Using RandLM and the documented settings (8-bit values and 1 256 false-positive probability), we built a stupid backoff model on the same data as in Section 5.2.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
At each point during the derivation, the prediction is based on six word tokens, the two topmost tokens on the stack, and the next four input tokens.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
A Person-Prefix-List is compiled in an analogous way.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
We considered using the MUC6 and MUC7 data sets, but their training sets were far too small to learn reliable co-occurrence statistics for a large set of contextual role relationships.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Special thanks to Jan Hajiˇc and Matthias Trautner Kromann for assistance with the Czech and Danish data, respectively, and to Jan Hajiˇc, Tom´aˇs Holan, Dan Zeman and three anonymous reviewers for valuable comments on a preliminary version of the paper.
A beam search concept is applied as in speech recognition.
0
Kollege.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In section 2 we introduce the graph transformation techniques used to projectivize and deprojectivize dependency graphs, and in section 3 we describe the data-driven dependency parser that is the core of our system.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
This can be repeated several times to collect a list of author / book title pairs and expressions.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
!!:\ :yu2 e:_nc [::!!:zen3 l!f :moO t:_adv il!:shuot ,:_vb i i i 1 • 10.03 13...
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
At the very least, we are creating a data resource (the manual annotations) that may the basis of future research in evaluation metrics.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
In Equations 1 through 3 we develop the model for constructing our parse using naïve Bayes classification.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Following the setup of Johnson (2007), we use the whole of the Penn Treebank corpus for training and evaluation on English.
They found replacing it with a ranked evaluation to be more suitable.
0
Another way to view the judgements is that they are less quality judgements of machine translation systems per se, but rankings of machine translation systems.
It is probably the first analysis of Arabic parsing of this kind.
0
Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The learned information was recycled back into the resolver to improve its performance.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The Levenshtein distance between the automatic translation and each of the reference translations is computed, and the minimum Levenshtein distance is taken.
These clusters are computed using an SVD variant without relying on transitional structure.
0
However, in existing systems, this expansion come with a steep increase in model complexity.
There are clustering approaches that assign a single POS tag to each word type.
0
Specifically, we assume each word type W consists of feature-value pairs (f, v).
They have made use of local and global features to deal with the instances of same token in a document.
0
For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Given a set of n sentences, we can compute the sample mean x� and sample variance s2 of the individual sentence judgements xi: The extend of the confidence interval [x−d, x+df can be computed by d = 1.96 ·�n (6) Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts12 , emphasis shifted towards depth.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The learning task is to find two classifiers : 2x1 { —1, +1} 12 : 2x2 { —1, +1} such that (x1,) = f2(x2,t) = yt for examples i = 1, , m, and f1 (x1,) = f2 (x2,t) as often as possible on examples i = m + 1, ,n. To achieve this goal we extend the auxiliary function that bounds the training error (see Equ.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Cohen and Smith (2007) later on based a system for joint inference on factored, independent, morphological and syntactic components of which scores are combined to cater for the joint inference task.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The development of automatic scoring methods is an open field of research.
Replacing this with a ranked evaluation seems to be more suitable.
0
The text type are editorials instead of speech transcripts.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
This is most severe with RandLM in the multi-threaded case, where each thread keeps a separate cache, exceeding the original model size.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Ex: The brigade, which attacked ...
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
But we also need an estimate of the probability for a non-occurring though possible plural form like i¥JJ1l.f, nan2gua1-men0 'pumpkins.'
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
(2006) developed a technique for splitting and chunking long sentences.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Tai­ wan speaker is also close in the most significant dimension (the x axis).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Then we ran binary search to determine the least amount of memory with which it would run.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Therefore in cases where the segmentation is identical between the two systems we assume that tagging is also identical.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The final block in table 2 shows models trained on feature subsets and on the SVM feature described in 3.4.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
The first issue relates to the completeness of the base lexicon.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Consider the following sentences: (a) Jose Maria Martinez, Roberto Lisandy, and Dino Rossy, who were staying at a Tecun Uman hotel, were kidnapped by armed men who took them to an unknown place.
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We can obtain a letter equivalent CFL defined by a CFG in which the for each rule as above, we have the production A —* A1 Anup where tk (up) = cp.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
newspaper material, but also including kungfu fiction, Buddhist tracts, and scientific material.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
besuchen 9.
These clusters are computed using an SVD variant without relying on transitional structure.
0
We thank members of the MIT NLP group for their suggestions and comments.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writ­ ing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words.1 Put another way, written Chinese simply lacks orthographic words.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
We consider two variants of Berg-Kirkpatrick et al.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We use a simple TF/IDF method to measure the topicality of words.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Statistics for all data sets are shown in Table 2.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Realizing gains in practice can be challenging, however, particularly when the target domain is distant from the background data.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
conceptual relationship in the discourse.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
In this case, we have no finite-state restrictions for the search space.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
Instead, we want to apply an inverse transformation to recover the underlying (nonprojective) dependency graph.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Parameter Component As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions: 2 Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
6. 3) all G o l d P O S 7 0 0.7 91 0.825 358 0.7 73 0.818 358 0.8 02 0.836 452 80.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We paid particular attention to minimize the number of free parameters, and used the same hyperparameters for all language pairs, rather than attempting language-specific tuning.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Performance typically stabilizes across languages after only a few number of iterations.
The AdaBoost algorithm was developed for supervised learning.
0
The algorithm can be viewed as heuristically optimizing an objective function suggested by (Blum and Mitchell 98); empirically it is shown to be quite successful in optimizing this criterion.