Columns: source_text (string, lengths 27–368); label (int64, values 0 or 1); target_text (string, lengths 1–5.38k)
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
When this metric is less than 0.5, we expect to incur more errors than we will remove by adding those constituents to the parse.
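A minimal sketch of this thresholding logic, assuming constituent precision is estimated by the fraction of parsers voting for each candidate (a hypothetical stand-in, not the paper's exact estimator):

```python
def select_constituents(constituent_votes, threshold=0.5):
    """Keep a candidate constituent only when its estimated precision
    exceeds 0.5; below that, adding it is expected to introduce more
    errors than it removes."""
    return [c for c, precision in constituent_votes.items() if precision > threshold]

# Hypothetical example: three parsers vote on candidate (label, start, end) spans.
votes = {("NP", 0, 2): 3 / 3, ("VP", 2, 5): 2 / 3, ("PP", 3, 5): 1 / 3}
print(select_constituents(votes))  # [('NP', 0, 2), ('VP', 2, 5)]
```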
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
We use head-finding rules specified by a native speaker.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Given the way judgements are collected, human judges tend to use the scores to rank systems against each other.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Each visited entry $w_i^n$ stores backoff $b(w_i^n)$.
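A hedged sketch of how such stored backoff weights enter an n-gram query, assuming a toy Katz-style model in log10 space (the dictionaries and values are hypothetical, not this paper's data structures):

```python
import math

def score(probs, backoffs, context, word):
    """If the full n-gram is present, return its log probability; otherwise
    charge the stored backoff weight b(context) and retry with a shorter
    context."""
    ngram = context + (word,)
    if ngram in probs:
        return probs[ngram]
    if not context:
        return probs.get((word,), -math.inf)  # unknown word
    return backoffs.get(context, 0.0) + score(probs, backoffs, context[1:], word)

# Hypothetical toy model.
probs = {("the",): -1.0, ("cat",): -2.0, ("the", "cat"): -0.5}
backoffs = {("sat",): -0.3}
print(score(probs, backoffs, ("sat",), "cat"))  # -0.3 + -2.0 = -2.3
```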
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Figure 1: Graphical depiction of our model and summary of latent variables and parameters (w: token word sequences, observed; t: token tag assignments, determined by T; ψ: lexicon parameters; θ: token word emission parameters; φ: token tag transition parameters).
This paper presents research in the area of automatic paraphrase discovery.
0
The sentences in the corpus were tagged by a transformation-based chunker and an NE tagger.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The domain is general politics, economics and science.
Here we present two algorithms.
0
(3)) to be defined over unlabeled as well as labeled instances.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This best instance-weighting model beats the equivalent model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
123 examples fell into the noise category.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The complexity of the algorithm is $O(E^3 \cdot J^2 \cdot 2^J)$, where $E$ is the size of the target language vocabulary.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
There may occasionally be a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.
This paper presents unsupervised models for named entity classification.
0
Fig.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Proper-Name Identification.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The scores and confidence intervals are detailed first in Figures 7–10 in table form (including ranks), and then in graphical form in Figures 11–16.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
They first collect the NE instance pairs and contexts, just like our method.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Our code has been publicly available and integrated into Moses since October 2010.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Such discrepancies can be aligned via an intermediate level of PoS tags.
All the texts were annotated by two people.
0
The general idea for the knowledge- based part is to have the system use as much information as it can find at its disposal to produce a target representation as specific as possible and as underspecified as necessary.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
In total, for the 2,000 NE category pairs, 5,184 keywords are found.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We used the MUC4 terrorism corpus (MUC4 Proceedings, 1992) and news articles from the Reuters text collection that had a subject code corresponding to natural disasters.
Combining multiple highly-accurate independent parsers yields promising results.
0
It is closer to the smaller value of precision and recall when there is a large skew in their values.
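This follows from the harmonic mean. For precision $P$ and recall $R$,

$$F_1 = \frac{2PR}{P + R},$$

so, for example, $P = 0.9$ and $R = 0.1$ give $F_1 = \frac{2 \cdot 0.9 \cdot 0.1}{1.0} = 0.18$, far closer to the smaller of the two values than to their arithmetic mean of 0.5.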
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
All experiments use ATB parts 1–3 divided according to the canonical split suggested by Chiang et al.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
(a) shen3me0 shi2hou4 wo3 cai2 neng2 ke4fu2 zhe4ge4 kun4nan2
what time I just be-able overcome this CL difficulty
'When will I be able to overcome this difficulty?'
They have made use of local and global features to deal with instances of the same token in a document.
0
On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The simplest version of the maximum matching algorithm effectively deals with ambiguity by ignoring it, since the method is guaranteed to produce only one segmentation.
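A minimal sketch of greedy left-to-right maximum matching, with a hypothetical toy lexicon; the single returned segmentation illustrates how alternative splits are simply never considered:

```python
def max_match(text, lexicon, max_len=4):
    """Greedy (left-to-right) maximum matching: at each position take the
    longest lexicon entry, falling back to a single character. Ambiguity is
    'handled' by never considering alternative segmentations."""
    words, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):
            if text[i:j] in lexicon or j == i + 1:
                words.append(text[i:j])
                i = j
                break
    return words

# Hypothetical toy lexicon; the alternative split ['日', '文章', '鱼'] is never produced.
lexicon = {"日文", "文章", "章鱼", "鱼"}
print(max_match("日文章鱼", lexicon, max_len=2))  # ['日文', '章鱼']
```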
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Given an anaphor, BABAR identifies the caseframe that would extract it from its sentence.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
From “Smith estimates Lotus will make a profit this quarter…”, our system extracts “Smith estimates Lotus” as an instance.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
The accuracies for links were 73% and 86% on the two evaluated domains.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
In total, across all domains, we kept 13,976 phrases with keywords.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
While we had up to 11 submissions for a translation direction, we decided against presenting all 11 system outputs to the human judge.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Our full model outperforms the “No LP” setting because it has better vocabulary coverage and allows the extraction of a larger set of constraint features.
The AdaBoost algorithm was developed for supervised learning.
0
The algorithm builds two classifiers iteratively: each iteration involves minimization of a continuously differentiable function which bounds the number of examples on which the two classifiers disagree.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
In this section, we will explain the algorithm step by step with examples.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This approach is similar to BABAR in that they both acquire knowledge from earlier resolutions.
They found replacing it with a ranked evaluation to be more suitable.
0
In all figures, we present the per-sentence normalized judgements.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
1
In order to handle the necessary word reordering as an optimization problem within our dynamic programming approach, we describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming (Held, Karp, 1962).
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
Footnote 9: www.ling.unipotsdam.de/sfb/
Figure 2: Screenshot of Annis Linguistic Database.
3.3 Symbolic and knowledge-based
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
This kind of dependency arises from the use of the composition operation to compose two arbitrarily large categories.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The algorithm works due to the fact that not all permutations of cities have to be considered explicitly.
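A compact sketch of the underlying Held-Karp dynamic program for the TSP; the distance matrix is hypothetical, and the subset-indexed table is what avoids enumerating all permutations (the source of the $2^J$ factor in the complexity cited earlier):

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp DP for the TSP (Held & Karp, 1962). Instead of checking
    all n! tours, it fills a table over (visited-subset, last-city) states,
    giving O(n^2 * 2^n) time."""
    n = len(dist)
    # best[(S, j)]: cost of the cheapest path starting at city 0, visiting
    # exactly the cities in frozenset S, and ending at j.
    best = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                best[(S, j)] = min(best[(S - {j}, k)] + dist[k][j]
                                   for k in subset if k != j)
    full = frozenset(range(1, n))
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

# Hypothetical 4-city distance matrix.
d = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(held_karp(d))  # cheapest round trip starting and ending at city 0
```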
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Experiments are presented in section 4.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Examples are given in Table 4.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Such resources exist for Hebrew (Itai et al., 2006), but unfortunately use a tagging scheme which is incompatible with the one of the Hebrew Treebank. For this reason, we use a data-driven morphological analyzer derived from the training data, similar to (Cohen and Smith, 2007).
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text.
The texts were annotated with the RSTtool.
0
Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Instead, we condition on the type-level tag assignments $T$. Specifically, let $S_t = \{i \mid T_i = t\}$ denote the indices of the word types which have been assigned tag $t$ according to the tag assignments $T$. Then $\theta_t$ is drawn from $\mathrm{Dirichlet}(\alpha, S_t)$, a symmetric Dirichlet which only places mass on word types indicated by $S_t$. This ensures that each word will only be assigned a single tag at inference time (see Section 4).
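A minimal sketch of such a restricted symmetric Dirichlet draw (the function name and toy sizes are hypothetical):

```python
import numpy as np

def draw_theta_t(alpha, S_t, vocab_size, rng=np.random.default_rng(0)):
    """Draw emission parameters theta_t from a symmetric Dirichlet that
    places mass only on the word types in S_t (the types assigned tag t),
    leaving all other types with exactly zero probability."""
    theta = np.zeros(vocab_size)
    theta[list(S_t)] = rng.dirichlet(alpha * np.ones(len(S_t)))
    return theta

# Hypothetical: vocabulary of 6 word types, types {0, 2, 5} assigned tag t.
theta_t = draw_theta_t(alpha=0.1, S_t={0, 2, 5}, vocab_size=6)
print(theta_t)  # zero mass outside S_t, so tag t can only emit types in S_t
```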
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
We use the universal POS tagset of Petrov et al. (2011) in our experiments. This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs, previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004).
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
5 Related Work.
Replacing this with a ranked evaluation seems to be more suitable.
0
Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.
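One standard workaround, sketched here under hypothetical inputs, is bootstrap resampling of the test set: resample segments with replacement, rescore the whole resampled corpus, and read a confidence interval off the empirical score distribution. The stand-in metric below is a simple corpus-level precision, not BLEU itself:

```python
import random

def bootstrap_ci(segments, corpus_metric, n=1000, level=0.95, seed=0):
    """Bootstrap CI for a corpus-level metric: resample whole test sets,
    rescore each, and take empirical quantiles of the scores."""
    rng = random.Random(seed)
    scores = []
    for _ in range(n):
        sample = [rng.choice(segments) for _ in segments]
        scores.append(corpus_metric(sample))
    scores.sort()
    lo = scores[int((1 - level) / 2 * n)]
    hi = scores[int((1 + level) / 2 * n) - 1]
    return lo, hi

# Hypothetical: each segment carries (matched, total) n-gram counts, and the
# metric is corpus-level precision, like one component of BLEU.
segs = [(3, 4), (1, 4), (4, 4), (2, 4), (0, 4), (3, 4)]
precision = lambda s: sum(m for m, _ in s) / sum(t for _, t in s)
print(bootstrap_ci(segs, precision))
```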
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
REL+VB) (cf.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
In Figure 4, we display the number of system comparisons for which we concluded statistical significance.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Another way to view the judgements is that they are not so much quality judgements of machine translation systems per se as rankings of machine translation systems.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
The texts were annotated with the RSTtool.
0
Preferences for constituent order (especially in languages with relatively free word order) often belong to this group.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
1
Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
A different but supplementary perspective on discourse-based information structure is taken by one of our partner projects, which is interested in conventionalized patterns (e.g., order of information in news reports).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The domain is general politics, economics and science.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Entries landing in the same bucket are said to collide.
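A minimal sketch of one common way to handle such collisions, linear probing, in the spirit of a probing hash table (the names and stored values are hypothetical, not the package's actual layout):

```python
def probe_lookup(table, key):
    """Hash to a bucket; on a collision (a different key occupies the
    bucket), scan forward until the key or an empty slot is found."""
    size = len(table)
    i = hash(key) % size
    while table[i] is not None:
        if table[i][0] == key:
            return table[i][1]  # value stored with the key
        i = (i + 1) % size      # collision: try the next bucket
    return None

def probe_insert(table, key, value):
    i = hash(key) % len(table)
    while table[i] is not None and table[i][0] != key:
        i = (i + 1) % len(table)
    table[i] = (key, value)

table = [None] * 8
probe_insert(table, "saw the", -0.42)   # hypothetical bigram, log probability
probe_insert(table, "the cat", -1.37)
print(probe_lookup(table, "the cat"))   # -1.37
```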
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This is not ideal for some applications, however.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
A beam search concept is applied as in speech recognition.
0
When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
Special thanks to Jan Hajiˇc and Matthias Trautner Kromann for assistance with the Czech and Danish data, respectively, and to Jan Hajiˇc, Tom´aˇs Holan, Dan Zeman and three anonymous reviewers for valuable comments on a preliminary version of the paper.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The importance of this property becomes clear in contrasting theories underlying GPSG (Gazdar, Klein, Pullum, and Sag, 1985), and GB (as described by Berwick, 1984) with those underlying LFG and FUG.
Here we present two algorithms.
0
We can now add a new weak hypothesis $h_t^1$ based on a feature in $X_1$ with a confidence value $\alpha_t^1$; $h_t^1$ and $\alpha_t^1$ are chosen to minimize the function. We now define, for $1 \le i \le n$, the following virtual distribution; as before, $Z_t^1$ is a normalization constant.
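As a hedged reconstruction in standard AdaBoost notation (this mirrors the usual boosting update and is not necessarily the paper's exact formulation), such a reweighting and its normalization take the form

$$D_{t+1}(i) = \frac{D_t(i)\,\exp(-\alpha_t\, y_i\, h_t(x_i))}{Z_t}, \qquad Z_t = \sum_{i=1}^{n} D_t(i)\,\exp(-\alpha_t\, y_i\, h_t(x_i)),$$

so that $Z_t$ rescales the weights to sum to one, concentrating mass on examples the current weak hypothesis gets wrong.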
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
There are clustering approaches that assign a single POS tag to each word type.
0
5.1 Data Sets.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
input token, the segmentation is then performed deterministically given the 1-best analysis.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We evaluated the results based on two metrics.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate: due to sentence alignment errors, or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The sentences in the corpus were tagged by a transformation-based chunker and an NE tagger.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
This is similar to stacking the different feature instantiations into long (sparse) vectors and computing the cosine similarity between them.
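A minimal sketch of that computation, with hypothetical sparse context-feature vectors stored as dictionaries:

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors stored as dicts
    mapping feature name -> count."""
    dot = sum(u[f] * v[f] for f in u if f in v)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

# Hypothetical feature instantiations for two word types, stacked into one
# long sparse vector each (e.g. left-context and right-context counts).
u = {"L=the": 5, "R=of": 2, "L=a": 1}
v = {"L=the": 3, "R=of": 1, "R=in": 4}
print(round(cosine(u, v), 3))  # dot = 17, norms sqrt(30) and sqrt(26)
```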
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
It also does not prune, so comparing to our pruned model would be unfair.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Of course, we.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
nan2gua1+men0 'pumpkins' is by no means impossible.
They have made use of local and global features to deal with the instances of same token in a document.
0
Previous work that involves the gathering of information from the whole document often uses a secondary classifier, which corrects the mistakes of a primary sentence- based classifier.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development set.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
For the automatic scoring method BLEU, we can distinguish three quarters of the systems.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
This feature $f_t$ incorporates information from the smoothed graph and prunes hidden states that are inconsistent with the thresholded vector $t_x$.
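A minimal sketch of this kind of pruning (the threshold value and names are hypothetical): tags whose smoothed mass falls below the cutoff are removed from the word's allowed hidden states.

```python
def prune_states(tag_distribution, tau=0.2):
    """Threshold a smoothed per-word tag distribution from the graph,
    keeping only tags whose mass survives the cutoff; the tagger's hidden
    states for this word are then restricted to the allowed set."""
    allowed = {t for t, p in tag_distribution.items() if p >= tau}
    # Fall back to all tags if thresholding removed everything.
    return allowed or set(tag_distribution)

# Hypothetical smoothed distribution for one word type.
q = {"NOUN": 0.55, "VERB": 0.30, "ADJ": 0.10, "X": 0.05}
print(prune_states(q))  # {'NOUN', 'VERB'}
```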
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The learned patterns are then normalized and applied to the corpus.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
For that application, at a minimum, one would want to know the phonological word boundaries.
These clusters are computed using an SVD variant without relying on transitional structure.
0
(2010) reports the best unsupervised results for English.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
All features were conjoined with the state z.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Finally, we intend to explore more sophisticated instance-weighting features for capturing the degree of generality of phrase pairs.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Unfortunately, Yarowsky's method is not well understood from a theoretical viewpoint: we would like to formalize the notion of redundancy in unlabeled data, and set up the learning task as optimization of some appropriate objective function.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
In the initial release of the ATB, inter-annotator agreement was inferior to other LDC treebanks (Maamouri et al., 2008).
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We introduce several new ideas.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The second concerns the methods used (if any) to extend the lexicon beyond the static list of entries provided by the machine-readable dictionary upon which it is based.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
And if one is interested in TIS, one would probably consider the single orthographic word ACL to consist of three phonological words, /ei/ /si/ /el/, corresponding to the pronunciation of each of the letters in the acronym.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
Search method  t0    CPU time [sec]  # search errors  mWER [%]
QmS            0.0   0.07            108              42.6
QmS            1.0   0.13            85               37.8
QmS            2.5   0.35            44               36.6
QmS            5.0   1.92            4                34.6
QmS            10.0  10.6            0                34.5
IbmS           0.0   0.14            108              43.4
IbmS           1.0   0.3             84               39.5
IbmS           2.5   0.8             45               39.1
IbmS           5.0   4.99            7                38.3
IbmS           10.0  28.52           0                38.2
Table 6 shows example translations obtained by the three different approaches.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain.
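A minimal sketch of such logistic instance weighting, with two hypothetical features standing in for the two groups described (the feature names and learned weights are illustrative only):

```python
import math

def instance_weight(features, weights, bias=0.0):
    """A logistic model maps a phrase pair's feature vector to a weight in
    (0, 1) reflecting how useful the out-of-domain pair is expected to be."""
    z = bias + sum(weights.get(f, 0.0) * v for f, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical features for one out-of-domain phrase pair: one "generality"
# feature and one "similarity to the IN domain" feature.
pair_feats = {"source_freq_log": 3.2, "in_domain_lm_score": -1.1}
learned = {"source_freq_log": 0.4, "in_domain_lm_score": 0.9}
print(round(instance_weight(pair_feats, learned), 3))  # sigmoid(0.29) ~ 0.572
```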