source_text (stringlengths 27–368) · label (int64: 0/1) · target_text (stringlengths 1–5.38k)
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Section 5 covers relevant previous work on SMT adaptation, and section 6 concludes.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
Such resources exist for Hebrew (Itai et al., 2006), but unfortunately use a tagging scheme which is incompatible with the one of the Hebrew Treebank. For this reason, we use a data-driven morphological analyzer derived from the training data similar to (Cohen and Smith, 2007).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The operations must be linear and nonerasing, i.e., they cannot duplicate or erase structure from their arguments.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
6.3) all Gold POS 70 0.791 0.825 358 0.773 0.818 358 0.802 0.836 452 80.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Unfortunately, we have much less data to work with than with the automatic scores.
There are clustering approaches that assign a single POS tag to each word type.
0
9 66.4 47.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
40 75.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The next two rows are results of oracle experiments.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Another way to view the judgements is that they are not so much quality judgements of machine translation systems per se as rankings of machine translation systems.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
5.2 Discussion.
This paper conducted research in the area of automatic paraphrase discovery.
0
buy - acquire (5) buy - agree (2) buy - purchase (5) buy - acquisition (7) buy - pay (2)* buy - buyout (3) buy - bid (2) acquire - purchase (2) acquire - acquisition (2) acquire - pay (2)* purchase - acquisition (4) purchase - stake (2)* acquisition - stake (2)* unit - subsidiary (2) unit - parent (5) It is clear that these links form two clusters which are mostly correct.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
To facilitate comparison with previous work, we exhaustively evaluate this grammar and two other parsing models when gold segmentation is assumed (§5).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Our empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Different sentence structure and rich target language morphology are two reasons for this.
The AdaBoost algorithm was developed for supervised learning.
0
AdaBoost finds a weighted combination of simple (weak) classifiers, where the weights are chosen to minimize a function that bounds the classification error on a set of training examples.
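The weighted-combination idea described above can be illustrated with a minimal discrete AdaBoost sketch (an illustration of the generic algorithm, not the authors' implementation; the weak learners here are arbitrary callables mapping an example to ±1):

```python
import math

def adaboost(X, y, weak_learners, rounds=10):
    """Discrete AdaBoost sketch: labels and weak-learner outputs are in {-1, +1}."""
    n = len(X)
    w = [1.0 / n] * n            # example weights, start uniform
    ensemble = []                # (alpha, hypothesis) pairs
    for _ in range(rounds):
        # choose the weak learner with the lowest weighted error
        h, err = min(
            ((h, sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi))
             for h in weak_learners),
            key=lambda pair: pair[1])
        if err >= 0.5:           # no learner better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, h))
        # increase weight on misclassified examples, then renormalize
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, x):
    # weighted vote of the weak classifiers
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```

The weights alpha are exactly the combination coefficients chosen to bound the training error, as the sentence above describes.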
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Thus in an English sentence such as I'm going to show up at the ACL one would reasonably conjecture that there are eight words separated by seven spaces.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
As mentioned above, it is not obvious how to apply Daumé’s approach to multinomials, which do not have a mechanism for combining split features.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
More formally, we start by representing the dictionary D as a Weighted Finite State Transducer (WFST) (Pereira, Riley, and Sproat 1994).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The Hebrew token ‘bcl’, for example, stands for a complete prepositional phrase. (We adopt here the transliteration of Sima’an et al., 2001.)
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Vocabulary lookup is a hash table mapping from word to vocabulary index.
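A linear-probing table of the kind the PROBING structure uses can be sketched as follows (an illustrative Python sketch of open addressing, not KenLM's actual C++ implementation; resizing is omitted):

```python
class ProbingTable:
    """Open-addressing hash table with linear probing (fixed capacity sketch)."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity   # each slot: (key, value) or None

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        # on collision, step forward until the key or an empty slot is found
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot else None

# vocabulary lookup: word -> vocabulary index
vocab = ProbingTable()
vocab.put("the", 0)
vocab.put("cat", 1)
```

Lookups touch consecutive slots, which is what makes linear probing cache-friendly and fast.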
The AdaBoost algorithm was developed for supervised learning.
0
(3)) to be defined over unlabeled as well as labeled instances.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
35 76.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Thus we opted not to take the step of creating more precise written annotation guidelines (as Carlson and Marcu (2001) did for English), which would then allow for measuring inter-annotator agreement.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, some caveats are in order in comparing this method (or any method) with other approaches to segmentation reported in the literature.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
A graph D = (W, A) is well-formed iff it is acyclic and connected.
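The well-formedness condition stated above can be checked directly (a minimal sketch of the definition, with arcs given as (head, dependent) pairs; not code from the paper):

```python
def is_well_formed(words, arcs):
    """A dependency graph D = (W, A) is well-formed iff it is acyclic and connected."""
    # weak connectedness: every word reachable when arc direction is ignored
    adj = {w: set() for w in words}
    for h, d in arcs:
        adj[h].add(d)
        adj[d].add(h)
    seen, stack = set(), [next(iter(words))]
    while stack:
        w = stack.pop()
        if w not in seen:
            seen.add(w)
            stack.extend(adj[w])
    if seen != set(words):
        return False
    # acyclicity: no directed path returns to a word already on the path
    succ = {}
    for h, d in arcs:
        succ.setdefault(h, set()).add(d)
    def cyclic(w, path):
        if w in path:
            return True
        return any(cyclic(d, path | {w}) for d in succ.get(w, ()))
    return not any(cyclic(w, set()) for w in words)
```

Both conditions must hold: a cycle or a disconnected word makes the graph ill-formed.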
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The most accurate characterization of Chinese writing is that it is morphosyllabic (DeFrancis 1984): each hanzi represents one morpheme lexically and semantically, and one syllable phonologically.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
11 www.ling.unipotsdam.de/sfb/projekt a3.php 12 This step was carried out in the course of the diploma thesis work of David Reitter (2003), which deserves special mention here.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Because many foreign word types are not aligned to an English word (see Table 3), and we do not run label propagation on the foreign side, we expect the projected information to have less coverage.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.
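The sign test just described can be computed exactly (a sketch of the generic two-sided test over per-sentence better/worse counts, not the workshop's evaluation script):

```python
from math import comb

def sign_test_p(better, worse):
    """Two-sided sign test: under the null hypothesis the two systems are
    equal, so each sentence favours either one with probability 1/2."""
    n = better + worse
    k = max(better, worse)
    # probability of a split at least this uneven, times 2 for two sides
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, if one system produces a better BLEU score on 8 of 10 sentences, the p-value is about 0.11, i.e. not yet significant at the 5% level.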
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Altun et al. (2005) proposed a technique that uses graph based similarity between labeled and unlabeled parts of structured data in a discriminative framework for semi-supervised learning.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
These two principles guide experimentation in this framework, and together with the evaluation measures help us decide which specific type of substructure to combine.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
If a candidate has a belief value ≥ .50, then we select that candidate as the antecedent for the anaphor.
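The selection rule above amounts to a simple threshold on the combined belief values (an illustrative sketch; the candidate/belief names are hypothetical and the Dempster-Shafer evidence combination itself is not shown):

```python
def select_antecedent(candidates):
    """candidates: list of (noun_phrase, belief) pairs, belief in [0, 1]."""
    if not candidates:
        return None
    phrase, belief = max(candidates, key=lambda c: c[1])
    # select the best-supported candidate only if belief reaches .50
    return phrase if belief >= 0.50 else None
```

Candidates whose evidence never reaches the .50 threshold yield no antecedent, leaving the anaphor unresolved.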
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, some caveats are in order in comparing this method (or any method) with other approaches to segmentation reported in the literature.
This corpus has several advantages: it is annotated at different levels.
0
The significant drop in number of pupils will begin in the fall of 2003.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Linear weights are difficult to incorporate into the standard MERT procedure because they are “hidden” within a top-level probability that represents the linear combination.1 Following previous work (Foster and Kuhn, 2007), we circumvent this problem by choosing weights to optimize corpus loglikelihood, which is roughly speaking the training criterion used by the LM and TM themselves.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
It is closer to the smaller value of precision and recall when there is a large skew in their values.
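This behaviour follows from F1 being the harmonic mean of precision and recall (a one-line illustration of the standard formula, not code from the paper):

```python
def f_measure(precision, recall):
    """F1: harmonic mean of precision and recall; with a large skew it
    sits close to the smaller of the two values."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

With precision 0.9 and recall 0.1, F1 is 0.18, far below the arithmetic mean of 0.5 and close to the smaller value.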
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Finally, we make some improvements to baseline approaches.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Here we push the single-framework conjecture across the board and present a single model that performs morphological segmentation and syntactic disambiguation in a fully generative framework.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
To control for the effect of the HSPELL-based pruning, we also experimented with a morphological analyzer that does not perform this pruning.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The subtree under η is excised from γ, the tree γ′ is inserted in its place, and the excised subtree is inserted below the foot of γ′.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
In the co-training case, (Blum and Mitchell 98) argue that the task should be to induce functions f1 and f2 such that f1 and f2 (1) correctly classify the labeled examples, and (2) agree with each other on the unlabeled examples.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
Like verbs, maSdar takes arguments and assigns case to its objects, whereas it also demonstrates nominal characteristics by, e.g., taking determiners and heading iDafa (Fassi Fehri, 1993).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Corpus Step 1 NE pair instances Step 2 Step 1.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The probability distribution that satisfies the above property is the one with the highest entropy.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Number: filters candidate if number doesn’t agree.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Sometimes, multiple words are needed, like “vice chairman”, “prime minister” or “pay for” (“pay” and “pay for” are different senses in the CC-domain).
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The performance of our system on those sentences appeared rather better than theirs.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞).
Replacing this with a ranked evaluation seems to be more suitable.
0
While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
6 Joint Segmentation and Parsing.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
In principle, it would be possible to encode the exact position of the syntactic head in the label of the arc from the linear head, but this would give a potentially infinite set of arc labels and would make the training of the parser very hard.
The texts were annotated with the RSTtool.
0
And then there are decisions that systems typically hard-wire, because the linguistic motivation for making them is not well understood yet.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Here we use a slightly different notion of lift, applying to individual arcs and moving their head upwards one step at a time: intuitively, lifting an arc makes the word wk dependent on the head wi of its original head wj (which is unique in a well-formed dependency graph), unless wj is a root, in which case the operation is undefined (but then wj → wk is necessarily projective if the dependency graph is well-formed).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
An input ABCD can be represented as an FSA as shown in Figure 2(b).
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Another thread of relevant research has explored the use of features in unsupervised POS induction (Smith and Eisner, 2005; Berg-Kirkpatrick et al., 2010; Hasan and Ng, 2009).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
In line with perplexity results from Table 1, the PROBING model is the fastest followed by TRIE, and subsequently other packages.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
2.6 Co-reference.
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
To approximate these baselines, we implemented a very simple sentence selection algorithm in which parallel sentence pairs from OUT are ranked by the perplexity of their target half according to the IN language model.
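The sentence-selection baseline described above can be sketched as follows (an illustration under stated assumptions: `logprob(tokens)` is a hypothetical interface returning per-token log2-probabilities from the in-domain LM, and pairs are (source, target-token-list) tuples):

```python
import math

def rank_by_target_perplexity(pairs, logprob):
    """Rank OUT-domain sentence pairs by the perplexity of their target
    half under an in-domain language model (lowest perplexity first)."""
    def perplexity(tokens):
        lps = logprob(tokens)                  # per-token log2-probabilities
        return 2.0 ** (-sum(lps) / len(lps))   # geometric-mean inverse prob.
    return sorted(pairs, key=lambda pair: perplexity(pair[1]))
```

Pairs whose target half looks most like in-domain text sort to the front and can then be kept up to some cutoff.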
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Our work is motivated by the observation that contextual roles can be critically important in determining the referent of a noun phrase.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
Formally, we define dependency graphs as follows: 3.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
For novel texts, no lexicon that consists simply of a list of word entries will ever be entirely satisfactory, since the list will inevitably omit many constructions that should be considered words.
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
We are not claiming that this method is almighty.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Exposing this information to the decoder will lead to better hypothesis recombination.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
In this way we restrict the parameterization of a … Table 1: Upper bound on tagging accuracy assuming each word type is assigned to its majority POS tag (original case): English 94.6, Danish 96.3, Dutch 96.6, German 95.5, Spanish 95.4, Swedish 93.3, Portuguese 95.6.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We report the F1 value of both measures.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
so that 'door' would be and in this case the hanzi 7C, does not represent a syllable.
These clusters are computed using an SVD variant without relying on transitional structure.
0
The system of Berg-Kirkpatrick et al.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
IRSTLM and BerkeleyLM use this state function (and a limit of N −1 words), but it is more strict than necessary, so decoders using these packages will miss some recombination opportunities.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The table shows that the lexicon tag frequencies predicted by our full model are the closest to the gold standard.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
(Kehler, 1997) also used a Dempster-Shafer model to merge evidence from different sources for template-level coreference.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Interestingly, Chang et al. report 80.67% recall and 91.87% precision on an 11,000 word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits.
They have made use of local and global features to deal with the instances of same token in a document.
0
All our results are obtained by using only the official training data provided by the MUC conferences.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
But the city name Sharm Al- Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The Berkeley parser gives state-of-the-art performance for all metrics.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The problem with these styles of evaluation is that, as we shall demonstrate, even human judges do not agree perfectly on how to segment a given text.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Arguably this consists of about three phonological words.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
There is a (costless) transition between the NC node and f,.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The ATB segmentation scheme is one of many alternatives.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The text type is editorials instead of speech transcripts.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Ex: He was found in San Jose, where ...
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The record for w1…wn stores the offset at which its extensions begin.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
36.
A beam search concept is applied as in speech recognition.
0
The following recursive equation (2) is evaluated: Qe′(e; S; C; j) = p(fj | e) · max over δ, e′′ of { p(j | j′, J) · p(δ) · pδ(e | e′, e′′) · max over (S′, j′) with (S′, C\{j}, j′) → (S, C, j), j′ ∈ C\{j}, of Qe′′(e′; S′; C\{j}; j′) }. The search ends in the hypotheses (I; {1, …, J}; j).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
These packages are further described in Section 3.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
4.4 Chinese Personal Names.