source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
One of our experimental settings lacks document boundaries, and we used this approximation in both settings for consistency.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Evalb is a Java re-implementation of the standard labeled precision/recall metric. The ATB gives all punctuation a single tag.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience.
This paper conducted research in the area of automatic paraphrase discovery.
0
Using structural information As was explained in the results section, we extracted examples like “Smith estimates Lotus”, from a sentence like “Mr.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up.
It is probably the first analysis of Arabic parsing of this kind.
0
This paper is based on work supported in part by DARPA through IBM.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
(1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Precision.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
In fact, it is very difficult to maintain consistent standards on what (say) an adequacy judgement of 3 means, even for a specific language pair.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor, from the point of view of performance alone, than the particular computational methods used.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The method being described-henceforth ST..
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Note that hanzi that are not grouped into dictionary words (and are not identified as single-hanzi words), or into one of the other categories of words discussed in this paper, are left unattached and tagged as unknown words.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
0
As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The maximum precision oracle is an upper bound on the possible gain we can achieve by parse hybridization.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
5 We choose these two metrics over the Variation Information measure due to the deficiencies discussed in Gao and Johnson (2008).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Since all long sentence translations are somewhat muddled, even a contrastive evaluation between systems was difficult.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
JADE GOLD GRASS SICKNESS DEATH RAT 14.
The AdaBoost algorithm was developed for supervised learning.
0
For the purposes of EM, the "observed" data is {(x1, y1), ..., (xm, ym), xm+1, ..., xn}, and the hidden data is {ym+1, ..., yn}.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
63 95.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Uniform Tag Prior (1TW) Our initial lexicon component will be uniform over possible tag assignments as well as word types.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
It is formally straightforward to extend the grammar to include these names, though it does increase the likelihood of overgeneration and we are unaware of any working systems that incorporate this type of name.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Such constraints are derived from training data, expressing some relationship between features and outcome.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW).
This corpus has several advantages: it is annotated at different levels.
0
Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
However, for multinomial models like our LMs and TMs, there is a one-to-one correspondence between instances and features, e.g. the correspondence between a phrase pair (s, t) and its conditional multinomial probability p(s|t).
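To make the correspondence described in the row above concrete, here is a minimal Python sketch (not from the paper) that turns extracted phrase pairs into conditional relative-frequency estimates p(s|t); the toy phrase pairs are hypothetical.

```python
from collections import Counter

def conditional_phrase_probs(phrase_pairs):
    """Relative-frequency estimate p(s|t) for each extracted phrase pair (s, t)."""
    pair_counts = Counter(phrase_pairs)                   # c(s, t)
    target_counts = Counter(t for _, t in phrase_pairs)   # c(t)
    return {(s, t): c / target_counts[t] for (s, t), c in pair_counts.items()}

# Hypothetical toy extraction: two source phrases share the same target phrase.
pairs = [("maison", "house"), ("maison", "house"), ("domicile", "house")]
print(conditional_phrase_probs(pairs))
# {('maison', 'house'): 0.666..., ('domicile', 'house'): 0.333...}
```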
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
JI!
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Our coreference resolver performed well in two domains, and experiments showed that each contextual role knowledge source contributed valuable information.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Our parsing performance measures (SY N) thus report the PARSEVAL extension proposed in Tsarfaty (2006).
These clusters are computed using an SVD variant without relying on transitional structure.
0
This approach makes the training objective more complex by adding linear constraints proportional to the number of word types, which is rather prohibitive.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).
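A minimal Python sketch of the normalization rule quoted above, assuming the average is taken over all systems scored by the same judge on the same sentence; judge names, sentence ids, and scores are invented for illustration.

```python
from collections import defaultdict

def normalize_per_sentence(judgements):
    """judgements: list of (judge, sentence_id, system, raw_score).
    Normalized score = raw + (0 - average raw score this judge gave on this sentence),
    i.e. each judge's scores for a sentence are centred around zero across systems."""
    totals = defaultdict(lambda: [0.0, 0])
    for judge, sent, _, raw in judgements:
        totals[(judge, sent)][0] += raw
        totals[(judge, sent)][1] += 1
    return [
        (judge, sent, system, raw - totals[(judge, sent)][0] / totals[(judge, sent)][1])
        for judge, sent, system, raw in judgements
    ]

# Hypothetical raw adequacy scores from one judge on one sentence, for three systems.
raw = [("J1", 7, "sysA", 4), ("J1", 7, "sysB", 2), ("J1", 7, "sysC", 3)]
print(normalize_per_sentence(raw))  # average is 3 -> normalized scores 1.0, -1.0, 0.0
```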
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Table 5: Evaluation of 100 randomly sampled variation nuclei types.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The state in future has not enough work for its many teachers.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
We carried out translation experiments in two different settings.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Compared to last year’s shared task, the participants represent more long-term research efforts.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
To our knowledge, ours is the first analysis of this kind for Arabic parsing.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
92 77.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Our clue is the NE instance pairs.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
There has been recent interest in the application of Indexed Grammars (IG's) to natural languages.
This paper conducted research in the area of automatic paraphrase discovery.
0
Another approach to finding paraphrases is to find phrases which take similar subjects and objects in large corpora by using mutual information of word distribution [Lin and Pantel 01].
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
We make the assumption that for each example, both x1,i and x2,i alone are sufficient to determine the label yi.
It is probably the first analysis of Arabic parsing of this kind.
0
86 78.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Comparison with state-of-the-art taggers For comparison we consider two unsupervised taggers: the HMM with log-linear features of Berg-Kirkpatrick et al.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Final (F): The rest of the sentence is processed monotonically taking account of the already covered positions.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
7.96 5.55 1
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
BerkeleyLM revision 152 (Pauls and Klein, 2011) implements tries based on hash tables and sorted arrays in Java with lossy quantization.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
In this paper, we make a simplifying assumption of one-tag-per-word.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The 95% confidence intervals for type-level errors are (5580, 9440) for the ATB and (1400, 4610) for the WSJ.
The AdaBoost algorithm was developed for supervised learning.
0
To prevent this we "smooth" the confidence by adding a small value, ε, to both W+ and W−, giving αt = ½ ln((W+ + ε)/(W− + ε)). Plugging the value of αt from Equ.
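The smoothing step above corresponds to the standard confidence-rated boosting weight with an added epsilon; a minimal Python sketch, with an arbitrarily chosen epsilon.

```python
import math

def smoothed_confidence(w_plus, w_minus, eps=1e-3):
    """Smoothed weight for a weak hypothesis, as in confidence-rated boosting:
    alpha_t = 0.5 * ln((W+ + eps) / (W- + eps)).
    The epsilon keeps alpha finite when W+ or W- is zero."""
    return 0.5 * math.log((w_plus + eps) / (w_minus + eps))

print(smoothed_confidence(0.9, 0.0))   # large but finite positive weight
print(smoothed_confidence(0.5, 0.5))   # 0.0: the feature carries no signal
```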
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
We attain these results using several optimizations: hashing, custom lookup tables, bit-level packing, and state for left-to-right query patterns.
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
5 Related Work.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Their work used subject-verb, verb-object, and adjective-noun relations to compare the contexts surrounding an anaphor and candidate.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
raphy: 人 ren2 'person' is a fairly uncontroversial case of a monographemic word, and 中国 zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
As the reviewer also points out, this is a problem that is shared by, e.g., probabilistic context-free parsers, which tend to pick trees with fewer nodes.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
2.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We use v1.0 mainly because previous studies on joint inference reported results w.r.t. v1.0 only. We expect that using the same setup on v2.0 will allow a cross-treebank comparison. We used the first 500 sentences as our dev set and the remaining 4,500 for training, and report our main results on this split.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Chris Dyer integrated the code into cdec.
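For readers unfamiliar with the structure mentioned two rows above, here is a heavily simplified Python sketch of a linear-probing hash table; the real PROBING implementation stores hashed n-grams with probabilities and backoffs in C++, so everything below is illustrative only.

```python
class LinearProbingTable:
    """Minimal open-addressing table with linear probing: on a collision,
    step forward one bucket at a time until the key or an empty slot is found."""

    def __init__(self, capacity=16):
        self.slots = [None] * capacity  # each slot holds (key, value) or None

    def _probe(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)  # linear probe to the next bucket
        return i

    def put(self, key, value):
        self.slots[self._probe(key)] = (key, value)

    def get(self, key):
        slot = self.slots[self._probe(key)]
        return slot[1] if slot is not None else None

table = LinearProbingTable()
table.put(("the", "cat"), -1.7)    # hypothetical bigram log-probability
print(table.get(("the", "cat")))   # -1.7
```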
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
After tagging a large corpus with an automatic NE tagger, the method tries to find sets of paraphrases automatically without being given a seed phrase or any kind of cue.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The number of judgements is additionally fragmented by our breakup of sentences into in-domain and out-of-domain.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
For instance, for TTS it is necessary to know that a particular sequence of hanzi is of a particular category because that knowledge could affect the pronunciation; consider, for example, the issues surrounding the pronunciation of gan1/qian2 discussed in Section 1.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
(2009).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For RandLM, we used the settings in the documentation: 8 bits per value and false positive probability 1/256.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
(b) does the translation have the same meaning, including connotations?
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
It was filtered to retain the top 30 translations for each source phrase using the TM part of the current log-linear model.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
For instance, in the recent IWSLT evaluation, first fluency annotations were solicited (while withholding the source sentence), and then adequacy annotations.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
We quantify error categories in both evaluation settings.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Often, two systems cannot be distinguished with a confidence of over 95%, so they are ranked the same.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The learned information was recycled back into the resolver to improve its performance.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The resulting algorithm is depicted in Table 1.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Yet, some hanzi are far more probable in women's names than they are in men's names, and there is a similar list of male-oriented hanzi: mixing hanzi from these two lists is generally less likely than would be predicted by the independence model.
Replacing this with a ranked evaluation seems to be more suitable.
0
There may be occasionally a system clearly at the top or at the bottom, but most systems are so close that it is hard to distinguish them.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The algorithm in Fig.
There are clustering approaches that assign a single POS tag to each word type.
0
Models To assess the marginal utility of each component of the model (see Section 3), we incrementally increase its sophistication.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
An easy way to achieve this is to put the domain-specific LMs and TMs into the top-level log-linear model and learn optimal weights with MERT (Och, 2003).
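A minimal sketch of the log-linear combination described in the row above, with one weight per in-domain and out-of-domain component; the feature names, probabilities, and weights are hypothetical, and a real system would tune the weights with MERT rather than fix them by hand.

```python
import math

def loglinear_score(feature_values, weights):
    """Top-level log-linear model: score = sum_k w_k * log p_k.
    Putting in-domain and out-of-domain LMs/TMs in as separate features lets
    a MERT-style tuner learn one weight per component."""
    return sum(weights[name] * math.log(p) for name, p in feature_values.items())

candidate = {
    "tm_in_domain": 0.02, "tm_out_domain": 0.05,
    "lm_in_domain": 0.001, "lm_out_domain": 0.004,
}
weights = {"tm_in_domain": 0.3, "tm_out_domain": 0.1,
           "lm_in_domain": 0.4, "lm_out_domain": 0.2}
print(loglinear_score(candidate, weights))
```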
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The key to the methods we describe is redundancy in the unlabeled data.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The first issue relates to the completeness of the base lexicon.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
RandLM 0.2 (Talbot and Osborne, 2007) stores large-scale models in less memory using randomized data structures.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Link phrases based on instance pairs Using NE instance pairs as a clue, we find links between sets of phrases.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Typical data structures are generalized Bloom filters that guarantee a customizable probability of returning the correct answer.
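To illustrate the randomized-structure idea in the row above, here is a toy membership-only Bloom filter in Python; the bit-array size, hash count, and example n-grams are made up and do not reflect RandLM's actual parameters or its count-storing variant.

```python
import hashlib

class TinyBloomFilter:
    """Minimal Bloom filter sketch: membership queries may return false positives
    (at a rate controlled by the number of bits and hashes) but never false negatives."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = TinyBloomFilter()
bf.add("the quick brown")                # hypothetical trigram
print("the quick brown" in bf)           # True
print("completely unseen ngram" in bf)   # almost certainly False
```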
The AdaBoost algorithm was developed for supervised learning.
0
Using the virtual distribution Dt(i) and pseudo-labels ỹi, values for W0, W+ and W− can be calculated for each possible weak hypothesis (i.e., for each feature x ∈ X1); the weak hypothesis with minimal value for W0 + 2√(W+W−) can be chosen as before; and the weight for this weak hypothesis αt = ½ ln((W+ + ε)/(W− + ε)) can be calculated.
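A minimal Python sketch of the selection step just described, assuming the smoothed weight formula quoted earlier from the same description; the feature names and per-feature W values below are invented.

```python
import math

def choose_weak_hypothesis(stats, eps=1e-3):
    """Given per-feature sums (W0, W+, W-) computed from the virtual distribution
    and pseudo-labels, pick the feature minimizing W0 + 2*sqrt(W+ * W-) and return
    it with its smoothed weight alpha = 0.5 * ln((W+ + eps) / (W- + eps))."""
    best = min(stats, key=lambda f: stats[f][0] + 2 * math.sqrt(stats[f][1] * stats[f][2]))
    w0, w_plus, w_minus = stats[best]
    alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
    return best, alpha

feature_stats = {            # feature -> (W0, W+, W-), hypothetical values
    "contains=Mr.": (0.10, 0.70, 0.20),
    "suffix=-ton":  (0.40, 0.35, 0.25),
}
print(choose_weak_hypothesis(feature_stats))
```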
Their results show that their high-performance NER uses less training data than other systems.
0
A secondary reference resolution classifier has information on the class assigned by the primary classifier.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Several coreference resolvers have used supervised learning techniques, such as decision trees and rule learners (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, BABAR learned that agents that “assassinate” or “investigate a cause” are usually humans or groups (i.e., organizations).
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
02 99.
The corpus was annotated with different linguistic information.
0
Since Daneš’ proposals of ‘thematic development patterns’, a few suggestions have been made as to the existence of a level of discourse structure that would predict the information structure of sentences within texts.
It is probably the first analysis of Arabic parsing of this kind.
0
We use the default inference parameters.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Nonstochastic lexical-knowledge-based approaches have been much more numerous.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
If it is made up of all capital letters, then (allCaps, zone) is set to 1.
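A minimal Python sketch of the binary feature described in the row above; the zone label is hypothetical and the exact feature encoding in the original NER system may differ.

```python
def all_caps_feature(token, zone):
    """Fires (value 1) when the token consists entirely of capital letters,
    keyed by the document zone it occurs in; otherwise contributes nothing."""
    return {f"allCaps,{zone}": 1} if token.isalpha() and token.isupper() else {}

print(all_caps_feature("IBM", "headline"))  # {'allCaps,headline': 1}
print(all_caps_feature("Ibm", "headline"))  # {}
```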
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Evaluation of links A link between two sets is considered correct if the majority of phrases in both sets have the same meaning, i.e. if the link indicates paraphrase.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The 13 positions of the source sentence are processed in the order shown.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
On the other hand, if it is seen as McCann Pte.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.