Schema:
  source_text: string (length 27–368 characters)
  label: int64 (0 or 1)
  target_text: string (length 1–5.38k characters)
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Maximizing (7) is thus much faster than a typical MERT run. Here co(s, t) are the counts from OUT, as in (6).
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
But Arabic contains a variety of linguistic phenomena unseen in English.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
For the experiments, we used four newswire corpora, the Los Angeles Times/Washington Post, The New York Times, Reuters and the Wall Street Journal, all published in 1995.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Either save money at any cost - or give priority to education.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
[fragment of a results table]
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Cluster phrases based on links. We now have a set of phrases which share a keyword.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Giving a recognition algorithm for LCFRL's involves describing the substrings of the input that are spanned by the structures derived by the LCFRS's and how the composition operation combines these substrings.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
The second concerns the methods used (if any) to extend the lexicon beyond the static list of entries provided by the machine-readable dictionary upon which it is based.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(Brin 98) describes a system for extracting (author, book-title) pairs from the World Wide Web using an approach that bootstraps from an initial seed set of examples.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
For example, in Northern Mandarin dialects there is a morpheme -r that attaches mostly to nouns, and which is phonologically incorporated into the syllable to which it attaches: thus men2+r (door+R) 'door' is realized as mer2.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
The observed performance gains, coupled with the simplicity of model implementation, make it a compelling alternative to existing, more complex counterparts.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
[fragment of a results table]
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Replacing this with a ranked evaluation seems to be more suitable.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Hence, we use the bootstrap resampling method described by Koehn (2004).
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking 日 ri4 as a word (e.g., a common abbreviation for Japan), along with 文章 wen2zhang1 'essay' and 魚 yu2 'fish.'
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
This leads to word- and constituent-boundaries discrepancy, which breaks the assumptions underlying current state-of-the-art statistical parsers.
There are clustering approaches that assign a single POS tag to each word type.
0
We choose these two metrics over the Variation of Information measure due to the deficiencies discussed in Gao and Johnson (2008).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Sima’an et al. (2001) presented parsing results for a DOP tree-gram model using a small data set (500 sentences) and semiautomatic morphological disambiguation.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Our coreference resolver also incorporates an existential noun phrase recognizer and a Dempster-Shafer probabilistic model to make resolution decisions.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold T on the posterior distribution of tags for a given word type (Eq.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated in the unlabeled examples, preventing one label from dominating — this deserves more theoretical investigation.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
There is a fairly large body of work on SMT adaptation.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Throughout this paper we shall give Chinese examples in traditional orthography, followed.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The following recursive equation is evaluated:

$$Q_{e'}(e; S, C, j) = p(f_j \mid e) \cdot \max_{\delta, e''} \Big\{ p(j \mid j', J)\, p(\delta)\, p_{\delta}(e \mid e', e'') \cdot \max_{\substack{(S', j'):\; (S', C \setminus \{j\}, j') \to (S, C, j) \\ j' \in C \setminus \{j\}}} Q_{e''}(e', S', C \setminus \{j\}, j') \Big\} \tag{2}$$

The search ends in the hypotheses $(I; \{1, \ldots, J\}; j)$.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Finally, a Dempster-Shafer probabilistic model evaluates the evidence provided by the knowledge sources for all candidate antecedents and makes the final resolution decision.
There are clustering approaches that assign a single POS tag to each word type.
0
[fragment of a results table: +FEATS, best/median rows]
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Automatic paraphrase discovery is an important but challenging task.
Their results show that their high-performance NER uses less training data than other systems.
0
4.1 Local Features.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Similarly Figures 1 and 2 show how the isolated constituent precision varies by sentence length and the size of the span of the hypothesized constituent.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Finally, we wish to reiterate an important point.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Memory usage in PROBING is high, though SRILM is even larger, so where memory is of concern we recommend using TRIE, if it fits in memory.
The AdaBoost algorithm was developed for supervised learning.
0
We can now compare this algorithm to that of (Yarowsky 95).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
CHARACTERIZING STRUCTURAL DESCRIPTIONS PRODUCED BY VARIOUS GRAMMATICAL FORMALISMS*
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The first stage identifies a keyword in each phrase and joins phrases with the same keyword into sets.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The adjunction operation with respect to tree sets (multicomponent adjunction) is defined as follows.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
(2010)’s richest model: optimized via either EM or LBFGS, as their relative performance depends on the language.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The taggers were trained on datasets labeled with the universal tags.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The cities of the traveling salesman problem correspond to source positions. [Table 1: DP algorithm for statistical machine translation.]
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
[fragment of a results table]
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
$(\{1, \ldots, m\} \setminus \{l_1\},\, l) \ni (\{1, \ldots, m\} \setminus \{l, l_1, l_2\},\, l') \to$
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Training under this model involves estimation of parameter values for $P(y)$, $P(m)$ and $P(x \mid y)$.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Output of the learning algorithm: a function $h : \mathcal{X} \times \mathcal{Y} \to [0, 1]$, where $h(x, y)$ is an estimate of the conditional probability $p(y \mid x)$ of seeing label $y$ given that feature $x$ is present.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Hence, trees shown in Figure 8 cannot be generated by any MCTAG (but can be generated by an IG) because the number of pairs of dependent paths grows with n. Since the derivation trees of TAG's, MCTAG's, and HG's are local sets, the choice of the structure used at each point in a derivation in these systems does not depend on the context at that point within the derivation.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
(8) can now be rewritten as an expression of the same form as the function $Z_t$ used in AdaBoost.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
We now introduce a new algorithm for learning from unlabeled examples, which we will call DL-CoTrain (DL stands for decision list; the term CoTrain is taken from (Blum and Mitchell 98)).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, there are several reasons why this approach will not in general work: 1.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
It is natural that the larger the data in the domain, the more keywords are found.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
In Section 2, we briefly review our approach to statistical machine translation.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The first 3770 trees of the resulting set were then used for training, and the last 418 were used for testing.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb 才 cai2 'just,' the modal verb 能 neng2 'be able' and the main verb 克服 ke4fu2 'overcome'; the competing analysis is, of course, that we have the noun 才能 cai2neng2 'talent,' followed by 克服 ke4fu2 'overcome.'
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Replacing this with a ranked evaluation seems to be more suitable.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
Oracle results).
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Each word is simply tagged with the semantic classes corresponding to all of its senses.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
Instead, we want to apply an inverse transformation to recover the underlying (nonprojective) dependency graph.
It is probably the first analysis of Arabic parsing of this kind.
0
Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
4.2 A Sample Segmentation Using Only Dictionary Words. Figure 4 shows two possible paths from the lattice of possible analyses of the input sentence 日文章魚怎麼說 'How do you say octopus in Japanese?' previously shown in Figure 1.
These clusters are computed using an SVD variant without relying on transitional structure.
1
These clusters are computed using an SVD variant without relying on transitional structure.
All the texts were annotated by two people.
0
While RST (Mann, Thompson 1988) proposed that a single relation hold between adjacent text segments, SDRT (Asher, Lascarides 2003) maintains that multiple relations may hold simultaneously.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The particular classifier used depends upon the noun.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
These words are in turn highly ambiguous, breaking the assumption underlying most parsers that the yield of a tree for a given sentence is known in advance.
They have made use of local and global features to deal with the instances of the same token in a document.
0
A name already mentioned previously in a document may appear in abbreviated form when it is mentioned again later.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
paper, and is missing 6 examples from the A set.
This corpus has several advantages: it is annotated at different levels.
0
Instead, the designs of the various annotation layers and the actual annotation work are results of a series of diploma theses, of students' work in course projects, and to some extent of paid assistantships.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
2.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
yields. But to its great advantage, it has a high ratio of non-terminals/terminals (μ Constituents / μ Length).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We now describe the CoBoost algorithm for the named entity problem.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRS's), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
These two properties of the tree sets are not only linguistically relevant, but also have computational importance.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Eight out of the thirteen errors in the high frequency phrases in the CC-domain are the phrases in “agree”.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
If two systems’ scores are close, this may simply be a random effect in the test data.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.
This paper discusses unsupervised models for named entity classification.
0
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated in the unlabeled examples, preventing one label from dominating — this deserves more theoretical investigation.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The cities of the traveling salesman problem correspond to source positions. [Table 1: DP algorithm for statistical machine translation.]
This paper discusses unsupervised models for named entity classification.
0
123 examples fell into the noise category.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
This research was supported in part by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001001-18917 from the Space and Naval Warfare Systems Center, San Diego, and by the National Science Foundation under Grant IIS00325657.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
Chinese 漢字 han4zi4 'Chinese character'; this is the same word as Japanese kanji.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
For the automatic scoring method BLEU, we can distinguish three quarters of the systems.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
To resolve the anaphor, we survey the final belief values assigned to each candidate’s singleton set.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Even when there is training data available in the domain of interest, there is often additional data from other domains that could in principle be used to improve performance.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The sources of our dictionaries are listed in Table 2.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
$(\{1, \ldots, m\} \setminus \{l_1, l_2, l_3\},\, m)$. For German to English, the monotonicity constraint is violated mainly with respect to the German verb group.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Our coreference resolver performed well in two domains, and experiments showed that each contextual role knowledge source contributed valuable information.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
It also does not prune, so comparing to our pruned model would be unfair.
A beam search concept is applied as in speech recognition.
0
In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
Following the system devised under the Qing emperor Kang Xi, hanzi have traditionally been classified according to a set of approximately 200 semantic radicals; members of a radical class share a particular structural component, and often also share a common meaning (hence the term 'semantic').