source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
The AdaBoost algorithm was developed for supervised learning.
0
Having found (spelling, context) pairs in the parsed data, a number of features are extracted.
The texts were annotated with the RSTtool.
0
Assigning rhetorical relations is a process loaded with ambiguity and, possibly, subjectivity. (RSTTool: www.wagsoft.com/RSTTool)
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
First, we use a novel graph-based framework for projecting syntactic information across language boundaries.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Roughly speaking, the new algorithm presented in this paper performs a similar search, but instead minimizes a bound on the number of (unlabeled) examples on which two classifiers disagree.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold of 30.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The minimal dictionary encoding this information is represented by the WFST in Figure 2(a).
This paper talks about Unsupervised Models for Named Entity Classification.
0
Alternatively, h can be thought of as defining a decision list of rules x → y ranked by their "strength" h(x, y).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Figure 2 shows timing results.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Based on revision 4041, we modified Moses to print process statistics before terminating.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The corpus was word-aligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate alignments, with a length limit of 7.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
When aligning the words in parallel texts (for language pairs like Spanish-English, French-English, Italian-German, ...), we typically observe a strong localization effect.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We use a squared loss to penalize neighboring vertices that have different label distributions: ||q_i − q_j||² = Σ_y (q_i(y) − q_j(y))², and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.
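A minimal sketch of how this penalty could be computed for one pair of neighboring vertices; the label set, the distributions, and the regularization weight below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def squared_loss(q_i, q_j):
    """Penalty for neighboring vertices with different label distributions:
    ||q_i - q_j||^2 = sum_y (q_i(y) - q_j(y))^2."""
    return float(np.sum((q_i - q_j) ** 2))

def uniform_regularizer(q_i):
    """Squared distance from q_i to the uniform distribution U over the labels."""
    U = np.full_like(q_i, 1.0 / len(q_i))
    return float(np.sum((q_i - U) ** 2))

# Two neighboring vertices over a hypothetical 3-label set, e.g. {NOUN, VERB, ADJ}.
q_i = np.array([0.7, 0.2, 0.1])
q_j = np.array([0.5, 0.3, 0.2])
penalty = squared_loss(q_i, q_j) + 0.1 * uniform_regularizer(q_i)  # 0.1 is an assumed hyperparameter
print(penalty)
```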
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
For comparison to information-retrieval inspired baselines, e.g. (Lü et al., 2007), we select sentences from OUT using language model perplexities from IN.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Applying the function PROJECTIVIZE to the graph in Figure 1 yields the graph in Figure 2, where the problematic arc pointing to Z has been lifted from the original head jedna to the ancestor je.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
This information is readily available in TRIE where adjacent records with equal pointers indicate no further extension of context is possible.
A beam search concept is applied as in speech recognition.
0
Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the translation model.
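For context, these two terms typically enter translation through Bayes' decision rule; a sketch of the standard source-channel decomposition, written in the same notation:

```latex
\hat{e}_1^{I} \;=\; \arg\max_{e_1^{I}} \Pr(e_1^{I} \mid f_1^{J})
            \;=\; \arg\max_{e_1^{I}} \; \Pr(e_1^{I}) \cdot \Pr(f_1^{J} \mid e_1^{I})
```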
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
There are two weaknesses in Chang et al.'s model, which we improve upon.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
Jiang and Zhai (2007) suggest the following derivation, making use of the true OUT distribution p_o(s, t):

θ̂ = argmax_θ Σ_{s,t} p_f(s, t) log p_θ(s|t)    (8)
  = argmax_θ Σ_{s,t} [p_f(s, t) / p_o(s, t)] p_o(s, t) log p_θ(s|t)
  ≈ argmax_θ Σ_{s,t} [p_f(s, t) / p_o(s, t)] c_o(s, t) log p_θ(s|t),

where each f_i(s, t) is a feature intended to characterize the usefulness of (s, t), weighted by λ_i. The mixing parameters and feature weights (collectively θ) are optimized simultaneously using dev-set maximum likelihood as before:

θ̂ = argmax_θ Σ_{s,t} p̃(s, t) log p(s|t; θ).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
These estimates are in turn combined linearly with relative-frequency estimates from an in-domain phrase table.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Dagan and Itai (1990) experimented with co-occurrence statistics that are similar to our lexical caseframe expectations.
The AdaBoost algorithm was developed for supervised learning.
0
Each vertex within a connected component must have the same label — in the binary classification case, we need a single labeled example to identify which component should get which label.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In these examples, the names identified by the two systems (if any) are underlined; the sentence with the correct segmentation is boxed. The differences in performance between the two systems relate directly to three issues, which can be seen as differences in the tuning of the models, rather than representing differences in the capabilities of the model per se.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
RandLM’s stupid backoff variant stores counts instead of probabilities and backoffs.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Table 5: Type-level English POS Tag Ranking: We list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
For even larger models, we recommend RandLM; the memory consumption of the cache is not expected to grow with model size, and it has been reported to scale well.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The experimental tests are carried out on the Verbmobil task (German-English, 8000-word vocabulary), which is a limited-domain spoken-language task.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
We can better predict the probability of an unseen hanzi occurring in a name by computing a within-class Good-Turing estimate for each radical class.
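A toy sketch of such a within-class Good-Turing estimate (the total probability mass assigned to unseen members of a class); the class tokens below are invented for illustration, whereas the actual system works over hanzi grouped by radical class.

```python
from collections import Counter

def unseen_mass_good_turing(tokens):
    """Good-Turing estimate of the total probability of unseen members of a class:
    (number of types observed exactly once) / (total number of tokens)."""
    counts = Counter(tokens)
    n1 = sum(1 for c in counts.values() if c == 1)
    n = sum(counts.values())
    return n1 / n if n else 0.0

# Invented example: tokens of one "radical class" observed in a name corpus.
class_tokens = ["a", "b", "b", "c", "d", "d", "d", "e"]
print(unseen_mass_good_turing(class_tokens))  # 3 singletons / 8 tokens = 0.375
```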
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
First, we use a novel graph-based framework for projecting syntactic information across language boundaries.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
The corpus has been annotated with six different types of information, which are characterized in the following subsections.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Previous approaches have tried to find examples that are similar to the target domain.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Figure 1: Reordering for the German verb group. (The figure aligns the German "In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie" with the English "In this case my colleague can not visit you on the fourth of May".)
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Previous work on morphological and syntactic disambiguation in Hebrew used different sets of data, different splits, differing annotation schemes, and different evaluation measures.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
2.3 Assigning Evidence Values.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, and how these factors affect syntactic disambiguation.
0
We create equivalence classes for verb, noun, and adjective POS categories.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
This is equivalent to the assumption used in probability estimation for naïve Bayes classifiers, namely that the attribute values are conditionally independent when the target value is given.
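Written out, the naïve Bayes conditional-independence assumption referred to here is:

```latex
P(x_1, \dots, x_n \mid y) \;=\; \prod_{i=1}^{n} P(x_i \mid y)
```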
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Here a set is represented by the keyword and the number in parentheses indicates the number of shared NE pair instances.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
The ATB annotation distinguishes between verbal and nominal readings of maSdar process nominals.
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
0
However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
When SRILM estimates a model, it sometimes removes n-grams but not n + 1-grams that extend it to the left.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Finally, we make some improvements to baseline approaches.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Having explained the various layers of annotation in PCC, we now turn to the question of what all this might be good for.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
However, there are phrases which express the same meanings even though they do not share the same keyword.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
This pumping lemma states that if there is a tree t = t1t2t3t4t5 generated by a TAG G such that its height is more than a predetermined bound k, then all trees of the form t1 t2^i t3 t4^i t5 for each i > 0 will also be generated by G. Similarly, for tree sets with independent paths and more complex path sets, tree pumping lemmas can be given.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Preprocessing the raw trees improves parsing performance considerably. We first discard all trees dominated by X, which indicates errors and non-linguistic text.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Feature weights were set using Och’s MERT algorithm (Och, 2003).
This corpus has several advantages: it is annotated at different levels.
0
This is manifest in the lexical choices but [...] (footnote 1: www.coli.unisb.de/∼thorsten/tnt/) "Dagmar Ziegler is up to her neck in debt."
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Up to now, most IE researchers have been creating paraphrase knowledge (or IE patterns) by hand and for specific tasks.
Combining multiple highly-accurate independent parsers yields promising results.
0
The second row is the accuracy of the best of the three parsers.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
To quantize, we use the binning method (Federico and Bertoldi, 2006) that sorts values, divides into equally sized bins, and averages within each bin.
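A rough, illustrative sketch of the binning idea described above (sort the values, split them into equally sized bins, and replace each value by its bin mean); this is not the Federico and Bertoldi or KenLM implementation, and the example log-probabilities are invented.

```python
import numpy as np

def binning_quantize(values, num_bins):
    """Sort values, split them into equally sized bins, and replace each value
    with the mean of its bin. Returns (quantized values in original order, codebook)."""
    values = np.asarray(values, dtype=float)
    order = np.argsort(values)
    bins = np.array_split(order, num_bins)            # equally sized bins over sorted order
    codebook = np.array([values[b].mean() for b in bins])
    quantized = np.empty_like(values)
    for bin_id, b in enumerate(bins):
        quantized[b] = codebook[bin_id]
    return quantized, codebook

# Example: quantize some log-probabilities into 4 bins.
logprobs = [-0.1, -2.3, -1.7, -0.4, -3.0, -0.9, -2.8, -1.2]
q, cb = binning_quantize(logprobs, 4)
print(q, cb)
```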
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
This problem arises because our keywords consist of only one word.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Instead, we resort to an iterative update based method.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Clearly, explicitly modeling such a powerful constraint on tagging assignment has a potential to significantly improve the accuracy of an unsupervised part-of-speech tagger learned without a tagging dictionary.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
6 Results and Analysis.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Since pronouns carry little semantics of their own, resolving them depends almost entirely on context.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
BABAR merely identifies caseframes that frequently co-occur in coreference resolutions.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. [...] corpus, from which also the in-domain test set is taken.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
We have checked if there are similar verbs in other major domains, but this was the only one.
There are clustering approaches that assign a single POS tag to each word type.
0
For each cell, the first row corresponds to the result using the best hyperparameter choice, where best is defined by the 1-1 metric.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
(b) ta1 de cai2neng2 hen3 gao1 [he DE talent very high] 'He has great talent.' While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences ma3lu4 and cai2neng2, respectively.
BABAR showed successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge was particularly helpful for pronouns.
0
First, a non-anaphoric NP classifier identifies definite noun phrases that are existential, using both syntactic rules and our learned existential NP recognizer (Bean and Riloff, 1999), and removes them from the resolution process.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
They found replacing it with a ranked evaluation to be more suitable.
0
This is not completely surprising, since all systems use very similar technology.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
For other languages, we use the CoNLL-X multilingual dependency parsing shared task corpora (Buchholz and Marsi, 2006) which include gold POS tags (used for evaluation).
Vijay-Shankar et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
MCTAGs are able to generate tree sets having dependent paths.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
However there is no global pruning.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
paper, and is missing 6 examples from the A set.
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectors, coreference, and information structure.
0
Trying to integrate constituent ordering and choice of referring expressions, Chiarcos (2003) developed a numerical model of salience propagation that captures various factors of the author's intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions. Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Feature-based HMM Model (Berg-Kirkpatrick et al., 2010): The KM model uses a variety of orthographic features and employs the EM or LBFGS optimization algorithm; Posterior regularization model (Graça et al., 2009): The G10 model uses the posterior regularization approach to enforce the tag sparsity constraint.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We resolve this problem by inserting an entry with probability set to an otherwise-invalid value (−∞).
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
This fact annoyed especially his dog...).
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
+ cost(unseen(fm, as desired.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
0750271 and by the DARPA GALE program.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
But in most cases they can be used interchangeably.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision pn for each n-gram order n. These values for n-gram precision are combined into a BLEU score: The formula for the BLEU metric also includes a brevity penalty for too short output, which is based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.
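For reference, the standard BLEU formula being described, with uniform weights over n-gram orders up to N (typically N = 4), system output length c, and reference length r:

```latex
\mathrm{BLEU} = \mathrm{BP} \cdot \exp\!\left( \sum_{n=1}^{N} \frac{1}{N} \log p_n \right),
\qquad
\mathrm{BP} = \min\!\left(1,\; e^{\,1 - r/c}\right)
```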
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Tsarfaty (2006) was the first to demonstrate that fully automatic Hebrew parsing is feasible using the newly available 5000 sentences treebank.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.
Here both parametric and non-parametric models are explored.
0
The computation of P(π(c) | M1(c), ..., Mk(c)) has been sketched before in Equations 1 through 4.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the inter-human agreement.
0
This model is easily incorporated into the segmenter by building a WFST restrict­ ing the names to the four licit types, with costs on the arcs for any particular name summing to an estimate of the cost of that name.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
This decreases the statistical significance of our results compared to those studies.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We report token- and type-level accuracy in Tables 3 and 6 for all languages and system settings.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
We currently simulate this by crafting a WCFG and feeding it to BitPar.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Assigning rhetorical relations thus poses questions that can often be answered only subjectively.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
“Agree” is a subject control verb, which dominates another verb whose subject is the same as that of “agree”; the latter verb is generally the one of interest for extraction.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
In fact, it is very difficult to maintain consistent standards on what (say) an adequacy judgement of 3 means, even for a specific language pair.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.
It is probably the first analysis of Arabic parsing of this kind.
0
This is especially true in the case of quotations—which are common in the ATB—where (1) will follow a verb like (2) (Figure 1).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We aligned the texts at a sentence level across all four languages, resulting in 1,064 sentences per language.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Unsupervised learning approaches appear to be a natural solution to this problem, as they require only unannotated text for training models.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
85 82.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
If one of these checks fails then this knowledge source reports that the candidate is not a viable antecedent for the anaphor.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
We reduce this to O(log log |A|) time by evenly distributing keys over their range then using interpolation search4 (Perl et al., 1978).
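A compact sketch of interpolation search over a sorted array of integers, the technique referenced above; the array and key are arbitrary examples, and the real data structure searches sorted records of integer identifiers rather than a Python list.

```python
def interpolation_search(arr, key):
    """Search a sorted list of integers for key by guessing its position from the
    key's value relative to the range [arr[lo], arr[hi]].  Expected O(log log n)
    probes when keys are roughly evenly distributed over their range."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[hi] == arr[lo]:                      # avoid division by zero
            break
        # Linear interpolation of the likely position of key.
        pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and arr[lo] == key else -1

keys = [2, 7, 13, 21, 34, 55, 89, 144]
print(interpolation_search(keys, 34))  # -> 4
```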
The AdaBoost algorithm was developed for supervised learning.
0
So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.