source_text (string, lengths 27–368)
label (int64, values 0–1)
target_text (string, lengths 1–5.38k)
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
When dropping the top and bottom 2.5% the remaining BLEU scores define the range of the confidence interval.
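The interval construction described above can be sketched as a percentile bootstrap: resample the scores with replacement, then drop the top and bottom 2.5% of the resampled means. The per-sample scores below are illustrative, not from the paper.

```python
import random

def bootstrap_interval(scores, samples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap: resample the score list with replacement,
    compute each resample's mean, then drop the top and bottom alpha/2
    of the resampled means to get the confidence interval."""
    rng = random.Random(seed)
    means = []
    for _ in range(samples):
        draw = [rng.choice(scores) for _ in scores]
        means.append(sum(draw) / len(draw))
    means.sort()
    low = means[int(samples * alpha / 2)]
    high = means[int(samples * (1 - alpha / 2)) - 1]
    return low, high

# Hypothetical per-sample BLEU-like scores (illustrative only).
scores = [0.21, 0.35, 0.28, 0.30, 0.25, 0.33, 0.27, 0.31]
low, high = bootstrap_interval(scores)
```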
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
For that application, at a minimum, one would want to know the phonological word boundaries.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Note that the good classes JADE, GOLD and GRASS have lower costs than the bad classes SICKNESS, DEATH and RAT, as desired, so the trend observed for the results of this method is in the right direction.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
However, when the linear LM is combined with a linear TM (lm+lin tm) or MAP TM (lm+map TM), the results are much better than a log-linear combination for the EMEA setting, and on a par for NIST.
There are clustering approaches that assign a single POS tag to each word type.
0
We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations: α ∈ {0.001, 0.01, 0.1, 1.0} and β ∈ {0.01, 0.1, 1.0, 10}. Iterations: In each run, we performed 30 iterations of Gibbs sampling for the type assignment variables W. We use the final sample for evaluation.
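The 16-combination grid above is just the Cartesian product of the two value lists; a minimal sketch:

```python
from itertools import product

# The four values tried for each hyperparameter, as listed above.
alphas = [0.001, 0.01, 0.1, 1.0]
betas = [0.01, 0.1, 1.0, 10]

# Cartesian product gives the 16 (alpha, beta) combinations.
combos = list(product(alphas, betas))
```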
There are clustering approaches that assign a single POS tag to each word type.
0
(2009) on Portuguese (Graça et al.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Since guess and gold trees may now have different yields, the question of evaluation is complex.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
In this domain the major scenarios involve the things they agreed on, rather than the mere fact that they agreed.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
”, because Bush is the first word, the initial caps might be due to its position (as in “They put a freeze on . . .
They found replacing it with a ranked evaluation to be more suitable.
0
This is because different judges focused on different language pairs.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Section 5 covers relevant previous work on SMT adaptation, and section 6 concludes.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Models that employ this strategy are denoted hsp.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
When extracting the vector t, used to compute the constraint feature from the graph, we tried three threshold values for r (see Eq.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
1
This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Segmental morphology: Hebrew has seven particles, m (“from”), f (“when”/“who”/“that”), h (“the”), w (“and”), k (“like”), l (“to”), and b (“in”), which may never appear in isolation and must always attach as prefixes to the following open-class category item, which we refer to as the stem.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
1 is given in Fig.
Their results show that their high-performance NER uses less training data than other systems.
0
It is unique, agrees with the maximum-likelihood distribution, and has the exponential form (Della Pietra et al., 1997): p(o|h) = (1/Z(h)) · exp(Σ_j λ_j f_j(h, o)), where o refers to the outcome, h the history (or context), and Z(h) is a normalization function.
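A minimal sketch of this conditional maximum-entropy form; the single feature function, its weight, and the outcome labels below are illustrative, not from the paper:

```python
import math

def maxent_prob(o, h, outcomes, features, weights):
    """Conditional maxent model: p(o|h) = exp(sum_j w_j * f_j(h, o)) / Z(h),
    where Z(h) normalizes the scores over all possible outcomes."""
    def score(out):
        return math.exp(sum(w * f(h, out) for f, w in zip(features, weights)))
    z = sum(score(out) for out in outcomes)  # normalization function Z(h)
    return score(o) / z

# One illustrative binary feature: fires when the history (context word)
# is capitalized and the outcome is PERSON. Names and weights are made up.
features = [lambda h, o: 1.0 if h[:1].isupper() and o == "PERSON" else 0.0]
weights = [2.0]
p = maxent_prob("PERSON", "Bush", ["PERSON", "OTHER"], features, weights)
```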
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
30 16.
These clusters are computed using an SVD variant without relying on transitional structure.
0
8 66.4 52.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Sequence of Initial Caps (SOIC): In the sentence “Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.”, a NER may mistake Even News Broadcasting Corp. as an organization name.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
We incur some additional memory cost due to storing state in each hypothesis, though this is minimal compared with the size of the model itself.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; Lü et al., 2007).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Table 1: The cost as a novel given name (second position) for hanzi from various radical classes.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This revealed interesting clues about the properties of automatic and manual scoring.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Unique Occurrences and Zone (UNIQ): This group of features indicates whether the word is unique in the whole document.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
As expected, the most informative encoding, Head+Path, gives the highest accuracy with over 99% of all non-projective arcs being recovered correctly in both data sets.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
To conserve memory at the expense of accuracy, values may be quantized using q bits per probability and r bits per backoff.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Consider the following sentences: (a) Jose Maria Martinez, Roberto Lisandy, and Dino Rossy, who were staying at a Tecun Uman hotel, were kidnapped by armed men who took them to an unknown place.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We would like to thank Ryan McDonald for numerous discussions on this topic.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
gao1bu4-gao1xing4 (hap-not-happy) 'happy?'
The corpus was annotated with different linguistic information.
0
• Bridging links: the annotator is asked to specify the type as part-whole, cause-effect (e.g., She had an accident.
This assumption, however, is not inherent to type-based tagging models.
0
3 68.9 50.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
We reduce this to O(log log |A|) time by evenly distributing keys over their range and then using interpolation search (Perl et al., 1978).
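A minimal sketch of interpolation search over a sorted array, assuming roughly uniformly distributed keys as the cited expected bound requires; the key array is illustrative:

```python
def interpolation_search(keys, target):
    """Search a sorted list by probing where the target 'should' be,
    interpolating from the key values; expected O(log log n) probes
    when keys are roughly uniformly distributed."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            pos = lo
        else:
            # Probe position proportional to the target's value offset.
            pos = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1  # not found

keys = [2, 11, 23, 31, 47, 59, 71, 89, 97]  # illustrative sorted keys
```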
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
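The per-judge normalization above is a simple shift that centers each judge's average at 3, the midpoint of the 1–5 scale; a sketch with illustrative scores:

```python
def normalize_judge(raw_scores):
    """Shift one judge's raw 1-5 judgements so their average becomes 3:
    normalized = raw + (3 - average raw judgement for this judge)."""
    avg = sum(raw_scores) / len(raw_scores)
    return [raw + (3 - avg) for raw in raw_scores]

# A hypothetical lenient judge whose raw scores average 4.0.
normalized = normalize_judge([4, 5, 3])
```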
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The semantic agreement KS eliminates some candidates, but also provides positive evidence in one case: if the candidate and anaphor both have semantic tags human, company, date, or location that were assigned via NER or the manually labeled dictionary entries.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Lexicon Feature of Previous and Next Token: The string of the previous token and the next token is used together with the initCaps information of the current token. If the current token has initCaps, then a feature (initCaps, token string) is set to 1.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Compared to last year’s shared task, the participants represent more long-term research efforts.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
3) A tight coupling with the speech recognizer output.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The hyperparameters α and β represent the concentration parameters of the token- and type-level components of the model respectively.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
This locally normalized log-linear model can look at various aspects of the observation x, incorporating overlapping features of the observation.
There are clustering approaches that assign a single POS tag to each word type.
0
We refer to (T, W) as the lexicon of a language and to ψ as the parameters for their generation; ψ depends on a single hyperparameter β.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Case and Zone: If the token starts with a capital letter (initCaps), then an additional feature (init- Caps, zone) is set to 1.
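A sketch of the case-and-zone feature described above; the (initCaps, zone) pairing follows the description, while the extra allLower case class and zone labels are illustrative additions, not necessarily the paper's exact feature set:

```python
def case_zone_features(token, zone):
    """Binary features keyed by (case class, zone). The initCaps/zone
    pairing follows the description above; the allLower class is an
    illustrative addition."""
    feats = {}
    if token[:1].isupper():
        feats[("initCaps", zone)] = 1
    if token.islower():
        feats[("allLower", zone)] = 1
    return feats

f = case_zone_features("Corp", "TXT")
```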
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Although the segmentation requirements for Arabic are not as extreme as those for Chinese, Arabic is written with certain cliticized prepositions, pronouns, and connectives connected to adjacent words.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
For even larger models, we recommend RandLM; the memory consumption of the cache is not expected to grow with model size, and it has been reported to scale well.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, it is almost universally the case that no clear definition of what constitutes a "correct" segmentation is given, so these performance measures are hard to evaluate.
The corpus was annotated with different linguistic information.
0
2.1 Part-of-speech tags.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
An anti-greedy algorithm, AG: instead of the longest match, take the.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
We can do that. IbmS: Yes, wonderful.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by context-free grammars.
0
The tree t2 must be on one of the two branches.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Variants of alif are inconsistently used in Arabic texts.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Pairwise comparison: We can use the same method to assess the statistical significance of one system outperforming another.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This range is collapsed to a number of buckets, typically by taking the hash modulo the number of buckets.
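The range-collapsing step above can be sketched as follows; the 32-bit fold and the bucket count are illustrative choices:

```python
def bucket(key, num_buckets):
    """Collapse a hash value's full range to a bucket index by taking
    the hash modulo the number of buckets."""
    h = hash(key) & 0xFFFFFFFF  # fold to an unsigned 32-bit range
    return h % num_buckets

b = bucket("the quick brown", 1024)
```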
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
First, we learn weights on individual phrase pairs rather than sentences.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
A contextual rule considers words surrounding the string in the sentence in which it appears (e.g., a rule that any proper name modified by an appositive whose head is president is a person).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
The entries in such a lexicon may be thought of as meaningful surface segments paired up with their PoS tags li = (si, pi), but note that a surface segment s need not be a space-delimited token.
This corpus has several advantages: it is annotated at different levels.
0
As an indication, in our core corpus, we found an average sentence length of 15.8 words and 1.8 verbs per sentence, whereas a randomly taken sample of ten commentaries from the national papers Süddeutsche Zeitung and Frankfurter Allgemeine has 19.6 words and 2.1 verbs per sentence.
This corpus has several advantages: it is annotated at different levels.
0
The wounds are still healing.), entity-attribute (e.g., She 2001), who determined that in their corpus of German computer tests, 38% of relations were lexically signalled.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
With regard to purely morphological phenomena, certain processes are not handled elegantly within the current framework. Any process involving reduplication, for instance, does not lend itself to modeling by finite-state techniques, since there is no way that finite-state networks can directly implement the copying operations required.
The AdaBoost algorithm was developed for supervised learning.
0
Note that in our formalism a weak hypothesis can abstain.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
For some language pairs (such as GermanEnglish) system performance is more divergent than for others (such as English-French), at least as measured by BLEU.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
With the exception of the Dutch data set, no other processing is performed on the annotated tags.
Foster et all describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
We have already mentioned the closely related work by Matsoukas et al. (2009) on discriminative corpus weighting, and Jiang and Zhai (2007) on (nondiscriminative) instance weighting.
Human judges also pointed out difficulties with the evaluation of long sentences.
1
Replacing this with a ranked evaluation seems to be more suitable.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Machine Translation. Figure 1: Reordering for the German verb group (example: German “In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie” aligned with English “In this case my colleague can not visit you on the fourth of May”).
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
The Wang, Li, and Chang system fails on fragment (b) because their system lacks the word you1you1 'soberly' and misinterpreted the thus isolated first you1 as being the final hanzi of the preceding name; similarly our system failed in fragment (h) since it is missing the abbreviation.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Nonstochastic lexical-knowledge-based approaches have been much more numerous.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Cohen and Smith (2007) chose a metric like SParseval (Roark et al., 2006) that first aligns the trees and then penalizes segmentation errors with an edit-distance metric.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The learning task is to find two classifiers f1 : 2^X1 → {−1, +1} and f2 : 2^X2 → {−1, +1} such that f1(x1,i) = f2(x2,i) = yi for examples i = 1, …, m, and f1(x1,i) = f2(x2,i) as often as possible on examples i = m + 1, …, n. To achieve this goal we extend the auxiliary function that bounds the training error (see Equ.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
This is in contrast to dependency treebanks, e.g.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm. Input: (x1, y1), …, (xm, ym); xi ∈ 2^X, yi = ±1. Initialize D1(i) = 1/m.
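One boosting round with the D1(i) = 1/m initialization and an abstaining weak hypothesis can be sketched as follows; this is a simplified single-view AdaBoost-style sketch with made-up examples, not the paper's full CoBoost procedure:

```python
import math

def boost_round(xs, ys, weights, h):
    """One AdaBoost-style round. The weak hypothesis h may abstain by
    returning 0, in which case the example's weight changes only through
    renormalization. Simplified single-view sketch, not full CoBoost."""
    eps = 1e-9  # smoothing to avoid division by zero
    w_plus = sum(w for x, y, w in zip(xs, ys, weights) if h(x) == y)
    w_minus = sum(w for x, y, w in zip(xs, ys, weights) if h(x) == -y)
    alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
    new = [w * math.exp(-alpha * y * h(x)) for x, y, w in zip(xs, ys, weights)]
    z = sum(new)
    return alpha, [w / z for w in new]

xs = ["Mr. Smith", "the firm", "IBM"]      # illustrative examples
ys = [1, -1, 1]
weights = [1 / 3] * 3                      # D1(i) = 1/m
h = lambda x: 1 if x[:1].isupper() else 0  # predicts +1 or abstains (0)
alpha, new_weights = boost_round(xs, ys, weights, h)
```

After the round, the abstained example keeps relatively more weight than the correctly classified ones, which is the intended reweighting behavior.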
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Gazdar (1985) argues that sharing of stacks can be used to give analyses for coordination.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
1
The experimental tests are carried out on the Verbmobil task (GermanEnglish, 8000-word vocabulary), which is a limited-domain spoken-language task.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Confidence Interval: To estimate confidence intervals for the average mean scores for the systems, we use standard significance testing.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Now, for this application one might be tempted to simply bypass the segmentation problem and pronounce the text character-by-character.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by context-free grammars.
0
The space of a configuration is the sum of the lengths of the nonblank tape contents of the k work tapes.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
There is no relation between these two interpretations other than the fact that their surface forms coincide, and we argue that the only reason to prefer one analysis over the other is compositional.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We are interested in combining the substructures of the input parses to produce a better parse.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
6One of our experimental settings lacks document boundaries, and we used this approximation in both settings for consistency.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
(See also Wu and Fung [1994].)
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
The additional morphological material in such cases appears after the stem and realizes the extended meaning.
This corpus has several advantages: it is annotated at different levels.
0
This was also inspired by the work on the Penn Discourse Tree Bank7 , which follows similar goals for English.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
A second restriction on the formalisms is that choices during the derivation are independent of the context in the derivation.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
MENE is then trained on 80% of the training corpus, and tested on the remaining 20%.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Each lattice arc corresponds to a segment and its corresponding PoS tag, and a path through the lattice corresponds to a specific morphological segmentation of the utterance.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
For the experiments in this paper we made a couple of additional modifications to the CoBoost algorithm.
The AdaBoost algorithm was developed for supervised learning.
0
The learning task is to find two classifiers f1 : 2^X1 → {−1, +1} and f2 : 2^X2 → {−1, +1} such that f1(x1,i) = f2(x2,i) = yi for examples i = 1, …, m, and f1(x1,i) = f2(x2,i) as often as possible on examples i = m + 1, …, n. To achieve this goal we extend the auxiliary function that bounds the training error (see Equ.
There are clustering approaches that assign a single POS tag to each word type.
0
In contrast to these approaches, our method directly incorporates these constraints into the structure of the model.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
No. of Tokens
There is no global pruning.
0
The cities of the traveling salesman problem correspond to source words. Table 1: DP algorithm for statistical machine translation.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
including Third Tone Sandhi (Shih 1986), which changes a 3 (low) tone into a 2 (rising) tone before another 3 tone: xiao3 [lao3 shu3] 'little rat' becomes xiao3 [lao2 shu3], rather than xiao2 [lao2 shu3], because the rule first applies within the word lao3shu3 'rat,' blocking its phrasal application.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Since these are distinct syntactic units, they are typically segmented.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
When the connective is an adverbial, there is much less clarity as to the range of the spans.