source_text: string (lengths 27-368)
label: int64 (values 0-1)
target_text: string (lengths 1-5.38k)
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
This is especially effective at reducing load time, since raw bytes are read directly to memory—or, as happens with repeatedly used models, are already in the disk cache.
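A minimal Python sketch of the load path described above, reading raw bytes straight from the OS page cache via mmap; the file name model.bin is hypothetical:

    import mmap

    # Map the file directly into memory: no parse pass and no copying.
    # On repeated loads the pages are typically already in the disk cache,
    # so the mapping itself is nearly free.
    with open("model.bin", "rb") as f:  # hypothetical binary model file
        buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
        print(len(buf), buf[:8])        # raw bytes are usable as-is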
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
As noted for the perplexity task, we do not expect cache to grow substantially with model size, so RandLM remains a low-memory option.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In this section we present a partial evaluation of the current system, in three parts.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.
They have made use of local and global features to deal with instances of the same token in a document.
0
Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).
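A small worked sketch of this frequency computation (number of distinct preceding tokens per token); the toy corpus is constructed to reproduce the Corp. example:

    from collections import defaultdict

    # "Frequency" of a token = number of distinct tokens seen directly before it.
    corpus = ["Electric", "Corp."] * 3 + ["Manufacturing", "Corp."] * 5
    previous = defaultdict(set)
    for prev, cur in zip(corpus, corpus[1:]):
        previous[cur].add(prev)

    print(len(previous["Corp."]))  # 2, i.e. {"Electric", "Manufacturing"}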
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Arabic is a morphologically rich language with a root-and-pattern system similar to other Semitic languages.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
We first define "pseudo-labels" ȳ_i as follows: ȳ_i = y_i for 1 ≤ i ≤ m, and ȳ_i = sign(g_2(x_{2,i})) for m < i ≤ n. Thus the first m labels are simply copied from the labeled examples, while the remaining (n - m) examples are taken as the current output of the second classifier.
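A minimal sketch of this pseudo-labeling step under the reconstruction above, with g2_scores standing in for the second classifier's real-valued output:

    import numpy as np

    def pseudo_labels(y_labeled, g2_scores):
        # First m labels are copied from the labeled examples; the remaining
        # n - m come from thresholding the second classifier's output.
        rest = np.sign(g2_scores)
        rest[rest == 0] = 1  # arbitrary tie-break for a zero score
        return np.concatenate([y_labeled, rest])

    print(pseudo_labels(np.array([1.0, -1.0, 1.0]), np.array([0.3, -2.0])))
    # [ 1. -1.  1.  1. -1.]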
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The availability of these resources guided our selection of foreign languages.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
ICOC and CSPP contributed the greatest improvements.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
A possible probabilistic model for assigning probabilities to complex analyses of a surface form may be defined along these lines, and indeed recent sequential disambiguation models for Hebrew (Adler and Elhadad, 2006) and Arabic (Smith et al., 2005) present similar models.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.
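A minimal sketch of that per-sentence n-gram count, with each n-gram clipped by its reference count as in BLEU's modified precision (the clipping is an assumption; a plain overlap count would drop the min):

    from collections import Counter

    def ngram_matches(hyp, ref, n):
        # Count hypothesis n-grams that also occur in the reference,
        # clipping each n-gram by its reference count.
        grams = lambda toks: Counter(zip(*[toks[i:] for i in range(n)]))
        h, r = grams(hyp.split()), grams(ref.split())
        return sum(min(c, r[g]) for g, c in h.items())

    print(ngram_matches("the cat sat on the mat", "the cat is on the mat", 2))  # 3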
BABAR showed successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved particularly useful for resolving pronouns.
0
We have developed a coreference resolver called BABAR that uses contextual role knowledge to make coreference decisions.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
The final score is obtained from: max_{e,e'} max_{j ∈ {J-L, ..., J}} p($ | e, e') · Q_{e'}(e, I, {1, ..., J}, j), where p($ | e, e') denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
For effectively annotating connectives/scopes, we found that existing annotation tools were not well-suited, for two reasons: • Some tools are dedicated to particular modes of annotation (e.g., tiers), which could be used for connectives and scopes only quite unintuitively.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
There is twice as much language modelling data, since the training data for the machine translation system was filtered to remove sentences longer than 40 words.
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
We will also directly compare with a baseline similar to the Matsoukas et al approach in order to measure the benefit from weighting phrase pairs (or ngrams) rather than full sentences.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
Multiple features can be used for the same token.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they are recognizable in polynomial time and generate only semilinear languages.
0
Having defined LCFRS's, in Section 4.2 we established the semilinearity (and hence constant growth property) of the languages generated.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
BABAR showed successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge proved particularly useful for resolving pronouns.
0
The Dempster-Shafer rule for combining pdfs is: m_3(S) = Σ_{A∩B=S} m_1(A) · m_2(B) / (1 - Σ_{A∩B=∅} m_1(A) · m_2(B)). Suppose one pdf assigns a belief of 0.70 to {C}, meaning that it is 70% sure the correct hypothesis is C, while another assigns its belief to a disjoint hypothesis set. The intersection of these sets is the null set because these beliefs are contradictory.
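A minimal sketch of Dempster's combination rule as reconstructed above, with focal sets as frozensets; the 0.6/0.4 second pdf is invented for illustration:

    from itertools import product

    def combine(m1, m2):
        # Intersect focal sets, multiply masses, drop empty intersections
        # (the contradictory mass), and renormalize by 1 - conflict.
        out = {}
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            s = a & b
            if s:
                out[s] = out.get(s, 0.0) + wa * wb
        z = sum(out.values())
        return {s: w / z for s, w in out.items()}

    C, D = frozenset({"C"}), frozenset({"D"})
    m1 = {C: 0.7, C | D: 0.3}  # 70% sure the correct hypothesis is C
    m2 = {D: 0.6, C | D: 0.4}  # a contradictory belief, for illustration
    print(combine(m1, m2))     # renormalized mass on {C}, {D}, and {C, D}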
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
AdaBoost was first introduced in (Freund and Schapire 97); (Schapire and Singer 98) gave a generalization of AdaBoost which we will use in this paper.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
This is in general very difficult, given the extremely free manner in which Chinese given names are formed, and given that in these cases we lack even a family name to give the model confidence that it is identifying a name.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The development of automatic scoring methods is an open field of research.
It is probably the first analysis of Arabic parsing of this kind.
0
Certainly these linguistic factors increase the difficulty of syntactic disambiguation.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
In particular, it may not be possible to learn functions such that f_1(x_{1,i}) = f_2(x_{2,i}) for i = m + 1, ..., n: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Given the closeness of most systems and the wide over-lapping confidence intervals it is hard to make strong statements about the correlation between human judgements and automatic scoring methods such as BLEU.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
5 Related Work.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The iw all map variant uses a non-zero γ weight on a uniform prior in p(s|t), and outperforms a version with γ = 0 (iw all) and the “flattened” variant described in section 3.2.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
To be a member of LCFRS a formalism must satisfy two restrictions.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 77376) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel’ˇcuk, 1988; Covington, 1990).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
An additional case of super-segmental morphology is the case of Pronominal Clitics.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold, 30.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
We use a squared loss to penalize neighboring vertices that have different label distributions: ‖q_i - q_j‖² = Σ_y (q_i(y) - q_j(y))², and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.
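A minimal sketch of this penalty for one edge of the graph; the weight lam on the uniform regularizer is a hypothetical hyperparameter:

    import numpy as np

    def edge_penalty(q_i, q_j, lam=0.5):
        # Squared distance between the two label distributions, plus a
        # regularizer pulling each distribution toward the uniform U.
        u = np.full_like(q_i, 1.0 / len(q_i))
        pair = np.sum((q_i - q_j) ** 2)  # ||q_i - q_j||^2
        reg = np.sum((q_i - u) ** 2) + np.sum((q_j - u) ** 2)
        return pair + lam * reg

    print(edge_penalty(np.array([0.7, 0.2, 0.1]), np.array([0.5, 0.3, 0.2])))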
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
So, who won the competition?
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
However, ... (footnote: http://maxent.sourceforge.net) 3.2 Testing.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
This may be the sign of a maturing research environment.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The final model tions.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Pseudo-code describing the generalized boosting algorithm of Schapire and Singer is given in Figure 1.
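Since Figure 1 itself is not reproduced here, the following is a rough Python sketch of boosting in the Schapire-and-Singer style, not the paper's exact pseudo-code; in particular, selecting the weak hypothesis by maximum weighted correlation is an assumed surrogate for minimizing Z:

    import numpy as np

    def boost(X, y, weak_hyps, rounds):
        # weak_hyps: functions mapping X to predictions in {-1, +1}.
        n = len(y)
        D = np.ones(n) / n                      # example weights
        ensemble = []
        for _ in range(rounds):
            h = max(weak_hyps, key=lambda h: abs(np.sum(D * y * h(X))))
            correct = np.sum(D[y * h(X) > 0]) + 1e-12
            wrong = np.sum(D[y * h(X) < 0]) + 1e-12
            alpha = 0.5 * np.log(correct / wrong)
            D *= np.exp(-alpha * y * h(X))      # reweight toward mistakes
            D /= D.sum()
            ensemble.append((alpha, h))
        return lambda X: np.sign(sum(a * h(X) for a, h in ensemble))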
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Machine Translation. [Figure 1: Reordering for the German verb group; word alignment between the German 'In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie' and the English 'In this case my colleague can not visit you on the fourth of May'.]
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
At first glance, we quickly recognize that many systems are scored very similarly, both in terms of manual judgement and BLEU.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
In section 4 we evaluate these transformations with respect to projectivized dependency treebanks, and in section 5 they are applied to parser output.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Many packages perform language model queries.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We received submissions from 14 groups from 11 institutions, as listed in Figure 2.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
As we shall argue, the semantic class affiliation of a hanzi constitutes useful information in predicting its properties.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
11a/11b and 14a/14b respectively).
The features were weighted within a logistic model that gave an overall weight, which was applied to the phrase pair counts; MAP-smoothed relative-frequency estimates were then combined linearly with relative-frequency estimates from an in-domain phrase table.
0
We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain.
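A minimal numeric sketch of that weighting scheme: a logistic model maps each phrase pair's 22 features to a weight that scales its count before relative-frequency estimation. The feature values and learned weights below are random stand-ins:

    import numpy as np

    rng = np.random.default_rng(0)
    feats = rng.random((5, 22))        # 5 phrase pairs x 22 features (stand-in)
    w = rng.normal(size=22)            # learned logistic weights (stand-in)
    counts = np.array([3.0, 1.0, 7.0, 2.0, 4.0])

    pair_weight = 1.0 / (1.0 + np.exp(-(feats @ w)))  # logistic model
    weighted = counts * pair_weight
    print(weighted / weighted.sum())   # weighted relative frequencies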
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
More recently, Subramanya et al. (2010) defined a graph over the cliques in an underlying structured prediction model.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
When the connective is an adverbial, there is much less clarity as to the range of the spans.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Consequently, all three parsers prefer the nominal reading.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
For the automatic evaluation, we used BLEU, since it is the most established metric in the field.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Chinese 漢字 han4zi4 'Chinese character'; this is the same word as Japanese kanji.
Here we present two algorithms.
0
In principle a feature could be an arbitrary predicate of the (spelling, context) pair; for reasons that will become clear, features are limited to querying either the spelling or context alone.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
an event.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In the numerator, however, the counts of the n_i's are quite irregular, including several zeros (e.g., RAT, none of whose members were seen).
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The choice of the particular newspaper was motivated by the fact that the language used in a regional daily is somewhat simpler than that of papers read nationwide.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
For the automatic evaluation, we used BLEU, since it is the most established metric in the field.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
The work reported here is closely related to [Hasegawa et al. 04].
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Memory usage is likely much lower than ours. †The original paper (Germann et al., 2009) provided only 2s of query timing and compared with SRI when it exceeded available RAM.
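A toy Python sketch of the linear-probing idea behind the PROBING layout (KenLM's real structure is a flat array of packed C++ entries; this only illustrates the probe loop):

    class ProbingTable:
        # Open addressing: hash to a bucket, then scan forward until the
        # key or an empty slot is found.
        def __init__(self, capacity):
            self.slots = [None] * capacity

        def _probe(self, key):
            i = hash(key) % len(self.slots)
            while self.slots[i] is not None and self.slots[i][0] != key:
                i = (i + 1) % len(self.slots)
            return i

        def insert(self, key, value):
            self.slots[self._probe(key)] = (key, value)

        def get(self, key):
            slot = self.slots[self._probe(key)]
            return slot[1] if slot else None

    t = ProbingTable(8)
    t.insert(("saw", "the"), -1.5)     # e.g. an n-gram and its log probability
    print(t.get(("saw", "the")))       # -1.5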
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The paper explains the design decisions taken in the annotations, and describes a number of applications using this corpus with its multi-layer annotation.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
We hope that this will allow practitioners to apply our approach directly to languages for which no resources are available.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Furthermore, the systematic way in which particles are prefixed to one another and onto an open-class category gives rise to a distinct sort of morphological ambiguity: space-delimited tokens may be ambiguous between several different segmentation possibilities.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).
It is probably the first analysis of Arabic parsing of this kind.
0
In these experiments, the input lacks segmentation markers, hence the slightly different dev set baseline than in Table 6.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Because conjunctions are elevated in the parse trees when they separate recursive constituents, we choose the right sister instead of the category of the next word.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
They have made use of local and global features to deal with instances of the same token in a document.
0
This process is repeated 5 times by rotating the data appropriately.
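A minimal sketch of that 5-way rotation, assuming a simple interleaved split:

    def five_fold(data, k=5):
        # Each fifth of the data serves once as the test split while the
        # remaining four fifths form the training split.
        folds = [data[i::k] for i in range(k)]
        for i in range(k):
            train = [x for j, f in enumerate(folds) if j != i for x in f]
            yield train, folds[i]

    for train, test in five_fold(list(range(10))):
        print(len(train), len(test))   # 8 2, five times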
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Given that part-of-speech labels are properties of words rather than morphemes, it follows that one cannot do part-of-speech assignment without having access to word-boundary information.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Part of the gap between resident and virtual memory is due to the time at which data was collected.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
1,000 of these were picked at random, and labeled by hand to produce a test set.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Semantic expectations are analogous to lexical expectations except that they represent semantic classes rather than nouns.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Across all languages, high performance can be attained by selecting a single tag per word type.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
We indicate whether a context with zero log backoff will extend using the sign bit: +0.0 for contexts that extend and −0.0 for contexts that do not extend.
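A small Python illustration of the sign-bit trick; value comparison cannot tell +0.0 from -0.0, so the bit itself has to be inspected:

    import math, struct

    def context_extends(log_backoff):
        # +0.0 == -0.0 numerically, but copysign exposes the sign bit:
        # +0.0 marks a context that extends, -0.0 one that does not.
        return math.copysign(1.0, log_backoff) > 0

    print(context_extends(0.0), context_extends(-0.0))  # True False
    print(struct.pack(">f", -0.0).hex())  # '80000000': only the sign bit is set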
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The first concerns how to deal with ambiguities in segmentation.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Handling Unknown Tokens. When handling unknown tokens in a language such as Hebrew, various important aspects have to be borne in mind.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Instead, we extend the variation n-gram method of Dickinson (2005) to compare annotation error rates in the WSJ and ATB.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
A minimal requirement for building a Chinese word segmenter is obviously a dictionary; furthermore, as has been argued persuasively by Fung and Wu (1994), one will perform much better at segmenting text by using a dictionary constructed with text of the same genre as the text to be segmented.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Thatcher (1973) describes a tree pumping lemma for recognizable sets related to the string pumping lemma for regular sets.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
This was also inspired by the work on the Penn Discourse TreeBank, which follows similar goals for English.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
To date we have not done a separate evaluation of foreign-name recognition.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
While many systems had similar performance, the results offer interesting insights, especially about the relative performance of statistical and rule-based systems.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
3.1 Corpora.
Replacing this with a ranked evaluation seems to be more suitable.
0
French-English (In Domain), adequacy / fluency / BLEU rank ranges: upc-jmc (1-7) (1-8) (1-6); lcc (1-6) (1-7) (1-4); utd (1-7) (1-6) (2-7); upc-mr (1-8) (1-6) (1-7); nrc (1-7) (2-6) (8); ntt (1-8) (2-8) (1-7); cmu (3-7) (4-8) (2-7); rali (5-8) (3-9) (3-7); systran (9) (8-9) (10); upv (10) (10) (9).
Spanish-English (In Domain): upc-jmc (1-7) (1-6) (1-5); ntt (1-7) (1-8) (1-5); lcc (1-8) (2-8) (1-4); utd (1-8) (2-7) (1-5); nrc (2-8) (1-9) (6); upc-mr (1-8) (1-6) (7); uedin-birch (1-8) (2-10) (8); rali (3-9) (3-9) (2-5); upc-jg (7-9) (6-9) (9); upv (10) (9-10) (10).
German-English (In Domain): uedin-phi (1-2) (1) (1); lcc (2-7) (2-7) (2); nrc (2-7) (2-6) (5-7); utd (3-7) (2-8) (3-4); ntt (2-9) (2-8) (3-4); upc-mr (3-9) (6-9) (8); rali (4-9) (3-9) (5-7); upc-jmc (2-9) (3-9) (5-7); systran (3-9) (3-9) (10); upv (10) (10) (9).
Figure 7: Evaluation of translation to English on in-domain test data.
English-French (In Domain): nrc (1-5) (1-5) (1-6); upc-mr (1-4) (1-5) (1-6); upc-jmc (1-6) (1-6) (1-5); systran (2-7) (1-6) (7); utd (3-7) (3-7) (3-6); rali (1-7) (2-7) (1-6); ntt (4-7) (4-7) (1-5).
English-Spanish (In Domain): ms (1-5) (1-7) (7-8); upc-mr (1-4) (1-5) (1-4); utd (1-5) (1-6) (1-4); nrc (2-7) (1-6) (5-6); ntt (3-7) (1-6) (1-4); upc-jmc (2-7) (2-7) (1-4); rali (5-8) (6-8) (5-6); uedin-birch (6-9) (6-10) (7-8); upc-jg (9) (8-10) (9); upv (9-10) (8-10) (10).
English-German (In Domain): upc-mr (1-3) (1-5) (3-5); ntt (1-5) (2-6) (1-3); upc-jmc (1-5) (1-4) (1-3); nrc (2-4) (1-5) (4-5); rali (3-6) (2-6) (1-4); systran (5-6) (3-6) (7); upv (7) (7) (6).
Figure 8: Evaluation of translation from English on in-domain test data.
French-English (Out of Domain): upc-jmc (1-5) (1-8) (1-4); cmu (1-8) (1-9) (4-7); systran (1-8) (1-7) (9); lcc (1-9) (1-9) (1-5); upc-mr (2-8) (1-7) (1-3); utd (1-9) (1-8) (3-7); ntt (3-9) (1-9) (3-7); nrc (3-8) (3-9) (3-7); rali (4-9) (5-9) (8); upv (10) (10) (10).
Spanish-English (Out of Domain): upc-jmc (1-2) (1-6) (1-3); uedin-birch (1-7) (1-6) (5-8); nrc (2-8) (1-8) (5-7); ntt (2-7) (2-6) (3-4); upc-mr (2-8) (1-7) (5-8); lcc (4-9) (3-7) (1-4); utd (2-9) (2-8) (1-3); upc-jg (4-9) (7-9) (9); rali (4-9) (6-9) (6-8); upv (10) (10) (10).
German-English (Out of Domain): systran (1-4) (1-4) (7-9); uedin-phi (1-6) (1-7) (1); lcc (1-6) (1-7) (2-3); utd (2-7) (2-6) (4-6); ntt (1-9) (1-7) (3-5); nrc (3-8) (2-8) (7-8); upc-mr (4-8) (6-8) (4-6); upc-jmc (4-8) (3-9) (2-5); rali (8-9) (8-9) (8-9); upv (10) (10) (10).
Figure 9: Evaluation of translation to English on out-of-domain test data.
English-French (Out of Domain): systran (1) (1) (1); upc-jmc (2-5) (2-4) (2-6); upc-mr (2-4) (2-4) (2-6); utd (2-6) (2-6) (7); rali (4-7) (5-7) (2-6); nrc (4-7) (4-7) (2-5); ntt (4-7) (4-7) (3-6).
English-Spanish (Out of Domain): upc-mr (1-3) (1-6) (1-2); ms (1-7) (1-8) (6-7); utd (2-6) (1-7) (3-5); nrc (1-6) (2-7) (3-5); upc-jmc (2-7) (1-6) (3-5); ntt (2-7) (1-7) (1-2); rali (6-8) (4-8) (6-8); uedin-birch (6-10) (5-9) (7-8); upc-jg (8-9) (9-10) (9); upv (9) (8-9) (10).
English-German (Out of Domain): systran (1) (1-2) (1-6); upc-mr (2-3) (1-3) (1-5); upc-jmc (2-3) (3-6) (1-6); rali (4-6) (4-6) (1-6); nrc (4-6) (2-6) (2-6); ntt (4-6) (3-5) (1-6); upv (7) (7) (7).
Figure 10: Evaluation of translation from English on out-of-domain test data.
Figure 11: Correlation between manual and automatic scores for French-English (adequacy and fluency vs. BLEU, in domain and out of domain).
Figure 12: Correlation between manual and automatic scores for Spanish-English.
[Scatter-plot content for these correlation figures, and for the corresponding German-English and English-French panels, is not recoverable; only axis labels (adequacy, fluency) and system labels survived extraction.]
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We refer to different readings as different analyses whereby the segments are deterministic given the sequence of PoS tags.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
In future work we plan to try this approach with more competitive SMT systems, and to extend instance weighting to other standard SMT components such as the LM, lexical phrase weights, and lexicalized distortion.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Finally, other packages implement language model estimation while we are currently dependent on them to generate an ARPA file.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
First, it directly encodes linguistic intuitions about POS tag assignments: the model structure reflects the one-tag-per-word property, and a type- level tag prior captures the skew on tag assignments (e.g., there are fewer unique determiners than unique nouns).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The sequence of states needed to carry out the word reordering example in Fig.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
As Figure 1 shows, this word has no high-confidence alignment in the Italian-English bitext.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
To initialize the graph for label propagation we use a supervised English tagger to label the English side of the bitext. We then simply count the individual labels of the English tokens and normalize the counts to produce tag distributions over English word types.
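A minimal sketch of that initialization step: count the supervised tagger's labels per English word type and normalize into per-type tag distributions:

    from collections import Counter, defaultdict

    def type_tag_distributions(tagged_tokens):
        counts = defaultdict(Counter)
        for word, tag in tagged_tokens:
            counts[word][tag] += 1
        # Normalize token-level counts into a distribution per word type.
        return {w: {t: c / sum(tc.values()) for t, c in tc.items()}
                for w, tc in counts.items()}

    toks = [("walk", "VERB"), ("walk", "NOUN"), ("walk", "VERB")]
    print(type_tag_distributions(toks)["walk"])
    # {'VERB': 0.666..., 'NOUN': 0.333...}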
A beam search concept is applied as in speech recognition.
0
An error count of 0.0 is assigned to a perfect translation, and an error count of 1.0 is assigned to a semantically and syntactically wrong translation.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
Several such particles may be prefixed onto a single stem, in which case the affixation is subject to strict linear precedence constraints.