source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
72 78.
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
They cluster NE instance pairs based on the words in the contexts using a bag- of-words method.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The government has to make a decision, and do it quickly.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
This has the effect of randomly permuting vocabulary identifiers, meeting the requirements of interpolation search when vocabulary identifiers are used as keys.
This corpus has several advantages: it is annotated at different levels.
0
Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Since our graph is built from a parallel corpus, we can use standard word alignment techniques to align the English sentences De 5Note that many combinations are impossible giving a PMI value of 0; e.g., when the trigram type and the feature instantiation don’t have words in common. and their foreign language translations Df.6 Label propagation in the graph will provide coverage and high recall, and we therefore extract only intersected high-confidence (> 0.9) alignments De�f.
There is no global pruning.
0
diesem 3.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The use of ILP in learning the desired grammar significantly increases the computational complexity of this method.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
1 The apparent difficulty of adapting constituency models to non-configurational languages has been one motivation for dependency representations (Hajicˇ and Zema´nek, 2004; Habash and Roth, 2009).
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The evaluation framework for the shared task is similar to the one used in last year’s shared task.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Still, for about good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
It reads a file with a list of German connectives, and when a text is opened for annotation, it highlights all the words that show up in this list; these will be all the potential connectives.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Let s = a + b.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Here NO counts the number of hypothesized constituents in the development set that match the binary predicate specified as an argument.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
In considering recognition of LCFRS's, we make further assumption concerning the contribution of each structure to the input string, and how the composition operations combine structures in this respect.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Both parameters depend on a single hyperparameter α.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
It has no syntactic function.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
10 Chinese speakers may object to this form, since the suffix f, menD (PL) is usually restricted to.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
BABAR uses a DempsterShafer decision model (Stefik, 1995) to combine the evidence provided by the knowledge sources.
The corpus was annoted with different linguitic information.
0
The PCC is not the result of a funded project.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
(6), with W+ > W_.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
3 68.4 49.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
The approach recursively evaluates a quantity Q(C; j), where C is the set of already visited cities and sj is the last visited city.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Across all languages, high performance can be attained by selecting a single tag per word type.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
When a collision occurs, linear probing places the entry to be inserted in the next (higher index) empty bucket, wrapping around as necessary.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
The two knowledge sources that use semantic expectations, WordSemCFSem and CFSemCFSem, always return values of -1 or 0.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The cost of storing these averages, in bits, is Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.
It is probably the first analysis of Arabic parsing of this kind.
0
99 94.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The two knowledge sources that use semantic expectations, WordSemCFSem and CFSemCFSem, always return values of -1 or 0.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
In section 4 we evaluate these transformations with respect to projectivized dependency treebanks, and in section 5 they are applied to parser output.
The corpus was annoted with different linguitic information.
0
Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.
All the texts were annotated by two people.
0
Section 3 discusses the applications that have been completed with PCC, or are under way, or are planned for the future.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
29 — 95.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The F- measure score increased for both domains, reflecting a substantial increase in recall with a small decrease in precision.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(b) 89 :1 t& tal de cai2neng2 hen3 he DE talent very 'He has great talent' f.b ga ol hig h While the current algorithm correctly handles the (b) sentences, it fails to handle the (a) sentences, since it does not have enough information to know not to group the sequences.ma3lu4 and?]cai2neng2 respectively.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Global context from the whole document is available and can be exploited in a natural manner with a maximum entropy classifier.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
01 75.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
BABAR then computes statistics over the training examples measuring the frequency with which extraction patterns and noun phrases co-occur in coreference resolutions.
They have made use of local and global features to deal with the instances of same token in a document.
0
For example, in the sentence that starts with “Bush put a freeze on . . .
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
We identified three ways that contextual roles can be exploited: (1) by identifying caseframes that co-occur in resolutions, (2) by identifying nouns that co-occur with case- frames and using them to crosscheck anaphor/candidate compatibility, (3) by identifying semantic classes that co- occur with caseframes and using them to crosscheck anaphor/candidate compatability.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The first term in the objective function is the graph smoothness regularizer which encourages the distributions of similar vertices (large wij) to be similar.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
N, portion of examples on which both classifiers give a label rather than abstaining), and the proportion of these examples on which the two classifiers agree.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
ATB CTB6 Negra WSJ Trees 23449 28278 20602 43948 Word Typess 40972 45245 51272 46348 Tokens 738654 782541 355096 1046829 Tags 32 34 499 45 Phrasal Cats 22 26 325 27 Test OOV 16.8% 22.2% 30.5% 13.2% Per Sentence Table 4: Gross statistics for several different treebanks.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Yet we note that the better grammars without pruning outperform the poorer grammars using this technique, indicating that the syntactic context aids, to some extent, the disambiguation of unknown tokens.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The corpus was wordaligned using both HMM and IBM2 models, and the phrase table was the union of phrases extracted from these separate alignments, with a length limit of 7.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Unfortunately, we have much less data to work with than with the automatic scores.
Two general approaches are presented and two combination techniques are described for each approach.
0
We would like to thank Eugene Charniak, Michael Collins, and Adwait Ratnaparkhi for enabling all of this research by providing us with their parsers and helpful comments.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Using a wide-coverage morphological analyzer based on (Itai et al., 2006) should cater for a better coverage, and incorporating lexical probabilities learned from a big (unannotated) corpus (cf.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
This task measures how well each package performs in machine translation.
The AdaBoost algorithm was developed for supervised learning.
0
(7) is at 0 when: 1) Vi : sign(gi (xi)) = sign(g2 (xi)); 2) Ig3(xi)l oo; and 3) sign(gi (xi)) = yi for i = 1, , m. In fact, Zco provides a bound on the sum of the classification error of the labeled examples and the number of disagreements between the two classifiers on the unlabeled examples.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The unlabeled data gives many such "hints" that two features should predict the same label, and these hints turn out to be surprisingly useful when building a classifier.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
0.2 0.1 0.0 -0.1 25 26 27 28 29 30 31 32 -0.2 -0.3 •systran • ntt 0.5 0.4 0.3 0.2 0.1 0.0 -0.1 -0.2 -0.3 20 21 22 23 24 25 26 Fluency Fluency •systran •nrc rali 25 26 27 28 29 30 31 32 0.2 0.1 0.0 -0.1 -0.2 -0.3 cme p � 20 21 22 23 24 25 26 0.5 0.4 0.3 0.2 0.1 0.0 -0.1 -0.2 -0.3 Figure 14: Correlation between manual and automatic scores for English-French 119 In Domain Out of Domain •upv Adequacy -0.9 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 •upv 23 24 25 26 27 28 29 30 31 32 •upc-mr •utd •upc-jmc •uedin-birch •ntt •rali •uedin-birch 16 17 18 19 20 21 22 23 24 25 26 27 Adequacy •upc-mr 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 -0.9 -1.0 -1.1 English-Spanish Fluency •ntt •nrc •rali •uedin-birch -0.2 -0.3 -0.5 •upv 16 17 18 19 20 21 22 23 24 25 26 27 -0.4 nr • rali Fluency -0.4 •upc-mr utd •upc-jmc -0.5 -0.6 •upv 23 24 25 26 27 28 29 30 31 32 0.2 0.1 -0.0 -0.1 -0.2 -0.3 0.3 0.2 0.1 -0.0 -0.1 -0.6 -0.7 Figure 15: Correlation between manual and automatic scores for English-Spanish 120 English-German In Domain Out of Domain Adequacy Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 •upv -0.2 -0.3 -0.4 -0.5 -0.6 -0.7 -0.8 -0.9 •upv 0.5 0.4 •systran •upc-mr • •rali 0.3 •ntt 0.2 0.1 -0.0 -0.1 •systran •upc-mr -0.9 9 10 11 12 13 14 15 16 17 18 19 6 7 8 9 10 11 Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upv -0.5 •upv •systran •upc-mr • Fluency 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 •systran •ntt
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Table 2 shows single-threaded results, mostly for comparison to IRSTLM, and Table 3 shows multi-threaded results.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Now, for this application one might be tempted to simply bypass the segmentation problem and pronounce the text character-by-character.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
This means that the PCC cannot grow particularly quickly.
Combining multiple highly-accurate independent parsers yields promising results.
0
In general, the lemma of the previous section does not ensure that all the productions in the combined parse are found in the grammars of the member parsers.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The maximum likelihood estimates (i.e., parameter values which maximize 10) can not be found analytically, but the EM algorithm can be used to hill-climb to a local maximum of the likelihood function from some initial parameter settings.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
JI!
This assumption, however, is not inherent to type-based tagging models.
0
This approach makes the training objective more complex by adding linear constraints proportional to the number of word types, which is rather prohibitive.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The number of NE instance pairs used in their experiment is less than half of our method.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We also mark all tags that dominate a word with the feminine ending :: taa mar buuTa (markFeminine).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
But we also need an estimate of the probability for a non-occurring though possible plural form like i¥JJ1l.f, nan2gua1-men0 'pumpkins.'
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
We have not to date explored these various options.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
After the first step towards breadth had been taken with the PoS-tagging, RST annotation, and URML conversion of the entire corpus of 170 texts12 , emphasis shifted towards depth.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Given this similarity function, we define a nearest neighbor graph, where the edge weight for the n most similar vertices is set to the value of the similarity function and to 0 for all other vertices.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The problem of coreference resolution has received considerable attention, including theoretical discourse models (e.g., (Grosz et al., 1995; Grosz and Sidner, 1998)), syntactic algorithms (e.g., (Hobbs, 1978; Lappin and Le- ass, 1994)), and supervised machine learning systems (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.
They focused on phrases which two Named Entities, and proceed in two stages.
0
Find keywords for each NE pair When we look at the contexts for each domain, we noticed that there is one or a few important words which indicate the relation between the NEs (for example, the word “unit” for the phrase “a unit of”).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
We focus on phrases which connect two Named Entities (NEs), and proceed in two stages.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Automatic Paraphrase Discovery based on Context and Keywords between NE Pairs
It is probably the first analysis of Arabic parsing of this kind.
0
As a result, Arabic sentences are usually long relative to English, especially after Length English (WSJ) Arabic (ATB) ≤ 20 41.9% 33.7% ≤ 40 92.4% 73.2% ≤ 63 99.7% 92.6% ≤ 70 99.9% 94.9% Table 2: Frequency distribution for sentence lengths in the WSJ (sections 2–23) and the ATB (p1–3).
Here we present two algorithms.
0
Zt can be written as follows Following the derivation of Schapire and Singer, providing that W+ > W_, Equ.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, there will remain a large number of words that are not readily adduced to any produc­ tive pattern and that would simply have to be added to the dictionary.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
In Modern Hebrew (Hebrew), a Semitic language with very rich morphology, particles marking conjunctions, prepositions, complementizers and relativizers are bound elements prefixed to the word (Glinert, 1989).
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
1
In this paper, we offer broad insight into the underperformance of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The approach has been successfully tested on the 8 000-word Verbmobil task.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as E. bal, m er3, and iij al are often clear indicators that a sequence of hanzi containing them is foreign: even a name like !:i*m xia4mi3-er3 'Shamir,' which is a legal Chi­ nese personal name, retains a foreign flavor because of liM.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Aside from adding a simple rule to correct alif deletion caused by the preposition J, no other language-specific processing is performed.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-8) (1-6) lcc (1-6) (1-7) (1-4) utd (1-7) (1-6) (2-7) upc-mr (1-8) (1-6) (1-7) nrc (1-7) (2-6) (8) ntt (1-8) (2-8) (1-7) cmu (3-7) (4-8) (2-7) rali (5-8) (3-9) (3-7) systran (9) (8-9) (10) upv (10) (10) (9) Spanish-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-7) (1-6) (1-5) ntt (1-7) (1-8) (1-5) lcc (1-8) (2-8) (1-4) utd (1-8) (2-7) (1-5) nrc (2-8) (1-9) (6) upc-mr (1-8) (1-6) (7) uedin-birch (1-8) (2-10) (8) rali (3-9) (3-9) (2-5) upc-jg (7-9) (6-9) (9) upv (10) (9-10) (10) German-English (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) uedin-phi (1-2) (1) (1) lcc (2-7) (2-7) (2) nrc (2-7) (2-6) (5-7) utd (3-7) (2-8) (3-4) ntt (2-9) (2-8) (3-4) upc-mr (3-9) (6-9) (8) rali (4-9) (3-9) (5-7) upc-jmc (2-9) (3-9) (5-7) systran (3-9) (3-9) (10) upv (10) (10) (9) Figure 7: Evaluation of translation to English on in-domain test data 112 English-French (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) nrc (1-5) (1-5) (1-6) upc-mr (1-4) (1-5) (1-6) upc-jmc (1-6) (1-6) (1-5) systran (2-7) (1-6) (7) utd (3-7) (3-7) (3-6) rali (1-7) (2-7) (1-6) ntt (4-7) (4-7) (1-5) English-Spanish (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) ms (1-5) (1-7) (7-8) upc-mr (1-4) (1-5) (1-4) utd (1-5) (1-6) (1-4) nrc (2-7) (1-6) (5-6) ntt (3-7) (1-6) (1-4) upc-jmc (2-7) (2-7) (1-4) rali (5-8) (6-8) (5-6) uedin-birch (6-9) (6-10) (7-8) upc-jg (9) (8-10) (9) upv (9-10) (8-10) (10) English-German (In Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-5) (3-5) ntt (1-5) (2-6) (1-3) upc-jmc (1-5) (1-4) (1-3) nrc (2-4) (1-5) (4-5) rali (3-6) (2-6) (1-4) systran (5-6) (3-6) (7) upv (7) (7) (6) Figure 8: Evaluation of translation from English on in-domain test data 113 French-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-5) (1-8) (1-4) cmu (1-8) (1-9) (4-7) systran (1-8) (1-7) (9) lcc (1-9) (1-9) (1-5) upc-mr (2-8) (1-7) (1-3) utd (1-9) (1-8) (3-7) ntt (3-9) (1-9) (3-7) nrc (3-8) (3-9) (3-7) rali (4-9) (5-9) (8) upv (10) (10) (10) Spanish-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-jmc (1-2) (1-6) (1-3) uedin-birch (1-7) (1-6) (5-8) nrc (2-8) (1-8) (5-7) ntt (2-7) (2-6) (3-4) upc-mr (2-8) (1-7) (5-8) lcc (4-9) (3-7) (1-4) utd (2-9) (2-8) (1-3) upc-jg (4-9) (7-9) (9) rali (4-9) (6-9) (6-8) upv (10) (10) (10) German-English (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1-4) (1-4) (7-9) uedin-phi (1-6) (1-7) (1) lcc (1-6) (1-7) (2-3) utd (2-7) (2-6) (4-6) ntt (1-9) (1-7) (3-5) nrc (3-8) (2-8) (7-8) upc-mr (4-8) (6-8) (4-6) upc-jmc (4-8) (3-9) (2-5) rali (8-9) (8-9) (8-9) upv (10) (10) (10) Figure 9: Evaluation of translation to English on out-of-domain test data 114 English-French (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1) (1) upc-jmc (2-5) (2-4) (2-6) upc-mr (2-4) (2-4) (2-6) utd (2-6) (2-6) (7) rali (4-7) (5-7) (2-6) nrc (4-7) (4-7) (2-5) ntt (4-7) (4-7) (3-6) English-Spanish (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) upc-mr (1-3) (1-6) (1-2) ms (1-7) (1-8) (6-7) utd (2-6) (1-7) (3-5) nrc (1-6) (2-7) (3-5) upc-jmc (2-7) (1-6) (3-5) ntt (2-7) (1-7) (1-2) rali (6-8) (4-8) (6-8) uedin-birch (6-10) (5-9) (7-8) upc-jg (8-9) (9-10) (9) upv (9) (8-9) (10) English-German (Out of Domain) Adequacy (rank) Fluency (rank) BLEU (rank) systran (1) (1-2) (1-6) upc-mr (2-3) (1-3) (1-5) upc-jmc (2-3) (3-6) (1-6) rali (4-6) (4-6) (1-6) nrc (4-6) (2-6) (2-6) ntt (4-6) (3-5) (1-6) upv (7) (7) (7) Figure 10: Evaluation of translation from English on out-of-domain test data 115 French-English In domain Out of Domain Adequacy Adequacy 0.3 0.3 • 0.2 0.2 0.1 0.1 -0.0 -0.0 -0.1 -0.1 -0.2 -0.2 -0.3 -0.3 -0.4 -0.4 -0.5 -0.5 -0.6 -0.6 -0.7 -0.7 •upv -0.8 -0.8 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 •upv •systran upcntt • rali upc-jmc • cc Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upv -0.5 •systran •upv upc -jmc • Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 • • • td t cc upc- • rali 21 22 23 24 25 26 27 28 29 30 31 15 16 17 18 19 20 21 22 Figure 11: Correlation between manual and automatic scores for French-English 116 Spanish-English Figure 12: Correlation between manual and automatic scores for Spanish-English -0.3 -0.4 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 •upv -0.4 •upv -0.3 In Domain •upc-jg Adequacy 0.3 0.2 0.1 -0.0 -0.1 -0.2 Out of Domain •upc-jmc •nrc •ntt Adequacy upc-jmc • • •lcc • rali • •rali -0.7 -0.5 -0.6 •upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 • •rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 ntt • upc-mr •lcc •utd •upc-jg •rali Fluency 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •upc-jmc • uedin-birch -0.5 -0.5 •upv 23 24 25 26 27 28 29 30 31 32 19 20 21 22 23 24 25 26 27 28 117 In Domain Out of Domain Adequacy Adequacy German-English 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 -0.5 -0.6 lcc • upc-jmc •systran •upv Fluency •ula •upc-mr •lcc 15 16 17 18 19 20 21 22 23 24 25 26 27 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 -0.3 -0.4 •systran •upv •uedin-phi -jmc •rali •systran -0.3 -0.4 -0.5 -0.6 •upv 12 13 14 15 16 17 18 19 20 0.4 0.3 0.2 0.1 -0.0 -0.1 -0.2 Fluency uedin-phi • • •utd •upc-jmc •upc-mr 0.4 •rali -0.3 -0.4 -0.5 •upv 12 13 14 15 16 17 18 19 20 0.3 0.2 0.1 -0.0 -0.1 -0.2 English-French In Domain Out of Domain Adequacy Adequacy .
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The composition operations are mapped onto operations that use concatenation to define the substrings spanned by the resulting structures.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels.
The departure from the traditional token-based tagging approach allow them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
36.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Note that in our formalism a weakhypothesis can abstain.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
On each step CoBoost searches for a feature and a weight so as to minimize either 40 or 40.
Here both parametric and non-parametric models are explored.
0
If the parse contains productions from outside our grammar the machine has no direct method for handling them (e.g. the resulting database query may be syntactically malformed).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
On the English side, however, the vertices (denoted by Ve) correspond to word types.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The final score is obtained from: max e;e0 j2fJ􀀀L;;Jg p($je; e0) Qe0 (e; I; f1; ; Jg; j); where p($je; e0) denotes the trigram language model, which predicts the sentence boundary $ at the end of the target sentence.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
While we used the standard metrics of the community, the we way presented translations and prompted for assessment differed from other evaluation campaigns.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
However their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our case- frame network.(Niyu et al., 1998) used unsupervised learning to ac quire gender, number, and animacy information from resolutions produced by a statistical pronoun resolver.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Alternatively, h can be thought of as defining a decision list of rules x y ranked by their "strength" h(x, y).
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
A promising direction for future work is to explicitly model a distribution over tags for each word type.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The first m pairs have labels yi, whereas for i = m + 1, , n the pairs are unlabeled.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The commentaries in PCC are all of roughly the same length, ranging from 8 to 10 sentences.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The complexity of the quasimonotone search is O(E3 J (R2+LR)).
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
In the denomi 11 We have two such lists, one containing about 17,000 full names, and another containing frequencies of.