Columns: source_text (string, lengths 27–368) · label (int64, values 0/1) · target_text (string, lengths 1–5.38k)
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
[Figure residue omitted. Recoverable captions: Figure 14: Correlation between manual and automatic scores for English-French. Figure 15: Correlation between manual and automatic scores for English-Spanish. Panels cover In Domain / Out of Domain, Adequacy and Fluency (also English-German), with systems including systran, ntt, nrc, rali, upv, upc-mr, utd, upc-jmc, and uedin-birch.]
The AdaBoost algorithm was developed for supervised learning.
0
The key point is that the second constraint can be remarkably powerful in reducing the complexity of the learning problem.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
(b) POS tagging accuracy is lowest for maSdar verbal nouns (VBG,VN) and adjectives (e.g., JJ).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Whereas Arabic linguistic theory assigns (1) and (2) to the class of pseudo verbs inna and her sisters, since they can be inflected, the ATB conventions treat (2) as a complementizer, which means that it must be the head of SBAR.
This corpus has several advantages: it is annotated at different levels.
0
This paper, however, provides a comprehensive overview of the data collection effort and its current state.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The search starts in the hypothesis (I, {}, 0).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
and “H” represents “Hanson Plc”.
They focused on phrases that connect two Named Entities and proceeded in two stages.
0
We proposed an unsupervised method to discover paraphrases from a large untagged corpus.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
There have been other kinds of efforts to discover paraphrase automatically from corpora.
It is probably the first analysis of Arabic parsing of this kind.
0
29 — 95.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We did have a problem classifying some modified noun phrases where the modified phrase does not represent a qualified or restricted form of the head, like “chairman” and “vice chairman”, as these are both represented by the keyword “chairman”.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
In MUC6, the best result is achieved by SRA (Krupka, 1995).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Our analysis begins with a description of syntactic ambiguity in unvocalized MSA text (§2).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For the examples given in (1) and (2) this certainly seems possible.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
However, dynamic programming can be used to find the shortest tour in exponential time, namely in O(n² · 2ⁿ), using the algorithm by Held and Karp.
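The Held-Karp recursion can be sketched in a few lines of Python. This is an illustrative implementation of the general algorithm, not code from the cited work; function and variable names are mine.

```python
from itertools import combinations

def held_karp(dist):
    """Held-Karp dynamic program: shortest round tour over n cities
    in O(n^2 * 2^n) time, instead of the O(n!) brute-force search."""
    n = len(dist)
    # dp[(mask, j)] = cost of the best path starting at city 0,
    # visiting exactly the cities in `mask`, and ending at city j.
    dp = {(1 | (1 << j), j): dist[0][j] for j in range(1, n)}
    for size in range(3, n + 1):
        for subset in combinations(range(1, n), size - 1):
            mask = 1 | sum(1 << c for c in subset)
            for j in subset:
                prev = mask ^ (1 << j)
                dp[(mask, j)] = min(dp[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
    full = (1 << n) - 1
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```

The DP state is exactly the pair used in the reordering search described here: a set of visited positions plus the last position; memoizing over those states yields the O(n² · 2ⁿ) bound.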
Their results show that their high-performance NER uses less training data than other systems.
0
The sources of our dictionaries are listed in Table 2.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
One may argue with these efforts on normalization, and ultimately their value should be assessed by assessing their impact on inter-annotator agreement.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
We have estimated the performance of IdentiFinder '99 at 200K words of training data from the graphs.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We then discuss how we adapt and generalize a boosting algorithm, AdaBoost, to the problem of named entity classification.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Both of the switching techniques, as well as the parametric hybridization technique, were also shown to be robust when a poor parser was introduced into the experiments.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We use the HSPELL (Har’el and Kenigsberg, 2004) wordlist as a lexeme-based lexicon for pruning segmentations involving invalid segments.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
We observe similar trends when using another measure – type-level accuracy (defined as the fraction of words correctly assigned their majority tag).

Table 4: Comparison of our method (FEATS) to state-of-the-art methods (1-1 / m-1 accuracy):

Language     BK10 EM      BK10 LBFGS   G10          FEATS Best   FEATS Median
English      48.3 / 68.1  56.0 / 75.5  – / –        50.9 / 66.4  47.8 / 66.4
Danish       42.3 / 66.7  42.6 / 58.0  – / –        52.1 / 61.2  43.2 / 60.7
Dutch        53.7 / 67.0  55.1 / 64.7  – / –        56.4 / 69.0  51.5 / 67.3
Portuguese   50.8 / 75.3  43.2 / 74.8  44.5 / 69.2  64.1 / 74.5  56.5 / 70.1
Spanish      – / –        40.6 / 73.2  – / –        58.3 / 68.9  50.0 / 57.2
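The type-level accuracy measure (majority-tag agreement per word type) can be sketched as follows. This is a hedged illustration of the definition quoted above; the helper and its names are hypothetical, not the evaluation code of the cited work.

```python
from collections import Counter, defaultdict

def type_level_accuracy(tokens, gold_tags, pred_tags):
    """Fraction of word types whose majority predicted tag matches
    the type's majority gold tag."""
    gold_by_type = defaultdict(Counter)
    pred_by_type = defaultdict(Counter)
    for w, g, p in zip(tokens, gold_tags, pred_tags):
        gold_by_type[w][g] += 1
        pred_by_type[w][p] += 1
    correct = sum(
        pred_by_type[w].most_common(1)[0][0] == gold_by_type[w].most_common(1)[0][0]
        for w in gold_by_type)
    return correct / len(gold_by_type)
```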
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
To evaluate the performance on the segmentation task, we report SEG, the standard harmonic means for segmentation Precision and Recall F1 (as defined in Bar-Haim et al. (2005); Tsarfaty (2006)) as well as the segmentation accuracy SEGTok measure indicating the percentage of input tokens assigned the correct exact segmentation (as reported by Cohen and Smith (2007)).
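Segmentation precision/recall F1 over boundary sets can be sketched minimally as below, assuming segment boundaries are represented as character offsets; the exact definitions in the cited papers (Bar-Haim et al., 2005; Tsarfaty, 2006) may differ in detail.

```python
def seg_f1(gold_breaks, pred_breaks):
    """Precision, recall, and harmonic-mean F1 over predicted
    segment boundaries, each given as a set of character offsets."""
    gold, pred = set(gold_breaks), set(pred_breaks)
    tp = len(gold & pred)  # boundaries both analyses agree on
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```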
They showed that it was useful to abstract away from the details of the formalism and to examine the nature of the derivation process as reflected by properties of the derivation trees; they find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The following context-free production captures the derivation step of the grammar shown in Figure 7, in which the trees in the auxiliary tree set are adjoined into themselves at the root node (address c).
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
We present two algorithms.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
We formulate the update as follows, where, for all ui ∈ Vf \ Vfl, γi(y) and κi are defined accordingly. We ran this procedure for 10 iterations.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
We further thank Khalil Simaan (ILLC, UvA) for his careful advice concerning the formal details of the proposal.
This paper talks about Pseudo-Projective Dependency Parsing.
0
We call this pseudoprojective dependency parsing, since it is based on a notion of pseudo-projectivity (Kahane et al., 1998).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
36 79.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Also, “agree” in the CC-domain is not a desirable keyword.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
63 95.
These clusters are computed using an SVD variant without relying on transitional structure.
0
β is the shared hyperparameter for the tag assignment prior and word feature multinomials.
Replacing this with a ranked evaluation seems to be more suitable.
0
Almost all annotators expressed their preference to move to a ranking-based evaluation in the future.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Almost all annotators reported difficulties in maintaining a consistent standard for fluency and adequacy judgements, but nevertheless most did not explicitly move towards a ranking-based evaluation.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Nonetheless, the results of the comparison with human judges demonstrate that there is mileage to be gained by incorporating models of these types of words.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
For example, in predicting whether a word belongs to a word class, fj is either true or false, and hj refers to the surrounding context:

fj(h, o) = 1 if o = true and previous word = “the”; 0 otherwise.

The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).
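A binary contextual feature of this kind can be sketched as a closure over its trigger conditions. This is an illustrative rendering only; the outcome label, history representation, and names below are hypothetical, not the original feature set.

```python
def make_feature(prev_word, outcome):
    """Build a binary maximum-entropy feature that fires (returns 1)
    only when the candidate outcome matches AND the previous word in
    the history matches; returns 0 otherwise."""
    def f(history, o):
        return 1 if o == outcome and history.get("prev_word") == prev_word else 0
    return f

# Feature that fires for outcome "begin" when the previous word is "the".
f_the = make_feature("the", "begin")
```

GIS would then fit one weight per such feature; only the indicator itself is shown here.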
Here we present two algorithms.
0
For t = 1, …, T and for j = 1, 2: where the weight is exp(−g^j(x_{j,i})). In practice, this greedy approach almost always results in an overall decrease in the value of Z̃co.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
7.96 5.55
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
4 53.7 43.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
A token that is allCaps will also be initCaps.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The complexity of the algorithm is O(E³ · J² · 2^J), where E is the size of the target language vocabulary.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.
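The block-level comparison described here feeds a standard two-sided sign test. A minimal sketch, assuming ties (blocks where the two systems score equally) have already been discarded:

```python
from math import comb

def sign_test(wins_a, wins_b):
    """Two-sided sign test: probability, under a fair coin, of a
    win/loss split at least as extreme as the one observed."""
    n = wins_a + wins_b
    k = min(wins_a, wins_b)
    # One-sided tail: P(at most k wins for the weaker system).
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

For example, if one system wins 8 of 10 blocks by BLEU, the test reports whether that split is plausibly chance.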
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Compared to last year’s shared task, the participants represent more long-term research efforts.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
As long as the main evaluation metric is dependency accuracy per word, with state-of-the-art accuracy mostly below 90%, the penalty for not handling non-projective constructions is almost negligible.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Finally, we model the probability of a new transliterated name as the product of PTN and PTN(hanzi_i) for each hanzi_i in the putative name. The foreign name model is implemented as a WFST, which is then summed with the WFST implementing the dictionary and morphological rules. (The current model is too simplistic in several respects.)
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The PROBING model can perform optimistic searches by jumping to any n-gram without needing state and without any additional memory.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The observed performance gains, coupled with the simplicity of model implementation, makes it a compelling alternative to existing more complex counterparts.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Unfortunately, we have much less data to work with than with the automatic scores.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled at Potsdam University.
0
This fact annoyed especially his dog...).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
While there might be some controversy about the exact definition of such a tagset, these 12 categories cover the most frequent part-of-speech and exist in one form or another in all of the languages that we studied.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
For a given partial hypothesis (C; j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
In previous work (Bean and Riloff, 1999), we developed an unsupervised learning algorithm that automatically recognizes definite NPs that are existential without syntactic modification because their meaning is universally understood.
Their results show that their high-performance NER uses less training data than other systems.
0
In sentence (1), McCann can be a person or an organization.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
(Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations).
BABAR achieved successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially beneficial for pronoun resolution.
0
Tables 4 and 5 also show that putting all of the contextual role KSs in play at the same time produces the greatest performance gain.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The inventions disclosed herein are the subject of a patent application owned by the University of Utah and licensed on an exclusive basis to Attensity Corporation.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Compared to last year’s shared task, the participants represent more long-term research efforts.
Here we present two algorithms.
0
Denote by gj(x) = Σt αt ht(x), j ∈ {1, 2}, the unthresholded strong hypothesis (i.e., fj(x) = sign(gj(x))).
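The unthresholded strong hypothesis can be rendered directly in code. This is an illustrative sketch of the generic boosting combination only; the weak learners and weights below are placeholders, not the classifiers of the cited work.

```python
def strong_hypothesis(weak_learners, alphas):
    """Return (g, f): g(x) = sum_t alpha_t * h_t(x) is the
    unthresholded strong hypothesis, and f(x) = sign(g(x)) is the
    final classifier."""
    def g(x):
        return sum(a * h(x) for h, a in zip(weak_learners, alphas))
    def f(x):
        return 1 if g(x) >= 0 else -1
    return g, f
```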
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state.
They have made use of local and global features to deal with instances of the same token in a document.
0
Moreover, if we want to test on a huge test corpus, indexing the whole corpus might prove computationally expensive.
This assumption, however, is not inherent to type-based tagging models.
0
For each feature type f and tag t, a multinomial ψtf is drawn from a symmetric Dirichlet distribution with concentration parameter β.
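A draw from a symmetric Dirichlet can be sketched via normalized Gamma samples, a standard construction. This illustrates only the prior draw described above, not the model's inference; the function name is mine.

```python
import random

def draw_symmetric_dirichlet(beta, k, rng=None):
    """Draw a k-dimensional multinomial psi from a symmetric
    Dirichlet with concentration beta: normalize k independent
    Gamma(beta, 1) samples."""
    rng = rng or random.Random(0)
    gammas = [rng.gammavariate(beta, 1.0) for _ in range(k)]
    total = sum(gammas)
    return [g / total for g in gammas]
```

Smaller beta concentrates mass on few components, which is the sparsity effect such priors are used for.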
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Memory usage is the same as with binary search and lower than with set.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
For unaligned words, we set the tag to the most frequent tag in the corresponding treebank.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
These models generally use less memory than ours but are much slower, even when cached.
This corpus has several advantages: it is annotated at different levels.
1
The corpus has been annotated with six different types of information, which are characterized in the following subsections.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
The ATB is disadvantaged by having fewer trees with longer average yields. (LDC A-E catalog numbers: LDC2008E61 (ATBp1v4), LDC2008E62 (ATBp2v3), and LDC2008E22 (ATBp3v3.1).)
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
We evaluate the system's performance by comparing its segmentation judgments with the judgments of a pool of human segmenters, and the system is shown to perform quite well.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
We estimate the probability of an unseen plural as the product of the probability estimate for its base word and the probability estimate just derived for unseen plurals in 们 men: p(base·们) = p(base) · p(unseen(们)).
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
However, there is a strong relationship between ni1s and the number of hanzi in the class.
It is probably the first analysis of Arabic parsing of this kind.
0
The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers.
Here we present two algorithms.
0
Assume that the two classifiers are "rote learners": that is, f1 and f2 are defined through look-up tables that list a label for each member of X1 or X2.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Dagan and Itai (Dagan and Itai, 1990) experimented with co-occurrence statistics that are similar to our lexical caseframe expectations.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
an event.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Supervised learning approaches have advanced the state-of-the-art on a variety of tasks in natural language processing, resulting in highly accurate systems.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Many researchers have developed coreference resolvers, so we will only discuss the methods that are most closely related to BABAR.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
We map the ATB morphological analyses to the shortened “Bies” tags for all experiments.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
This scenario is applicable to a large set of languages and has been considered by a number of authors in the past (Alshawi et al., 2000; Xi and Hwa, 2005; Ganchev et al., 2009).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Let us notate the set of previously unseen, or novel, members of a category X as unseen(X); thus, novel members of the set of words derived in 们 men will be denoted unseen(们).
They have made use of local and global features to deal with instances of the same token in a document.
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.
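Assigning such zone features can be sketched as below. This is a hedged reconstruction of the mechanism the example describes; the function name, document representation, and feature spellings are hypothetical.

```python
def zone_features(expansion, acronym, doc_tokens):
    """Given a multi-word name (e.g. Federal Communications
    Commission) whose acronym (e.g. FCC) also occurs in the document,
    mark the acronym A_unique and the expansion's tokens with
    A_begin / A_continue / A_end zone features."""
    if acronym not in doc_tokens or len(expansion) < 2:
        return {}
    feats = {acronym: "A_unique",
             expansion[0]: "A_begin",
             expansion[-1]: "A_end"}
    for w in expansion[1:-1]:
        feats[w] = "A_continue"
    return feats
```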
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
In all of our experiments, the binary file (whether mapped or, in the case of most other packages, interpreted) is loaded into the disk cache in advance so that lazy mapping will never fault to disk.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
For instance, for out-ofdomain English-French, Systran has the best BLEU and manual scores.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
({1, …, m} \ {l1, l2, l3}, m). In German-to-English translation, the monotonicity constraint is violated mainly with respect to the German verb group.
It is probably the first analysis of Arabic parsing of this kind.
0
The content does not necessarily reflect the views of the U.S. Government, and no official endorsement should be inferred.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
In the graphs, system scores are indicated by a point, the confidence intervals by shaded areas around the point.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We will describe the evaluation of such clusters in the next subsection.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
This is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative IN sentences as queries, then pooling the match results.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
The Leaf Ancestor metric measures the cost of transforming guess trees to the reference (Sampson and Babarczy, 2003).
These clusters are computed using an SVD variant without relying on transitional structure.
0
Table 5: Type-level English POS Tag Ranking. We list the top 5 and bottom 5 POS tags in the lexicon and the predictions of our models under the best hyperparameter setting.

Model     Top 5               Bottom 5
Gold      NNP NN JJ CD NNS    RBS PDT # ” ,
1TW       CD WRB NNS VBN NN   PRP$ WDT : MD .
+PRIOR    CD JJ NNS WP$ NN    RRB- , $ ” .
+FEATS    JJ NNS CD NNP UH    , PRP$ # . “
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
For example, a good classifier would identify Mrs. Frank as a person, Steptoe & Johnson as a company, and Honduras as a location.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We are interested in combining the substructures of the input parses to produce a better parse.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
We can only compare with Graça et al.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed that they were recognizable in polynomial time and generated only semilinear languages.
0
Some of the operations will be constant functions, corresponding to elementary structures, and will be written as f() = ⟨z1, …, zk⟩, where each zi is a constant, a string of terminal symbols a1 … an.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This class-based model gives reasonable results: for six radical classes, Table 1 gives the estimated cost for an unseen hanzi in the class occurring as the second hanzi in a double GIVEN name.