source_text: string (lengths 27–368)
label: int64 (values 0–1)
target_text: string (lengths 1–5.38k)
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
3 60.7 50.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
In Figure 4, we display the number of system comparisons for which we concluded statistical significance.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Section 2 describes our baseline techniques for SMT adaptation, and Section 3 describes the instance-weighting approach.
Due to many similarly performing systems, they were not able to draw strong conclusions on the question of correlation between manual and automatic evaluation metrics.
0
This data set of manual judgements should provide a fruitful resource for research on better automatic scoring methods.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
BABAR uses the log-likelihood statistic (Dunning, 1993) to evaluate the strength of a co-occurrence relationship.
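As a reference point, Dunning's log-likelihood statistic is the standard G² computation over a 2x2 contingency table of co-occurrence counts. The sketch below is a generic implementation of that statistic, not BABAR's code, and the count arguments are hypothetical.

```python
import math

def g2(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio (G^2) for a 2x2 contingency table.

    k11: co-occurrences of the word and the context
    k12: context occurrences without the word
    k21: word occurrences without the context
    k22: everything else
    """
    total = k11 + k12 + k21 + k22
    r1, r2 = k11 + k12, k21 + k22  # row marginals
    c1, c2 = k11 + k21, k12 + k22  # column marginals

    def term(obs, row, col):
        # Contribution obs * log(obs / expected); zero cells contribute 0.
        expected = row * col / total
        return obs * math.log(obs / expected) if obs > 0 else 0.0

    return 2.0 * (term(k11, r1, c1) + term(k12, r1, c2)
                  + term(k21, r2, c1) + term(k22, r2, c2))
```

A higher G² indicates a stronger (less chance-like) co-occurrence relationship; independent counts yield a value near zero.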
In this paper, the authors describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The resulting algorithm has a complexity of O(n!).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Hereafter, each pair of NE categories will be called a domain; e.g., the “Company – Company” domain, which we will call the CC-domain (Step 2).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
In application settings, this may be a profitable strategy.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
One of the difficulties in Natural Language Processing is the fact that there are many ways to express the same thing or event.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.
These clusters are computed using an SVD variant without relying on transitional structure.
0
The P(W|T, ψ) term in the lexicon component now decomposes as P(W|T, ψ) = ∏_{i=1}^{n} P(W_i|T_i, ψ). Such distinctions are not modeled by the standard HMM, which instead can model token-level frequency.
It is probably the first analysis of Arabic parsing of this kind.
0
We use the log-linear tagger of Toutanova et al.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The second weakness is purely conceptual, and probably does not affect the performance of the model.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
For example: “... fraud related to work on a federally funded sewage plant in Georgia.” In this case, Georgia is extracted: the NP containing it is a complement to the preposition in; the PP headed by in modifies the NP a federally funded sewage plant, whose head is the singular noun plant.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
2 60.7 56.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
If the semantic expectations do not intersect, then we know that the caseframes extract mutually exclusive types of noun phrases.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
We have not explored this strategy.
While many systems had similar performance, the results offered interesting insights, especially about the relative performance of statistical and rule-based systems.
0
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In the named entity domain these rules were the seed rules; each of these rules was given a strength of 0.9999.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words. Put another way, written Chinese simply lacks orthographic words.
A beam search concept is applied as in speech recognition.
0
A simple extension will be used to handle this problem.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
and f,.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
It rewards matches of n-gram sequences, but measures overall grammatical coherence at most indirectly.
There are clustering approaches that assign a single POS tag to each word type.
0
— similar results have been observed across multiple languages.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Up to now, most IE researchers have been creating paraphrase knowledge (or IE patterns) by hand and for specific tasks.
They have made use of local and global features to deal with instances of the same token in a document.
0
Among machine learning-based NERs, IdentiFinder has proven to be the best on the official MUC6 and MUC7 test data.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
The basic strategy is, for a given pair of entity types, to start with some examples, like several famous book title and author pairs; find expressions which contain those names; then, using the found expressions, find more author and book title pairs.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Clearly this is not the only way to estimate word-frequencies, however, and one could consider applying other methods: in particular since the problem is similar to the problem of assigning part-of-speech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags, one might consider a more sophisticated approach such as that described in Kupiec (1992); one could also use methods that depend on a small hand-tagged seed corpus, as suggested by one reviewer.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Notice that even though IG's and LFG's involve CFG-like productions, they are (linguistically) fundamentally different from CFG's because the composition operations need not be linear.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Unfortunately, the best completely unsupervised English POS tagger (that does not make use of a tagging dictionary) reaches only 76.1% accuracy (Christodoulopoulos et al., 2010), making its practical usability questionable at best.
Nevertheless, only a part of this corpus (10 texts), which the authors call the "core corpus", is annotated with all this information.
0
Nonetheless, the prospect of a network of annotated discourse resources seems particularly promising if not only a single annotation layer is used but a whole variety of them, so that a systematic search for correlations between them becomes possible, which in turn can lead to more explanatory models of discourse structure.
This assumption, however, is not inherent to type-based tagging models.
0
This distributional sparsity of syntactic tags is not unique to English. (The source code for the work presented in this paper is available at http://groups.csail.mit.edu/rbg/code/typetagging/.)
They have made use of local and global features to deal with instances of the same token in a document.
0
The secondary classifier in (Borthwick, 1999) uses information not just from the current article, but also from the whole test corpus, with an additional feature that indicates if the information comes from the same document or from another document.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
However, the point of RandLM is to scale to even larger data, compensating for this loss in quality.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimizing the number of free parameters and used the same hyperparameters for all language pairs.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Applications: The discovered paraphrases have multiple applications.
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
“Agree” is a subject control verb, which dominates another verb whose subject is the same as that of “agree”; the latter verb is generally the one of interest for extraction.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
β, T, ψ, Y, W: word types (W1, …)
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Only 2 links in the CC-domain (buy-purchase, acquire-acquisition) and 2 links (trader-dealer and head-chief) in the PC-domain are found in the same synset of WordNet 2.1 (http://wordnet.princeton.edu/).
In this paper, the authors describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
({1, …, m} ; l) ∈ ({1, …, m} \ {l, l1} ; l′) →
Here we present two algorithms.
0
context=x: the context for the entity.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Section 4 presents experimental results on two corpora: the MUC4 terrorism corpus, and Reuters texts about natural disasters.
A beam search concept is applied as in speech recognition.
0
kann 7. nicht 8.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
2 62.2 39.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval.
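The resampling procedure described above can be sketched generically. The per-sentence scores below stand in for recomputing the metric on each resampled sentence set (real BLEU must be recomputed at the corpus level on each resample, not averaged per sentence), and the function name is illustrative only.

```python
import random

def bootstrap_ci(sentence_scores, n_samples=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a corpus-level score.

    Repeatedly (n_samples times) draws a resample of sentences with
    replacement, computes the corpus score of each resample, and reads
    the (alpha/2, 1 - alpha/2) percentiles off the sorted scores.
    """
    rng = random.Random(seed)
    n = len(sentence_scores)
    stats = []
    for _ in range(n_samples):
        resample = [sentence_scores[rng.randrange(n)] for _ in range(n)]
        stats.append(sum(resample) / n)  # corpus score of this resample
    stats.sort()
    lo = stats[int((alpha / 2) * n_samples)]
    hi = stats[int((1 - alpha / 2) * n_samples) - 1]
    return lo, hi
```

With 1000 resamples and alpha = 0.05, this returns an estimated 95% confidence interval for the system's score.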
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In this case, Maury Cooper is extracted.
This corpus has several advantages: it is annotated at different levels.
0
What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
One annotator suggested that this was the case for as much as 10% of our test sentences.
They have made use of local and global features to deal with instances of the same token in a document.
0
In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts, using a Treebank grammar and a data-driven lexicon, outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
For brevity we omit the segments from the analysis, and so analysis of the form “fmnh” as f/REL mnh/VB is represented simply as REL VB.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
4 70.4 46.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Evaluation of Morphological Analysis.
The AdaBoost algorithm was developed for supervised learning.
0
Our derivation is slightly different from the one presented in (Schapire and Singer 98) as we restrict αt to be positive.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
In total, for the 2,000 NE category pairs, 5,184 keywords are found.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The authors provided us with a ratio between TPT and SRI under different conditions (lossy compression with the same weights, and lossy compression with retuned weights); these conditions make the value appropriate for estimating repeated run times, such as in parameter tuning.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Dropping the conditioning on θ for brevity, and letting c̄λ(s, t) = cλ(s, t) + γ u(s|t) and c̄λ(t) = … Note that the probabilities in (7) need only be evaluated over the support of p̃(s, t), which is quite small when this distribution is derived from a dev set.
These clusters are computed using an SVD variant without relying on transitional structure.
0
3 61.7 38.
This corpus has several advantages: it is annotated at different levels.
0
For developing these mechanisms, the possibility to feed in hand-annotated information is very useful.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
When a company buys another company, a paying event can occur, but these two phrases do not indicate the same event.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Previous work on morphological and syntactic disambiguation in Hebrew used different sets of data, different splits, differing annotation schemes, and different evaluation measures.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We present two algorithms.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
6 Conclusions.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
({1, …, m} \ {l1, l2, l3} ; m). In German-to-English translation the monotonicity constraint is violated mainly with respect to the German verb group.
This paper discusses “Exploiting Diversity in Natural Language Processing: Combining Parsers”.
0
The next two rows are results of oracle experiments.
It is probably the first analysis of Arabic parsing of this kind.
0
Lattice parsing (Chappelier et al., 1999) is an alternative to a pipeline that prevents cascading errors by placing all segmentation options into the parse chart.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
As long as the main evaluation metric is dependency accuracy per word, with state-of-the-art accuracy mostly below 90%, the penalty for not handling non-projective constructions is almost negligible.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Both implementations employ a state object, opaque to the application, that carries information from one query to the next; we discuss both further in Section 4.2.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
An anti-greedy algorithm, AG: instead of the longest match, take the shortest match.
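The greedy/anti-greedy contrast can be sketched as left-to-right dictionary matching that prefers either the longest or the shortest lexicon match at each position. This is a generic illustration with a toy lexicon, not the authors' implementation.

```python
def segment(text, lexicon, longest=True):
    """Left-to-right dictionary matching.

    longest=True  -> greedy (G): take the longest lexicon match.
    longest=False -> anti-greedy (AG): take the shortest lexicon match.
    Falls back to a single character when no lexicon entry matches.
    """
    out, i = [], 0
    while i < len(text):
        # Candidate match lengths, longest-first or shortest-first.
        lengths = (range(len(text) - i, 0, -1) if longest
                   else range(1, len(text) - i + 1))
        for k in lengths:
            if text[i:i + k] in lexicon:
                out.append(text[i:i + k])
                i += k
                break
        else:
            out.append(text[i])  # unknown character, emit as-is
            i += 1
    return out
```

For example, with the toy lexicon {"ab", "abc", "c"}, the greedy segmentation of "abc" is ["abc"] while the anti-greedy segmentation is ["ab", "c"].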
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
with the number of exactly matching guess trees.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
An analysis of nouns that occur in both the singular and the plural in our database reveals that there is indeed a slight but significant positive correlation (R² = 0.20, p < 0.005); see Figure 6.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
"c' 0 + 0 "0 ' • + a n t i g r e e d y x g r e e d y < > c u r r e n t m e t h o d o d i e t . o n l y • Taiwan 0 ·;; 0 c CD E i5 0"' 9 9 • Mainland • • • • -0.30.20.1 0.0 0.1 0.2 Dimension 1 (62%) Figure 7 Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Our coreference resolver performed well in two domains, and experiments showed that each contextual role knowledge source contributed valuable information.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
If all three sentences are in the same document, then even a human will find it difficult to classify McCann in (1) into either person or organization, unless there is some other information provided.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The natural baseline approach is to concatenate data from IN and OUT.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
There is a (costless) transition between the NC node and f,.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
However, the characterization given in the main body of the text is correct sufficiently often to be useful.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
then define the best segmentation to be the cheapest or best path in Id(I) ∘ D* (i.e., Id(I) composed with the transitive closure of D). Consider the abstract example illustrated in Figure 2.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
© 1996 Association for Computational Linguistics. [Figure 1: A Chinese sentence in (a), 'How do you say octopus in Japanese?', illustrating the lack of word boundaries; (b) the plausible segmentation ri4wen2 zhang1yu2 zen3me0 shuo1 ('Japanese' 'octopus' 'how' 'say'); (c) the implausible segmentation ri4 wen2zhang1 yu2 zen3me0 shuo1 ('Japan' 'essay' 'fish' 'how' 'say').]
In this paper, the authors describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The details are given in (Tillmann, 2000).
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Daumé (2007) applies a related idea in a simpler way, by splitting features into general and domain-specific versions.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The points enumerated above are particularly related to ITS, but analogous arguments can easily be given for other applications; see for example Wu and Tseng's (1993) discussion of the role of segmentation in information retrieval.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
For comparison to information-retrieval inspired baselines, e.g. (Lü et al., 2007), we select sentences from OUT using language model perplexities from IN.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
2 60.7 56.
The texts were annotated with the RSTtool.
0
And time is short.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
The weight on each sentence is a value in [0, 1] computed by a perceptron with Boolean features that indicate collection and genre membership.
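One plausible way to realize such a per-sentence weight is to squash a linear score over Boolean features into [0, 1]. The sketch below uses a logistic squashing and hypothetical feature names; it does not reproduce the paper's actual perceptron training.

```python
import math

def instance_weight(features, weights, bias=0.0):
    """Map Boolean collection/genre features to a weight in [0, 1].

    features: dict of Boolean indicators, e.g. {"genre=news": True}
              (feature names here are hypothetical).
    weights:  learned per-feature weights.
    Returns a logistic-squashed linear score, guaranteeing [0, 1].
    """
    score = bias + sum(weights.get(f, 0.0)
                       for f, on in features.items() if on)
    return 1.0 / (1.0 + math.exp(-score))
```

A sentence whose active features carry positive weight is pushed toward 1 (kept at nearly full weight); negative weights push it toward 0.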
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Consequently, we implemented our own annotation tool ConAno in Java (Stede, Heintze 2004), which provides specifically the functionality needed for our purpose.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Because we are working with only three parsers, the only situation in which context will help us is when it can indicate we should choose to believe a single parser that disagrees with the majority hypothesis instead of the majority hypothesis itself.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.
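The weighted combination itself is just a linear interpolation of the two models' probabilities; a minimal sketch, where lam is an assumed tuning weight in [0, 1]:

```python
def mixture_prob(p_in, p_out, lam):
    """Interpolate in-domain and out-of-domain model probabilities.

    lam controls OUT's contribution: lam=1 trusts IN entirely,
    lam=0 trusts OUT entirely. In practice lam would be tuned,
    e.g. on a held-out in-domain dev set.
    """
    return lam * p_in + (1.0 - lam) * p_out
```

Since both inputs are probabilities and lam lies in [0, 1], the result remains a valid probability; tuning lam trades off OUT's coverage against IN's domain fit.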
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
tai2du2 'Taiwan Independence.'
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
conceptual relationship in the discourse.
This assumption, however, is not inherent to type-based tagging models.
0
The second row represents the performance of the median hyperparameter setting.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
The test set included 2000 sentences from the Europarl corpus, and also 1064 sentences of out-of-domain test data.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The two knowledge sources that use semantic expectations, WordSemCFSem and CFSemCFSem, always return values of -1 or 0.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Maamouri et al.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
On the first of these-the B set-our system had 64% recall and 86% precision; on the second-the C set-it had 33% recall and 19% precision.