source_text: string (lengths 27 to 368)
label: int64 (values 0 and 1)
target_text: string (lengths 1 to 5.38k)
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
The transitive closure of the dictionary in (a) is composed with Id(input) (b) to form the WFST (c).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Domain | # of phrases | total phrases | accuracy: CC, 7 or more, 105, 87.6%; CC, 6 or less, 106, 67.0%; PC, 7 or more, 359, 99.2%; PC, 6 or less, 255, 65.1%. Table 1.
This assumption, however, is not inherent to type-based tagging models.
0
In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.
This paper conducted research in the area of automatic paraphrase discovery.
0
In general, different modalities (“planned to buy”, “agreed to buy”, “bought”) were considered to express the same relationship within an extraction setting.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
This result suggests the benefit of using the automatic discovery method.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This is less than the 694 judgements in the 2004 DARPA/NIST evaluation, or the 532 judgements in the 2005 DARPA/NIST evaluation.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
The distinctions in the ATB are linguistically justified, but complicate parsing.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
The second experiment is limited to data from PDT.5 The training part of the treebank was projectivized under different encoding schemes and used to train memory-based dependency parsers, which were run on the test part of the treebank, consisting of 7,507 sentences and 125,713 tokens.6 The inverse transformation was applied to the output of the parsers and the result compared to the gold standard test set.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
We then gather all phrases with the same keyword.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
(3) In sentence (1), McCann can be a person or an organization.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Now, for this application one might be tempted to simply bypass the segmentation problem and pronounce the text character-by-character.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Section 2 describes our baseline techniques for SMT adaptation, and section 3 describes the instance-weighting approach.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This is not completely surprising, since all systems use very similar technology.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
— I would also like to point out to commissioner Liikanen that it is not easy to take a matter to a national court.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
ren2 'person' is a fairly uncontroversial case of a monographemic word, and zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
Similarly, there is no compelling evidence that either of the syllables of bin1lang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more likely bin1lang2 is a disyllabic morpheme.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
1 61.2 43.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Hyperparameters Our model has two Dirichlet concentration hyperparameters: α is the shared hyperparameter for the token-level HMM emission and transition distributions.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
Both BBN and NYU have tagged their own data to supplement the official training data.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Maximizing (7) is thus much faster than a typical MERT run. where co(s, t) are the counts from OUT, as in (6).
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Each out-of-domain phrase pair is characterized by a set of simple features intended to reflect how useful it will be.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
All the texts were annotated by two people.
0
All annotations are done with specific tools and in XML; each layer has its own DTD.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Table 2 shows BABAR’s performance.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The highestorder N-gram array omits backoff and the index, since these are not applicable.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The out-of-domain test set differs from the Europarl data in various ways.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
This implies, therefore, that a major factor in the performance of a Chinese segmenter is the quality of the base dictionary, and this is probably a more important factor-from the point of view of performance alone-than the particular computational methods used.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
A procedural definition to restrict... In the approach described in (Berger et al., 1996), a morphological analysis is carried out and word morphemes rather than full-form words are used during the search.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, if {N P1, N P2, N P3} are all coreferent, then each NP must be linked to one of the other two NPs.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Table 8: Per-category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation). (a)/(b) Major phrasal and POS categories (Label, # gold, F1): ADJP 1216 59.45; SBAR 2918 69.81; FRAG 254 72.87; VP 5507 78.83; S 6579 78.91; PP 7516 80.93; NP 34025 84.95; ADVP 1093 90.64; WHNP 787 96.00. (c) Ten lowest-scoring (Collins, 2003)-style dependencies occurring more than 700 times (Parent, Head, Modifier, Dir, # gold, F1): NP NP TAG R 946 0.54; S S S R 708 0.57; NP NP ADJP R 803 0.64; NP NP NP R 2907 0.66; NP NP SBAR R 1035 0.67; NP NP PP R 2713 0.67; VP TAG PP R 3230 0.80; NP NP TAG L 805 0.85; VP TAG SBAR R 772 0.86; S VP NP L 961 0.87.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
This approach is compared to another reordering scheme presented in (Berger et al., 1996).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
It uses a maximum entropy framework and classifies each word given its features.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
These tag distributions are used to initialize the label distributions over the English vertices in the graph.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
token-level HMM to reflect lexicon sparsity.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
For the ‘core’ portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective.6 When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright — but see Sections 3.2 and 3.3 below.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Interpolation search has a more expensive pivot but performs less pivoting and reads, so it is slow on small data and faster on large data.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The goal of machine translation is the translation of a text given in some source language into a target language.
This assumption, however, is not inherent to type-based tagging models.
0
In practice, this sparsity constraint is difficult to incorporate in a traditional POS induction system (Me´rialdo, 1994; Johnson, 2007; Gao and Johnson, 2008; Grac¸a et al., 2009; Berg-Kirkpatrick et al., 2010).
In this paper, the authors note that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The ATB has a much higher fraction of nuclei per tree, and a higher type-level error rate.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
If the expression is longer or complicated (like “A buys B” and “A’s purchase of B”), it is called “paraphrase”, i.e. a set of phrases which express the same thing or event.
The corpus was annotated with different kinds of linguistic information.
0
The domains are the linguistic spans that are to receive an IS-partitioning, and the units are the (smaller) spans that can play a role as a constituent of such a partitioning.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Figure 1: Reordering for the German verb group. The German source "In diesem Fall kann mein Kollege am vierten Mai nicht besuchen Sie" is aligned with the English "In this case my colleague can not visit you on the fourth of May".
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
Maximizing (7) is thus much faster than a typical MERT run. where co(s, t) are the counts from OUT, as in (6).
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
An important aspect of the DempsterShafer model is that it operates on sets of hypotheses.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Mai.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
As noted in Section 4.4, disk cache state is controlled by reading the entire binary file before each test begins.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
For queries, we uniformly sampled 10 million hits and 10 million misses.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
In Table 5 we present results from small test corpora for the productive affixes handled by the current version of the system; as with names, the segmentation of morphologically derived words is generally either right or wrong.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, a story can mention “the FBI”, “the White House”, or “the weather” without any prior referent in the story.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The algorithm, called CoBoost, has the advantage of being more general than the decision-list learning algorithm. Input: (x1, y1), ..., (xm, ym); xi ∈ 2^X, yi ∈ {−1, +1}. Initialize D1(i) = 1/m.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
2.6 Co-reference.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Location list is processed into a list of unigrams and bigrams (e.g., New York).
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
First we present the non-parametric version of parser switching, similarity switching: The intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in common.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Sima’an et al. (2001) presented parsing results for a DOP tree-gram model using a small data set (500 sentences) and semiautomatic morphological disambiguation.
There is no global pruning.
0
Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the translation model.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We smooth Pr_f(p → (s, p)) for rare and OOV segments (s ∈ l, l ∈ L, s unseen) using a “per-tag” probability distribution over rare segments which we estimate using relative frequency estimates for once-occurring segments.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
The details are given in (Tillmann, 2000).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The accuracy results for segmentation, tagging and parsing using our different models and our standard data split are summarized in Table 1.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. (2010) for multilingual grammar induction.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Each word is terminated by an arc that represents the transduction between f and the part of speech of that word, weighted with an estimated cost for that word.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Further, it needs extra pointers in the trie, increasing model size by 40%.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
One obvious application is information extraction.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
As a result, Habash et al.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
In the named entity task, X1 might be the instance space for the spelling features, X2 might be the instance space for the contextual features.
This paper talks about Unsupervised Models for Named Entity Classification.
0
(7), such as the likelihood function used in maximum-entropy problems and other generalized additive models (Lafferty 99).
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved beneficial for resolving pronouns.
0
First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.
Combining multiple highly-accurate independent parsers yields promising results.
0
In each figure the upper graph shows the isolated constituent precision and the bottom graph shows the corresponding number of hypothesized constituents.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
This may be the sign of a maturing research environment.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
, December, then the feature MonthName is set to 1.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The path set of a tree set is the union of the path sets of trees in that tree set.
This paper conducted research in the area of automatic paraphrase discovery.
0
For example, in Figure 3, we can see that the phrases in the “buy”, “acquire” and “purchase” sets are mostly paraphrases.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
First, we learn weights on individual phrase pairs rather than sentences.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Pairwise comparison is done using the sign test.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
We show how a datadriven deterministic dependency parser, in itself restricted to projective structures, can be combined with graph transformation techniques to produce non-projective structures.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Overview of the method. 2.2 Step-by-Step Algorithm.
They found replacing it with a ranked evaluation to be more suitable.
0
Systems that generally do worse than others will receive a negative one.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The out-of-domain test set differs from the Europarl data in various ways.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Figure 1: A Chinese sentence in (a), 'How do you say octopus in Japanese?', illustrating the lack of word boundaries; (b) the plausible segmentation ri4wen2 | zhang1yu2 | zen3me0 | shuo1 ('Japanese' 'octopus' 'how' 'say'); (c) the implausible segmentation ri4 | wen2zhang1 | yu2 | zen3me0 | shuo1 ('Japan' 'essay' 'fish' 'how' 'say').
The AdaBoost algorithm was developed for supervised learning.
0
The task can be considered to be one component of the MUC (MUC-6, 1995) named entity task (the other task is that of segmentation, i.e., pulling possible people, places and locations from text before sending them to the classifier).
BABAR performed successfully in both the terrorism and natural disaster domains, and its contextual-role knowledge proved beneficial for resolving pronouns.
0
If no candidate satisfies this condition (which is often the case), then the anaphor is left unresolved.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
They are: (Footnote 5: We are grateful to an anonymous reviewer for pointing this out.)
This paper conducted research in the area of automatic paraphrase discovery.
0
Overview figure: Corpus → Step 1 (NE pair instances) → Step 2.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The algorithm can be viewed as heuristically optimizing an objective function suggested by (Blum and Mitchell 98); empirically it is shown to be quite successful in optimizing this criterion.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Lexical-knowledge-based approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence, and picks the best segmentation from the set of possible segmentations using a probabilistic or cost­ based scoring mechanism.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
However, the next step is clearly different.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
1
We describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
This paper talks about Unsupervised Models for Named Entity Classification.
0
This paper discusses the use of unlabeled examples for the problem of named entity classification.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Sometimes extracted phrases by themselves are not meaningful to consider without context, but we set the following criteria.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
The evidence may not say anything about whether A is more likely than B, only that C and D are not likely.
There is no global pruning.
0
In this section, we briefly review our translation approach.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Interestingly, Chang et al. report 80.67% recall and 91.87% precision on an 11,000 word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
In the first stage, we run a single step of label propagation, which transfers the label distributions from the English vertices to the connected foreign language vertices (say, Vf�) at the periphery of the graph.
They showed that it is useful to abstract away from the details of a formalism and examine the nature of its derivation process as reflected by properties of its trees, finding that several of the formalisms considered are closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Bresnan, Kaplan, Peters, and Zaenen (1982) argue that these structures are needed to describe crossed-serial dependencies in Dutch subordinate clauses.