Dataset columns:
source_text: string, lengths 27 to 368
label: int64, values 0 to 1
target_text: string, lengths 1 to 5.38k
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
They are: [Footnote 5: We are grateful to an anonymous reviewer for pointing this out.]
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Location list is processed into a list of unigrams and bigrams (e.g., New York).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
So, who won the competition?
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
raphy: 人 ren2 'person' is a fairly uncontroversial case of a monographemic word, and 中国 zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a digraphemic word.
Combining multiple highly-accurate independent parsers yields promising results.
0
Table 3 contains the results for evaluating our systems on the test set (section 22).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Let's say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse?
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as: p_0^cls ∝ E(n_1^cls) / (N · E(N_0^cls)) (2), where p_0^cls is the probability of one unseen hanzi in class cls, E(n_1^cls) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N_0^cls) is the expected number of unseen hanzi in class cls.
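A minimal sketch of this Good-Turing style estimate, assuming hypothetical inputs (a per-class count dictionary, the class inventory size, and the total observed count); the function name and the use of raw counts in place of expected values are illustrative assumptions, not the paper's code.

```python
def unseen_item_probability(class_counts, class_size, total_count):
    """Good-Turing style estimate of the probability of one unseen item in a
    class: proportional to E(n1_cls) / (N * E(N0_cls)). Expected values are
    approximated here by observed counts (a simplifying assumption)."""
    n1 = sum(1 for c in class_counts.values() if c == 1)  # items seen exactly once
    n0 = class_size - len(class_counts)                   # items never seen
    if n0 <= 0:
        return 0.0
    return n1 / (total_count * n0)

# Example: 3 observed items were seen once, 500 class members are unseen,
# and 10,000 items were observed in total.
counts = {"a": 1, "b": 1, "c": 1, "d": 7}
print(unseen_item_probability(counts, class_size=504, total_count=10_000))
```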
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
There have been other kinds of efforts to discover paraphrase automatically from corpora.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
The Potsdam Commentary Corpus
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
123 examples fell into the noise category.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Each feature group can be made up of many binary features.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
This differs from other implementations (Stolcke, 2002; Pauls and Klein, 2011) that use hash tables as nodes in a trie, as explained in the next section.
This paper talks about Unsupervised Models for Named Entity Classification.
0
For example, take ..., says Maury Cooper, a vice president at S.&P.
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Mutual information was shown to be useful in the segmentation task given that one does not have a dictionary.
It is probably the first analysis of Arabic parsing of this kind.
0
How do additional ambiguities caused by devocalization affect statistical learning?
These clusters are computed using an SVD variant without relying on transitional structure.
0
[table fragment: numeric scores only]
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
Otherwise, it is set to 0.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
computing the recall of the other's judgments relative to this standard.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
In Table 7 we give results for several evaluation metrics.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
In Section 2, we briefly review our approach to statistical machine translation.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In Section 3, we introduce our novel concept to word reordering and a DP-based search, which is especially suitable for the translation direction from German to English.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
This means that the rules in our grammar are of two kinds: (a) syntactic rules relating nonterminals to a sequence of non-terminals and/or PoS tags, and (b) lexical rules relating PoS tags to lattice arcs (lexemes).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Frequency is calculated by counting the number of distinct previous tokens that each token has (e.g., if Electric Corp. is seen 3 times, and Manufacturing Corp. is seen 5 times during training, and Corp. is not seen with any other preceding tokens, then the “frequency” of Corp. is 2).
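A small sketch of how such a "frequency" feature could be computed by counting distinct preceding tokens; the function name and corpus format are assumptions for illustration, not the authors' code.

```python
from collections import defaultdict

def distinct_predecessor_counts(token_sequences):
    """For each token, count how many distinct tokens immediately precede it
    anywhere in the training data (e.g. Corp. preceded only by Electric and
    Manufacturing gives a 'frequency' of 2)."""
    predecessors = defaultdict(set)
    for tokens in token_sequences:
        for prev, cur in zip(tokens, tokens[1:]):
            predecessors[cur].add(prev)
    return {tok: len(prevs) for tok, prevs in predecessors.items()}

corpus = [["Electric", "Corp."], ["Manufacturing", "Corp."], ["Electric", "Corp."]]
print(distinct_predecessor_counts(corpus)["Corp."])  # 2
```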
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
[table fragment: numeric scores only]
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(1991), Gu and Mao (1994), and Nie, Jin, and Hannan (1994).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
There are two differences between this method and the DL-CoTrain algorithm: spelling and contextual features, alternating between labeling and learning with the two types of features.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
We show translation results for three approaches: the monotone search (MonS), where no word reordering is allowed (Tillmann, 1997), the quasimonotone search (QmS) as presented in this paper and the IBM style (IbmS) search as described in Section 3.2.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Systems that generally do worse than others will receive a negative one.
A beam search concept is applied as in speech recognition.
0
{1, ..., J} denotes a coverage set including all positions from the starting position 1 to position J, and j ∈ {J − L, ..., J}.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Our analysis and comparison focuses primarily on one-to-one accuracy, since it is a stricter metric than many-to-one accuracy, but we also report many-to-one accuracy for completeness.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
For example, as Gan (1994) has noted, one can construct examples where the segmen­ tation is locally ambiguous but can be determined on the basis of sentential or even discourse context.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
First of all, most previous articles report performance in terms of a single percent-correct score, or else in terms of the paired measures of precision and recall.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
For a given construction class f, the Good-Turing estimate just discussed gives us an estimate of p(unseen(f) | f), the probability of observing a previously unseen instance of a construction in f given that we know that we have a construction in f.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
(b) does the translation have the same meaning, including connotations?
This paper talks about Unsupervised Models for Named Entity Classification.
0
Each h_t is a function that predicts a label (+1 or −1) on examples containing a particular feature x_t, while abstaining on other examples. The prediction of the strong hypothesis can then be written as f(x) = sign(Σ_t α_t h_t(x)). We now briefly describe how to choose h_t and α_t at each iteration.
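A minimal sketch of the strong hypothesis described above, with weak hypotheses that abstain (output 0) on examples lacking their feature; the representation of examples as feature sets and the names and weights used are illustrative assumptions.

```python
def weak_hypothesis(feature, label):
    """h_t: predict +1 or -1 on examples containing `feature`, abstain (0) otherwise."""
    return lambda example: label if feature in example else 0

def strong_hypothesis(weak_learners):
    """f(x) = sign(sum_t alpha_t * h_t(x)): the weighted vote of the weak hypotheses.
    Ties (or all learners abstaining) default to +1 in this toy sketch."""
    def predict(example):
        score = sum(alpha * h(example) for alpha, h in weak_learners)
        return 1 if score >= 0 else -1
    return predict

# Two weak hypotheses with hypothetical weights alpha_t.
learners = [(0.8, weak_hypothesis("Mr.", +1)), (0.5, weak_hypothesis("Inc.", -1))]
f = strong_hypothesis(learners)
print(f({"Mr.", "Cooper"}))  # +1
```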
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
M spawns as many processes as there are ways of breaking up the substrings and rules with A on their left-hand side.
Replacing this with a ranked evaluation seems to be more suitable.
0
On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2.
A beam search concept is applied as in speech recognition.
0
3.2 Reordering with IBM Style.
BABAR's performance in both the terrorism and natural disaster domains, and its contextual-role knowledge applied to pronouns, showed successful results.
0
Third, all remaining anaphora are evaluated by 11 different knowledge sources: the four contextual role knowledge sources just described and seven general knowledge sources.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
An extension of the TAG system was introduced by Joshi et al. (1975) and later redefined by Joshi (1987) in which the adjunction operation is defined on sets of elementary trees rather than single trees.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Within the RST “user community” there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).
This paper conducted research in the area of automatic paraphrase discovery.
0
Before explaining our method in detail, we present a brief overview in this subsection.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
[Table fragment: Affix, Pron, Base category, N found, N missed (recall), N correct (precision)] The second issue is that rare family names can be responsible for overgeneration, especially if these names are otherwise common as single-hanzi words.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Others depend upon various lexical heuristics: for example Chen and Liu (1992) attempt to balance the length of words in a three-word window, favoring segmentations that give approximately equal length for each word.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
All notions of word, with the exception of the orthographic word, are as relevant in Chinese as they are in English, and just as is the case in other languages, a word in Chinese may correspond to one or more symbols in the orthography. [Footnote 1: For a related approach to the problem of word-segmentation in Japanese, see Nagata (1994), inter alia.]
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The cost of storing these averages, in bits, is [...]. Because there are comparatively few unigrams, we elected to store them byte-aligned and unquantized, making every query faster.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Arguably this consists of about three phonological words.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
An important subproblem of language model storage is therefore sparse mapping: storing values for sparse keys using little memory then retrieving values given keys using little time.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
For t = 1, ..., T:
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
One side of the decision making process is when we choose to believe a constituent should be in the parse, even though only one parser suggests it.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The best answer to this is: many research labs have very competitive systems whose performance is hard to tell apart.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
If the context w_f^n will never extend to the right (i.e. w_f^n v is not present in the model for all words v) then no subsequent query will match the full context.
The features were weighted within a logistic model that gave an overall weight applied to each phrase pair; the resulting MAP-smoothed relative-frequency estimates were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Second, rather than relying on a division of the corpus into manually-assigned portions, we use features intended to capture the usefulness of each phrase pair.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
pre-processing.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
If so, the CF Network reports that the anaphor and candidate may be coreferent.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Each connected path (l1 ... lk) ∈ L corresponds to one morphological segmentation possibility of W. The Parser: Given a sequence of input tokens W = w1 ... wn and a morphological analyzer, we look for the most probable parse tree π s.t.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The difference in performance between pronouns and definite noun phrases surprised us.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
For example, one parser could be more accurate at predicting noun phrases than the other parsers.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
This approach leads to a search procedure with complexity O(E^3 · J^4).
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.
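The following is a sketch of the linear mixture and training criterion this sentence describes, in generic notation; the symbols (λ, p̃, s̄, t̄) are illustrative, not necessarily the paper's own.

```latex
p(\bar{t} \mid \bar{s}) \;=\; \lambda_{\mathrm{IN}}\, p_{\mathrm{IN}}(\bar{t} \mid \bar{s})
  \;+\; \lambda_{\mathrm{OUT}}\, p_{\mathrm{OUT}}(\bar{t} \mid \bar{s}),
\qquad
\hat{\lambda} \;=\; \arg\max_{\lambda}\; \sum_{(\bar{s},\,\bar{t})}
  \tilde{p}(\bar{s}, \bar{t})\, \log p(\bar{t} \mid \bar{s})
```

Here λ_IN + λ_OUT = 1 with λ ≥ 0, and p̃ is the empirical joint phrase-pair distribution extracted from the development set.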
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
In fact, training on the official training data is not suitable as the articles in this data set are entirely about aviation disasters, and the test data is about air vehicle launching.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
There are many techniques for improving language model speed and reducing memory consumption.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This WFST is then summed with the WFST implementing the dictionary and morphological rules, and the transitive closure of the resulting transducer is computed; see Pereira, Riley, and Sproat (1994) for an explanation of the notion of summing WFSTs. [Section heading: Conceptual Improvements over Chang et al.'s Model.]
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Table 2 shows BABAR’s performance.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
[table fragment: numeric scores only]
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
He used an additional maximum entropy classifier that tries to correct mistakes by using reference resolution.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The proof is given in (Tillmann, 2000).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This would result in better rest cost estimation and better pruning. In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
2.2.1 The Caseframe Representation. Information extraction (IE) systems use extraction patterns to identify noun phrases that play a specific role in an event. [Footnote 1: Our implementation only resolves NPs that occur in the same document, but in retrospect, one could probably resolve instances of the same existential NP in different documents too.]
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
(student+plural) 'students,' which is derived by the affixation of the plural affix 们 men0 to the noun 学生 xue2sheng1.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
These methods demonstrated the benefits of incorporating linguistic features using a log-linear parameterization, but require elaborate machinery for training.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
If the key distribution’s range is also known (i.e. vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement.
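A sketch of interpolation search over a sorted array, including the variant that uses a known key range (e.g. vocabulary identifiers from 0 to the number of words) to pick the first pivot without reading the array endpoints; this illustrates the idea rather than KenLM's implementation, and the function name is hypothetical.

```python
def interpolation_search(A, key, lo_val=None, hi_val=None):
    """Find `key` in sorted array A, returning its index or -1. If the key range
    is known in advance (lo_val/hi_val), the first pivot is estimated without
    touching A[0] and A[-1]."""
    lo, hi = 0, len(A) - 1
    low_val = A[lo] if lo_val is None else lo_val
    high_val = A[hi] if hi_val is None else hi_val
    while lo <= hi and low_val <= key <= high_val:
        if high_val == low_val:
            pos = lo
        else:
            # Estimate position assuming keys are roughly uniformly distributed.
            pos = lo + (hi - lo) * (key - low_val) // (high_val - low_val)
        if A[pos] == key:
            return pos
        if A[pos] < key:
            lo, low_val = pos + 1, A[pos]
        else:
            hi, high_val = pos - 1, A[pos]
    return -1

vocab_ids = [0, 3, 4, 9, 12, 17]
print(interpolation_search(vocab_ids, 9, lo_val=0, hi_val=17))  # 3
```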
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Although the ultimate goal of the Verbmobil project is the translation of spoken language, the input used for the translation experiments reported on in this paper is the (more or less) correct orthographic transcription of the spoken sentences.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
[affiliation footer fragment: 600 Mountain Avenue, Murray Hill, NJ 07974, USA]
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
We divide up each test set into blocks of 20 sentences (100 blocks for the in-domain test set, 53 blocks for the out-of-domain test set), check for each block, if one system has a higher BLEU score than the other, and then use the sign test.
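A sketch of the block-level sign test described here, using a two-sided binomial test; the helper name and the use of scipy are assumptions for illustration, not the paper's code.

```python
from scipy.stats import binomtest

def block_sign_test(bleu_a, bleu_b):
    """Given per-block BLEU scores for two systems, count blocks where each wins
    and return the two-sided sign-test p-value (tied blocks are dropped)."""
    wins_a = sum(a > b for a, b in zip(bleu_a, bleu_b))
    wins_b = sum(b > a for a, b in zip(bleu_a, bleu_b))
    n = wins_a + wins_b
    return binomtest(wins_a, n, 0.5).pvalue

# e.g. one system better on 20 blocks and worse on 80 blocks:
print(block_sign_test([1.0] * 20 + [0.0] * 80, [0.0] * 20 + [1.0] * 80))
```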
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
In the natural disasters domain, agents are often forces of nature, such as hurricanes or wildfires.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
For the LM, adaptive weights are set so as to maximize the likelihood of an empirical distribution under a linear mixture of the domain-specific models, where α is a weight vector containing an element αi for each domain (just IN and OUT in our case), pi are the corresponding domain-specific models, and ˜p(w, h) is an empirical distribution from a target-language training corpus; we used the IN dev set for this.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In any event, to date, we have not compared different methods for deriving the set of initial frequency estimates.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
On several languages, we report performance exceeding that of more complex state-of-the art systems.1
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
TTS systems in general need to do more than simply compute the.
The AdaBoost algorithm was developed for supervised learning.
0
2 for the accuracy of the different methods.
Combining multiple highly-accurate independent parsers yields promising results.
0
The parser switching oracle is the upper bound on the accuracy that can be achieved on this set in the parser switching framework.
Their results show that their high performance NER use less training data than other systems.
0
The probability of the classes assigned to the words in a sentence in a document is defined as the product of the probabilities of the classes of the individual words, where each word-level class probability is determined by the maximum entropy classifier.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
For 2 < n < N, we use a hash table mapping from the n-gram to the probability and backoff3.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
In (b) “they” refers to the kidnapping victims, but in (c) “they” refers to the armed men.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
[table fragment: numeric scores only]
There is no global pruning.
0
Using the concept of inverted alignments, we explicitly take care of the coverage constraint by introducing a coverage set C of source sentence positions that have been already processed.
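A small sketch of tracking such a coverage set C as a bitmask over source positions; the helper names are hypothetical and this is purely illustrative, not the paper's decoder.

```python
def cover(coverage, j):
    """Mark source position j (1-based) as processed in the bitmask `coverage`."""
    return coverage | (1 << (j - 1))

def is_covered(coverage, j):
    return bool((coverage >> (j - 1)) & 1)

def translation_finished(coverage, J):
    """True when all source positions 1..J are in the coverage set."""
    return coverage == (1 << J) - 1

c = 0
for j in (1, 3, 2):
    c = cover(c, j)
print(is_covered(c, 2), translation_finished(c, J=3))  # True True
```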
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This step ensures that the most frequent terms for each domain are labeled (in case some of them are not in WordNet) and labeled with the sense most appropriate for the domain.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
t(i).
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
In the CC-domain, there are 32 sets of phrases which contain more than 2 phrases.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Not only is concatenation of these strings possible, but head wrapping can be used to split a string and wrap it around another string.
Replacing this with a ranked evaluation seems to be more suitable.
0
It was also proposed to allow annotators to skip sentences that they are unable to judge.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
BABAR achieved recall in the 42–50% range for both domains, with 76% precision overall for terrorism and 87% precision for natural disasters.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Thus at each iteration the method induces at most n × k rules, where k is the number of possible labels (k = 3 in the experiments in this paper).