Columns: source_text (string, length 27–368), label (int64, values 0–1), target_text (string, length 1–5.38k). Each record below is flattened over three lines: source_text, label, target_text.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature, firstword.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
In the natural disasters domain, agents are often forces of nature, such as hurricanes or wildfires.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
This revealed interesting clues about the properties of automatic and manual scoring.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Sparse lookup is a key subproblem of language model queries.
Here both parametric and non-parametric models are explored.
0
We have presented two general approaches to studying parser combination: parser switching and parse hybridization.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Reflexive pronouns with only 1 NP in scope.
The features were weighted within a logistic model that gave an overall weight, which was applied to the phrase pair's MAP-smoothed relative-frequency estimates; these were then combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Daumé (2007) applies a related idea in a simpler way, by splitting features into general and domain-specific versions.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
First, we parsed the training corpus, collected all the noun phrases, and looked up each head noun in WordNet (Miller, 1990).
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
We use head-finding rules specified by a native speaker.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, it is possible to personify any noun, so in children's stories or fables, [...].
They found replacing it with a ranked evaluation to be more suitable.
0
By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision p_n for each n-gram order n. These values for n-gram precision are combined into a BLEU score: BLEU = BP · exp((1/4) · sum_{n=1..4} log p_n). The formula for the BLEU metric also includes a brevity penalty BP = min(1, e^(1 − r/c)) for too-short output, based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.
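A minimal single-reference sketch of this computation in Python (the function names are illustrative, not from any particular toolkit):

    import math
    from collections import Counter

    def ngrams(tokens, n):
        # Multiset of n-grams of order n in a token list.
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    def bleu(candidate, reference, max_n=4):
        # Clipped n-gram precision p_n for each order n = 1..max_n.
        log_p = []
        for n in range(1, max_n + 1):
            cand, ref = ngrams(candidate, n), ngrams(reference, n)
            matched = sum(min(cnt, ref[g]) for g, cnt in cand.items())
            total = sum(cand.values())
            log_p.append(math.log(max(matched, 1e-9) / max(total, 1)))
        # Brevity penalty from output length c and reference length r.
        c, r = len(candidate), len(reference)
        bp = 1.0 if c > r else math.exp(1 - r / max(c, 1))
        return bp * math.exp(sum(log_p) / max_n)

Real implementations aggregate the matched and total counts over the whole test set before taking the ratio, and the tokenization of both sides must match, which is the sensitivity noted above.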
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Eight out of the thirteen errors in the high frequency phrases in the CC-domain are the phrases in “agree”.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
Syntactic universals are a well studied concept in linguistics (Carnie, 2002; Newmeyer, 2005), and were recently used in similar form by Naseem et al. (2010) for multilingual grammar induction.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
This paper presents methods to query N-gram language models, minimizing time and space costs.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
4.2 Global Features.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
It was developed in response to the non-terminal/terminal bias of Evalb, but Clegg and Shepherd (2005) showed that it is also a valuable diagnostic tool for trees with complex deep structures such as those found in the ATB.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
termined by the category of the word that follows it.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For a given "word" in the automatic segmentation, if at least k of the human judges agree that this is a word, then that word is considered to be correct.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Systran submitted their commercial rule-based system that was not tuned to the Europarl corpus.
Two general approaches are presented and two combination techniques are described for each approach.
0
This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search.
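A sketch of that hybrid strategy in Python over a sorted list of integer keys, with the 4096 cutoff quoted above (the names are ours, not KenLM's):

    import bisect

    def hybrid_search(keys, target, threshold=4096):
        # Interpolation search while the range is wide...
        lo, hi = 0, len(keys) - 1
        while hi - lo > threshold:
            span = keys[hi] - keys[lo]
            if span == 0:
                break
            # Guess the position from the key distribution, then clamp.
            mid = lo + (target - keys[lo]) * (hi - lo) // span
            mid = min(max(mid, lo), hi)
            if keys[mid] < target:
                lo = mid + 1
            elif keys[mid] > target:
                hi = mid - 1
            else:
                return mid
        # ...then binary search once 4096 or fewer entries remain.
        i = bisect.bisect_left(keys, target, lo, hi + 1)
        return i if i <= hi and keys[i] == target else -1

Interpolation search narrows the range much faster than binary search on uniformly distributed keys, but each step costs a division, so finishing small ranges with binary search is the better trade.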
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
In both cases, SRILM walks its trie an additional time to minimize context as mentioned in Section 4.1.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
What is important and is not expressed by the notation is the so-called coverage constraint: each source position j should be 'hit' exactly once by the path of the inverted alignment b_1^I = b_1 ... b_i ... b_I. Using the inverted alignments in the maximum approximation, we obtain as search criterion: max_I { p(J | I) · max_{e_1^I} [ prod_{i=1..I} p(e_i | e_{i-1}, e_{i-2}) · max_{b_1^I} prod_{i=1..I} p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) ] } = max_I { p(J | I) · max_{e_1^I, b_1^I} prod_{i=1..I} p(e_i | e_{i-1}, e_{i-2}) · p(b_i | b_{i-1}, I, J) · p(f_{b_i} | e_i) }, where the two products over i have been merged into a single product over i. p(e_i | e_{i-1}, e_{i-2}) is the trigram language model probability.
The use of global features has shown excellent results in performance on MUC-6 and MUC-7 test data.
0
For example, if the token is found in the list of person first names, the feature PersonFirstName is set to 1.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
If the token is the first word of a sentence, then this feature is set to 1.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
2.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
With each iteration more examples are assigned labels by both classifiers, while a high level of agreement (> 94%) is maintained between them.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Pseudo-labels are formed by taking seed labels on the labeled examples, and the output of the fixed classifier on the unlabeled examples.
Two general approaches are presented and two combination techniques are described for each approach.
0
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
However, this optimistic search would not visit the entries necessary to store backoff information in the outgoing state.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In Chinese text, individual characters of the script, to which we shall refer by their traditional name of hanzi, are written one after another with no intervening spaces; a Chinese sentence is shown in Figure 1. Partly as a result of this, the notion "word" has never played a role in Chinese philological tradition, and the idea that Chinese lacks anything analogous to words in European languages has been prevalent among Western sinologists; see DeFrancis (1984).
The AdaBoost algorithm was developed for supervised learning.
0
(3)), with one term for each classifier.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Syntactic decoders, such as cdec (Dyer et al., 2010), build state from null context then store it in the hypergraph node for later extension.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
For the inverted alignment probability p(b_i | b_{i-1}, I, J), we drop the dependence on the target sentence length I. 2.2 Word Joining.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
25 16.
This assumption, however, is not inherent to type-based tagging models.
0
Our work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
Because of this threshold, very few NE instance pairs could be used and hence the variety of phrases was also limited.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Second, we show that although the Penn Arabic Treebank is similar to other tree- banks in gross statistical terms, annotation consistency remains problematic.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
5 68.1 34.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Both MENE and IdentiFinder used more training data than we did (we used only the official MUC-6 and MUC-7 training data).
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Still, for a good number of sentences, we do have this direct comparison, which allows us to apply the sign test, as described in Section 2.2.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Training and testing is based on the Europarl corpus.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
It is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situations.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
In the case of, the most common usage is as an adverb with the pronunciation jiang1, so that variant is assigned the estimated cost of 5.98, and a high cost is assigned to nominal usage with the pronunciation jiang4.
The corpus was annotated with different linguistic information.
0
Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
For t = 1, ..., T and for j = 1, 2: where D̃_i = exp(−ỹ_i · g_j(x_{j,i})). In practice, this greedy approach almost always results in an overall decrease in the value of Z_co.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l'époétine alfa.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Using the virtual distribution D̃_t(i) and pseudo-labels ỹ_i, values for W_0, W_+ and W_− can be calculated for each possible weak hypothesis (i.e., for each feature x ∈ X_j); the weak hypothesis with minimal value for W_0 + 2√(W_+ W_−) can be chosen as before; and the weight for this weak hypothesis α_t = (1/2) ln(W_+ / W_−) can be calculated.
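Read as pseudo-code, that selection rule amounts to the following sketch (our names: d holds the virtual distribution, y the pseudo-labels in {-1, +1}, and h a candidate weak hypothesis's outputs in {-1, 0, +1}, where 0 means abstain):

    import math

    def score_weak_hypothesis(d, y, h):
        # W+ : weight where the hypothesis fires and agrees with the pseudo-label.
        w_plus = sum(di for di, yi, hi in zip(d, y, h) if hi != 0 and hi == yi)
        # W- : weight where it fires and disagrees.
        w_minus = sum(di for di, yi, hi in zip(d, y, h) if hi != 0 and hi != yi)
        # W0 : weight where it abstains.
        w_zero = sum(di for di, hi in zip(d, h) if hi == 0)
        eps = 1e-12  # guard against division by zero / log of zero
        z = w_zero + 2 * math.sqrt(w_plus * w_minus)  # quantity to minimize
        alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
        return z, alpha

The hypothesis with the smallest z is selected, and alpha is the weight it contributes to the combined classifier.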
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
In the last example, the less restrictive IbmS word reordering leads to a better translation, although the QmS translation is still acceptable.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
"c' 0 + 0 "0 ' • + a n t i g r e e d y x g r e e d y < > c u r r e n t m e t h o d o d i e t . o n l y • Taiwan 0 ·;; 0 c CD E i5 0"' 9 9 • Mainland • • • • -0.30.20.1 0.0 0.1 0.2 Dimension 1 (62%) Figure 7 Classical metric multidimensional scaling of distance matrix, showing the two most significant dimensions.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
For derived words that occur in our corpus we can estimate these costs as we would the costs for an underived dictionary entry.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This latter evaluation compares the performance of the system with that of several human judges since, as we shall show, even people do not agree on a single correct way to segment a text.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Topicalization of NP subjects in SVO configurations causes confusion with VO (pro-drop).
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Eight out of the thirteen errors in the high frequency phrases in the CC-domain are the phrases in “agree”.
It is probably the first analysis of Arabic parsing of this kind.
0
Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Now we have sets of phrases which share a keyword and we have links between those sets.
Two general approaches are presented and two combination techniques are described for each approach.
0
The maximum precision row is the upper bound on accuracy if we could pick exactly the correct constituents from among the constituents suggested by the three parsers.
Replacing this with a ranked evaluation seems to be more suitable.
0
For the manual scoring, we can distinguish only half of the systems, both in terms of fluency and adequacy.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval.
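A sketch of that resampling loop in Python, assuming a corpus-level metric function corpus_score (illustrative; e.g., BLEU over a list of hypothesis-reference pairs):

    import random

    def bootstrap_interval(pairs, corpus_score, n_samples=1000, alpha=0.05):
        # pairs: list of (hypothesis, reference) sentence pairs for one system.
        scores = []
        for _ in range(n_samples):
            # Resample with replacement up to the original corpus size.
            sample = [random.choice(pairs) for _ in pairs]
            scores.append(corpus_score(sample))
        scores.sort()
        lo = scores[int(n_samples * alpha / 2)]
        hi = scores[int(n_samples * (1 - alpha / 2)) - 1]
        return lo, hi  # e.g., a 95% confidence interval for alpha = 0.05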
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
For the purposes of EM, the "observed" data is {(x_1, y_1), ..., (x_m, y_m), x_{m+1}, ..., x_n}, and the hidden data is {y_{m+1}, ..., y_n}.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The correct resolution in sentence (c) depends on knowledge that kidnappers frequently blindfold their victims.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Figure 2 shows examples of lexical expectations that were learned for both domains.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
For monolingual treebank data we relied on the CoNLL-X and CoNLL-2007 shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007).
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
In order to train this error-correction model, he divided his training corpus into 5 portions of 20% each.
BABAR achieved successful results in both the terrorism and natural disaster domains, and the contextual-role knowledge proved especially helpful for resolving pronouns.
0
For example, BABAR learned that agents that “assassinate” or “investigate a cause” are usually humans or groups (i.e., organizations).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Hence, we take the probability of the event fmnh analyzed as REL VB to be p(f | REL) · p(mnh | VB). This means that we generate f and mnh independently depending on their corresponding PoS tags, and the context (as well as the syntactic relation between the two) is modeled via the derivation resulting in a sequence REL VB spanning the form fmnh, based on linear context.
The texts were annotated with the RSTtool.
0
The motivation for our more informal approach was the intuition that there are so many open problems in rhetorical analysis (and more so for German than for English; see below) that the main task is qualitative investigation, whereas rigorous quantitative analyses should be performed at a later stage.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
nator, the N31s can be measured well by counting, and we replace the expectation by the observation.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
These systems are similar to those described by Pollard (1984) as Generalized Context-Free Grammars (GCFG's).
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
There, a lattice is used to represent the possible sentences resulting from an interpretation of an acoustic model.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Translation errors are reported in terms of multireference word error rate (mWER) and subjective sentence error rate (SSER).
It is probably the first analysis of Arabic parsing of this kind.
0
Linguistic intuitions like those in the previous section inform language-specific annotation choices.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Our results suggest that current parsing models would benefit from better annotation consistency and enriched annotation in certain syntactic configurations.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
When this feature type was included, CoBoost chose this default feature at an early iteration, thereby giving non-abstaining pseudo-labels for all examples, with eventual convergence to the two classifiers agreeing by assigning the same label to almost all examples.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
In such cases we assign all of the estimated probability mass to the form with the most likely pronunciation (determined by inspection), and assign a very small probability (a very high cost, arbitrarily chosen to be 40) to all other variants.
They have made use of local and global features to deal with the instances of the same token in a document.
0
For example: McCann initiated a new global system.
This assumption, however, is not inherent to type-based tagging models.
0
Its only purpose is [...]. (Footnote 3: This follows since each θ_t has S_t − 1 parameters and [...].)
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
The developer explained that the loading process requires extra memory that it then frees. Based on the ratio to SRI's speed reported in Guthrie and Hepple (2010) under different conditions.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from [...]. Table 2: Coverage set hypothesis extensions for the IBM reordering.
They have made use of local and global features to deal with the instances of the same token in a document.
0
(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
As far as we know, no other NERs have used information from the whole document (global) as well as information within the same sentence (local) in one framework.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
(Footnote 11: www.ling.unipotsdam.de/sfb/projekt a3.php. Footnote 12: This step was carried out in the course of the diploma thesis work of David Reitter (2003), which deserves special mention here.)
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
In this specific case, as these two titles could fill the same column of an IE table, we regarded them as paraphrases for the evaluation.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
The out-of-domain test set differs from the Europarl data in various ways.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The first is an evaluation of the system's ability to mimic humans at the task of segmenting text into word-sized units; the second evaluates the proper-name identification; the third measures the performance on morphological analysis.
The features were weighted within a logistic model that gave an overall weight, which was applied to the phrase pair's MAP-smoothed relative-frequency estimates; these were then combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This best instance-weighting model beats the equivalent model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Table 4 (Training data, No. of Docs / No. of Tokens): MENERGI — MUC-6: 318 / 160,000; MUC-7: 200 / 180,000. IdentiFinder — MUC-6: – / 650,000; MUC-7: – / 790,000. MENE — MUC-6: – / –; MUC-7: 350 / 321,000. For MUC-6, the reduction in error due to global features is 27%, and for MUC-7, 14%.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Unigrams also have 64-bit overhead for vocabulary lookup.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Ex: The regime gives itself the right...
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This is consistent with the nature of these two settings: log-linear combination, which effectively takes the intersection of IN and OUT, does relatively better on NIST, where the domains are broader and closer together.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.
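In miniature, such a table looks like the sketch below (Python for clarity; the real PROBING structure packs fixed-width key-value entries into a contiguous array, and capacity must stay comfortably above the number of entries or probe chains grow long):

    class ProbingTable:
        # Linear-probing hash table: each bucket is one (key, value) entry or None.
        def __init__(self, capacity):
            self.buckets = [None] * capacity

        def insert(self, key, value):
            i = hash(key) % len(self.buckets)
            while self.buckets[i] is not None and self.buckets[i][0] != key:
                i = (i + 1) % len(self.buckets)  # probe the next bucket
            self.buckets[i] = (key, value)

        def lookup(self, key):
            i = hash(key) % len(self.buckets)
            while self.buckets[i] is not None:
                if self.buckets[i][0] == key:
                    return self.buckets[i][1]
                i = (i + 1) % len(self.buckets)
            return None  # hit an empty bucket: key is absent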
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
On one end of the spectrum are clustering approaches that assign a single POS tag to each word type (Schutze, 1995; Lamar et al., 2010).
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
However, it is also mostly political content (even if not focused on the internal workings of the European Union) and opinion.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Most of these groups follow a phrase-based statistical approach to machine translation.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(Blum and Mitchell 98) describe learning in the following situation: X = X_1 × X_2, where X_1 and X_2 correspond to two different "views" of an example.