source_text: string (lengths 27–368)
label: int64 (values 0–1)
target_text: string (lengths 1–5.38k)
There is no global pruning.
0
The approach has been successfully tested on the 8 000-word Verbmobil task.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
In particular, the decision to represent arguments in verb-initial clauses as VP internal makes VSO and VOS configurations difficult to distinguish.
The AdaBoost algorithm was developed for supervised learning.
0
(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Note also that the costs currently used in the system are actually string costs, rather than word costs.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Each unlabeled pair (x1,i, x2,i) is represented as an edge between nodes corresponding to x1,i and x2,i in the graph.
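The edge construction described here can be sketched with a plain adjacency dict; the node naming ("spelling"/"context") is an illustrative assumption, not the paper's data structure.

```python
def build_feature_graph(pairs):
    """Each unlabeled pair (x1, x2) adds an edge between the node for the
    spelling feature x1 and the node for the contextual feature x2."""
    adj = {}
    for x1, x2 in pairs:
        adj.setdefault(("spelling", x1), set()).add(("context", x2))
        adj.setdefault(("context", x2), set()).add(("spelling", x1))
    return adj
```

Two pairs sharing a spelling feature then share a node, which is what lets labels spread between contexts.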
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The AdaBoost algorithm was developed for supervised learning.
Combining multiple highly-accurate independent parsers yields promising results.
0
It is possible one could produce better models by introducing features describing constituents and their contexts because one parser could be much better than the majority of the others in particular situations.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
There are also cases where one of the two NEs belongs to a phrase outside of the relation.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
A novel element of our model is the ability to capture type-level tag frequencies.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
The sign test checks how likely a sample of better and worse BLEU scores would have been generated by two systems of equal performance.
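A minimal sketch of such a sign test, assuming an exact two-sided binomial computation over the sentences where the two systems' BLEU scores differ (the function and its interface are illustrative):

```python
from math import comb

def sign_test(better, worse):
    """Two-sided exact sign test: probability that two systems of equal
    performance would produce a split at least this skewed among the
    n = better + worse sentences whose BLEU scores differ."""
    n, k = better + worse, max(better, worse)
    # P(X >= k) for X ~ Binomial(n, 0.5), doubled for a two-sided test.
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

A 9-to-1 split over ten differing sentences, for example, yields p ≈ 0.021, enough to reject equal performance at the 5% level.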
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Participants and other volunteers contributed about 180 hours of labor in the manual evaluation.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Table 5: Individual Performance of KSs for Disasters (e.g., “the mayor” vs. “the journalist”).
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors affect syntactic disambiguation.
0
Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.
They have made use of local and global features to deal with the instances of the same token in a document.
0
For MUC7, there are also no published results on systems trained on only the official training data of 200 aviation disaster articles.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
shortest match at each point.
The AdaBoost algorithm was developed for supervised learning.
0
Each h_t is a function that predicts a label (+1 or -1) on examples containing a particular feature x_t, while abstaining on other examples. The prediction of the strong hypothesis can then be written as f(x) = sign(sum_t alpha_t h_t(x)). We now briefly describe how to choose h_t and alpha_t at each iteration.
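One common way to choose h_t and alpha_t for abstaining weak hypotheses is sketched below; the smoothing constant eps and the exact objective Z are assumptions in the spirit of this description, not a transcription of the paper's procedure.

```python
from math import log, sqrt

def choose_weak_hypothesis(examples, weights, eps=1e-4):
    """Pick the feature minimizing Z = W0 + 2*sqrt(W+ * W-), where W+/W- are
    the weights of positive/negative examples containing the feature and W0
    the weight of examples on which it abstains; set its confidence
    alpha = 0.5 * ln(W+/W-) (smoothed by eps).
    `examples` is a list of (feature_set, label) with label in {+1, -1}."""
    total = sum(weights)
    best = None
    for f in {f for feats, _ in examples for f in feats}:
        wp = sum(w for (feats, y), w in zip(examples, weights) if f in feats and y == +1)
        wn = sum(w for (feats, y), w in zip(examples, weights) if f in feats and y == -1)
        w0 = total - wp - wn
        z = w0 + 2 * sqrt((wp + eps) * (wn + eps))
        if best is None or z < best[0]:
            best = (z, f, 0.5 * log((wp + eps) / (wn + eps)))
    _, feature, alpha = best
    return feature, alpha
```

A feature concentrated on one label gets a large-magnitude alpha, and its sign gives the predicted label.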
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Though we do not directly compare state implementations, performance metrics in Table 1 indicate our overall method is faster.
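The linear-probing idea behind such a table can be sketched as follows; this toy class (fixed capacity, no resizing, Python tuples for entries) is illustrative and not KenLM's actual packed layout.

```python
class ProbingTable:
    """Open-addressing hash table with linear probing: on collision,
    scan forward (wrapping around) until an empty slot or the key."""
    def __init__(self, capacity=8):
        self.slots = [None] * capacity

    def _find(self, key):
        i = hash(key) % len(self.slots)
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % len(self.slots)
        return i

    def put(self, key, value):
        self.slots[self._find(key)] = (key, value)

    def get(self, key, default=None):
        slot = self.slots[self._find(key)]
        return slot[1] if slot else default
```

Because probing only scans contiguous memory, lookups are cache-friendly, which is the point of preferring it for speed over pointer-chasing structures.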
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
3 60.7 50.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from the Märkische Allgemeine Zeitung, a German regional daily.
0
For one thing, it is not clear who is to receive settlements or what should happen in case not enough teachers accept the offer of early retirement.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
logical rules, and personal names; the transitive closure of the resulting machine is then computed.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The translation direction is from German to English.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
In words, the judgements are normalized, so that the average normalized judgement per judge is 3.
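One way to realize this normalization is to shift each judge's raw scores so that the judge's mean becomes 3; whether the actual normalization is a shift or a full affine rescaling is not specified here, so this is a sketch:

```python
def normalize_judgements(scores_by_judge, target_mean=3.0):
    """Shift each judge's raw 1-5 scores so that the judge's average
    normalized judgement equals `target_mean`, removing per-judge bias
    (a harsh judge's scores move up, a lenient judge's move down)."""
    normalized = {}
    for judge, scores in scores_by_judge.items():
        offset = target_mean - sum(scores) / len(scores)
        normalized[judge] = [s + offset for s in scores]
    return normalized
```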
This paper presents a maximum entropy-based named entity recognizer (NER).
0
This enables us to build a high performance NER without using separate classifiers to take care of global consistency or complex formulation on smoothing and backoff models (Bikel et al., 1997).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Unfortunately, we were unable to correctly run the IRSTLM quantized variant.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The translation direction is from German to English.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
A morpheme, on the other hand, usually corresponds to a unique hanzi, though there are a few cases where variant forms are found.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
We propose maximizing P(t | s, D), where t is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. Our system is built on a maximum entropy classifier.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
To be a member of LCFRS a formalism must satisfy two restrictions.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
For example, out of 905 phrases in the CC-domain, 211 phrases contain keywords found in step 2.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Lexical-knowledge-based approaches that include statistical information generally presume that one starts with all possible segmentations of a sentence, and picks the best segmentation from the set of possible segmentations using a probabilistic or cost­ based scoring mechanism.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from the Märkische Allgemeine Zeitung, a German regional daily.
0
When finished, the whole material is written into an XML-structured annotation file.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The baseline system in Table 3 refers to the maximum entropy system that uses only local features.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Our open-source (LGPL) implementation is also available for download as a standalone package with minimal (POSIX and g++) dependencies.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
AdaBoost.MH maintains a distribution over instances and labels; in addition, each weak hypothesis outputs a confidence vector with one confidence value for each possible label.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
2 62.6 45.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
3.
Most IE researchers have been creating paraphrase knowledge by hand for specific tasks.
0
We picked two domains, the CC-domain and the “Person – Company” domain (PC-domain), for the evaluation, as the entire system output was too large to evaluate.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
If there are too many distinct states, the decoder prunes low-scoring partial hypotheses, possibly leading to a search error.
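Such pruning can be sketched as a combination of histogram pruning (keep at most beam_size hypotheses) and threshold pruning (drop anything too far below the best score); the parameter names are illustrative:

```python
def prune(hypotheses, beam_size, beam_width):
    """Keep at most `beam_size` partial hypotheses, and drop any whose
    score falls more than `beam_width` below the current best. A pruned
    hypothesis that would have led to the best full translation is a
    search error."""
    best = max(score for score, _ in hypotheses)
    kept = [(s, h) for s, h in hypotheses if s >= best - beam_width]
    return sorted(kept, reverse=True)[:beam_size]
```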
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Removing the leaves from the resulting tree yields a parse for L under G, with the desired probabilities.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Pumping t2 will change only one branch and leave the other branch unaffected.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
4.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
(a) zhe4 pi1 ma3 lu4shang4 bing4 le0 [this CL(assifier) horse way-on sick ASP(ect)] 'This horse got sick on the way'. (b) zhe4 tiao2 ma3lu4 hen3 shao3 che1 jing1guo4 [this CL road very few car pass-by] 'Very few cars pass by this road'.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This assumption, however, is not inherent to type-based tagging models.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
(2009) study related but different multilingual grammar and tagger induction tasks, where it is assumed that no labeled data at all is available.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Given names are most commonly two hanzi long, occasionally one hanzi long: there are thus four possible name types, which can be described by a simple set of context-free rewrite rules such as the following: 1.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.
It is probably the first analysis of Arabic parsing of this kind.
0
Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
Finally, we wish to reiterate an important point.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For example, kidnapping victims should be extracted from the subject of the verb “kidnapped” when it occurs in the passive voice (the shorthand representation of this pattern would be “<subject> were kidnapped”).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Eight out of the thirteen errors in the high frequency phrases in the CC-domain are the phrases in “agree”.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
[Plot fragment: F1 scores for the Berkeley and Stanford parsers.]
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The weak hypothesis chosen was then restricted to be a predictor in favor of this label.
A beam search concept is applied as in speech recognition.
0
In this case, we have no finite-state restrictions for the search space.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
On the one hand, the definition of composition in Steedman (1985), which technically permits composition of functions with an unbounded number of arguments, generates tree sets with dependent paths such as those shown in Figure 6.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
For sorted lookup, we compare interpolation search, standard C++ binary search, and standard C++ set based on red-black trees.
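Interpolation search, the first of the compared strategies, guesses a position from the key's value instead of always bisecting; a sketch over a sorted integer array (assuming roughly uniform keys for the O(log log n) behavior):

```python
def interpolation_search(keys, target):
    """Search a sorted list of integers by estimating the target's position
    from its value, rather than always splitting in the middle as binary
    search does. Returns the index of `target`, or -1 if absent."""
    lo, hi = 0, len(keys) - 1
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        if keys[hi] == keys[lo]:
            mid = lo
        else:
            # Linear estimate of where target should sit between lo and hi.
            mid = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[mid] == target:
            return mid
        if keys[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

On uniformly distributed keys each step shrinks the range much faster than halving, which is why it can beat std::binary_search for this workload.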
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Both parameters depend on a single hyperparameter α.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Similarly, there is no compelling evidence that either of the syllables of binllang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more likely binllang2 is a disyllabic morpheme.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
By definition, each existential NP uniquely specifies an object or concept, so we can infer that all instances of the same existential NP are coreferent (e.g., “the FBI” always refers to the same entity).
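This inference can be sketched as grouping mentions by their (case-normalized) existential NP string; the input format and case folding are illustrative assumptions:

```python
def link_existential_nps(mentions):
    """Group mentions of the same existential NP (e.g. "the FBI") into one
    coreference chain, since each such NP uniquely specifies its referent.
    `mentions` is a list of (position, np_string); returns only the chains
    with more than one mention."""
    chains = {}
    for position, np in mentions:
        chains.setdefault(np.lower(), []).append(position)
    return {np: idxs for np, idxs in chains.items() if len(idxs) > 1}
```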
This assumption, however, is not inherent to type-based tagging models.
0
2 70.7 52.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the interhuman agreement.
0
name => 1hanzi-family 2hanzi-given
Combining multiple highly-accurate independent parsers yields promising results.
0
For this reason, naïve Bayes classifiers are well-matched to this problem.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
As a (crude) approximation, we normalize the extraction patterns with respect to active and passive voice and label those extractions as agents or patients.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
[Table: per-language hyperparameter settings (1 and m-1) and 1TW best/median results for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.]
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel’ˇcuk, 1988; Covington, 1990).
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The semantic caseframe expectations are used in two ways.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
This left 962 examples, of which 85 were noise.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
(In this figure eps is ε.) Other strategies could be implemented, though, such as a maximal-grouping strategy (as suggested by one reviewer of this paper), or a pairwise-grouping strategy, whereby long sequences of unattached hanzi are grouped into two-hanzi words (which may have some prosodic motivation).
The features were weighted within a logistic model to give an overall weight that was applied to the phrase pair's MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
These estimates are in turn combined linearly with relative-frequency estimates from an in-domain phrase table.
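A linear combination of the two phrase tables can be sketched as follows; the interpolation weight lam and the dict-based table format are illustrative, not the paper's learned weighting:

```python
def combine_estimates(out_of_domain, in_domain, lam):
    """Linear interpolation of two relative-frequency phrase tables:
    p(phrase) = lam * p_in(phrase) + (1 - lam) * p_out(phrase).
    Phrases missing from one table contribute 0 from that side."""
    phrases = set(out_of_domain) | set(in_domain)
    return {ph: lam * in_domain.get(ph, 0.0)
                + (1 - lam) * out_of_domain.get(ph, 0.0)
            for ph in phrases}
```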
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Many human evaluation metrics have been proposed.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This line of work has been motivated by empirical findings that the standard EM-learned unsupervised HMM does not exhibit sufficient word tag sparsity.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In that work, mutual information was used to decide whether to group adjacent hanzi into two-hanzi words.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This may be the sign of a maturing research environment.
The corpus was annotated with different linguistic information.
0
Upon identifying an anaphoric expression (currently restricted to: pronouns, prepositional adverbs, definite noun phrases), the annotator first marks the antecedent expression (currently restricted to: various kinds of noun phrases, prepositional phrases, verb phrases, sentences) and then establishes the link between the two.
Their results show that their high performance NER use less training data than other systems.
0
For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. For a token that is in a consecutive sequence of init, a feature Corporate-Suffix is set to 1.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Proper names that match are resolved with each other.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Our system does not currently make use of titles, but it would be straightforward to do so within the finite-state framework that we propose.
Replacing this with a ranked evaluation seems to be more suitable.
0
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
They have made use of local and global features to deal with the instances of the same token in a document.
0
We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We computed BLEU scores for each submission with a single reference translation.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Gazdar (1985) considers a number of linguistic analyses which IG's (but not CFG's) can make, for example, the Norwedish example shown in Figure 1.
This assumption, however, is not inherent to type-based tagging models.
0
We tokenize MWUs and their POS tags; this reduces the tag set size to 12.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
While automatic measures are an invaluable tool for the day-to-day development of machine translation systems, they are only an imperfect substitute for human assessment of translation quality, or as the acronym BLEU puts it, a bilingual evaluation understudy.
All the texts were annotated by two people.
0
We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.
There is no global pruning.
0
Starting from a DP-based solution to the traveling salesman problem, we present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Since each composition operation is linear and nonerasing, a bounded sequence of substrings associated with the resulting structure is obtained by combining the substrings in each of its arguments using only the concatenation operation, including each substring exactly once.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
PoS tags impose a unique morphological segmentation on surface tokens and present a unique valid yield for syntactic trees.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors affect syntactic disambiguation.
0
A defining characteristic of MSA is the prevalence of discourse markers to connect and subordinate words and phrases (Ryding, 2005).
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
The POS distributions over the foreign trigram types are used as features to learn a better unsupervised POS tagger (§5).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Unigrams also have 64-bit overhead for vocabulary lookup.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Given the bilingual graph described in the previous section, we can use label propagation to project the English POS labels to the foreign language.
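Label propagation over such a bilingual graph can be sketched as iterative neighborhood averaging with the English-labeled nodes held fixed; the graph and label formats are illustrative assumptions:

```python
def propagate(graph, labels, iterations=10):
    """Simple label propagation: seed nodes (those in `labels`) keep their
    POS distributions fixed; every other node repeatedly takes the
    renormalized average of its neighbors' current distributions.
    `graph` maps node -> list of neighbor nodes; `labels` maps seed node ->
    {tag: probability}."""
    dist = {n: dict(labels.get(n, {})) for n in graph}
    for _ in range(iterations):
        new = {}
        for node, neighbors in graph.items():
            if node in labels:           # seed: keep the projected label
                new[node] = dist[node]
                continue
            acc = {}
            for nb in neighbors:
                for tag, p in dist.get(nb, {}).items():
                    acc[tag] = acc.get(tag, 0.0) + p
            total = sum(acc.values())
            new[node] = {t: p / total for t, p in acc.items()} if total else {}
        dist = new
    return dist
```

After convergence, the foreign-side distributions can serve as the features mentioned for training the unsupervised tagger.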
The texts were annotated with the RSTtool.
0
• Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol – the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers).
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Among machine learning-based NERs, Identi- Finder has proven to be the best on the official MUC6 and MUC7 test data.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The 14 general-language features embody straightforward cues: frequency, “centrality” as reflected in model scores, and lack of burstiness.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
besuchen 9.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
They are set to fixed constants.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
This is a unique object for which we are able to define a proper probability model.