Dataset columns: source_text (string, length 27 to 368), label (int64, 0 or 1), target_text (string, length 1 to 5.38k).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
from the subset of the United Informatics corpus not used in the training of the models.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Third, all remaining anaphora are evaluated by 11 different knowledge sources: the four contextual role knowledge sources just described and seven general knowledge sources.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
an event.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We are currently working on a complete open source implementation of a training and decoding system, which should become available over the summer. ...corpus, from which the in-domain test set is also taken.
Here we present two algorithms.
0
The CoBoost algorithm just described is for the case where there are two labels: for the named entity task there are three labels, and in general it will be useful to generalize the CoBoost algorithm to the multiclass case.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
The government has to make a decision, and do it quickly.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Even this may be nondeterministic, in case the graph contains several non-projective arcs whose lifts interact, but we use the following algorithm to construct a minimal projective transformation D′ = (W, A′) of a (non-projective) dependency graph D = (W, A): the function SMALLEST-NONP-ARC returns the non-projective arc with the shortest distance from head to dependent (breaking ties from left to right).
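As a minimal illustration of the projectivization step just described, the sketch below repeatedly lifts the smallest non-projective arc until the graph is projective. The head-array representation and the helper names are assumptions made for this example, not the authors' implementation.

def is_projective(head, dep, heads):
    # An arc head -> dep is projective if every token strictly between
    # head and dep is a (transitive) descendant of head.
    lo, hi = min(head, dep), max(head, dep)
    for tok in range(lo + 1, hi):
        cur = tok
        while cur != 0 and cur != head:
            cur = heads[cur]
        if cur != head:
            return False
    return True

def smallest_nonp_arc(heads):
    # Return the non-projective arc (head, dep) with the shortest distance
    # from head to dependent, breaking ties from left to right.
    arcs = [(abs(h - d), min(h, d), h, d)
            for d, h in enumerate(heads)
            if d > 0 and not is_projective(h, d, heads)]
    return min(arcs)[2:] if arcs else None

def projectivize(heads):
    # Lift the offending dependent to its head's head until no
    # non-projective arc remains.
    heads = list(heads)
    arc = smallest_nonp_arc(heads)
    while arc is not None:
        h, d = arc
        heads[d] = heads[h]
        arc = smallest_nonp_arc(heads)
    return heads

# heads[i] is the head of token i; index 0 is the artificial root.
print(projectivize([-1, 3, 0, 2]))  # the non-projective arc 3 -> 1 is lifted to 2 -> 1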
The resulting model is compact, efficiently learnable and linguistically expressive.
0
On one end of the spectrum are clustering approaches that assign a single POS tag to each word type (Schutze, 1995; Lamar et al., 2010).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
As indicated in Figure 1(c), apart from this correct analysis, there is also the analysis taking 日 ri4 as a word (e.g., a common abbreviation for Japan), along with 文章 wen2zhang1 'essay,' and ...
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The complexity of the quasi-monotone search is O(E^3 · J · (R^2 + L·R)).
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
While the size of the resulting transducers may seem daunting (the segmenter described here, as used in the Bell Labs Mandarin TTS system, has about 32,000 states and 209,000 arcs), recent work on minimization of weighted machines and transducers (cf.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
attaching to terms denoting human beings.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, for our purposes it is not sufficient to represent the morphological decomposition of, say, plural nouns: we also need an estimate of the cost of the resulting word.
A beam search concept is applied as in speech recognition.
0
For Æ = 0, no new target word is generated, while an additional source sentence position is covered.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Unlike GCFG's, however, the composition operations of LCFRS's are restricted to be linear (do not duplicate unboundedly large structures) and nonerasing (do not erase unbounded structures, a restriction made in most modern transformational grammars).
This paper discusses the Potsdam Commentary Corpus, a corpus of German commentaries assembled by Potsdam University.
0
In an experiment on automatic rhetorical parsing, the RST annotations and PoS tags were used by Reitter (2003) as a training corpus for statistical classification with Support Vector Machines.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries.
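A minimal sketch of a direct-mapped cache wrapped around a language model scoring function, in the spirit of the cache discussed above; the cache size and the stand-in scoring function are assumptions for illustration, not BerkeleyLM's implementation.

class DirectMappedCache:
    # One slot per hash bucket: a new entry simply overwrites whatever
    # previously mapped to the same slot (no probing, no eviction policy).
    def __init__(self, score_fn, size=1 << 16):
        self.score_fn = score_fn
        self.slots = [None] * size

    def score(self, ngram):
        i = hash(ngram) % len(self.slots)
        entry = self.slots[i]
        if entry is not None and entry[0] == ngram:
            return entry[1]              # hit: repeated query served from the cache
        value = self.score_fn(ngram)     # miss: fall through to the real model
        self.slots[i] = (ngram, value)
        return value

cache = DirectMappedCache(lambda ng: -0.5 * len(ng))  # stand-in for a real LM
print(cache.score(("the", "cat")))  # miss, computed
print(cache.score(("the", "cat")))  # hit, answered from the cache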
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
In this paper k = 3 (the three labels are person, organization, location), and we set a = 0.1.
There is no global pruning.
0
The goal of machine translation is the translation of a text given in some source language into a target language.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
No: Predecessor coverage set, Successor coverage set; 1: ({1, …, m} \ {l}, l′) →
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Since our goal is to perform well under these measures we will similarly treat constituents as the minimal substructures for combination.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The core of Yarowsky's algorithm is as follows: where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
21 84.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We train linear mixture models for conditional phrase pair probabilities over IN and OUT so as to maximize the likelihood of an empirical joint phrase-pair distribution extracted from a development set.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
Except our own and MENE + reference resolution, the results in Table 6 are all official MUC7 results.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
One of our experimental settings lacks document boundaries, and we used this approximation in both settings for consistency.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We define the following function: if Z_CO is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they must also give the same label on a large number of unlabeled instances.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Interpolation search formalizes the notion that one opens a dictionary near the end to find the word “zebra.” Initially, the algorithm knows the array begins at b ← 0 and ends at e ← |A| − 1.
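A minimal sketch of interpolation search over a sorted array of integers, following the dictionary analogy above; the plain Python list and the function name are illustrative assumptions, not the actual implementation described in the paper.

def interpolation_search(A, key):
    # Find key in sorted array A by guessing its position from the spread
    # of values, rather than always probing the midpoint.
    b, e = 0, len(A) - 1                     # the array begins at b and ends at e
    while b <= e and A[b] <= key <= A[e]:
        if A[e] == A[b]:                     # all remaining values equal: avoid division by zero
            pivot = b
        else:
            # Estimate where key falls between A[b] and A[e].
            pivot = b + (key - A[b]) * (e - b) // (A[e] - A[b])
        if A[pivot] == key:
            return pivot
        if A[pivot] < key:
            b = pivot + 1
        else:
            e = pivot - 1
    return -1                                # key not present

print(interpolation_search([2, 5, 8, 13, 21, 34, 55], 21))  # -> 4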
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Interestingly, Chang et al. report 80.67% recall and 91.87% precision on an 11,000 word corpus: seemingly, our system finds as many names as their system, but with four times as many false hits.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : X → R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large values of |h(x)| indicate high confidence in the prediction, and values close to zero indicate low confidence.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Obviously, the presence of a title after a potential name N increases the probability that N is in fact a name.
In this paper the authors evaluate machine translation performance for six European language pairs that participated in a shared task: translating French, German, and Spanish texts to English and back.
0
However, a recent study (Callison-Burch et al., 2006), pointed out that this correlation may not always be strong.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
On the other hand, when all systems produce muddled output, but one is better, and one is worse, but not completely wrong, a judge is inclined to hand out judgements of 4, 3, and 2.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
State is implemented in their scrolling variant, which is a trie annotated with forward and backward pointers.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Parameter Component As in the standard Bayesian HMM (Goldwater and Griffiths, 2007), all distributions are independently drawn from symmetric Dirichlet distributions. (Note that t and w denote tag and word sequences respectively, rather than individual tokens or tags.)
This paper presents research on automatic paraphrase discovery.
0
However, the next step is clearly different.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The corresponding token words w are drawn conditioned on t and θ. Our full generative model is given by: P(φ, θ | T, α, β) = ∏_{t=1..K} P(φ_t | α) P(θ_t | T, α). The transition distribution φ_t for each tag t is drawn according to Dirichlet(α, K), where α is the shared transition and emission distribution hyperparameter.
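The following is a simplified sketch of that generative story using numpy: symmetric Dirichlet draws for the per-tag transition and emission distributions, followed by sampling a token sequence. The sizes, the hyperparameter value, and the omission of the type-level tag assignment T are assumptions made only for illustration.

import numpy as np

rng = np.random.default_rng(0)
K, V, alpha, n_tokens = 12, 50, 0.1, 20      # assumed tag set, vocabulary, hyperparameter

# phi[t]  : transition distribution over next tags, drawn from Dirichlet(alpha, K)
# theta[t]: emission distribution over word types, drawn from Dirichlet(alpha, V)
phi = rng.dirichlet(np.full(K, alpha), size=K)
theta = rng.dirichlet(np.full(V, alpha), size=K)

tags, words = [], []
t = rng.integers(K)                          # arbitrary initial tag
for _ in range(n_tokens):
    tags.append(t)
    words.append(rng.choice(V, p=theta[t]))  # word drawn conditioned on t and theta
    t = rng.choice(K, p=phi[t])              # next tag drawn from phi[t]

print(list(zip(tags, words)))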
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
For each trigram type x2 x3 x4 in a sequence x1 x2 x3 x4 x5, we count how many times that trigram type co-occurs with the different instantiations of each concept, and compute the point-wise mutual information (PMI) between the two. The similarity between two trigram types is given by summing over the PMI values over feature instantiations that they have in common.
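A toy sketch of the PMI computation and the resulting similarity between two trigram types; the counts, feature names, and the decision to sum both types' PMI values over shared features are illustrative assumptions.

import math

# Toy co-occurrence counts: trigram type -> {feature instantiation: count}.
counts = {
    "he was being": {"suffix=ing": 4, "left_word=he": 3},
    "she was doing": {"suffix=ing": 5, "left_word=she": 2},
}
feat_totals = {"suffix=ing": 20, "left_word=he": 6, "left_word=she": 5}
type_totals = {"he was being": 7, "she was doing": 7}
grand_total = 100

def pmi(tri, feat):
    # Point-wise mutual information between a trigram type and one feature.
    p_joint = counts[tri].get(feat, 0) / grand_total
    p_tri = type_totals[tri] / grand_total
    p_feat = feat_totals[feat] / grand_total
    return math.log(p_joint / (p_tri * p_feat)) if p_joint > 0 else float("-inf")

def similarity(tri_a, tri_b):
    # Sum PMI values over feature instantiations the two types share.
    shared = counts[tri_a].keys() & counts[tri_b].keys()
    return sum(pmi(tri_a, f) + pmi(tri_b, f) for f in shared)

print(similarity("he was being", "she was doing"))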
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently ‘in focus’ (which also appears underlined in the full text).
They have made use of local and global features to deal with the instances of the same token in a document.
0
For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
However, since we extracted the test corpus automatically from web sources, the reference translation was not always accurate, due to sentence alignment errors or because translators did not adhere to a strict sentence-by-sentence translation (say, using pronouns when referring to entities mentioned in the previous sentence).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Email: rlls@bell-labs.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Unigram records store probability, backoff, and an index in the bigram table.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
On the other hand, only very restricted reorderings are necessary, e.g. for the translation direction from ... Table 2: Coverage set hypothesis extensions for the IBM reordering.
This assumption, however, is not inherent to type-based tagging models.
0
0 55.3 34.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Named Entity Recognition: A Maximum Entropy Approach Using Global Information
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
First, we identify sources of syntactic ambiguity understudied in the existing parsing literature.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
[Figure fragment: segmentation lattice for the ri4 wen2 zhang1 example, with glosses JAPAN, JAPANESE, OCTOPUS, HOW SAY and path weights 6.51, 9.51, and 10.28.]
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Since the early days of statistical NLP, researchers have observed that a part-of-speech tag distribution exhibits “one tag per discourse” sparsity — words are likely to select a single predominant tag in a corpus, even when several tags are possible.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Replacing this with a ranked evaluation seems to be more suitable.
They have made use of local and global features to deal with the instances of the same token in a document.
0
This paper presents a maximum entropy-based named entity recognizer (NER).
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
In our grammar, features are realized as annotations to basic category labels.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Our coreference resolver also incorporates an existential noun phrase recognizer and a DempsterShafer probabilistic model to make resolution decisions.
This corpus has several advantages: it is annotated at different levels.
0
• Anaphoric links: the annotator is asked to specify whether the anaphor is a repetition, partial repetition, pronoun, epithet (e.g., Andy Warhol – the PopArt artist), or is-a (e.g., Andy Warhol was often hunted by photographers.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
We further report SYNCS, the parsing metric of Cohen and Smith (2007), to facilitate the comparison.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
more frequently than is done in English.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
computing the precision of the other's judgments relative to this standard.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
The limited availability of comparable corpora is a significant limitation of the approach.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
(1997; 1999) do have some statistics that show how IdentiFinder performs when the training data is reduced.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.
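A minimal sketch of per-judge score normalization of the kind implied above; the exact normalization used in the evaluation is not specified here, so centering and scaling by each judge's own mean and standard deviation (a z-score) is an assumption.

from statistics import mean, stdev

def normalize_per_judge(scores):
    # scores: {judge: [(system, raw_score), ...]}.
    # Re-express each raw score relative to that judge's own mean and spread,
    # so strict and lenient judges become comparable.
    normalized = []
    for judge, pairs in scores.items():
        raw = [s for _, s in pairs]
        mu = mean(raw)
        sigma = stdev(raw) if len(raw) > 1 else 0.0
        sigma = sigma if sigma > 0 else 1.0
        normalized.extend((judge, system, (s - mu) / sigma) for system, s in pairs)
    return normalized

print(normalize_per_judge({"judge1": [("sysA", 5), ("sysB", 3)],
                           "judge2": [("sysA", 3), ("sysB", 2)]}))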
They have made use of local and global features to deal with the instances of the same token in a document.
0
For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword.
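A minimal sketch of the two features just described; the feature names follow the text, while the function and its return format are assumptions for illustration.

def orthographic_features(token, position):
    # InitCapPeriod fires for forms like "Mr."; firstword fires only for
    # the first token of the sentence.
    feats = {}
    if token[:1].isupper() and token.endswith("."):
        feats["InitCapPeriod"] = 1
    if position == 0:
        feats["firstword"] = 1
    return feats

print(orthographic_features("Mr.", 3))     # {'InitCapPeriod': 1}
print(orthographic_features("Stocks", 0))  # {'firstword': 1}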
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
We tokenize MWUs and their POS tags; this reduces the tag set size to 12.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
4.5 Transliterations of Foreign Words.
This assumption, however, is not inherent to type-based tagging models.
0
In contrast to these approaches, our method directly incorporates these constraints into the structure of the model.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Here, “EG” represents “Eastern Group Plc”.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
Email: cls@bell-labs.
The texts were annotated with the RSTtool.
0
Unexpectedly, because the ministries of treasury and education both had prepared the teacher plan together.
It is annotated with several kinds of data: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
Still, for both human and automatic rhetorical analysis, connectives are the most important source of surface information.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
4 65.9 48.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
For example, hanzi containing the INSECT radical 虫 tend to denote insects and other crawling animals; examples include 蛙 wa1 'frog,' 蜂 feng1 'wasp,' and 蛇 she2 'snake.'
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Apart from MERT difficulties, a conceptual problem with log-linear combination is that it multiplies feature probabilities, essentially forcing different features to agree on high-scoring candidates.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
In the cases where isolated constituent precision is larger than 0.5 the affected portion of the hypotheses is negligible.
They have made use of local and global features to deal with the instances of the same token in a document.
0
With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Therefore, a populated probing hash table consists of an array of buckets that contain either one entry or are empty.
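A minimal sketch of insertion and lookup in a linear-probing hash table of the kind just described; the bucket layout, fixed capacity, and hash function here are illustrative assumptions, not the PROBING data structure's actual memory layout.

class ProbingTable:
    # Open addressing: an array of buckets, each either empty or holding
    # exactly one (key, value) entry.
    def __init__(self, capacity=8):
        self.buckets = [None] * capacity

    def insert(self, key, value):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not None and self.buckets[i][0] != key:
            i = (i + 1) % len(self.buckets)   # linear probing: step to the next bucket
        self.buckets[i] = (key, value)

    def lookup(self, key):
        i = hash(key) % len(self.buckets)
        while self.buckets[i] is not None:
            if self.buckets[i][0] == key:
                return self.buckets[i][1]
            i = (i + 1) % len(self.buckets)
        return None                            # reached an empty bucket: key absent

table = ProbingTable()
table.insert("is one of", -0.25)
print(table.lookup("is one of"))  # -0.25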
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Table 1 shows four words whose unvocalized surface forms an are indistinguishable.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors lead to syntactic ambiguity.
0
Traditional Arabic linguistic theory treats both of these types as subcategories of noun. Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form an (Table 1).
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
An English sentence with ambiguous PoS assignment can be trivially represented as a lattice similar to our own, where every pair of consecutive nodes corresponds to a word, and every possible PoS assignment for this word is a connecting arc.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Here, the pruning threshold t_0 = 10.0 is used.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Table 3: Training and test conditions for the Verbmobil task (*number of words without punctuation marks).
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Unigram records store probability, backoff, and an index in the bigram table.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Annotation of syntactic structure for the core corpus has just begun.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Unfortunately, modifying the model to account for these kinds of dependencies is not at all straightforward.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Recent work has made significant progress on unsupervised POS tagging (Mérialdo, 1994; Smith and Eisner, 2005; Haghighi and Klein, 2006; Johnson, 2007; Goldwater and Griffiths, 2007; Gao and Johnson, 2008; Ravi and Knight, 2009).
This topic has been getting more attention, driven by the needs of various NLP applications.
0
This overview is illustrated in Figure 1.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
This is akin to PoS tag sequences induced by different parses in the setup familiar from English and explored in e.g.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
phrase (markContainsVerb).
Here we present two algorithms.
0
(Blum and Mitchell 98) describe learning in the following situation: X = X1 × X2, where X1 and X2 correspond to two different "views" of an example.
A beam search concept is applied as in speech recognition.
0
In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
3.1 Maximum Entropy.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
4 53.7 43.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
(Hearst 92) describes a method for extracting hyponyms from a corpus (pairs of words in "isa" relations).
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
A graphical depiction of our model as well as a summary of random variables and parameters can be found in Figure 1.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Language models that contain w_1^k must also contain prefixes w_1^i for 1 ≤ i ≤ k. Therefore, when the model is queried for p(w_n | w_1^(n−1)) but the longest matching suffix is w_f^n, it may return state s(w_1^n) = w_f^n since no longer context will be found.
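A minimal sketch of the fallback behavior described above: when the full n-gram is absent, the query backs off and the state returned is the longest suffix the model actually contains. The dictionary of n-gram tuples and the simple additive backoff are assumptions made for illustration, not KenLM's trie implementation.

def query(model, context, word):
    # model maps n-gram tuples to (log_prob, backoff). Returns the score of
    # word given context plus the state: the longest matching suffix found.
    ngram = tuple(context) + (word,)
    penalty = 0.0
    for start in range(len(ngram)):
        suffix = ngram[start:]
        if suffix in model:
            # state = the suffix found; no longer context will be found later
            return model[suffix][0] + penalty, suffix
        ctx = ngram[start:-1]                    # charge the backoff of the failed context
        penalty += model.get(ctx, (0.0, 0.0))[1]
    return float("-inf"), ()

model = {
    ("the",): (-1.0, -0.3),
    ("cat",): (-2.0, -0.2),
    ("the", "cat"): (-0.5, -0.1),
    ("sat",): (-2.5, 0.0),
    ("cat", "sat"): (-0.9, 0.0),
}
print(query(model, ["the"], "cat"))        # full bigram present
print(query(model, ["the", "cat"], "sat")) # backs off to ("cat", "sat")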
They found replacing it with a ranked evaluation to be more suitable.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The bootstrap method has been criticized by Riezler and Maxwell (2005) and Collins et al. (2005) as being too optimistic in deciding for statistically significant differences between systems.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
First, we evaluated BABAR using only the seven general knowledge sources.