Columns: source_text (string, length 27 to 368 chars), label (int64, values 0 or 1), target_text (string, length 1 to 5.38k chars)
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The results of this experiment can be seen in Table 5.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The supervised POS tagging accuracies (on this tagset) are shown in the last row of Table 2.
All the texts were annotated by two people.
0
Different annotations of the same text are mapped into the same data structure, so that search queries can be formulated across annotation levels.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
For inference, we are interested in the posterior probability over the latent variables in our model.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(Yarowsky 95) describes the use of more sophisticated smoothing methods.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
The global feature groups are: InitCaps of Other Occurrences (ICOC): There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous position (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphological rich languages, as demonstrated by the results for English-German and English-French.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
2.2.2 The Caseframe Network The first type of contextual role knowledge that BABAR learns is the Caseframe Network (CFNet), which identifies caseframes that co-occur in anaphor/antecedent resolutions.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
token-level HMM to reflect lexicon sparsity.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
of Articles No.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Values in the trie are minimally sized at the bit level, improving memory consumption over trie implementations in SRILM, IRSTLM, and BerkeleyLM.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
For even larger models, storing counts (Talbot and Osborne, 2007; Pauls and Klein, 2011; Guthrie and Hepple, 2010) is a possibility.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
We are unaware of prior results for the Stanford parser.
It is probably the first analysis of Arabic parsing of this kind.
0
68 95.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Following this method, we repeatedly — say, 1000 times — sample sets of sentences from the output of each system, measure their BLEU score, and use these 1000 BLEU scores as basis for estimating a confidence interval.
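The resampling procedure described above can be illustrated with a minimal Python sketch. It averages per-sentence scores instead of recomputing true corpus BLEU (which requires pooled n-gram statistics over each resampled set), and the function and variable names are assumptions for illustration, not from the evaluation described here:

```python
import random

def bootstrap_interval(sent_scores, n_samples=1000, alpha=0.05, seed=0):
    """Draw `n_samples` bootstrap resamples (with replacement) of the
    per-sentence scores, aggregate each resample, and return an
    empirical (1 - alpha) confidence interval for the aggregate."""
    rng = random.Random(seed)
    n = len(sent_scores)
    stats = sorted(
        sum(rng.choice(sent_scores) for _ in range(n)) / n
        for _ in range(n_samples)
    )
    lower = stats[int(n_samples * alpha / 2)]
    upper = stats[int(n_samples * (1 - alpha / 2)) - 1]
    return lower, upper

# Toy per-sentence scores for a hypothetical system.
sys_a = [0.20, 0.35, 0.40, 0.25, 0.30, 0.45, 0.10, 0.50]
lo, hi = bootstrap_interval(sys_a)
assert lo <= sum(sys_a) / len(sys_a) <= hi
```

A pairwise significance test follows the same pattern: resample sentence indices once per iteration, score both systems on the shared sample, and count how often one system wins.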
Possible explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
With a human evaluation we also showed that ATB inter-annotator agreement remains low relative to the WSJ corpus.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The label for a test example with features x is then defined as In this paper we define h(x, y) as the following function of counts seen in training data: Count(x, y) is the number of times feature x is seen with label y in training data, Count(x) = Σ_{y∈Y} Count(x, y), α is a smoothing parameter, and k is the number of possible labels.
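The count-based scoring function h(x, y) described above can be sketched in Python as follows; the function names and the toy example are assumptions for illustration, not from the original system:

```python
from collections import Counter

def train_counts(examples):
    """Accumulate Count(x, y) and Count(x) from (features, label) pairs."""
    pair_counts = Counter()  # Count(x, y)
    feat_counts = Counter()  # Count(x) = sum over y of Count(x, y)
    for feats, label in examples:
        for x in feats:
            pair_counts[(x, label)] += 1
            feat_counts[x] += 1
    return pair_counts, feat_counts

def h(x, y, pair_counts, feat_counts, alpha=0.1, k=3):
    """Smoothed estimate (Count(x, y) + alpha) / (Count(x) + k * alpha)."""
    return (pair_counts[(x, y)] + alpha) / (feat_counts[x] + k * alpha)

examples = [(["New", "York"], "location"), (["New", "Corp"], "organization")]
pc, fc = train_counts(examples)
# "York" was only ever seen with the "location" label, so it scores higher there.
assert h("York", "location", pc, fc) > h("York", "organization", pc, fc)
```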
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Special thanks to Jan Hajič and Matthias Trautner Kromann for assistance with the Czech and Danish data, respectively, and to Jan Hajič, Tomáš Holan, Dan Zeman and three anonymous reviewers for valuable comments on a preliminary version of the paper.
The corpus was annotated with different linguistic information.
0
A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
68 95.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
To bridge this gap, we consider a practically motivated scenario, in which we want to leverage existing resources from a resource-rich language (like English) when building tools for resource-poor foreign languages.1 We assume that absolutely no labeled training data is available for the foreign language of interest, but that we have access to parallel data with a resource-rich language.
The corpus was annotated with different linguistic information.
0
In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
We are especially grateful to Taylor Berg-Kirkpatrick for running additional experiments.
All the texts were annotated by two people.
0
The idea is to have a pipeline of shallow-analysis modules (tagging, chunking, discourse parsing based on connectives) and map the resulting underspecified rhetorical tree (see Section 2.4) into a knowledge base that may contain domain and world knowledge for enriching the representation, e.g., to resolve references that cannot be handled by shallow methods, or to hypothesize coherence relations.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Church and Hanks [1989]), and we have used lists of character pairs ranked by mutual information to expand our own dictionary.
It is probably the first analysis of Arabic parsing of this kind.
0
92 76.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
They demonstrated this with the comparison of statistical systems against (a) manually post-edited MT output, and (b) a rule-based commercial system.
This assumption, however, is not inherent to type-based tagging models.
0
In most cases, however, these expansions come with a steep increase in model complexity, with respect to training procedure and inference time.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
The manual scores are averages over the raw unnormalized scores.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
0750271 and by the DARPA GALE program.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
We retain segmentation markers—which are consistent only in the vocalized section of the treebank—to differentiate between e.g. � “they” and � + “their.” Because we use the vocalized section, we must remove null pronoun markers.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
95 76.
These clusters are computed using an SVD variant without relying on transitional structure.
0
For this experiment, we compare our model with the uniform tag assignment prior (1TW) with the learned prior (+PRIOR).
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
This formulation of the constraint feature is equivalent to the use of a tagging dictionary extracted from the graph using a threshold T on the posterior distribution of tags for a given word type (Eq.
This corpus has several advantages: it is annotated at different levels.
0
We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Finally, we make some improvements to baseline approaches.
Here we present two algorithms.
0
We first give a brief overview of boosting algorithms.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
We would also like to thank Amarnag Subramanya for helping us with the implementation of label propagation and Shankar Kumar for access to the parallel data.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Acknowledgments We thank Meni Adler and Michael Elhadad (BGU) for helpful comments and discussion.
BABAR showed successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge was particularly effective for pronouns.
0
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Note that the backoff model assumes that there is a positive correlation between the frequency of a singular noun and its plural.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The results are given in Table 4.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.
Their results show that their high-performance NER uses less training data than other systems.
0
MENE without Proteus, however, did not do very well and only achieved an F measure of 84.22% (Borthwick, 1999).
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Specifically, (+FEATS) utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P(W | T, ψ) component.
This assumption, however, is not inherent to type-based tagging models.
0
We present several variations for the lexical component P(T, W | ψ), each adding more complex parameterizations.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
While our method also enforces a single tag per word constraint, it leverages the transition distribution encoded in an HMM, thereby benefiting from a richer representation of context.
They have made use of local and global features to deal with instances of the same token in a document.
0
The effect of a second reference resolution classifier is not entirely the same as that of global features.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We therefore also normalized judgements on a per-sentence basis.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The method uses a "soft" measure of the agreement between two classifiers as an objective function; we described an algorithm which directly optimizes this function.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Although the tag distributions of the foreign words (Eq.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
Sima’an et al. (2001) presented parsing results for a DOP tree-gram model using a small data set (500 sentences) and semiautomatic morphological disambiguation.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The first unsupervised algorithm we describe is based on the decision list method from (Yarowsky 95).
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
A modified language model probability p_δ(e|e′, e″) is defined as follows: p_δ(e|e′, e″) = 1.0 if δ = 0, and p(e|e′, e″) if δ = 1. We associate a distribution p(δ) with the two cases δ = 0 and δ = 1 and set p(δ = 1) = 0.7.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
It is chosen such that the decisions it made in including or excluding constituents are most probable under the models for all of the parsers.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Consequently, all three parsers prefer the nominal reading.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Table 3 Classes of words found by ST for the test corpus.
This paper presents Unsupervised Models for Named Entity Classification.
0
A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple "seed" rules.
It is probably the first analysis of Arabic parsing of this kind.
0
The ATB annotation guidelines specify that proper nouns should be specified with a flat NP (a).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Here, the term frequency (TF) is the frequency of a word in the bag and the inverse term frequency (ITF) is the inverse of the log of the frequency in the entire corpus.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
This departure from the traditional token-based tagging approach allows us to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(3) shows learning curves for CoBoost.
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
We use MMAX for this annotation as well.
There are clustering approaches that assign a single POS tag to each word type.
0
9 65.5 46.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
yu2 'fish.'
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, it is almost universally the case that no clear definition of what constitutes a "correct" segmentation is given, so these performance measures are hard to evaluate.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Recent work by Finkel and Manning (2009) which re-casts Daumé's approach in a hierarchical MAP framework may be applicable to this problem.
Possible explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order on syntactic disambiguation.
0
Since these are distinct syntactic units, they are typically segmented.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Given a sorted array A, these other packages use binary search to find keys in O(log |A|) time.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Finally, the statistical method fails to correctly group hanzi in cases where the individual hanzi comprising the name are listed in the dictionary as being relatively high-frequency single-hanzi words.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Here, the pruning threshold t0 = 10.0 is used.
They have made use of local and global features to deal with instances of the same token in a document.
0
Mikheev et al.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
2.6 Co-reference.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Because we are interested in applying our techniques to languages for which no labeled resources are available, we paid particular attention to minimize the number of free parameters and used the same hyperparameters for all language pairs.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Because of their size, the examples (Figures 2 to 4) appear at the end of the paper.
The corpus was annotated with different linguistic information.
0
9 www.ling.unipotsdam.de/sfb/ Figure 2: Screenshot of Annis Linguistic Database 3.3 Symbolic and knowledge-based.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
In the labeled version of these metrics (L) both heads and arc labels must be correct, while the unlabeled version (U) only considers heads.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Systems that generally do better than others will receive a positive average normalized judgement per sentence.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The semilinearity of Tree Adjoining Languages (TAL's), MCTAL's, and Head Languages (HL's) can be proved using this property, with suitable restrictions on the composition operations.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Among these are words derived by various productive processes, including: 1.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
In contrast, NNP (proper nouns) form a large portion of vocabulary.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
We refer to different readings as different analyses whereby the segments are deterministic given the sequence of PoS tags.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Contextual role knowledge provides evidence as to whether a candidate is a plausible antecedent for an anaphor.
It is probably the first analysis of Arabic parsing of this kind.
1
To our knowledge, ours is the first analysis of this kind for Arabic parsing.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
However, in existing systems, this expansion comes with a steep increase in model complexity.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Say we find one system doing better on 20 of the blocks and worse on 80 of the blocks: is it significantly worse?
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
We map the ATB morphological analyses to the shortened “Bies” tags for all experiments.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
AdaBoost is given access to a weak learning algorithm, which accepts as input the training examples, along with a distribution over the instances.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
The prediction based on these features is a k-nearest neighbor classification, using the IB1 algorithm and k = 5, the modified value difference metric (MVDM) and class voting with inverse distance weighting, as implemented in the TiMBL software package (Daelemans et al., 2003).
There is no global pruning.
0
The resulting algorithm has a complexity of O(n!).
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
We used 22 features for the logistic weighting model, divided into two groups: one intended to reflect the degree to which a phrase pair belongs to general language, and one intended to capture similarity to the IN domain.
Their results show that their high-performance NER uses less training data than other systems.
0
Reference resolution involves finding words that co-refer to the same entity.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The difference in performance between pronouns and definite noun phrases surprised us.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
They also describe an application of co-training to classifying web pages (the two feature sets are the words on the page and other pages pointing to the page).
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.
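The normalized Levenshtein edit distance mentioned above can be sketched as follows; this is a generic textbook implementation and the function names are assumptions, not the authors' code:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance (insert/delete/substitute, cost 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def normalized_distance(a, b):
    """Edit distance scaled by the longer sequence length, yielding a value in [0, 1]."""
    if not a and not b:
        return 0.0
    return levenshtein(a, b) / max(len(a), len(b))

assert levenshtein("kitten", "sitting") == 3
assert normalized_distance("kitten", "sitting") == 3 / 7
```

The same functions apply unchanged to chains of tokens rather than characters, since they only compare sequence elements for equality.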
This paper conducted research in the area of automatic paraphrase discovery.
0
Sometimes, multiple words are needed, like "vice chairman", "prime minister" or "pay for" ("pay" and "pay for" are different senses in the CC-domain).
Their results show that their high-performance NER uses less training data than other systems.
0
Both BBN and NYU have tagged their own data to supplement the official training data.