source_text: string (27–368 chars)
label: int64 (0 or 1)
target_text: string (1–5.38k chars)
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
In order to evaluate and advance this approach, it helps to feed into the knowledge base data that is already enriched with some of the desired information — as in PCC.
This paper discusses the Potsdam Commentary Corpus, a German-language corpus assembled by Potsdam University.
0
3.2 Stochastic rhetorical analysis.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The only way to handle such phenomena within the framework described here is simply to expand out the reduplicated forms beforehand, and incorporate the expanded forms into the lexical transducer.
The AdaBoost algorithm was developed for supervised learning.
0
But we will show that the use of unlabeled data can drastically reduce the need for supervision.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Thus, provided at least this amount of IN data is available—as it is in our setting—adapting these weights is straightforward.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
TPT has theoretically better locality because it stores ngrams near their suffixes, thereby placing reads for a single query in the same or adjacent pages.
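As a rough illustration of that locality claim, here is a toy suffix-first trie in Python; the structure and names are invented for this sketch and do not reflect TPT's actual packed layout:

    # Toy reversed (suffix-first) trie: n-grams are indexed starting from
    # their last word, so lookups for "w3", "w2 w3", "w1 w2 w3" walk the
    # same branch and touch neighboring nodes. Illustrative only.
    trie = {}

    def insert(ngram, prob):
        node = trie
        for w in reversed(ngram):
            node = node.setdefault(w, {})
        node["prob"] = prob

    def lookup(ngram):
        node = trie
        for w in reversed(ngram):
            if w not in node:
                return None
            node = node[w]          # each step stays on the suffix's branch
        return node.get("prob")

    insert(("w2", "w3"), 0.05)
    insert(("w1", "w2", "w3"), 0.01)
    print(lookup(("w2", "w3")))     # 0.05, found along the shared w3 branch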
This paper discusses the Potsdam Commentary Corpus, a German-language corpus assembled by Potsdam University.
0
One key issue here is to seek a discourse-based model of information structure.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Context from the whole document can be important in classifying a named entity.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
Realizing gains in practice can be challenging, however, particularly when the target domain is distant from the background data.
Here we present two algorithms.
0
(Riloff and Jones 99) was brought to our attention as we were preparing the final version of this paper.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
We do not show the numbers for the Bayes models in Table 2 because the parameters involved were established using this set.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Word frequencies are estimated by a re-estimation procedure that involves applying the segmentation algorithm presented here to a corpus of 20 million words, using... (footnote 8: our training corpus was drawn from a larger corpus of mixed-genre text consisting mostly of...)
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
To optimize this function, we used L-BFGS, a quasi-Newton method (Liu and Nocedal, 1989).
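A minimal sketch of this kind of quasi-Newton optimization using SciPy's L-BFGS-B implementation; the objective (a toy regularized logistic loss) and data are invented here, not taken from the paper:

    # Minimize a toy L2-regularized logistic loss with L-BFGS.
    # SciPy approximates the gradient numerically when jac is omitted.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))        # toy feature matrix
    y = rng.integers(0, 2, size=100)     # toy binary labels

    def objective(w):
        margins = (2 * y - 1) * (X @ w)
        return np.sum(np.log1p(np.exp(-margins))) + 0.5 * w @ w

    result = minimize(objective, np.zeros(5), method="L-BFGS-B")
    print(result.x)                      # fitted weight vector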
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
This can be seen as a rough approximation of Yarowsky and Ngai (2001).
They have made use of local and global features to deal with the instances of the same token in a document.
0
For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
An input ABCD can be represented as an FSA as shown in Figure 2(b).
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
For each language under consideration, Petrov et al. (2011) provide a mapping A from the fine-grained language specific POS tags in the foreign treebank to the universal POS tags.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Step 2.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
However, if we consider precision, recall and Fmeasure on non-projective dependencies only, as shown in Table 6, some differences begin to emerge.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Table 2: Statistics for the corpora used in the experiments (number of tokens, word types, and tags) for English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
The first issue relates to the completeness of the base lexicon.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
If one is interested in translation, one would probably want to consider show up as a single dictionary word since its semantic interpretation is not trivially derivable from the meanings of show and up.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectives, coreference and information structure.
0
The significant drop in number of pupils will begin in the fall of 2003.
In this paper the authors evaluate machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
If one system is better in 95% of the sample sets, we conclude that its higher BLEU score is statistically significantly better.
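A sketch of the paired bootstrap test being described, assuming hypothetical scorers bleu_a and bleu_b that compute corpus BLEU for each system over a resampled list of test segments:

    # Paired bootstrap resampling: draw sample sets with replacement and
    # count how often system A outscores system B. bleu_a/bleu_b are
    # stand-ins for real per-system corpus-BLEU functions.
    import random

    def bootstrap_win_rate(bleu_a, bleu_b, segments, n_samples=1000):
        wins = 0
        for _ in range(n_samples):
            sample = [random.choice(segments) for _ in segments]
            if bleu_a(sample) > bleu_b(sample):
                wins += 1
        return wins / n_samples   # > 0.95 -> A significantly better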
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
We tagged each noun with the top-level semantic classes assigned to it in WordNet.
All the texts were annotated by two people.
0
Still, for both human and automatic rhetorical analysis, connectives are the most important source of surface information.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
First we mark any node that dominates (at any level) a verb. We consider POS tags when pre-terminals are the only intervening nodes between the nucleus and its bracketing (e.g., unaries, base NPs).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
(3)) to be defined over unlabeled as well as labeled instances.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
1
These sequence-model-based approaches commonly treat token-level tag assignment as the primary latent variable.
The model brings together various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The final estimating equation is then: (3) Since the total of all these class estimates was about 10% off from the Turing estimate n1/N for the probability of all unseen hanzi, we renormalized the estimates so that they would sum to n1/N.
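The renormalization step can be made concrete with invented numbers: take the Good-Turing mass n1/N for unseen items, then rescale the class estimates so they sum to exactly that mass:

    # All counts and class estimates below are made up for illustration.
    n1, N = 1200, 20_000_000           # hypothetical singleton/token counts
    gt_unseen = n1 / N                 # Good-Turing mass for all unseen hanzi

    class_estimates = {"name": 3.0e-5, "place": 2.2e-5, "other": 1.4e-5}
    total = sum(class_estimates.values())      # off from n1/N (here ~10%)
    scale = gt_unseen / total
    renormalized = {k: v * scale for k, v in class_estimates.items()}
    assert abs(sum(renormalized.values()) - gt_unseen) < 1e-15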
This paper discusses the Potsdam Commentary Corpus, a German-language corpus assembled by Potsdam University.
0
In an experiment on automatic rhetorical parsing, the RST-annotations and PoS tags were used by (Reitter 2003) as a training corpus for statistical classification with Support Vector Machines.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Before explaining our method in detail, we present a brief overview in this subsection.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
We evaluate the time and memory consumption of each data structure by computing perplexity on 4 billion tokens from the English Gigaword corpus (Parker et al., 2009).
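For reference, a minimal sketch of the perplexity computation itself; model.logprob is a hypothetical stand-in for whichever language-model query API is being timed:

    # Perplexity = exp(-mean natural-log probability) over the token stream.
    import math

    def perplexity(model, tokens, order=5):
        total = 0.0
        for i, token in enumerate(tokens):
            context = tokens[max(0, i - order + 1):i]
            total += model.logprob(token, context)   # hypothetical query call
        return math.exp(-total / len(tokens))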
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
However, our full model takes advantage of word features not present in Graça et al.
The model brings together various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
While the semantic aspect of radicals is by no means completely predictive, the semantic homogeneity of many classes is quite striking: for example 254 out of the 263 examples (97%) of the INSECT class listed by Wieger (1965, 773–76) denote crawling or invertebrate animals; similarly 21 out of the 22 examples (95%) of the GHOST class (page 808) denote ghosts or spirits.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
level.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Again, we can compute average scores for all systems for the different language pairs (Figure 6).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
By this assumption, each element x ∈ X can also be represented as (x1, x2) ∈ X1 × X2.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
2.3 Assigning Evidence Values.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Then, the remaining texts were annotated and cross-validated, always with discussions among the annotators.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, factors that all bear on syntactic disambiguation.
0
For parsing, the most challenging form of ambiguity occurs at the discourse level.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
However, when grammatical relations like subject and object are evaluated, parsing performance drops considerably (Green et al., 2009).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
However, until such standards are universally adopted in evaluating Chinese segmenters, claims about performance in terms of simple measures like percent correct should be taken with a grain of salt; see, again, Wu and Fung (1994) for further arguments supporting this conclusion.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Let γ be a tree with root and foot labeled by X.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
If either case is true, then CFLex reports that the anaphor and candidate might be coreferent.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
(Levinger et al., 1995; Goldberg et al.; Adler et al., 2008) will make the parser more robust and suitable for use in more realistic scenarios.
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially helpful for resolving pronouns.
0
For example, if CFLex determines that the log- likelihood statistic for the co-occurrence of a particular noun and caseframe corresponds to the 90% confidence level, then CFLex returns .90 as its belief that the anaphor and candidate are coreferent.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
There has been additional recent work on inducing lexicons or other knowledge sources from large corpora.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
On MUC6 data, MENERGI also achieves performance comparable to IdentiFinder when trained on a similar amount of training data.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Figure 2 shows an excerpt of a sentence from the Italian test set and the tags assigned by four different models, as well as the gold tags.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Corporate Suffixes and Person Prefixes of Other Occurrences (CSPP): If McCann has been seen as Mr. McCann somewhere else in the document, then one would like to give person a higher probability than organization.
They have made use of local and global features to deal with the instances of the same token in a document.
0
Since MUC6, BBN's Hidden Markov Model (HMM) based IdentiFinder (Bikel et al., 1997) has achieved remarkably good performance.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
With the same Corporate- Suffix-List and Person-Prefix-List used in local features, for a token seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
In Figure 4, reverse relations are indicated by '*' next to the frequency.
They have made use of local and global features to deal with the instances of the same token in a document.
0
We group the features used into feature groups.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
RandLM and SRILM also remove context that will not extend, but SRILM performs a second lookup in its trie whereas our approach has minimal additional cost.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
This model admits a simple Gibbs sampling algorithm where the number of latent variables is proportional to the number of word types, rather than the size of a corpus as for a standard HMM sampler (Johnson, 2007).
Manual evaluation that scores translations on a graded scale from 1–5 seems to be very hard to perform.
0
We are therefore applying a different method, which has been used at the 2005 DARPA/NIST evaluation.
Manual evaluation that scores translations on a graded scale from 1–5 seems to be very hard to perform.
0
For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.
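That interval rule is simple arithmetic; a tiny hypothetical helper makes it explicit:

    # Rank interval from pairwise significance results: a system ranks just
    # below everything that significantly beats it, and in the worst case
    # also below everything it merely ties with.
    def rank_interval(n_systems, beats, loses_to):
        best = loses_to + 1        # only the clear winners rank above it
        worst = n_systems - beats  # everything except the clear losers might
        return best, worst

    print(rank_interval(10, beats=3, loses_to=2))   # (3, 7), as in the example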
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
Limitations of (Blum and Mitchell 98): While the assumptions of (Blum and Mitchell 98) are useful in developing both theoretical results and an intuition for the problem, the assumptions are quite limited.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
For each domain, we created a semantic dictionary by doing two things.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
On each language we investigate the contribution of each component of our model.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The relativizer ש ("that"), for example, may attach to an arbitrarily long relative clause that goes beyond token boundaries.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
As expected, the vanilla HMM trained with EM performs the worst.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
We can do that. IbmS: Yes, wonderful.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The first row represents the average accuracy of the three parsers we combine.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
However, their work did not consider other types of lexical expectations (e.g., PP arguments), semantic expectations, or context comparisons like our case-frame network. (Niyu et al., 1998) used unsupervised learning to acquire gender, number, and animacy information from resolutions produced by a statistical pronoun resolver.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
BerkeleyLM uses states to optimistically search for longer n-gram matches first and must perform twice as many random accesses to retrieve backoff information.
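A generic textbook sketch of the linear probing lookup behind the PROBING structure (not KenLM's actual implementation): collisions are resolved by scanning forward through adjacent slots, which keeps probes in contiguous memory:

    # table is a fixed-size list of (key, value) pairs or None for empty.
    def probe_lookup(table, key):
        size = len(table)
        i = hash(key) % size
        while table[i] is not None:
            if table[i][0] == key:
                return table[i][1]   # hit
            i = (i + 1) % size       # collision: try the next adjacent slot
        return None                  # reached an empty slot: key absent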
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
If is one of Monday, Tuesday, ...
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Conditioned on T, features of word types W are drawn.
Two general approaches are presented and two combination techniques are described for each approach.
0
The counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development set.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
For example, in predicting if a word belongs to a word class, the outcome is either true or false, and the history refers to the surrounding context: a feature might be 1 if the outcome is true and the previous word is "the", and 0 otherwise. The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
The basic strategy is, for a given pair of entity types, to start with some examples, like several famous book title and author pairs; and find expressions which contain those names; then, using the found expressions, find more author and book title pairs.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
When using the segmentation pruning (using HSPELL) for unseen tokens, performance improves for all tasks as well.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
These clusters are computed using an SVD variant without relying on transitional structure.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Table 3: Multilingual Results: We report token-level one-to-one and many-to-one accuracy on a variety of languages under several experimental settings (Section 5).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
One is that smaller sets sometimes have meaningless keywords, like “strength” or “add” in the CC-domain, or “compare” in the PC-domain.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Section 3 describes the complete coreference resolution model, which uses the contextual role knowledge as well as more traditional coreference features.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
On the other hand, in a translation system one probably wants to treat this string as a single dictionary word since it has a conventional and somewhat unpredictable translation into English.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
We offer a state function s(w_1^n) = w_m^n where substring w_m^n is guaranteed to extend (to the right) in the same way that w_1^n does for purposes of language modeling.
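Conceptually, the state function lets a decoder score left to right while carrying only the suffix that can still extend; in this sketch, model.logprob and model.shorten are hypothetical stand-ins for the query and the s(w_1^n) = w_m^n truncation:

    # Score a sentence while carrying minimal state between queries.
    def score_sentence(model, words):
        state, total = (), 0.0
        for w in words:
            total += model.logprob(w, state)      # log p(w | state)
            state = model.shorten(state + (w,))   # keep only extendable suffix
        return total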
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Furthermore, we do not connect the English vertices to each other, but only to foreign language vertices. The graph vertices are extracted from the different sides of a parallel corpus (De, Df) and an additional unlabeled monolingual foreign corpus Ff, which will be used later for training.
This paper talks about Unsupervised Models for Named Entity Classification.
0
.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Under this scheme, n human judges are asked independently to segment a text.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
(2009) on Portuguese (Graça et al.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The logistic function, whose outputs are in [0, 1], forces pp(s, t) ≤ po(s, t).
The model brings together various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Clearly it is possible to write a rule that states that if an analysis Modal+ Verb is available, then that is to be preferred over Noun+ Verb: such a rule could be stated in terms of (finite-state) local grammars in the sense of Mohri (1993).
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
It is sometimes claimed that one of the advantages of dependency grammar over approaches based on constituency is that it allows a more adequate treatment of languages with variable word order, where discontinuous syntactic constructions are more common than in languages like English (Mel'čuk, 1988; Covington, 1990).
Here both parametric and non-parametric models are explored.
0
The Bayes models were able to achieve significantly higher precision than their non-parametric counterparts.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
We use v1.0 mainly because previous studies on joint inference reported results w.r.t. v1.0 only. We expect that using the same setup on v2.0 will allow a cross-treebank comparison. We used the first 500 sentences as our dev set and the remaining 4,500 for training, and report our main results on this split.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
The links can solve the problem.
Replacing this with a ranked evaluation seems to be more suitable.
0
A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?