source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Overall, it gives improvements ranging from 1.1% for German to 14.7% for Italian, for an average improvement of 8.3% over the unsupervised feature-HMM model.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
θ has a belief value of 1.0, indicating complete certainty that the correct hypothesis is included in the set, and a plausibility value of 1.0, indicating that there is no evidence for competing hypotheses.5 As evidence is collected and the likely hypotheses are whittled down, belief is redistributed to subsets of θ.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Table 2 Similarity matrix for segmentation judgments.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
This remains as future work.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
On the surface, our model may seem as a special case of Cohen and Smith in which α = 0.
The AdaBoost algorithm was developed for supervised learning.
0
A contextual rule considers words surrounding the string in the sentence in which it appears (e.g., a rule that any proper name modified by an appositive whose head is president is a person).
All the texts were annotated by two people.
0
Two aspects of the corpus have been presented in previous papers ((Re- itter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Since our destructor is an efficient call to munmap, bypassing the destructor favors only other packages.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
With the additional assumption that (s, t) can be restricted to the support of co(s, t), this is equivalent to a “flat” alternative to (6) in which each non-zero co(s, t) is set to one.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
We were intentionally lenient with our baselines: bilingual information by projecting POS tags directly across alignments in the parallel data.
The corpus was annoted with different linguitic information.
0
The portions of information in the large window can be individually clicked visible or invisible; here we have chosen to see (from top to bottom) • the full text, • the annotation values for the activated annotation set (co-reference), • the actual annotation tiers, and • the portion of text currently ‘in focus’ (which also appears underlined in the full text).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
By contrast, BerkeleyLM’s hash and compressed variants will return incorrect results based on an n −1-gram.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
In our implementation, we make perhaps the simplest choice of weak hypothesis.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
We maintain a separate array for each length n containing all n-gram entries sorted in suffix order.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an eÆcient search algorithm.
0
When translating the sentence monotonically from left to right, the translation of the German finite verb 'kann', which is the left verbal brace in this case, is postponed until the German noun phrase 'mein Kollege' is translated, which is the subject of the sentence.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We assume that M is in an existential state qA, with integers i1 and i2 representing zi in the (2i — 1)th and 22th work tape, for 1 < i < k. For each rule p : A fp(B, C) such that fp is mapped onto the function fp defined by the following rule. jp((xi,.. • ,rnt), (1ii, • • • • Yn3))= (Zi , • • • , Zk) M breaks xi , zk into substrings xi, , xn, and yi,...,y&quot; conforming to the definition of fp.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
On the other hand, we can expect Head+Path to be the most useful representation for reconstructing the underlying non-projective dependency graph.
There are clustering approaches that assign a single POS tag to each word type.
0
On several languages, we report performance exceeding that of state-of-the art systems.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Information from a sentence is sometimes insufficient to classify a name correctly.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
This paper, however, provides a comprehensive overview of the data collection effort and its current state.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The number of NE instance pairs used in their experiment is less than half of our method.
This corpus has several advantages: it is annotated at different levels.
0
A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.
They focused on phrases which two Named Entities, and proceed in two stages.
0
The most frequent NE category pairs are “Person - Person (209,236), followed by “Country - Coun- try” (95,123) and “Person - Country” (75,509).
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
As expected, the vanilla HMM trained with EM performs the worst.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
The first stage identifies a keyword in each phrase and joins phrases with the same keyword into sets.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Note that in our construction arcs can never cross token boundaries.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Thus we have some confidence that our own performance is at least as good as that of Chang et al.
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Hence, the resulting trees will no longer have two branches of equal size.
Combining multiple highly-accurate independent parsers yields promising results.
0
C is the union of the sets of constituents suggested by the parsers. r(c) is a binary function returning t (for true) precisely when the constituent c E C should be included in the hypothesis.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names.n Note that in Chang et al.'s model the p(rule 9) is estimated as the product of the probability of finding G 1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
0 57.2 43.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
Finally, we make some improvements to baseline approaches.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The record for wn1 stores the offset at which its extensions begin.
There are clustering approaches that assign a single POS tag to each word type.
0
On several languages, we report performance exceeding that of more complex state-of-the art systems.1
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
We respond to this on the one hand with a format for its underspecification (see 2.4) and on the other hand with an additional level of annotation that attends only to connectives and their scopes (see 2.5), which is intended as an intermediate step on the long road towards a systematic and objective treatment of rhetorical structure.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
First, a non-anaphoric NP classifier identifies definite noun phrases that are existential, using both syntactic rules and our learned existential NP recognizer (Bean and Riloff, 1999), and removes them from the resolution process.
This paper talks about Unsupervised Models for Named Entity Classification.
0
At each iteration the algorithm increases the number of rules, while maintaining a high level of agreement between the spelling and contextual decision lists.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Email: cls@bell-labs.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
The zone to which a token belongs is used as a feature.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Morphological segmentation decisions in our model are delegated to a lexeme-based PCFG and we show that using a simple treebank grammar, a data-driven lexicon, and a linguistically motivated unknown-tokens handling our model outperforms (Tsarfaty, 2006) and (Cohen and Smith, 2007) on the joint task and achieves state-of-the-art results on a par with current respective standalone models.2
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
For.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
5.2 Setup.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Juri Ganitkevitch answered questions about Joshua.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
01 75.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
It was also proposed to allow annotators to skip sentences that they are unable to judge.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
att.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
This range is collapsed to a number of buckets, typically by taking the hash modulo the number of buckets.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Linear probing hash tables must have more buckets than entries, or else an empty bucket will never be found.
This corpus has several advantages: it is annotated at different levels.
0
We thus decided to pay specific attention to them and introduce an annotation layer for connectives and their scopes.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Finally, the concatenated 5 * 20% output is used to train the reference resolution component.
It is probably the first analysis of Arabic parsing of this kind.
0
As we have said, parse quality decreases with sentence length.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
This design does not guarantee “structural zeros,” but biases towards sparsity.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
They are: 5We are grateful to an anonymous reviewer for pointing this out.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
Section 2.1 describes how BABAR generates training examples to use in the learning process.
This paper discusses the Potsdam Commentary Corpus, a corpus of german assembeled by potsdam university.
0
The implementation is In a similar effort, (G¨otze 2003) developed a proposal for the theory-neutral annotation of information structure (IS) — a notoriously difficult area with plenty of conflicting and overlapping terminological conceptions.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
However, lazy mapping is generally slow because queries against uncached pages must wait for the disk.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
We pick the parse that is most similar to the other parses by choosing the one with the highest sum of pairwise similarities.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
To evaluate the performance on the segmentation task, we report SEG, the standard harmonic means for segmentation Precision and Recall F1 (as defined in Bar-Haim et al. (2005); Tsarfaty (2006)) as well as the segmentation accuracy SEGTok measure indicating the percentage of input tokens assigned the correct exact segmentation (as reported by Cohen and Smith (2007)).
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The reference medicine for Silapo is EPREX/ERYPO, which contains epoetin alfa.
They have made use of local and global features to deal with the instances of same token in a document.
0
Note that we check for , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr., etc are not part of person names, whereas corporate suffixes like Corp., Inc., etc are part of corporate names.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Second, comparisons of different methods are not meaningful unless one can eval­ uate them on the same corpus.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
2.2.1 The Caseframe Representation Information extraction (IE) systems use extraction patterns to identify noun phrases that play a specific role in 1 Our implementation only resolves NPs that occur in the same document, but in retrospect, one could probably resolve instances of the same existential NP in different documents too.
It is probably the first analysis of Arabic parsing of this kind.
0
Various verbal (e.g., �, .::) and adjectival.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
For these models we limit the options provided for OOV words by not considering the entire token as a valid segmentation in case at least some prefix segmentation exists.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
To date we have not done a separate evaluation of foreign-name recognition.
It is probably the first analysis of Arabic parsing of this kind.
0
Formally, for a lexicon L and segments I ∈ L, O ∈/ L, each word automaton accepts the language I∗(O + I)I∗.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
52 15.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
One class comprises words derived by productive morphologi­ cal processes, such as plural noun formation using the suffix ir, menD.
This paper conducted research in the area of automatic paraphrase discovery.
0
First, from a large corpus, we extract all the NE instance pairs.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Experiments in two domains showed that the contextual role knowledge improved coreference performance, especially on pronouns.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
It would have therefore also been possible to use the integer programming (IP) based approach of Ravi and Knight (2009) instead of the feature-HMM for POS induction on the foreign side.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
We also mark all tags that dominate a word with the feminine ending :: taa mar buuTa (markFeminine).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The implementation is In a similar effort, (G¨otze 2003) developed a proposal for the theory-neutral annotation of information structure (IS) — a notoriously difficult area with plenty of conflicting and overlapping terminological conceptions.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
However, there is a strong relationship between ni1s and the number of hanzi in the class.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
For example, if FCC and Federal Communications Commission are both found in a document, then Federal has A begin set to 1, Communications has A continue set to 1, Commission has A end set to 1, and FCC has A unique set to 1.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Members of LCFRS whose operations have this property can be translated into the ILFP notation (Rounds, 1985).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
.., Tn ) T W φ E w : Token word seqs (obs) t : Token tag assigns (det by T ) PARAMETERS ψ : Lexicon parameters θ : Token word emission parameters φ : Token tag transition parameters φ φ t1 t2 θ θ w1 w2 K φ T tm O K θ E wN m N N Figure 1: Graphical depiction of our model and summary of latent variables and parameters.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In figure 4, reverse relations are indicated by `*’ next to the frequency.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Many researchers have developed coreference resolvers, so we will only discuss the methods that are most closely related to BABAR.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using caseframe network for anaphora resolution, information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We evaluated BABAR on two domains: terrorism and natural disasters.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
As with lexical expections, the semantic classes of co-referring expressions are 4 They may not be perfectly substitutable, for example one NP may be more specific (e.g., “he” vs. “John F. Kennedy”).
Most IE researchers have been creating paraphrase knowledge by hand and specific tasks.
0
Next, for each pair of NE categories, we collect all the contexts and find the keywords which are topical for that NE category pair.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
3.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The set of n-grams appearing in a model is sparse, and we want to efficiently find their associated probabilities and backoff penalties.
It is probably the first analysis of Arabic parsing of this kind.
0
The difference is due to more careful S-NOM NP NP NP VP VBG :: b NP restoring NP ADJP NN :: b NP NN NP NP ADJP DTJJ ADJP DTJJ NN :: b NP NP NP ADJP ADJP DTJJ J ..i NN :: b NP NP NP ADJP ADJP DTJJ NN _;� NP PRP DTJJ DTJJ J ..i _;� PRP J ..i NN _;� NP PRP DTJJ NN _;� NP PRP DTJJ J ..i role its constructive effective (b) Stanford (c) Berkeley (d) Bik el (a) Reference Figure 4: The constituent Restoring of its constructive and effective role parsed by the three different models (gold segmentation).
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.
Here we present two algorithms.
0
Unfortunately, modifying the model to account for these kind of dependencies is not at all straightforward.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
For example, we might have VP → VB NP PP, where the NP is the subject.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probabil­ ity of occurrence of each hanzi in a transliteration (pTN(hanzi;)) using the maximum likelihood estimate.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
Within the RST “user community” there has also been discussion whether two levels of discourse structure should not be systematically distinguished (intentional versus informational).
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
7 Big 5 is the most popular Chinese character coding standard in use in Taiwan and Hong Kong.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
However, in existing systems, this expansion come with a steep increase in model complexity.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The approach recursively evaluates a quantity Q(C; j), where C is the set of already visited cities and sj is the last visited city.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
This algorithm can be applied to statistical machine translation.
This paper talks about Unsupervised Models for Named Entity Classification.
0
In our implementation, we make perhaps the simplest choice of weak hypothesis.