source_text: string (lengths 27–368)
label: int64 (values 0–1)
target_text: string (lengths 1–5.38k)
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
The relevant variables are the set of token-level tags that appear before and after each instance of the ith word type; we denote these context pairs with the set {(t_b, t_a)}, and they are contained in t(−i).
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences.
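This per-sentence oracle is easy to compute once each parser has been scored on every sentence; a minimal sketch (function and variable names are illustrative, and the per-sentence score is assumed to be something like F1):

```python
def oracle_parser_choice(per_sentence_scores):
    """Illustrative oracle: for each sentence, pick the parser with the
    best score, as an omniscient observer would.
    per_sentence_scores[i][p] holds parser p's score on sentence i."""
    picks = [max(range(len(scores)), key=lambda p: scores[p])
             for scores in per_sentence_scores]
    oracle = sum(scores[p] for scores, p in
                 zip(per_sentence_scores, picks)) / len(picks)
    return picks, oracle

# Three sentences, two parsers: the oracle picks parser 1, 0, 1.
print(oracle_parser_choice([[0.8, 0.9], [0.95, 0.7], [0.5, 0.6]]))
```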
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This is not completely surprising, since all systems use very similar technology.
Here we present two algorithms.
0
The weak hypothesis can abstain from predicting the label of an instance x by setting h(x) = 0.
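Abstention fits naturally into confidence-rated boosting; a toy sketch, assuming labels in {−1, +1} and weak hypotheses returning −1, 0, or +1 (a generic variant, not necessarily the paper's exact update rule):

```python
import math

def adaboost_abstain(examples, labels, hypotheses, rounds=10):
    """Toy AdaBoost where a weak hypothesis h(x) may return -1, +1,
    or 0 (abstain). Abstaining leaves an example's weight unchanged
    except for renormalization."""
    n = len(examples)
    weights = [1.0 / n] * n
    ensemble = []  # (alpha, h) pairs
    eps = 1e-10
    for _ in range(rounds):
        # Pick the hypothesis with lowest weighted error; abstentions
        # count as neither right nor wrong.
        def werr(h):
            return sum(w for w, x, y in zip(weights, examples, labels)
                       if h(x) != 0 and h(x) != y)
        h = min(hypotheses, key=werr)
        w_plus = sum(w for w, x, y in zip(weights, examples, labels)
                     if h(x) == y)
        w_minus = werr(h)
        alpha = 0.5 * math.log((w_plus + eps) / (w_minus + eps))
        ensemble.append((alpha, h))
        # Reweight; exp(0) = 1 when h abstains.
        weights = [w * math.exp(-alpha * y * h(x))
                   for w, x, y in zip(weights, examples, labels)]
        z = sum(weights)
        weights = [w / z for w in weights]
    def classify(x):
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return classify
```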
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We call this technique constituent voting.
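A minimal sketch of constituent voting, under the assumption that each parse is reduced to a set of labeled spans; all names are illustrative:

```python
from collections import Counter

def constituent_voting(parses):
    """Keep every constituent hypothesized by more than half of the
    parsers. Each parse is a set of (label, start, end) spans."""
    votes = Counter()
    for spans in parses:
        votes.update(spans)
    threshold = len(parses) / 2.0
    return {c for c, v in votes.items() if v > threshold}

# Three parsers: a span survives if at least 2 of 3 propose it.
p1 = {("NP", 0, 2), ("VP", 2, 5), ("S", 0, 5)}
p2 = {("NP", 0, 2), ("VP", 3, 5), ("S", 0, 5)}
p3 = {("NP", 0, 2), ("VP", 2, 5)}
print(constituent_voting([p1, p2, p3]))  # NP(0,2), VP(2,5), S(0,5)
```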
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Each x_i ∈ 2^X is the set of features constituting the ith example.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
During the search process, a partial hypothesis is extended by choosing a source sentence position, which has not been aligned with a target sentence position yet.
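One expansion step of such a search might look like the following sketch, where coverage is tracked as the set of already-aligned source positions and `extension_score` is a stand-in for the real translation/alignment model:

```python
import heapq

def extend_hypotheses(beam, src_len, extension_score, beam_size=8):
    """Hedged sketch of one expansion step in a DP beam search.
    A partial hypothesis is (score, covered, output); it is extended by
    choosing a source position that has not been aligned yet."""
    candidates = []
    for score, covered, output in beam:
        for j in range(src_len):
            if j in covered:
                continue  # source position j is already aligned
            delta, word = extension_score(output, j)
            candidates.append((score + delta, covered | {j}, output + [word]))
    # Beam pruning, as in speech recognition: keep the best few hypotheses.
    return heapq.nlargest(beam_size, candidates, key=lambda h: h[0])
```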
Their results show that their high-performance NER uses less training data than other systems.
0
If a token is seen infrequently during training (fewer than a small count of times), then it will not be selected as a feature, and all features in this group are set to 0.
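A hedged sketch of this count cutoff, with an illustrative min_count threshold:

```python
from collections import Counter

def select_features(training_feature_lists, min_count=2):
    """Features seen fewer than min_count times in training are not
    selected, so at test time all features in that group are
    effectively set to 0. min_count is an illustrative threshold."""
    counts = Counter(f for feats in training_feature_lists for f in feats)
    return {f for f, c in counts.items() if c >= min_count}

# 'cap' survives; 'rare-suffix' is seen only once and dropped.
print(select_features([["cap", "rare-suffix"], ["cap"]]))
```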
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
There has also been work using a bootstrapping approach [Brin 98; Agichtein and Gravano 00; Ravichandran and Hovy 02].
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
However, those methods need initial seeds, so the relation between entities has to be known in advance.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
It also incorporates the Good-Turing method (Baayen 1989; Church and Gale 1991) in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
This corpus has several advantages: it is annotated at different levels.
0
Rhetorical analysis: We are experimenting with a hybrid statistical and knowledge-based system for discourse parsing and summarization (Stede 2003; Hanneforth et al. 2003), again targeting the genre of commentaries.
A beam search concept is applied as in speech recognition.
0
The above auxiliary quantity satisfies the following recursive DP equation: Q_{e'}(e; C; j) = ... (Table fragment: verb-group reordering states Initial, Skip, Verb, Final.)
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Section 2.1 describes how BABAR generates training examples to use in the learning process.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The hash variant is a reverse trie with hash tables, a more memory-efficient version of SRILM’s default.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3.
All the texts were annotated by two people.
0
And time is short.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
|X2| = N, and N is a "medium"-sized number so that it is feasible to collect O(N) unlabeled examples.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
2.3 Assigning Evidence Values.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This best instance-weighting model beats the equivalent model without instance weights by between 0.6 BLEU and 1.8 BLEU, and beats the log-linear baseline by a large margin.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Statistical methods seem particularly applicable to the problem of unknown-word identification, especially for constructions like names, where the linguistic constraints are minimal, and where one therefore wants to know not only that a particular sequence of hanzi might be a name, but that it is likely to be a name with some probability.
The AdaBoost algorithm was developed for supervised learning.
0
88,962 (spelling,context) pairs were extracted as training data.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
The form fmnh, for example, can be understood as the verb “lubricated”, the possessed noun “her oil”, the adjective “fat” or the verb “got fat”.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Many human evaluation metrics have been proposed.
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
For example, let us consider a tree set containing trees of the form shown in Figure 4a.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
State will ultimately be used as context in a subsequent query.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Commentaries argue in favor of a specific point of view toward some political issue, often discussing yet dismissing other points of view; therefore, they typically offer a more interesting rhetorical structure than, say, narrative text or other portions of newspapers.
This assumption, however, is not inherent to type-based tagging models.
0
We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations: α ∈ {0.001, 0.01, 0.1, 1.0} and β ∈ {0.01, 0.1, 1.0, 10}. In each run, we performed 30 iterations of Gibbs sampling for the type assignment variables W; we use the final sample for evaluation.
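For concreteness, the sweep above is just the Cartesian product of the two value lists:

```python
import itertools

# The 4 x 4 = 16 (alpha, beta) hyperparameter settings described above.
alphas = [0.001, 0.01, 0.1, 1.0]
betas = [0.01, 0.1, 1.0, 10]
grid = list(itertools.product(alphas, betas))
assert len(grid) == 16
```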
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
8 66.4 52.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Lexical rules are estimated in a similar manner.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Hence, we use the bootstrap resampling method described by Koehn (2004).
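A minimal sketch of that paired bootstrap test, with `metric` standing in for a corpus-level measure such as BLEU:

```python
import random

def paired_bootstrap(metric, sys_a, sys_b, refs, samples=1000, seed=0):
    """Sketch of paired bootstrap resampling (Koehn, 2004): resample the
    test set with replacement and count how often system A beats
    system B on the corpus-level metric."""
    rng = random.Random(seed)
    n = len(refs)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        score_a = metric([sys_a[i] for i in idx], [refs[i] for i in idx])
        score_b = metric([sys_b[i] for i in idx], [refs[i] for i in idx])
        if score_a > score_b:
            wins += 1
    return wins / samples  # e.g. >= 0.95 suggests A's gain is significant
```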
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The rhetorical structure annotations of PCC have all been converted to URML.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
To make the projection practical, we rely on the twelve universal part-of-speech tags of Petrov et al. (2011).
The AdaBoost algorithm was developed for supervised learning.
0
In addition to the named-entity string (Maury Cooper or Georgia), a contextual predictor was also extracted.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
An IG can be viewed as a CFG in which each nonterminal is associated with a stack.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
What both of these approaches presume is that there is a single correct segmentation for a sentence, against which an automatic algorithm can be compared.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
However, supervised methods rely on labeled training data, which is time-consuming and expensive to generate.
This paper conducted research in the area of automatic paraphrase discovery.
0
Here a set is represented by the keyword and the number in parentheses indicates the number of shared NE pair instances.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
The computing time is low, since no reordering is carried out.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Queries detect the invalid probability, using the node only if it leads to a longer match.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
On the first of these-the B set-our system had 64% recall and 86% precision; on the second-the C set-it had 33% recall and 19% precision.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Its only purpose is... (Footnote 3: This follows since each θt has |St| − 1 parameters and...)
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Similarly, there is no compelling evidence that either of the syllables of 檳榔 bin1lang2 'betelnut' represents a morpheme, since neither can occur in any context without the other: more likely 檳榔 bin1lang2 is a disyllabic morpheme.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Future extensions of the system might include: 1) An extended translation model, where we use more context to predict a source word.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
A simple extension will be used to handle this problem.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
In both cases, the instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline, and gains of between 0.6 and 1.8 over an equivalent mixture model (with an identical training procedure but without instance weighting).
NER is useful in many NLP applications such as information extraction and question answering. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Mikheev et al.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
All language model queries issued by machine translation decoders follow a left-to-right pattern, starting with either the begin of sentence token or null context for mid-sentence fragments.
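The left-to-right pattern lets each query pass along an opaque state object; an illustrative loop, assuming a hypothetical `lm` object whose query(state, word) returns (log_prob, new_state):

```python
def score_sentence(lm, words):
    """Illustrative left-to-right scoring loop. `lm` is a hypothetical
    object: begin_sentence_state() gives the <s> context, and the state
    returned by each query records the matched context for reuse."""
    state = lm.begin_sentence_state()
    total = 0.0
    for word in words + ["</s>"]:
        log_prob, state = lm.query(state, word)  # state becomes next context
        total += log_prob
    return total
```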
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Acknowledgments We thank Steven Bethard, Evan Rosen, and Karen Shiells for material contributions to this work.
This paper talks about Pseudo-Projective Dependency Parsing.
0
This may seem surprising, given the experiments reported in section 4, but the explanation is probably that the non-projective dependencies that can be recovered at all are of the simple kind that only requires a single lift, where the encoding of path information is often redundant.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Due to many similarly performing systems, we are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Although the tag distributions of the foreign words (Eq.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Call the crossing constituents A and B.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Backoff-smoothed models estimate this probability based on the observed entry with longest matching history w_f^n, returning p(w_n | w_1^{n−1}) = p(w_n | w_f^{n−1}) ∏_{i=1}^{f−1} b(w_i^{n−1}), where the probability p(w_n | w_f^{n−1}) and backoff penalties b(w_i^{n−1}) are given by an already-estimated model.
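A small sketch of this query against plain dictionaries of log10 probabilities and backoff weights (a stand-in for the packages' actual data structures):

```python
def backoff_query(probs, backoffs, context, word):
    """Find the longest history context[f:] such that the n-gram
    (context[f:] + word) was observed, then add the backoff penalties
    of the longer histories that were skipped. probs and backoffs map
    n-gram tuples to log10 values."""
    for f in range(len(context) + 1):
        hist = tuple(context[f:])
        if hist + (word,) in probs:
            log_p = probs[hist + (word,)]
            for i in range(f):  # b(w_i ... w_{n-1}) for each longer history
                log_p += backoffs.get(tuple(context[i:]), 0.0)
            return log_p
    return float("-inf")  # word unseen even as a unigram
```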
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We do not adapt the alignment procedure for generating the phrase table from which the TM distributions are derived.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
However, the characterization given in the main body of the text is correct sufficiently often to be useful.
Here both parametric and non-parametric models are explored.
0
The counts represent portions of the approximately 44000 constituents hypothesized by the parsers in the development set.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
annotation guidelines that tell annotators what to do in case of doubt.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as p_0^{cls} ∝ E(n_1^{cls}) / (N · E(N_0^{cls})) (2), where p_0^{cls} is the probability of one unseen hanzi in class cls, E(n_1^{cls}) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N_0^{cls}) is the expected number of unseen hanzi in class cls.
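A toy rendering of equation (2), with plug-in counts standing in for the expectations:

```python
def unseen_prob(n1_cls, n0_cls, total_hanzi):
    """Probability of a single unseen hanzi in a class, proportional to
    E(n1_cls) / (N * E(N0_cls)): n1_cls counts class members seen once,
    n0_cls counts unseen members, total_hanzi is N."""
    return n1_cls / (total_hanzi * n0_cls)

# e.g. a class with 120 singleton hanzi and 500 unseen members,
# out of 6000 hanzi total:
print(unseen_prob(120, 500, 6000))  # 4e-05 per unseen hanzi
```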
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
Since trees in a tree set are adjoined together, the addressing scheme uses a sequence of pairings of the address and name of the elementary tree adjoined at that address.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
6 Joint Segmentation and Parsing.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
There has been a lot of research on such lexical relations, along with the creation of resources such as WordNet.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
The availability of comparable corpora is limited, which is a significant limitation on the approach.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
(Model-definition fragment: hyperparameter β; variables ψ, T, and the word types W = (W1, ...).)
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
2.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Methods for expanding the dictionary include, of course, morphological rules, rules for segmenting personal names, as well as numeral sequences, expressions for dates, and so forth (Chen and Liu 1992; Wang, Li, and Chang 1992; Chang and Chen 1993; Nie, Jin, and Hannan 1994).
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This significantly underperforms log-linear combination.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
During training, we treat as observed the language word types W as well as the token-level corpus w. We utilize Gibbs sampling to approximate our collapsed model posterior: P(T, t | W, w, α, β) ∝ P(T, t, W, w | α, β) = ∫ P(T, t, W, w, ψ, θ, φ | α, β) dψ dθ dφ. Note that given tag assignments T, there is only one setting of token-level tags t which has mass in the above posterior.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
For simplicity, we assume that OUT is homogeneous.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Judges were excluded from assessing the quality of MT systems that were submitted by their institution.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
A considerable amount of work has been done in recent years on the named entity recognition task, partly due to the Message Understanding Conferences (MUC).
They have made use of local and global features to deal with the instances of same token in a document.
0
Lexicon Feature: The string of the token is used as a feature.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Other kinds of productive word classes, such as company names, abbreviations (termed 縮寫 suo1xie3 in Mandarin), and place names can easily be... (Footnote 20: Note that 了 is normally pronounced as le0, but as part of a resultative it is liao3.)
These clusters are computed using an SVD variant without relying on transitional structure.
0
The equation for sampling a single type-level assignment Ti is given by... (Figure 2: Graph of the one-to-one accuracy of our full model (+FEATS) under the best hyperparameter setting, by iteration; see Section 5.)
There are clustering approaches that assign a single POS tag to each word type.
0
2 70.7 52.
They found replacing it with a ranked evaluation to be more suitable.
0
We aligned the texts at the sentence level across all four languages, resulting in 1064 sentences per language.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Morphological analyzers for Hebrew that analyze a surface form in isolation have been proposed by Segal (2000), Yona and Wintner (2005), and recently by the knowledge center for processing Hebrew (Itai et al., 2006).
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
64 76.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Note that since all English vertices were extracted from the parallel text, we will have an initial label distribution for all vertices in Ve.
There are clustering approaches that assign a single POS tag to each word type.
0
Instead, we condition on the type-level tag assignments T. Specifically, let St = {i | Ti = t} denote the indices of the word types which have been assigned tag t according to the tag assignments T. Then θt is drawn from DIRICHLET(α, St), a symmetric Dirichlet which only places mass on word types indicated by St. This ensures that each word will only be assigned a single tag at inference time (see Section 4).
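A sketch of such a restricted symmetric Dirichlet draw, using numpy; names are illustrative:

```python
import numpy as np

def draw_theta(alpha, S_t, vocab_size, rng=None):
    """Draw theta_t from a symmetric Dirichlet over the whole vocabulary
    whose mass is restricted to the word-type indices in S_t; every
    other type gets probability zero, so a word type can only be
    emitted under its single assigned tag."""
    rng = rng or np.random.default_rng()
    theta = np.zeros(vocab_size)
    theta[sorted(S_t)] = rng.dirichlet(np.full(len(S_t), alpha))
    return theta

# Word types 2, 5, and 7 carry tag t; only they receive emission mass.
print(draw_theta(0.1, {2, 5, 7}, vocab_size=10))
```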
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This is not to say that a set of standards by which a particular segmentation would count as correct and another incorrect could not be devised; indeed, such standards have been proposed and include the published PRCNSC (1994) and ROCLING (1993), as well as the unpublished Linguistic Data Consortium standards (ca.
It is probably the first analysis of Arabic parsing of this kind.
0
segmentation (Table 2).
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
As discussed in more detail in §3, we use two types of vertices in our graph: on the foreign language side vertices correspond to trigram types, while the vertices on the English side are individual word types.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the interhuman agreement. One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
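Classical metric MDS can be reproduced in a few lines; a sketch assuming the distance matrix is derived from the similarity matrix (e.g. dist = 1 − similarity):

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) metric MDS: double-center the squared
    distance matrix and project onto the top-k eigenvectors."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j            # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]        # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```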
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
2.1 Overview.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Evalb, the standard parsing metric, is biased toward such corpora (Sampson and Babarczy, 2003).
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The test data was again drawn from a segment of the Europarl corpus from the fourth quarter of 2000, which is excluded from the training data.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
To be sure, it is not always true that a hanzi represents a syllable or that it represents a morpheme.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
Two common cases are the attributive adjective and the process nominal (maSdar), which can have a verbal reading. Attributive adjectives are hard because they are orthographically identical to nominals; they are inflected for gender, number, case, and definiteness.