Dataset columns: source_text (string, 27–368 characters), label (int64, 0 or 1), target_text (string, 1–5.38k characters).
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have text translated into a resource-rich language.
0
In the label propagation stage, we propagate the automatic English tags to the aligned Italian trigram types, followed by further propagation solely among the Italian vertices; the Italian vertices are connected to an automatically labeled English vertex.
There are clustering approaches that assign a single POS tag to each word type.
0
A more forceful approach for encoding sparsity is posterior regularization, which constrains the posterior to have a small number of expected tag assignments (Graça et al., 2009).
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
(7) φ_{s,t}. This is a somewhat less direct objective than that used by Matsoukas et al., who make an iterative approximation to expected TER.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Presenting the output of several systems allows the human judge to make more informed judgements, contrasting the quality of the different systems.
This paper addresses unsupervised models for named entity classification.
0
Unfortunately, modifying the model to account for these kinds of dependencies is not at all straightforward.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The manual evaluation of scoring translations on a graded scale from 1–5 seems to be very hard to perform.
This paper presents research in the area of automatic paraphrase discovery.
0
Evaluation results within sets. Table 1 shows the evaluation result based on the number of phrases in a set.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Hence, for each token, one of the four features zone-TXT, zone- HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The bias of automatic methods in favor of statistical systems seems to be less pronounced on out-of-domain test data.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
The first four affixes are so-called resultative affixes: they denote some property of the resultant state of a verb, as in wang4bu4-liao3 (forget-not-attain) 'cannot forget.'
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
BABAR applies a Dempster-Shafer probabilistic model to make resolutions based on evidence from the contextual role knowledge sources as well as general knowledge sources.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Most of these groups follow a phrase-based statistical approach to machine translation.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The interdependence between these items is not captured by our model, but this could easily be remedied.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
There is no global pruning.
0
Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
4.4 Chinese Personal Names.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
BABAR employs information extraction techniques to represent and learn role relationships.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We thank members of the MIT NLP group for their suggestions and comments.
Here we present two algorithms.
0
Thus at each iteration the algorithm is forced to pick features for the location, person and organization in turn for the classifier being trained.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Clearly this poses a number of research challenges, though, such as the applicability of tag sets across different languages.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Evaluation results for links
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
In MSA, SVO usually appears in non-matrix clauses.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(4) is minimized by setting α = ½ ln(W+/W−). Since a feature may be present in only a few examples, W− can in practice be very small or even 0, leading to extreme confidence values.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
One striking example is Spanish, where error is reduced by 36.5% and 24.7% for the best and median settings respectively.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Je voudrais préciser, à l'adresse du commissaire Liikanen, qu'il n'est pas aisé de recourir aux tribunaux nationaux. ('I would like to point out, for Commissioner Liikanen's benefit, that it is not easy to resort to the national courts.')
The model brings together various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
What both of these approaches presume is that there is a single correct segmentation for a sentence, against which an automatic algorithm can be compared.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
To conserve memory at the expense of accuracy, values may be quantized using q bits per probability and r bits per backoff.
The model brings together various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
We note that it is not always clear in Wang, Li, and Chang's examples which segmented words…
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The ratio of buckets to entries is controlled by space multiplier m > 1.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Table: per-language results for the 1TW, +PRIOR, and +FEATS settings (English, Danish, Dutch, German, Portuguese, Spanish, Swedish).
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
This is manifest in the lexical choices: Dagmar Ziegler is up to her neck in debt.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
(a) shen3me0 shi2hou4 wo3 cai2 neng2 ke4fu2 zhe4ge4 kun4…
    what time I just be-able overcome this CL difficulty
    'When will I be able to overcome this difficulty?'
This paper conducted research in the area of automatic paraphrase discovery.
0
One obvious application is information extraction.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Each trie node contains a sorted array of entries and they use binary search.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
This is similar to using the Linux MAP_POPULATE flag that is our default loading mechanism.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Global features are extracted from other occurrences of the same token in the whole document.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
3.1 Gross Statistics.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In general, m, l, l′ ∉ {l1, l2, l3}, and in lines 3 and 4, l′ must be chosen so as not to violate the above reordering restriction.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
This is manifest in the lexical choices: Dagmar Ziegler is up to her neck in debt.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Because many systems performed similarly, the author was not able to draw strong conclusions on the question of the correlation of manual and automatic evaluation metrics.
0
Figure 1 provides some statistics about this corpus.
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
2.5 Connectives with scopes.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Hereafter, each pair of NE categories will be called a domain; e.g., the “Company – Company” domain, which we will call the CC-domain (Step 2).
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Unlike the WSJ corpus, which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Training and testing is based on the Europarl corpus.
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have text translated into a resource-rich language.
0
For English POS tagging, Berg-Kirkpatrick et al. (2010) found that this direct gradient method performed better (>7% absolute accuracy) than using a feature-enhanced modification of the Expectation-Maximization (EM) algorithm (Dempster et al., 1977). Moreover, this route of optimization outperformed a vanilla HMM trained with EM by 12%.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
3.2 Inter-annotator Agreement.
The corpus was annotated with different linguistic information.
0
For the ‘core’ portion of PCC, we found that on average, 35% of the coherence relations in our RST annotations are explicitly signalled by a lexical connective. When adding the fact that connectives are often ambiguous, one has to conclude that prospects for an automatic analysis of rhetorical structure using shallow methods (i.e., relying largely on connectives) are not bright, but see Sections 3.2 and 3.3 below.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We tagged each noun with the top-level semantic classes assigned to it in WordNet.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
First, we use a novel graph-based framework for projecting syntactic information across language boundaries.
This corpus has several advantages: it is annotated at different levels.
0
One key issue here is to seek a discourse-based model of information structure.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Only IRSTLM does not support threading.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
As a first step towards modeling transliterated names, we have collected all hanzi occurring more than once in the roughly 750 foreign names in our dictionary, and we estimate the probability of occurrence of each hanzi in a transliteration, pTN(hanzi_i), using the maximum likelihood estimate.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
Using the terminology of Kahane et al. (1998), we say that jedna is the syntactic head of Z, while je is its linear head in the projectivized representation.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
For example, hanzi containing the INSECT radical tend to denote insects and other crawling animals; examples include wa1 'frog,' feng1 'wasp,' and she2 'snake.'
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have text translated into a resource-rich language.
0
Graph construction does not require any labeled data, but makes use of two similarity functions.
This paper presents research in the area of automatic paraphrase discovery.
0
For example, we can easily imagine that the number of paraphrases for “A buys B” is enormous and it is not possible to create a comprehensive inventory by hand.
These clusters are computed using an SVD variant without relying on transitional structure.
0
This design does not guarantee “structural zeros,” but biases towards sparsity.
They showed that it is useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the derivation trees; they find that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
By considering derivation trees, and thus abstracting away from the details of the composition operation and the structures being manipulated, we are able to state the similarities and differences between the formalisms.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In many cases, there is an even stronger restriction: over large portions of the source string, the alignment is monotone.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Fortunately, there are only a few hundred hanzi that are particularly common in transliterations; indeed, the commonest ones, such as ba1, er3, and a1, are often clear indicators that a sequence of hanzi containing them is foreign: even a name like xia4mi3-er3 'Shamir,' which is a legal Chinese personal name, retains a foreign flavor because of these hanzi.
Their results show that their high-performance NER uses less training data than other systems.
0
Named Entity Recognition: A Maximum Entropy Approach Using Global Information
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
(4) is minimized by setting α = ½ ln(W+/W−). Since a feature may be present in only a few examples, W− can in practice be very small or even 0, leading to extreme confidence values.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
There has been a lot of research on such lexical relations, along with the creation of resources such as WordNet.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Step 1.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
As can be seen from the last column in Table 1, both Head and Head+Path may theoretically lead to a quadratic increase in the number of distinct arc labels (Head+Path being worse than Head only by a constant factor), while the increase is only linear in the case of Path.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
In addition, each feature function is a binary function.
This assumption, however, is not inherent to type-based tagging models.
0
Specifically, +FEATS utilizes the tag prior as well as features (e.g., suffixes and orthographic features), discussed in Section 3, for the P(W | T, ψ) component.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Using this heuristic, BABAR identifies existential definite NPs in the training corpus using our previous learning algorithm (Bean and Riloff, 1999) and resolves all occurrences of the same existential NP with one another. 2.1.2 Syntactic Seeding. BABAR also uses syntactic heuristics to identify anaphors and antecedents that can be easily resolved.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Also, we don’t know how many such paraphrase sets are necessary to cover even some everyday things or events.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The problem of "noise" items that do not fall into any of the three categories also needs to be addressed.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Phrase-level granularity distinguishes our work from previous work by Matsoukas et al. (2009), who weight sentences according to sub-corpus and genre membership.
It is probably the first analysis of Arabic parsing of this kind.
0
We propose a limit of 70 words for Arabic parsing evaluations.
There are clustering approaches that assign a single POS tag to each word type.
0
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.
NER is useful in many NLP applications such as information extraction and question answering. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
This group consists of (3 × total number of possible zones) features.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
The choice of the genre commentary resulted from the fact that an investigation of rhetorical structure, its interaction with other aspects of discourse structure, and the prospects for its automatic derivation are the key motivations for building up the corpus.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
The human judges were presented with the following definition of adequacy and fluency, but no additional instructions:
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
This paper discusses the use of unlabeled examples for the problem of named entity classification.
Their results show that their high-performance NER uses less training data than other systems.
0
This group contains a large number of features (one for each token string present in the training data).
NER is useful in many NLP applications such as information extraction and question answering. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
This approach needs a phrase as an initial seed and thus the possible relationships to be extracted are naturally limited.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
We apply a beam search concept as in speech recognition.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
3.2 The Dempster-Shafer Decision Model.
The features were weighted within a logistic model that gave an overall weight for each phrase pair; this weight was applied to MAP-smoothed relative-frequency estimates, which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Matsoukas et al. (2009) generalize it by learning weights on sentence pairs that are used when estimating relative-frequency phrase-pair probabilities.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
However, lazy mapping is generally slow because queries against uncached pages must wait for the disk.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
In fact, we found that enabling IRSTLM’s cache made it slightly slower, so results in Table 1 use IRSTLM without caching.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
M spawns as many processes as there are ways of breaking up the substring and rules with A on their left-hand side.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
As with HG's, derivation structures are annotated; in the case of TAG's, by the trees used for adjunction and the addresses of the nodes of the elementary tree where adjunctions occurred.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
BABAR has shown successful results in both the terrorism and natural disaster domains, and contextual-role knowledge proved particularly useful for resolving pronouns.
0
Ex: The regime gives itself the right...
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The simplest approach involves scoring the various analyses by costs based on word frequency, and picking the lowest cost path; variants of this approach have been described in Chang, Chen, and Chen (1991) and Chang and Chen (1993).