Dataset columns: source_text (string, lengths 27 to 368), label (int64, values 0 or 1), target_text (string, lengths 1 to 5.38k)
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
A corollary of the result of Section 4.3 is that polynomial time recognition of MCTAG's is possible.
Their results show that their high-performance NER uses less training data than other systems.
0
For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: this feature group contains only one feature, firstword.
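A minimal Python sketch of how such binary surface features might be computed; the feature names InitCapPeriod and firstword come from the text above, while the function, regular expression, and return format are assumptions for illustration:

```python
import re

def shape_features(token, is_first_word):
    """Toy binary word-shape features in the spirit of the description
    above; not the original system's implementation."""
    feats = {}
    # InitCapPeriod: starts with a capital letter and ends with a period (e.g. "Mr.")
    if re.match(r"^[A-Z].*\.$", token):
        feats["InitCapPeriod"] = 1
    # First Word: fires only on the first token of the sentence
    if is_first_word:
        feats["firstword"] = 1
    return feats

print(shape_features("Mr.", is_first_word=False))    # {'InitCapPeriod': 1}
print(shape_features("Stocks", is_first_word=True))  # {'firstword': 1}
```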
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors increase the difficulty of syntactic disambiguation.
0
First, we identify sources of syntactic ambiguity understudied in the existing parsing literature.
They have made use of local and global features to deal with instances of the same token in a document.
0
MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.
Here we present two algorithms.
0
This left 962 examples, of which 85 were noise.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
A Person-Prefix-List is compiled in an analogous way.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
It is also worth pointing out a connection with Daumé's (2007) work that splits each feature into domain-specific and general copies.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
So, this was a surprise element due to practical reasons, not malice.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σ_i λ_i f_i(s, t)), with outputs in [0, ∞).
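To make that weighting function concrete, here is a minimal sketch; the feature names and λ values are invented for illustration, not taken from the paper:

```python
import math

def loglinear_weight(features, lam):
    """Plain log-linear weighting exp(sum_i lambda_i * f_i(s, t)); the
    output lies in [0, +inf), unlike a sigmoid-style weight in [0, 1]."""
    return math.exp(sum(lam[name] * f for name, f in features.items()))

# Hypothetical phrase-pair features and weights:
features = {"in_domain_similarity": 0.8, "relative_frequency": 0.3}
lam = {"in_domain_similarity": 1.5, "relative_frequency": -0.4}
print(loglinear_weight(features, lam))  # exp(1.2 - 0.12) ≈ 2.94
```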
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
In the next section, we show how an ATM can accept the strings generated by a grammar in a LCFRS formalism in logspace, and hence show that each family can be recognized in polynomial time.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The human evaluators were a non-native, fluent Arabic speaker (the first author) for the ATB and a native English speaker for the WSJ. Table 5 shows type- and token-level error rates for each corpus.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).
It is probably the first analysis of Arabic parsing of this kind.
0
But Rehbein and van Genabith (2007) showed that Evalb should not be used as an indication of real difference, or similarity, between treebanks.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.
Their results show that their high-performance NER uses less training data than other systems.
0
Table 4: Training data for MUC-6 and MUC-7.

System        MUC-6 articles  MUC-6 tokens  MUC-7 articles  MUC-7 tokens
MENERGI       318             160,000       200             180,000
IdentiFinder  –               650,000       –               790,000
MENE          –               –             350             321,000

For MUC6, the reduction in error due to global features is 27%, and for MUC7, 14%.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We adopted the MUC6 guidelines for evaluating coreference relationships based on transitivity in anaphoric chains.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions, an approach that is not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Each lattice arc corresponds to a segment and its corresponding PoS tag, and a path through the lattice corresponds to a specific morphological segmentation of the utterance.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
We focus here instead on adapting the two most important features: the language model (LM), which estimates the probability p(w|h) of a target word w following an n-gram h; and the translation models (TM) p(s|t) and p(t|s), which give the probability of source phrase s translating to target phrase t, and vice versa.
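A common baseline for adapting these features is a fixed linear mixture of in-domain (IN) and out-of-domain (OUT) estimates; the sketch below shows that generic baseline, not the paper's instance-weighting method, and the component probabilities are made up:

```python
def mix(p_in, p_out, lam):
    """Linear interpolation of in-domain and out-of-domain estimates of
    p(w | h); the same construction applies to the TM features p(s | t)
    and p(t | s)."""
    return lambda w, h: lam * p_in(w, h) + (1.0 - lam) * p_out(w, h)

# Toy component models (hypothetical values):
p_in  = lambda w, h: 0.20 if (h, w) == (("the",), "patient") else 0.01
p_out = lambda w, h: 0.05 if (h, w) == (("the",), "patient") else 0.01
p_lm = mix(p_in, p_out, lam=0.7)
print(p_lm("patient", ("the",)))  # 0.7*0.20 + 0.3*0.05 = 0.155
```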
They have made use of local and global features to deal with instances of the same token in a document.
0
It uses a maximum entropy framework and classifies each word given its features.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
This might be because our features are more comprehensive than those used by Borthwick.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Thus, provided at least this amount of IN data is available—as it is in our setting—adapting these weights is straightforward.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
The graph was constructed using 2 million trigrams; we chose these by truncating the parallel datasets up to the number of sentence pairs that contained 2 million trigrams.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For instance, the common "suffixes," -nia (e.g.,.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
For natural disasters, BABAR generated 20,479 resolutions: 11,652 from lexical seeding and 8,827 from syntactic seeding.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Certainly these linguistic factors increase the difficulty of syntactic disambiguation.
Their results show that their high-performance NER uses less training data than other systems.
0
Sequence of Initial Caps (SOIC): In the sentence Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement., a NER may mistake Even News Broadcasting Corp. as an organization name.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The alignment model uses two kinds of parameters: alignment probabilities p(a_j | a_{j-1}, I, J), where the probability of alignment a_j for position j depends on the previous alignment position a_{j-1} (Ney et al., 2000), and lexicon probabilities p(f_j | e_{a_j}).
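A sketch of how these two parameter types combine to score one alignment of a sentence pair; the lookup functions here are caller-supplied stand-ins, and a real model also has an initial alignment distribution and normalized tables:

```python
def alignment_score(f_words, e_words, alignment, p_align, p_lex):
    """Product over target positions j of
    p(a_j | a_{j-1}, I, J) * p(f_j | e_{a_j})."""
    I, J = len(e_words), len(f_words)
    score, a_prev = 1.0, 0  # start at position 0 as a simplification
    for j, f in enumerate(f_words):
        a_j = alignment[j]
        score *= p_align(a_j, a_prev, I, J) * p_lex(f, e_words[a_j])
        a_prev = a_j
    return score

# Dummy uniform parameters for illustration:
p_align = lambda a, a_prev, I, J: 1.0 / I
p_lex = lambda f, e: 0.5
print(alignment_score(["das", "Haus"], ["the", "house"], [0, 1], p_align, p_lex))
```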
The AdaBoost algorithm was developed for supervised learning.
0
The second algorithm extends ideas from boosting algorithms, designed for supervised learning tasks, to the framework suggested by (Blum and Mitchell 98).
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
This technique was introduced by Clarkson and Rosenfeld (1997) and is also implemented by IRSTLM and BerkeleyLM’s compressed option.
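A generic sketch of the quantization idea (binning float log-probabilities and storing small integer codes plus per-bin centroids); the equal-frequency binning here is purely illustrative and not the exact scheme of any of the cited toolkits:

```python
import numpy as np

def quantize(values, bits=8):
    """Map each float to one of 2**bits bins; store uint8 codes and one
    centroid per bin instead of full 32-bit floats."""
    n_bins = 2 ** bits
    edges = np.quantile(values, np.linspace(0.0, 1.0, n_bins + 1))
    codes = np.clip(np.searchsorted(edges, values, side="right") - 1, 0, n_bins - 1)
    centroids = np.zeros(n_bins)
    for c in np.unique(codes):
        centroids[c] = values[codes == c].mean()
    return codes.astype(np.uint8), centroids

rng = np.random.default_rng(0)
vals = rng.normal(-5.0, 2.0, 10_000)              # fake log-probabilities
codes, cents = quantize(vals)
print(float(np.abs(cents[codes] - vals).mean()))  # small reconstruction error
```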
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
First, we learn weights on individual phrase pairs rather than sentences.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
While the paper mentioned a sorted variant, code was never released.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Later, BerkeleyLM (Pauls and Klein, 2011) described ideas similar to ours.
This paper talks about Unsupervised Models for Named Entity Classification.
0
This paper discusses the use of unlabeled examples for the problem of named entity classification.
Here we present two algorithms.
0
, for A. T.&T. nonalpha.. .
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Our code has been publicly available and integrated into Moses since October 2010.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
If these sets do not overlap, then the words cannot be coreferent.
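The overlap test itself is a one-liner; a sketch with hypothetical semantic class sets:

```python
def compatible(classes_a, classes_b):
    """Words can corefer only if their semantic class sets overlap."""
    return bool(set(classes_a) & set(classes_b))

print(compatible({"HUMAN"}, {"HUMAN", "ANIMAL"}))  # True: may corefer
print(compatible({"HUMAN"}, {"LOCATION"}))         # False: cannot be coreferent
```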
It is probably the first analysis of Arabic parsing of this kind.
0
Finally, we show that in application settings, the absence of gold segmentation lowers parsing performance by 2–5% F1.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Our primary goal is to exploit the resources that are most appropriate for the task at hand, and our secondary goal is to allow for comparison of our models’ performance against previously reported results.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Finally, we find links between sets of phrases, based on the NE instance pair data (for example, different phrases which link “IBM” and “Lotus”) (Step 4).
They have made use of local and global features to deal with instances of the same token in a document.
0
For example, in the sentence that starts with “Bush put a freeze on . . .
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This is the first time that we organized a large-scale manual evaluation.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Two of the Mainlanders also cluster close together but, interestingly, not particularly close to the Taiwan speakers; the third Mainlander is much more similar to the Taiwan speakers.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
This information is readily available in TRIE where adjacent records with equal pointers indicate no further extension of context is possible.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
(a) 是 shi4 'be' => 是不是 shi4bu2-shi4 (be-not-be) 'is it?'
A beam search concept is applied as in speech recognition.
0
Depending on the threshold t0, the search algorithm may miss the globally optimal path which typically results in additional translation errors.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
The work of Thatcher (1973) and Rounds (1969) defines formal systems that generate tree sets that are related to CFG's and IG's.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
Here, we process only full-form words within the translation procedure.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Full Chinese personal names are in one respect simple: they are always of the form family+given.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
In the first part of the experiment, dependency graphs from the treebanks were projectivized using the algorithm described in section 2.
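For reference, a small sketch of the projectivity test that motivates such projectivization; an arc from head h to dependent d is projective iff every token strictly between them is a descendant of h (this is a generic check, not the paper's lifting algorithm):

```python
def is_projective(heads):
    """heads[i] is the head of token i; tokens are numbered from 1 and
    0 denotes the artificial root (heads[0] is unused)."""
    n = len(heads) - 1
    for d in range(1, n + 1):
        h = heads[d]
        for k in range(min(h, d) + 1, max(h, d)):
            a = k
            while a != 0 and a != h:  # climb toward the root
                a = heads[a]
            if a != h:
                return False
    return True

print(is_projective([0, 2, 0, 2, 3]))  # True: a projective tree
print(is_projective([0, 2, 0, 2, 1]))  # False: token 2 sits inside the 1->4 arc
                                       # but is not a descendant of token 1
```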
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
Morphologically derived words such as xue2sheng1+men0.
The AdaBoost algorithm was developed for supervised learning.
0
In particular, it may not be possible to learn functions f1, f2 such that f1(x1,i) = f2(x2,i) for i = m + 1 ... n: either because there is some noise in the data, or because it is just not realistic to expect to learn perfect classifiers given the features used for representation.
It is probably the first analysis of Arabic parsing of this kind.
0
Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words. Put another way, written Chinese simply lacks orthographic words.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
The parallel data came from the Europarl corpus (Koehn, 2005) and the ODS United Nations dataset (UN, 2006).
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Its success depends on the two domains being relatively close, and on the OUT corpus not being so large as to overwhelm the contribution of IN.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Previous reports on Chinese segmentation have invariably cited performance either in terms of a single percent-correct score, or else a single precision-recall pair.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Maximizing (7), where c_o(s, t) are the counts from OUT as in (6), is thus much faster than a typical MERT run.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Consider the case where |X|.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
This smoothing guarantees that no zero probabilities are estimated.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
We thank United Informatics for providing us with our corpus of Chinese text, and BDC for the 'Behavior Chinese-English Electronic Dictionary.'
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For example, given a sequence F1G1G2, where F1 is a legal single-hanzi family name, . . .
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Ltd., then organization will be more probable.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
This flexibility, along with the simplicity of implementation and expansion, makes this framework an attractive base for continued research.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
In the case of adverbial reduplication illustrated in (3b) an adjective of the form AB is reduplicated as AABB.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
For each source word f, the list of its possible translations e is sorted according to p(f|e) · p_uni(e), where p_uni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words.
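A sketch of that pruning step with a toy lexicon; the probability values are invented for illustration:

```python
def best_translations(f, p_f_given_e, p_uni, k=50):
    """Sort candidate translations e of source word f by
    p(f | e) * p_uni(e) and keep only the best k."""
    cands = p_f_given_e.get(f, {})
    return sorted(cands, key=lambda e: cands[e] * p_uni[e], reverse=True)[:k]

p_f_given_e = {"haus": {"house": 0.6, "home": 0.3, "building": 0.1}}
p_uni = {"house": 0.02, "home": 0.03, "building": 0.01}
print(best_translations("haus", p_f_given_e, p_uni, k=2))  # ['house', 'home']
```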
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
(a) zhe4 pi1 ma3 lu4shang4 bing4 le0
    this CL(assifier) horse way-on sick ASP(ect)
    'This horse got sick on the way'
(b) zhe4 tiao2 ma3lu4 hen3 shao3 che1 jing1guo4
    this CL road very few car pass-by
    'Very few cars pass by this road'
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
(b) After they were released...
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
The method just described segments dictionary words, but as noted in Section 1, there are several classes of words that should be handled that are not found in a standard dictionary.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Le médicament de référence de Silapo est EPREX/ERYPO, qui contient de l'époétine alfa. ('The reference medicinal product for Silapo is EPREX/ERYPO, which contains epoetin alfa.')
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Our TRIE implements the popular reverse trie, in which the last word of an n-gram is looked up first, as do SRILM, IRSTLM’s inverted variant, and BerkeleyLM except for the scrolling variant.
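A dictionary-based sketch of the reverse-trie idea, where the last word of an n-gram is looked up first; real implementations use packed arrays and bit-level tricks, not Python dicts:

```python
def build_reverse_trie(ngram_logprobs):
    """Insert each n-gram with its last word first, so that lookups
    share the longest common suffix of the context."""
    root = {}
    for ngram, lp in ngram_logprobs.items():
        node = root
        for w in reversed(ngram):
            node = node.setdefault(w, {})
        node[None] = lp  # the None key stores the value at this node
    return root

def lookup(trie, ngram):
    node = trie
    for w in reversed(ngram):
        if w not in node:
            return None  # no further extension of context is possible
        node = node[w]
    return node.get(None)

trie = build_reverse_trie({("the", "cat"): -1.2, ("a", "cat"): -1.5, ("cat",): -3.0})
print(lookup(trie, ("the", "cat")))  # -1.2
print(lookup(trie, ("my", "cat")))   # None
```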
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
For alif with hamza, normalization can be seen as another level of devocalization.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
However, this argument is only plausible if the formal framework allows non-projective dependency structures, i.e. structures where a head and its dependents may correspond to a discontinuous constituent.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
This kind of dependency arises from the use of the composition operation to compose two arbitrarily large categories.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The average agreement among the human judges is .76, and the average agreement between ST and the humans is .75, or about 99% of the inter-human agreement. One can better visualize the precision-recall similarity matrix by producing from that matrix a distance matrix, computing a classical metric multidimensional scaling (Torgerson 1958; Becker, Chambers, Wilks 1988) on that distance matrix, and plotting the first two most significant dimensions.
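A compact sketch of the classical (Torgerson) MDS computation on a toy distance matrix; the values are invented, whereas the original used the judges' precision-recall distances:

```python
import numpy as np

def classical_mds(dist, k=2):
    """Double-center the squared distance matrix and project onto the
    top-k eigenvectors, scaled by the square roots of the eigenvalues."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (dist ** 2) @ J
    w, v = np.linalg.eigh(B)  # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

dist = np.array([[0.0, 0.2, 0.9],
                 [0.2, 0.0, 0.8],
                 [0.9, 0.8, 0.0]])
print(classical_mds(dist))  # 2-D coordinates; dimension 1 explains the most variation
```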
The resulting model is compact, efficiently learnable and linguistically expressive.
0
to explore how well we can induce POS tags using only the one-tag-per-word constraint.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
There are thus some very good reasons why segmentation into words is an important task.
They have made use of local and global features to deal with instances of the same token in a document.
0
In this paper, we show that the maximum entropy framework is able to make use of global information directly, and achieves performance that is comparable to the best previous machine learning-based NERs on MUC6 and MUC7 test data.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
However, it is almost universally the case that no clear definition of what constitutes a "correct" segmentation is given, so these performance measures are hard to evaluate.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
In this case, we have no finite-state restrictions for the search space.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
However, the data sparsity induced by vocalization makes it difficult to train statistical models on corpora of the size of the ATB, so vocalizing and then parsing may well not help performance.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
We tabulate this increase in Table 3.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The final estimating equation is then given in (3). Since the total of all these class estimates was about 10% off from the Turing estimate n1/N for the probability of all unseen hanzi, we renormalized the estimates so that they would sum to n1/N.
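A minimal sketch of the Good-Turing unseen-mass estimate n1/N referenced above, on toy counts rather than the hanzi data:

```python
from collections import Counter

def unseen_mass(counts):
    """n1/N: the number of types seen exactly once divided by the total
    number of observations estimates the probability of all unseen types."""
    n1 = sum(1 for c in counts.values() if c == 1)
    N = sum(counts.values())
    return n1 / N

counts = Counter("the cat sat on the mat with a cat".split())
print(unseen_mass(counts))  # 5 singletons / 9 tokens ≈ 0.56
```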
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
Evaluation metrics used are Attachment Score (AS), i.e. the proportion of tokens that are attached to the correct head, and Exact Match (EM), i.e. the proportion of sentences for which the dependency graph exactly matches the gold standard.
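A sketch of both metrics over per-sentence head arrays, with toy data:

```python
def as_and_em(gold, pred):
    """Attachment Score: fraction of tokens with the correct head.
    Exact Match: fraction of sentences whose full graph is correct."""
    correct = total = exact = 0
    for g, p in zip(gold, pred):
        correct += sum(gh == ph for gh, ph in zip(g, p))
        total += len(g)
        exact += (g == p)
    return correct / total, exact / len(gold)

gold = [[2, 0, 2], [0, 1]]
pred = [[2, 0, 1], [0, 1]]
print(as_and_em(gold, pred))  # (0.8, 0.5)
```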
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).
Two general approaches are presented and two combination techniques are described for each approach.
0
For example, we may have semantic information (e.g. database query operations) associated with the productions in a grammar.
This paper conducted research in the area of automatic paraphrase discovery.
0
We used the TF/ITF metric to identify keywords.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
The availability of these resources guided our selection of foreign languages.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
For each feature type f and tag t, a multinomial ψtf is drawn from a symmetric Dirichlet distribution with concentration parameter β.
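A sketch of that draw with numpy; the tag count, number of feature values, and β are arbitrary here, and a single feature type is shown for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
num_tags, num_values, beta = 5, 10, 0.1

# One multinomial psi_tf per (tag, feature type) pair; with one feature
# type, psi has one row per tag, drawn from a symmetric Dirichlet(beta).
psi = rng.dirichlet(np.full(num_values, beta), size=num_tags)
print(psi.shape, psi.sum(axis=1))  # (5, 10); each row sums to 1
```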
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
More formally, we start by representing the dictionary D as a Weighted Finite State Transducer (WFST) (Pereira, Riley, and Sproat 1994).
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
1
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
0
For parsing, the most challenging form of ambiguity occurs at the discourse level.
It is probably the first analysis of Arabic parsing of this kind.
0
Figure 4 shows a constituent headed by a process nominal with an embedded adjective phrase.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
It has been shown for English (Wang and Hirschberg 1992; Hirschberg 1993; Sproat 1994, inter alia) that grammatical part of speech provides useful information for these tasks.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
For all variants, we found that BerkeleyLM always rounds the floating-point mantissa to 12 bits then stores indices to unique rounded floats.
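A bit-level sketch of mantissa rounding for a 32-bit float; this shows only the rounding step, omitting the index table over unique rounded values:

```python
import struct

def round_mantissa(x, bits=12):
    """Keep only the top `bits` of the 23-bit IEEE-754 mantissa,
    rounding to nearest by adding half of the dropped range."""
    u = struct.unpack("<I", struct.pack("<f", x))[0]
    drop = 23 - bits
    u = (u + (1 << (drop - 1))) & ~((1 << drop) - 1)
    return struct.unpack("<f", struct.pack("<I", u))[0]

x = -2.3456789
print(x, round_mantissa(x))  # nearly equal; at most ~2^-13 relative error
```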