source_text (string, lengths 27–368) | label (int64, 0 or 1) | target_text (string, lengths 1–5.38k)
---|---|---|
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
| 0 |
We use a squared loss to penalize neighboring vertices that have different label distributions: ‖qi − qj‖² = Σ_y (qi(y) − qj(y))², and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.
|
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
| 0 |
When OUT is large and distinct, its contribution can be controlled by training separate IN and OUT models, and weighting their combination.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names. Note that in Chang et al.'s model the p(rule 9) is estimated as the product of the probability of finding G1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on.
|
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
| 0 |
While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks.
|
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
| 0 |
9 66.4 47.
|
Human judges also pointed out difficulties with the evaluation of long sentences.
| 0 |
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
Our system fails in (a) because of shen1, a rare family name; the system identifies it as a family name, whereas it should be analyzed as part of the given name.
|
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
| 0 |
For the translation model Pr(f_1^J | e_1^I), we go on the assumption that each source word is aligned to exactly one target word.
|
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
| 0 |
We used C = 1.0 as the L2 regularization constant in (Eq.
|
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
| 0 |
[Figure 6: Plot of log frequency of base noun against log frequency of plural nouns; log(F)_base: R² = 0.20 (p < 0.005).]
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
Note that in our model, conditioned on T , there is precisely one t which has nonzero probability for the token component, since for each word, exactly one θt has support.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision.
|
They found replacing it with a ranked evaluation to be more suitable.
| 0 |
The normalized judgement per sentence is the raw judgement plus (0 minus average raw judgement for this judge on this sentence).
|
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
| 0 |
For each language, we took the same number of sentences from the bitext as there are in its treebank, and trained a supervised feature-HMM.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
A cell in the bottom row of the parse chart is required for each potential whitespace boundary.
|
Combining multiple highly-accurate independent parsers yields promising results.
| 0 |
Two general approaches are presented and two combination techniques are described for each approach.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
We use the HSPELL9 (Har’el and Kenigsberg, 2004) wordlist as a lexeme-based lexicon for pruning segmentations involving invalid segments.
|
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
| 0 |
If one system is perfect, another has slight flaws and the third more flaws, a judge is inclined to hand out judgements of 5, 4, and 3.
|
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
We represent all morphological analyses of a given utterance using a lattice structure.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
[Figure 3: Partial Chinese lexicon (NC = noun; NP = proper noun).]
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.
|
They found replacing it with a ranked evaluation to be more suitable.
| 0 |
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
|
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
| 0 |
Our representation of contextual roles is based on information extraction patterns that are converted into simple caseframes.
|
Here we present two algorithms.
| 0 |
The weak learner for two-class problems computes a weak hypothesis h from the input space into the reals (h : 2^X → R), where the sign of h(x) is interpreted as the predicted label and the magnitude |h(x)| is the confidence in the prediction: large numbers for |h(x)| indicate high confidence in the prediction, and numbers close to zero indicate low confidence.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
Denote by g_j(x) = Σ_t α_t h_t(x), j ∈ {1, 2} the unthresholded strong-hypothesis (i.e., f_j(x) = sign(g_j(x))).
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
token-level HMM to reflect lexicon sparsity.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
For f_t the Good-Turing estimate just discussed gives us an estimate of p(unseen(f_t) | f_t), the probability of observing a previously unseen instance of a construction in f_t given that we know that we have a construction in f_t.
|
This paper presents a maximum entropy-based named entity recognizer (NER).
| 0 |
We will show that by giving the first model some global features, MENERGI outperforms Borthwick's reference resolution classifier.
|
There is no global pruning.
| 0 |
In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
|
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
| 0 |
However, we needed to restrict ourselves to these languages in order to be able to evaluate the performance of our approach.
|
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
| 0 |
We asked six native speakers-three from Taiwan (TlT3), and three from the Mainland (M1M3)-to segment the corpus.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
31 75.
|
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
| 0 |
We therefore used the arithmetic mean of each interjudge precision-recall pair as a single measure of interjudge similarity.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
Beyond optimizing the memory size of TRIE, there are alternative data structures such as those in Guthrie and Hepple (2010).
|
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
| 0 |
Although the best published result for the Collins parser is 80% UAS (Collins, 1999), this parser reaches 82% when trained on the entire training data set, and an adapted version of Charniak’s parser (Charniak, 2000) performs at 84% (Jan Hajiˇc, pers. comm.).
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
The verbal reading arises when the maSdar has an NP argument which, in vocalized text, is marked in the accusative case.
|
Their results show that their high performance NER uses less training data than other systems.
| 0 |
(1998) have also used a maximum entropy classifier that uses already tagged entities to help tag other entities.
|
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
| 0 |
Other strategies could readily 6 As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm.
|
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
| 0 |
Contextual role knowledge provides evidence as to whether a candidate is a plausible antecedent for an anaphor.
|
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
| 0 |
Particular relations are also consistent with particular hypotheses about the segmentation of a given sentence, and the scores for particular relations can be incremented or decremented depending upon whether the segmentations with which they are consistent are "popular" or not.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
For a given partial hypothesis (C; j), the order in which the cities in C have been visited can be ignored (except j), only the score for the best path reaching j has to be stored.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
As one can see, only the trigrams [suo incarceramento ,], [suo iter ,] and [suo carattere ,] are connected to English words.
|
Here we present two algorithms.
| 0 |
This allows the learners to "bootstrap" each other by filling in the labels of the instances on which the other side has abstained so far.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
e′ and e are the last two target words, C is a coverage set for the already covered source positions, and j is the last position visited.
|
This assumption, however, is not inherent to type-based tagging models.
| 0 |
For this experiment, we compare our model with the uniform tag assignment prior (1TW) with the learned prior (+PRIOR).
|
The approach has been successfully tested on the 8 000-word Verbmobil task.
| 0 |
Q_{e′}(e, C, j) = p(f_j | e) · max_{δ, e′′, j′ ∈ C\{j}} { p(j | j′, J) · p(δ) · p_δ(e | e′, e′′) · Q_{e′′}(e′, C\{j}, j′) }. The DP equation is evaluated recursively for each hypothesis (e′, e, C, j).
|
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
| 0 |
The constituent voting and naïve Bayes techniques are equivalent because the parameters learned in the training set did not sufficiently discriminate between the three parsers.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
A Brief Introduction to the Chinese Writing System Most readers will undoubtedly be at least somewhat familiar with the nature of the Chinese writing system, but there are enough common misunderstandings that it is as well to spend a few paragraphs on properties of the Chinese script that will be relevant to topics discussed in this paper.
|
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
| 0 |
The second experiment is limited to data from PDT.5 The training part of the treebank was projectivized under different encoding schemes and used to train memory-based dependency parsers, which were run on the test part of the treebank, consisting of 7,507 sentences and 125,713 tokens.6 The inverse transformation was applied to the output of the parsers and the result compared to the gold standard test set.
|
The PROBING data structure uses linear probing hash tables and is designed for speed.
| 0 |
This would result in better rest cost estimation and better pruning.10 In general, tighter, but well factored, integration between the decoder and language model should produce a significant speed improvement.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
The 4th block contains instance-weighting models trained on all features, used within a MAP TM combination, and with a linear LM mixture.
|
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
| 0 |
Matsoukas et al. (2009) generalize it by learning weights on sentence pairs that are used when estimating relative-frequency phrase-pair probabilities.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Unigrams also have 64-bit overhead for vocabulary lookup.
|
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
| 0 |
For example, co-occurring caseframes may reflect synonymy (e.g., "<patient> kidnapped" and "<patient> abducted") or related events (e.g., "<patient> kidnapped" and "<patient> released").
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
Finally, we show that in application settings, the absence of gold segmentation lowers parsing performance by 2–5% F1.
|
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRSs), and showed they were recognizable in polynomial time and generated only semilinear languages.
| 0 |
An ATM has two types of states, existential and universal.
|
This paper talks about Unsupervised Models for Named Entity Classification.
| 0 |
This "default" feature type has 100% coverage (it is seen on every example) but a low, baseline precision.
|
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
| 0 |
36.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
Another way to view the judgements is that they are not so much quality judgements of machine translation systems per se as rankings of machine translation systems.
|
They have made use of local and global features to deal with the instances of same token in a document.
| 0 |
We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first "President George Bush" then "Bush").
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
[(a) summit Sharm Al-Sheikh; (b) DTNNP Al-Sheikh] in a corpus position without a bracketing label, then we also add (n, NIL) to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C. Bracketing variation can result from either annotation errors or linguistic ambiguity.
|
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
| 0 |
We again adopt an approach where we alternate between two classifiers: one classifier is modified while the other remains fixed.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
The original tag set for the CoNLL-X Dutch data set consists of compounded tags that are used to tag multi-word units (MWUs) resulting in a tag set of over 300 tags.
|
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
| 0 |
4 70.4 46.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
(c) Coordination ambiguity is shown in dependency scores by e.g., (S S S R) and (NP NP NP R).
|
Replacing this with a ranked evaluation seems to be more suitable.
| 0 |
This work was supported in part under the GALE program of the Defense Advanced Research Projects Agency, Contract No.
|
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
| 0 |
85 82.
|
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
| 0 |
Automatic Paraphrase Discovery based on Context and Keywords between NE Pairs
|
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
| 0 |
Thatcher (1973) describes a tree pumping lemma for recognizable sets related to the string pumping lemma for regular sets.
|
The resulting model is compact, efficiently learnable and linguistically expressive.
| 0 |
As is standard, we report the greedy one-to-one (Haghighi and Klein, 2006) and the many-to-one token-level accuracy obtained from mapping model states to gold POS tags.
|
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
| 0 |
Figure 2: Order in which source positions are visited for the example given in Fig.1.
|
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
| 0 |
While possible to utilize the feature-based log-linear approach described in Berg-Kirkpatrick et al.
|
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
| 0 |
10) and trained both EM and L-BFGS for 1000 iterations.
|
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
| 0 |
The translation of one position in the source sentence may be postponed for up to L = 3 source positions, and the translation of up to two source positions may be anticipated for at most R = 10 source positions.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
The morphological analysis itself can be handled using well-known techniques from finite-state morphology. 9 The initial estimates are derived from the frequencies in the corpus of the strings of hanzi making up.
|
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
| 0 |
Particles are uninflected.
|
They showed that it was useful to abstract away from the details of the formalism, and examine the nature of their derivation process as reflected by properties their trees, find that several of the formalisms considered can be seen as being closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
| 0 |
An IG can be viewed as a CFG in which each nonterminal is associated with a stack.
|
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
| 0 |
Evaluation We use 8 different measures to evaluate the performance of our system on the joint disambiguation task.
|
Their results show that their high performance NER uses less training data than other systems.
| 0 |
Both MENE and IdentiFinder used more training data than we did (we used only the official MUC6 and MUC7 training data).
|
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
| 0 |
In the terrorism domain, 1600 texts were used for training and the 40 test documents. X ∩ Y = ∅. All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence.
|
It is probably the first analysis of Arabic parsing of this kind.
| 0 |
with the number of exactly matching guess trees.
|
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
| 0 |
As shown in Table 3, the proportion of sentences containing some non-projective dependency ranges from about 15% in DDT to almost 25% in PDT.
|
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
| 0 |
The judgement of 4 in the first case will go to a vastly better system output than in the second case.
|
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
| 0 |
This WFST represents the segmentation of the text into the words AB and CD, word boundaries being marked by arcs mapping between ε and part-of-speech labels.
|
They have made use of local and global features to deal with the instances of same token in a document.
| 0 |
We feel that information from a whole corpus might turn out to be noisy if the documents in the corpus are not of the same genre.
|
Their results show that their high performance NER uses less training data than other systems.
| 0 |
The reason why we did not train with both MUC6 and MUC7 training data at the same time is because the task specifications for the two tasks are not identical.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
But foreign learners are often surprised by the verbless predications that are frequently used in Arabic.
|
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
| 0 |
The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
Table 2 compares the performance of our system on the setup of Cohen and Smith (2007) to the best results reported by them for the same tasks.
|
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
| 0 |
Further, we report current resident memory and peak virtual memory because these are the most applicable statistics provided by the kernel.
|
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
| 0 |
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
|
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
| 0 |
Cohen and Smith (2007) later on based a system for joint inference on factored, independent, morphological and syntactic components of which scores are combined to cater for the joint inference task.
|
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
| 0 |
The sentence length probability p(J|I) is omitted without any loss in performance.
|
The second algorithm builds on a boosting algorithm called AdaBoost.
| 0 |
Future work should also extend the approach to build a complete named entity extractor - a method that pulls proper names from text and then classifies them.
|
The manual evaluation of scoring translation on a graded scale from 1â5 seems to be very hard to perform.
| 0 |
Human judges also pointed out difficulties with the evaluation of long sentences.
|
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
| 0 |
We suggest that in unlexicalized PCFGs the syntactic context may be explicitly modeled in the derivation probabilities.
|
This corpus has several advantages: it is annotated at different levels.
| 0 |
Two aspects of the corpus have been presented in previous papers ((Reitter, Stede 2003) on underspecified rhetorical structure; (Stede 2003) on the perspective of knowledge-based summarization).
|
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
| 0 |
Note that it is in precision that our overall performance would appear to be poorer than the reported performance of Chang et al., yet based on their published examples, our system appears to be doing better precision-wise.
|
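Each row above pairs a source_text with an integer label and a target_text, as declared in the table header. The sketch below shows one way to hold and filter rows in this three-column schema; the plain-dict representation and the filter_by_label helper are illustrative assumptions, not part of any published loader, while the two example rows are copied from the table itself.

```python
# Minimal sketch of the (source_text, label, target_text) schema shown above.
# Holding rows as plain dicts is an assumption for illustration only.
rows = [
    {
        "source_text": (
            "They showed the efficacy of graph-based label propagation for "
            "projecting part-of-speech information across languages."
        ),
        "label": 0,
        "target_text": (
            "We use a squared loss to penalize neighboring vertices that "
            "have different label distributions."
        ),
    },
    {
        "source_text": (
            "The first method builds on results from (Yarowsky 95) and "
            "(Blum and Mitchell 98)."
        ),
        "label": 0,
        "target_text": (
            'This "default" feature type has 100% coverage (it is seen on '
            "every example) but a low, baseline precision."
        ),
    },
]


def filter_by_label(rows, label):
    """Return only the rows whose integer `label` field matches `label`."""
    return [r for r in rows if r["label"] == label]


# Every row shown in this excerpt carries label 0.
print(len(filter_by_label(rows, 0)))  # -> 2
```

The same filter generalizes to any label value the column's int64 range allows; here only 0 and 1 occur.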