Columns: source_text (string, lengths 27–368), label (int64, values 0 or 1), target_text (string, lengths 1–5.38k)
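The column summary above can be read as a simple record type. The following sketch is an assumption about the intended schema (the names `SentencePair` and `validate` are hypothetical, introduced only for illustration); it mirrors the three declared columns and their value ranges:

```python
from dataclasses import dataclass


@dataclass
class SentencePair:
    """One row of the dataset, following the schema header above."""
    source_text: str  # header reports lengths 27-368 characters
    label: int        # int64 column with observed values 0 or 1
    target_text: str  # header reports lengths 1 to ~5.38k characters


def validate(row: SentencePair) -> bool:
    # Minimal sanity check mirroring the declared column types and label range.
    return (isinstance(row.source_text, str)
            and row.label in (0, 1)
            and isinstance(row.target_text, str))


example = SentencePair(
    source_text="This paper presents methods to query N-gram language models.",
    label=0,
    target_text="Evaluation was done automatically using the BLEU score.",
)
```

A row with any label other than 0 or 1 would fail this check, matching the two values observed in the dump.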
Human judges also pointed out difficulties with the evaluation of long sentences.
0
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
As noted in Section 1, our code finds the longest matching entry w_f^n for the query p(w_n | s(w_1^{n-1})). The probability p(w_n | w_f^{n-1}) is stored with w_f^n, and the backoffs are immediately accessible in the provided state s(w_1^{n-1}). When our code walks the data structure to find w_f^n, it visits w_n^n, w_{n-1}^n, ..., w_f^n.
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
Label propagation is used to propagate these tags inwards and results in tag distributions for the middle word of each Italian trigram.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed a caseframe network for anaphora resolution and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
During coreference resolution, BABAR checks (1) whether the anaphor is among the lexical expectations for the caseframe that extracts the candidate antecedent, and (2) whether the candidate is among the lexical expectations for the caseframe that extracts the anaphor.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Note that Zt is a normalization constant that ensures the distribution Dt+1 sums to 1; it is a function of the weak hypothesis ht and the weight αt for that hypothesis chosen at the tth round.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Since foreign names can be of any length, and since their original pronunciation is effectively unlimited, the identification of such names is tricky.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
If so, the CF Network reports that the anaphor and candidate may be coreferent.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Denote the unthresholded classifiers after t − 1 rounds by g_i^{t−1}, and assume that it is the turn for the first classifier to be updated while the second one is kept fixed.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper.
It is probably the first analysis of Arabic parsing of this kind.
0
It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
Almost all annotators expressed their preference to move to a ranking-based evaluation in the future.
It is annotated with several layers of data: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
When the signal is a coordinating conjunction, the second span is usually the clause following the conjunction; the first span is often the clause preceding it, but sometimes stretches further back.
This assumption, however, is not inherent to type-based tagging models.
0
Model Overview The model starts by generating a tag assignment T for each word type in a vocabulary, assuming one tag per word.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy. We keep information on path labels but drop the information about the syntactic head of the lifted arc, using the label d↑ instead of d↑h (AuxP↑ instead of AuxP↑Sb).
The AdaBoost algorithm was developed for supervised learning.
0
The only supervision is in the form of 7 seed rules (namely, that New York, California and U.S. are locations; that any name containing Mr is a person; that any name containing Incorporated is an organization; and that I.B.M. and Microsoft are organizations).
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
We focus on this difference between the tree sets of CFG's and IG's, and formalize the notion of dependence between paths in a tree set in Section 3.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Assuming unseen objects within each class are equiprobable, their probabilities are given by the Good-Turing theorem as: p_0^{cls} ∝ E(n_1^{cls}) / (N · E(N_0^{cls})) (2) where p_0^{cls} is the probability of one unseen hanzi in class cls, E(n_1^{cls}) is the expected number of hanzi in cls seen once, N is the total number of hanzi, and E(N_0^{cls}) is the expected number of unseen hanzi in class cls.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, we have reason to doubt Chang et al.'s performance claims.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
A detailed description of the search procedure used is given in this patent.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The computing time, the number of search errors, and the multi-reference WER (mWER) are shown as a function of t0.
This paper talks about Pseudo-Projective Dependency Parsing.
0
Table 2 shows the features used in the current version of the parser.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
Thus, the arc from je to jedna will be labeled 5b↓ (to indicate that there is a syntactic head below it).
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
(Carlson, Marcu 2001) responded to this situation with relatively precise (and therefore long!)
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
This is the parse that is closest to the centroid of the observed parses under the similarity metric.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
In the appositive case, the contextual predictor was the head of the modifying appositive (president in the Maury Cooper example); in the second case, the contextual predictor was the preposition together with the noun it modifies (plant_in in the Georgia example).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Any NLP application that presumes as input unrestricted text requires an initial phase of text analysis; such applications involve problems as diverse as machine translation, information retrieval, and text-to-speech synthesis (TTS).
This paper talks about Unsupervised Models for Named Entity Classification.
0
Future work should also extend the approach to build a complete named entity extractor - a method that pulls proper names from text and then classifies them.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
A dynamic programming recursion similar to the one in Eq. 2 is evaluated.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
We use a squared loss to penalize neighboring vertices that have different label distributions: kqi − qjk2 = Ey(qi(y) − qj(y))2, and additionally regularize the label distributions towards the uniform distribution U over all possible labels Y.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
This feature imposes constraints that are based on the probability of each name class during training (Table 1: Features based on the token string).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Suppose M has to determine whether the k substrings a1, ..., ak can be derived from some symbol A.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
We will describe the evaluation of such clusters in the next subsection.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
This is the first time that we organized a large-scale manual evaluation.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Finally, Section 5 explains how BABAR relates to previous work, and Section 6 summarizes our conclusions.
This assumption, however, is not inherent to type-based tagging models.
0
Recent research has demonstrated that incorporating this sparsity constraint improves tagging accuracy.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
We propose maximizing p(t | s, D), where t is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. Our system is built on a maximum entropy classifier.
They have made use of local and global features to deal with the instances of same token in a document.
0
Sequence of Initial Caps (SOIC): In the sentence "Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.", an NER may mistake Even News Broadcasting Corp. as an organization name.
They have made use of local and global features to deal with the instances of same token in a document.
0
By using the output of a hand-coded system such as Proteus, MENE can improve its performance, and can even outperform IdentiFinder (Borthwick, 1999).
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
In this work, we take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
[Figure 3: Partial Chinese Lexicon (NC = noun; NP = proper noun).]
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The model described here thus demonstrates great potential for use in widespread applications.
It is probably the first analysis of Arabic parsing of this kind.
0
splitPUNC restores the convention of the WSJ.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
It uses a maximum entropy framework and classifies each word given its features.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
1
Employing a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions is not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
(student+plural) 'students,' which is derived by the affixation of the plural affix men0 to the noun xue2sheng1.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
We see from these results that the behavior of the parametric techniques are robust in the presence of a poor parser.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
It can also be seen clearly in this plot that two of the Taiwan speakers cluster very closely together, and the third Taiwan speaker is also close in the most significant dimension (the x axis).
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This has led previous workers to adopt ad hoc linear weighting schemes (Finch and Sumita, 2008; Foster and Kuhn, 2007; L¨u et al., 2007).
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
In this paper k = 3 (the three labels are person, organization, location), and we set a = 0.1.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winning.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This has the potential drawback of increasing the number of features, which can make MERT less stable (Foster and Kuhn, 2009).
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
We address the question of whether or not a formalism can generate only structural descriptions with independent paths.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed a caseframe network for anaphora resolution and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
2.2 Contextual Role Knowledge.
There are clustering approaches that assign a single POS tag to each word type.
0
In addition, this formulation results in a dramatic reduction in the number of model parameters, thereby enabling unusually rapid training.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb cai2 'just,' the modal verb neng2 'be able' and the main verb ke4fu2 'overcome'; the competing analysis is, of course, that we have the noun cai2neng2 'talent,' followed by ke4fu2 'overcome.'
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
For comparison to information-retrieval inspired baselines, eg (L¨u et al., 2007), we select sentences from OUT using language model perplexities from IN.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
[Figure fragment: segmentation lattice with pinyin transcriptions (yu2, zen3, mo0, shuo1), part-of-speech tags, and costs; otherwise unrecoverable.]
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Terrorism systems must distinguish between people who perpetrate a crime and people who are victims of a crime.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. (2010).
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
For example, syntactic decoders (Koehn et al., 2007; Dyer et al., 2010; Li et al., 2009) perform dynamic programming parametrized by both backward- and forward-looking state.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
IdentiFinder '99's results are considerably better than IdentiFinder '97's. IdentiFinder's performance in MUC-7 is published in (Miller et al., 1998).
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
And if one is interested in TTS, one would probably consider the single orthographic word ACL to consist of three phonological words, /eI/ /si/ /El/, corresponding to the pronunciation of each of the letters in the acronym.
The AdaBoost algorithm was developed for supervised learning.
0
It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model: they proposed a caseframe network for anaphora resolution and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
We evaluated BABAR on two domains: terrorism and natural disasters.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
from the subset of the United Informatics corpus not used in the training of the models.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
While Gan's system incorporates fairly sophisticated models of various linguistic information, it has the drawback that it has only been tested with a very small lexicon (a few hundred words) and on a very small test set (thirty sentences); there is therefore serious concern as to whether the methods that he discusses are scalable.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
7 Acknowledgements.
All the texts were annotated by two people.
0
7 www.cis.upenn.edu/~pdtb/ 8 www.eml-research.de/english/Research/NLP/Downloads
had to buy a new car.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
The average individual parser accuracy was reduced by more than 5% when we added this new parser, but the precision of the constituent voting technique was the only result that decreased significantly.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
where the husband's family name is optionally prepended to the woman's full name; thus xu3lin2-yan2hai3 would represent the name that Ms. Lin Yanhai would take if she married someone named Xu.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Table 2 shows these similarity measures.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
However, there is a strong relationship between n_1^{cls} and the number of hanzi in the class.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The use of the Good-Turing equation presumes suitable estimates of the unknown expectations it requires.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
The following auxiliary quantity is defined: Q_{e'}(e, C, j) := probability of the best partial hypothesis (e_1^i, b_1^i), where C = {b_k | k = 1, ..., i}, b_i = j, e_i = e, and e_{i−1} = e'.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The development of the very first Hebrew Treebank (Sima’an et al., 2001) called for the exploration of general statistical parsing methods, but the application was at first limited.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
Our coreference resolver also incorporates an existential noun phrase recognizer and a DempsterShafer probabilistic model to make resolution decisions.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be, and whether they belong to general language or not.
0
Although these authors report better gains than ours, they are with respect to a non-adapted baseline.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
A similar maximumlikelihood approach was used by Foster and Kuhn (2007), but for language models only.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
A non-optimal analysis is shown with dotted lines in the bottom frame.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Since all long sentence translation are somewhat muddled, even a contrastive evaluation between systems was difficult.
This assumption, however, is not inherent to type-based tagging models.
0
We report token- and type-level accuracy in Table 3 and 6 for all languages and system settings.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
Similarly, the tokens and are tested against each list, and if found, a corresponding feature will be set to 1.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
3.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Further, Maamouri and Bies (2004) argued that the English guidelines generalize well to other languages.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The accuracy of the sets in representing paraphrase ranged from 73% to 99%, depending on the NE categories and set sizes; the accuracy of the links for two evaluated domains was 73% and 86%.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This is a standard adaptation problem for SMT.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
(2009).
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
from the subset of the United Informatics corpus not used in the training of the models.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
Sentences (2) and (3) help to disambiguate one way or the other.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Evaluation of the Segmentation as a Whole.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
3.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
In this section, we will explain the algorithm step by step with examples.