source_text
stringlengths
27
368
label
int64
0
1
target_text
stringlengths
1
5.38k
All the texts were annotated by two people.
0
Since 170 annotated texts constitute a fairly small training set, Reitter found that an overall recognition accuracy of 39% could be achieved using his method.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The model is often further restricted so that each source word is assigned to exactly one target word (Brown et al., 1993; Ney et al., 2000).
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
The similar explanation applies to the link to the “stake” set.
They found replacing it with a ranked evaluation to be more suitable.
0
While it is essential to be fluent in the target language, it is not strictly necessary to know the source language, if a reference translation was given.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
However, when the linear LM is combined with a linear TM (lm+lin tm) or MAP TM (lm+map TM), the results are much better than a log-linear combination for the EMEA setting, and on a par for NIST.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
While sorted arrays could be used to implement the same data structure as PROBING, effectively making m = 1, we abandoned this implementation because it is slower and larger than a trie implementation.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The semantic caseframe expectations are used in two ways.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
IRSTLM (Federico et al., 2008) is an open-source toolkit for building and querying language models.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
While we had up to 11 submissions for a translation direction, we did decide against presenting all 11 system outputs to the human judge.
A beam search concept is applied as in speech recognition.
0
In Section 4, we present the performance measures used and give translation results on the Verbmobil task.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
We briefly review it here for completeness.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
Judges varied in the average score they handed out.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Using the virtual distribution Di (i) and pseudo-labels"y.,„ values for Wo, W± and W_ can be calculated for each possible weak hypothesis (i.e., for each feature x E Xi); the weak hypothesis with minimal value for Wo + 2/WW _ can be chosen as before; and the weight for this weak hypothesis at = ln ww+411:) can be calculated.
they showed that the average agreement among the human judges is .76, and the average agreement between ST(system) and the humans is .75, or about 99% of the interhuman agreement.
0
However, they list two sets, one consisting of 28 fragments and the other of 22 fragments, in which they had 0% recall and precision.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
However, lexically similar NPs usually refer to the same entity in two cases: proper names and existential noun phrases.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Of course, since the number of attested (phonemic) Mandarin syllables (roughly 1400, including tonal distinctions) is far smaller than the number of morphemes, it follows that a given syllable could in principle be written with any of several different hanzi, depending upon which morpheme is intended: the syllable zhongl could be lfl 'middle,''clock,''end,' or ,'loyal.'
Here we present two algorithms.
0
Our first algorithm is similar to Yarowsky's, but with some important modifications motivated by (Blum and Mitchell 98).
There is no global pruning.
0
3) A tight coupling with the speech recognizer output.
Two general approaches are presented and two combination techniques are described for each approach.
0
A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers.'
There are clustering approaches that assign a single POS tag to each word type.
0
Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
raphy: A ren2 'person' is a fairly uncontroversial case of a monographemic word, and rplil zhong1guo2 (middle country) 'China' a fairly uncontroversial case of a di­ graphernic word.
It is annotated with several data: morphology, syntax, rhetorical structure, connectors, correference and informative structure.
0
We developed a first version of annotation guidelines for co-reference in PCC (Gross 2003), which served as basis for annotating the core corpus but have not been empirically evaluated for inter-annotator agreement yet.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
The results are given in Table 4.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
We tabulate this increase in Table 3.
The AdaBoost algorithm was developed for supervised learning.
0
The 2(Yarowsky 95) describes the use of more sophisticated smoothing methods.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The task is to learn a function from an input string (proper name) to its type, which we will assume to be one of the categories Person, Organization, or Location.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
The probability of the classes assigned to the words in a sentence in a document is defined as follows: where is determined by the maximum entropy classifier.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Training and testing is based on the Europarl corpus.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Queries detect the invalid probability, using the node only if it leads to a longer match.
These clusters are computed using an SVD variant without relying on transitional structure.
0
5 64.7 42.
they evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The cost is computed as follows, where N is the corpus size and f is the frequency: (1) Besides actual words from the base dictionary, the lexicon contains all hanzi in the Big 5 Chinese code/ with their pronunciation(s), plus entries for other characters that can be found in Chinese text, such as Roman letters, numerals, and special symbols.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Cur In order to ground such approaches in linguistic observation and description, a multi-level anno 10 For an exposition of the idea as applied to the task of text planning, see (Chiarcos, Stede 2004).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Presenting the output of several system allows the human judge to make more informed judgements, contrasting the quality of the different systems.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Instead, we condition on the type-level tag assignments T . Specifically, let St = {i|Ti = t} denote the indices of theword types which have been assigned tag t accord ing to the tag assignments T . Then θt is drawn from DIRICHLET(α, St), a symmetric Dirichlet which only places mass on word types indicated by St. This ensures that each word will only be assigned a single tag at inference time (see Section 4).
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
1
We incorporate instance weighting into a mixture-model framework, and find that it yields consistent improvements over a wide range of baselines.
Here both parametric and non-parametric models are explored.
0
The estimation of the probabilities in the model is carried out as shown in Equation 4.
They focused on phrases which two Named Entities, and proceed in two stages.
0
buy - acquire (5) buy - agree (2) buy - purchase (5) buy - acquisition (7) buy - pay (2)* buy - buyout (3) buy - bid (2) acquire - purchase (2) acquire - acquisition (2) acquire - pay (2)* purchase - acquisition (4) purchase - stake (2)* acquisition - stake (2)* unit - subsidiary (2) unit - parent (5) It is clear that these links form two clusters which are mostly correct.
There is no global pruning.
0
1 is given in Fig.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
For every sequence of initial capitalized words, its longest substring that occurs in the same document as a sequence of initCaps is identified.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Each token may admit multiple analyses, each of which a sequence of one or more lexemes (we use li to denote a lexeme) belonging a presupposed Hebrew lexicon LEX.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
This highly effective approach is not directly applicable to the multinomial models used for core SMT components, which have no natural method for combining split features, so we rely on an instance-weighting approach (Jiang and Zhai, 2007) to downweight domain-specific examples in OUT.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
We confirm the finding by Callison-Burch et al. (2006) that the rule-based system of Systran is not adequately appreciated by BLEU.
They have made use of local and global features to deal with the instances of same token in a document.
0
This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).
Combining multiple highly-accurate independent parsers yields promising results.
0
Each decision determines the inclusion or exclusion of a candidate constituent.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
If is seen infrequently during training (less than a small count), then will not be selected as a feature and all features in this group are set to 0.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Thus an explicit assumption about the redundancy of the features — that either the spelling or context alone should be sufficient to build a classifier — has been built into the algorithm.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Here a set is represented by the keyword and the number in parentheses indicates the number of shared NE pair instances.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
This is not completely surprising, since all systems use very similar technology.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
In this case we are interested in finding' the maximum probability parse, ri, and Mi is the set of relevant (binary) parsing decisions made by parser i. ri is a parse selected from among the outputs of the individual parsers.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
The key to the methods we describe is redundancy in the unlabeled data.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
But the city name Sharm Al- Sheikh is also iDafa, hence the possibility for the incorrect annotation in (b).
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
Specifically, the lexicon is generated as: P (T , W |ψ) =P (T )P (W |T ) Word Type Features (FEATS): Past unsupervised POS work have derived benefits from features on word types, such as suffix and capitalization features (Hasan and Ng, 2009; Berg-Kirkpatrick et al.,2010).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Then, it can be verified that We can now derive the CoBoost algorithm as a means of minimizing Zco.
There are clustering approaches that assign a single POS tag to each word type.
0
to represent the ith word type emitted by the HMM: P (t(i)|Ti, t(−i), w, α) ∝ n P (w|Ti, t(−i), w(−i), α) (tb ,ta ) P (Ti, t(i)|T , W , t(−i), w, α, β) = P (T |tb, t(−i), α)P (ta|T , t(−i), α) −i (i) i i (−i) P (Ti|W , T −i, β)P (t |Ti, t , w, α) All terms are Dirichlet distributions and parameters can be analytically computed from counts in t(−i)where T −i denotes all type-level tag assignment ex cept Ti and t(−i) denotes all token-level tags except and w (−i) (Johnson, 2007).
They have made use of local and global features to deal with the instances of same token in a document.
0
If is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where appears.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Email: rlls@bell-labs.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
For displaying and querying the annoated text, we make use of the Annis Linguistic Database developed in our group for a large research effort (‘Sonderforschungsbereich’) revolving around 9 2.7 Information structure.
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Particular instances of relations are associated with goodness scores.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
Local features are features that are based on neighboring tokens, as well as the token itself.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
If so, the CF Network reports that the anaphor and candidate may be coreferent.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Sheffield Guthrie and Hepple (2010) explore several randomized compression techniques, but did not release code.
This assumption, however, is not inherent to type-based tagging models.
0
(2009) also report results on English, but on the reduced 17 tag set, which is not comparable to ours).
It also incorporates the Good-Turing methodin estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
21 In Chinese, numerals and demonstratives cannot modify nouns directly, and must be accompanied by.
This paper talks about Unsupervised Models for Named Entity Classification.
0
Set the decision list to include all rules whose (smoothed) strength is above some threshold Pmin.
Explanations for this phenomenon are relative informativeness of lexicalization, insensitivity to morphology and the effect of variable word order and these factors lead to syntactic disambiguation.
0
of Arabic.
There are clustering approaches that assign a single POS tag to each word type.
0
The second row represents the performance of the median hyperparameter setting.
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
For Experiment 1 it is meaningless as a baseline, since it would result in 0% accuracy. mation on path labels but drop the information about the syntactic head of the lifted arc, using the label d↑ instead of d↑h (AuxP↑ instead of AuxP↑Sb).
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Future work should also extend the approach to build a complete named entity extractor - a method that pulls proper names from text and then classifies them.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
To establish a soft correspondence between the two languages, we use a second similarity function, which leverages standard unsupervised word alignment statistics (§3.3).3 Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
The corpora for both settings are summarized in table 1.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
We will refer to our system as MENERGI (Maximum Entropy Named Entity Recognizer using Global Information).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
For more on the participating systems, please refer to the respective system description in the proceedings of the workshop.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Our clue is the NE instance pairs.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Benchmarks use the package’s binary format; our code is also the fastest at building a binary file.
NER is useful in many NLP applications such as information extraction, question answering, etc .Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document.
BABAR's performance in both domains of terrorism and natural disaster, and the contextual-role knowledge in pronouns have shown successful results.
0
For each case- frame, BABAR collects the head nouns of noun phrases that were extracted by the caseframe in the training corpus.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications results.
0
Evaluating parsing results in our joint framework, as argued by Tsarfaty (2006), is not trivial under the joint disambiguation task, as the hypothesized yield need not coincide with the correct one.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
10.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
These tag distributions are used to initialize the label distributions over the English vertices in the graph.
Vijay-Shankar et all considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
For example, the formalisms in the hierarchy described above generate semilinear languages although their path sets become increasingly more complex as one moves up the hierarchy.
They have made use of local and global features to deal with the instances of same token in a document.
0
4.2 Global Features.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This is not completely surprising, since all systems use very similar technology.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
This is to allow for fair comparison between the statistical method and GR, which is also purely dictionary-based.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
On several languages, we report performance exceeding that of more complex state-of-the art systems.1
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
This paper conducted research in the area of automatic paraphrase discovery.
0
We can make several observations on the cause of errors.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Evaluation of Morphological Analysis.
Combining multiple highly-accurate independent parsers yields promising results.
0
One hybridization strategy is to let the parsers vote on constituents' membership in the hypothesized set.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The numbers falling into the location, person, organization categories were 186, 289 and 402 respectively.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
A few pointed out that adequacy should be broken up into two criteria: (a) are all source words covered?
The overall parsing accuracy obtained with the pseudo-projective approach is still lower than for the best projective parsers.
0
If (wi, r, wj) E A, we say that wi is the head of wj and wj a dependent of wi.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
This is an issue that we have not addressed at the current stage of our research.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Several papers report the use of part-of-speech information to rank segmentations (Lin, Chiang, and Su 1993; Peng and Chang 1993; Chang and Chen 1993); typically, the probability of a segmentation is multiplied by the probability of the tagging(s) for that segmentation to yield an estimate of the total probability for the analysis.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
30 75.
They found replacing it with a ranked evaluation to be more suitable.
0
Given a set of n sentences, we can compute the sample mean x� and sample variance s2 of the individual sentence judgements xi: The extend of the confidence interval [x−d, x+df can be computed by d = 1.96 ·�n (6) Pairwise Comparison: As for the automatic evaluation metric, we want to be able to rank different systems against each other, for which we need assessments of statistical significance on the differences between a pair of systems.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Given a PCFG grammar G and a lattice L with nodes n1 ... nk, we construct the weighted grammar GL as follows: for every arc (lexeme) l E L from node ni to node nj, we add to GL the rule [l --+ tni, tni+1, ... , tnj_1] with a probability of 1 (this indicates the lexeme l spans from node ni to node nj).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Since Daneˇs’ proposals of ‘thematic development patterns’, a few suggestions have been made as to the existence of a level of discourse structure that would predict the information structure of sentences within texts.