source_text: string (lengths 27–368)
label: int64 (0 or 1)
target_text: string (lengths 1–5.38k)
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
In fact, we found that enabling IRSTLM’s cache made it slightly slower, so results in Table 1 use IRSTLM without caching.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
Queries take the form p(w_n | w_1^{n−1}), where w_1^n is an n-gram.
They plan to extend instance-weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
This is less effective in our setting, where IN and OUT are disparate.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.
This paper talks about Unsupervised Models for Named Entity Classification.
0
context=x: The context for the entity.
All the texts were annotated by two people.
0
In a similar effort, (Götze 2003) developed a proposal for the theory-neutral annotation of information structure (IS), a notoriously difficult area with plenty of conflicting and overlapping terminological conceptions.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
We place certain restrictions on the composition operations of LCFRS's, restrictions that are shared by the composition operations of the constrained grammatical systems that we have considered.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
This is especially effective at reducing load time, since raw bytes are read directly to memory—or, as happens with repeatedly used models, are already in the disk cache.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
The price shocked her.), or same-kind (e.g., Her health insurance paid for the hospital fees, but the automobile insurance did not cover the repair.).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
It has been shown for English (Wang and Hirschberg 1992; Hirschberg 1993; Sproat 1994, inter alia) that grammatical part of speech provides useful information for these tasks.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
We dropped, however, one of the languages, Finnish, partly to keep the number of tracks manageable, partly because we assumed that it would be hard to find enough Finnish speakers for the manual evaluation.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The major problem for our segmenter, as for all segmenters, remains the problem of unknown words (see Fung and Wu [1994]).
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
In the small window on the left, search queries can be entered, here one for an NP that has been annotated on the co-reference layer as bridging.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
0
We evaluate our approach on eight European languages (§6), and show that both our contributions provide consistent and statistically significant improvements.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Our parsing performance measures (SYN) thus report the PARSEVAL extension proposed in Tsarfaty (2006).
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
The sentence-selection approach is crude in that it imposes a binary distinction between useful and non-useful parts of OUT.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
This paper presents a maximum entropy-based named entity recognizer (NER).
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
The normalization factor plays an important role in the AdaBoost algorithm.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Preprocessing the raw trees improves parsing performance considerably. We first discard all trees dominated by X, which indicates errors and non-linguistic text.
Two general approaches are presented and two combination techniques are described for each approach.
0
We call such a constituent an isolated constituent.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
To summarize, we provided: The performance of the baseline system is similar to the best submissions in last year’s shared task.
Their results show that their high-performance NER uses less training data than other systems.
0
All our results are obtained by using only the official training data provided by the MUC conferences.
Their results show that their high-performance NER uses less training data than other systems.
0
For example, if Barry is not in commonWords and is found in the list of person first names, then the feature PersonFirstName will be set to 1.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Making the reasonable assumption that similar information is relevant for solving these problems in Chinese, it follows that a prerequisite for intonation-boundary assignment and prominence assignment is word segmentation.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Our code is thread-safe, and integrated into the Moses, cdec, and Joshua translation systems.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
2.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
For instance, on Spanish, the absolute gap on median performance is 10%.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For pronouns, however, all of the knowledge sources increased recall, often substantially, and with little if any decrease in precision.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
We again assume a training set of n examples {x1, ..., xn} where the first m examples have labels {y1, ..., ym}, and the last (n − m) examples are unlabeled.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Interpolation search formalizes the notion that one opens a dictionary near the end to find the word “zebra.” Initially, the algorithm knows the array begins at b ← 0 and ends at e ← |A| − 1.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
splitIN captures the verb/preposition idioms that are widespread in Arabic.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
The prediction based on these features is a knearest neighbor classification, using the IB1 algorithm and k = 5, the modified value difference metric (MVDM) and class voting with inverse distance weighting, as implemented in the TiMBL software package (Daelemans et al., 2003).
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially beneficial for pronouns.
0
Our knowledge sources return some sort of probability estimate, although in some cases this estimate is not especially well-principled (e.g., the Recency KS).
There are clustering approaches that assign a single POS tag to each word type.
0
A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
We use N(u) to denote the neighborhood of vertex u, and fixed n = 5 in our experiments.
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
In future work, we plan to follow up on this approach and investigate other ways that contextual role knowledge can be used.
There are clustering approaches that assign a single POS tag to each word type.
0
The P(W | T, ψ) term in the lexicon component now decomposes as P(W | T, ψ) = ∏_{i=1}^{n} P(W_i | T_i, ψ) = ∏_{i=1}^{n} ∏_{v ∈ W_i} P(v | ψ_{T_i}); such type-level distributions are not modeled by the standard HMM, which instead can model token-level frequency.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Section 2.2 then describes our representation for contextual roles and four types of contextual role knowledge that are learned from the training examples.
The approach has been successfully tested on the 8 000-word Verbmobil task.
1
The approach has been successfully tested on the 8 000-word Verbmobil task.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Recall.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Because of this, we retokenized and lowercased submitted output with our own tokenizer, which was also used to prepare the training and test data.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Again, we can compute average scores for all systems for the different language pairs (Figure 6).
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
8 1 8.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
For example, we may have semantic information (e.g. database query operations) associated with the productions in a grammar.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Vocabulary lookup is a sorted array of 64-bit word hashes.
The authors show that PATB is similar to other treebanks but that annotation consistency remains low.
0
3.1 Gross Statistics.
This paper talks about Unsupervised Models for Named Entity Classification.
0
It was motivated by the observation that the (Yarowsky 95) algorithm added a very large number of rules in the first few iterations.
This paper talks about Unsupervised Models for Named Entity Classification.
0
We first define “pseudo-labels” ỹ_i as follows: ỹ_i = y_i for 1 ≤ i ≤ m, and ỹ_i = sign(g_2(x_{2,i})) for m < i ≤ n. Thus the first m labels are simply copied from the labeled examples, while the remaining (n − m) examples are taken as the current output of the second classifier.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
In this shared task, we were also confronted with this problem, and since we had no funding for paying human judgements, we asked participants in the evaluation to share the burden.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Several coreference resolvers have used supervised learning techniques, such as decision trees and rule learners (Aone and Bennett, 1995; McCarthy and Lehnert, 1995; Ng and Cardie, 2002; Soon et al., 2001).
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
We refer to (T, W) as the lexicon of a language and ψ as the parameters for their generation; ψ depends on a single hyperparameter β.
Here we present two algorithms.
0
Note that Z_t is a normalization constant that ensures the distribution D_{t+1} sums to 1; it is a function of the weak hypothesis h_t and the weight α_t for that hypothesis chosen at the tth round.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
name => 1-hanzi family + 2-hanzi given.
This paper talks about Unsupervised Models for Named Entity Classification.
0
For each label (Person, Organization, and Location), take the n contextual rules with the highest value of Count′(x) whose unsmoothed strength is above some threshold pmin.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
6 Results and Analysis.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Label propagation can then be used to transfer the labels to the peripheral foreign vertices (i.e. the ones adjacent to the English vertices) first, and then among all of the foreign vertices (§4).
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
Graph construction does not require any labeled data, but makes use of two similarity functions.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Note that in our construction arcs can never cross token boundaries.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The BLEU metric, as all currently proposed automatic metrics, is occasionally suspected to be biased towards statistical systems, especially the phrase-based systems currently in use.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The judgement of 4 in the first case will go to a vastly better system output than in the second case.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
Q_{e′}(e, C, j) = p(f_j | e) · max_{δ, e″, j′ ∈ C \ {j}} { p(j | j′, J) · p(δ) · p_δ(e | e′, e″) · Q_{e″}(e′, C \ {j}, j′) }. The DP equation is evaluated recursively for each hypothesis (e′, e, C, j).
These clusters are computed using an SVD variant without relying on transitional structure.
0
Table 5 provides insight into the behavior of different models in terms of the tagging lexicon they generate.
It is probably the first analysis of Arabic parsing of this kind.
0
We report micro-averaged (whole corpus) and macro-averaged (per sentence) scores, and we add a constraint on the removal of punctuation, which has a single tag (PUNC) in the ATB.
They have made use of local and global features to deal with instances of the same token in a document.
0
Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
The second weakness is purely conceptual, and probably does not affect the performance of the model.
They have made use of local and global features to deal with instances of the same token in a document.
0
For MUC6, for example, Corporate-Suffix-List is made up of ltd., associates, inc., co, corp, ltd, inc, committee, institute, commission, university, plc, airlines, co., corp. and Person-Prefix-List is made up of succeeding, mr., rep., mrs., secretary, sen., says, minister, dr., chairman, ms. For a token in a consecutive sequence of initCaps tokens ending with a corporate suffix, the feature Corporate-Suffix is set to 1.
The AdaBoost algorithm was developed for supervised learning.
0
The approach gains leverage from natural redundancy in the data: for many named-entity instances both the spelling of the name and the context in which it appears are sufficient to determine its type.
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
The most frequently occurring last words of organization names in cslist are compiled into a list of corporate suffixes, Corporate-Suffix- List.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The feature-based model replaces the emission distribution with a log-linear model, with features based on the word identity x: features checking whether x contains digits or hyphens, whether the first letter of x is upper case, and suffix features up to length 3.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
A better approach would be to distinguish between these cases, possibly by drawing on the vast linguistic work on Arabic connectives (AlBatal, 1990).
The corpus was annotated with different linguistic information.
0
This withdrawal by the treasury secretary is understandable, though.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
So we decided to use semantic class information only to rule out candidates.
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
This paper describes several performance techniques used and presents benchmarks against alternative implementations.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
However, to classify a token w, while Borthwick uses tokens from w−2 to w+2 (from two tokens before to two tokens after), we used only the tokens w−1, w, and w+1. Even with local features alone, MENERGI outperforms MENE (Borthwick, 1999).
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections
BABAR performed well in both the terrorism and natural disaster domains, and contextual-role knowledge proved especially beneficial for pronouns.
0
For example, kidnapping victims should be extracted from the subject of the verb “kidnapped” when it occurs in the passive voice (the shorthand representation of this pattern would be “<subject> were kidnapped”).
Here we present two algorithms.
0
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Table 2 Similarity matrix for segmentation judgments.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We further report SYNCS, the parsing metric of Cohen and Smith (2007), to facilitate the comparison.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
Table 3 contains the results for evaluating our systems on the test set (section 22).
The AdaBoost algorithm was developed for supervised learning.
0
Several extensions of AdaBoost for multiclass problems have been suggested (Freund and Schapire 97; Schapire and Singer 98).
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
9 65.5 46.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Other strategies could readily be used. As a reviewer has pointed out, it should be made clear that the function for computing the best path is an instance of the Viterbi algorithm.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
e′, e are the last two target words, C is a coverage set for the already covered source positions, and j is the last position visited.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
(Other classes handled by the current system are discussed in Section 5.)
There is no global pruning.
0
In this paper, we have presented a new, efficient DP-based search procedure for statistical machine translation.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
In this example there are four "input characters," A, B, C and D, and these map respectively to four "pronunciations" a, b, c and d. Furthermore, there are four "words" represented in the dictionary.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
To facilitate the comparison of our results to those reported by (Cohen and Smith, 2007) we use their data set in which 177 empty and “malformed” sentences were removed.
the "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Ma¨rkische Allgemeine Zeitung, a German regional daily.
0
Trying to integrate constituent ordering and choice of referring expressions, (Chiarcos 2003) developed a numerical model of salience propagation that captures various factors of the author's intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions. Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The first probability is estimated from a name count in a text database, and the rest of the probabilities are estimated from a large list of personal names. Note that in Chang et al.'s model the probability of rule 9 is estimated as the product of the probability of finding G1 in the first position of a two-hanzi given name and the probability of finding G2 in the second position of a two-hanzi given name, and we use essentially the same estimate here, with some modifications as described later on.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
Since our objective is to compare distributions of bracketing discrepancies, we do not use heuristics to prune the set of nuclei.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
Our motivation for using Dempster-Shafer is that it provides a well-principled framework for combining evidence from multiple sources with respect to competing hypotheses.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Also surprising is the low test set OOV rate given the possibility of morphological variation in Arabic.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, and found that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Two processes are spawned, requiring B to derive z_1, ..., z_k and C to derive y_1, ..., y_m.
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
Out-of-domain test data is from the Project Syndicate web site, a compendium of political commentary.
In this paper the author evaluates machine translation performance for six European language pairs that participated in a shared task: translating French, German, Spanish texts to English and back.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.