Dataset columns: source_text (string, lengths 27–368), label (int64, values 0–1), target_text (string, lengths 1–5.38k).
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Unfortunately, modifying the model to account for these kind of dependencies is not at all straightforward.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
Since different judges judged different systems (recall that judges were excluded from judging system output from their own institution), we normalized the scores.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The first value reports resident size after loading; the second is the gap between post-loading resident memory and peak virtual memory.
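For illustration, a minimal Python sketch (not from the paper) of how these two figures could be collected on Linux via /proc/self/status; the field names VmRSS and VmPeak are standard kernel output, everything else here is assumed:

def memory_stats():
    # Read the resident set size and peak virtual memory of this process (in kB).
    stats = {}
    with open("/proc/self/status") as f:
        for line in f:
            key, _, rest = line.partition(":")
            if key in ("VmRSS", "VmPeak"):
                stats[key] = int(rest.split()[0])  # value in kB
    return stats

after_load = memory_stats()
resident_kb = after_load["VmRSS"]                     # first value: resident size after loading
gap_kb = after_load["VmPeak"] - after_load["VmRSS"]   # second value: peak virtual minus post-loading resident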
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Recent work by Finkel and Manning (2009) which re-casts Daumé's approach in a hierarchical MAP framework may be applicable to this problem.
There are clustering approaches that assign a single POS tag to each word type.
0
We present several variations for the lexical component P(T, W | ψ), each adding more complex parameterizations.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
It was our hope that this competition, which included the manual and automatic evaluation of statistical systems and one rule-based commercial system, would give further insight into the relation between automatic and manual evaluation.
This paper conducted research in the area of automatic paraphrase discovery.
0
In this subsection, we will report the results of the experiment, in terms of the number of words, phrases or clusters.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
Mixing, smoothing, and instance-feature weights are learned at the same time using an efficient maximum-likelihood procedure that relies on only a small in-domain development corpus.
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
We model p_o(s|t) using a MAP criterion over weighted phrase-pair counts, and from the similarity to (5), assuming y = 0, we see that w_λ(s, t) can be interpreted as approximating p_f(s, t)/p_o(s, t).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Whether a language even has orthographic words is largely dependent on the writing system used to represent the language (rather than the language itself); the notion "orthographic word" is not universal.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The learned patterns are then normalized and applied to the corpus.
This paper talks about Unsupervised Models for Named Entity Classification.
0
(4) is minimized by setting the confidence value as shown below. Since a feature may be present in only a few examples, W_− can in practice be very small or even 0, leading to extreme confidence values.
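For reference, the standard AdaBoost-style setting that this passage appears to describe is, in a hedged reconstruction (the symbol α_t and the smoothing constant ε are the usual textbook notation, not taken verbatim from the source),

\[
\alpha_t = \frac{1}{2}\ln\frac{W_+}{W_-},
\qquad\text{smoothed:}\qquad
\alpha_t = \frac{1}{2}\ln\frac{W_+ + \varepsilon}{W_- + \varepsilon},
\]

where W_+ and W_- are the weighted counts of examples the feature labels correctly and incorrectly; the smoothed variant avoids the extreme confidence values mentioned above.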
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
For illustration, an English translation of one of the commentaries is given in Figure 1.
This paper talks about Unsupervised Models for Named Entity Classification.
0
So the success of the algorithm may well be due to its success in maximizing the number of unlabeled examples on which the two decision lists agree.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
It then computes a normalized Levenshtein edit distance between the extracted chain and the reference.
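For illustration, a minimal sketch of such a normalized edit distance (standard dynamic programming; normalizing by the longer sequence length is an assumption, since the text does not specify the normalizer):

def normalized_levenshtein(a, b):
    # Levenshtein edit distance divided by the length of the longer sequence.
    if not a and not b:
        return 0.0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(a), len(b))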
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Previous Work.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
For instance, by altering the emission distribution parameters, Johnson (2007) encourages the model to put most of the probability mass on few tags.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The inverted alignment probability p(b_i | b_{i-1}, I, J) and the lexicon probability p(f_{b_i} | e_i) are obtained by relative frequency estimates from the Viterbi alignment path after the final training iteration.
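Concretely, relative-frequency estimation here amounts to count ratios over the Viterbi alignments, of the general form below (a standard formulation given for illustration; N(·) denotes counts collected from the Viterbi paths and is our notation, not the paper's):

\[
p(f \mid e) = \frac{N(f, e)}{\sum_{f'} N(f', e)},
\qquad
p(b_i \mid b_{i-1}, I, J) = \frac{N(b_i, b_{i-1}, I, J)}{\sum_{b'} N(b', b_{i-1}, I, J)}.
\]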
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
As can be seen, GR and this "pared-down" statistical method perform quite similarly, though the statistical method is still slightly better. AG clearly performs much less like humans than these methods, whereas the full statistical algorithm, including morphological derivatives and names, performs most closely to humans among the automatic methods.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
The importance of this property becomes clear in contrasting theories underlying GPSG (Gazdar, Klein, Pullum, and Sag, 1985) and GB (as described by Berwick, 1984) with those underlying LFG and FUG.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
For seven out of eight languages a threshold of 0.2 gave the best results for our final model, which indicates that for languages without any validation set, r = 0.2 can be used.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Annotation of syntactic structure for the core corpus has just begun.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Lack of correct reference translations was pointed out as a shortcoming of our evaluation.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
If there is a frequent multi-word sequence in a domain, we could use it as a keyword candidate.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
(Figure: alternative bracketings (a) and (b) of the phrase "Sharm Al-Sheikh summit", the latter with a DTNNP analysis of Al-Sheikh.) If the n-gram occurs in a corpus position without a bracketing label, then we also add (n, NIL) to M. We call the set of unique n-grams with multiple labels in M the variation nuclei of C. Bracketing variation can result from either annotation errors or linguistic ambiguity.
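A minimal sketch of this bookkeeping (the data structures and the NIL marker handling are assumptions about one possible implementation, not the authors' code):

from collections import defaultdict

def variation_nuclei(occurrences):
    # occurrences: iterable of (ngram, label) pairs, where label is a
    # bracketing label or "NIL" for positions without a bracketing.
    labels = defaultdict(set)
    for ngram, label in occurrences:
        labels[ngram].add(label)
    # Variation nuclei: n-grams observed with more than one distinct label.
    return {ngram for ngram, ls in labels.items() if len(ls) > 1}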
Replacing this with a ranked evaluation seems to be more suitable.
0
In fact, it is very difficult to maintain consistent standards on what (say) an adequacy judgement of 3 means, even for a specific language pair.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
At first glance, the problem seems quite complex: a large number of rules is needed to cover the domain, suggesting that a large number of labeled examples is required to train an accurate classifier.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
0 70.9 42.
These clusters are computed using an SVD variant without relying on transitional structure.
0
Our full model yields 39.3% average error reduction across languages when compared to the basic configuration (1TW).
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Tsarfaty (2006) was the first to demonstrate that fully automatic Hebrew parsing is feasible using the newly available 5000 sentences treebank.
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
If a phrase does not contain any keywords, the phrase is discarded.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Various verbal (e.g., …) and adjectival.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
We define the following function: If Zco is small, then it follows that the two classifiers must have a low error rate on the labeled examples, and that they also must give the same label on a large number of unlabeled instances.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
It is worth noting that, although nonprojective constructions are less frequent in DDT than in PDT, they seem to be more deeply nested, since only about 80% can be projectivized with a single lift, while almost 95% of the non-projective arcs in PDT only require a single lift.
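As background, the "lift" operation referred to here can be sketched as follows (a generic illustration, not the authors' implementation; it assumes a 1-indexed head array with index 0 as the artificial root):

def is_projective_arc(heads, h, d):
    # heads[i] is the head of token i (0 = artificial root).
    # Arc (h, d) is projective iff h dominates every token between h and d.
    lo, hi = min(h, d), max(h, d)
    for k in range(lo + 1, hi):
        node = k
        while node != 0 and node != h:
            node = heads[node]
        if node != h:
            return False
    return True

def lift_until_projective(heads, d):
    # Repeatedly reattach the dependent d to its head's head (one "lift")
    # until the resulting arc is projective; return the number of lifts used.
    lifts = 0
    while not is_projective_arc(heads, heads[d], d):
        heads[d] = heads[heads[d]]
        lifts += 1
    return lifts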
This topic has been getting more attention, driven by the needs of various NLP applications.
0
and “H” represents “Hanson Plc”.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
A direct-mapped cache makes BerkeleyLM faster on repeated queries, but their fastest (scrolling) cached version is still slower than uncached PROBING, even on cache-friendly queries.
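For illustration, the idea of a direct-mapped cache in front of a language-model lookup can be sketched as follows (a generic Python sketch; the cache size, hashing, and the compute callback are assumptions, and this is not BerkeleyLM's or KenLM's actual code):

class DirectMappedCache:
    # One slot per hash bucket; a new entry simply overwrites the old one.
    def __init__(self, size=1 << 20):
        self.size = size
        self.slots = [None] * size  # each slot holds (key, value) or None

    def lookup(self, key, compute):
        idx = hash(key) % self.size
        slot = self.slots[idx]
        if slot is not None and slot[0] == key:
            return slot[1]              # cache hit: repeated query served cheaply
        value = compute(key)            # cache miss: fall back to the full query
        self.slots[idx] = (key, value)  # overwrite whatever occupied this slot
        return value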
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
The gender, number, and scoping KSs eliminate candidates from consideration.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
Many hanzi have more than one pronunciation, where the correct.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
These are written to the state s(w_1^n) and returned so that they can be used for the following query.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The extent to which this constraint is enforced varies greatly across existing methods.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
gao1gao1xing4xing4 'happily' In the particular form of A-not-A reduplication illustrated in (3a), the first syllable of the verb is copied, and the negative marker bu4 'not' is inserted between the copy and the full verb.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Such sequences are given additional features of A_begin, A_continue, or A_end, and the acronym is given a feature A_unique.
This paper talks about Unsupervised Models for Named Entity Classification.
0
The core of Yarowsky's algorithm is as follows (a schematic sketch is given below), where h is defined by the formula in equation 2, with counts restricted to training data examples that have been labeled in step 2.
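A self-contained sketch of this style of bootstrapping (a generic decision-list self-training loop; the smoothing constants and the confidence threshold are illustrative assumptions, and the smoothed precision ratio merely stands in for the formula in equation 2):

from collections import defaultdict

def yarowsky_bootstrap(examples, seed_rules, rounds=5, threshold=0.95):
    # examples: list of feature sets; seed_rules: dict mapping feature -> label.
    rules = dict(seed_rules)
    for _ in range(rounds):
        # Step 1: label examples whose features fire a current rule.
        labeled = []
        for feats in examples:
            votes = [rules[f] for f in feats if f in rules]
            if votes:
                labeled.append((feats, max(set(votes), key=votes.count)))
        # Step 2: re-induce the decision list from the newly labeled data,
        # keeping only features whose (smoothed) precision clears the threshold.
        counts = defaultdict(lambda: defaultdict(int))
        for feats, label in labeled:
            for f in feats:
                counts[f][label] += 1
        rules = {}
        for f, by_label in counts.items():
            label, n = max(by_label.items(), key=lambda kv: kv[1])
            if (n + 0.1) / (sum(by_label.values()) + 0.2) > threshold:
                rules[f] = label
        rules.update(seed_rules)  # never discard the seed rules
    return rules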
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
The work reported here is closely related to [Hasegawa et al. 04].
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
In both cases the investigators were able to achieve significant improvements over the previous best tagging results.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
73 81.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors bear on syntactic disambiguation.
0
At the same time, the n-gram error rate is sensitive to samples with extreme n-gram counts.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
This decreases the statistical significance of our results compared to those studies.
Replacing this with a ranked evaluation seems to be more suitable.
0
All systems (except for Systran, which was not tuned to Europarl) did considerably worse on out-of-domain training data.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
In section 5, we then evaluate the entire parsing system by training and evaluating on data from the Prague Dependency Treebank.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Not surprisingly some semantic classes are better for names than others: in our corpora, many names are picked from the GRASS class but very few from the SICKNESS class.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
In MUC6 and MUC7, the named entity task is defined as finding the following classes of names: person, organization, location, date, time, money, and percent (Chinchor, 1998; Sundheim, 1995). Machine learning systems in MUC6 and MUC7 achieved accuracy comparable to rule-based systems on the named entity task.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This significantly underperforms log-linear combination.
They found replacing it with a ranked evaluation to be more suitable.
0
At the very least, we are creating a data resource (the manual annotations) that may be the basis of future research in evaluation metrics.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The statistical systems seem to still lag behind the commercial rule-based competition when translating into morphologically rich languages, as demonstrated by the results for English-German and English-French.
BABAR showed successful results in both the terrorism and natural disaster domains, and its contextual-role knowledge was particularly helpful for pronouns.
0
Experiments in two domains showed that the contextual role knowledge improved coreference performance, especially on pronouns.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
In (1) the sequence ma3lu4 cannot be resolved locally, but depends instead upon broader context; similarly in (2), the sequence cai2neng2 cannot be resolved locally.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
A number of PCC commentaries will be read by professional news speakers and prosodic features will be annotated, so that the various annotation layers can be set into correspondence with intonation patterns.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Traditional Arabic linguistic theory treats both of these types as subcategories of noun. Figure 1: The Stanford parser (Klein and Manning, 2002) is unable to recover the verbal reading of the unvocalized surface form An (Table 1).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
While there are other obstacles to completing this idea, we believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
In Table 7 we give results for several evaluation metrics.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
By taking the ratio of matching n-grams to the total number of n-grams in the system output, we obtain the precision p_n for each n-gram order n. These values for n-gram precision are combined into a BLEU score (see the formula below). The formula for the BLEU metric also includes a brevity penalty for overly short output, which is based on the total number of words in the system output c and in the reference r. BLEU is sensitive to tokenization.
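For reference, the standard BLEU definition being paraphrased here (the usual formulation from Papineni et al., reproduced as background rather than quoted from this paper) is

\[
\mathrm{BLEU} = \mathrm{BP}\cdot\exp\Big(\sum_{n=1}^{N}\frac{1}{N}\log p_n\Big),
\qquad
\mathrm{BP} =
\begin{cases}
1 & \text{if } c > r\\
e^{\,1-r/c} & \text{if } c \le r,
\end{cases}
\]

with N typically 4, c the length of the system output, and r the reference length.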
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We therefore also normalized judgements on a per-sentence basis.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with state-of-the-art standalone applications.
0
The form fmnh, for example, can be understood as the verb “lubricated”, the possessed noun “her oil”, the adjective “fat” or the verb “got fat”.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Each set is assigned two values: belief and plausibility.
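For background, in the Dempster-Shafer framework these two values are standardly defined from the basic probability assignment m (a textbook definition, included here for reference) as

\[
\mathrm{Bel}(A) = \sum_{B \subseteq A} m(B),
\qquad
\mathrm{Pl}(A) = \sum_{B \cap A \neq \emptyset} m(B).
\]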
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Feature weights were set using Och’s MERT algorithm (Och, 2003).
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
We would like to thank Prof. Ralph Grishman, Mr. Takaaki Hasegawa and Mr. Yusuke Shinyama for useful comments, discussion and evaluation.
Here we present two algorithms.
0
Unlabeled examples in the named-entity classification problem can reduce the need for supervision to a handful of seed rules.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
On the surface, our model may seem to be a special case of Cohen and Smith in which α = 0.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Here we use the Good-Turing estimate (Baayen 1989; Church and Gale 1991), whereby the aggregate probability of previously unseen instances of a construction is estimated as n1/N, where N is the total number of observed tokens and n1 is the number of types observed only once.
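A minimal sketch of that estimate (generic illustration, not the authors' code):

from collections import Counter

def unseen_mass_good_turing(tokens):
    # Good-Turing estimate of the aggregate probability of unseen types:
    # n1 / N, where n1 is the number of types observed exactly once and
    # N is the total number of observed tokens.
    counts = Counter(tokens)
    n1 = sum(1 for c in counts.values() if c == 1)
    N = len(tokens)
    return n1 / N if N else 0.0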
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
Tables 4 and 5 also show that putting all of the contextual role KSs in play at the same time produces the greatest performance gain.
There are clustering approaches that assign a single POS tag to each word type.
0
8 1 2.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
Input: Ja , wunderbar . Können wir machen . MonS: Yes, wonderful.
From a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reordering between source and target language in order to achieve an efficient search algorithm.
0
The effect of the pruning threshold t0 is shown in Table 5.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Similar results have been observed across multiple languages.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
In contrast to the Bayesian HMM, θt is not drawn from a distribution which has support for each of the n word types.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
Gather phrases using keywords. Next, we select a keyword for each phrase, namely the top-ranked word based on the TF/IDF metric.
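A sketch of this keyword-selection step (a generic TF/IDF formulation; the phrase and document inputs and the exact weighting are illustrative assumptions, not the paper's definitions):

import math
from collections import Counter

def select_keywords(phrases, documents):
    # For each phrase (a list of words), pick the word with the highest
    # TF-IDF score, where TF is counted over the phrase collection and
    # IDF over the background documents (each a list of words).
    tf = Counter(w for phrase in phrases for w in phrase)
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    n_docs = len(documents)
    def tfidf(w):
        return tf[w] * math.log(n_docs / (1 + df[w]))
    return [max(phrase, key=tfidf) for phrase in phrases]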
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
This token may further embed into a larger utterance, e.g., ‘bcl hneim’ (literally “in-the-shadow the-pleasant”, meaning roughly “in the pleasant shadow”) in which the dominated Noun is modified by a proceeding space-delimited adjective.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Conversely, we can use the full rhetorical tree from the annotations and tune the co-reference module.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
98 15.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
On the other hand, we are interested in the application of rhetorical analysis or ‘discourse parsing’ (3.2 and 3.3), in text generation (3.4), and in exploiting the corpus for the development of improved models of discourse structure (3.5).
They plan on extending instance-weighting to other standard SMT components and capturing the degree of generality of phrase pairs.
0
At first glance, this seems only peripherally related to our work, since the specific/general distinction is made for features rather than instances.
In this paper, the authors take the view that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
By design, they readily capture regularities at the token-level.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
2.4 Underspecified rhetorical structure.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
This is especially true in the case of quotations—which are common in the ATB—where (1) will follow a verb like (2) (Figure 1).
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Human judges also pointed out difficulties with the evaluation of long sentences.
Here both parametric and non-parametric models are explored.
0
The machine learning community has been in a similar situation and has studied the combination of multiple classifiers (Wolpert, 1992; Heath et al., 1996).
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
The high 1 tone of J1l would not normally neutralize in this fashion if it were functioning as a word on its own.
These clusters are computed using an SVD variant without relying on transitional structure.
0
The difference between the featureless model (+PRIOR) and our full model (+FEATS) is 13.6% and 7.7% average error reduction on best and median settings respectively.
NER is useful in many NLP applications such as information extraction, question answering, etc. Chieu and Ng have shown that the maximum entropy framework is able to use global information directly from various sources.
0
This group of features attempts to capture such information.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Each unlabeled pair (x1,i, x2,i) is represented as an edge between nodes corresponding to x1,i and x2,i in the graph.
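A minimal sketch of this graph construction (plain adjacency sets; the view tags used to keep the two node sets apart are an illustrative assumption):

from collections import defaultdict

def build_graph(pairs):
    # pairs: iterable of (x1, x2) views of the same unlabeled example.
    # Each pair contributes an edge between the node for x1 and the node for x2.
    adjacency = defaultdict(set)
    for x1, x2 in pairs:
        u, v = ("view1", x1), ("view2", x2)  # keep the two views in separate namespaces
        adjacency[u].add(v)
        adjacency[v].add(u)
    return adjacency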
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
The compressed variant uses block compression and is rather slow as a result.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Table 6: Example Translations for the Verbmobil task.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
Moreover, they are used as substantives. Unlike machine translation, constituency parsing is not significantly affected by variable word order.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
This cannot be the only explanation, since the discrepancy still holds, for instance, for out-of-domain French-English, where Systran receives among the best adequacy and fluency scores, but a worse BLEU score than all but one statistical system.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The easiest language pair according to BLEU (English-French: 28.33) received worse manual scores than the hardest (English-German: 14.01).
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Along with locking and background kernel operations such as prefaulting, this explains why wall time is not one-eighth that of the single-threaded case. (a) Lossy compression with the same weights. (b) Lossy compression with retuned weights.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but had translations into a resource-rich language.
0
Graph construction does not require any labeled data, but makes use of two similarity functions.