source_text: string (lengths 27 to 368)
label: int64 (values 0 or 1)
target_text: string (lengths 1 to 5.38k)
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
In our fourth model GTnph we add the definiteness status of constituents following Tsarfaty and Sima’an (2007).
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
We carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
The first value reports memory use immediately after loading, while the second reports the increase during scoring. BerkeleyLM is written in Java, which requires memory to be specified in advance.
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
Among these 32 sets, we found the following pairs of sets which have two or more links.
This paper discusses the Potsdam Commentary Corpus, a corpus of German texts assembled by Potsdam University.
0
Here, annotation proceeds in two phases: first, the domains and the units of IS are marked as such.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
Personal names such as 周恩来 zhou1en1-lai2 'Zhou Enlai.'
Because many systems perform similarly, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The BLEU score has been shown to correlate well with human judgement, when statistical machine translation systems are compared (Doddington, 2002; Przybocki, 2004; Li, 2005).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
2.2 Contextual Role Knowledge.
Here we present two algorithms.
0
(6), with W+ > W−.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Besides the applications just sketched, the overarching goal of developing the PCC is to build up an empirical basis for investigating phenomena of discourse structure.
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
The resulting model is compact, efficiently learnable and linguistically expressive.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Formalisms such as the restricted indexed grammars (Gazdar, 1985) and members of the hierarchy of grammatical systems given by Weir (1987) have independent paths, but more complex path sets.
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
For a sentence x and a state sequence z, a first-order Markov model defines a distribution P(x, z) = ∏i P(zi | zi−1) P(xi | zi) (9), where the emission distributions P(xi | zi) range over Val(X), which corresponds to the entire vocabulary.
They plan to extend instance weighting to other standard SMT components and to capture the degree of generality of phrase pairs.
0
Within this framework, we use features intended to capture degree of generality, including the output from an SVM classifier that uses the intersection between IN and OUT as positive examples.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
We therefore also normalized judgements on a per-sentence basis.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The CoBoost algorithm described above divides the function Zco into two parts: Zco = Z1co + Z2co.
There is no global pruning.
0
For each source word f, the list of its possible translations e is sorted according to p(f|e) · puni(e), where puni(e) is the unigram probability of the English word e. It is sufficient to consider only the best 50 words.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
The correct resolution in sentence (b) comes from knowledge that people who are kidnapped are often subsequently released.
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
When a comparison against previous results requires additional pre-processing, we state it explicitly to allow the reader to replicate the reported results.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
Gather phrases using keywords. Next, we select a keyword for each phrase: the top-ranked word based on the TF/IDF metric.
The features were weighted within a logistic model to give an overall weight for each phrase pair, yielding MAP-smoothed relative-frequency estimates that were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
Phrase-level granularity distinguishes our work from previous work by Matsoukas et al. (2009), who weight sentences according to sub-corpus and genre membership.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
For the LM, adaptive weights are set as follows: α̂ = argmax_α Σ_{w,h} p̃(w,h) log Σ_i α_i p_i(w|h), where α is a weight vector containing an element αi for each domain (just IN and OUT in our case), the pi are the corresponding domain-specific models, and p̃(w, h) is an empirical distribution from a target-language training corpus (we used the IN dev set for this).
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Let H be the set of hanzi, p be the set of pinyin syllables with tone marks, and P be the set of grammatical part-of-speech labels.
This assumption, however, is not inherent to type-based tagging models.
0
For inference, we are interested in the posterior probability over the latent variables in our model.
This assumption, however, is not inherent to type-based tagging models.
0
Hyperparameter settings are sorted according to the median one-to-one metric over runs.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
(Riloff and Shepherd 97) describe a bootstrapping approach for acquiring nouns in particular categories (such as "vehicle" or "weapon" categories).
Their method did not assume any knowledge about the target language, making it applicable to a wide array of resource-poor languages.
0
The feature-HMM model works better for all languages, generalizing the results achieved for English by Berg-Kirkpatrick et al. (2010).
They focused on phrases connecting two Named Entities, and proceeded in two stages.
0
Rather, we believe several methods have to be developed using different heuristics to discover a wider variety of paraphrases.
The authors use linguistic and annotation insights to develop a manually annotated grammar, evaluate it, and finally provide a realistic evaluation in which segmentation is performed both in a pipeline and jointly with parsing.
0
∗NP NP PP R) and ∗NP NP ADJP R) are both iDafa attachment.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We will report the evaluation results in the next subsection.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
For each label (Person, Organization, and Location), take the n contextual rules with the highest value of Count'(x) whose unsmoothed strength is above some threshold pmin.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Then, the German infinitive 'besuchen' and the negation particle 'nicht' are translated.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
Here we use a slightly different notion of lift, applying to individual arcs and moving their head upwards one step at a time: lift(wj → wk) = (wi → wk) if wi → wj, undefined otherwise. Intuitively, lifting an arc makes the word wk dependent on the head wi of its original head wj (which is unique in a well-formed dependency graph), unless wj is a root, in which case the operation is undefined (but then wj →* wk is necessarily projective if the dependency graph is well-formed).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Reflexive pronouns with only 1 NP in scope.
Explanations offered for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order, all of which complicate syntactic disambiguation.
0
We are unaware of prior results for the Stanford parser.
All the texts were annotated by two people.
0
For displaying and querying the annotated text, we make use of the Annis Linguistic Database developed in our group for a large research effort (‘Sonderforschungsbereich’) revolving around information structure.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
Pairwise comparison is done using the sign test.
This paper presents KenLM: Faster and Smaller Language Model Queries.
0
Language models are widely applied in natural language processing, and applications such as machine translation make very frequent queries.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
Specifically, we assume each word type W consists of feature-value pairs (f, v).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Until now, all evaluations of Arabic parsing—including the experiments in the previous section—have assumed gold segmentation.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Thus we have some confidence that our own performance is at least as good as that of Chang et al.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Figure 2: An abstract example illustrating the segmentation algorithm.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
98 15.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
To formalize the approach, we introduce four verbgroup states S: Initial (I): A contiguous, initial block of source positions is covered.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
See Figure 3 for a screenshot of the evaluation tool.
The use of global features has yielded excellent results on the MUC-6 and MUC-7 test data.
0
We group the features used into feature groups.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
That is, given a choice between segmenting a sequence abc into abc and ab, c, the former will always be picked so long as its cost does not exceed the summed costs of ab and c: while it is possible for abc to be so costly as to preclude the larger grouping, this will certainly not usually be the case.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Note that Wang, Li, and Chang's.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
We first make use of our morphological analyzer to find all segmentation possibilities by chopping off all prefix sequence possibilities (including the empty prefix) and constructing a lattice from them.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
The most popular approach to dealing with segmentation ambiguities is the maximum matching method, possibly augmented with further heuristics.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
In future work, we plan to follow-up on this approach and investigate other ways that contextual role knowledge can be used.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
For alif with hamza, normalization can be seen as another level of devocalization.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
As can be seen in Figure 3, the phrases in the “agree” set include completely different relationships, which are not paraphrases.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Our implementation permits jumping to any n-gram of any length with a single lookup; this appears to be unique among language model implementations.
The overall performance of their joint framework demonstrated that a probability distribution obtained over mere syntactic contexts using a Treebank grammar and a data-driven lexicon outperformed upper bounds proposed by previous joint disambiguation systems and achieved segmentation and parsing results on a par with those of state-of-the-art standalone applications.
0
This is akin to PoS tag sequences induced by different parses in the setup familiar from English and explored in e.g.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In this paper, we describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
It is annotated with several layers of information: morphology, syntax, rhetorical structure, connectives, coreference, and information structure.
0
basically complete, yet some improvements and extensions are still under way.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
18 We are grateful to ChaoHuang Chang for providing us with this set.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
As a partial solution, for pairs of hanzi that co-occur sufficiently often in our namelists, we use the estimated bigram cost, rather than the independence-based cost.
They found replacing it with a ranked evaluation to be more suitable.
0
Training and testing is based on the Europarl corpus.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Once again we present both a non-parametric and a parametric technique for this task.
In this paper, Das and Petrov approach the problem of inducing unsupervised part-of-speech taggers for languages that have no labeled training data but do have translated text in a resource-rich language.
0
We use label propagation in two stages to generate soft labels on all the vertices in the graph.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
was done by the participants.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Recent work by Finkel and Manning (2009), which re-casts Daumé’s approach in a hierarchical MAP framework, may be applicable to this problem.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
We present several variations for the lexical component P(T, W |ψ), each adding more complex parameterizations.
A beam search concept is applied as in speech recognition.
0
For the translation experiments, Eq. 2 is recursively evaluated.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Up to now, most IE researchers have been creating paraphrase knowledge (or IE patterns) by hand and for specific tasks.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
Put another way, the minimum of Equ.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Finally, note that while most feature concepts are lexicalized, others, such as the suffix concept, are not.
A beam search concept is applied as in speech recognition.
0
In the second and third translation examples, the IbmS word reordering performs worse than the QmS word reordering, since it cannot properly take into account the word reordering due to the German verbgroup.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The Recency KS computes the distance between the candidate and the anaphor relative to its scope.
They found replacing it with a ranked evaluation to be more suitable.
0
Making the ten judgements (2 types for 5 systems) takes on average 2 minutes.
Two general approaches are presented and two combination techniques are described for each approach.
0
This drastic tree manipulation is not appropriate for situations in which we want to assign particular structures to sentences.
Replacing this with a ranked evaluation seems to be more suitable.
0
The main disadvantage of manual evaluation is that it is time-consuming and thus too expensive to do frequently.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
9 66.4 47.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The 2nd block contains the IR system, which was tuned by selecting text in multiples of the size of the EMEA training corpus, according to dev set performance.
This paper presents Unsupervised Models for Named Entity Classification.
0
It is a sequence of proper nouns within an NP; its last word Cooper is the head of the NP; and the NP has an appositive modifier (a vice president at S.&P.) whose head is a singular noun (president).
Two general approaches are presented and two combination techniques are described for each approach.
0
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank, leaving only sections 22 and 23 completely untouched during the development of any of the parsers.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, and proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphor’s sentence and two preceding sentences, and for definite NPs it is the anaphor’s sentence and eight preceding sentences.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The task can be considered to be one component of the MUC (MUC-6, 1995) named entity task (the other task is that of segmentation, i.e., pulling possible people, organizations and locations from text before sending them to the classifier).
Because many systems perform similarly, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The differences in difficulty are better reflected in the BLEU scores than in the raw un-normalized manual judgements.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
However, their inverted variant implements a reverse trie using less CPU and the same amount of memory.
Using a treebank grammar, the data-driven lexicon, and a linguistically motivated unknown-tokens handling technique, their model outperformed previous pipelined, integrated or factorized systems for Hebrew morphological and syntactic processing, yielding an error reduction of 12% over the best published results so far.
0
This heuristic is used to prune all segmentation possibilities involving “lexically improper” segments.
Human judges also pointed out difficulties with the evaluation of long sentences.
0
In all figures, we present the per-sentence normalized judgements.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
3.2 Reordering with IBM Style.
On the basis of that observation, they described a class of formalisms which they called Linear Context-Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Let n be some node labeled X in a tree γ (see Figure 3).
This paper presents Unsupervised Models for Named Entity Classification.
0
From here on we will refer to the named-entity string itself as the spelling of the entity, and the contextual predicate as the context.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Naseem et al. (2009) and Snyder et al.
The corpus was annotated with different kinds of linguistic information.
0
There are still some open issues to be resolved with the format, but it represents a first step.
The authors show that the PATB is similar to other treebanks but that annotation consistency remains low.
0
Table 1: Diacritized particles and pseudo-verbs that, after orthographic normalization, have the equivalent surface form ان an.
Word | Head Of | Complement | POS
1. إنّ inna “Indeed, truly” | VP | Noun | VBP
2. أنّ anna “That” | SBAR | Noun | IN
3. إن in “If” | SBAR | Verb | IN
4. أن an “to” | SBAR | Verb | IN
The manual evaluation of scoring translations on a graded scale from 1 to 5 seemed to be very hard to perform.
0
The average fluency judgement per judge ranged from 2.33 to 3.67, the average adequacy judgement ranged from 2.56 to 4.13.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Since the segmentation corresponds to the sequence of words that has the lowest summed unigram cost, the segmenter under discussion here is a zeroth-order model.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Unigrams also have 64-bit overhead for vocabulary lookup.
Two general approaches are presented and two combination techniques are described for each approach.
0
Again we notice that the isolated constituent precision is larger than 0.5 only in those partitions that contain very few samples.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Recall.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We implement two data structures: PROBING, designed for speed, and TRIE, optimized for memory.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Inspection of the data shows that at n = 2500, the two classifiers both give labels on 44,281 (49.2%) of the unlabeled examples, and give the same label on 99.25% of these cases.
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
The system of Berg-Kirkpatrick et al.
The AdaBoost algorithm was developed for supervised learning.
0
A large number of rules is needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier. However, we show that the use of unlabeled data can reduce the requirements for supervision to just 7 simple "seed" rules.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The results of the manual and automatic evaluation of the participating system translations are detailed in the figures at the end of this paper.