Columns:
  source_text: string (lengths 27 to 368)
  label: int64 (values 0 and 1)
  target_text: string (lengths 1 to 5.38k)
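The column summary above describes a three-field schema. As a minimal sketch of that schema (the file name and storage format are not given here, so the `Row` class and the example row below are hypothetical, with the example values taken from the first row of the dump):

```python
from dataclasses import dataclass

@dataclass
class Row:
    source_text: str  # lengths 27 to 368 per the column summary
    label: int        # binary: 0 or 1
    target_text: str  # lengths 1 up to ~5.38k

# Hypothetical in-memory row mirroring the first entry below.
row = Row(
    source_text="Each out-of-domain phrase pair was characterized by "
                "a set of simple features intended to reflect how useful it would be.",
    label=0,
    target_text="The corpora for both settings are summarized in table 1.",
)
assert 27 <= len(row.source_text) <= 368
assert row.label in (0, 1)
```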
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The corpora for both settings are summarized in table 1.
It is probably the first analysis of Arabic parsing of this kind.
0
It has no syntactic function.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
Word Re-ordering and DP-based Search in Statistical Machine Translation
The use of global features has shown excellent results on MUC-6 and MUC-7 test data.
0
We believe it is natural for authors to use abbreviations in subsequent mentions of a named entity (i.e., first “President George Bush” then “Bush”).
They have made use of local and global features to deal with instances of the same token in a document.
0
Multiple features can be used for the same token.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Next, we represent the input sentence as an unweighted finite-state acceptor (FSA) I over H. Let us assume the existence of a function Id, which takes as input an FSA A, and produces as output a transducer that maps all and only the strings of symbols accepted by A to themselves (Kaplan and Kay 1994).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
We computed BLEU scores for each submission with a single reference translation.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
“Agree” is a subject control verb, which dominates another verb whose subject is the same as that of “agree”; the latter verb is generally the one of interest for extraction.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
We are currently exploring other methods that employ similar ideas and their formal properties.
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The first method uses a similar algorithm to that of (Yarowsky 95), with modifications motivated by (Blum and Mitchell 98).
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
There has also been some work on adapting the word alignment model prior to phrase extraction (Civera and Juan, 2007; Wu et al., 2005), and on dynamically choosing a dev set (Xu et al., 2007).
This paper addresses pseudo-projective dependency parsing.
0
On the other hand, given that all schemes have similar parsing accuracy overall, this means that the Path scheme is the least likely to introduce errors on projective arcs.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
As can be seen from the last column in Table 1, both Head and Head+Path may theoretically lead to a quadratic increase in the number of distinct arc labels (Head+Path being worse than Head only by a constant factor), while the increase is only linear in the case of Path.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
As noted in Section 1, our code finds the longest matching entry w_f^n for the query p(w_n | s(w_f^{n-1})). The probability p(w_n | w_f^{n-1}) is stored with w_f^n, and the backoffs are immediately accessible in the provided state s(w_f^{n-1}). When our code walks the data structure to find w_f^n, it visits w_n^n, w_{n-1}^n, ..., w_f^n.
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
While we used the standard metrics of the community, the way we presented translations and prompted for assessment differed from other evaluation campaigns.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
We assumed that such a contrastive assessment would be beneficial for an evaluation that essentially pits different systems against each other.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
However, when we pre-tag the input—as is recommended for English—we notice a 0.57% F1 improvement.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
We evaluate our approach on seven languages: English, Danish, Dutch, German, Portuguese, Spanish, and Swedish.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
We collected around 300–400 judgements per judgement type (adequacy or fluency), per system, per language pair.
The AdaBoost algorithm was developed for supervised learning.
0
Finally, we would like to note that it is possible to devise similar algorithms based on objective functions other than the one given in Equ.
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The text type is editorials instead of speech transcripts.
This paper presents research in the area of automatic paraphrase discovery.
0
Step 4.
In this paper, the authors proposed an approach for instance-weighting phrase pairs in an out-of-domain corpus in order to improve in-domain performance.
0
The weight on each sentence is a value in [0, 1] computed by a perceptron with Boolean features that indicate collection and genre membership.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
This leads to a linear combination of domain-specific probabilities, with weights in [0, 1], normalized to sum to 1.
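The linear combination described in the row above can be sketched as follows; this is an illustrative helper under stated assumptions, not the authors' actual implementation, and the function name and inputs are hypothetical:

```python
def mixture_prob(probs, weights):
    """Linear combination of domain-specific probabilities.

    probs: per-domain probability estimates for the same event.
    weights: nonnegative mixture weights, normalized here so that
             each weight lies in [0, 1] and they sum to 1.
    """
    total = sum(weights)
    norm = [w / total for w in weights]  # normalized weights
    return sum(w * p for w, p in zip(norm, probs))
```

For example, mixing an in-domain estimate of 0.4 with an out-of-domain estimate of 0.2 at equal weight yields 0.3.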
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
We use the universal POS tagset of Petrov et al. (2011) in our experiments. This set C consists of the following 12 coarse-grained tags: NOUN (nouns), VERB (verbs), ADJ (adjectives), ADV (adverbs), PRON (pronouns), DET (determiners), ADP (prepositions or postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), PUNC (punctuation marks) and X (a catch-all for other categories such as abbreviations or foreign words).
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The probabilities are incorporated into the DempsterShafer model using Equation 1.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Our monolingual similarity function (for connecting pairs of foreign trigram types) is the same as the one used by Subramanya et al. (2010).
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
An additional case of super-segmental morphology is the case of Pronominal Clitics.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
For instance, on Spanish, the absolute gap on median performance is 10%.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
By design, they readily capture regularities at the token-level.
The approach assumes that the word reordering is restricted to a few positions in the source sentence.
0
The translation search is carried out with the category markers and the city names are resubstituted into the target sentence as a postprocessing step.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Our new approach, which is called quasi-monotone search, processes the source sentence monotonically, while explicitly taking into account the positions of the German verbgroup.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Recently, lattices have been used successfully in the parsing of Hebrew (Tsarfaty, 2006; Cohen and Smith, 2007), a Semitic language with similar properties to Arabic.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Also, we don’t know how many such paraphrase sets are necessary to cover even some everyday things or events.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
This suggests a strategy: run interpolation search until the range narrows to 4096 or fewer entries, then switch to binary search.
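The hybrid lookup strategy described in the row above (interpolation search until the range narrows to 4096 or fewer entries, then binary search) can be sketched as follows; this is an illustrative implementation over a sorted Python list of integers, not the paper's actual C++ code, and the function name is hypothetical:

```python
def hybrid_search(arr, key, threshold=4096):
    """Find key in sorted arr: interpolation search on large ranges,
    binary search once the range holds <= threshold entries."""
    lo, hi = 0, len(arr) - 1
    while hi - lo + 1 > threshold and arr[lo] != arr[hi]:
        # Interpolation step: guess the position from key's relative value.
        pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        pos = max(lo, min(hi, pos))  # clamp guesses outside the range
        if arr[pos] == key:
            return pos
        if arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    # Binary search on the narrowed range.
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        if arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```

Interpolation search exploits the roughly uniform key distribution to jump close to the target, while the binary-search fallback bounds the worst case on small ranges.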
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
Again, the idea is that having a picture of syntax, co-reference, and sentence-internal information structure at one’s disposal should aid in finding models of discourse structure that are more explanatory and can be empirically supported.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items that were outside the three categories.
0
The first modification — cautiousness — is a relatively minor change.
It is probably the first analysis of Arabic parsing of this kind.
0
Despite their simplicity, unigram weights have been shown to be an effective feature in segmentation models (Dyer, 2009). The joint parser/segmenter is compared to a pipeline that uses MADA (v3.0), a state-of-the-art Arabic segmenter, configured to replicate ATB segmentation (Habash and Rambow, 2005).
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Figure 1 shows an example.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
We have two such lists, one containing about 17,000 full names, and another containing frequencies of.
Here we present two algorithms.
0
This modification brings the method closer to the DL-CoTrain algorithm described earlier, and is motivated by the intuition that all three labels should be kept healthily populated in the unlabeled examples, preventing one label from dominating — this deserves more theoretical investigation.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
TPT: Germann et al. (2009) describe tries with better locality properties, but did not release code.
They found replacing it with a ranked evaluation to be more suitable.
0
The text type is editorials instead of speech transcripts.
It is annotated with several kinds of information: morphology, syntax, rhetorical structure, connectors, coreference and information structure.
0
Annotation of syntactic structure for the core corpus has just begun.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Finally, we wish to reiterate an important point.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Throughout this paper we shall give Chinese examples in traditional orthography, followed.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of the derivation process as reflected by properties of the trees, and found that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Each treelet (an internal node with all its children) represents the use of a rule that is encapsulated by the grammar. The grammar encapsulates (either explicitly or implicitly) a finite number of rules that can be written as follows (n > 0): in the case of CFG's, one rule for each production; in the case of TAG's, a derivation step in which the derived trees β1, ..., βn are adjoined into β at the given addresses would involve the use of the following rule.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
θ has a belief value of 1.0, indicating complete certainty that the correct hypothesis is included in the set, and a plausibility value of 1.0, indicating that there is no evidence for competing hypotheses.5 As evidence is collected and the likely hypotheses are whittled down, belief is redistributed to subsets of θ.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
The normalized judgement per judge is the raw judgement plus (3 minus average raw judgement for this judge).
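The per-judge normalization described in the row above is simple arithmetic; as a sketch (the function name is hypothetical):

```python
def normalize_judgement(raw, judge_avg):
    """Shift a judge's raw score so that their personal average
    maps onto the midpoint 3 of the 1-5 scale."""
    return raw + (3 - judge_avg)
```

For example, a lenient judge who averages 3.5 and gives a raw score of 4 contributes a normalized score of 3.5.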
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Figure 4 shows some such phrase sets based on keywords in the CC-domain.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
In line with perplexity results from Table 1, the PROBING model is the fastest followed by TRIE, and subsequently other packages.
The corpus was annotated with different linguistic information.
0
Trying to integrate constituent ordering and choice of referring expressions, (Chiarcos 2003) developed a numerical model of salience propagation that captures various factors of author’s intentions and of information structure for ordering sentences as well as smaller constituents, and picking appropriate referring expressions. Chiarcos used the PCC annotations of co-reference and information structure to compute his numerical models for salience projection across the generated texts.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Our TRIE implementation is designed to improve upon IRSTLM using a reverse trie with improved search, bit level packing, and stateful queries.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
The PROBING data structure is a rather straightforward application of these hash tables to store Ngram language models.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
The most frequent NE category pairs are “Person - Person” (209,236), followed by “Country - Country” (95,123) and “Person - Country” (75,509).
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
More complex approaches such as the relaxation technique have been applied to this problem (Fan and Tsai, 1988).
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
More details on the memory-based prediction can be found in Nivre et al. (2004) and Nivre and Scholz (2004).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
was done by the participants.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
Our initial experimentation with the evaluation tool showed that this is often too overwhelming.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Exploiting Diversity in Natural Language Processing: Combining Parsers
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
We can model this probability straightforwardly enough with a probabilistic version of the grammar just given, which would assign probabilities to the individual rules.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For example, a person’s full name will match with just their last name (e.g., “George Bush” and “Bush”), and a company name will match with and without a corporate suffix (e.g., “IBM Corp.” and “IBM”).
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Their best model yields 44.5% one-to-one accuracy, compared to our best median 56.5% result.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Raj and Whittaker (2003) show that integers in a trie implementation can be compressed substantially.
The manual evaluation of scoring translations on a graded scale from 1–5 seems to be very hard to perform.
0
In this shared task, we were also confronted with this problem, and since we had no funding for paying human judgements, we asked participants in the evaluation to share the burden.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
In various dialects of Mandarin certain phonetic rules apply at the word.
Two general approaches are presented and two combination techniques are described for each approach.
0
The precision and recall measures (described in more detail in Section 3) used in evaluating Treebank parsing treat each constituent as a separate entity, a minimal unit of correctness.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
A simple lexicalized PCFG with second order Markovization gives relatively poor performance: 75.95% F1 on the test set. But this figure is surprisingly competitive with a recent state-of-the-art baseline (Table 7).
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
We are focusing on phrases which have two Named Entities (NEs), as those types of phrases are very important for IE applications.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Equ.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The first step in the learning process is to generate training examples consisting of anaphor/antecedent resolutions.
All the texts were annotated by two people.
0
There is a ‘core corpus’ of ten commentaries, for which the range of information (except for syntax) has been completed; the remaining data has been annotated to different degrees, as explained below.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
To measure the contribution of each modification, a third, intermediate algorithm, Yarowsky-cautious was also tested.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Clearly this is not the only way to estimate word-frequencies, however, and one could consider applying other methods: in particular, since the problem is similar to the problem of assigning part-of-speech tags to an untagged corpus given a lexicon and some initial estimate of the a priori probabilities for the tags, one might consider a more sophisticated approach such as that described in Kupiec (1992); one could also use methods that depend on a small hand-tagged seed corpus, as suggested by one reviewer.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
This actually happens quite frequently (more below), so that the rankings are broad estimates.
Das and Petrov, in this paper, approached inducing unsupervised part-of-speech taggers for languages that had no labeled training data, but had translated text in a resource-rich language.
0
Our “Projection” baseline is able to benefit from the bilingual information and greatly improves upon the monolingual baselines, but falls short of the “No LP” model by 2.5% on an average.
There is no global pruning.
0
Sie.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
Evaluation of the Segmentation as a Whole.
Here we present two algorithms.
0
The likelihood of the observed data under the model is where P(yi, xi) is defined as in (9).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For each pair of judges, consider one judge as the standard.
They have made use of local and global features to deal with the instances of same token in a document.
0
By making use of global context, it has achieved excellent results on both MUC6 and MUC7 official test data.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
(Other classes handled by the current system are discussed in Section 5.)
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
This smoothing guarantees that no zero probabilities are estimated.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
We experiment with four values for each hyperparameter, resulting in 16 (α, β) combinations: α ∈ {0.001, 0.01, 0.1, 1.0} and β ∈ {0.01, 0.1, 1.0, 10}. In each run, we performed 30 iterations of Gibbs sampling for the type assignment variables W. We use the final sample for evaluation.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposed using a caseframe network for anaphora resolution, and used information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
The semantic agreement KS eliminates some candidates, but also provides positive evidence in one case: if the candidate and anaphor both have semantic tags human, company, date, or location that were assigned via NER or the manually labeled dictionary entries.
The manual evaluation of scoring translations on a graded scale from 1–5 seemed to be very hard to perform.
0
We asked participants to each judge 200–300 sentences in terms of fluency and adequacy, the most commonly used manual evaluation metrics.
Nevertheless, only a part of this corpus (10 texts), which the authors name "core corpus", is annotated with all this information.
0
Asking the annotator to also formulate the question is a way of arriving at more reproducible decisions.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used a simpler training procedure.
0
Intuitively, as suggested by the example in the introduction, this is the right granularity to capture domain effects.
Explanations for this phenomenon include the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors affect syntactic disambiguation.
0
Lexicalizing several POS tags improves performance.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
While the Bootstrap method is slightly more sensitive, it is very much in line with the sign test on text blocks.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
These are shown, with their associated costs, as follows: AB/nc 4.0; ABC/jj 6.0; CD/vb 5.
The bias of automatic methods in favour of statistical systems seems to be less pronounced on out-of-domain test data.
0
For instance: if 10 systems participate, and one system does better than 3 others, worse than 2, and is not significantly different from the remaining 4, its rank is in the interval 3–7.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
Using cn to denote the number of n-grams, total memory consumption of TRIE, in bits, is plus quantization tables, if used.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Many hanzi have more than one pronunciation, where the correct.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
In the experiments below, we employ a data-driven deterministic dependency parser producing labeled projective dependency graphs,3 previously tested on Swedish (Nivre et al., 2004) and English (Nivre and Scholz, 2004).
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
However, the point of RandLM is to scale to even larger data, compensating for this loss in quality.
The AdaBoost algorithm was developed for supervised learning.
0
For the experiments in this paper we made a couple of additional modifications to the CoBoost algorithm.
It is probably the first analysis of Arabic parsing of this kind.
0
49 99.
Other kinds of productive word classes, such as company names, abbreviations, and place names can easily be handled given appropriate models.
0
name => hanzi_family hanzi_given
(6.1) hanzi_family => hanzi_i
(7.2) hanzi_family => hanzi_i hanzi_j
(8.1) hanzi_given => hanzi_i
(9.2) hanzi_given => hanzi_i hanzi_j
The difficulty is that given names can consist, in principle, of any hanzi or pair of hanzi, so the possible given names are limited only by the total number of hanzi, though some hanzi are certainly far more likely than others.