source_text: string (lengths 27 to 368)
label: int64 (values 0 to 1)
target_text: string (lengths 1 to 5.38k)
Starting from a DP-based solution to the traveling salesman problem, they present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm.
0
The baseline alignment model does not permit a source word to be aligned with two or more target words; e.g., for the translation direction German to English, the German compound noun 'Zahnarzttermin' causes problems because it must be translated by the two target words 'dentist's appointment'.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
In the terrorism domain, 1600 texts were used for training and 40 documents were used for testing. All sets of hypotheses (and their corresponding belief values) in the current model are crossed with the sets of hypotheses (and belief values) provided by the new evidence.
Here both parametric and non-parametric models are explored.
0
Combining multiple highly-accurate independent parsers yields promising results.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
These 140 NE categories are designed by extending MUC’s 7 NE categories with finer sub-categories (such as Company, Institute, and Political Party for Organization; and Country, Province, and City for Location) and adding some new types of NE categories (Position Title, Product, Event, and Natural Object).
In this paper, the authors argue that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
5.2 Setup.
We chose one of four labels for each example: location, person, organization, or noise, where the noise category was used for items outside the first three categories.
0
(4) is minimized by setting alpha_t = (1/2) ln(W_+ / W_-). Since a feature may be present in only a few examples, W_- can in practice be very small or even 0, leading to extreme confidence values.
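For the degenerate case just described, the standard remedy in this line of work is to smooth both counts; a sketch, assuming the usual AdaBoost-style update (the smoothing constant epsilon is not fixed by the excerpt above):

$$\alpha_t = \frac{1}{2}\ln\frac{W_{+}}{W_{-}} \quad\Longrightarrow\quad \alpha_t = \frac{1}{2}\ln\frac{W_{+}+\varepsilon}{W_{-}+\varepsilon}$$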
They focused on phrases which connect two Named Entities, and proceeded in two stages.
0
Step 3: Sets of phrases based on keywords. Step 4: Links between sets of phrases. All the contexts collected for a given domain are gathered in a bag, and the TF/ITF scores are calculated for all the words in the bag except stopwords.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (the system) and the humans is .75, or about 99% of the inter-human agreement.
0
For each f_t, the Good-Turing estimate just discussed gives us an estimate of p(unseen(f_t) | f_t), the probability of observing a previously unseen instance of a construction in f_t given that we know that we have a construction in f_t.
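For concreteness, the Good-Turing mass assigned to unseen events takes the standard form (a sketch in standard notation, not necessarily the paper's exact symbols):

$$p(\mathrm{unseen}(f_t) \mid f_t) \approx \frac{N_1(f_t)}{N(f_t)}$$

where $N_1(f_t)$ is the number of constructions of type $f_t$ observed exactly once and $N(f_t)$ is the total number of observed instances of type $f_t$.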
The use of global features has shown excellent results on the MUC-6 and MUC-7 test data.
0
Although we have not done any experiments on other languages, this way of using global features from a whole document should be applicable to other languages.
This paper discusses Unsupervised Models for Named Entity Classification.
0
(Blum and Mitchell 98) give an example that illustrates just how powerful the second constraint can be.
There is no global pruning.
0
For the translation model Pr(f_1^J | e_1^I), we assume that each source word is aligned to exactly one target word.
In this paper, the author evaluates machine translation performance for six European language pairs from a shared task: translating French, German, and Spanish texts to English and back.
0
Typically, judges initially spent about 3 minutes per sentence, but then accelerated with experience.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.
They have made use of local and global features to deal with the instances of the same token in a document.
0
We propose maximizing P(t | s, D), where t is the sequence of named-entity tags assigned to the words in the sentence s, and D is the information that can be extracted from the whole document containing s. Our system is built on a maximum entropy classifier.
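The maximum entropy classifier mentioned here has the standard log-linear form; a generic sketch (the features $f_j$ and weights $\lambda_j$ are schematic, not the paper's exact feature set):

$$P(t \mid s, D) \propto \exp\Big(\sum_j \lambda_j\, f_j(t, s, D)\Big)$$

with the constant of proportionality chosen so the probabilities over tag sequences $t$ sum to one.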
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
For example, in "..., says Mr. Cooper, a vice president of ...", both a spelling feature (that the string contains Mr.) and a contextual feature (that president modifies the string) are strong indications that Mr. Cooper is of type Person.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
TRIE uses less memory and has better locality.
Here we present two algorithms.
0
Schapire and Singer show that the training error is bounded above by the product of the Z_t values over all iterations. Thus, in order to greedily minimize an upper bound on training error, on each iteration we should search for the weak hypothesis h_t and the weight alpha_t that minimize Z_t.
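In Schapire and Singer's analysis, the bound and the quantity to minimize are (standard AdaBoost notation, with $D_t$ the distribution over the $m$ training examples at iteration $t$):

$$\frac{1}{m}\,\big|\{i : H(x_i) \neq y_i\}\big| \;\le\; \prod_{t=1}^{T} Z_t, \qquad Z_t = \sum_{i=1}^{m} D_t(i)\,\exp\big(-\alpha_t\, y_i\, h_t(x_i)\big).$$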
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
Before explaining our method in detail, we present a brief overview in this subsection.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
If one of these checks fails then this knowledge source reports that the candidate is not a viable antecedent for the anaphor.
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
One obvious application is information extraction.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
In this way, the method reported on here will necessarily be similar to a greedy method, though of course not identical.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
21 In Chinese, numerals and demonstratives cannot modify nouns directly, and must be accompanied by a classifier.
Here we present two algorithms.
0
Note that on some examples (around 2% of the test set) CoBoost abstained altogether; in these cases we labeled the test example with the baseline label, organization.
The corpus was annotated with different linguistic information.
0
Section 4 draws some conclusions from the present state of the effort.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
In a few cases, the criteria for correctness are made more explicit.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
Extract NE instance pairs with contexts: First, we extract NE pair instances with their context from the corpus.
This paper discusses the Potsdam Commentary Corpus, a corpus of German newspaper commentaries assembled at Potsdam University.
0
A corpus of German newspaper commentaries has been assembled and annotated with different information (and currently, to different degrees): part-of-speech, syntax, rhetorical structure, connectives, co-reference, and information structure.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
The developers aimed to reduce memory consumption at the expense of time.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
Table 2 shows our complete set of results.
Combining multiple highly-accurate independent parsers yields promising results.
0
Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
10 Other orthographic normalization schemes have been suggested for Arabic (Habash and Sadat, 2006), but we observe negligible parsing performance differences between these and the simple scheme used in this evaluation.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
While possible to utilize the feature-based log-linear approach described in Berg-Kirkpatrick et al.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
We observe similar trends when using another measure, type-level accuracy (defined as the fraction of words correctly assigned their majority tag).

Table 4: Comparison of our method (FEATS) to state-of-the-art methods; each cell gives 1-to-1 / many-to-1 accuracy.

Language     BK10 (EM)     BK10 (LBFGS)  G10          FEATS Best    FEATS Median
English      48.3 / 68.1   56.0 / 75.5   --           50.9 / 66.4   47.8 / 66.4
Danish       42.3 / 66.7   42.6 / 58.0   --           52.1 / 61.2   43.2 / 60.7
Dutch        53.7 / 67.0   55.1 / 64.7   --           56.4 / 69.0   51.5 / 67.3
Portuguese   50.8 / 75.3   43.2 / 74.8   44.5 / 69.2  64.1 / 74.5   56.5 / 70.1
Spanish      --            40.6 / 73.2   --           58.3 / 68.9   50.0 / 57.2
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
12 One class of full personal names that this characterization does not cover is married women's names.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Several extensions of AdaBoost for multiclass problems have been suggested (Freund and Schapire 97; Schapire and Singer 98).
There are clustering approaches that assign a single POS tag to each word type.
0
The observed performance gains, coupled with the simplicity of model implementation, make it a compelling alternative to existing more complex counterparts.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
The method halves the error rate in comparison to a method using the labeled examples alone.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
In our case multi-threading is trivial because our data structures are read-only and uncached.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
2.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
More judgements would have enabled us to make better distinctions, but it is not clear what the upper limit is.
Other kinds of productive word classes, such as company names, abbreviations, and place names, can easily be handled given appropriate models.
0
Thus, if one wants to segment words-for any purpose-from Chinese sentences, one faces a more difficult task than one does in English since one cannot use spacing as a guide.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Historically, Arabic grammar has identified two sentence types: those that begin with a nominal ...
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Formally, let e1 (e2) be the number of classification errors of the first (second) learner on the training data, and let e_co be the number of unlabeled examples on which the two classifiers disagree.
They have made use of local and global features to deal with the instances of the same token in a document.
0
MENE (without the help of hand-coded systems) has been shown to be somewhat inferior in performance.
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
Figure 2 shows examples of extracted NE pair instances and their contexts.
Combining multiple highly-accurate independent parsers yields promising results.
0
This is the first set that gives us a fair evaluation of the Bayes models, and the Bayes switching model performs significantly better than its non-parametric counterpart.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Tokens tagged as PUNC are not discarded unless they consist entirely of punctuation.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This is motivated by taking β·p_o(s|t) to be the parameters of a Dirichlet prior on phrase probabilities, and then maximizing posterior estimates p(s|t) given the IN corpus.
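Under that reading, the MAP estimate has the usual closed form for a Dirichlet-multinomial pair; a sketch, where $c(s,t)$ stands for the IN-corpus phrase-pair count (notation assumed, not taken from the excerpt):

$$\hat{p}(s \mid t) = \frac{c(s,t) + \beta\, p_o(s \mid t)}{\sum_{s'} c(s',t) + \beta}$$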
They showed that better grammars improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Many morphological decisions are based on long distance dependencies, and when the global syntactic evidence disagrees with evidence based on local linear context, the two models compete with one another, despite the fact that the PCFG also takes local context into account.
Here we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques.
1
In this paper, we show how non-projective dependency parsing can be achieved by combining a data-driven projective parser with special graph transformation techniques.
The contextual rules are restricted and may not be applicable to every example, but the spelling rules are generally applicable and should have good coverage.
0
Now assume we have n pairs (x1,i, x2,i) drawn from X1 × X2, where the first m pairs have labels yi, whereas for i = m+1, ..., n the pairs are unlabeled.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it, and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
The number of ATB n-grams also falls below the WSJ sample size as the largest WSJ sample appeared in only 162 corpus positions.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
An important reason for separating the two types of features is that this opens up the possibility of theoretical analysis of the use of unlabeled examples.
All the texts were annotated by two people.
0
The PCC is not the result of a funded project.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
This approach is similar to BABAR in that they both acquire knowledge from earlier resolutions.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
name => 1-hanzi family name + 2-hanzi given name.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
Following the system devised under the Qing emperor Kang Xi, hanzi have traditionally been classified according to a set of approximately 200 semantic radicals; members of a radical class share a particular structural component, and often also share a common meaning (hence the term 'semantic').
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
The first point we need to address is what type of linguistic object a hanzi represents.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
A Stochastic Finite-State Word-Segmentation Algorithm for Chinese
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
This is the only important case, because otherwise the simple majority combining technique would pick the correct constituent.
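The majority combining technique referred to above can be sketched in a few lines; this is an illustrative reconstruction, assuming each parse is represented as a set of (label, start, end) spans:

    from collections import Counter

    def majority_constituents(parses):
        """Constituent voting: keep each labeled span that occurs in more
        than half of the parses. Spans winning a strict majority can never
        cross, since two crossing spans would have to co-occur in at least
        one parse, which is impossible for a single tree."""
        votes = Counter(span for parse in parses for span in parse)
        return {span for span, v in votes.items() if v > len(parses) / 2}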
The texts were annotated with the RSTTool.
0
Finally, the focus/background partition is annotated, together with the focus question that elicits the corresponding answer.
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
The natural baseline (baseline) outperforms the pure IN system only for EMEA/EP fr→en.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
If the key distribution’s range is also known (i.e. vocabulary identifiers range from 0 to the number of words), then interpolation search can use this information instead of reading A[0] and A[|A |− 1] to estimate pivots; this optimization alone led to a 24% speed improvement.
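A minimal sketch of interpolation search with a known key range (illustrative Python, not the toolkit's C++ implementation; lo_val and hi_val play the role of the known vocabulary-identifier bounds):

    def interpolation_search(A, key, lo_val, hi_val):
        """Search sorted list A for key, estimating each pivot from the
        known value range [lo_val, hi_val] instead of reading A[0] and
        A[-1]; returns the index of key, or -1 if absent."""
        lo, hi = 0, len(A) - 1
        while lo <= hi and lo_val <= key <= hi_val:
            if hi_val == lo_val:
                pivot = lo
            else:
                pivot = lo + (key - lo_val) * (hi - lo) // (hi_val - lo_val)
            if A[pivot] == key:
                return pivot
            if A[pivot] < key:
                lo, lo_val = pivot + 1, A[pivot] + 1
            else:
                hi, hi_val = pivot - 1, A[pivot] - 1
        return -1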
The authors cluster NE instance pairs based on the words in the context using bag-of-words methods.
0
3.1 Corpora.
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
A position is represented by the word at that position.
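The DP solution to the TSP referred to above is, in its classic form, the Held-Karp recurrence; a generic sketch (not the paper's reordering-restricted variant), assuming at least two cities:

    from itertools import combinations

    def held_karp(dist):
        """Cost of the cheapest tour over all cities, starting and ending
        at city 0; dist[i][j] is the travel cost from i to j.
        Runs in O(n^2 * 2^n) time via bitmask dynamic programming."""
        n = len(dist)
        # C[(S, j)]: cheapest path from city 0 that visits exactly the
        # cities in bitmask S (over cities 1..n-1) and ends at city j.
        C = {(1 << (j - 1), j): dist[0][j] for j in range(1, n)}
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                S = sum(1 << (j - 1) for j in subset)
                for j in subset:
                    prev = S ^ (1 << (j - 1))
                    C[(S, j)] = min(C[(prev, k)] + dist[k][j]
                                    for k in subset if k != j)
        full = (1 << (n - 1)) - 1
        return min(C[(full, j)] + dist[j][0] for j in range(1, n))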
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
The input for the segmentation task is however highly ambiguous for Semitic languages, and surface forms (tokens) may admit multiple possible analyses as in (BarHaim et al., 2007; Adler and Elhadad, 2006).
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
The combining algorithm is presented with the candidate parses and asked to choose which one is best.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
Unsupervised Part-of-Speech Tagging with Bilingual Graph-Based Projections
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
We have argued that the proposed method performs well.
They found replacing it with a ranked evaluation to be more suitable.
0
For each sentence, we counted how many n-grams in the system output also occurred in the reference translation.
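A minimal sketch of that per-sentence count (with BLEU-style clipping of repeated n-grams, which is an assumption; the excerpt only says matches were counted):

    from collections import Counter

    def ngram_matches(output, reference, n):
        """Count n-grams of the system output that also occur in the
        reference, clipping each n-gram at its reference frequency."""
        out = Counter(tuple(output[i:i + n])
                      for i in range(len(output) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        return sum(min(count, ref[gram]) for gram, count in out.items())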
In this paper, the author evaluates machine translation performance for six European language pairs from a shared task: translating French, German, and Spanish texts to English and back.
0
We computed BLEU scores for each submission with a single reference translation.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
In both cases, SRILM walks its trie an additional time to minimize context as mentioned in Section 4.1.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
The Stanford parser includes both the manually annotated grammar (§4) and an Arabic unknown word model with the following lexical features: 1.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Furthermore, even the size of the dictionary per se is less important than the appropriateness of the lexicon to a particular test corpus: as Fung and Wu (1994) have shown, one can obtain substantially better segmentation by tailoring the lexicon to the corpus to be segmented.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
We simulate lexical constraints by using an external lexical resource against which we verify whether OOV segments are in fact valid Hebrew lexemes.
Combining multiple highly-accurate independent parsers yields promising results.
0
In the interest of testing the robustness of these combining techniques, we added a fourth, simple nonlexicalized PCFG parser.
The texts were annotated with the RSTTool.
0
Section 4 draws some conclusions from the present state of the effort.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
Across all languages, +PRIOR consistently outperforms 1TW, reducing error on average by 9.1% and 5.9% on best and median settings respectively.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
TRIE uses less memory and has better locality.
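A minimal sketch of the linear-probing idea behind PROBING (illustrative Python; the real structure is a flat array of 64-bit hashes in C++, and reserving 0 as the empty marker is an assumption of this sketch):

    class ProbingTable:
        """Open-addressing hash table with linear probing: on collision,
        step to the next bucket. Assumes key 0 is never inserted and the
        table is never full."""
        EMPTY = 0

        def __init__(self, capacity):
            self.keys = [self.EMPTY] * capacity
            self.vals = [None] * capacity

        def insert(self, key, value):
            i = key % len(self.keys)
            while self.keys[i] not in (self.EMPTY, key):
                i = (i + 1) % len(self.keys)  # linear probe
            self.keys[i], self.vals[i] = key, value

        def lookup(self, key):
            i = key % len(self.keys)
            while self.keys[i] != self.EMPTY:
                if self.keys[i] == key:
                    return self.vals[i]
                i = (i + 1) % len(self.keys)
            return None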
The instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
Even when there is training data available in the domain of interest, there is often additional data from other domains that could in principle be used to improve performance.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
This is by now a fairly standard representation for multiple morphological segmentation of Hebrew utterances (Adler, 2001; Bar-Haim et al., 2005; Smith et al., 2005; Cohen and Smith, 2007; Adler, 2007).
Their empirical results demonstrate that the type-based tagger rivals state-of-the-art tag-level taggers which employ more sophisticated learning mechanisms to exploit similar constraints.
0
Overall, the difference between our most basic model (1TW) and our full model (+FEATS) is 21.2% and 13.1% for the best and median settings respectively.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
SRILM (Stolcke, 2002) is widely used within academia.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Applications such as machine translation use language model probability as a feature to assist in choosing between hypotheses.
Because many systems performed similarly, the author was not able to draw strong conclusions about the correlation between manual and automatic evaluation metrics.
0
To check for this, we do pairwise bootstrap resampling: Again, we repeatedly sample sets of sentences, this time from both systems, and compare their BLEU scores on these sets.
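A minimal sketch of the paired bootstrap (illustrative Python; using per-sentence scores as a stand-in is an assumption, since BLEU is actually recomputed from aggregated n-gram statistics on each resampled set):

    import random

    def paired_bootstrap(scores_a, scores_b, samples=1000):
        """Resample sentence indices with replacement and report how
        often system A outscores system B on the resampled sets."""
        n, wins = len(scores_a), 0
        for _ in range(samples):
            idx = [random.randrange(n) for _ in range(n)]
            if sum(scores_a[i] for i in idx) > sum(scores_b[i] for i in idx):
                wins += 1
        return wins / samples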
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Hyperparameters: Our model has two Dirichlet concentration hyperparameters; α is the shared hyperparameter for the token-level HMM emission and transition distributions.
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Table 4 shows translation results for the three approaches.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Ex: Mr. Cristiani is the president ...
They have made use of local and global features to deal with the instances of the same token in a document.
0
We have shown that the maximum entropy framework is able to use global information directly.
The model draws on various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
For example, hanzi containing the INSECT radical tend to denote insects and other crawling animals; examples include wa1 'frog,' feng1 'wasp,' and she2 'snake.'
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model; they proposed using a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
For each domain, we created a semantic dictionary by doing two things.
In this paper, Bean and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
For IE, the system must be able to distinguish between semantically similar noun phrases that play different roles in an event.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
In approaching this problem, a variety of different methods are conceivable, including a more or less sophisticated use of machine learning.
They employed a PCFG-based generative framework to make both syntactic and morphological disambiguation decisions which were not only theoretically clean and linguistically justified but also probabilistically appropriate and empirically sound.
0
Segmental morphology: Hebrew has seven particles, m (“from”), f (“when”/“who”/“that”), h (“the”), w (“and”), k (“like”), l (“to”), and b (“in”), which may never appear in isolation and must always attach as prefixes to the following open-class category item we refer to as the stem.
Vijay-Shanker et al. considered the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate.
0
This aspect of the formalism is both linguistically and computationally important.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
By applying an inverse transformation to the output of the parser, arcs with non-standard labels can be lowered to their proper place in the dependency graph, giving rise to non-projective structures. (Footnote 1: The dependency graph has been modified to make the final period a dependent of the main verb instead of being a dependent of a special root node for the sentence.)
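The lowering step can be sketched as a search below an arc's current head; this is an illustrative reconstruction that assumes lifted arcs carry labels of the form "deplabel|target" (the actual encoding schemes in this line of work differ in detail):

    from collections import deque

    def lower_arcs(heads, labels, sep="|"):
        """Lower every lifted arc: reattach its dependent to the nearest
        descendant of the current head whose own label matches the
        recorded target. heads[i] is the head of token i (-1 = root)."""
        # note: `children` is built once from the input heads; rebuilding
        # it after each lowering is omitted to keep the sketch short
        children = {i: [] for i in range(len(heads))}
        for d, h in enumerate(heads):
            if h != -1:
                children[h].append(d)
        for d, lab in enumerate(labels):
            if sep not in lab or heads[d] == -1:
                continue
            deplabel, target = lab.split(sep, 1)
            queue = deque(children[heads[d]])
            while queue:
                c = queue.popleft()
                if c == d:
                    continue  # never lower into the dependent's own subtree
                if labels[c].split(sep, 1)[0] == target:
                    heads[d] = c          # the arc's proper place
                    labels[d] = deplabel  # restore the original label
                    break
                queue.extend(children[c])
        return heads, labels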
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Only IRSTLM does not support threading.
The authors believe automatic paraphrase discovery is an important component for building a fully automatic information extraction system.
0
A total of 13,976 phrases were grouped.