source_text: string, lengths 27–368
label: int64, values 0–1
target_text: string, lengths 1–5.38k
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
We use the default inference parameters.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
At present, the ‘Potsdam Commentary Corpus’ (henceforth ‘PCC’ for short) consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
We also add an annotation for one-level iDafa (oneLevelIdafa) constructs since they make up more than 75% of the iDafa NPs in the ATB (Gabbard and Kulick, 2008).
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
Other errors include NE tagging errors and errors due to a phrase which includes other NEs.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
(Berland and Charniak 99) describe a method for extracting parts of objects from wholes (e.g., "speedometer" from "car") from a large corpus using hand-crafted patterns.
They plan on extending instance-weighting to other standard SMT components and capture the degree of generality of phrase pairs.
0
A similar maximum-likelihood approach was used by Foster and Kuhn (2007), but for language models only.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
Since the parameter and token components will remain fixed throughout experiments, we briefly describe each.
This paper discusses the Potsdam Commentary Corpus, a corpus of German assembled by Potsdam University.
0
(Hartmann 1984), for example, used the term Reliefgebung to characterize the distribution of main and minor information in texts (similar to the notion of nuclearity in RST).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The horizontal axis in this plot represents the most significant dimension, which explains 62% of the variation.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Thus, the derivation trees for TAG's have the same structure as local sets.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Further, the special hash 0 suffices to flag empty buckets.
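The empty-bucket trick mentioned above is easy to illustrate. The following is a minimal sketch, not the actual PROBING implementation: a linear-probing table that stores 64-bit hashes and reserves the hash value 0 to mark empty buckets (class and method names here are illustrative).

```python
# Hypothetical sketch of a linear-probing hash table where the reserved
# hash value 0 marks an empty bucket. Resizing is omitted, so the table
# is assumed never to fill up.
class ProbingTable:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.keys = [0] * capacity      # 0 == empty bucket
        self.values = [None] * capacity

    @staticmethod
    def _hash(key):
        # Any 64-bit hash works; remap an accidental 0 so it never
        # collides with the "empty" marker.
        h = hash(key) & 0xFFFFFFFFFFFFFFFF
        return h if h != 0 else 1

    def insert(self, key, value):
        h = self._hash(key)
        i = h % self.capacity
        while self.keys[i] != 0 and self.keys[i] != h:
            i = (i + 1) % self.capacity   # linear probing
        self.keys[i] = h
        self.values[i] = value

    def lookup(self, key):
        h = self._hash(key)
        i = h % self.capacity
        while self.keys[i] != 0:          # an empty bucket ends the probe
            if self.keys[i] == h:
                return self.values[i]
            i = (i + 1) % self.capacity
        return None                       # hit an empty bucket: not present
```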
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Trees are composed using an operation called adjoining, which is defined as follows.
Most IE researchers have been creating paraphrase knowledge by hand and for specific tasks.
0
3.3 Evaluation Results.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
This remains as future work.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
Confidence Interval: Since BLEU scores are not computed on the sentence level, traditional methods to compute statistical significance and confidence intervals do not apply.
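Since BLEU is a corpus-level statistic, one common workaround (a general technique, not necessarily the exact procedure used in this evaluation) is bootstrap resampling over sentences. The sketch below assumes a user-supplied `score_fn`, e.g. a corpus-BLEU function.

```python
import random

def bootstrap_confidence_interval(hyps, refs, score_fn, n_samples=1000, alpha=0.05):
    """Percentile bootstrap confidence interval for a corpus-level metric.

    score_fn(hyps, refs) -> float computes the metric (e.g. corpus BLEU)
    over parallel lists of hypothesis and reference sentences.
    """
    assert len(hyps) == len(refs)
    n = len(hyps)
    scores = []
    for _ in range(n_samples):
        # Resample sentences with replacement and rescore the whole corpus.
        idx = [random.randrange(n) for _ in range(n)]
        scores.append(score_fn([hyps[i] for i in idx], [refs[i] for i in idx]))
    scores.sort()
    lo = scores[int((alpha / 2) * n_samples)]
    hi = scores[int((1 - alpha / 2) * n_samples) - 1]
    return lo, hi
```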
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Suppose γ results from the adjunction of γ1, ..., γk at the k distinct tree addresses n1, ..., nk in some elementary tree γ′, respectively.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
Uniform Tag Prior (1TW): Our initial lexicon component will be uniform over possible tag assignments as well as word types.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
The top-level weights are trained to maximize a metric such as BLEU on a small development set of approximately 1000 sentence pairs.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
To evaluate proper-name identification, we randomly selected 186 sentences containing 12,000 hanzi from our test corpus and segmented the text automatically, tagging personal names; note that for names, there is always a single unambiguous answer, unlike the more general question of which segmentation is correct.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
We present KenLM, a library that implements two data structures for efficient language model queries, reducing both time and costs.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Examples will usually be accompanied by a translation, plus a morpheme-by-morpheme gloss given in parentheses whenever the translation does not adequately serve this purpose.
A beam search concept is applied as in speech recognition.
0
The proof is given in (Tillmann, 2000).
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
For the automatic evaluation, we used BLEU, since it is the most established metric in the field.
Two general approaches are presented and two combination techniques are described for each approach.
0
Table 3 contains the results for evaluating our systems on the test set (section 22).
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
The other half was replaced by other participants, so we ended up with roughly the same number.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
In other words, the set of hidden states F was chosen to be the fine set of treebank tags.
Here both parametric and non-parametric models are explored.
0
The next two rows are results of oracle experiments.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
The question is how to normalize the probabilities in such a way that smaller groupings have a better shot at winning.
They used graph-based label propagation for cross-lingual knowledge transfer and used the projected labels as features in an unsupervised model.
0
To establish a soft correspondence between the two languages, we use a second similarity function, which leverages standard unsupervised word alignment statistics (§3.3). Since we have no labeled foreign data, our goal is to project syntactic information from the English side to the foreign side.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The trie data structure is commonly used for language modeling.
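To make the appeal of tries for n-gram storage concrete, here is a minimal count-trie sketch in which n-grams sharing a prefix share nodes; it is only an illustration and not the compact array-based TRIE structure described in the paper.

```python
# Minimal n-gram count trie: each node maps the next word to a child node,
# so n-grams that share a prefix share storage. (Illustrative only; a
# production trie would use sorted arrays or bit-packing, not dicts.)
class TrieNode:
    __slots__ = ("children", "count")
    def __init__(self):
        self.children = {}
        self.count = 0

class NGramTrie:
    def __init__(self):
        self.root = TrieNode()

    def add(self, ngram):
        node = self.root
        for word in ngram:
            node = node.children.setdefault(word, TrieNode())
        node.count += 1

    def count(self, ngram):
        node = self.root
        for word in ngram:
            node = node.children.get(word)
            if node is None:
                return 0
        return node.count
```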
This assumption, however, is not inherent to type-based tagging models.
0
4 65.9 48.
It is probably the first analysis of Arabic parsing of this kind.
0
Unlike the WSJ corpus, which has a high frequency of rules like VP → VB PP, Arabic verb phrases usually have lexicalized intervening nodes (e.g., NP subjects and direct objects).
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
Otherwise for the predecessor search hypothesis, we would have chosen a position that would not have been among the first n uncovered positions.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
95 76.
We chose one of four labels for each example: location, person, organization, or noise where the noise category was used for items that were outside the three categories.
0
This section describes AdaBoost, which is the basis for the CoBoost algorithm.
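For context, a generic AdaBoost sketch (binary labels in {-1, +1}, arbitrary weak hypotheses) is shown below; it illustrates the example reweighting and the normalization factor Zt referred to elsewhere in these rows, and is not the CoBoost algorithm itself.

```python
import math

def adaboost(X, y, weak_learners, rounds):
    """Generic AdaBoost sketch for labels y in {-1, +1}.

    weak_learners: list of candidate hypotheses h(x) -> {-1, +1}.
    Returns a list of (alpha, h) pairs defining sign(sum alpha * h(x)).
    """
    n = len(X)
    D = [1.0 / n] * n                      # example weights
    ensemble = []
    for _ in range(rounds):
        # Pick the weak hypothesis with the lowest weighted error.
        h, err = min(
            ((h, sum(D[i] for i in range(n) if h(X[i]) != y[i])) for h in weak_learners),
            key=lambda pair: pair[1],
        )
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight misclassified examples and renormalize; Z is the
        # normalization factor (Z_t in the usual notation).
        D = [D[i] * math.exp(-alpha * y[i] * h(X[i])) for i in range(n)]
        Z = sum(D)
        D = [d / Z for d in D]
    return ensemble

def predict(ensemble, x):
    return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```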
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Finally, quite a few hanzi are homographs, meaning that they may be pronounced in several different ways, and in extreme cases apparently represent different morphemes: the prenominal modification marker de0 is presumably a different morpheme from the second morpheme of mu4di4, even though they are written the same way. The second point, which will be relevant in the discussion of personal names in Section 4.4, relates to the internal structure of hanzi.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The question of what soft function to pick, and how to design algorithms which optimize it, is an open question, but appears to be a promising way of looking at the problem.
This corpus has several advantages: it is annotated at different levels.
0
The focus is in particular on the correlation with rhetorical structure, i.e., the question of whether specific rhetorical relations — or groups of relations in particular configurations — are signalled by speakers with prosodic means.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Similar advances have been made in machine translation (Frederking and Nirenburg, 1994), speech recognition (Fiscus, 1997) and named entity recognition (Borthwick et al., 1998).
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
To define a similarity function between the English and the foreign vertices, we rely on high-confidence word alignments.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
The features are weighted within a logistic model to give an overall weight that is applied to the phrase pair’s frequency prior to making MAP-smoothed relative-frequency estimates (different weights are learned for each conditioning direction).
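A schematic sketch of that weighting pipeline follows, with illustrative feature, parameter, and function names: each out-of-domain phrase-pair count is scaled by a logistic weight computed from its features, and the weighted counts are then turned into MAP-smoothed relative-frequency estimates against an assumed in-domain prior. This is a sketch of the general idea, not the paper's implementation.

```python
import math
from collections import defaultdict

def logistic_weight(features, params):
    """Sigmoid of a linear feature score (illustrative parameterization)."""
    z = sum(params.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def adapted_phrase_table(out_counts, out_features, in_probs, params, prior_strength):
    """out_counts[(src, tgt)]   -> raw out-of-domain count
    out_features[(src, tgt)] -> feature dict for the logistic weight
    in_probs[(src, tgt)]     -> assumed in-domain p(tgt | src), used as the MAP prior
    Returns MAP-smoothed p(tgt | src) built from the weighted counts.
    """
    weighted = {pair: c * logistic_weight(out_features[pair], params)
                for pair, c in out_counts.items()}
    src_totals = defaultdict(float)
    for (src, _), c in weighted.items():
        src_totals[src] += c
    probs = {}
    for (src, tgt), c in weighted.items():
        prior = in_probs.get((src, tgt), 0.0)
        probs[(src, tgt)] = (c + prior_strength * prior) / (src_totals[src] + prior_strength)
    return probs
```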
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
Each round is composed of two stages; each stage updates one of the classifiers while keeping the other classifier fixed.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Process statistics are already collected by the kernel (and printing them has no meaningful impact on performance).
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
For eight judges, ranging k between 1 and 8 corresponded to a precision score range of 90% to 30%, meaning that there were relatively few words (30% of those found by the automatic segmenter) on which all judges agreed, whereas most of the words found by the segmenter were such that one human judge agreed.
While many systems had similar performance, the results offered interesting insights, especially, about the relative performance of statistical and rule-based systems.
0
Sentences and systems were randomly selected and randomly shuffled for presentation.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
After adding a ROOT node to all trees, we train a grammar using six split-and-merge cycles and no Markovization.
A beam search concept is applied as in speech recognition.
0
The traveling salesman problem is an optimization problem which is defined as follows: given a set of cities S = {s1, ..., sn} and, for each pair of cities si, sj, the cost dij > 0 for traveling from city si to city sj, we are looking for the shortest tour visiting all cities exactly once while starting and ending in city s1.
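For reference, the standard exact dynamic program for TSP (Held-Karp) is sketched below; the decoder described in the paper builds on this style of subset-based DP for word reordering rather than solving general TSP.

```python
def held_karp(dist):
    """Exact TSP by dynamic programming over visited-city subsets.

    dist[i][j] is the cost of traveling from city i to city j; the tour
    starts and ends at city 0. Assumes at least two cities.
    Returns the minimum tour cost.
    """
    n = len(dist)
    INF = float("inf")
    # dp[(mask, j)] = cheapest cost of starting at city 0, visiting exactly
    # the cities in `mask`, and ending at city j (city 0 is always in mask).
    dp = {(1, 0): 0.0}
    for mask in range(1, 1 << n):
        if not mask & 1:
            continue
        for j in range(1, n):
            if not mask & (1 << j):
                continue
            prev_mask = mask ^ (1 << j)
            best = INF
            for k in range(n):
                if prev_mask & (1 << k) and (prev_mask, k) in dp:
                    best = min(best, dp[(prev_mask, k)] + dist[k][j])
            if best < INF:
                dp[(mask, j)] = best
    full = (1 << n) - 1
    # Close the tour by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```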
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
We have modified Moses (Koehn et al., 2007) to keep our state with hypotheses; to conserve memory, phrases do not keep state.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
An initial step of any text-analysis task is the tokenization of the input into words.
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
We will evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
The scoping heuristics are based on the anaphor type: for reflexive pronouns the scope is the current clause, for relative pronouns it is the prior clause following its VP, for personal pronouns it is the anaphor’s sentence and two preceding sentences, and for definite NPs it is the anaphor’s sentence and eight preceding sentences.
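Those scoping heuristics map almost directly onto a lookup; the sketch below approximates the clause-level cases at sentence granularity and uses hypothetical indices, so it is illustrative rather than a reimplementation of the system.

```python
def scope_sentences(anaphor_type, sent_idx):
    """Sentence-level antecedent search window, following the heuristics
    above (clause-level cases are approximated by the current sentence).
    Returns (first_sentence, last_sentence), inclusive."""
    if anaphor_type in ("reflexive", "relative"):
        return (sent_idx, sent_idx)               # clause-level in the real system
    if anaphor_type == "personal_pronoun":
        return (max(0, sent_idx - 2), sent_idx)   # sentence + two preceding
    if anaphor_type == "definite_np":
        return (max(0, sent_idx - 8), sent_idx)   # sentence + eight preceding
    raise ValueError(f"unknown anaphor type: {anaphor_type}")
```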
Finally, several coreference systems have successfully incorporated anaphoricity determination modules.
0
The contextual role knowledge had the greatest impact on pronouns: +13% recall for terrorism and +15% recall for disasters, with a +1% precision gain in terrorism and a small precision drop of -3% in disasters.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
However, while Dienes and Dubey recognize empty categories in a pre-processing step and only let the parser find their antecedents, we use the parser both to detect dislocated dependents and to predict either the type or the location of their syntactic head (or both) and use post-processing only to transform the graph in accordance with the parser’s analysis.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
This FSA I can be segmented into words by composing Id(I) with D*, to form the WFST shown in Figure 2(c), then selecting the best path through this WFST to produce the WFST in Figure 2(d).
The AdaBoost algorithm was developed for supervised learning.
0
The method shares some characteristics of the decision list algorithm presented in this paper.
This assumption, however, is not inherent to type-based tagging models.
0
It encodes the one-tag-per-word constraint and is uniform over type-level tag assignments.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
This model is equivalent to the standard HMM except that it enforces the one-word-per-tag constraint.
Across eight European languages, their approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.
0
For comparison, the completely unsupervised feature-HMM baseline accuracy on the universal POS tags for English is 79.4%, and goes up to 88.7% with a treebank dictionary.
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
0
This task measures how well each package performs in machine translation.
However, these approaches are ill-equipped to directly represent type-based constraints such as sparsity.
0
The English data comes from the WSJ portion of the Penn Treebank and the other languages from the training set of the CoNLL-X multilingual dependency parsing shared task.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
To this end, we picked 100 sentences at random containing 4,372 total hanzi from a test corpus. (There were 487 marks of punctuation in the test sentences, including the sentence-final periods, meaning that the average inter-punctuation distance was about 9 hanzi.)
In order to handle the necessary word reordering as an optimization problem within the dynamic programming approach, they describe a solution to the traveling salesman problem (TSP) which is based on dynamic programming.
0
(S, C, j): not only the coverage set C and the positions j, j′, but also the verb group states S, S′ are taken into account.
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
It did 402 queries/ms using 1.80 GB. Memory use increased during scoring due to batch processing (MIT) or caching (Rand).
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Table 8: Per-category performance of the Berkeley parser on sentence lengths ≤ 70 (dev set, gold segmentation): (a) major phrasal categories and (b) major POS categories (label, # gold, F1), and (c) the ten lowest-scoring (Collins, 2003)-style dependencies occurring more than 700 times (parent, head, modifier, direction, # gold, F1).
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
Search hypotheses are processed separately according to their coverage set C. The best scored hypothesis for each coverage set is computed: QBeam(C) = max over (e, e′, S, j) of Qe′(e, S, C, j). The hypothesis (e′, e, S, C, j) is pruned if Qe′(e, S, C, j) < t0 · QBeam(C), where t0 is a threshold to control the number of surviving hypotheses.
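A compact sketch of that pruning step, under assumed data structures: hypotheses are grouped by coverage set, the best score per group is taken as QBeam(C), and hypotheses scoring below t0 times that best are discarded.

```python
from collections import defaultdict

def prune_by_coverage(hypotheses, t0):
    """hypotheses: list of (coverage, score, payload) triples, where coverage
    is a frozenset of covered source positions and higher scores are better.
    Keeps, per coverage set C, only hypotheses with score >= t0 * QBeam(C).
    Illustrative only; data structures are assumptions, not the paper's."""
    best = defaultdict(lambda: float("-inf"))
    for coverage, score, _ in hypotheses:
        best[coverage] = max(best[coverage], score)
    return [h for h in hypotheses if h[1] >= t0 * best[h[0]]]
```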
Due to many similarly performing systems, the author was not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Again, we can compute average scores for all systems for the different language pairs (Figure 6).
They focused on phrases which connect two Named Entities, and proceed in two stages.
0
We realize the importance of paraphrase; however, the major obstacle is the construction of paraphrase knowledge.
However, using the top-level semantic classes of WordNet proved to be problematic as the class distinctions are too coarse.
0
Since pronouns carry little semantics of their own, resolving them depends almost entirely on context.
They believe that global context is useful in most languages, as it is a natural tendency for authors to use abbreviations on entities already mentioned previously.
0
If somewhere else in the document we see “restrictions put in place by President Bush”, then we can be surer that Bush is a name.
Their results show that their high-performance NER uses less training data than other systems.
0
For fair comparison, we have tabulated all results with the size of training data used (Table 5 and Table 6).
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
For that application, at a minimum, one would want to know the phonological word boundaries.
Two general approaches are presented and two combination techniques are described for each approach.
0
As seen by the drop in average individual parser performance baseline, the introduced parser does not perform very well.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Email: rlls@bell-labs.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
suffixes (e.g., ...). Other notable parameters are second order vertical Markovization and marking of unary rules.
This architecture provides a uniform framework in which it is easy to incorporate not only listed dictionary entries but also morphological derivatives, and models for personal names and foreign names in transliteration.
0
Table 3: Classes of words found by ST for the test corpus.
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
Our oracles took advantage of the labeled treebanks. While we tried to minimize the number of free parameters in our model, there are a few hyperparameters that need to be set.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
This is because our corpus is not annotated, and hence does not distinguish between the various words represented by homographs, which could be, for example, jiang1 'be about to' or jiang4 '(military) general', as in xiao3jiang4 'little general.'
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Performance improvements transfer to the Moses (Koehn et al., 2007), cdec (Dyer et al., 2010), and Joshua (Li et al., 2009) translation systems where our code has been integrated.
They incorporated instance-weighting into a mixture-model framework, and found that it yielded consistent improvements over a wide range of baselines.
0
Experiments are presented in section 4.
This paper offers a broad insight into of Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Let us consider an example of ambiguity caused by devocalization.
In this paper, the authors are of the opinion that sequence-model-based approaches usually treat token-level tag assignment as the primary latent variable.
0
A more rigid mechanism for modeling sparsity is proposed by Ravi and Knight (2009), who minimize the size of tagging grammar as measured by the number of transition types.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Most languages that use Roman, Greek, Cyrillic, Armenian, or Semitic scripts, and many that use Indian-derived scripts, mark orthographic word boundaries; however, languages written in a Chinese-derived writing system, including Chinese and Japanese, as well as Indian-derived writing systems of languages like Thai, do not delimit orthographic words. Put another way, written Chinese simply lacks orthographic words.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
This allows the learners to "bootstrap" each other by filling in the labels of the instances on which the other side has abstained so far.
The code is open source, has minimal dependencies, and offers both C++ and Java interfaces for integration.
0
Figure 2 shows timing results.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
Zt can be written as follows. Following the derivation of Schapire and Singer, provided that W+ > W−, Equ.
In this work, they take a more direct approach and treat a word type and its allowed POS tags as a primary element of the model.
0
A crucial difference is that the number of parameters is greatly reduced as is the number of variables that are sampled during each iteration.
Replacing this with a ranked evaluation seems to be more suitable.
0
For the automatic scoring method BLEU, we can distinguish three quarters of the systems.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
1 53.8 47.
Koehn and Monz carried out an extensive manual and automatic evaluation of machine translation performance on European language pairs.
0
The predominant focus on building systems that translate into English has so far ignored the difficult issues of generating rich morphology, which may not be determined solely by local context.
There are clustering approaches that assign a single POS tag to each word type.
0
Specifically, for both settings we report results on the median run for each setting.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
Throughout this paper we compare with several packages: SRILM 1.5.12 (Stolcke, 2002) is a popular toolkit based on tries used in several decoders.
Foster et al. describe a new approach to SMT adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined both by how similar to it they appear to be and by whether they belong to general language or not.
0
Experiments are presented in section 4.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
It has no syntactic function.
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
This is appropriate in cases where it is sanctioned by Bayes’ law, such as multiplying LM and TM probabilities, but for adaptation a more suitable framework is often a mixture model in which each event may be generated from some domain.
They showed better grammars to improve performance on both morphological and syntactic tasks, providing support for the advantage of a joint framework over pipelined or factorized ones.
0
Firstly, Hebrew unknown tokens are doubly unknown: each unknown token may correspond to several segmentation possibilities, and each segment in such sequences may be able to admit multiple PoS tags.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
These make left-to-right query patterns convenient, as the application need only provide a state and the word to append, then use the returned state to append another word, etc.
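That state-passing query pattern can be mimicked with a toy interface (the class and method names below are illustrative, not the library's actual API): the caller passes the current state and the next word, and receives a score together with the state to use for the following word.

```python
class ToyStatefulLM:
    """Toy bigram model illustrating the state-passing query pattern
    (illustrative interface; not the library's actual API)."""
    def __init__(self, bigram_logprobs, unk_logprob=-10.0):
        self.bigram_logprobs = bigram_logprobs   # {(prev_word, word): log10 prob}
        self.unk_logprob = unk_logprob

    def begin_state(self):
        return "<s>"

    def score(self, state, word):
        """Return (log10 probability, next state) for appending `word`."""
        logprob = self.bigram_logprobs.get((state, word), self.unk_logprob)
        return logprob, word                     # next state = last word seen

# Left-to-right usage: thread the returned state through successive calls.
lm = ToyStatefulLM({("<s>", "the"): -0.5, ("the", "cat"): -1.2})
state, total = lm.begin_state(), 0.0
for word in ["the", "cat", "sat"]:
    logprob, state = lm.score(state, word)
    total += logprob
```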
A large number of rules are needed for coverage of the domain, suggesting that a fairly large number of labeled examples should be required to train a classifier.
0
The normalization factor plays an important role in the AdaBoost algorithm.
The fact that projective dependency parsers can never exactly reproduce the analyses found in non-projective tree banks is often neglected because of the relative scarcity of problematic constructions.
0
A graph D = (W, A) is well-formed iff it is acyclic and connected.
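That well-formedness condition is straightforward to check directly; the sketch below treats D = (W, A) as a node set plus directed arcs and tests acyclicity together with (weak) connectedness.

```python
from collections import defaultdict, deque

def is_well_formed(nodes, arcs):
    """A graph D = (W, A) is well-formed iff it is acyclic and connected.
    `nodes` is an iterable of node ids, `arcs` an iterable of (head, dep) pairs."""
    nodes = set(nodes)
    succ, undirected = defaultdict(set), defaultdict(set)
    for h, d in arcs:
        succ[h].add(d)
        undirected[h].add(d)
        undirected[d].add(h)

    # Acyclicity: depth-first search for a back edge in the directed graph.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def has_cycle(u):
        color[u] = GRAY
        for v in succ[u]:
            if color[v] == GRAY or (color[v] == WHITE and has_cycle(v)):
                return True
        color[u] = BLACK
        return False
    if any(color[n] == WHITE and has_cycle(n) for n in nodes):
        return False

    # Connectedness: every node reachable from an arbitrary start node,
    # ignoring arc direction.
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, queue = {start}, deque([start])
    while queue:
        u = queue.popleft()
        for v in undirected[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen == nodes
```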
Through parser combination we have reduced the precision error rate by 30% and the recall error rate by 6% compared to the best previously published result.
0
Ties are rare in Bayes switching because the models are fine-grained — many estimated probabilities are involved in each decision.
Their work is closely related to recent approaches that incorporate the sparsity constraint into the POS induction process.
0
The terms on the right-hand side denote the type-level and token-level probability terms, respectively.