source_text — string, lengths 27 to 368
label — int64, values 0 or 1
target_text — string, lengths 1 to 5.38k
The "Potsdam Commentary Corpus" or PCC consists of 170 commentaries from Märkische Allgemeine Zeitung, a German regional daily.
0
What ought to be developed now is an annotation tool that can make use of the format, allow for underspecified annotations and visualize them accordingly.
They found replacing it with a ranked evaluation to be more suitable.
0
So, who won the competition?
They proposed an unsupervised method to discover paraphrases from a large untagged corpus.
0
shows some keywords with their scores.
This corpus has several advantages: it is annotated at different levels.
0
This offers the well-known advantages for inter- changability, but it raises the question of how to query the corpus across levels of annotation.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
SRILM inefficiently stores 64-bit pointers.
On the basis of that observation, they described a class of formalisms which they called Linear Context- Free Rewriting Systems (LCFRs), and showed they were recognizable in polynomial time and generated only semilinear languages.
0
Unbounded dependencies between branches are not possible in such a system.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
For verbs we add two features.
Here we show how non-projective dependency parsing can be achieved by combining a data driven projective parser with special graph transformation techniques.
0
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
Not only is concatenation of these strings possible, but head wrapping can be used to split a string and wrap it around another string.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
On the first of these (the B set) our system had 64% recall and 86% precision; on the second (the C set) it had 33% recall and 19% precision.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
67 95.
The main result is that the combined system can recover non-projective dependencies with a precision sufficient to give a significant improvement in overall parsing accuracy, especially with respect to the exact match criterion, leading to the best reported performance for robust non-projective parsing of Czech.
0
In the second scheme, Head+Path, we in addition modify the label of every arc along the lifting path from the syntactic to the linear head so that if the original label is p the new label is p↓.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
The second modification is more important, and is discussed in the next section.
The authors show that PATB is similar to other tree-banks but that annotation consistency remains low.
0
49 99.
Bean and Riloff also used bootstrapping to extend their semantic compatibility model, proposing a caseframe network for anaphora resolution and information extraction patterns to identify contextual clues for determining compatibility between NPs.
0
Our results show that BABAR achieves good performance in both domains, and that the contextual role knowledge improves performance, especially on pronouns.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
We concentrate on those sets.
They showed the efficacy of graph-based label propagation for projecting part-of-speech information across languages.
0
The following three sections elaborate these different stages in more detail.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
The first author is supported by a National Defense Science and Engineering Graduate (NDSEG) fellowship.
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
Table 8b shows that verbal nouns are the hardest pre-terminal categories to identify.
The manual evaluation of scoring translation on a graded scale from 1–5 seemed to be very hard to perform.
0
For the automatic scoring method BLEU, we can distinguish three quarters of the systems.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
Turning now to (1), we have the similar problem that splitting into ma3 'horse' and lu4 'way' is more costly than retaining this as one word ma3lu4 'road.'
Due to many similarly performing systems, they are not able to draw strong conclusions on the question of correlation of manual and automatic evaluation metrics.
0
Again, we can compute average scores for all systems for the different language pairs (Figure 6).
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
This class of formalisms has the properties that their derivation trees are local sets, and they manipulate objects using a finite number of composition operations that use a finite number of symbols.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In (2a), we want to split the two morphemes since the correct analysis is that we have the adverb cai2 'just,' the modal verb neng2 'be able' and the main verb ke4fu2 'overcome'; the competing analysis is, of course, that we have the noun cai2neng2 'talent,' followed by ke4fu2 'overcome.'
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
37 84.
This paper presents a maximum entropy-based named entity recognizer (NER).
0
For this example, since the sequence Even News Broadcasting Corp. only appears once in the document, its longest substring that occurs in the same document is News Broadcasting Corp. In this case, News has an additional feature of I begin set to 1, Broadcasting has an additional feature of I continue set to 1, and Corp. has an additional feature of I end set to 1.
It also incorporates the Good-Turing method in estimating the likelihoods of previously unseen constructions, including morphological derivatives and personal names.
0
Figure 5: An example of affixation: the plural affix.
They evaluate various specific aspects of the segmentation, as well as the overall segmentation performance.
0
In many cases these failures in recall would be fixed by having better estimates of the actual probabilities of single-hanzi words, since our estimates are often inflated.
They proposed a single joint model for performing both morphological segmentation and syntactic disambiguation which bypasses the associated circularity.
0
Furthermore, the combination of pruning and vertical markovization of the grammar outperforms the Oracle results reported by Cohen and Smith.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
As described in Sproat (1995), the Chinese segmenter presented here fits directly into the context of a broader finite-state model of text analysis for speech synthesis.
The authors use linguistic and annotation insights to develop a manually annotated grammar and evaluate it and finally provide a realistic evaluation in which segmentation is performed in a pipeline jointly with parsing.
0
85 82.
The three parsers were trained and tuned by their creators on various sections of the WSJ portion of the Penn Treebank.
0
Proof: Assume a pair of crossing constituents appears in the output of the constituent voting technique using k parsers.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
(Figure fragment: segmentation lattice with nodes JAPANS, JAPANESE, OCTOPUS, and HOW SAY, and path costs 6.51 and 9.51.)
Instance-weighting approach improved over a wide range of baselines, giving gains of over 2 BLEU points over the best non-adapted baseline.
0
It is also worth pointing out a connection with Daum´e’s (2007) work that splits each feature into domain-specific and general copies.
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
This feature includes named entities, which the ATB marks with a flat NP node dominating an arbitrary number of NNP pre-terminal daughters (Figure 2).
Human judges also pointed out difficulties with the evaluation of long sentences.
0
The normalization on a per-judge basis gave very similar ranking, only slightly less consistent with the ranking from the pairwise comparisons.
The PROBING data structure uses linear probing hash tables and is designed for speed.
0
Storing state therefore becomes a time-space tradeoff; for example, we store state with partial hypotheses in Moses but not with each phrase.
Combining multiple highly-accurate independent parsers yields promising results.
0
It is the performance we could achieve if an omniscient observer told us which parser to pick for each of the sentences.
The AdaBoost algorithm was developed for supervised learning.
0
We present two algorithms.
There are clustering approaches that assign a single POS tag to each word type.
0
5 68.1 34.
The departure from the traditional token-based tagging approach allows them to explicitly capture type-level distributional properties of valid POS tag assignments as part of the model.
0
41.
There are clustering approaches that assign a single POS tag to each word type.
0
There are two key benefits of this model architecture.
This topic has been getting more attention, driven by the needs of various NLP applications.
0
First, from a large corpus, we extract all the NE instance pairs.
The approach has been successfully tested on the 8 000-word Verbmobil task.
0
3) A tight coupling with the speech recognizer output.
In order to realize the full potential of dependency-based syntactic parsing, it is desirable to allow non-projective dependency structures.
0
The Danish Dependency Treebank (DDT) comprises about 100K words of text selected from the Danish PAROLE corpus, with annotation of primary and secondary dependencies (Kromann, 2003).
This paper presents methods to query N-gram language models, minimizing time and space costs.
0
Our open-source (LGPL) implementation is also available for download as a standalone package with minimal (POSIX and g++) dependencies.
There are clustering approaches that assign a single POS tag to each word type.
0
Across all languages, high performance can be attained by selecting a single tag per word type.
The evaluation compares the performance of the system with that of several human judges and inter-human agreement on a single correct way to segment a text.
0
(Figure fragment: _ADV node with cost 5.88.)
It is well-known that English constituency parsing models do not generalize to other languages and treebanks.
0
In application settings, this may be a profitable strategy.
Each out-of-domain phrase pair was characterized by a set of simple features intended to reflect how useful it would be.
0
The natural baseline (baseline) outperforms the pure IN system only for EMEA/EP fren.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
The tree pumping lemma states that if there is a tree t = t1 t2 t3, generated by a CFG G, whose height is more than a predetermined bound k, then all trees of the form t1 t2^i t3 for each i > 0 will also be generated by G (as shown in Figure 9b).
This paper talks about KenLM: Faster and Smaller Language Model Queries.
0
Table notes: (a) uses lossy compression; (b) the 8-bit quantized variant returned incorrect probabilities as explained in Section 3.
This paper conducted research in the area of automatic paraphrase discovery.
0
Extract NE pair instances with contexts From the four years of newspaper corpus, we extracted 1.9 million pairs of NE instances.
Evaluation was done automatically using the BLEU score and manually on fluency and adequacy.
0
In addition to the Europarl test set, we also collected 29 editorials from the Project Syndicate website2, which are published in all the four languages of the shared task.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
Following the system devised under the Qing emperor Kang Xi, hanzi have traditionally been classified according to a set of approximately 200 semantic radicals; members of a radical class share a particular structural component, and often also share a common meaning (hence the term 'semantic').
They have made use of local and global features to deal with the instances of same token in a document.
0
For example, in predicting if a word belongs to a word class, y is either true or false, and x refers to the surrounding context: f(x, y) = 1 if y = true and the previous word is "the", and 0 otherwise. The parameters are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972).
The texts were annotated with the RSTtool.
0
Indeed there are several open issues.
The features were weighted within a logistic model that gave an overall weight that was applied to the phrase pair and MAP-smoothed relative-frequency estimates which were combined linearly with relative-frequency estimates from an in-domain phrase table.
0
This highly effective approach is not directly applicable to the multinomial models used for core SMT components, which have no natural method for combining split features, so we rely on an instance-weighting approach (Jiang and Zhai, 2007) to downweight domain-specific examples in OUT.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
87 Table 7: Test set results.
This paper discusses the Potsdam Commentary Corpus, a corpus of German newspaper commentaries assembled at Potsdam University.
0
(Webber et al., 2003)).
This paper presents a maximum entropy-based named entity recognizer (NER).
0
Recently, statistical NERs have achieved results that are comparable to hand-coded systems.
The use of global features has shown excellent result in the performance on MUC-6 and MUC-7 test data.
0
A named entity recognizer (NER) is useful in many NLP applications such as information extraction, question answering, etc. On its own, a NER can also provide users who are looking for person or organization names with quick information.
Their results suggested that it was possible to learn accurate POS taggers for languages which did not have any annotated data, but have translations into a resource-rich language.
0
The edge weights between the foreign language trigrams are computed using a co-occurence based similarity function, designed to indicate how syntactically similar the middle words of the connected trigrams are (§3.2).
It outperformed strong unsupervised baselines as well as approaches that relied on direct projections, and bridged the gap between purely supervised and unsupervised POS tagging models.
0
These universal POS categories not only facilitate the transfer of POS information from one language to another, but also relieve us from using controversial evaluation metrics,2 by establishing a direct correspondence between the induced hidden states in the foreign language and the observed English labels.
For the perplexity and translation tasks, we used SRILM to build a 5-gram English language model on 834 million tokens from Europarl v6 (Koehn, 2005) and the 2011 Workshop on Machine Translation News Crawl corpus with duplicate lines removed.
0
SRILM inefficiently stores 64-bit pointers.
Other kinds of productive word classes, such as company names, abbreviations,and place names can easily be handled given appropriate models.
0
Various segmentation approaches were then compared with human performance: 1.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
In Section 2, we briefly review our approach to statistical machine translation.
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
Acronyms (ACRO): Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM).
This paper offers a broad insight into Arabic constituency parsing by analyzing the interplay of linguistic phenomena, annotation choices, and model design.
0
Better Arabic Parsing: Baselines, Evaluations, and Analysis
Here both parametric and non-parametric models are explored.
0
Finally we show the combining techniques degrade very little when a poor parser is added to the set.
In order to create good-sized vectors for similarity calculation, they had to set a high frequency threshold.
0
As you can see in the figure, the accuracy for the domain is quite high except for the “agree” set, which contains various expressions representing different relationships for an IE application.
The experimental tests are carried out on the Verbmobil task which is a limited-domain spoken-language task.
0
SSER: subjective sentence error rate: For a more detailed analysis, the translations are judged by a human test person.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
There is a (costless) transition between the NC node and f,.
In this paper, Ben and Riloff present a coreference resolver called BABAR that focuses on the use of contextual-role knowledge for coreference resolution.
0
In our coreference resolver, we define θ to be the set of all candidate antecedents for an anaphor.
It is probably the first analysis of Arabic parsing of this kind.
0
Better Arabic Parsing: Baselines, Evaluations, and Analysis
Using less training data than other systems, their NER can perform as well as other state-of-the-art NERs.
0
This is because in many cases, the use of hyphens can be considered to be optional (e.g., third-quarter or third quarter).
In this paper the authors present a stochastic finite-state model for segmenting Chinese text into words.
0
The method reported in this paper makes use solely of unigram probabilities, and is therefore a zeroeth-order model: the cost of a particular segmentation is estimated as the sum of the costs of the individual words in the segmentation.
The AdaBoost algorithm was developed for supervised learning.
0
The problem of "noise" items that do not fall into any of the three categories also needs to be addressed.
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
This assumption, however, is not inherent to type-based tagging models.
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
A sentence was withheld from section 22 because its extreme length was troublesome for a couple of the parsers.
The texts were annotated with the RSTtool.
0
Figure 2 shows a screenshot (which is of somewhat limited value, though, as color plays a major role in signalling the different statuses of the information).
The bias of automatic methods in favour of statistical systems seemed to be less pronounced on out-of-domain test data.
0
This revealed interesting clues about the properties of automatic and manual scoring.
The model incorporates various recent techniques for incorporating and manipulating linguistic knowledge using finite-state transducers.
0
In the pinyin transliterations a dash(-) separates syllables that may be considered part of the same phonological word; spaces are used to separate plausible phonological words; and a plus sign (+) is used, where relevant, to indicate morpheme boundaries of interest.
The resulting model is compact, efficiently learnable and linguistically expressive.
0
The type-level posterior term can be computed according to P(Ti | W, T−i, β) ∝ … Note that each round of sampling Ti variables takes time proportional to the size of the corpus, as with the standard token-level HMM.
The second algorithm builds on a boosting algorithm called AdaBoost.
0
Roughly speaking, the new algorithm presented in this paper performs a similar search, but instead minimizes a bound on the number of (unlabeled) examples on which two classifiers disagree.
This paper talks about Exploiting Diversity in Natural Language Processing: Combining Parsers.
0
Three state-of-the-art statistical parsers are combined to produce more accurate parses, as well as new bounds on achievable Treebank parsing accuracy.
The authors in this paper describe a search procedure for statistical machine translation (MT) based on dynamic programming (DP).
0
Here, we process only full-form words within the translation procedure.
They extended previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and used simpler training procedure.
0
This is not unreasonable given the application to phrase pairs from OUT, but it suggests that an interesting alternative might be to use a plain log-linear weighting function exp(Σi λi fi(s, t)), with outputs in [0, ∞].
This paper discusses the Potsdam Commentary Corpus, a corpus of German newspaper commentaries assembled at Potsdam University.
0
A corpus of German newspaper commentaries has been assembled at Potsdam University, and annotated with different linguistic information, to different degrees.
The first method builds on results from (Yarowsky 95) and (Blum and Mitchell 98).
0
To see this, note that the first two terms in the above equation correspond to the function that AdaBoost attempts to minimize in the standard supervised setting (Equ.
They showed that the average agreement among the human judges is .76, and the average agreement between ST (system) and the humans is .75, or about 99% of the interhuman agreement.
0
Indeed, as we shall show in Section 5, even human judges differ when presented with the task of segmenting a text into words, so a definition of the criteria used to determine that a given segmentation is correct is crucial before one can interpret such measures.
The problem of coreference resolution has received considerable attention, including theoretical discourse models and supervised machine learning systems.
0
The gender, number, and scoping KSs eliminate candidates from consideration.
Explanations for this phenomenon are the relative informativeness of lexicalization, insensitivity to morphology, and the effect of variable word order; these factors complicate syntactic disambiguation.
0
The Penn Arabic Treebank (ATB) syntactic guidelines (Maamouri et al., 2004) were purposefully borrowed without major modification from English (Marcus et al., 1993).
In this paper, the authors are of the opinion that the sequence models-based approaches usually treat token-level tag assignment as the primary latent variable.
0
Our analysis and comparison focuses primarily on the one-to-one accuracy since it is a stricter metric than many-to-one accuracy, but also report many-to-one for completeness.
They showed that it was useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their trees, finding that several of the formalisms considered can be seen as closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars.
0
As will be obvious later, their derivation tree sets will be local sets as are those of CFG's.
Here we present two algorithms.
0
The model was parameterized such that the joint probability of a (label, feature-set) pair is written as P(yi, xi) = P(yi) P(mi) Π_{j=1..mi} P(xij | yi). The model assumes that (y, x) pairs are generated by an underlying process where the label is first chosen with some prior probability P(yi); the number of features mi is then chosen with some probability P(mi); finally the features are independently generated with probabilities P(xij | yi).
An extended lexicon model is defined, and its likelihood is compared to a baseline lexicon model, which takes only single-word dependencies into account.
0
The alignment mapping is j → i = aj from source position j to target position i = aj. The use of this alignment model raises major problems if a source word has to be aligned to several target words, e.g. when translating German compound nouns.
The TRIE model uses less memory than the smallest lossless alternative and is still faster than SRILM.
0
The binary language model from Section 5.2 and text phrase table were forced into disk cache before each run.
The manual evaluation of scoring translation on a graded scale from 1–5 seems to be very hard to perform.
0
(b) does the translation have the same meaning, including connotations?
One can trivially create situations in which strictly binary-branching trees are combined to create a tree with only the root node and the terminal nodes, a completely flat structure.
0
First we present the non-parametric version of parser switching, similarity switching: The intuition for this technique is that we can measure a similarity between parses by counting the constituents they have in common.